# Local AdaGrad-Type Algorithm for Stochastic Convex-Concave Optimization
Luofeng Liao, Li Shen [2], Jia Duan, Mladen Kolar, Dacheng Tao
[1] IEOR, Columbia University, NY, USA
[2] JD Explore Academy, JD.com Inc., Beijing, China
[3] Booth School of Business, University of Chicago, IL, USA
###### Abstract
Large scale convex-concave minimax problems arise in numerous applications,
including game theory, robust training, and training of generative adversarial
networks. Despite their wide applicability, solving such problems efficiently
and effectively is challenging in the presence of large amounts of data using
existing stochastic minimax methods. We study a class of stochastic minimax
methods and develop a communication-efficient distributed stochastic
extragradient algorithm, LocalAdaSEG, with an adaptive learning rate suitable
for solving convex-concave minimax problems in the Parameter-Server model.
LocalAdaSEG has three main features: (i) a periodic communication strategy
that reduces the communication cost between workers and the server; (ii) an
adaptive learning rate that is computed locally and allows for tuning-free
implementation; and (iii) theoretically, a nearly linear speed-up with respect
to the dominant variance term, arising from the estimation of the stochastic
gradient, is proven in both the smooth and nonsmooth convex-concave settings.
We use LocalAdaSEG to solve a stochastic bilinear game and to train a
generative adversarial network. We compare LocalAdaSEG against several
existing optimizers for minimax problems and demonstrate its efficacy through
experiments in both homogeneous and heterogeneous settings.
###### keywords:
Stochastic Minimax Problem, Adaptive Optimization, Distributed Computation
## 1 Introduction
Stochastic minimax optimization problems arise in applications ranging from
game theory neumann1928theorie , robust optimization
delage2010distributionally , and AUC maximization guo2020communication to
adversarial learning wang2019towards and the training of generative adversarial
networks (GANs) goodfellow2014generative . In this work, we consider
$\displaystyle\min_{x\in\mathcal{X}}\max_{y\in\mathcal{Y}}\;\Big\{F(x,y)=\int_{\Xi}f(x,y,\xi)\,P(\mathrm{d}\xi)\Big\},$
(1)
where ${\mathcal{X}}\subseteq\mathbb{X}$, ${\mathcal{Y}}\subseteq\mathbb{Y}$
are nonempty compact convex sets, $\mathbb{X}$, $\mathbb{Y}$ are finite
dimensional vector spaces, $\xi$ is a random vector with an unknown
probability distribution $P$ supported on a set $\Xi$, and
$f:{\mathcal{X}}\times{\mathcal{Y}}\times\Xi\to{{\mathbb{R}}}$ is a real
valued function, which may be nonsmooth. Throughout the paper, we assume that
the expectation ${\mathbb{E}}_{\xi\sim P}[f(x,y,\xi)]$ is well defined and
finite. We assume that, for every $\xi\in\Xi$, the function $f(x,y,\xi)$ is
convex in $x\in{\mathcal{X}}$ and concave in $y\in{\mathcal{Y}}$, so that
$F(x,y)$ is convex-concave. In addition, we assume that $F(x,y)$ is Lipschitz
continuous.
There are three main challenges in developing an efficient solver for the
large-scale minimax problem (1). First, the solver should generate converging
iterates. In contrast to convex optimization, convergence results for minimax
problems are harder to obtain. Second, the solver should be able to take
advantage of parallel computing in a communication-efficient way. Only then
can it be applied to problems with large-scale datasets, which are often
distributed across multiple workers. Third, it is desirable for the solver to
choose learning rates in an adaptive manner. It is well known that, in minimax
problems, solver performance is highly sensitive to the choice of learning
rate. We discuss
these challenges in detail below.
First, it has been shown that direct application of the (stochastic) gradient
descent ascent ((S)GDA) to solve (1) may result in divergence of the iterates
mertikopoulos2018optimistic ; daskalakis2018training ; gidel2019negative ;
mertikopoulos2018cycles . Possible ways to overcome the divergence issue are
to apply the primal-dual hybrid gradient (PDHG) or (stochastic) extragradient
method and their variants mertikopoulos2018optimistic ; daskalakis2018training
; gidel2018a ; azizian2020tight ; liu2020towards ; zhao2021accelerated ;
NEURIPS2020_52aaa62e .
Second, it is often desirable to have a communication-efficient distributed
solver to solve the stochastic minimax problem (1). The first reason is that
the minimax problem (1) is often instantiated as a finite-sum problem over a
large-scale dataset (with the distribution $P$ being the empirical
distribution over millions of data points), so that storing and manipulating
the data on multiple workers is a necessity. For example, when problem (1) is
specified as BigGAN brock2018large over ImageNet deng2009imagenet , the
number of training samples is as many as 14 million. Traditional distributed
SGDA on the problem (1) may suffer from a considerable communication burden;
reducing communication complexity of the algorithm is a major concern in our
paper. The second reason is that, in some scenarios, data are distributed on
mobile devices (such as cell phones or smart watches), and due to privacy
concerns, local data must stay on the device. Furthermore, frequent
communication among devices is not feasible due to failures of mobile devices
(network connectivity, battery level, etc.). This further motivates the design
of communication-efficient distributed solvers to eliminate central data
storage and improve communication efficiency. For these reasons,
communication-efficient distributed solvers for minimax problems have been
investigated recently beznosikov2021distributed ; deng2020local ;
hou2021efficient ; mingruiliu2020decentralized .
Third, the performance of stochastic minimax solvers for (1) is highly
dependent on the learning rate tuning mechanism heusel2017gans ;
antonakopoulos2021adaptive . And yet, designing a solver for (1) with an
adaptive learning rate is much more challenging compared to the convex case;
the value of $F$ at an iterate $(x,y)$ does _not_ serve as a performance
criterion. For example, for classical minimization problems, the learning rate
can be tuned based on the loss evaluated at the current iterate, which
directly quantifies how close the iterate is to the minimum. However, such an
approach does not extend to minimax problems and, therefore, a more
sophisticated approach is required for tuning the learning rate. Development
of adaptive learning rate tuning mechanisms for large scale stochastic minimax
problems has been explored only recently bach2019universal ;
babanezhad2020geometry ; ene2020adaptive ; antonakopoulos2021adaptive ;
liu2020towards . Hence, we ask
_Can we develop an efficient algorithm for the stochastic minimax problem (1)
that enjoys convergence guarantees, communication efficiency, and adaptivity
simultaneously?_
Figure 1: A Venn Diagram for related works. Left circle: Communication-
efficient methods for stochastic minimax problems. Right circle: Adaptive
methods for stochastic minimax problems.
We provide an affirmative answer to this question and develop the
LocalAdaSEG (Local Adaptive Stochastic Extragradient) algorithm. Our
contributions are threefold:
Novel communication-efficient distributed minimax algorithm. Fig. 1
illustrates the difference between the LocalAdaSEG algorithm and existing
works. LocalAdaSEG falls under the umbrella of the Parameter-Server model
smola2010architecture and adopts a periodic communication mechanism to reduce
the communication cost between the server and the workers, similar to Local
SGD/FedAvg Yu2019onthelinear ; stich2018local ; Li2020On in federated learning
mcmahan2021advances . In addition, each worker independently runs a local
stochastic extragradient algorithm with an adaptive learning rate for multiple
iterations. Periodically, the current iterates and adaptive learning rates
from all workers are sent to the server. The server computes a weighted
average of the iterates, where the weights are constructed from the received
local adaptive learning rates. We emphasize that the adaptive learning rate in
each worker is distinct from the others and is automatically updated according
to local data, as in chen2021quantized ; beznosikov2021distributed , in
contrast to existing adaptive distributed algorithms xie2019local ;
reddi2021adaptive ; chen2021cada .
Theoretically optimal convergence rate. Let $M$ denote the number of workers,
$\sigma$ the variance of the stochastic gradients, and $T$ the number of local
iterations on each worker. For stochastic convex-concave minimax problems, we
establish a rate of $\tilde{O}(\sigma/\sqrt{MT})$ in terms of the duality gap
metric nemirovski2004prox ; lin2020near in the _nonsmooth, noise-dominant_
case, and a rate of $\tilde{O}(\sigma/\sqrt{MT}+\text{higher-order terms})$ in
the _smooth case with slow cumulative gradient growth_. The terms depending on
the variance $\sigma$ achieve the statistical lower bound and are not
improvable without further assumptions. Therefore, the LocalAdaSEG algorithm
enjoys a near-linear speed-up in the stochastic-gradient variance term thanks
to the periodic communication mechanism.
Experimental verification. We conduct several experiments on a stochastic
bilinear game and on the Wasserstein GAN arjovsky2017wasserstein to verify the
efficiency and effectiveness of the LocalAdaSEG algorithm. We also extend
LocalAdaSEG to solve challenging federated GANs in a heterogeneous setting.
The experimental results agree with the theoretical guarantees and demonstrate
the superiority of LocalAdaSEG over several existing minimax optimizers, such
as SEGDA nemirovski2004prox , UMP bach2019universal , ASMP ene2020adaptive ,
LocalSEGDA beznosikov2021distributed , LocalSGDA deng2020local , and Local
Adam beznosikov2021distributed .
## 2 Related Work
Although there has been a great deal of work on minimax optimization, due to
space constraints we summarize only the most closely related work, spanning
stochastic minimax algorithms, adaptive minimax algorithms, and distributed
minimax algorithms. A detailed discussion is deferred to Appendix A.
Our work and the proposed LocalAdaSEG contribute to the literature described
above. To our knowledge, LocalAdaSEG is the first distributed
communication-efficient algorithm for the stochastic minimax problem that
simultaneously supports an adaptive learning rate and minibatch size.
Moreover, LocalAdaSEG communicates only periodically to improve communication
efficiency and uses a local adaptive learning rate, computed from local data
in each worker, to improve computational efficiency. In addition, LocalAdaSEG
can be applied in the nonsmooth setting with a convergence guarantee.
LocalAdaSEG can be seen as a distributed extension of bach2019universal with
periodic communication in the style of local SGD stich2018local . We note that
only very recently has a local adaptive stochastic minimax algorithm, called
Local Adam, been used heuristically to train GANs, without a convergence
guarantee beznosikov2021distributed . We summarize the relationship with the
existing literature in Table 1.
Stochastic minimax algorithms | Nonsmooth ? | Comm. eff. ? | Adaptive ?
---|---|---|---
Mirror SA nemirovski2009robust , SMP Juditsky2011solving , SAMP chen2017accelerated , Optimal Stochastic PDHG-type zhao2021accelerated | ✓ | ✗ | ✗
SCAFFOLD-Catalyst-S hou2021efficient , Local SGDA deng2020local , Extra Step Local SGD beznosikov2021distributed | ✗ | ✓ | ✗
Universal Mirror-prox bach2019universal , Adaptive Single-gradient Mirror-prox ene2020adaptive , Geometry-Aware Universal Mirror-prox babanezhad2020geometry , AdaProx antonakopoulos2021adaptive | ✓ | ✗ | ✓
Optimistic AdaGrad liu2020towards | ✗* | ✗ | ✓
Our ${\operatorname*{{LocalAdaSEG}}}$ | ✓ | ✓ | ✓
Table 1: Comparison to related works on adaptive or communication-efficient
approaches to stochastic minimax problems. "Nonsmooth?" asks whether the
algorithm enjoys theoretical guarantees in the nonsmooth convex-concave
setting; "Comm. eff.?" asks whether the algorithm is communication-efficient;
"Adaptive?" asks whether the algorithm is adaptive, i.e., runs without
knowledge of problem parameters. "*": liu2020towards discusses
nonconvex-nonconcave minimax problems.
## 3 Methodology
### 3.1 Notations and Assumptions
A point $(x^{*},y^{*})\in{\mathcal{X}}\times{\mathcal{Y}}$ is called a saddle-
point for the minimax problem in (1) if for all
$(x,y)\in{\mathcal{X}}\times{\mathcal{Y}}$,
$\displaystyle F(x^{*},y)\leq F(x^{*},y^{*})\leq F(x,y^{*}).$ (2)
Under the assumptions stated in Section 1, the corresponding primal problem,
$\min_{x}\{\max_{y}F(x,y)\}$, and dual problem,
$\max_{y}\{\min_{x}F(x,y)\}$, have optimal solutions and equal optimal
values, denoted $F^{*}$. The pairs of optimal solutions $(x^{*},y^{*})$ form
the set of saddle-points of $F$ on ${\mathcal{X}}\times{\mathcal{Y}}$. We
denote $\mathbb{Z}=\mathbb{X}\times\mathbb{Y}$,
$\mathcal{Z}={\mathcal{X}}\times{\mathcal{Y}}$, $z=(x,y)\in\mathcal{Z}$, and
$z^{*}=(x^{*},y^{*})\in\mathcal{Z}$. We use $\|\cdot\|_{\mathcal{X}}$,
$\|\cdot\|_{\mathcal{Y}}$, and $\|\cdot\|_{\mathcal{Z}}$ to denote the
Euclidean norms on $\mathbb{X}$, $\mathbb{Y}$, $\mathbb{Z}$, respectively, and
let $\|\cdot\|_{{\mathcal{X}},*}$, $\|\cdot\|_{{\mathcal{Y}},*}$ and
$\|\cdot\|_{\mathcal{Z},*}$ denote the corresponding dual norms. With this
notation,
$\|z\|_{\mathcal{Z}}=\sqrt{\|x\|_{\mathcal{X}}^{2}+\|y\|_{\mathcal{Y}}^{2}}$
and
$\|z\|_{{\mathcal{Z}},*}=\sqrt{\|x\|_{{\mathcal{X}},*}^{2}+\|y\|_{{\mathcal{Y}},*}^{2}}$.
Throughout the paper, we focus on the Euclidean setting, but note that the
results can readily generalize to non-Euclidean cases.
We are interested in finding a saddle-point of $F$ over
${\mathcal{X}}\times{\mathcal{Y}}$. For a candidate solution
$\tilde{z}=(\tilde{x},\tilde{y})\in\mathcal{Z}$, we measure its quality by the
duality gap, defined as
$\displaystyle\operatorname*{DualGap}(\tilde{z})\vcentcolon=\max_{y\in{\mathcal{Y}}}F(\tilde{x},y)-\min_{x\in{\mathcal{X}}}F(x,\tilde{y}).$
(3)
The duality gap is commonly used as a performance criterion for general
convex-concave minimax problems (see, e.g., nemirovski2004prox ; lin2020near
). Note that for all $z\in{\mathcal{Z}}$ it holds that
$\operatorname*{DualGap}(z)\geq 0$, and $\operatorname*{DualGap}(z)=0$
if and only if $z$ is a saddle-point.
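To make the criterion concrete, the following is a minimal NumPy sketch that evaluates (3) exactly for the bilinear box-constrained instance studied later in the experiments, where the inner maximization and minimization have closed forms; the instance $(A,b,c)$ generated below is illustrative, not the paper's experimental data.

```python
# A minimal sketch of the duality-gap criterion (3), assuming the bilinear
# box-constrained game F(x, y) = x^T A y + b^T x + c^T y on [-1, 1]^n;
# A, b, c below are illustrative placeholders.
import numpy as np

def duality_gap(x, y, A, b, c):
    # max over y' in [-1,1]^n of (A^T x + c)^T y' equals ||A^T x + c||_1,
    # and min over x' in [-1,1]^n of (A y + b)^T x' equals -||A y + b||_1.
    max_y = b @ x + np.abs(A.T @ x + c).sum()
    min_x = c @ y - np.abs(A @ y + b).sum()
    return max_y - min_x  # always >= 0; zero exactly at a saddle point

rng = np.random.default_rng(0)
n = 10
A = rng.uniform(-1, 1, (n, n)); A = (A + A.T) / 2
b, c = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
print(f"DualGap at the origin: {duality_gap(np.zeros(n), np.zeros(n), A, b, c):.4f}")
```

The closed forms follow because a linear objective over the box $[-1,1]^{n}$ is maximized at the sign vector of its coefficients, which yields the $\ell_{1}$-norm expressions above.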
For the stochastic minimax problem (1), we assume that neither the function
$F(x,y)$ nor its sub/supgradients in $x$ and $y$ are available. Instead, we
assume access to an unbiased stochastic oracle
$G(x,y,\xi)=[G_{x}(x,y,\xi),-G_{y}(x,y,\xi)]$, such that the vector
${\mathbb{E}}_{\xi}[G(x,y,\xi)]$ is well-defined and
${\mathbb{E}}_{\xi}[G(x,y,\xi)]\in[\partial_{x}F(x,y),-\partial_{y}F(x,y)]$.
For notational convenience, we let
$\displaystyle\tilde{G}(z)\vcentcolon=G(x,y,\xi),\quad
G(z)\vcentcolon={\mathbb{E}}_{\xi}[G(x,y,\xi)].$ (4)
Below, we impose assumptions on the minimax problem (1) and the stochastic
gradient oracle (4).
###### Assumption 1 (Bounded Domain).
There exists $D$ such that $\sup_{z\in{\mathcal{Z}}}\frac{1}{2}\|z\|^{2}\leq
D^{2}$.
###### Assumption 2 (Bounded Stochastic Gradients).
There exists $G$ such that $\sup_{z\in{\mathcal{Z}}}\|\tilde{G}(z)\|_{*}\leq
G$, P-almost surely.
The domain boundedness condition (Assumption 1) is commonly assumed in the
convex-concave minimax literature; see the references in Section 1. However,
we note that the assumption can be removed in certain settings. For example, Chen2014 ;
monteiro2011complexity use a perturbation-based variant of the duality gap as
the convergence criterion, and antonakopoulos2021adaptive handles unbounded
domains via the notion of local norms, while zhao2021accelerated handles
unbounded domains with access to a convex optimization oracle. The almost sure
boundedness Assumption 2 on the gradient oracle seems restrictive but is
common in the literature on adaptive stochastic gradient methods (see, e.g.,
duchi2011adaptive ; chen2018on ; bach2019universal ; liu2020towards ). In
Remark 2 we discuss how to extend our analysis to unbounded oracles.
###### Assumption 3 (Bounded Variance).
There exists $\sigma$ such that
${\mathbb{E}}_{\xi}\big[\|G(z)-\tilde{G}(z)\|_{*}^{2}\,\big|\,z\big]\leq\sigma^{2}$
for all $z\in{\mathcal{Z}}$.
We separately analyze the case when the saddle function $F$ is differentiable
with Lipschitz gradients.
###### Assumption 4 (Smoothness).
Assume that for all $z,z^{\prime}\in{\mathcal{Z}}$, we have
$\|G(z)-G(z^{\prime})\|_{*}\leq L\|z-z^{\prime}\|$.
### 3.2 LocalAdaSEG Algorithm
We introduce the LocalAdaSEG algorithm used to solve (1)
and describe its main features. Algorithm 1 details the procedure.
Algorithm 1 LocalAdaSEG$(G_{0},D;\,K,M,R;\,\alpha)$
1:Input: $G_{0}$, a guess on the upper bound of gradients, $D$, the diameter
of the set ${\mathcal{Z}}$, $K$, communication interval, $M$, the number of
workers, $R$, number of rounds, $\alpha$, base learning rate.
2:Initialize: $\eta^{m}_{1}=D\alpha/G_{0}$,
$\tilde{z}_{0}=\tilde{z}^{m}_{0}=\tilde{z}^{m,*}_{0}=0$ for all $m$, and
$S\vcentcolon=\{0,K,2K,\dots,RK\}$.
3:for $t=1,\dots,T=RK$, parallel for workers $m=1,\dots,M$ do
4: update learning rate
$\displaystyle{\eta^{m}_{t}}={D\alpha}\Big/{\sqrt{G_{0}^{2}+\sum_{\tau=1}^{t-1}\frac{\|z^{m}_{\tau}-\tilde{z}^{m,*}_{\tau-1}\|^{2}+\|z^{m}_{\tau}-\tilde{z}^{m}_{\tau}\|^{2}}{5({\eta^{m}_{\tau}})^{2}}}}$
5: if $t-1\in S$ then
6: worker $m$: send $({\eta^{m}_{t}},{\tilde{z}^{m}_{t-1}})$ to server
7: server: compute ${\tilde{z}^{\circ}_{t-1}}$, the weighted average of
$\\{{\tilde{z}^{m}_{t-1}}\\}_{m\in[M]}$, and broadcast it to workers
$w^{m}_{t}=\frac{({\eta^{m}_{t}})^{-1}}{\textstyle\sum_{m^{\prime}=1}^{M}(\eta^{m^{\prime}}_{t})^{-1}},\;{\tilde{z}^{\circ}_{t-1}}={\sum_{m=1}^{M}}w^{m}_{t}\cdot{\tilde{z}^{m}_{t-1}}$
8: worker $m$: set ${\tilde{z}^{m,*}_{t-1}}={\tilde{z}^{\circ}_{t-1}}$
9: else
10: set ${\tilde{z}^{m,*}_{t-1}}={\tilde{z}^{m}_{t-1}}$
11: end if
12: update $\displaystyle{z^{m}_{t}}$
$\displaystyle=\Pi_{{\mathcal{Z}}}[{\tilde{z}^{m,*}_{t-1}}-{\eta^{m}_{t}}{M^{m}_{t}}]$
$\displaystyle\text{ with }{M^{m}_{t}}=\tilde{G}({\tilde{z}^{m,*}_{t-1}})$
$\displaystyle{\tilde{z}^{m}_{t}}$
$\displaystyle=\Pi_{{\mathcal{Z}}}[{\tilde{z}^{m,*}_{t-1}}-{\eta^{m}_{t}}{g^{m}_{t}}]$
$\displaystyle\text{ with }{g^{m}_{t}}=\tilde{G}({z^{m}_{t}})$
13:end for
14:Output: $\frac{1}{TM}{\sum_{m=1}^{M}}{\sum_{t=1}^{T}}z^{m}_{t}$
The Parameter-Server model. LocalAdaSEG uses $M$ parallel workers which, in
each of $R$ rounds, independently execute $K$ steps of extragradient updates
(Line 12). The adaptive learning rate is computed solely from iterates that
occur on the local worker (Line 4). Let
$S\vcentcolon=\{0,K,2K,\dots,RK=T\}$ denote the times of communication. At a
time of communication ($t\in S+1$, Lines 5–8), the workers communicate and
compute the weighted iterate ${\tilde{z}^{\circ}_{t-1}}$ defined in Line 7.
The next round then begins from the common iterate
${\tilde{z}^{\circ}_{t-1}}$. Finally, LocalAdaSEG outputs the average of the
sequence $\{{z^{m}_{t}}\}_{m\in[M],t\in[T]}$. Overall, each worker computes
$T=KR$ extragradient steps locally, for a total of $2MT$ stochastic gradient
calls (each extragradient step, Line 12, requires two calls to the gradient
oracle) with $R$ rounds of communication (one every $K$ steps of computation).
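To make the procedure concrete, below is a minimal single-process NumPy sketch of Algorithm 1 that simulates the $M$ workers sequentially on a stochastic bilinear game; the problem instance, the constants, and the simple Gaussian oracle are illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch of Algorithm 1 (LocalAdaSEG), assuming a bilinear game on
# the box Z = [-1, 1]^(2n) with additive Gaussian gradient noise.
import numpy as np

rng = np.random.default_rng(0)
n, M, K, R = 10, 4, 50, 20            # dimension, workers, local steps, rounds
sigma, alpha = 0.1, 1.0               # noise level, base learning rate
A = rng.uniform(-1, 1, (n, n)); A = (A + A.T) / 2
b, c = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
D = np.sqrt(2 * n)                    # diameter bound for the box [-1,1]^(2n)
G0 = 1.0                              # initial guess of the gradient bound G

def proj(z):                          # Euclidean projection onto [-1,1]^(2n)
    return np.clip(z, -1.0, 1.0)

def oracle(z):                        # unbiased stochastic gradient operator
    x, y = z[:n], z[n:]
    noise = sigma * rng.standard_normal(2 * n)
    return np.concatenate([A @ y + b, -(A.T @ x + c)]) + noise

z_tilde = np.zeros((M, 2 * n))        # local iterates \tilde z^m
acc = np.full(M, G0 ** 2)             # running sum inside the rule of Line 4
avg, T = np.zeros(2 * n), 0           # output average and step counter

for _ in range(R):
    # communication round: weights w^m inversely proportional to eta^m (Line 7)
    eta = D * alpha / np.sqrt(acc)
    w = (1.0 / eta) / np.sum(1.0 / eta)
    z_tilde[:] = w @ z_tilde          # broadcast weighted average to workers
    for _ in range(K):
        T += 1
        for m in range(M):            # "parallel" workers, simulated in turn
            eta_m = D * alpha / np.sqrt(acc[m])
            zs = z_tilde[m]                              # \tilde z^{m,*}_{t-1}
            z_half = proj(zs - eta_m * oracle(zs))       # extrapolation z^m_t
            z_new = proj(zs - eta_m * oracle(z_half))    # update \tilde z^m_t
            # accumulate the AdaGrad-type statistic of Line 4
            acc[m] += (np.sum((z_half - zs) ** 2)
                       + np.sum((z_half - z_new) ** 2)) / (5 * eta_m ** 2)
            avg += z_half                                # output averages z^m_t
            z_tilde[m] = z_new

print("averaged output:", np.round(avg / (T * M), 3))
```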
Extragradient step. When no communication happens ($t-1\notin S$),
Line 12 reduces to
$\displaystyle{z^{m}_{t}}$
$\displaystyle=\Pi_{{\mathcal{Z}}}[{\tilde{z}^{m}_{t-1}}-{\eta^{m}_{t}}{M^{m}_{t}}]$
$\displaystyle\text{ with }{M^{m}_{t}}=\tilde{G}({\tilde{z}^{m}_{t-1}}),$
$\displaystyle{\tilde{z}^{m}_{t}}$
$\displaystyle=\Pi_{{\mathcal{Z}}}[{\tilde{z}^{m}_{t-1}}-{\eta^{m}_{t}}{g^{m}_{t}}]$
$\displaystyle\text{ with }{g^{m}_{t}}=\tilde{G}({z^{m}_{t}}),$
where
$\Pi_{\mathcal{Z}}(z)=\operatorname*{argmin}_{z^{\prime}\in{\mathcal{Z}}}\|z-z^{\prime}\|_{2}$
is the projection operator onto the compact set ${\mathcal{Z}}$. The above
update is just the extragradient (EG) algorithm korpel1976 that is commonly
used to solve minimax problems; see references in Section 1.
Periodic averaging weights. The proposed weighted averaging scheme in Line 7
differs from existing work on local SGD and Local Adam
beznosikov2021distributed . At the time of averaging ($t-1\in S$),
LocalAdaSEG pulls the averaged iterate towards the local iterates with
smaller learning rates. For the homogeneous case studied in this paper, we
expect $w^{m}\approx 1/M$.
Intuition of local adaptive learning rate scheme. The adaptive learning rate
scheme (Line 4) follows that of Bach and Levy bach2019universal closely. To
develop intuition, consider the deterministic setting where $\sigma=0$ and
define
$({\delta_{t}^{m}})^{2}\vcentcolon=\|{g^{m}_{t}}\|_{*}^{2}+\|{M^{m}_{t}}\|_{*}^{2}$.
If we ignore the projection operation, the learning rate ${\eta^{m}_{t}}$
would look like ${\eta^{m}_{t}}\sim
1/(1+\sum_{\tau=1}^{t-1}({\delta_{\tau}^{m}})^{2})^{1/2}$. In the nonsmooth
case, the subgradients might not vanish as we approach the solution (in the
case of convex optimization, consider the function
$f(x)=|x|$ near $0$), and we only have
$\liminf_{t\to\infty}{\delta_{t}^{m}}>0$. This implies that ${\eta^{m}_{t}}$
vanishes at the rate $1/\sqrt{t}$, which is the optimal learning-rate schedule
for nonsmooth convex-concave problems bach2019universal ;
antonakopoulos2021adaptive . For the smooth case, one might expect the
sequence $\{{\delta_{t}^{m}}\}_{t}$ to be square-summable and
${\eta^{m}_{t}}\to\eta_{\infty}^{m}>0$, in which case the learning rate does
not vanish. Additionally, the adaptive learning rate for each worker is
updated locally to exploit the problem structure available in the worker's
local dataset. This makes our local adaptive learning-rate scheme distinct
from existing distributed adaptive algorithms for minimization problems
xie2019local ; reddi2021adaptive ; chen2021cada . Very recently,
beznosikov2021distributed used local Adam to train conditional GANs
efficiently, but they provide theoretical guarantees only for the local
extragradient method without adaptivity.
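The following toy computation illustrates the two regimes, assuming stylized gradient-magnitude sequences $\delta_{t}$ rather than traces from an actual run: a constant $\delta_{t}$ (nonsmooth-like) drives $\eta_{t}\propto 1/\sqrt{t}$, while a square-summable $\delta_{t}$ (smooth-like) lets $\eta_{t}$ plateau at a positive value.

```python
# A toy illustration of the two learning-rate regimes, assuming stylized
# sequences delta_t (not taken from any real run).
import numpy as np

def lr_trajectory(deltas, G0=1.0):
    acc, etas = G0 ** 2, []
    for d in deltas:
        etas.append(1.0 / np.sqrt(acc))  # eta_t ~ 1/sqrt(G0^2 + sum delta^2)
        acc += d ** 2
    return np.array(etas)

t = np.arange(1, 10001)
eta_nonsmooth = lr_trajectory(np.ones_like(t, dtype=float))  # delta_t = 1
eta_smooth = lr_trajectory(1.0 / t)                          # square-summable
print("nonsmooth: eta_T * sqrt(T) =", eta_nonsmooth[-1] * 100)  # ~1, so 1/sqrt(t)
print("smooth:    eta_T           =", eta_smooth[-1])           # plateaus > 0
```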
Adaptivity to $(G,L,\sigma)$. Our algorithm does not require knowledge of
problem parameters such as the size of the gradients $G$, the smoothness
constant $L$, or the variance of the gradient estimates $\sigma$. Instead, we
only need an initial guess of $G$, denoted $G_{0}$, and the diameter of the
feasible set, $D$. Following bach2019universal , we define
$$\gamma\vcentcolon=\max\{G/G_{0},\,G_{0}/G\}\geq 1.$$ (5)
This quantity measures how good our guess is and appears in the convergence
guarantees for the algorithm. Our algorithm still requires knowledge of the
problem class, as we need to use a different base learning rate $\alpha$ for
smooth and nonsmooth problems; see Theorems 1 and 2, respectively.
### 3.3 Convergence Results
We state two theorems characterizing the convergence rate of LocalAdaSEG for
the nonsmooth and smooth problems. We use the notation $\tilde{O}$ to hide
absolute constants and logarithmic factors of $T=KR$ and of problem
parameters. The proofs are given in the appendix. Recall the definition of
$\gamma$ in (5).
###### Theorem 1 (Nonsmooth Case).
Assume that Assumptions 1, 2, and 3 hold. Let
$\bar{z}=$ LocalAdaSEG$(G_{0},D;K,M,R;1)$. Then
$${\mathbb{E}}[\operatorname*{DualGap}(\bar{z})]=\tilde{O}\bigg(\frac{\gamma GD}{\sqrt{T}}+\frac{\sigma D}{\sqrt{MT}}\bigg)\,.$$
###### Theorem 2 (Smooth Case).
Assume that Assumptions 1, 2, 3, and 4 hold. Let
$\bar{z}=$ LocalAdaSEG$(G_{0},D;K,M,R;1/\sqrt{M})$. Define the cumulative
norm of the stochastic gradients on worker $m$ as
$${\mathcal{V}_{m}(T)}\vcentcolon={\mathbb{E}}\bigg[\sqrt{{\sum_{t=1}^{T}}\|{g^{m}_{t}}\|_{*}^{2}+\|{M^{m}_{t}}\|_{*}^{2}}\bigg].$$
Then
$${\mathbb{E}}[\operatorname*{DualGap}(\bar{z})]=\tilde{O}\bigg(\frac{\sigma D}{\sqrt{MT}}+\frac{D\sqrt{M}\,{\mathcal{V}_{1}(T)}}{T}+\frac{\gamma^{2}LD^{2}M^{-1/2}}{T}+\frac{\gamma GD\sqrt{M}}{T}\bigg).$$
###### Remark 1 (The term ${\mathcal{V}_{1}(T)}$).
Note that by symmetry ${\mathcal{V}_{m}(T)}=\mathcal{V}_{1}(T)$ for all $m$.
Although a trivial bound on ${\mathcal{V}_{1}(T)}$ is
${\mathcal{V}_{1}(T)}\leq G\sqrt{2T}$, typically we have
${\mathcal{V}_{1}(T)}\ll\sqrt{T}$ in practice duchi2011adaptive ; Reddi2018on ;
chen2018universal ; chen2018on ; liu2020towards , especially in sparse-data
scenarios. For example, consider the bilinear saddle-point problem
$\min_{x\in{\mathcal{X}}}\max_{y\in{\mathcal{Y}}}\big\{x^{\top}(\sum_{i=1}^{n}p_{i}M_{i})y\big\}$,
where a larger weight $p_{i}>0$ means the matrix $M_{i}$ appears more
frequently in the dataset. When most of the matrices with large weights are
row- and column-sparse, the quantity ${\mathcal{V}_{1}(T)}$ is much smaller
than $G\sqrt{2T}$. A theorem in the appendix shows that with a different
choice of the base learning rate $\alpha$ one can obtain a near-linear
speed-up result that removes the dependence on ${\mathcal{V}_{1}(T)}$: for
large $T$,
$${\mathbb{E}}[\operatorname*{DualGap}(\bar{z})]=\tilde{O}\bigg(\frac{\sigma D}{\sqrt{MT^{1-2\epsilon}}}+\frac{\gamma^{2}LD^{2}}{T^{1-2\epsilon}}+\frac{LD^{2}M}{T}+\frac{\gamma GDM^{3/2}}{T^{1+\epsilon}}\bigg),$$
for any $\epsilon\in(0,\tfrac{1}{2})$. Following the discussion in
chen2018universal ; liu2020towards , when the cumulative growth of the
stochastic gradient is slow, i.e., ${\mathcal{V}_{1}(T)}=O(T^{b})$ for some
$0<b<\tfrac{1}{2}$, the second term in the bound of Theorem 2 is
$O(DM^{3/2}/T^{1-b})$ and linear speed-up is achieved, since as $T\to\infty$
the dominating term becomes $O(\sigma D/\sqrt{MT})$.
###### Remark 2 (Extension to unbounded stochastic gradient oracles).
Our analysis can be extended to unbounded homogeneous and light-tailed
oracles using the following argument. Let
$$\|G\|_{\infty}\vcentcolon=\sup_{z\in{\mathcal{Z}}}\|G(z)\|_{*}<\infty,$$
which upper bounds the expectation of the stochastic gradient oracle. Assume
$\|\tilde{G}(z)-G(z)\|_{*}/\|G\|_{\infty}$ is independent of $z$ and follows
the distribution of the absolute value of a standard normal. Define the set
${\mathcal{Z}}^{\prime}\vcentcolon=\{{z^{m}_{t}},{\tilde{z}^{m,*}_{t-1}}\}_{t,m}$
of all iterates. For any $0<\delta<1$, define the event
$${\mathcal{E}}\vcentcolon=\Big\{\max_{z^{\prime}\in{\mathcal{Z}}^{\prime}}\|\tilde{G}(z^{\prime})-G(z^{\prime})\|_{*}\leq G_{T,\delta}\vcentcolon=\|G\|_{\infty}\cdot\big(\sqrt{2\log(4MT)}+\sqrt{2\log(2/\delta)}\big)\Big\}.$$
Then ${\mathbb{P}}({\mathcal{E}})\geq 1-\delta$; see the appendix. We can
repeat the proofs of Theorems 1 and 2 on the event ${\mathcal{E}}$ and
interpret our results with $G$ replaced by $G_{T,\delta}$, which effectively
substitutes $\|G\|_{\infty}$ for $G$ at the cost of an extra $\log(T)$ factor.
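A quick Monte-Carlo check of this event, under the idealized assumption that each of the $2MT$ noise magnitudes is an independent $|N(0,1)|$ draw (so $\|G\|_{\infty}=1$), is sketched below; it only illustrates the tail bound and is not part of the analysis.

```python
# A Monte-Carlo sanity check: the max of 2MT absolute standard normals should
# fall below sqrt(2 log(4MT)) + sqrt(2 log(2/delta)) in >= 1 - delta of trials.
import numpy as np

rng = np.random.default_rng(0)
M, T, delta, trials = 4, 1000, 0.05, 500
bound = np.sqrt(2 * np.log(4 * M * T)) + np.sqrt(2 * np.log(2 / delta))
maxima = np.abs(rng.standard_normal((trials, 2 * M * T))).max(axis=1)
print(f"empirical P(E) = {np.mean(maxima <= bound):.3f} (target >= {1 - delta})")
```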
###### Remark 3 (Baseline 1: Minibatch EG).
We comment on the performance of an obvious baseline that implements
minibatch stochastic EG using $M$ workers. Suppose the algorithm takes $R$
extragradient steps, each using a minibatch of size $KM$, resulting in a
procedure that communicates exactly $R$ times. The performance of such a
minibatch EG for general nonsmooth and smooth minimax problems
bach2019universal ; ene2020adaptive is, respectively,
$$O\bigg(\frac{\sigma D}{\sqrt{KMR}}+\frac{\|G\|_{\infty}D}{\sqrt{R}}\bigg)\quad\text{and}\quad O\bigg(\frac{\sigma D}{\sqrt{KMR}}+\frac{LD^{2}}{R}\bigg)\,.$$
Under the same computation and communication structure, our algorithm enjoys
adaptivity, achieves the same linear speed-up in the variance term
$\frac{\sigma D}{\sqrt{KMR}}$, and improves the dependence on the gradient
upper bound $\|G\|_{\infty}$ and the smoothness parameter $L$, which is
desirable for problems where these parameters are large.
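For reference, a minimal NumPy sketch of this baseline on the same illustrative bilinear game as before is given below; the constant step size $\eta$ is a hand-picked placeholder, in contrast to the adaptive rule of Algorithm 1.

```python
# A minimal sketch of the minibatch-EG baseline, assuming the illustrative
# bilinear game from the LocalAdaSEG sketch; eta is a hand-picked constant.
import numpy as np

rng = np.random.default_rng(0)
n, M, K, R, sigma, eta = 10, 4, 50, 20, 0.1, 0.05
A = rng.uniform(-1, 1, (n, n)); A = (A + A.T) / 2
b, c = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)

def proj(z):
    return np.clip(z, -1.0, 1.0)

def minibatch_oracle(z, batch):
    # averaging `batch` i.i.d. noisy gradients shrinks the noise by sqrt(batch)
    x, y = z[:n], z[n:]
    g = np.concatenate([A @ y + b, -(A.T @ x + c)])
    return g + sigma * rng.standard_normal(2 * n) / np.sqrt(batch)

z, avg = np.zeros(2 * n), np.zeros(2 * n)
for _ in range(R):                    # R extragradient steps = R communications
    z_half = proj(z - eta * minibatch_oracle(z, K * M))   # extrapolation
    z = proj(z - eta * minibatch_oracle(z_half, K * M))   # update
    avg += z_half
print("averaged iterate:", np.round(avg / R, 3))
```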
###### Remark 4 (Baseline 2: EG on a single worker).
Another natural baseline is to run EG on a single worker for $T$ iterations
with batch size equal to one. The convergence rates for this procedure in the
nonsmooth and smooth cases are $O(\sigma D/\sqrt{T}+\|G\|_{\infty}D/\sqrt{T})$
and $O(\sigma D/\sqrt{T}+LD^{2}/T)$, respectively. In the smooth case, EG on a
single worker is inferior to minibatch EG, since the dominant term for the
former is $1/\sqrt{T}$, but it is $1/\sqrt{MT}$ for the latter. On the other
hand, in the nonsmooth case, minibatch EG reduces the variance term, but the
deterministic term degrades. Therefore, in the nonsmooth case, we can only
claim that minibatch EG is better than the single-worker mode in the
noise-dominant regime $\sigma=\Omega(\|G\|_{\infty}\sqrt{M})$.
###### Remark 5 (On the choice of $K$).
Consider the baseline minibatch EG (see Remark 3), which runs as follows: the
algorithm takes $R$ extragradient steps, each using a minibatch of size $KM$,
resulting in a procedure that communicates exactly $R$ times. This procedure
has exactly the same computation and communication structure as LocalAdaSEG,
facilitating a fair comparison. In the nonsmooth case, our theory shows that
LocalAdaSEG dominates minibatch EG regardless of the choice of $K$.
Therefore, let us focus the discussion on the _smooth case with slow gradient
growth_. Suppose that the gradient-growth term
$\mathcal{V}_{m}(T)\vcentcolon={\mathbb{E}}\big[(\sum_{t=1}^{T}\|g_{t}^{m}\|_{*}^{2}+\|M_{t}^{m}\|_{*}^{2})^{1/2}\big]$
admits a rate $\mathcal{V}_{m}(T)=O(T^{b})$ for some $0<b<1/2$. Theorem 2 then
shows that LocalAdaSEG enjoys a convergence rate (ignoring the problem
parameters $L$, $D$, and $G$) of
$$\frac{1}{\sqrt{MKR}}+\frac{\sqrt{M}}{(KR)^{1-b}}+\frac{\sqrt{M}}{KR}\,,$$
where $M$ is the number of machines, $R$ the number of communication rounds,
and $K$ the number of local steps between two communications. Minibatch EG
attains the convergence rate
$$\frac{1}{\sqrt{MKR}}+\frac{1}{R}\,.$$
Both algorithms achieve linear speed-up, i.e., the dominant term is
$O(\sigma/\sqrt{MKR})$. In order for LocalAdaSEG to be comparable with
minibatch EG in the higher-order terms, we set
$\sqrt{M}/(KR)^{1-b}=\Theta(1/R)$ and $\sqrt{M}/(KR)=O(1/R)$ and obtain
$K=\Theta(\sqrt{M}T^{b})$, as derived below. With this choice of $K$,
LocalAdaSEG achieves a communication efficiency no worse than minibatch EG,
with the crucial advantage of being tuning-free. For comparison, when
optimizing strongly convex functions, local SGD needs $K=O(\sqrt{T})$ to
achieve linear speed-up stich2018local . The discussion here is purely
theoretical, since the gradient-growth exponent $b$ is hard to estimate in
practice.
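For completeness, the derivation of $K=\Theta(\sqrt{M}T^{b})$ referenced above follows from matching the higher-order terms with $T=KR$ held fixed:

```latex
% Matching LocalAdaSEG's higher-order term with minibatch EG's 1/R, with T = KR:
\frac{\sqrt{M}}{(KR)^{1-b}}=\Theta\Big(\frac{1}{R}\Big)
\iff R=\Theta\Big(\frac{T^{1-b}}{\sqrt{M}}\Big)
\iff K=\frac{T}{R}=\Theta\big(\sqrt{M}\,T^{b}\big).
% The second condition then holds automatically:
\frac{\sqrt{M}}{KR}=\frac{\sqrt{M}}{T}\leq\frac{\sqrt{M}}{T^{1-b}}=O\Big(\frac{1}{R}\Big).
```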
Proof Sketch of Theorem 2. We present a proof sketch for the smooth case.
Recall the update formulas
$${z^{m}_{t}}=\Pi_{{\mathcal{Z}}}[{\tilde{z}^{m,*}_{t-1}}-{\eta^{m}_{t}}{M^{m}_{t}}]\quad\text{with }{M^{m}_{t}}=\tilde{G}({\tilde{z}^{m,*}_{t-1}}),$$
$${\tilde{z}^{m}_{t}}=\Pi_{{\mathcal{Z}}}[{\tilde{z}^{m,*}_{t-1}}-{\eta^{m}_{t}}{g^{m}_{t}}]\quad\text{with }{g^{m}_{t}}=\tilde{G}({z^{m}_{t}}).$$
Figure 2 provides a computation diagram and illustrates the relationship
between the above variables.
Figure 2: The computation diagram for LocalAdaSEG. Left panel: computation on
machine $m$ when there is no communication ($t\notin S$). Right panel:
computation on machine $m$ on a communication round ($t\in S$).
We define the noise in the gradient operator $G$ by
$${\xi^{m}_{t}}\vcentcolon=G({z^{m}_{t}})-{g^{m}_{t}}=G({z^{m}_{t}})-\tilde{G}({z^{m}_{t}}).$$
Moreover, we define the gradient-like quantity
$$(Z_{t}^{m})^{2}\vcentcolon=\frac{\|z_{t}^{m}-\tilde{z}_{t-1}^{m,*}\|^{2}+\|z_{t}^{m}-\tilde{z}_{t}^{m}\|^{2}}{5(\eta_{t}^{m})^{2}}.$$
If we ignore the projection operator in the update, the term $Z_{t}^{m}$ is
of a similar scale as the gradients $\tilde{G}({z^{m}_{t}})$ and
$\tilde{G}({\tilde{z}^{m}_{t}})$.
We begin with the following decomposition: for all $z\in{\mathcal{Z}}$,
$$\begin{aligned}
{\sum_{t=1}^{T}}{\sum_{m=1}^{M}}\big\langle{z^{m}_{t}}-z,G({z^{m}_{t}})\big\rangle
&={\sum_{t=1}^{T}}{\sum_{m=1}^{M}}\big\langle{z^{m}_{t}}-z,{\xi^{m}_{t}}\big\rangle+{\sum_{t=1}^{T}}{\sum_{m=1}^{M}}\big\langle{z^{m}_{t}}-z,{g^{m}_{t}}\big\rangle\\
&\leq\underbrace{{\sum_{t=1}^{T}}{\sum_{m=1}^{M}}\big\langle{z^{m}_{t}}-z,{\xi^{m}_{t}}\big\rangle}_{I(z)}
+\underbrace{{\sum_{t=1}^{T}}{\sum_{m=1}^{M}}\frac{1}{{\eta^{m}_{t}}}\Big(\tfrac{1}{2}\|z-{\tilde{z}^{m,*}_{t-1}}\|^{2}-\tfrac{1}{2}\|z-{\tilde{z}^{m}_{t}}\|^{2}\Big)}_{II(z)}\\
&\quad\underbrace{-{\sum_{t=1}^{T}}{\sum_{m=1}^{M}}\frac{1}{{\eta^{m}_{t}}}\Big(\tfrac{1}{2}\|{z^{m}_{t}}-{\tilde{z}^{m,*}_{t-1}}\|^{2}+\tfrac{1}{2}\|{z^{m}_{t}}-{\tilde{z}^{m}_{t}}\|^{2}\Big)}_{III}
+\underbrace{{\sum_{t=1}^{T}}{\sum_{m=1}^{M}}\|{g^{m}_{t}}-{M^{m}_{t}}\|_{*}\cdot\|{z^{m}_{t}}-{\tilde{z}^{m}_{t}}\|}_{IV},
\end{aligned}$$
where we have used a descent lemma for EG updates that is common in the
literature (Lemma 4 in our paper). This quantity matters because, by the
convexity-concavity of the problem, the duality-gap metric can be upper
bounded by it.
Next, we analyze each term separately. The term $I(z)$ characterizes the
noise of the problem and eventually contributes the noise term
$\frac{\sigma}{\sqrt{KMR}}$. For the term $II(z)$ we use a telescoping
argument and show that it can be upper bounded by
$\sum_{m,t}{\eta^{m}_{t}}(Z^{m}_{t})^{2}$. The telescoping argument can be
applied thanks to the averaging weights $w^{m}_{t}$ in the algorithm. The term
$III$ is negative; we keep the tail part of $III$, which cancels the tail part
of the term $IV$. For the term $IV$ we use the smoothness of the problem and
show that it can be bounded by $\sum_{m,t}(\eta^{m}_{t})^{2}(Z^{m}_{t})^{2}$.
Finally, two sums of the form $\sum_{m,t}{\eta^{m}_{t}}(Z^{m}_{t})^{2}$ and
$\sum_{m,t}(\eta^{m}_{t})^{2}(Z^{m}_{t})^{2}$ remain to be handled. For this
we use the well-known basic inequalities
$\sum_{i=1}^{n}{a_{i}}/({a_{0}+\sum_{j=1}^{i-1}a_{j}})=O(\log(1+\sum_{i}a_{i}))$
and
$\sum_{i=1}^{n}{a_{i}}/{\sqrt{a_{0}+\sum_{j=1}^{i-1}a_{j}}}=\Theta(\sqrt{\sum_{i}a_{i}})$
for positive numbers $a_{i}$.
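The following small numerical check illustrates the two summation bounds, assuming an arbitrary positive sequence $a_{i}$ (random here); it only illustrates the stated orders of growth and is not part of the proof.

```python
# A numerical illustration of the two summation bounds, assuming a random
# positive sequence a_i.
import numpy as np

rng = np.random.default_rng(0)
a0, a = 1.0, rng.uniform(0.1, 2.0, 100000)
prefix = a0 + np.concatenate(([0.0], np.cumsum(a)[:-1]))  # a0 + sum_{j<i} a_j
lhs_log = np.sum(a / prefix)             # should be O(log(1 + sum_i a_i))
lhs_sqrt = np.sum(a / np.sqrt(prefix))   # should be Theta(sqrt(sum_i a_i))
print(f"sum a_i/prefix       = {lhs_log:9.2f}  vs log(1+sum) = {np.log(1 + a.sum()):9.2f}")
print(f"sum a_i/sqrt(prefix) = {lhs_sqrt:9.2f}  vs sqrt(sum)  = {np.sqrt(a.sum()):9.2f}")
```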
Nonadaptive local algorithms rely on choosing a vanishing stepsize, usually
inversely proportional to a prespecified total number of iterations $T$. The
freedom to choose the stepsize based on a prespecified $T$ is crucial in the
proofs for these algorithms: it allows canceling the asynchronicity of
updates caused by local steps and the bias in those updates caused by data
heterogeneity. This is the case for both convex optimization and
convex-concave optimization. In the adaptive regime, however, such a proof
technique is clearly not viable.
Our algorithm instead requires a carefully designed iterate-averaging scheme,
with weights inversely proportional to the stepsizes. This averaging scheme
is designed to account for the asynchronicity of local iterates and is
determined automatically by the optimization process. This is what enables
the extension of an Adam-type stepsize to parallel settings, which is highly
nontrivial.
## 4 Experiments
We apply LocalAdaSEG to the stochastic bilinear minimax problem introduced in
gidel2018a ; beznosikov2021distributed and train a Wasserstein generative
adversarial network (Wasserstein GAN) arjovsky2017wasserstein . For the
homogeneous setting, to demonstrate the efficiency of our proposed algorithm,
we compare LocalAdaSEG with minibatch stochastic extragradient descent ascent
(MB-SEGDA) nemirovski2004prox , minibatch universal mirror-prox (MB-UMP)
bach2019universal , minibatch adaptive single-gradient mirror-prox (MB-ASMP)
ene2020adaptive , extra-step local SGD (LocalSEGDA)
beznosikov2021distributed , and local stochastic gradient descent ascent
(LocalSGDA) deng2020local . We further extend the proposed LocalAdaSEG
algorithm to solve federated WGANs with a heterogeneous dataset to verify its
efficiency. To validate the practicality of LocalAdaSEG, we also train BigGAN
brock2018large on the CIFAR10 dataset in the heterogeneous setting. In this
setting, we also compare LocalAdaSEG with Local Adam
beznosikov2021distributed . We emphasize that whether Local Adam converges is
still an open question, even in the stochastic convex-concave setting.
Figure 3: Subfigures (a)-(b) and (c)-(d) plot the residual of LocalAdaSEG
against the total number of iterations $T$ and the number of communications
$R$, with varying numbers of local iterations $K$. We also investigate the
effect of the noise level ($\sigma=0.1$ in (a)(b) and $\sigma=0.5$ in (c)(d)).
Figure 4: Subfigures (a)-(b) and (c)-(d) compare LocalAdaSEG with existing
optimizers. We plot the residuals against the total number of iterations $T$
and the number of communications $R$ with different noise levels ($\sigma=0.1$
in (a)(b) and $\sigma=0.5$ in (c)(d)).
### 4.1 Stochastic bilinear minimax problem
We consider the stochastic bilinear minimax problem with box constraints,
$$\min_{x\in C^{n}}\max_{y\in C^{n}}F(x,y)\quad\text{where}\quad F(x,y)\vcentcolon={\mathbb{E}}_{\xi\sim P}\big[x^{\top}Ay+(b+\xi)^{\top}x+(c+\xi)^{\top}y\big].$$
Here $C^{n}=[-1,1]^{n}$ is a box in ${{\mathbb{R}}}^{n}$, the tuple $(A,b,c)$
is deterministic, and the perturbation variable $\xi$ follows a normal
distribution with variance $\sigma$. We define the KKT residual
${\rm Res}(x,y)$ as
$${\rm Res}(x,y)^{2}\vcentcolon=\|x-\Pi_{C^{n}}\big(x-(Ay+b)\big)\|^{2}+\|y-\Pi_{C^{n}}\big(y+(Ax+c)\big)\|^{2}.$$
It is not hard to verify that, given
$(x^{*},y^{*})\in\mathbb{R}^{n}\times\mathbb{R}^{n}$,
${\rm Res}(x^{*},y^{*})=0$ if and only if $(x^{*},y^{*})$ belongs to the set
of saddle-points of the bilinear minimax problem above. During the
experiments, we use ${\rm Res}(x,y)$ to measure the quality of the
approximate solutions obtained by the different optimizers.
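The residual translates directly into code; below is a short NumPy transcription, assuming box constraints $C^{n}=[-1,1]^{n}$, with a random illustrative instance $(A,b,c)$ rather than the generated dataset described next.

```python
# A direct transcription of the KKT residual Res(x, y) defined above,
# assuming C^n = [-1, 1]^n; the instance (A, b, c) is illustrative.
import numpy as np

def kkt_residual(x, y, A, b, c):
    proj = lambda v: np.clip(v, -1.0, 1.0)   # projection onto C^n
    rx = x - proj(x - (A @ y + b))           # x-block of the residual
    ry = y - proj(y + (A.T @ x + c))         # y-block (A.T x = A x here, A symmetric)
    return np.sqrt(np.sum(rx ** 2) + np.sum(ry ** 2))

rng = np.random.default_rng(0)
n = 10
A = rng.uniform(-1, 1, (n, n)); A = (A + A.T) / 2
b, c = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
print(f"Res at the origin: {kkt_residual(np.zeros(n), np.zeros(n), A, b, c):.4f}")
```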
Dataset Generation. We uniformly generate $b$ and $c$ in $[-1,1]^{n}$ with
$n=10$. The symmetric matrix $A$ is constructed as
$A=\bar{A}/\max(b_{\max},c_{\max})$, where $\bar{A}\in[-1,1]^{n\times n}$ is a
random symmetric matrix and $b_{\max}$, $c_{\max}$ denote the largest entries
of $b$ and $c$. We emphasize that $A$ is merely symmetric, not semidefinite.
To simulate the distributed environment, we distribute $(A,b,c)$ to $M$
workers, where $M=4$. Each worker solves the above bilinear problem locally
with an optimization algorithm. We instantiate LocalAdaSEG with different
numbers of local iterations $K\in\{1,5,10,50,100,250,500\}$ and different
noise levels $\sigma\in\{0.1,0.5\}$, shown in Figure 3. A larger $\sigma$
indicates more noise in the stochastic gradients, making the problem harder.
Furthermore, we compare LocalAdaSEG with the local iteration count set to
$K=50$ against several existing optimizers, as illustrated in Figure 4.
Experimental Results. In Figure 3, LocalAdaSEG provides stable convergence
results under different configurations of the local iteration count $K$ and
the noise level $\sigma$. Subfigures (b)(d) illustrate that a suitably large
$K$ can accelerate the convergence of LocalAdaSEG. Subfigures (a)(c)
illustrate that a large variance results in unstable optimization
trajectories. The findings agree with our theoretical predictions: (i) a
larger $T=KR$ improves convergence; (ii) the variance term dominates the
convergence rate of LocalAdaSEG, so a large variance slows LocalAdaSEG down.
In Figure 4, subfigures (a)(c) illustrate that adaptive variants of
stochastic minimax optimizers, i.e., LocalAdaSEG, MB-UMP, and MB-ASMP,
achieve better performance than standard ones such as LocalSGDA, LocalSEGDA,
and MB-SEGDA, whose learning rates are hard to tune for minimax problems.
Furthermore, when compared in terms of communication rounds in subfigures
(b)(d), LocalAdaSEG converges faster than the other distributed stochastic
minimax optimizers, demonstrating its superiority.
To further validate the proposed method, we compare the asynchronous and
synchronous variants of LocalAdaSEG on the stochastic bilinear minimax
problem. We also compare both variants with the single-thread version (SEGDA
with $MKR$ iterations) in terms of residual and wall-clock time. Finally, we
track the quantity $V_{t}$ over the updates $t$. The experimental details are
described in the appendix. As can be seen there, compared with the
synchronous case, asynchronicity only slows convergence with respect to the
communication rounds. Compared to SEGDA with $MKR$ iterations, our proposed
LocalAdaSEG achieves more stable and better performance. The quantity $V_{t}$
is indeed much smaller than the dominant variance term.
### 4.2 Wasserstein GAN
We train a Wasserstein GAN (WGAN) to validate the efficiency of LocalAdaSEG
on a real-world task. This is a challenging minimax problem, as the
objectives of both the generator and the discriminator are
nonconvex-nonconcave. The description of the problem and implementation
details are given in the appendix.
Experimental results. The corresponding figures in the appendix compare
MB-UMP, MB-ASMP, Local Adam, and LocalAdaSEG in the homogeneous and
heterogeneous settings, respectively. In panels (a), MB-UMP, MB-ASMP, Local
Adam, and LocalAdaSEG all quickly converge to a solution with a low FID
value. However, when compared in terms of communication rounds in panels (b),
LocalAdaSEG and Local Adam converge faster than the other optimizers and
reach a satisfactory solution in just a few rounds. In panels (c), all the
listed optimizers achieve a high IS; in particular, the IS of LocalAdaSEG and
Local Adam increases much faster, with less communication, than that of
MB-UMP and MB-ASMP, as shown in panels (d).
We also show and compare the FID and IS of LocalAdaSEG and the other
optimizers under different data distributions (see the appendix). LocalAdaSEG
converges faster as the Dirichlet distribution parameter $\alpha$ decreases,
and when the data distribution changes, LocalAdaSEG still converges faster
than the other existing optimizers.
### 4.3 BigGAN
To validate the practicality of our proposed method, we apply LocalAdaSEG to
train the large-scale BigGAN brock2018large model on the CIFAR10 dataset. The
description of BigGAN and the parameter setup are given in the appendix.
Experimental results. The corresponding figure compares the FID and IS
against communication rounds for LocalAdaSEG and the existing optimizers. In
panel (a), LocalAdaSEG and Local Adam reach a satisfactory FID value within a
few rounds. Similarly, in panel (b), the IS values of LocalAdaSEG and Local
Adam are much higher than those of MB-UMP and MB-ASMP. In short, the FID and
IS of LocalAdaSEG and Local Adam converge much faster than those of the other
optimizers.
Additional Discussions. To end this section, we briefly discuss the
limitations of the current work.
Theoretical limitations. Our theory applies to the homogeneous setting,
meaning each worker has access to data from the same distribution. In
practice, however, data heterogeneity is a main factor practitioners must
take into account in distributed learning. We briefly discuss the technical
challenges here. For the heterogeneous case, the theory for _non-adaptive_
algorithms relies on choosing a very small stepsize, usually inversely
proportional to a prespecified total number of iterations $T$. The freedom to
choose the stepsize based on a prespecified $T$ is crucial in those proofs
and enables canceling the bias caused by local updates, a.k.a. client drift.
The same situation occurs in the convex optimization case. However, our goal
is an adaptive algorithm that does not depend on problem parameters or a
prespecified $T$. For this reason, we leave this important open question for
future work.
Experimental limitations. The scale of the datasets we experimented with
should be increased to showcase the computational benefits of the proposed
algorithm. At the current stage we have experimented with MNIST data and,
further, added CIFAR10 experiments following the reviewers' suggestions.
Application to other ultra-large datasets such as ImageNet requires
significant engineering effort and is left for future investigation. We
emphasize that our paper mainly contributes to the theoretical understanding
of adaptive algorithms in distributed settings.
## 5 Conclusion
We proposed LocalAdaSEG, an adaptive, communication-efficient, distributed
stochastic extragradient algorithm in the Parameter-Server model for
stochastic convex-concave minimax problems. We showed theoretically that
LocalAdaSEG achieves the optimal convergence rate with a linear speed-up
property for both nonsmooth and smooth objectives. Experiments verify our
theoretical results and demonstrate the efficiency of LocalAdaSEG.
For future work, since the current analysis holds only for the homogeneous
setting, a promising direction is to extend the theoretical results for
LocalAdaSEG to the heterogeneous setting, which better models various
real-world applications such as federated GANs beznosikov2021distributed and
robust federated learning NEURIPS2020_ac450d10 . In addition, extending the
theoretical results from the stochastic convex-concave setting to the
stochastic nonconvex-(non)concave setting is an interesting and challenging
research direction.
## Declarations
- Funding: This work is supported by the Major Science and Technology
Innovation 2030 “Brain Science and Brain-like Research” key project (No.
2021ZD0201405).
- Conflict of interest/Competing interests: The authors declare that they
have no conflict of interest.
- Ethics approval: Not applicable.
- Consent to participate: Not applicable.
- Consent for publication: Not applicable.
- Availability of data and materials: The data used in this work are all
public.
- Code availability: The code for the proposed method will be released after
publication.
- Authors' contributions: All authors contributed to the study conception and
design. The first draft of the manuscript was written by Luofeng Liao, and
all authors commented on previous versions of the manuscript. All authors
read and approved the final manuscript.
\begin{thebibliography}{59}

\bibitem{antonakopoulos2021adaptive}
K.~Antonakopoulos, V.~Belmega, and P.~Mertikopoulos.
Adaptive extra-gradient methods for min-max optimization and games.
In \emph{International Conference on Learning Representations}, 2021.

\bibitem{arjovsky2017towards}
M.~Arjovsky and L.~Bottou.
Towards principled methods for training generative adversarial networks.
\emph{arXiv preprint arXiv:1701.04862}, 2017.

\bibitem{arjovsky2017wasserstein}
M.~Arjovsky, S.~Chintala, and L.~Bottou.
Wasserstein generative adversarial networks.
In \emph{Proceedings of the 34th International Conference on Machine Learning}, volume~70 of \emph{Proceedings of Machine Learning Research}, pages 214--223. PMLR, 2017.

\bibitem{azizian2020tight}
W.~Azizian, I.~Mitliagkas, S.~Lacoste-Julien, and G.~Gidel.
A tight and unified analysis of gradient-based methods for a whole spectrum of differentiable games.
In \emph{International Conference on Artificial Intelligence and Statistics}, pages 2863--2873. PMLR, 2020.

\bibitem{babanezhad2020geometry}
R.~Babanezhad and S.~Lacoste-Julien.
Geometry-aware universal mirror-prox.
\emph{arXiv preprint arXiv:2011.11203}, 2020.

\bibitem{bach2019universal}
F.~Bach and K.~Y. Levy.
A universal algorithm for variational inequalities adaptive to smoothness and noise.
In \emph{Conference on Learning Theory}, pages 164--194. PMLR, 2019.

\bibitem{beznosikov2021distributed}
A.~Beznosikov, V.~Samokhin, and A.~Gasnikov.
Distributed saddle-point problems: Lower bounds, optimal algorithms and federated GANs.
\emph{arXiv preprint arXiv:2010.13112}, 2021.

\bibitem{brock2018large}
A.~Brock, J.~Donahue, and K.~Simonyan.
Large scale {GAN} training for high fidelity natural image synthesis.
In \emph{International Conference on Learning Representations}, 2019.

\bibitem{Chambolle2010}
A.~Chambolle and T.~Pock.
A first-order primal-dual algorithm for convex problems with applications to imaging.
\emph{Journal of Mathematical Imaging and Vision}, 40(1):120--145, 2010.

\bibitem{Chatterjee2014}
S.~Chatterjee.
\emph{Superconcentration and Related Topics}.
Springer International Publishing, 2014.

\bibitem{chen2021quantized}
C.~Chen, L.~Shen, H.~Huang, and W.~Liu.
Quantized Adam with error feedback.
\emph{ACM Transactions on Intelligent Systems and Technology (TIST)}, 12(5):1--26, 2021.

\bibitem{chen2021towards}
C.~Chen, L.~Shen, F.~Zou, and W.~Liu.
Towards practical Adam: Non-convexity, convergence theory, and mini-batch acceleration.
\emph{arXiv preprint arXiv:2101.05471}, 2021.

\bibitem{chen2021cada}
T.~Chen, Z.~Guo, Y.~Sun, and W.~Yin.
CADA: Communication-adaptive distributed Adam.
In \emph{International Conference on Artificial Intelligence and Statistics}, pages 613--621. PMLR, 2021.

\bibitem{chen2018on}
X.~Chen, S.~Liu, R.~Sun, and M.~Hong.
On the convergence of a class of Adam-type algorithms for non-convex optimization.
In \emph{International Conference on Learning Representations}, 2019.

\bibitem{chen2020distributed}
X.~Chen, S.~Yang, L.~Shen, and X.~Pang.
A distributed training algorithm of generative adversarial networks with quantized gradients.
\emph{arXiv preprint arXiv:2010.13359}, 2020.

\bibitem{Chen2014}
Y.~Chen, G.~Lan, and Y.~Ouyang.
Optimal primal-dual methods for a class of saddle point problems.
\emph{SIAM Journal on Optimization}, 24(4):1779--1814, 2014.

\bibitem{chen2017accelerated}
Y.~Chen, G.~Lan, and Y.~Ouyang.
Accelerated schemes for a class of variational inequalities.
\emph{Mathematical Programming}, 165(1):113--149, 2017.

\bibitem{chen2018universal}
Z.~Chen, Z.~Yuan, J.~Yi, B.~Zhou, E.~Chen, and T.~Yang.
Universal stagewise learning for non-convex problems with convergence on averaged solutions.
In \emph{International Conference on Learning Representations}, 2019.

\bibitem{daskalakis2018training}
C.~Daskalakis, A.~Ilyas, V.~Syrgkanis, and H.~Zeng.
Training {GAN}s with optimism.
In \emph{International Conference on Learning Representations}, 2018.

\bibitem{delage2010distributionally}
E.~Delage and Y.~Ye.
Distributionally robust optimization under moment uncertainty with application to data-driven problems.
\emph{Operations Research}, 58(3):595--612, 2010.

\bibitem{deng2009imagenet}
J.~Deng, W.~Dong, R.~Socher, L.-J. Li, K.~Li, and L.~Fei-Fei.
ImageNet: A large-scale hierarchical image database.
In \emph{2009 IEEE Conference on Computer Vision and Pattern Recognition}, pages 248--255. IEEE, 2009.

\bibitem{NEURIPS2020_ac450d10}
Y.~Deng, M.~M. Kamani, and M.~Mahdavi.
Distributionally robust federated averaging.
In \emph{Advances in Neural Information Processing Systems}, volume~33, pages 15111--15122. Curran Associates, Inc., 2020.

\bibitem{deng2020local}
Y.~Deng and M.~Mahdavi.
Local stochastic gradient descent ascent: Convergence analysis and communication efficiency.
In \emph{Proceedings of The 24th International Conference on Artificial Intelligence and Statistics}, volume 130 of \emph{Proceedings of Machine Learning Research}, pages 1387--1395. PMLR, 2021.

\bibitem{duchi2011adaptive}
J.~Duchi, E.~Hazan, and Y.~Singer.
Adaptive subgradient methods for online learning and stochastic optimization.
\emph{Journal of Machine Learning Research}, 12(7), 2011.

\bibitem{ene2020adaptive}
A.~Ene and H.~L. Nguyen.
Adaptive and universal single-gradient algorithms for variational inequalities.
\emph{arXiv preprint arXiv:2010.07799}, 2020.

\bibitem{gidel2018a}
G.~Gidel, H.~Berard, G.~Vignoud, P.~Vincent, and S.~Lacoste-Julien.
A variational inequality perspective on generative adversarial networks.
In \emph{International Conference on Learning Representations}, 2019.

\bibitem{gidel2019negative}
G.~Gidel, R.~A. Hemmat, M.~Pezeshki, R.~Le~Priol, G.~Huang, S.~Lacoste-Julien, and I.~Mitliagkas.
Negative momentum for improved game dynamics.
In \emph{The 22nd International Conference on Artificial Intelligence and Statistics}, pages 1802--1811. PMLR, 2019.

\bibitem{goodfellow2014generative}
I.~J. Goodfellow, J.~Pouget-Abadie, M.~Mirza, B.~Xu, D.~Warde-Farley, S.~Ozair, A.~C. Courville, and Y.~Bengio.
Generative adversarial nets.
In \emph{NIPS}, 2014.

\bibitem{guo2020communication}
Z.~Guo, M.~Liu, Z.~Yuan, L.~Shen, W.~Liu, and T.~Yang.
Communication-efficient distributed stochastic AUC maximization with deep neural networks.
In \emph{International Conference on Machine Learning}, pages 3864--3874. PMLR, 2020.

\bibitem{heusel2017gans}
M.~Heusel, H.~Ramsauer, T.~Unterthiner, B.~Nessler, and S.~Hochreiter.
GANs trained by a two time-scale update rule converge to a local Nash equilibrium.
In \emph{NIPS}, 2017.

\bibitem{hou2021efficient}
C.~Hou, K.~K. Thekumparampil, G.~Fanti, and S.~Oh.
Efficient algorithms for federated saddle point optimization, 2021.

\bibitem{judisky2011firstorder}
A.~Juditsky, A.~Nemirovski, et~al.
First-order methods for nonsmooth convex large-scale optimization, II: Utilizing problem's structure.
\emph{Optimization for Machine Learning}, 30(9):149--183, 2011.

\bibitem{Juditsky2011solving}
A.~Juditsky, A.~Nemirovski, and C.~Tauvel.
Solving variational inequalities with stochastic mirror-prox algorithm.
\emph{Stochastic Systems}, 1(1):17--58, 2011.

\bibitem{kingma2017adam}
D.~P. Kingma and J.~Ba.
Adam: A method for stochastic optimization.
In \emph{International Conference on Learning Representations}, 2017.

\bibitem{korpel1976}
G.~M. Korpelevich.
The extragradient method for finding saddle points and other problems.
\emph{Matecon}, 1976.

\bibitem{Li2020On}
X.~Li, K.~Huang, W.~Yang, S.~Wang, and Z.~Zhang.
On the convergence of FedAvg on non-IID data.
In \emph{International Conference on Learning Representations}, 2020.

\bibitem{lin2020near}
T.~Lin, C.~Jin, and M.~I. Jordan.
Near-optimal algorithms for minimax optimization.
In \emph{Conference on Learning Theory}, pages 2738--2779. PMLR, 2020.

\bibitem{lin2019don}
T.~Lin, S.~U. Stich, K.~K. Patel, and M.~Jaggi.
Don't use large mini-batches, use local {SGD}.
In \emph{International Conference on Learning Representations}, 2020.

\bibitem{liu2020towards}
M.~Liu, Y.~Mroueh, J.~Ross, W.~Zhang, X.~Cui, P.~Das, and T.~Yang.
Towards better understanding of adaptive gradient algorithms in generative adversarial nets.
In \emph{International Conference on Learning Representations}, 2020.

\bibitem{mingruiliu2020decentralized}
M.~Liu, W.~Zhang, Y.~Mroueh, X.~Cui, J.~Ross, T.~Yang, and P.~Das.
A decentralized parallel algorithm for training generative adversarial nets.
In \emph{Advances in Neural Information Processing Systems}, volume~33, 2020.

\bibitem{mcmahan2021advances}
H.~B. McMahan et~al.
Advances and open problems in federated learning.
\emph{Foundations and Trends\textregistered\ in Machine Learning}, 14(1), 2021.

\bibitem{mertikopoulos2018optimistic}
P.~Mertikopoulos, B.~Lecouat, H.~Zenati, C.-S. Foo, V.~Chandrasekhar, and G.~Piliouras.
Optimistic mirror descent in saddle-point problems: Going the extra(-gradient) mile.
In \emph{International Conference on Learning Representations}, 2019.

\bibitem{mertikopoulos2018cycles}
P.~Mertikopoulos, C.~Papadimitriou, and G.~Piliouras.
Cycles in adversarial regularized learning.
In \emph{Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms}, pages 2703--2717. SIAM, 2018.

\bibitem{monteiro2011complexity}
R.~D.~C. Monteiro and B.~F. Svaiter.
Complexity of variants of Tseng's modified F-B splitting and Korpelevich's methods for hemivariational inequalities with applications to saddle-point and convex optimization problems.
\emph{SIAM Journal on Optimization}, 21:1688--1720, 2011.

\bibitem{nemirovski2004prox}
A.~Nemirovski.
Prox-method with rate of convergence $O(1/t)$ for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems.
\emph{SIAM Journal on Optimization}, 15(1):229--251, 2004.

\bibitem{nemirovski2009robust}
A.~Nemirovski, A.~Juditsky, G.~Lan, and A.~Shapiro.
Robust stochastic approximation approach to stochastic programming.
\emph{SIAM Journal on Optimization}, 19(4):1574--1609, 2009.

\bibitem{neumann1928theorie}
J.~v. Neumann.
Zur Theorie der Gesellschaftsspiele.
\emph{Mathematische Annalen}, 100(1):295--320, 1928.

\bibitem{reddi2021adaptive}
S.~J. Reddi, Z.~Charles, M.~Zaheer, Z.~Garrett, K.~Rush, J.~Kone{\v{c}}n{\'y}, S.~Kumar, and H.~B. McMahan.
Adaptive federated optimization.
In \emph{International Conference on Learning Representations}, 2021.

\bibitem{Reddi2018on}
S.~J. Reddi, S.~Kale, and S.~Kumar.
On the convergence of Adam and beyond.
In \emph{International Conference on Learning Representations}, 2018.

\bibitem{rogozin2021decentralized}
A.~Rogozin, A.~Beznosikov, D.~Dvinskikh, D.~Kovalev, P.~Dvurechensky, and A.~Gasnikov.
Decentralized distributed optimization for saddle point problems, 2021.

\bibitem{smola2010architecture}
A.~Smola and S.~Narayanamurthy.
An architecture for parallel topic models.
\emph{Proceedings of the VLDB Endowment}, 3(1-2):703--710, 2010.

\bibitem{stich2018local}
S.~U. Stich.
Local {SGD} converges fast and communicates little.
In \emph{International Conference on Learning Representations}, 2019.

\bibitem{wang2019towards}
J.~Wang, T.~Zhang, S.~Liu, P.-Y. Chen, J.~Xu, M.~Fardad, and B.~Li.
Towards a unified min-max framework for adversarial exploration and robustness.
\emph{arXiv preprint arXiv:1906.03563}, 2019.

\bibitem{xie2019local}
C.~Xie, O.~Koyejo, I.~Gupta, and H.~Lin.
Local AdaAlter: Communication-efficient stochastic gradient descent with adaptive learning rates.
\emph{arXiv preprint arXiv:1911.09030}, 2019.

\bibitem{yan2020adaptive}
Y.~Yan and Y.~Xu.
Adaptive primal-dual stochastic gradient method for expectation-constrained convex stochastic programs.
\emph{arXiv preprint arXiv:2012.14943}, 2020.

\bibitem{Yu2019onthelinear}
H.~Yu, R.~Jin, and S.~Yang.
On the linear speed-up analysis of communication efficient momentum {SGD} for distributed non-convex optimization.
In \emph{Proceedings of the 36th International Conference on Machine Learning}, volume~97 of \emph{Proceedings of Machine Learning Research}, pages 7184--7193. PMLR, 2019.

\bibitem{NEURIPS2020_52aaa62e}
J.~Zhang, P.~Xiao, R.~Sun, and Z.~Luo.
A single-loop smoothed gradient descent-ascent algorithm for nonconvex-concave min-max problems.
In \emph{Advances in Neural Information Processing Systems}, volume~33, pages 7377--7389. Curran Associates, Inc., 2020.

\bibitem{zhao2021accelerated}
R.~Zhao.
Accelerated stochastic algorithms for convex-concave saddle-point problems.
\emph{arXiv preprint arXiv:1903.01687}, 2021.

\bibitem{zou2019sufficient}
F.~Zou, L.~Shen, Z.~Jie, W.~Zhang, and W.~Liu.
A sufficient condition for convergences of Adam and RMSProp.
In \emph{Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, pages 11127--11135, 2019.

\end{thebibliography}
\appendix

\section{Related Works}

\paragraph{Stochastic minimax algorithms.}
Stochastic convex-concave minimax problems~\eqref{spp} have been extensively studied in the optimization literature and are usually solved via variants of PDHG or extragradient methods; see, for example, \cite{Chambolle2010,zhao2021accelerated,nemirovski2004prox,nemirovski2009robust,Juditsky2011solving,judisky2011firstorder,Chen2014,beznosikov2021distributed}. \cite{chen2017accelerated} and \cite{Juditsky2011solving} adopted mirror-prox-type methods to tackle the stochastic convex-concave minimax problem with an ${O}(1/\sqrt{T})$ convergence rate. \cite{zhao2021accelerated} proposed an accelerated stochastic PDHG-type algorithm with Bregman divergence to solve the stochastic convex-concave minimax problem with a similar ${O}(1/\sqrt{T})$ convergence rate dominated by the stochastic variance term. However, while the algorithms of \cite{chen2017accelerated,Juditsky2011solving,zhao2021accelerated} achieve the optimal rate, matching the lower and upper bounds for the stochastic convex-concave minimax problem \cite{beznosikov2021distributed}, their performance is highly sensitive to the choice of the learning rate, which must be either a sufficiently small constant or a diminishing sequence.
\paragraph{Adaptive minimax algorithms.}
Adaptive learning rates in stochastic optimization were first developed for minimization problems \cite{duchi2011adaptive}. Their variants \cite{kingma2017adam,Reddi2018on,zou2019sufficient,chen2021towards,chen2021quantized} are widely used to train deep learning models. The key feature of an adaptive learning rate is that it automatically adjusts the step size during the training process and achieves faster convergence. Recently, adaptive learning rates have also been developed for minimax algorithms to accelerate the training process, since the learning rate in a stochastic minimax algorithm is harder to tune based on the minimax loss than in minimization problems. Several recent papers have analyzed the convergence rate of adaptive extragradient methods in the convex-concave minimax setting. The universal mirror-prox method \cite{bach2019universal} introduced a new adaptive learning rate technique that adapts to problem parameters, such as the unknown Lipschitz parameter, and achieves optimal convergence rates in the stochastic setting. \cite{babanezhad2020geometry} extended the universal mirror-prox method \cite{bach2019universal} by replacing the norm dependence in the learning rate with a general Bregman divergence dependence. \cite{ene2020adaptive} proposed an adaptive stochastic single-call extragradient algorithm for variational inequality problems. \cite{antonakopoulos2021adaptive} proposed a similar adaptive mirror-prox algorithm, but their method handles an unbounded domain by introducing the notion of local norms in the deterministic setting. In addition to the adaptive extragradient methods mentioned above for the general stochastic minimax problem, \cite{yan2020adaptive} proposed an adaptive primal-dual method for expectation-constrained convex stochastic programs, which can be formulated as minimax optimization with the coupling term being a linear function of the dual variable. Training a GAN model \cite{goodfellow2014generative} corresponds to solving a specific non-convex non-concave minimax problem. Several works have heuristically adopted stochastic adaptive extragradient methods for training GANs \cite{gidel2018a,mertikopoulos2018optimistic,beznosikov2021distributed}. Recently, \cite{liu2020towards} studied the convergence behavior of an adaptive optimistic stochastic gradient algorithm for a class of non-convex non-concave minimax problems under the MVI condition to train GANs.

\paragraph{Distributed minimax algorithms.}
As datasets and deep learning architectures become larger and larger, distributed minimax algorithms are needed for GANs and adversarial training. \cite{beznosikov2021distributed} established upper and lower bounds on the iteration complexity for strongly-convex-strongly-concave and convex-concave minimax problems in both centralized and decentralized settings. However, the convergence rate of their Extra Step Local SGD is established only in the strongly-convex-strongly-concave setting, with a linear speed-up property with respect to the number of workers, while for their proposed local Adam no convergence results are provided. \cite{deng2020local} provided convergence guarantees for a primal-dual local stochastic gradient algorithm in the strongly-convex-strongly-concave setting and in several non-convex settings under PL-inequality-type conditions. \cite{chen2020distributed} and \cite{mingruiliu2020decentralized} studied the convergence of a distributed optimistic stochastic gradient algorithm for non-convex non-concave minimax problems under the pseudomonotonicity condition and the MVI condition, respectively. However, their convergence rates hold only for a sufficiently large minibatch size or a sufficiently large number of workers. In addition, there also exist several decentralized or federated algorithms for stochastic strongly-convex-strongly-concave minimax problems \cite{hou2021efficient,rogozin2021decentralized}. In this work, we mainly focus on the centralized setting for stochastic convex-concave minimax problems.
\par\par\par\@@numbered@section{appendix}{toc}{Appendix to Main Text}
\par\par\@@numbered@section{subsection}{toc}{Extension to Unbounded Stochastic
Gradient Oracle} Let $\\{Z_{i}\\}_{i=1}^{n}$ be a sequence of
i.i.d.~{}standard normals. We have the following well-known results (see
Appendix A of \cite[cite]{[\@@bibref{Number}{Chatterjee2014}{}{}]}):
\@@amsalign&{\mathbb{P}}\big{(}\max_{i}Z_{i}>{\mathbb{E}}[\max_{i}Z_{i}]+t\big{)}\leq\exp(-t^{2}/2)\text{
for all }t>0,\\\ &{\mathbb{E}}[\max_{i}|Z_{i}|]\leq\sqrt{2\log(2n)}. With
this, we have ${\mathbb{P}}(\max_{i}|Z_{i}|\geq\sqrt{2\log(2n)}+t)\leq
2\exp(-t^{2}/2)$. We apply this result to the sequence
$\big{\\{}\|G({z^{m}_{t}})-\tilde{G}({z^{m}_{t}})\|_{*}/\|G\|_{\infty},\|G({\tilde{z}^{m,*}_{t-1}})-\tilde{G}({\tilde{z}^{m,*}_{t-1}})\|_{*}/\|G\|_{\infty}\big{\\}}_{m,t}$,
which is a sequence of $2MT$ i.i.d.~{}standard normals by the homogeneity of
the oracle. \par\par\@@numbered@section{appendix}{toc}{Proof of Theorems}
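The two Gaussian facts quoted above are easy to check numerically. The following short Monte Carlo experiment is an illustrative sanity check only (the sample sizes are arbitrary) and plays no role in the analysis.
\begin{verbatim}
import numpy as np

# Empirically check E[max_i |Z_i|] <= sqrt(2 log(2n)) and the tail bound
# P(max_i |Z_i| >= sqrt(2 log(2n)) + t) <= 2 exp(-t^2/2).
rng = np.random.default_rng(0)
n, reps, t = 1000, 2000, 1.0
Z = rng.standard_normal((reps, n))
abs_max = np.abs(Z).max(axis=1)

print(abs_max.mean(), np.sqrt(2 * np.log(2 * n)))          # mean vs. bound
print((abs_max >= np.sqrt(2 * np.log(2 * n)) + t).mean(),
      2 * np.exp(-t ** 2 / 2))                             # tail vs. bound
\end{verbatim}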
\section{Proof of Theorems}

\begin{lemma}\label{lm:bdimprovement}
For all $m\in[M]$, consider the sequence $\{{\eta^{m}_{t}},{\tilde{z}^{m,*}_{t-1}},{z^{m}_{t}},{\tilde{z}^{m}_{t}}\}_{t=1}^{T}$ defined in Algorithm~\ref{algo}. It holds that
\begin{align}
\|{\tilde{z}^{m,*}_{t-1}}-{z^{m}_{t}}\|/{\eta^{m}_{t}}\leq G,\qquad\|{\tilde{z}^{m,*}_{t-1}}-{\tilde{z}^{m}_{t}}\|/{\eta^{m}_{t}}\leq G.
\end{align}
\end{lemma}

\begin{proof}[Proof of Lemma~\ref{lm:bdimprovement}]
Let $I:{\mathcal{Z}}\to{\mathcal{Z}}^{*}$ be the identity map which maps an element $z\in{\mathcal{Z}}$ to the corresponding element in the dual space ${\mathcal{Z}}^{*}$ (we are considering the Euclidean case). The first-order optimality condition of the update rule ${z^{m}_{t}}=\Pi_{\mathcal{Z}}[{\tilde{z}^{m,*}_{t-1}}-{\eta^{m}_{t}}{M^{m}_{t}}]$ is
\begin{align}
\langle{\eta^{m}_{t}}{M^{m}_{t}}+I({z^{m}_{t}}-{\tilde{z}^{m,*}_{t-1}}),\,z-{z^{m}_{t}}\rangle\geq 0,\quad\forall z\in{\mathcal{Z}}.
\end{align}
Setting $z={\tilde{z}^{m,*}_{t-1}}$ and applying the Cauchy--Schwarz inequality, we obtain
\begin{align}
{\eta^{m}_{t}}\|{M^{m}_{t}}\|_{*}\cdot\|{\tilde{z}^{m,*}_{t-1}}-{z^{m}_{t}}\|&\geq\langle{\eta^{m}_{t}}{M^{m}_{t}},{\tilde{z}^{m,*}_{t-1}}-{z^{m}_{t}}\rangle\\
&\geq\langle I({\tilde{z}^{m,*}_{t-1}}-{z^{m}_{t}}),{\tilde{z}^{m,*}_{t-1}}-{z^{m}_{t}}\rangle=\|{\tilde{z}^{m,*}_{t-1}}-{z^{m}_{t}}\|^{2},
\end{align}
so that $\|{\tilde{z}^{m,*}_{t-1}}-{z^{m}_{t}}\|\leq{\eta^{m}_{t}}\|{M^{m}_{t}}\|_{*}\leq{\eta^{m}_{t}}G$. The second inequality follows from the same reasoning applied to the update ${\tilde{z}^{m}_{t}}=\Pi_{\mathcal{Z}}[{\tilde{z}^{m,*}_{t-1}}-{\eta^{m}_{t}}{g^{m}_{t}}]$, using $\|{g^{m}_{t}}\|_{*}\leq G$. This concludes the proof of Lemma~\ref{lm:bdimprovement}.
\end{proof}

\begin{lemma}[One-step analysis]\label{lm:onestep}
For all $m\in[M]$, consider the sequence $\{{\eta^{m}_{t}},{\tilde{z}^{m,*}_{t-1}},{M^{m}_{t}}=\tilde{G}({\tilde{z}^{m,*}_{t-1}}),{z^{m}_{t}},{g^{m}_{t}}=\tilde{G}({z^{m}_{t}}),{\tilde{z}^{m}_{t}}\}_{t=1}^{T}$ defined in Algorithm~\ref{algo}. It holds for all $z\in{\mathcal{Z}}$ that
\begin{align}
\langle{z^{m}_{t}}-z,{g^{m}_{t}}\rangle\leq\frac{1}{{\eta^{m}_{t}}}\Big(\tfrac{1}{2}\|z-{\tilde{z}^{m,*}_{t-1}}\|^{2}-\tfrac{1}{2}\|z-{\tilde{z}^{m}_{t}}\|^{2}\Big)-\frac{1}{{\eta^{m}_{t}}}\Big(\tfrac{1}{2}\|{z^{m}_{t}}-{\tilde{z}^{m,*}_{t-1}}\|^{2}+\tfrac{1}{2}\|{z^{m}_{t}}-{\tilde{z}^{m}_{t}}\|^{2}\Big)\\
+\|{g^{m}_{t}}-{M^{m}_{t}}\|_{*}\cdot\|{z^{m}_{t}}-{\tilde{z}^{m}_{t}}\|.
\end{align}
\end{lemma}

\begin{proof}[Proof of Lemma~\ref{lm:onestep}]
For any $c,g\in{\mathcal{Z}}$, consider an update of the form $a^{*}=\Pi_{{\mathcal{Z}}}[c-g]=\operatorname*{argmin}_{z\in{\mathcal{Z}}}\,\langle g,z\rangle+\tfrac{1}{2}\|z-c\|^{2}$. It holds for all $b\in{\mathcal{Z}}$ that
\begin{align}
\langle g,a^{*}-b\rangle\leq\tfrac{1}{2}\|b-c\|^{2}-\tfrac{1}{2}\|b-a^{*}\|^{2}-\tfrac{1}{2}\|a^{*}-c\|^{2}.
\end{align}
By the update rules of ${z^{m}_{t}}$ and ${\tilde{z}^{m}_{t}}$, we have (taking $a^{*}\leftrightarrow{z^{m}_{t}}$, $b\leftrightarrow{\tilde{z}^{m}_{t}}$, $g\leftrightarrow{\eta^{m}_{t}}{M^{m}_{t}}$, $c\leftrightarrow{\tilde{z}^{m,*}_{t-1}}$)
\begin{align}\label{eq:eq1}
\langle{\eta^{m}_{t}}{M^{m}_{t}},{z^{m}_{t}}-{\tilde{z}^{m}_{t}}\rangle\leq\tfrac{1}{2}\|{\tilde{z}^{m}_{t}}-{\tilde{z}^{m,*}_{t-1}}\|^{2}-\tfrac{1}{2}\|{\tilde{z}^{m}_{t}}-{z^{m}_{t}}\|^{2}-\tfrac{1}{2}\|{z^{m}_{t}}-{\tilde{z}^{m,*}_{t-1}}\|^{2},
\end{align}
and for all $z\in{\mathcal{Z}}$ (taking $a^{*}\leftrightarrow{\tilde{z}^{m}_{t}}$, $b\leftrightarrow z$, $g\leftrightarrow{\eta^{m}_{t}}{g^{m}_{t}}$, $c\leftrightarrow{\tilde{z}^{m,*}_{t-1}}$)
\begin{align}\label{eq:eq2}
\langle{\eta^{m}_{t}}{g^{m}_{t}},{\tilde{z}^{m}_{t}}-z\rangle\leq\tfrac{1}{2}\|z-{\tilde{z}^{m,*}_{t-1}}\|^{2}-\tfrac{1}{2}\|z-{\tilde{z}^{m}_{t}}\|^{2}-\tfrac{1}{2}\|{\tilde{z}^{m}_{t}}-{\tilde{z}^{m,*}_{t-1}}\|^{2}.
\end{align}
Finally, we apply the Cauchy--Schwarz inequality and plug in Eqs.~\eqref{eq:eq1} and \eqref{eq:eq2}:
\begin{align}
\langle{g^{m}_{t}},{z^{m}_{t}}-z\rangle&=\langle{g^{m}_{t}},{z^{m}_{t}}-{\tilde{z}^{m}_{t}}\rangle+\langle{g^{m}_{t}},{\tilde{z}^{m}_{t}}-z\rangle\\
&=\langle{g^{m}_{t}}-{M^{m}_{t}},{z^{m}_{t}}-{\tilde{z}^{m}_{t}}\rangle+\langle{g^{m}_{t}},{\tilde{z}^{m}_{t}}-z\rangle+\langle{M^{m}_{t}},{z^{m}_{t}}-{\tilde{z}^{m}_{t}}\rangle\\
&\leq\|{g^{m}_{t}}-{M^{m}_{t}}\|_{*}\cdot\|{z^{m}_{t}}-{\tilde{z}^{m}_{t}}\|+\langle{g^{m}_{t}},{\tilde{z}^{m}_{t}}-z\rangle+\langle{M^{m}_{t}},{z^{m}_{t}}-{\tilde{z}^{m}_{t}}\rangle\\
&\leq\|{g^{m}_{t}}-{M^{m}_{t}}\|_{*}\cdot\|{z^{m}_{t}}-{\tilde{z}^{m}_{t}}\|\\
&\quad+{\frac{1}{\eta^{m}_{t}}}\Big(\tfrac{1}{2}\|{\tilde{z}^{m}_{t}}-{\tilde{z}^{m,*}_{t-1}}\|^{2}-\tfrac{1}{2}\|{\tilde{z}^{m}_{t}}-{z^{m}_{t}}\|^{2}-\tfrac{1}{2}\|{z^{m}_{t}}-{\tilde{z}^{m,*}_{t-1}}\|^{2}\Big)\\
&\quad+{\frac{1}{\eta^{m}_{t}}}\Big(\tfrac{1}{2}\|z-{\tilde{z}^{m,*}_{t-1}}\|^{2}-\tfrac{1}{2}\|z-{\tilde{z}^{m}_{t}}\|^{2}-\tfrac{1}{2}\|{\tilde{z}^{m}_{t}}-{\tilde{z}^{m,*}_{t-1}}\|^{2}\Big)\\
&=\frac{1}{{\eta^{m}_{t}}}\Big(\tfrac{1}{2}\|z-{\tilde{z}^{m,*}_{t-1}}\|^{2}-\tfrac{1}{2}\|z-{\tilde{z}^{m}_{t}}\|^{2}\Big)-\frac{1}{{\eta^{m}_{t}}}\Big(\tfrac{1}{2}\|{z^{m}_{t}}-{\tilde{z}^{m,*}_{t-1}}\|^{2}+\tfrac{1}{2}\|{z^{m}_{t}}-{\tilde{z}^{m}_{t}}\|^{2}\Big)\\
&\quad+\|{g^{m}_{t}}-{M^{m}_{t}}\|_{*}\cdot\|{z^{m}_{t}}-{\tilde{z}^{m}_{t}}\|.
\end{align}
This finishes the proof of Lemma~\ref{lm:onestep}.
\end{proof}
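For concreteness, the two projected updates analyzed in Lemma~\ref{lm:onestep} can be sketched in a few lines, assuming the Euclidean setup used in this appendix. The ball domain and the oracle interface below are illustrative assumptions, not the setup of our experiments.
\begin{verbatim}
import numpy as np

def project_ball(z, radius=1.0):
    # Euclidean projection onto {z : ||z|| <= radius}.
    nrm = np.linalg.norm(z)
    return z if nrm <= radius else z * (radius / nrm)

def extragradient_step(z_anchor, eta, oracle):
    # z_t = Pi_Z[z*_{t-1} - eta * M_t] with M_t = G~(z*_{t-1}),
    # then ztilde_t = Pi_Z[z*_{t-1} - eta * g_t] with g_t = G~(z_t).
    M_t = oracle(z_anchor)
    z_t = project_ball(z_anchor - eta * M_t)
    g_t = oracle(z_t)
    z_tilde = project_ball(z_anchor - eta * g_t)
    return z_t, z_tilde
\end{verbatim}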
\subsection{Proof of Theorem~\ref{thm:nonsmooth}}
\begin{proof}[Proof of Theorem~\ref{thm:nonsmooth}, Nonsmooth Case]
The proof strategy closely follows that of Bach and Levy \cite{bach2019universal}.

{Step 1.} We apply Lemma~\ref{lm:onestep} and sum over all $m\in[M]$ and $t\in[T]$. Define
\begin{align}
{\xi^{m}_{t}}\vcentcolon=G({z^{m}_{t}})-{g^{m}_{t}}=G({z^{m}_{t}})-\tilde{G}({z^{m}_{t}}).
\end{align}
For all $z\in{\mathcal{Z}}$,
\begin{align}
{\sum_{t=1}^{T}}{\sum_{m=1}^{M}}\big\langle{z^{m}_{t}}-z,G({z^{m}_{t}})\big\rangle&={\sum_{t=1}^{T}}{\sum_{m=1}^{M}}\big\langle{z^{m}_{t}}-z,{\xi^{m}_{t}}\big\rangle+{\sum_{t=1}^{T}}{\sum_{m=1}^{M}}\langle{z^{m}_{t}}-z,{g^{m}_{t}}\rangle\\
&\leq\underbracket{{\sum_{t=1}^{T}}{\sum_{m=1}^{M}}\big\langle{z^{m}_{t}}-z,{\xi^{m}_{t}}\big\rangle}_{{I}(z)}\label{eq:defI}\\
&\quad+\underbracket{{\sum_{t=1}^{T}}{\sum_{m=1}^{M}}\frac{1}{{\eta^{m}_{t}}}\Big(\tfrac{1}{2}\|z-{\tilde{z}^{m,*}_{t-1}}\|^{2}-\tfrac{1}{2}\|z-{\tilde{z}^{m}_{t}}\|^{2}\Big)}_{{II}(z)}\label{eq:defII}\\
&\quad\underbracket{-{\sum_{t=1}^{T}}{\sum_{m=1}^{M}}\frac{1}{{\eta^{m}_{t}}}\Big(\tfrac{1}{2}\|{z^{m}_{t}}-{\tilde{z}^{m,*}_{t-1}}\|^{2}+\tfrac{1}{2}\|{z^{m}_{t}}-{\tilde{z}^{m}_{t}}\|^{2}\Big)}_{{III}}\label{eq:defIII}\\
&\quad+\underbracket{{\sum_{t=1}^{T}}{\sum_{m=1}^{M}}\|{g^{m}_{t}}-{M^{m}_{t}}\|_{*}\cdot\|{z^{m}_{t}}-{\tilde{z}^{m}_{t}}\|}_{{IV}}.\label{eq:defIV}
\end{align}
Now we use Lemma~\ref{lm:gaptoregret} and obtain
\begin{align}\label{eq:boundnonsmooth}
TM\cdot{\mathbb{E}}[{\operatorname*{{DualGap}}}(\bar{z})]&\leq{\mathbb{E}}[\sup_{z\in{\mathcal{Z}}}\{{I}(z)+{II}(z)+{III}+{IV}\}]\\
&\leq{\mathbb{E}}[\sup_{z\in{\mathcal{Z}}}I(z)]+{\mathbb{E}}[\sup_{z\in{\mathcal{Z}}}II(z)]+{\mathbb{E}}[III]+{\mathbb{E}}[IV].
\end{align}
Next we upper bound each term in turn. Steps 2--5 rely heavily on the learning rate scheme. Define
\begin{align}
(Z^{m}_{t})^{2}\vcentcolon=\frac{\|{z^{m}_{t}}-{\tilde{z}^{m,*}_{t-1}}\|^{2}+\|{z^{m}_{t}}-{\tilde{z}^{m}_{t}}\|^{2}}{{5}({\eta^{m}_{t}})^{2}}
\end{align}
for all $t\in[T]$ and $m\in[M]$. By Lemma~\ref{lm:bdimprovement} we know $Z^{m}_{t}\leq G$ almost surely. This is due to
\begin{align}
\|{z^{m}_{t}}-{\tilde{z}^{m,*}_{t-1}}\|^{2}+\|{z^{m}_{t}}-{\tilde{z}^{m}_{t}}\|^{2}\leq\|{z^{m}_{t}}-{\tilde{z}^{m,*}_{t-1}}\|^{2}+2\|{z^{m}_{t}}-{\tilde{z}^{m,*}_{t-1}}\|^{2}+2\|{\tilde{z}^{m,*}_{t-1}}-{\tilde{z}^{m}_{t}}\|^{2}\leq 5G^{2}{(\eta^{m}_{t})^{2}}.
\end{align}
Moreover, for the nonsmooth case ($\alpha=1$), ${\eta^{m}_{t}}$ can be expressed as
\begin{align}
{\eta^{m}_{t}}=\frac{D}{\sqrt{G_{0}^{2}+\sum_{\tau=1}^{t-1}(Z^{m}_{\tau})^{2}}}.
\end{align}
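In code, this learning rate scheme amounts to each worker maintaining a single running accumulator. The following minimal sketch (nonsmooth case, $\alpha=1$) illustrates it; the class and variable names are illustrative only.
\begin{verbatim}
import numpy as np

class LocalAdaptiveStepsize:
    # eta_t = D / sqrt(G0^2 + sum_{tau < t} Z_tau^2), with Z_t^2 computed
    # from the two displacements of the current extragradient step.
    def __init__(self, D, G0):
        self.D, self.accum = D, G0 ** 2

    def eta(self):
        return self.D / np.sqrt(self.accum)

    def update(self, z_t, z_anchor, z_tilde, eta_t):
        # (Z_t)^2 = (||z_t - z*_{t-1}||^2 + ||z_t - ztilde_t||^2) / (5 eta_t^2)
        Z_sq = (np.linalg.norm(z_t - z_anchor) ** 2
                + np.linalg.norm(z_t - z_tilde) ** 2) / (5.0 * eta_t ** 2)
        self.accum += Z_sq
\end{verbatim}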
{Step 2.} Show ${\mathbb{E}}[\sup_{z\in{\mathcal{Z}}}I(z)]=O(\sigma D\sqrt{MT})$. For all $z\in{\mathcal{Z}}$,
\begin{align}
I(z)={\sum_{t=1}^{T}}{\sum_{m=1}^{M}}\langle{z^{m}_{t}}-\tilde{z}^{m}_{0},{\xi^{m}_{t}}\rangle+{\sum_{t=1}^{T}}{\sum_{m=1}^{M}}\langle\tilde{z}^{m}_{0}-z,{\xi^{m}_{t}}\rangle.
\end{align}
The first term is a martingale difference sequence (MDS) and is zero in expectation. For the second term, we use the Cauchy--Schwarz inequality:
\begin{align}
{\mathbb{E}}\bigg[\sup_{z}{\sum_{t=1}^{T}}{\sum_{m=1}^{M}}\langle\tilde{z}^{m}_{0}-z,{\xi^{m}_{t}}\rangle\bigg]&={\mathbb{E}}\bigg[\sup_{z}\Big\langle\tilde{z}_{0}-z,{\sum_{t=1}^{T}}{\sum_{m=1}^{M}}{\xi^{m}_{t}}\Big\rangle\bigg]\\
&\leq{\mathbb{E}}\bigg[\sup_{z}\|\tilde{z}_{0}-z\|\cdot\bigg\|{\sum_{t=1}^{T}}{\sum_{m=1}^{M}}{\xi^{m}_{t}}\bigg\|_{*}\bigg]\\
&\leq D\cdot\sqrt{{\mathbb{E}}\bigg[\bigg\|{\sum_{t=1}^{T}}{\sum_{m=1}^{M}}{\xi^{m}_{t}}\bigg\|_{*}^{2}\bigg]}\leq\sigma D\sqrt{MT}.
\end{align}
In the last inequality, we use the fact that $\{{\xi^{m}_{t}}\}$ is an MDS, so the cross terms vanish and ${\mathbb{E}}\big[\|\sum_{t,m}{\xi^{m}_{t}}\|_{*}^{2}\big]\leq\sigma^{2}MT$. This establishes ${\mathbb{E}}[\sup_{z}I(z)]\leq\sigma D\sqrt{MT}$.
{Step 3.} Show ${\mathbb{E}}[\sup_{z\in{\mathcal{Z}}}II(z)]=O(DG\cdot M\sqrt{T})$. For all $z\in{\mathcal{Z}}$,
\begin{align}
II(z)&={\sum_{t=1}^{T}}{\sum_{m=1}^{M}}{\frac{1}{\eta^{m}_{t}}}\Big(\tfrac{1}{2}\|z-{\tilde{z}^{m,*}_{t-1}}\|^{2}-\tfrac{1}{2}\|z-{\tilde{z}^{m}_{t}}\|^{2}\Big)\\
&={\sum_{m=1}^{M}}{\sum_{t\notin S+1}}{\frac{1}{\eta^{m}_{t}}}\Big(\tfrac{1}{2}\|z-{\tilde{z}^{m,*}_{t-1}}\|^{2}-\tfrac{1}{2}\|z-{\tilde{z}^{m}_{t}}\|^{2}\Big)+{\sum_{m=1}^{M}}{\sum_{t\in S+1}}{\frac{1}{\eta^{m}_{t}}}\Big(\tfrac{1}{2}\|z-{\tilde{z}^{m,*}_{t-1}}\|^{2}-\tfrac{1}{2}\|z-{\tilde{z}^{m}_{t}}\|^{2}\Big)\\
&={\sum_{m=1}^{M}}{\sum_{t\notin S+1}}{\frac{1}{\eta^{m}_{t}}}\Big(\tfrac{1}{2}\|z-{\tilde{z}^{m}_{t-1}}\|^{2}-\tfrac{1}{2}\|z-{\tilde{z}^{m}_{t}}\|^{2}\Big)+{\sum_{m=1}^{M}}{\sum_{t\in S+1}}{\frac{1}{\eta^{m}_{t}}}\Big(\tfrac{1}{2}\|z-{\tilde{z}^{\circ}_{t-1}}\|^{2}-\tfrac{1}{2}\|z-{\tilde{z}^{m}_{t}}\|^{2}\Big)\\
&=\underbracket{{\sum_{m=1}^{M}}{\sum_{t=1}^{T}}{\frac{1}{\eta^{m}_{t}}}\Big(\tfrac{1}{2}\|z-{\tilde{z}^{m}_{t-1}}\|^{2}-\tfrac{1}{2}\|z-{\tilde{z}^{m}_{t}}\|^{2}\Big)}_{A}+\underbracket{{\sum_{m=1}^{M}}{\sum_{t\in S+1}}{\frac{1}{\eta^{m}_{t}}}\Big(\tfrac{1}{2}\|z-{\tilde{z}^{\circ}_{t-1}}\|^{2}-\tfrac{1}{2}\|z-{\tilde{z}^{m}_{t-1}}\|^{2}\Big)}_{B},
\end{align}
where we used the definition of ${\tilde{z}^{m,*}_{t-1}}$ in the two cases $t\in S+1$ and $t\notin S+1$ (Lines~\ref{line:sync} and \ref{line:nosync} of the algorithm). Recall that for $t\in S+1$ we have ${\tilde{z}^{m,*}_{t-1}}={\tilde{z}^{\circ}_{t-1}}={\sum_{m=1}^{M}}w_{m}\cdot{\tilde{z}^{m}_{t-1}}$, and for $t\notin S+1$ we have ${\tilde{z}^{m,*}_{t-1}}={\tilde{z}^{m}_{t-1}}$. We upper bound $A$ and show $B\leq 0$.

For the first term $A$, we use $\tfrac{1}{2}\|z-{\tilde{z}^{m}_{t}}\|^{2}\leq D^{2}$ and then telescope:
\begin{align}
A&={\sum_{m=1}^{M}}\bigg[\frac{1}{\eta^{m}_{1}}\Big(\tfrac{1}{2}\|\tilde{z}^{m}_{0}-z\|^{2}\Big)-\frac{1}{\eta^{m}_{T}}\Big(\tfrac{1}{2}\|\tilde{z}^{m}_{T}-z\|^{2}\Big)+\sum_{t=2}^{T}\Big(\frac{1}{{\eta^{m}_{t}}}-\frac{1}{\eta^{m}_{t-1}}\Big)\Big(\tfrac{1}{2}\|{\tilde{z}^{m}_{t-1}}-z\|^{2}\Big)\bigg]\\
&\leq{\sum_{m=1}^{M}}\bigg[\frac{D^{2}}{\eta^{m}_{1}}+\sum_{t=2}^{T}\Big(\frac{1}{{\eta^{m}_{t}}}-\frac{1}{\eta^{m}_{t-1}}\Big)D^{2}\bigg]\\
&\leq{\sum_{m=1}^{M}}\bigg[\frac{D^{2}}{\eta^{m}_{1}}+\frac{D^{2}}{\eta^{m}_{T}}\bigg].
\end{align}
For each $m$, we have $D^{2}/\eta^{m}_{1}=DG_{0}$. For $D^{2}/\eta^{m}_{T}$ we use the learning rate scheme. Recalling the definition of ${Z^{m}_{t}}$,
\begin{align}
\frac{D^{2}}{\eta^{m}_{T}}=D\sqrt{G_{0}^{2}+\sum_{t=1}^{T-1}(Z^{m}_{t})^{2}}\leq D\sqrt{G_{0}^{2}+G^{2}T}\leq DG_{0}+DG\sqrt{T}.
\end{align}
This implies $A\leq M(2DG_{0}+DG\sqrt{T})=O(DG\cdot M\sqrt{T})$.

For the term $B$, we use the definition of ${\tilde{z}^{\circ}_{t-1}}$ and the weights $\{w_{m}\}$ to show $B\leq 0$. For each $t$, since ${\tilde{z}^{\circ}_{t-1}}$ is the same for all workers,
\begin{align}
{\sum_{m=1}^{M}}{\frac{1}{\eta^{m}_{t}}}\Big(\tfrac{1}{2}\|z-{\tilde{z}^{\circ}_{t-1}}\|^{2}\Big)&=\Big({\sum_{m=1}^{M}}{\frac{1}{\eta^{m}_{t}}}\Big)\Big(\tfrac{1}{2}\|z-{\tilde{z}^{\circ}_{t-1}}\|^{2}\Big)\\
&=\Big({\sum_{m=1}^{M}}{\frac{1}{\eta^{m}_{t}}}\Big)\Big(\tfrac{1}{2}\big\|{\textstyle\sum_{m=1}^{M}{w_{m}}^{1/2}\cdot{w_{m}}^{1/2}(z-{\tilde{z}^{m}_{t-1}})}\big\|^{2}\Big)\\
&\leq\Big({\sum_{m=1}^{M}}{\frac{1}{\eta^{m}_{t}}}\Big)\Big({\sum_{m=1}^{M}}w_{m}\Big)\Big({\sum_{m=1}^{M}}w_{m}\cdot\tfrac{1}{2}\|z-{\tilde{z}^{m}_{t-1}}\|^{2}\Big)\\
&={\sum_{m=1}^{M}}{\frac{1}{\eta^{m}_{t}}}\Big(\tfrac{1}{2}\|z-{\tilde{z}^{m}_{t-1}}\|^{2}\Big),
\end{align}
where the inequality is the Cauchy--Schwarz inequality. In the last equality we use ${\sum_{m=1}^{M}}w_{m}=1$ and $({\sum_{m=1}^{M}}1/{\eta^{m}_{t}})w_{m^{\prime}}=1/\eta^{m^{\prime}}_{t}$ for all $m^{\prime}\in[M]$. This implies $B\leq 0$ and establishes ${\mathbb{E}}[\sup_{z}II(z)]\leq{\mathbb{E}}[\sup_{z}A]=O(DG\cdot M\sqrt{T})$.
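The weight identity used in the last step says that the server weights are proportional to the inverse local learning rates. A minimal sketch of this aggregation step, consistent with that identity (names are illustrative):
\begin{verbatim}
import numpy as np

def server_average(z_tildes, etas):
    # Weights w_m proportional to 1/eta_t^m, normalized to sum to one, so
    # that (sum_m 1/eta_t^m) * w_m = 1/eta_t^m holds as in the proof above.
    inv = 1.0 / np.asarray(etas, dtype=float)
    w = inv / inv.sum()
    return np.average(np.asarray(z_tildes, dtype=float), axis=0, weights=w)
\end{verbatim}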
{Step 4.} Show ${\mathbb{E}}[III]\leq 0$. This holds since every summand of $III$ is nonnegative and enters with a negative sign.
{Step 5.} Show ${\mathbb{E}}[IV]=\tilde{O}(\gamma DG\cdot M\sqrt{T})$, where $\gamma\vcentcolon=G/G_{0}$. By Assumption~\ref{as:bdsg} we have $\|{g^{m}_{t}}-{M^{m}_{t}}\|_{*}\leq 2G$. It holds almost surely that
\begin{align}
IV&\leq 2G{\sum_{m=1}^{M}}{\sum_{t=1}^{T}}\|{z^{m}_{t}}-{\tilde{z}^{m}_{t}}\|\\
&\leq 2G\sqrt{T}\cdot{\sum_{m=1}^{M}}\sqrt{{\sum_{t=1}^{T}}\|{z^{m}_{t}}-{\tilde{z}^{m}_{t}}\|^{2}}\\
&\leq 2G\sqrt{T}\cdot{\sum_{m=1}^{M}}\sqrt{{\sum_{t=1}^{T}}5({\eta^{m}_{t}}{Z^{m}_{t}})^{2}}\\
&\leq 5G\sqrt{T}\cdot D\cdot{\sum_{m=1}^{M}}\sqrt{{\sum_{t=1}^{T}}\frac{({Z^{m}_{t}})^{2}}{G_{0}^{2}+{\sum_{\tau=1}^{t-1}}(Z^{m}_{\tau})^{2}}}\\
&\leq 5GD\sqrt{T}\cdot{\sum_{m=1}^{M}}\sqrt{2+4\gamma^{2}+2\log\Big(\frac{G_{0}^{2}+{\sum_{t=1}^{T-1}}{(Z^{m}_{t})^{2}}}{G_{0}^{2}}\Big)}\\
&\leq 5GD\sqrt{T}\cdot{\sum_{m=1}^{M}}\sqrt{2+4\gamma^{2}+2\log\Big(\frac{G_{0}^{2}+G^{2}T}{G_{0}^{2}}\Big)}\\
&\leq 5GD\sqrt{T}\cdot{\sum_{m=1}^{M}}\sqrt{2+4\gamma^{2}+2\log(1+\gamma^{2}T)}\\
&=\tilde{O}(\gamma GD\cdot M\sqrt{T}),
\end{align}
where the fourth line uses $2\sqrt{5}\leq 5$ and the fifth line follows from Lemma~\ref{lm:boundwithlog} together with $(Z^{m}_{t})^{2}\leq G^{2}$. Finally, we plug the upper bounds for $I$--$IV$ into Eq.~\eqref{eq:boundnonsmooth}:
\begin{align}
TM\cdot{\mathbb{E}}[{\operatorname*{{DualGap}}}(\bar{z})]=\tilde{O}(\gamma DG\cdot M\sqrt{T}+\sigma D\sqrt{MT}).
\end{align}
This finishes the proof of Theorem~\ref{thm:nonsmooth}.
\end{proof}
\subsection{Proof of Theorem~\ref{thm:smooth}}
\begin{proof}[Proof of Theorem~\ref{thm:smooth}, Smooth Case]
The proof strategy closely follows that of Bach and Levy \cite{bach2019universal}. Using the notation of Step~1 in the proof of the nonsmooth case, we have the bound
\begin{align}
TM\cdot{\mathbb{E}}[{\operatorname*{{DualGap}}}(\bar{z})]\leq{\mathbb{E}}[\sup_{z}\{I(z)+II(z)+III+IV\}],
\end{align}
where $I$--$IV$ are defined in Eqs.~\eqref{eq:defI}--\eqref{eq:defIV}. We deal with these terms in a different manner.

For the term $I(z)$ in Eq.~\eqref{eq:defI}, following Step~2 we have ${\mathbb{E}}[\sup_{z}I(z)]=O(\gamma\sigma D\sqrt{MT})$.

Next, we define a stopping time. For each $m\in[M]$, let
\begin{align}
\tau^{*}_{m}\vcentcolon=\max\bigg\{t\in[T]:{\eta^{m}_{t}}\geq\frac{1}{2L}\bigg\}.
\end{align}
Recall our learning rate scheme for the smooth case:
\begin{align}
\eta^{m}_{1}=\frac{D\alpha}{G_{0}},\qquad{\eta^{m}_{t}}=\frac{D\alpha}{\sqrt{G_{0}^{2}+\sum_{\tau=1}^{t-1}(Z^{m}_{\tau})^{2}}}.
\end{align}
For the term $II(z)$ in Eq.~\eqref{eq:defII}, we follow Step~3 and obtain, for all $z\in{\mathcal{Z}}$,
\begin{align}
II(z)\leq{\sum_{m=1}^{M}}\bigg(\frac{D^{2}}{\eta^{m}_{1}}+\frac{D^{2}}{\eta^{m}_{T}}\bigg).
\end{align}
By the definition of $\eta^{m}_{1}$, we have ${\sum_{m=1}^{M}}D^{2}/\eta^{m}_{1}\leq DMG_{0}/\alpha$. For the second term,
\begin{align}
{\sum_{m=1}^{M}}D^{2}/{\eta^{m}_{T}}&={\sum_{m=1}^{M}}\frac{D}{\alpha}\sqrt{G_{0}^{2}+{\sum_{t=1}^{T-1}}{(Z^{m}_{t})^{2}}}\\
&\leq{\sum_{m=1}^{M}}\frac{D}{\alpha}\Bigg(G_{0}+{\sum_{t=1}^{T}}\frac{{(Z^{m}_{t})^{2}}}{\sqrt{G_{0}^{2}+{\sum_{\tau=1}^{t-1}}{(Z^{m}_{\tau})^{2}}}}\Bigg)\\
&=\frac{MDG_{0}}{\alpha}+\underbracket{{\sum_{m=1}^{M}}{\sum_{t=1}^{T}}\frac{1}{\alpha^{2}}{\eta^{m}_{t}}{(Z^{m}_{t})^{2}}}_{\vcentcolon={\mathcal{A}}}.\label{eq:defcA:nonsmooth}
\end{align}
So we have ${\mathbb{E}}[\sup_{z}II(z)]\leq 2\gamma MDG/\alpha+{\mathbb{E}}[{\mathcal{A}}]$.

For the term $III$ in Eq.~\eqref{eq:defIII}, we split it into two parts at $\tau^{*}_{m}$:
\begin{align}
III&=-{\sum_{t=1}^{T}}{\sum_{m=1}^{M}}\frac{1}{{\eta^{m}_{t}}}\Big(\tfrac{1}{2}\|{z^{m}_{t}}-{\tilde{z}^{m,*}_{t-1}}\|^{2}+\tfrac{1}{2}\|{z^{m}_{t}}-{\tilde{z}^{m}_{t}}\|^{2}\Big)\\
&=-{\sum_{t=1}^{T}}{\sum_{m=1}^{M}}\frac{5}{2}{\eta^{m}_{t}}{(Z^{m}_{t})^{2}}\\
&=-\underbracket{{\sum_{m=1}^{M}}{\sum_{t=1}^{\tau^{*}_{m}}}\frac{5}{2}{\eta^{m}_{t}}{(Z^{m}_{t})^{2}}}_{\geq 0}-\underbracket{{\sum_{m=1}^{M}}{\sum_{t=\tau^{*}_{m}+1}^{T}}\frac{5}{2}{\eta^{m}_{t}}{(Z^{m}_{t})^{2}}}_{\vcentcolon={\mathcal{B}}_{\operatorname*{{tail}}}}.\label{eq:defBtail}
\end{align}
For the term $IV$ defined in Eq.~\eqref{eq:defIV}, we first introduce a martingale difference sequence. For all $t\in[T]$ and $m\in[M]$, let
\begin{align}
\zeta^{m}_{t}\vcentcolon=\big({g^{m}_{t}}-G({z^{m}_{t}})\big)+\big({M^{m}_{t}}-G({\tilde{z}^{m,*}_{t-1}})\big).
\end{align}
By the triangle inequality, we have
\begin{align}
IV&={\sum_{t=1}^{T}}{\sum_{m=1}^{M}}\|{g^{m}_{t}}-{M^{m}_{t}}\|_{*}\cdot\|{z^{m}_{t}}-{\tilde{z}^{m}_{t}}\|\\
&\leq\underbracket{{\sum_{t=1}^{T}}{\sum_{m=1}^{M}}\|\zeta^{m}_{t}\|_{*}\cdot\|{z^{m}_{t}}-{\tilde{z}^{m}_{t}}\|}_{\vcentcolon=V}+{\sum_{t=1}^{T}}{\sum_{m=1}^{M}}\|G({z^{m}_{t}})-G({\tilde{z}^{m,*}_{t-1}})\|_{*}\cdot\|{z^{m}_{t}}-{\tilde{z}^{m}_{t}}\|\label{eq:defV}\\
&\leq V+{\sum_{t=1}^{T}}{\sum_{m=1}^{M}}\Big(\frac{L}{2}\|{z^{m}_{t}}-{\tilde{z}^{m,*}_{t-1}}\|^{2}+\frac{L}{2}\|{z^{m}_{t}}-{\tilde{z}^{m}_{t}}\|^{2}\Big)\label{eq:smoothnesskickin}\\
&=V+{\sum_{t=1}^{T}}{\sum_{m=1}^{M}}\frac{5L}{2}{(\eta^{m}_{t})^{2}}{(Z^{m}_{t})^{2}}\\
&=V+\underbracket{{\sum_{m=1}^{M}}{\sum_{t=1}^{\tau^{*}_{m}}}\frac{5L}{2}{(\eta^{m}_{t})^{2}}{(Z^{m}_{t})^{2}}}_{\vcentcolon={\mathcal{C}}_{\operatorname*{{head}}}}+\underbracket{{\sum_{m=1}^{M}}{\sum_{t=\tau^{*}_{m}+1}^{T}}\frac{5L}{2}{(\eta^{m}_{t})^{2}}{(Z^{m}_{t})^{2}}}_{\vcentcolon={\mathcal{C}}_{\operatorname*{{tail}}}}.\label{eq:defcC}
\end{align}
Eq.~\eqref{eq:smoothnesskickin} holds due to smoothness, i.e., for all $z,z^{\prime}\in{\mathcal{Z}}$, $\|G(z)-G(z^{\prime})\|_{*}\leq L\|z-z^{\prime}\|$. Indeed,
\begin{align}
\|G({z^{m}_{t}})-G({\tilde{z}^{m,*}_{t-1}})\|_{*}\cdot\|{z^{m}_{t}}-{\tilde{z}^{m}_{t}}\|&\leq\frac{1}{2L}\|G({z^{m}_{t}})-G({\tilde{z}^{m,*}_{t-1}})\|_{*}^{2}+\frac{L}{2}\|{z^{m}_{t}}-{\tilde{z}^{m}_{t}}\|^{2}\\
&\leq\frac{L}{2}\|{z^{m}_{t}}-{\tilde{z}^{m,*}_{t-1}}\|^{2}+\frac{L}{2}\|{z^{m}_{t}}-{\tilde{z}^{m}_{t}}\|^{2}.
\end{align}
To summarize, we have shown
\begin{align}\label{eq:smoothdecomp_1}
TM\cdot{\mathbb{E}}[{\operatorname*{{DualGap}}}(\bar{z})]&\leq{\mathbb{E}}[\sup_{z}\{I(z)+II(z)+III+IV\}]\\
&\leq O\Big(\gamma\sigma D\sqrt{MT}\Big)+{2\gamma MDG}/{\alpha}\\
&\quad+{\mathbb{E}}[{\mathcal{A}}+{\mathcal{C}}_{\operatorname*{{head}}}+(-{\mathcal{B}}_{\operatorname*{{tail}}}+{\mathcal{C}}_{\operatorname*{{tail}}})+V].
\end{align}
{Step a.} Show ${\mathbb{E}}[{\mathcal{A}}]\leq 8\gamma GDM/\alpha+3DM{\mathcal{V}_{1}(T)}/\alpha$. Recall the definition of ${\mathcal{A}}$ in Eq.~\eqref{eq:defcA:nonsmooth}:
\begin{align}
{\mathcal{A}}&={\sum_{m=1}^{M}}{\sum_{t=1}^{T}}\frac{1}{\alpha^{2}}{\eta^{m}_{t}}{(Z^{m}_{t})^{2}}\\
&=\frac{D}{\alpha}{\sum_{m=1}^{M}}{\sum_{t=1}^{T}}\frac{{(Z^{m}_{t})^{2}}}{\sqrt{G_{0}^{2}+{\sum_{\tau=1}^{t-1}}{(Z^{m}_{\tau})^{2}}}}\\
&\leq\frac{D}{\alpha}{\sum_{m=1}^{M}}\Bigg(5\gamma G+3\sqrt{G_{0}^{2}+{\sum_{t=1}^{T-1}}{(Z^{m}_{t})^{2}}}\Bigg)\\
&\leq\frac{D}{\alpha}{\sum_{m=1}^{M}}\Bigg(8\gamma G+3\sqrt{{\sum_{t=1}^{T-1}}{(Z^{m}_{t})^{2}}}\Bigg).
\end{align}
Note that by Lemma~\ref{lm:bdimprovement} we know ${(Z^{m}_{t})^{2}}\leq(\|{g^{m}_{t}}\|_{*}^{2}+\|{M^{m}_{t}}\|_{*}^{2})/5\leq\|{g^{m}_{t}}\|_{*}^{2}+\|{M^{m}_{t}}\|_{*}^{2}$. Recall the definition of ${\mathcal{V}_{m}(T)}$ in Eq.~\eqref{eq:defcVmT}. By the symmetry of the algorithm over all workers, we have ${\mathcal{V}_{1}(T)}={\mathcal{V}_{m}(T)}$ for all $m\in[M]$. Then
\begin{align}
{\mathbb{E}}[{\mathcal{A}}]&\leq 8\gamma DMG/\alpha+\frac{3D}{\alpha}{\sum_{m=1}^{M}}{\mathbb{E}}\Bigg[\sqrt{{\sum_{t=1}^{T-1}}{(Z^{m}_{t})^{2}}}\Bigg]\\
&\leq 8\gamma DMG/\alpha+\frac{3D}{\alpha}{\sum_{m=1}^{M}}{\mathbb{E}}\Bigg[\sqrt{{\sum_{t=1}^{T-1}}\big(\|{g^{m}_{t}}\|_{*}^{2}+\|{M^{m}_{t}}\|_{*}^{2}\big)}\Bigg]\\
&=8\gamma DMG/\alpha+\frac{3D}{\alpha}{\sum_{m=1}^{M}}{\mathcal{V}_{m}(T)}=8\gamma DMG/\alpha+3DM{\mathcal{V}_{1}(T)}/\alpha.
\end{align}
By our choice of $\alpha$, we have ${\mathbb{E}}[{\mathcal{A}}]=O(\gamma DM^{3/2}G+DM^{3/2}{\mathcal{V}_{1}(T)})$.

{Step b.} Show ${\mathbb{E}}[{\mathcal{C}}_{\operatorname*{{head}}}]={\tilde{O}}(\gamma^{2}LD^{2})$. Recall the definition of ${\mathcal{C}}_{\operatorname*{{head}}}$ in Eq.~\eqref{eq:defcC}:
\begin{align}
{\mathcal{C}}_{\operatorname*{{head}}}&={\sum_{m=1}^{M}}{\sum_{t=1}^{\tau^{*}_{m}}}\frac{5L}{2}{(\eta^{m}_{t})^{2}}{(Z^{m}_{t})^{2}}\\
&=\frac{5\alpha^{2}D^{2}L}{2}{\sum_{m=1}^{M}}{\sum_{t=1}^{\tau^{*}_{m}}}\frac{{(Z^{m}_{t})^{2}}}{G_{0}^{2}+{\sum_{\tau=1}^{t-1}}{(Z^{m}_{\tau})^{2}}}\\
&\leq\frac{5\alpha^{2}D^{2}L}{2}{\sum_{m=1}^{M}}\bigg(6\gamma^{2}+2\log\Big(\frac{G_{0}^{2}+{\sum_{t=1}^{\tau^{*}_{m}-1}}{(Z^{m}_{t})^{2}}}{G_{0}^{2}}\Big)\bigg)\\
&=\frac{5\alpha^{2}D^{2}L}{2}{\sum_{m=1}^{M}}\bigg(6\gamma^{2}+2\log\Big(\frac{\alpha^{2}D^{2}}{G_{0}^{2}(\eta^{m}_{\tau^{*}_{m}})^{2}}\Big)\bigg)\\
&\leq\frac{5\alpha^{2}D^{2}LM}{2}\bigg(6\gamma^{2}+4\log\Big(\frac{2\alpha DL}{G_{0}}\Big)\bigg).\label{eq:endofcChead}
\end{align}
The last inequality is due to the definition of $\tau^{*}_{m}$, which gives $1/\eta^{m}_{\tau^{*}_{m}}\leq 2L$. By our choice of $\alpha$, we have ${\mathbb{E}}[{\mathcal{C}}_{\operatorname*{{head}}}]={\tilde{O}}(\gamma^{2}LD^{2})$.
{Step c.} Show ${\mathcal{C}}_{\operatorname*{{tail}}}-{\mathcal{B}}_{\operatorname*{{tail}}}\leq 0$. Recall that ${\mathcal{B}}_{\operatorname*{{tail}}}$ is defined in Eq.~\eqref{eq:defBtail}. By definition,
\begin{align}
{\mathcal{C}}_{\operatorname*{{tail}}}-{\mathcal{B}}_{\operatorname*{{tail}}}={\sum_{m=1}^{M}}{\sum_{t=\tau^{*}_{m}+1}^{T}}\Big(\frac{5L}{2}{\eta^{m}_{t}}-\frac{5}{2}\Big){\eta^{m}_{t}}{(Z^{m}_{t})^{2}}.
\end{align}
We show $\frac{5L}{2}{\eta^{m}_{t}}-\frac{5}{2}\leq 0$ for all $t\geq\tau^{*}_{m}+1$ and $m\in[M]$. Indeed, for all $t\geq\tau^{*}_{m}+1$ we have ${\eta^{m}_{t}}\leq 1/(2L)$, and so $\frac{5L}{2}{\eta^{m}_{t}}-\frac{5}{2}\leq(5/4)-(5/2)=-5/4$. Summarizing, we have shown ${\mathcal{C}}_{\operatorname*{{tail}}}-{\mathcal{B}}_{\operatorname*{{tail}}}\leq 0$.

{Step d.} Show ${\mathbb{E}}[V]={\tilde{O}}(\gamma\sigma D\sqrt{MT})$. Recall the definition of $V$ in Eq.~\eqref{eq:defV}, and note that ${\mathbb{E}}[\|\zeta^{m}_{t}\|_{*}^{2}]\leq 4\sigma^{2}$. Then
\begin{align}
{\mathbb{E}}[V]&={\mathbb{E}}\Bigg[{\sum_{t=1}^{T}}{\sum_{m=1}^{M}}\|\zeta^{m}_{t}\|_{*}\cdot\|{z^{m}_{t}}-{\tilde{z}^{m}_{t}}\|\Bigg]\\
&\leq{\mathbb{E}}\Bigg[\sqrt{{\sum_{t=1}^{T}}{\sum_{m=1}^{M}}\|\zeta^{m}_{t}\|_{*}^{2}}\Bigg]\cdot{\mathbb{E}}\Bigg[\sqrt{{\sum_{t=1}^{T}}{\sum_{m=1}^{M}}\|{z^{m}_{t}}-{\tilde{z}^{m}_{t}}\|^{2}}\Bigg]\\
&\leq\sqrt{{\sum_{t=1}^{T}}{\sum_{m=1}^{M}}{\mathbb{E}}\big[\|\zeta^{m}_{t}\|_{*}^{2}\big]}\cdot{\mathbb{E}}\Bigg[\sqrt{{\sum_{t=1}^{T}}{\sum_{m=1}^{M}}\|{z^{m}_{t}}-{\tilde{z}^{m}_{t}}\|^{2}}\Bigg]\\
&\leq 2\sigma\sqrt{MT}\cdot{\mathbb{E}}\Bigg[\sqrt{{\sum_{m=1}^{M}}{\sum_{t=1}^{T}}\|{z^{m}_{t}}-{\tilde{z}^{m}_{t}}\|^{2}}\Bigg]\\
&\leq 2\sigma\sqrt{MT}\cdot{\mathbb{E}}\Bigg[\sqrt{{\sum_{m=1}^{M}}{\sum_{t=1}^{T}}\big(\|{z^{m}_{t}}-{\tilde{z}^{m,*}_{t-1}}\|^{2}+\|{z^{m}_{t}}-{\tilde{z}^{m}_{t}}\|^{2}\big)}\Bigg]\\
&=2\sigma\sqrt{MT}\cdot{\mathbb{E}}\Bigg[\sqrt{{\sum_{m=1}^{M}}{\sum_{t=1}^{T}}5{(\eta^{m}_{t})^{2}}{(Z^{m}_{t})^{2}}}\Bigg]\\
&=2\sqrt{5}\cdot\sigma\sqrt{MT}\cdot D\alpha\cdot{\mathbb{E}}\Bigg[\sqrt{{\sum_{m=1}^{M}}{\sum_{t=1}^{T}}\frac{{(Z^{m}_{t})^{2}}}{G_{0}^{2}+{\sum_{\tau=1}^{t-1}}{(Z^{m}_{\tau})^{2}}}}\Bigg]\\
&\leq 6\sigma\sqrt{MT}\cdot D\alpha\cdot{\mathbb{E}}\Bigg[\sqrt{{\sum_{m=1}^{M}}\bigg(6\gamma^{2}+2\log\Big(\frac{G_{0}^{2}+{\sum_{t=1}^{T-1}}{(Z^{m}_{t})^{2}}}{G_{0}^{2}}\Big)\bigg)}\Bigg]\\
&\leq 6\sigma\sqrt{MT}\cdot D\alpha\cdot\sqrt{M\big(6\gamma^{2}+2\log(1+\gamma^{2}T)\big)}.\label{eq:boundVend}
\end{align}
By our choice of $\alpha$, we have ${\mathbb{E}}[V]={\tilde{O}}(\gamma\sigma D\sqrt{MT})$.
Continuing from Eq.~\eqref{eq:smoothdecomp_1}, we have
\begin{align}
&TM\cdot{\mathbb{E}}[{\operatorname*{{DualGap}}}(\bar{z})]\\
&\leq O\Big(\gamma\sigma D\sqrt{MT}\Big)+{2\gamma MDG}/{\alpha}+{\mathbb{E}}[{\mathcal{A}}+{\mathcal{C}}_{\operatorname*{{head}}}+(-{\mathcal{B}}_{\operatorname*{{tail}}}+{\mathcal{C}}_{\operatorname*{{tail}}})+V]\\
&={\tilde{O}}\Big(\gamma\sigma D\sqrt{MT}+\underbracket{\gamma DM^{3/2}G+DM^{3/2}{\mathcal{V}_{1}(T)}}_{{\mathcal{A}}}+\underbracket{\gamma^{2}LD^{2}}_{{\mathcal{C}}_{\operatorname*{{head}}}}+\underbracket{\gamma\sigma D\sqrt{MT}}_{V}\Big).
\end{align}
This finishes the proof of Theorem~\ref{thm:smooth}.
\end{proof}

\begin{remark}[Getting rid of ${\mathcal{V}_{1}(T)}$]
We can also use the free parameter $\alpha$ (the base learning rate) to obtain the following near-linear speed-up result.

\begin{theorem}[Smooth Case, free of ${\mathcal{V}_{1}(T)}$]\label{thm:smooth_noV}
Assume \ref{as:bddomain}, \ref{as:bdsg}, \ref{as:bdvar} and \ref{as:smooth}, and let $\sigma,D,G,L$ be defined therein. For any $\epsilon\in(0,\frac{1}{2})$, let $\bar{z}={\operatorname*{{LocalAdaSEG}}}(G_{0},D;\,K,M,R;\,T^{\epsilon}/\sqrt{M})$. If $T\geq M^{1/(2\epsilon)}$, then
$${\mathbb{E}}[\operatorname*{DualGap}(\bar{z})]=\tilde{O}\bigg(\frac{\sigma D}{{\sqrt{MT^{1-2\epsilon}}}}+\frac{\gamma^{2}LD^{2}}{T^{1-2\epsilon}}+\frac{LD^{2}M}{T}+\frac{\gamma GDM^{3/2}}{T^{1+\epsilon}}\bigg)\,,$$
where $\tilde{O}$ hides absolute constants, logarithmic factors of the problem parameters, and logarithmic factors of $T$.
\end{theorem}

\begin{proof}[Proof of Theorem~\ref{thm:smooth_noV}]
We decompose the term $II$ in Eq.~\eqref{eq:defII} in a different way. Recall that in Step~3 we have shown, for all $z\in{\mathcal{Z}}$, $II(z)\leq{\sum_{m=1}^{M}}\big(\frac{D^{2}}{\eta^{m}_{1}}+\frac{D^{2}}{\eta^{m}_{T}}\big)$. For the second term,
\begin{align}
{\sum_{m=1}^{M}}D^{2}/{\eta^{m}_{T}}&={\sum_{m=1}^{M}}\frac{D}{\alpha}\sqrt{G_{0}^{2}+{\sum_{t=1}^{T-1}}{(Z^{m}_{t})^{2}}}\\
&\leq{\sum_{m=1}^{M}}\frac{D}{\alpha}\Bigg(G_{0}+{\sum_{t=1}^{T}}\frac{{(Z^{m}_{t})^{2}}}{\sqrt{G_{0}^{2}+{\sum_{\tau=1}^{t-1}}{(Z^{m}_{\tau})^{2}}}}\Bigg)\\
&=\frac{MDG_{0}}{\alpha}+{\sum_{m=1}^{M}}{\sum_{t=1}^{T}}\frac{1}{\alpha^{2}}{\eta^{m}_{t}}{(Z^{m}_{t})^{2}}\\
&\leq\frac{\gamma MDG}{\alpha}+\underbracket{{\sum_{m=1}^{M}}{\sum_{t=1}^{\tau^{*}_{m}}}\frac{1}{\alpha^{2}}{\eta^{m}_{t}}{(Z^{m}_{t})^{2}}}_{\vcentcolon={\mathcal{A}}_{\operatorname*{{head}}}}+\underbracket{{\sum_{m=1}^{M}}{\sum_{t=\tau^{*}_{m}+1}^{T}}\frac{1}{\alpha^{2}}{\eta^{m}_{t}}{(Z^{m}_{t})^{2}}}_{\vcentcolon={\mathcal{A}}_{\operatorname*{{tail}}}}.\label{eq:defcA}
\end{align}
So we have ${\mathbb{E}}[\sup_{z}II(z)]\leq 2\gamma MDG/\alpha+{\mathbb{E}}[{\mathcal{A}}_{\operatorname*{{head}}}+{\mathcal{A}}_{\operatorname*{{tail}}}]$. Then, following the proof in the smooth case, we have
\begin{align}
TM\cdot{\mathbb{E}}[{\operatorname*{{DualGap}}}(\bar{z})]&\leq{\mathbb{E}}[\sup_{z}\{I(z)+II(z)+III+IV\}]\\
&\leq O\Big(\gamma\sigma D\sqrt{MT}\Big)+{2\gamma MDG}/{\alpha}\\
&\quad+{\mathbb{E}}[{\mathcal{A}}_{\operatorname*{{head}}}+{\mathcal{C}}_{\operatorname*{{head}}}+({\mathcal{A}}_{\operatorname*{{tail}}}-{\mathcal{B}}_{\operatorname*{{tail}}}+{\mathcal{C}}_{\operatorname*{{tail}}})+V].
\end{align}
Recall our choice of $\alpha=T^{\epsilon}/\sqrt{M}$.
First, we show ${\mathbb{E}}[{\mathcal{A}}_{\operatorname*{{head}}}]\leq 5\gamma GDM^{3/2}T^{-\epsilon}+6LD^{2}M$. Recall its definition in Eq.~\eqref{eq:defcA}:
\begin{align}
{\mathcal{A}}_{\operatorname*{{head}}}&={\sum_{m=1}^{M}}{\sum_{t=1}^{\tau^{*}_{m}}}\frac{1}{\alpha^{2}}{\eta^{m}_{t}}{(Z^{m}_{t})^{2}}\\
&=\frac{D}{\alpha}{\sum_{m=1}^{M}}{\sum_{t=1}^{\tau^{*}_{m}}}\frac{{(Z^{m}_{t})^{2}}}{\sqrt{G_{0}^{2}+{\sum_{\tau=1}^{t-1}}{(Z^{m}_{\tau})^{2}}}}\\
&\leq\frac{D}{\alpha}{\sum_{m=1}^{M}}\Bigg(5\gamma G+3\sqrt{G_{0}^{2}+{\sum_{t=1}^{\tau^{*}_{m}-1}}{(Z^{m}_{t})^{2}}}\Bigg)\\
&=\frac{D}{\alpha}{\sum_{m=1}^{M}}\Big(5\gamma G+\frac{3D\alpha}{\eta^{m}_{\tau^{*}_{m}}}\Big)\\
&\leq\frac{D}{\alpha}{\sum_{m=1}^{M}}\Big(5\gamma G+6\alpha LD\Big)=\frac{5\gamma GDM}{\alpha}+6LD^{2}M.
\end{align}
By our choice of $\alpha$, we have ${\mathbb{E}}[{\mathcal{A}}_{\operatorname*{{head}}}]\leq 5\gamma GDM^{3/2}T^{-\epsilon}+6LD^{2}M$.

For ${\mathcal{C}}_{\operatorname*{{head}}}$ defined in Eq.~\eqref{eq:defcC}, following Eq.~\eqref{eq:endofcChead} we have ${\mathbb{E}}[{\mathcal{C}}_{\operatorname*{{head}}}]={\tilde{O}}(\gamma^{2}LD^{2}T^{2\epsilon})$.

Next, we show ${\mathcal{A}}_{\operatorname*{{tail}}}+{\mathcal{C}}_{\operatorname*{{tail}}}-{\mathcal{B}}_{\operatorname*{{tail}}}\leq 0$. Recall that ${\mathcal{B}}_{\operatorname*{{tail}}}$ is defined in Eq.~\eqref{eq:defBtail}. By definition,
\begin{align}
{\mathcal{A}}_{\operatorname*{{tail}}}+{\mathcal{C}}_{\operatorname*{{tail}}}-{\mathcal{B}}_{\operatorname*{{tail}}}={\sum_{m=1}^{M}}{\sum_{t=\tau^{*}_{m}+1}^{T}}\Big(\frac{1}{\alpha^{2}}+\frac{5L}{2}{\eta^{m}_{t}}-\frac{5}{2}\Big){\eta^{m}_{t}}{(Z^{m}_{t})^{2}}.
\end{align}
We show $\frac{1}{\alpha^{2}}+\frac{5L}{2}{\eta^{m}_{t}}-\frac{5}{2}\leq 0$ for all $t\geq\tau^{*}_{m}+1$ and $m\in[M]$. Note that
\begin{align}
T\geq M^{1/(2\epsilon)}\implies\alpha^{2}=(T^{\epsilon}/\sqrt{M})^{2}\geq 1,
\end{align}
and that for all $t\geq\tau^{*}_{m}+1$ we have ${\eta^{m}_{t}}\leq 1/(2L)$. And so $\frac{1}{\alpha^{2}}+\frac{5L}{2}{\eta^{m}_{t}}-\frac{5}{2}\leq 1+(5/4)-(5/2)=-1/4$. Summarizing, we have shown ${\mathcal{A}}_{\operatorname*{{tail}}}+{\mathcal{C}}_{\operatorname*{{tail}}}-{\mathcal{B}}_{\operatorname*{{tail}}}\leq 0$.

For $V$ defined in Eq.~\eqref{eq:defV}, following Eq.~\eqref{eq:boundVend}, we have ${\mathbb{E}}[V]={\tilde{O}}(\gamma\sigma D\sqrt{MT^{1+2\epsilon}})$.

Putting everything together, we have
\begin{align}
&TM\cdot{\mathbb{E}}[{\operatorname*{{DualGap}}}(\bar{z})]\\
&\leq O\Big(\gamma\sigma D\sqrt{MT}\Big)+{2\gamma MDG}/{\alpha}+{\mathbb{E}}[{\mathcal{A}}_{\operatorname*{{head}}}+{\mathcal{C}}_{\operatorname*{{head}}}+({\mathcal{A}}_{\operatorname*{{tail}}}-{\mathcal{B}}_{\operatorname*{{tail}}}+{\mathcal{C}}_{\operatorname*{{tail}}})+V]\\
&={\tilde{O}}\Big(\gamma\sigma D\sqrt{MT}+\underbracket{\gamma GDM^{3/2}T^{-\epsilon}+LD^{2}M}_{{\mathcal{A}}_{\operatorname*{{head}}}}+\underbracket{\gamma^{2}LD^{2}T^{2\epsilon}}_{{\mathcal{C}}_{\operatorname*{{head}}}}+\underbracket{\gamma\sigma D\sqrt{MT^{1+2\epsilon}}}_{V}\Big).
\end{align}
This finishes the proof of Theorem~\ref{thm:smooth_noV}.
\end{proof}
\end{remark}
\section{Helper Lemmas}

\begin{lemma}\label{lm:boundwithlog}
For any non-negative real numbers $a_{1},\dots,a_{n}\in[0,a]$ and $a_{0}>0$, it holds that
\begin{align}
\sum_{i=1}^{n}\frac{a_{i}}{a_{0}+\sum_{j=1}^{i-1}a_{j}}\leq 2+\frac{4a}{a_{0}}+2\log\Big(1+\sum_{i=1}^{n-1}a_{i}/a_{0}\Big).
\end{align}
\end{lemma}

\begin{proof}[Proof of Lemma~\ref{lm:boundwithlog}]
See Lemma A.2 of \cite{bach2019universal}.
\end{proof}

\begin{lemma}
For any non-negative numbers $a_{1},\dots,a_{n}\in[0,a]$ and $a_{0}>0$, it holds that
\begin{align}
\sqrt{a_{0}+\sum_{i=1}^{n-1}a_{i}}-\sqrt{a_{0}}\leq\sum_{i=1}^{n}\frac{a_{i}}{\sqrt{a_{0}+\sum_{j=1}^{i-1}a_{j}}}\leq\frac{2a}{a_{0}}+3\sqrt{a}+3\sqrt{a_{0}+\sum_{i=1}^{n-1}a_{i}}.
\end{align}
\end{lemma}

\begin{proof}
See Lemma A.1 of \cite{bach2019universal}.
\end{proof}
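Both summation lemmas are easy to check numerically. The following sketch draws an arbitrary random nonnegative sequence and verifies the stated bounds; it is illustrative only.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, a0, a = 1000, 1.0, 5.0
ai = rng.uniform(0.0, a, size=n)                           # a_1,...,a_n in [0,a]
prefix = a0 + np.concatenate(([0.0], np.cumsum(ai)[:-1]))  # a0 + sum_{j<i} a_j

lhs_log = np.sum(ai / prefix)
rhs_log = 2 + 4 * a / a0 + 2 * np.log(1 + np.sum(ai[:-1]) / a0)
assert lhs_log <= rhs_log

lhs_sqrt = np.sum(ai / np.sqrt(prefix))
lower = np.sqrt(a0 + np.sum(ai[:-1])) - np.sqrt(a0)
upper = 2 * a / a0 + 3 * np.sqrt(a) + 3 * np.sqrt(a0 + np.sum(ai[:-1]))
assert lower <= lhs_sqrt <= upper
\end{verbatim}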
\begin{lemma}\label{lm:gaptoregret}
For any sequence $\{z_{t}\}_{t=1}^{T}\subset{\mathcal{Z}}^{o}$, let $\bar{z}$ denote its mean. It holds that
\begin{align}
T\cdot{\operatorname*{{DualGap}}}(\bar{z})\leq\sup_{z\in{\mathcal{Z}}}{\sum_{t=1}^{T}}\big\langle z_{t}-z,G(z_{t})\big\rangle.
\end{align}
\end{lemma}

\begin{proof}[Proof of Lemma~\ref{lm:gaptoregret}]
This lemma depends on the convexity-concavity of the saddle function $F$. Denote $\bar{z}\vcentcolon=[\bar{x},\bar{y}]$ and $z_{t}\vcentcolon=[x_{t},y_{t}]$, and note that $\bar{x}=(1/T){\sum_{t=1}^{T}}x_{t}$ and $\bar{y}=(1/T){\sum_{t=1}^{T}}y_{t}$. By the definition of the duality gap and the convexity-concavity of $F$ (via Jensen's inequality),
\begin{align}
{\operatorname*{{DualGap}}}(\bar{z})&=\sup_{x\in{\mathcal{X}},y\in{\mathcal{Y}}}F(\bar{x},y)-F(x,\bar{y})\\
&\leq\sup_{x\in{\mathcal{X}},y\in{\mathcal{Y}}}\frac{1}{T}{\sum_{t=1}^{T}}F(x_{t},y)-\frac{1}{T}{\sum_{t=1}^{T}}F(x,y_{t}).
\end{align}
Let $G(z_{t})=G(x_{t},y_{t})\vcentcolon=[d_{x,t},-d_{y,t}]$. Since $d_{x,t}\in\partial_{x}F(x_{t},y_{t})$, by convexity in $x$ we have, for all $x\in{\mathcal{X}}$,
\begin{align}
F(x_{t},y_{t})+\langle d_{x,t},x-x_{t}\rangle\leq F(x,y_{t}).
\end{align}
Similarly, by concavity in $y$, for all $y\in{\mathcal{Y}}$ it holds that
\begin{align}
F(x_{t},y_{t})+\langle d_{y,t},y-y_{t}\rangle\geq F(x_{t},y).
\end{align}
Subtracting the two inequalities gives $F(x_{t},y)-F(x,y_{t})\leq\langle d_{x,t},x_{t}-x\rangle-\langle d_{y,t},y_{t}-y\rangle$, and summing over $t$ yields
\begin{align}
T\cdot{\operatorname*{{DualGap}}}(\bar{z})&\leq\sup_{x\in{\mathcal{X}},y\in{\mathcal{Y}}}{\sum_{t=1}^{T}}\langle d_{x,t},x_{t}-x\rangle-\langle d_{y,t},y_{t}-y\rangle\\
&=\sup_{z\in{\mathcal{Z}}}{\sum_{t=1}^{T}}\langle G(z_{t}),z_{t}-z\rangle.
\end{align}
This completes the proof of Lemma~\ref{lm:gaptoregret}.
\end{proof}
## Additional Experiments

We implement our algorithm and conduct all experiments on a computer with an Intel Core i5 CPU @ 3.20GHz, 8GB RAM, and a GeForce RTX 3090 GPU. The deep learning framework is PyTorch 1.8.1, running under Python 3.7 in a Conda environment on Ubuntu 20.04. The Python library requirements are specified in the configuration file provided in the supplemental materials. Due to hardware limitations, we emulate the distributed environment on a single GPU by creating object instances that play the roles of multiple clients and a central server.
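Concretely, the emulation can be organized as in the following minimal sketch (an illustration only, not the actual implementation; all names are hypothetical): each client object keeps its own model replica, and a server routine periodically averages them.

```python
import copy
import torch

class Client:
    """One simulated worker holding its own model replica."""
    def __init__(self, model):
        self.model = copy.deepcopy(model)

def synchronize(clients):
    """Server step: average the client replicas and broadcast the result."""
    with torch.no_grad():
        for group in zip(*(c.model.parameters() for c in clients)):
            mean = torch.stack([p.data for p in group]).mean(dim=0)
            for p in group:
                p.data.copy_(mean)
```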
### Stochastic bilinear minimax problem

Figure 5: (a) Residual comparison between the synchronous and asynchronous cases over communication rounds; (b) residual comparison between ${\operatorname*{{LocalAdaSEG}}}$ (asynchronous and synchronous versions) and SEGDA-MKR over samples; (c) residual comparison between ${\operatorname*{{LocalAdaSEG}}}$ (asynchronous and synchronous versions) and SEGDA-MKR over wallclock time; (d) comparison of $V_{t}$, $\sqrt{t}$, and $t^{2/5}$ against the update $t$.
First, to validate the performance of our proposed method, we evaluate an asynchronous variant of ${\operatorname*{{LocalAdaSEG}}}$ on the stochastic bilinear minimax problem. Specifically, we vary the number of local iterations $K$ across $M=4$ workers with noise level $\sigma=0.1$. In the case 'Asynch-50', the local iteration count $K$ ranges over $\{50,45,40,35\}$ across the workers, whereas $K=50$ is adopted for all workers in 'Synch-50'. Similarly, in the case 'Asynch-100', $K$ ranges over $\{100,90,80,70\}$, while $K$ is fixed to $100$ for each worker in 'Synch-100'. As can be seen from Figure 5(a), both the asynchronous and synchronous cases converge to an optimal point after several communication rounds; asynchronicity only slows the convergence with respect to the number of communication rounds.

Secondly, we compare our
${\operatorname*{{LocalAdaSEG}}}$ (both the asynchronous and synchronous versions) with SEGDA run for $MKR$ iterations (SEGDA-MKR) on the bilinear minimax problem (see Section 6.1). Specifically, we choose $M=4$ workers, noise level $\sigma=0.1$, and local iteration $K=50$ in the synchronous case, with $K$ ranging over $\{50,45,40,35,30\}$ in the asynchronous case. For a fair comparison, we run vanilla SEGDA for $M\times K\times R$ iterations on one worker with batch size $1$, where $M$ denotes the number of workers, $K$ the number of local iterations, and $R$ the number of rounds. The experimental results are illustrated in Figure 5(b). As can be seen, the performance of SEGDA is unstable and worse than that of ${\operatorname*{{LocalAdaSEG}}}$ (asyn. and syn.). A likely reason is that the batch size of $1$ for the stochastic gradient in each iteration results in a large variance of the stochastic gradient estimate. Because several workers participate in the optimization in ${\operatorname*{{LocalAdaSEG}}}$, it uses many more samples per iteration than SEGDA-MKR; the resulting smaller stochastic variance explains the stable performance of ${\operatorname*{{LocalAdaSEG}}}$.

Thirdly, we
also conduct experiments to validate the performance in terms of wallclock time on the bilinear minimax problem, with $M=4$ workers and noise level $\sigma=0.1$. We record the wallclock time to reach a target residual value for synchronous ${\operatorname*{{LocalAdaSEG}}}$ ($K=50$), the asynchronous version ($K$ ranging over $\{50,45,40,35,30\}$), and the single-thread version. The results are illustrated in Figure 5(c). As can be seen, compared with the single-thread version, our proposed method speeds up convergence. With respect to wallclock time, asynchronous ${\operatorname*{{LocalAdaSEG}}}$ (LocalAdaSEG-Asyn) is slightly better than synchronous ${\operatorname*{{LocalAdaSEG}}}$ (LocalAdaSEG-Syn). Since the tested bilinear minimax problem with noise level $\sigma=0.1$ is very simple (the time cost is around 20 seconds), the differences in time cost between the synchronous and asynchronous cases are not significant.

Fourthly, we conduct experiments in the
bilinear case to evaluate the quantity $V_{t}$ as a function of the update $t$. Here, we adopt the same experimental settings as in Section 6.1, with noise level $\sigma=0.1$ and $M=4$ workers. As can be seen from Figure 5(d), $V_{t}$ is indeed much smaller than the dominant variance term.
Figure 6: Subfigures (a)-(b) and (c)-(d) show the results of WGAN trained with ${\operatorname*{{LocalAdaSEG}}}$ and existing optimizers. We plot FID and IS against the number of iterations and communications, respectively.
Figure 7: Subfigures (a)-(b) and (c)-(d) show the results of Federated WGAN trained with ${\operatorname*{{LocalAdaSEG}}}$ and existing optimizers. We plot FID and IS against the number of iterations and communications, respectively.
### Wasserstein GAN
Inspired by game theory, generative adversarial networks (GANs) have shown great performance in many generative tasks that replicate rich real-world content, such as images, text, and music. A GAN is composed of two models, a generator and a discriminator, which compete with each other to improve performance on a specific task. In this experiment, we aim to train a digit image generator on the MNIST dataset.

Training a GAN is challenging due to slow convergence, training instability, or even failure to converge. arjovsky2017towards, arjovsky2017wasserstein proposed to use the Wasserstein distance as the GAN loss function to provide stable and fast training. To enforce the Lipschitz constraint on the discriminator, we adopt WGAN with gradient penalty as our experimental model. The objective can be described as
$\displaystyle\mathop{\min}_{G}\mathop{\max}_{D}\bigg{\{}\mathop{\mathbb{E}}_{x\sim\mathbb{P}_{r}}[D(x)]-\mathop{\mathbb{E}}_{z\sim\mathbb{P}_{z}}\big{[}D\big{(}G(z)\big{)}\big{]}-\lambda\mathop{\mathbb{E}}_{\hat{x}\sim\mathbb{P}_{\hat{x}}}\big{[}\big{(}\|\nabla_{\hat{x}}D(\hat{x})\|_{2}-1\big{)}^{2}\big{]}\bigg{\}}\,,$
where $G$ and $D$ denote the generator and the discriminator, respectively, $\mathbb{P}_{r}$ is the data distribution, and $\mathbb{P}_{z}$ is the noise distribution (uniform or Gaussian). The point $\hat{x}\sim\mathbb{P}_{\hat{x}}$ is sampled uniformly along straight lines between pairs of points drawn from the real data distribution $\mathbb{P}_{r}$ and the generator distribution $\mathbb{P}_{\tilde{x}}$, i.e., $\hat{x}\vcentcolon=\epsilon x+(1-\epsilon)\tilde{x}$ with $\epsilon\sim U[0,1]$.
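In code, the penalty term of the objective above can be computed as in the following minimal PyTorch sketch (ours; this is the standard WGAN-GP penalty, with variable names of our choosing):

```python
import torch

def gradient_penalty(D, real, fake, lam=10.0):
    """Penalty term lam * E[(||grad D(x_hat)||_2 - 1)^2] from the objective above."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad, = torch.autograd.grad(D(x_hat).sum(), x_hat, create_graph=True)
    return lam * ((grad.flatten(1).norm(2, dim=1) - 1) ** 2).mean()
```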
{\bf DCGAN.} We implement WGAN with the DCGAN architecture, which improves the original GAN with convolutional layers. Specifically, the generator consists of $3$ blocks containing deconvolutional layers, batch normalization, and activations. The whole generator can be represented as the sequential layers \emph{{Linear, BN, ReLU, DeConv, BN, ReLU, DeConv, BN, ReLU, DeConv, Tanh}}, where \emph{Linear}, \emph{BN}, and \emph{DeConv} denote the linear, batch normalization, and deconvolutional layers, respectively, and \emph{ReLU} and \emph{Tanh} are activation functions. Similarly, the discriminator also contains $3$ blocks and can be described as the sequential layers \emph{{Conv, LReLU, Conv, LReLU, Conv, LReLU, Linear}}, where \emph{Conv} and \emph{LReLU} denote the convolutional layer and the Leaky-ReLU activation function, respectively.
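A minimal PyTorch rendering of these layer lists for $28\times 28$ MNIST images might look as follows (a sketch; the channel widths and kernel sizes are our assumptions, since the text above does not fix them):

```python
import torch.nn as nn

# Generator: {Linear, BN, ReLU, DeConv, BN, ReLU, DeConv, BN, ReLU, DeConv, Tanh}
class Generator(nn.Module):
    def __init__(self, z_dim=100):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(z_dim, 128 * 7 * 7),
                                nn.BatchNorm1d(128 * 7 * 7), nn.ReLU())
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 3, 1, 1), nn.Tanh())

    def forward(self, z):  # 7x7 -> 14x14 -> 28x28
        return self.deconv(self.fc(z).view(-1, 128, 7, 7))

# Discriminator: {Conv, LReLU, Conv, LReLU, Conv, LReLU, Linear}
class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 3, 1, 1), nn.LeakyReLU(0.2))
        self.fc = nn.Linear(128 * 7 * 7, 1)

    def forward(self, x):  # 28x28 -> 14x14 -> 7x7
        return self.fc(self.conv(x).flatten(1))
```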
{\bf Inception score (IS).} The inception score (IS) is proposed to evaluate the performance of a GAN with an inception model. IS measures a GAN from two aspects simultaneously: first, the GAN should output a high diversity of images; second, the generated images should contain clear objects. Specifically, we feed the generated images $x$ into a well-trained inception model to obtain the output $y$. Then, IS can be calculated by the following equation:
$\displaystyle\mathrm{IS}\vcentcolon=\exp\bigg{(}\mathop{\mathbb{E}}_{x\sim\mathbb{P}_{g}}\big{[}D_{\mathrm{KL}}\big{(}p(y\,|\,x)\|p(y)\big{)}\big{]}\bigg{)},$
where $\mathbb{P}_{g}$ is the generator model distribution. Essentially, IS computes the mutual information $I(y;x)=H(y)-H(y\,|\,x)$, where $H(\cdot)$ denotes the entropy. The larger $H(y)$, the more diverse the generated images; a lower $H(y\,|\,x)$ implies that the input $x$ belongs to one class with higher probability. In summary, IS is bounded by $1\leq\mathrm{IS}\leq 1000$ (the number of inception classes), and a higher IS implies better performance of a GAN.
{\bf Fr\'{e}chet Inception Distance (FID).} Although IS can measure the diversity and quality of the generated images, it has limitations, such as ignoring the true data distribution and failing to measure model generalization. FID is an improved metric for GANs that uses both training samples and generated samples to measure performance. Specifically, we feed the generated samples and the training samples into an inception model to extract feature vectors; usually, we take the logits before the last sigmoid activation as the feature vector, with dimension $2048$. Essentially, FID is the Wasserstein metric between two multidimensional Gaussian distributions: $\mathcal{N}(\mu_{g},\Sigma_{g})$, the distribution of feature vectors from generated samples, and $\mathcal{N}(\mu_{r},\Sigma_{r})$, the distribution of feature vectors from the training samples. It is calculated as
$\displaystyle\mathrm{FID}\vcentcolon=\|\mu_{r}-\mu_{g}\|^{2}+\mathrm{tr}\big{(}\Sigma_{r}+\Sigma_{g}-2(\Sigma_{r}\Sigma_{g})^{1/2}\big{)},$
where $\mathrm{tr}(\cdot)$ denotes the trace of a matrix. The lower the FID, the better the performance of a GAN.
Figure 8: Subfigures (a)-(b) show the FID and IS against communication rounds of WGAN trained over the MNIST dataset under different Dirichlet distributions.
|
Functional Relations on Anisotropic Potts Models
Functional Relations on Anisotropic Potts Models:
from Biggs Formula to the Tetrahedron Equation
Boris BYCHKOV ab, Anton KAZAKOV abc and Dmitry TALALAEV abc
B. Bychkov, A. Kazakov and D. Talalaev
a) Faculty of Mathematics, National Research University Higher School of
Economics,
a) Usacheva 6, 119048, Moscow, Russia<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
b) Centre of Integrable Systems, P.G. Demidov Yaroslavl State University,
b) Sovetskaya 14, 150003, Yaroslavl, Russia
c) Faculty of Mechanics and Mathematics, Moscow State University, 119991
Moscow, Russia
Received July 06, 2020, in final form March 26, 2021; Published online April
07, 2021
We explore several types of functional relations on the family of multivariate
Tutte polynomials: the Biggs formula and the star-triangle ($Y-\Delta$)
transformation at the critical point $n=2$. We deduce the theorem of
Matiyasevich and its inverse from the Biggs formula, and we apply this
relation to construct the recursion on the parameter $n$. We provide two
different proofs of the Zamolodchikov tetrahedron equation satisfied by the
star-triangle transformation in the case of the $n=2$ multivariate Tutte polynomial; we extend the latter to the case of valency-2 points and show that the Biggs formula and the star-triangle transformation commute.
tetrahedron equation; local Yang–Baxter equation; Biggs formula; Potts model;
Ising model
82B20; 16T25; 05C31
## 1 Introduction
The theory of polynomial invariants of graphs in its current state uses many
methods and tools of integrable statistical mechanics. This phenomenon
demonstrates the inherent intrusion of mathematical physics methods into
topology and combinatorics. In this paper, the main subject of research is
functional relations in the family of polynomial invariants for framed graphs,
in particular for multivariate Tutte polynomials [22], their specializations
for Potts models, multivariate chromatic and flow polynomials.
The flow generating function is closely related to the problems of electrical
networks on a graph over a finite field. Each flow defines a discrete harmonic
function, and non-zero flows can be interpreted as harmonic functions with a
completely non-zero gradient. Specifically, we discuss the complete flow polynomial, which is a linearization of the flow polynomial and, in particular, corresponds to a point of the compactification of the parameter space for the Biggs model.
One of the central tools of the paper is the Biggs formula (Lemma 2.13), which
connects $n$-Potts models for different parameter values as a convolution with
some weight over all edge subgraphs. In particular, we offer a new proof of
the theorem of Matiyasevich (Theorem 2.18) about the connection between the flow and chromatic polynomials, as a special case of the Biggs formula. This interpretation allows
us to construct an inverse statement of the theorem of Matiyasevich. Moreover,
using the connection between the flow and the complete flow polynomial, we
obtain a shift of parameters in the Potts models (Theorem 2.25).
The fundamental type of correspondences on the space of the aforementioned
invariants is the star-triangle type relations (also known as “wye-delta”
relations) and the associated deletion-contraction relations. In a sense, the
kinship between these relations is analogous to the role of the tetrahedron
equation in the local Yang–Baxter equation. Despite the fact that the
invariance of the Ising model with respect to the star-triangle transformation
is very well known [2], we have not found in the literature a full proof of
the fact that the action of this transformation on the weights of an
anisotropic system is a solution of the tetrahedron equation that corresponds
to the orthogonal solution of the local Yang–Baxter equation: Theorem 4.1
(parts of this statement were mentioned in [15, 17, 21]). We offer here two
new proofs of this fact. We find them instructive due to their anticipated
relation to the theory of positive orthogonal grassmannians [14].
The identification of the Potts model and the multivariate Tutte polynomial
allows us to assert the existence of a critical point for the parameter $n$ in
the family of Tutte polynomials. Namely, for $n=2$, this model has a groupoid
symmetry generated by a family of transformations defined by the trigonometric
solution of the Zamolodchikov tetrahedron equation. We extend the star-
triangle transformation for the graphs of lower valency in Section 5. In this
way, we obtain a $14$-term correspondence. This extension commutes with the Biggs formula. We should mention the relation of this subject to the theory
of cluster algebras. We suppose that the multivariate Tutte polynomial on
standard graphs at the critical point $n=2$ corresponds to the orthogonal
version of the Lusztig variety [4] in the case of the unipotent group and the
electrical variety [12] for the symplectic group.
### 1.1 Organization of the paper
In Section 2, we concentrate our attention on the Biggs formalism in the Ising
and Potts type models. We define the main recurrence relations and also
identify the Tutte polynomial with the Potts model. Then, we apply the Biggs
formula to the proof of theorem of Matiyasevich and propose its inverse
version. We examine in details the recursion of the Potts model with respect
to the parameter $n$.
In Section 3, we show that, if $n=2$, then the Potts model is invariant with
respect to the star-triangle transformation given by the orthogonal solution
for the local Yang–Baxter equation and the corresponding solution for the
Zamolodchikov tetrahedron equation. In Section 4 we provide two different
proofs for this fact. Both of them are interesting in the context of cluster
variables on the space of Ising models. The first proof operates with the
space of boundary measurement matrices and the second with the matrix of
boundary partition function.
In Section 5, we show that the Biggs formula considered as a correspondence on
the set of multivariate Tutte polynomials commutes with the star-triangle
transformation.
## 2 Biggs interaction models
### 2.1 $\boldsymbol{n}$-Potts models and Tutte polynomial
We define the anisotropic Biggs model (interaction model) on an undirected
graph $G$ with the set of edges $E$ and the set of vertices $V$ (a graph can
have multiple edges and loops) as follows:
* •
a state $\sigma$ is a map $\sigma\colon V\rightarrow R$, where $R$ is a commutative ring with a unit,
* •
the weight of the state $\sigma$ is defined by the formula
$\displaystyle W_{G}(\sigma):=\prod\limits_{e\in E}i_{e}(\delta(e)),$
where $\delta(e)=\sigma(v)-\sigma(w)$ and the edge $e$ connects the vertices $v$ and $w$; the functions $i_{e}\colon R\rightarrow\mathbb{C}$ are even: $\forall b\in R\colon i_{e}(b)=i_{e}(-b)$,
* •
the partition function $Z(G)$ of a model is the following sum
$\displaystyle Z(G)=\sum_{\sigma}W_{G}(\sigma),$
where the summation is taken over all possible states $\sigma$.
Let us consider the simplest Biggs interaction models:
###### Definition 2.1.
If $R\cong\mathbb{Z}_{n}$ and the functions $i_{e}$ are given as
$\displaystyle\begin{cases}i_{e}(0)=\alpha_{e},\\\ i_{e}(a)=\beta_{e},&\forall a\neq 0\in R,\end{cases}$
we call such a model the anisotropic $n$-Potts model with the set of parameters $\alpha_{e}$ and $\beta_{e}$, and we denote it by $M(G;i_{e})$.
In addition, if the maps $i_{e}=i$ do not depend on the edge, then we call such a model the isotropic $n$-Potts model (or just the $n$-Potts model) with parameters $\alpha$ and $\beta$. We denote it by $M(G;i)$; we also use the notation $M(G;\alpha,\beta)$.
###### Remark 2.2.
In the case $R\cong\mathbb{Z}_{2}$, $i(0)=\exp\big{(}\frac{J}{kT}\big{)}$ and $i(1)=\exp\big{(}{-}\frac{J}{kT}\big{)}$, this model can be identified with the classic isotropic Ising model [2]. Therefore, we will call any anisotropic or isotropic $2$-Potts model simply the Ising model.
Consider an anisotropic $n$-Potts model $M(G;i_{e})$; we denote its partition function by $Z_{n}(G)$. In addition, if $n=2$, we omit the index $2$ and write just $Z(G)$.
###### Remark 2.4.
For the empty graph, we define the partition function of any $n$-Potts model
to be equal to $1$, and for a disjoint set of $m$ points to be equal to
$n^{m}$.
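These definitions are easy to experiment with: the following Python snippet (our illustration, not from the original text) computes $Z_{n}(G)$ of the isotropic model by brute force over all $n^{|V|}$ states.

```python
from itertools import product

def potts_Z(edges, V, n, alpha, beta):
    """Partition function of M(G; alpha, beta); edges over vertices 0..V-1."""
    Z = 0
    for sigma in product(range(n), repeat=V):
        w = 1
        for u, v in edges:
            w *= alpha if sigma[u] == sigma[v] else beta  # i(0) vs i(a), a != 0
        Z += w
    return Z

# Example: the triangle K_3 with n = 2, alpha = 2, beta = 1.
print(potts_Z([(0, 1), (1, 2), (0, 2)], 3, 2, 2.0, 1.0))
```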
Now we will consider the combinatorial properties of the anisotropic $n$-Potts
models (compare with [3, Theorem 3.2]):
###### Theorem 2.5.
Consider an anisotropic $n$-Potts model $M(G;i_{e})$ and its partition function $Z_{n}(G)$.
* •
Let the graph $G$ be the disjoint union of the graphs $G_{1}$ and $G_{2}$. Then
$\displaystyle Z_{n}(G)=Z_{n}(G_{1})Z_{n}(G_{2}).$
Figure 1: The joining of two graphs by the vertex $v$.
* •
Let the graph $G$ be the joining of the graphs $G_{1}$ and $G_{2}$ by the vertex $v$. Then
$\displaystyle nZ_{n}(G)=Z_{n}(G_{1})Z_{n}(G_{2}).$
* •
Consider a graph $G$ and its edge $e$, where $e$ is neither a bridge nor a
loop. Consider the graph $G/e$ obtained by contraction of $e$, and the graph
$G\backslash e$ obtained by deletion of $e$. Then the following formula holds
$\displaystyle
Z_{n}(G)=(\alpha_{e}-\beta_{e})Z_{n}(G/e)+\beta_{e}Z_{n}(G\backslash e).$
###### Proof.
1\. The statement directly follows from Definition 2.1.
2\. Let us rewrite the partition function $Z_{n}(G)$:
$\displaystyle
Z_{n}(G)=\sum\limits_{k\in\\{0,\ldots,n-1\\}}\sum\limits_{\sigma\colon\sigma(v)=k}W_{G}(\sigma).$
Notice that adding $1$ to every value of $\sigma$ changes no difference, $i(\sigma(v)-\sigma(w))=i(\sigma(v)+1-\sigma(w)-1)$; therefore for any $i\neq j$ we have the following identity
$\displaystyle\sum\limits_{\sigma\colon\sigma(v)=i}W_{G}(\sigma)=\sum\limits_{\sigma\colon\sigma(v)=j}W_{G}(\sigma).$
Hence we obtain
$\displaystyle
Z_{n}(G)=n\sum\limits_{\sigma\colon\sigma(v)=i}W_{G}(\sigma),\qquad\forall\,i\in\\{0,\ldots,n-1\\}.$
Let us introduce the partial partition functions $X_{k}\vcentcolon=\sum\limits_{\sigma\colon\sigma(v)=k}W_{G_{1}}(\sigma)$ and $Y_{k}\vcentcolon=\sum\limits_{\sigma\colon\sigma(v)=k}W_{G_{2}}(\sigma)$; then we can rewrite
$\displaystyle Z_{n}(G_{1})Z_{n}(G_{2})=\bigg{(}\sum_{k}\sum_{\sigma\colon\sigma(v)=k}W_{G_{1}}(\sigma)\bigg{)}\bigg{(}\sum_{k}\sum_{\sigma\colon\sigma(v)=k}W_{G_{2}}(\sigma)\bigg{)}=(X_{0}+X_{1}+\dots+X_{n-1})(Y_{0}+Y_{1}+\dots+Y_{n-1})=n^{2}X_{0}Y_{0}=n(X_{0}Y_{0}+X_{1}Y_{1}+\dots+X_{n-1}Y_{n-1})=n\sum_{k}\sum_{\sigma\colon\sigma(v)=k}W_{G}(\sigma)=nZ_{n}(G).$
3\. Let the edge $e$ be neither a bridge nor a loop. Denote by $X$ the contribution to the partition function of all states in which the values at the ends of $e$ coincide, and by $Y$ the remaining part of the partition function (coming from states with distinct values at the ends of $e$). Then
$\displaystyle Z_{n}(G)=\alpha_{e}X+\beta_{e}Y,\qquad Z_{n}(G\backslash
e)=X+Y,\qquad Z_{n}(G/e)=X$
and we obtain the statement. ∎
Now let us recall the definition of the Tutte polynomial of a graph $G$.
###### Definition 2.6.
Let us define the Tutte polynomial $T_{G}(x,y)$ by the deletion-contraction
recurrence relation:
1. 1.
If an edge $e$ is neither a bridge nor a loop, then $T_{G}(x,y)=T_{G\backslash
e}(x,y)+T_{G/e}(x,y)$.
2. 2.
If the graph $G$ consists of $i$ bridges and $j$ loops, then
$T_{G}(x,y)=x^{i}y^{j}$.
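The deletion-contraction recurrence translates directly into code; the sketch below (ours, with helper names of our choosing) computes $T_{G}(x,y)$ as a dictionary of coefficients for small graphs.

```python
def contract(edges, e):
    """Identify the endpoints of e = (u, v) in the edge list (v merges into u)."""
    u, v = e
    return [(u if a == v else a, u if b == v else b) for a, b in edges]

def components(edges, verts):
    """Number of connected components, counting isolated vertices."""
    parent = {x: x for x in verts}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in edges:
        parent[find(a)] = find(b)
    return len({find(x) for x in verts})

def tutte(edges, verts):
    """Tutte polynomial as {(i, j): coeff} for the coefficient of x^i y^j."""
    if not edges:
        return {(0, 0): 1}
    (u, v), rest = edges[0], edges[1:]
    if u == v:  # loop: T_G = y * T_{G\e}
        return {(i, j + 1): c for (i, j), c in tutte(rest, verts).items()}
    if components(rest, verts) > components(edges, verts):  # bridge: T_G = x * T_{G/e}
        sub = tutte(contract(rest, (u, v)), verts - {v})
        return {(i + 1, j): c for (i, j), c in sub.items()}
    out = tutte(rest, verts)                                          # deletion
    for k, c in tutte(contract(rest, (u, v)), verts - {v}).items():   # contraction
        out[k] = out.get(k, 0) + c
    return out

# Example: the triangle K_3 gives T = x^2 + x + y.
print(tutte([(0, 1), (1, 2), (0, 2)], {0, 1, 2}))
```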
###### Theorem 2.7 ([9, 23]).
Let $F(G)$ be a function of a graph $G$ satisfying the following conditions:
* •
$F(G)=1$, if $G$ consists of only one vertex.
* •
$F(G)=aF(G\backslash e)+bF(G/e)$, if an edge $e$ is neither a bridge nor a loop.
* •
$F(G)=F(G_{1})F(G_{2})$, if either $G=G_{1}\sqcup G_{2}$ or the intersection
$G_{1}\cap G_{2}$ consists of only one vertex.
Then
$\displaystyle
F(G)=a^{c(G)}b^{r(G)}T_{G}\bigg{(}\frac{F(K_{2})}{b},\frac{F(L)}{a}\bigg{)},$
where $K_{2}$ is the complete graph on two vertices, $L$ is a loop, $r(G)=v(G)-k(G)$ is the rank of $G$, and $c(G)=e(G)-r(G)$ is the corank. Here and below, $e(G)$ is the number of edges in the graph $G$.
Now we are ready to connect the partition function $Z_{n}(G)$ of the isotropic
$n$-Potts model $M(G;\alpha,\beta)$ with the Tutte polynomial $T_{G}(x,y)$ of
the same graph $G$ using a well-known trick (for instance see [3]). Let us
consider the weighted partition function
$\displaystyle\frac{Z_{n}(G)}{n^{k(G)}},$
where $k(G)$ is the number of connected components in the graph $G$. It is
easy to verify that the weighted partition function $\frac{Z_{n}(G)}{n^{k(G)}}$ satisfies the conditions of Theorem 2.7; therefore, the following theorem holds:
###### Theorem 2.8 (Theorem 3.2 [3]).
The partition function $Z_{n}(G)$ of the $n$-Potts model $M(G;\alpha,\beta)$
coincides with the Tutte polynomial of a graph $G$ up to a multiplicative
factor
$\displaystyle
Z_{n}(G)=n^{k(G)}\beta^{c(G)}(\alpha-\beta)^{r(G)}T_{G}\bigg{(}\frac{\alpha+(n-1)\beta}{\alpha-\beta},\frac{\alpha}{\beta}\bigg{)}.$
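This identity is easy to confirm numerically on a small example, reusing the `potts_Z` and `tutte` sketches above (again, our illustration):

```python
edges, V, n, alpha, beta = [(0, 1), (1, 2), (0, 2)], 3, 3, 2.0, 1.0
k, r = 1, V - 1              # k(G) = 1 component, r(G) = v(G) - k(G)
c = len(edges) - r           # corank c(G) = e(G) - r(G)
x = (alpha + (n - 1) * beta) / (alpha - beta)
y = alpha / beta
T_val = sum(coef * x**i * y**j for (i, j), coef in tutte(edges, {0, 1, 2}).items())
assert abs(potts_Z(edges, V, n, alpha, beta)
           - n**k * beta**c * (alpha - beta)**r * T_val) < 1e-9
```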
###### Example 2.9 (the bad coloring polynomial [9]).
Consider a graph $G$ and all possible colorings of $V(G)$ in $n$ colors.
Define the bad coloring polynomial as
$\displaystyle B_{G}(n,t)=\sum\limits_{j}b_{j}(G,n)t^{j},$
where $b_{j}(G,n)$ is the number of colorings with exactly $j$ bad edges (we call an edge “bad” if its ends have the same color). It is easy to see that $B_{G}(n,t)=Z_{n}(G)$, where $Z_{n}(G)$ is the partition function of the $n$-Potts model $M(G;t,1)$. Hence, using Theorem 2.8 we immediately obtain
$\displaystyle
B_{G}(n,t+1)=n^{k(G)}t^{r(G)}T_{G}\bigg{(}\frac{t+n}{t},t+1\bigg{)}.$
### 2.2 $\boldsymbol{n}$-Potts models and the theorem of Matiyasevich
The connection between the $n$-Potts models and Tutte polynomials allows us to
give a simple proof of the theorem of Matiyasevich about the chromatic and
flow polynomials, but first we introduce a few definitions.
###### Definition 2.10.
A graph $A$ is called a spanning subgraph of a graph $G$, if graphs $G$ and
$A$ share the same set of vertices: $V(G)=V(A)$, and the set of edges $E(A)$
is the subset of the set of edges $E(G)$.
###### Definition 2.11.
A graph $A$ is called an edge induced subgraph (Figure 2) of a graph $G$, if
$A$ is induced by a subset of the set $E(G)$. Every edge induced subgraph $A$
of a graph $G$ could be completed to the spanning subgraph $A^{\prime}$ by
adding all the vertices of $G$ which are not contained in the subgraph $A$.
###### Definition 2.12.
For an $n$-Potts model $M(G;i)$ we introduce the normalized partition function
as follows
$\displaystyle\widetilde{Z}_{n}(G)=\frac{Z_{n}(G)}{n^{v(G)}}.$
Figure 2: Edge induced and spanning subgraphs.
We start with the following lemma, which is a generalization of the high
temperature formula for the Ising model:
###### Lemma 2.13 (Biggs formula [5]).
Let us consider two $n$-Potts models $M_{1}(G;i_{1})$ with parameters
$\alpha_{1}$, $\beta_{1}$ and $M_{2}(G;i_{2})$ with parameters $\alpha_{2}$,
$\beta_{2}$. Then the normalized partition function $\widetilde{Z}^{1}_{n}(G)$ of the
first model could be expressed in terms of the normalized partition functions
of the models of all edge induced subgraphs of the second model:
$\displaystyle\widetilde{Z}^{1}_{n}(G)=q^{e(G)}\sum\limits_{A\subseteq
G}\left(\frac{p}{q}\right)^{e(A)}\widetilde{Z}^{2}_{n}(A),$
where $p=\frac{\alpha_{1}-\beta_{1}}{\alpha_{2}-\beta_{2}}$, and
$q=\frac{\alpha_{2}\beta_{1}-\alpha_{1}\beta_{2}}{\alpha_{2}-\beta_{2}}$
$\big{(}$we assume that $\widetilde{Z}^{i}_{n}(\varnothing)=1\big{)}$.
###### Proof.
Let us notice that $i_{1}=p\cdot i_{2}+q$, therefore
$\displaystyle Z^{1}_{n}(G)=\sum\limits_{\sigma\colon V(G)\rightarrow\mathbb{Z}_{n}}\prod_{e}i_{1}(\delta(e))=\sum\limits_{\sigma\colon V(G)\rightarrow\mathbb{Z}_{n}}\prod_{e}(pi_{2}(\delta(e))+q)=\sum\limits_{\sigma\colon V(G)\rightarrow\mathbb{Z}_{n}}\sum\limits_{A\subseteq G}p^{e(A)}q^{e(G)-e(A)}\prod_{e\in E(A)}i_{2}(\delta(e)).$
In order to complete the proof we consider the term corresponding to a fixed $A$:
$\displaystyle\sum\limits_{\sigma\colon V(G)\rightarrow\mathbb{Z}_{n}}p^{e(A)}q^{e(G)-e(A)}\prod_{e\in E(A)}i_{2}(\delta(e))=q^{e(G)}\bigg{(}\frac{p}{q}\bigg{)}^{e(A)}\sum\limits_{\sigma\colon V(G)\rightarrow\mathbb{Z}_{n}}\prod_{e\in E(A)}i_{2}(\delta(e))=q^{e(G)}\bigg{(}\frac{p}{q}\bigg{)}^{e(A)}n^{v(G)-v(A)}\sum\limits_{\sigma\colon V(A)\rightarrow\mathbb{Z}_{n}}\prod_{e\in E(A)}i_{2}(\delta(e))=n^{v(G)}q^{e(G)}\bigg{(}\frac{p}{q}\bigg{)}^{e(A)}\widetilde{Z}^{2}_{n}(A).$
Summing over all $A\subseteq G$ and dividing both sides by $n^{v(G)}$ yields the claim. ∎
###### Proposition 2.14.
Consider two anisotropic $n$-Potts models $M_{1}\big{(}G;i^{1}_{e}\big{)}$ and
$M_{2}\big{(}G;i^{2}_{e}\big{)}$. In the same fashion we can obtain
$\displaystyle\widetilde{Z}^{1}_{n}(G)=\prod_{e\in
G}q_{e}\sum\limits_{A\subseteq G}\prod_{e\in
A}\frac{p_{e}}{q_{e}}\widetilde{Z}^{2}_{n}(A),$ (2.1)
where $p_{e}=\frac{\alpha^{1}_{e}-\beta^{1}_{e}}{\alpha^{2}_{e}-\beta^{2}_{e}}$ and $q_{e}=\frac{\alpha^{2}_{e}\beta^{1}_{e}-\alpha^{1}_{e}\beta^{2}_{e}}{\alpha^{2}_{e}-\beta^{2}_{e}}$.
We consider further the chromatic and flow polynomials, first recalling some well-known definitions.
###### Definition 2.15.
A coloring of the set of vertices $V(G)$ is said to be proper if the ends of
each edge have different colors.
###### Definition 2.16.
Let $G$ be a graph with the edge set $E(G)$ and the vertex set $V(G)$, let us
choose a fixed edge orientation on $G$. Then, a function $f\colon
E\rightarrow\mathbb{Z}_{n}$ is called a nowhere-zero $n$-flow if the following
conditions hold:
* •
$\forall e\in E(G)\colon f(e)\neq 0$,
* •
$\forall v\in V(G)\colon\sum\limits_{e\in M^{+}(v)}f(e)=\sum\limits_{e\in
M^{-}(v)}f(e)$, where $M^{+}(v)$ (respectively, $M^{-}(v)$) is the set of edges directed to (respectively, from) $v$.
Next, we formulate one of the classic results of graph theory which can be
found for instance in [9]:
###### Theorem 2.17.
The number of proper colorings of a graph $G$ in $n$ colors is the following
polynomial $($called chromatic polynomial$)$ in the variable $n$:
$\displaystyle\chi_{G}(n)=(-1)^{v(G)-k(G)}n^{k(G)}T_{G}(1-n,0).$
The number of nowhere-zero $n$-flows of a graph $G$ is independent of the choice of orientation and is given by the following polynomial $($called the flow polynomial$)$ in the variable $n$:
$\displaystyle C_{G}(n)=(-1)^{e(G)+v(G)+k(G)}T_{G}(0,1-n).$
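As a quick illustration (ours), take $G=K_{3}$, the triangle, for which $T_{K_{3}}(x,y)=x^{2}+x+y$, $v(G)=e(G)=3$ and $k(G)=1$. Then
$\displaystyle\chi_{K_{3}}(n)=(-1)^{3-1}n\,T_{K_{3}}(1-n,0)=n\big{[}(1-n)^{2}+(1-n)\big{]}=n(n-1)(n-2),\qquad C_{K_{3}}(n)=(-1)^{3+3+1}T_{K_{3}}(0,1-n)=-(1-n)=n-1,$
matching the $n(n-1)(n-2)$ proper colorings of the triangle and the $n-1$ nowhere-zero $n$-flows on a $3$-cycle.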
Now we are ready to formulate and prove the theorem of Matiyasevich:
###### Theorem 2.18 (Matiyasevich [20]).
Let us consider a graph $G$, its chromatic polynomial $\chi_{G}$ and its flow
polynomial $C_{G}$, then
$\displaystyle\chi_{G}(n)=\frac{(n-1)^{e(G)}}{n^{e(G)-v(G)}}\sum\limits_{A\subseteq
G}\frac{C_{A}(n)}{(1-n)^{e(A)}},$
where the summation goes through all spanning subgraphs $A$.
###### Proof.
Let us consider two $n$-Potts models with the special parameters: the model
$M_{1}(G;i_{1})$ with the parameters $\alpha_{1}=0$, $\beta_{1}=1$ and the
model $M_{2}(G;i_{2})$ with the parameters $\alpha_{2}=1-n$, $\beta_{2}=1$. By
Theorem 2.8 we could express the partition function of the first model in
terms of the chromatic polynomial
$\displaystyle\chi_{G}(n)=(-1)^{v(G)-k(G)}n^{k(G)}T_{G}(1-n,0)=\frac{(-1)^{v(G)-k(G)-r(G)}n^{k(G)}Z^{1}_{n}(G)}{n^{k(G)}}=(-1)^{v(G)-k(G)-r(G)}n^{v(G)}\widetilde{Z}^{1}_{n}(G)=n^{v(G)}\widetilde{Z}^{1}_{n}(G),$
since $r(G)=v(G)-k(G)$.
So we have
$\displaystyle\widetilde{Z}^{1}_{n}(G)=\frac{\chi_{G}(n)}{n^{v(G)}}.$
Analogously, we express the partition function of the second model in terms of
the flow polynomial
$\displaystyle C_{G}(n)=(-1)^{e(G)+v(G)+k(G)}T_{G}(0,1-n)=\frac{(-1)^{e(G)+v(G)+k(G)-r(G)}Z^{2}_{n}(G)}{n^{k(G)}n^{v(G)-k(G)}}=(-1)^{e(G)}\widetilde{Z}^{2}_{n}(G).$ (2.2)
So we have
$\displaystyle\widetilde{Z}^{2}_{n}(G)=(-1)^{e(G)}C_{G}(n).$ (2.3)
Then by Lemma 2.13 after the substitutions (2.2) and (2.3) we obtain
$\displaystyle\frac{\chi_{G}(n)}{n^{v(G)}}=\frac{(n-1)^{e(G)}}{n^{e(G)}}\sum_{A^{\prime}\subseteq
G}\frac{C_{A^{\prime}}(n)}{(1-n)^{e(A^{\prime})}},$
where the summation goes through all edge induced subgraphs $A^{\prime}$.
We finish the proof by noticing that an edge induced subgraph differs from the corresponding spanning subgraph only by a set of isolated vertices. Therefore, we can complete each edge induced subgraph to its corresponding spanning subgraph and then replace the summation over all edge induced subgraphs by the summation over all spanning subgraphs, because the value of each flow polynomial $C_{A^{\prime}}$ remains the same. Finally, we obtain
$\displaystyle\chi_{G}(n)=\frac{(n-1)^{e(G)}}{n^{e(G)-v(G)}}\sum_{A\subseteq G}\frac{C_{A}(n)}{(1-n)^{e(A)}}.$ ∎
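As a numeric sanity check (ours), the theorem can be confirmed on $K_{3}$ by reusing the `tutte` sketch above and summing over all edge subsets:

```python
from itertools import combinations

def eval_tutte(edges, verts, x, y):
    return sum(c * x**i * y**j for (i, j), c in tutte(list(edges), verts).items())

def chromatic(edges, verts, n):
    v, k = len(verts), components(list(edges), verts)
    return (-1)**(v - k) * n**k * eval_tutte(edges, verts, 1 - n, 0)

def flow(edges, verts, n):
    v, k = len(verts), components(list(edges), verts)
    return (-1)**(len(edges) + v + k) * eval_tutte(edges, verts, 0, 1 - n)

E, V, n = [(0, 1), (1, 2), (0, 2)], {0, 1, 2}, 4
rhs = sum(flow(A, V, n) / (1 - n)**len(A)              # sum over spanning subgraphs
          for m in range(len(E) + 1) for A in combinations(E, m))
rhs *= (n - 1)**len(E) / n**(len(E) - len(V))
assert abs(chromatic(E, V, n) - rhs) < 1e-9
```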
Note that we could produce a series of statements that look like Theorem 2.18:
###### Theorem 2.19.
Let us consider a graph $G$, then we can obtain the following formulas
$\displaystyle
n^{k(G)}\beta_{1}^{c(G)}(\alpha_{1}-\beta_{1})^{r(G)}T_{G}\bigg{(}\frac{\alpha_{1}+(n-1)\beta_{1}}{\alpha_{1}-\beta_{1}},\frac{\alpha_{1}}{\beta_{1}}\bigg{)}=q^{e(G)}\sum_{A\subseteq
G}\bigg{(}\frac{p}{q}\bigg{)}^{e(A)}\chi_{A}(n),$ (2.4)
where $p=-\alpha_{1}+\beta_{1}$, $q=\alpha_{1}$, and the summation (here and
below) goes through all spanning subgraphs $A$,
$\displaystyle C_{G}(n)=(n-1)^{e(G)}\sum_{A\subseteq G}\frac{n^{e(A)-v(G)}}{(1-n)^{e(A)}}\chi_{A}(n),$ (2.5)
$\displaystyle n^{k(G)-v(G)}\beta_{1}^{c(G)}(\alpha_{1}-\beta_{1})^{r(G)}T_{G}\bigg{(}\frac{\alpha_{1}+(n-1)\beta_{1}}{\alpha_{1}-\beta_{1}},\frac{\alpha_{1}}{\beta_{1}}\bigg{)}=q_{1}^{e(G)}\sum_{A}\bigg{(}\frac{p_{1}}{q_{1}}\bigg{)}^{e(A)}(-1)^{e(A)}C_{A}(n),$ (2.6)
where $p_{1}=\frac{\beta_{1}-\alpha_{1}}{n}$,
$q_{1}=\frac{\alpha_{1}-(1-n)\beta_{1}}{n}$,
$\displaystyle(-1)^{e(G)}C_{G}(n)=q_{2}^{e(G)}\sum_{A}\bigg{(}\frac{p_{2}}{q_{2}}\bigg{)}^{e(A)}n^{k(A)-v(G)}\beta_{1}^{c(A)}(\alpha_{1}-\beta_{1})^{r(A)}T_{A}\bigg{(}\frac{\alpha_{1}+(n-1)\beta_{1}}{\alpha_{1}-\beta_{1}},\frac{\alpha_{1}}{\beta_{1}}\bigg{)},$ (2.7)
where $p_{2}=\frac{n}{\beta_{1}-\alpha_{1}}$ and
$q_{2}=\frac{\alpha_{1}-(1-n)\beta_{1}}{\alpha_{1}-\beta_{1}}$.
###### Proof.
Let us consider two $n$-Potts models:
* •
Models $M_{1}(G;\alpha_{1},\beta_{1})$ and $M_{2}(G;0,1)$ for the proof of the
formula (2.4).
* •
The specification of the first case: $M_{1}(G;1-n,1)$ and the same
$M_{2}(G;0,1)$ for the proof of the formula (2.5).
* •
Models $M_{1}(G;1-n,1)$ and $M_{2}(G;\alpha_{1},\beta_{1})$ with the
parameters $\alpha_{1}$, $\beta_{1}$ for the proof of the formula (2.6).
* •
And finally, models $M_{1}(G;\alpha_{1},\beta_{1})$ and $M_{2}(G;1-n,1)$ for
the proof of formula (2.7).
It remains to repeat, step by step, the proof of Theorem 2.18 for each pair of models. ∎
###### Remark 2.20.
We notice that the formula (2.5) can naturally be considered as an “inversion” of Theorem 2.18.
### 2.3 Shifting the order in the Potts models
The Biggs formula (Lemma 2.13) allows us to relate the values of the partition functions of the $n$-Potts models with fixed $n$ but different values of the parameters $\alpha$ and $\beta$. The goal of the current subsection is to present a method for connecting partition functions of the $n$-Potts models for different $n$. We call the resulting identities shifting-order formulas.
The first method is based on the multiplicativity property of the complete
flow polynomial.
###### Definition 2.21.
Let $G$ be a graph with the edge set $E(G)$ and the vertex set $V(G)$, let us
choose a fixed edge orientation on $G$. Then, a function $f\colon
E\rightarrow\mathbb{Z}_{n}$ is called an $n$-flow if the following condition
holds
$\displaystyle\forall v\in V(G)\colon\ \sum\limits_{e\in
M^{+}(v)}f(e)=\sum\limits_{e\in M^{-}(v)}f(e),$
where again $M^{+}(v)$ (respectively, $M^{-}(v)$) is the set of edges directed to (respectively, from) $v$.
Let us formulate a few well-known results concerning the flow polynomial and the number of all $n$-flows. The proofs can be found, for example, in [22].
###### Proposition 2.22.
Denote the number of all $n$-flows on a graph $G$ by ${FC}_{G}(n)$, then
${FC}_{G}(n)$ is independent of the choice of an orientation and the following
identity holds
$\displaystyle{FC}_{G}(n)=\sum\limits_{A\subseteq G}C_{A}(n),$
where the summation goes through all spanning subgraphs $A$ of the graph $G$.
###### Proposition 2.23.
The number of all $n$-flows on a graph is the following polynomial $($called
complete flow polynomial$)$
$\displaystyle{FC}_{G}(n)=n^{e(G)-v(G)+k(G)},$
where $e(G)$, $v(G)$, $k(G)$ are the numbers of edges, vertices, and connected components of the graph $G$, respectively.
###### Proposition 2.24.
The flow polynomial $C_{G}(n)$ of a graph $G$ could be expressed in terms of
the complete flow polynomials of its spanning subgraphs by the following
identity:
$\displaystyle C_{G}(n)=\sum_{A\subseteq G}(-1)^{e(G)-e(A)}{FC}_{A}(n).$
The complete flow polynomial ${FC}_{G}(n)$ is a multiplicative invariant: ${FC}_{G}(n_{1}n_{2})={FC}_{G}(n_{1})\,{FC}_{G}(n_{2})$. Therefore, we are ready to formulate the following theorem:
###### Theorem 2.25.
The partition function $Z_{n_{1}n_{2}}(G)$ of the $n_{1}n_{2}$-Potts model
$M(G;\alpha_{1},\beta_{1})$ could be expressed in terms of the partition
functions $Z_{n_{1}}(A)$ and $Z_{n_{2}}(A)$ of the $n_{1}$-Potts model
$M_{1}(A;\alpha_{1},\beta_{1})$ and $n_{2}$-Potts model
$M_{2}(A;\alpha_{1},\beta_{1})$ of all spanning subgraphs $A$ of the graph $G$
respectively.
###### Proof.
Indeed, by Theorem 2.8 and the formula (2.6) we have
$\displaystyle
Z_{n_{1}n_{2}}(G)=\gamma_{G}T_{G}\bigg{(}\frac{\alpha_{1}+(n_{1}n_{2}-1)\beta_{1}}{\alpha_{1}-\beta_{1}},\frac{\alpha_{1}}{\beta_{1}}\bigg{)}=\sum_{A\subseteq
G}\lambda_{A}C_{A}(n_{1}n_{2}).$
From Proposition 2.24 we obtain
$\displaystyle\sum_{A\subseteq G}\lambda_{A}C_{A}(n_{1}n_{2})=\sum_{A\subseteq G}\lambda_{A}\bigg{(}\sum_{A^{\prime}\subseteq A}(-1)^{e(A)-e(A^{\prime})}{FC}_{A^{\prime}}(n_{1}n_{2})\bigg{)}=\sum_{A\subseteq G}\omega_{A}{FC}_{A}(n_{1}n_{2})=\sum_{A\subseteq G}\omega_{A}{FC}_{A}(n_{1}){FC}_{A}(n_{2});$
notice that for the second resummation we used the following simple observation: if $X$ is a spanning subgraph of $Y$ and $Y$ is a spanning subgraph of $Z$, then $X$ is a spanning subgraph of $Z$. We omit this remark below.

Proposition 2.22 implies
$\displaystyle\sum_{A\subseteq G}\omega_{A}{FC}_{A}(n_{1}){FC}_{A}(n_{2})=\sum_{A\subseteq G}\omega_{A}\bigg{(}\sum_{A^{\prime}\subseteq A}C_{A^{\prime}}(n_{1})\bigg{)}\bigg{(}\sum_{A^{\prime\prime}\subseteq A}C_{A^{\prime\prime}}(n_{2})\bigg{)}=\sum_{A^{\prime}\subseteq G}\sum_{A^{\prime\prime}\subseteq G}\mu_{A^{\prime}A^{\prime\prime}}C_{A^{\prime}}(n_{1})C_{A^{\prime\prime}}(n_{2}).$

Finally, with the help of formula (2.7) and Theorem 2.8 we obtain
$\displaystyle\sum_{A^{\prime}\subseteq G}\sum_{A^{\prime\prime}\subseteq G}\mu_{A^{\prime}A^{\prime\prime}}\bigg{(}\sum_{B\subseteq A^{\prime}}\delta_{B}Z_{n_{1}}(B)\bigg{)}\bigg{(}\sum_{C\subseteq A^{\prime\prime}}\delta_{C}Z_{n_{2}}(C)\bigg{)}=\sum_{A^{\prime}\subseteq G}\sum_{A^{\prime\prime}\subseteq G}\eta_{A^{\prime}A^{\prime\prime}}Z_{n_{1}}(A^{\prime})Z_{n_{2}}(A^{\prime\prime}),$
where $\eta_{A^{\prime}A^{\prime\prime}}$ are constants that appear after the resummations. ∎
###### Remark 2.26 (convolution formula [16]).
It seems extremely interesting and fruitful to compare Lemma 2.13 and Theorem
2.25 with the convolution formula
$\displaystyle T_{G}(x,y)=\sum_{A\subseteq E(G)}T_{G|A}(0,y)T_{G/A}(x,0),$
where the summation is over all possible subsets of $E(G)$, $G|A$ is the graph obtained by restricting $G$ to the edge subset $A$, and $G/A$ is the graph obtained from $G$ by contracting all edges from $A$ (see [9] for more details).
Our second method is based on the Tutte identity for the chromatic polynomial:
###### Theorem 2.27 ([9]).
Consider a graph $G$ with the vertex set $V(G)$; then the following formula holds
$\displaystyle\chi_{G}(n_{1}+n_{2})=\sum_{B\subseteq
V(G)}\chi_{G|B}(n_{1})\chi_{G|B^{c}}(n_{2}),$
where $G|B$ $($respectively, $G|B^{c})$ is the restriction of $G$ to the vertex subset $B\subseteq V(G)$ $($respectively, $B^{c}=V(G)\setminus B)$.
Using this fact we can formulate the following theorem:
###### Theorem 2.28.
The partition function $Z_{n_{1}+n_{2}}(G)$ of the $n_{1}+n_{2}$-Potts model
$M(G;\alpha_{1},\beta_{1})$ could be expressed in terms of the partition
functions $Z_{n_{1}}(A)$ and $Z_{n_{2}}(A)$ of the $n_{1}$-Potts model
$M_{1}(A;\alpha_{1},\beta_{1})$ and $n_{2}$-Potts model
$M_{2}(A;\alpha_{1},\beta_{1})$ of all spanning subgraphs $A$ of the graph $G$
respectively.
###### Proof.
The proof is very similar to the proof of Theorem 2.25. Again, from Theorem
2.8 and the formula (2.4) we have
$\displaystyle
Z_{n_{1}+n_{2}}(G)=\gamma_{G}T_{G}\bigg{(}\frac{\alpha_{1}+(n_{1}+n_{2}-1)\beta_{1}}{\alpha_{1}-\beta_{1}},\frac{\alpha_{1}}{\beta_{1}}\bigg{)}=\sum_{A\subseteq
G}\lambda_{A}\chi_{A}(n_{1}+n_{2}).$
From Theorem 2.27 we obtain
$\displaystyle\sum_{A\subseteq G}\lambda_{A}\chi_{A}(n_{1}+n_{2})=\sum_{A\subseteq G}\lambda_{A}\bigg{(}\sum_{B\subseteq V(A)}\chi_{A|B}(n_{1})\chi_{A|B^{c}}(n_{2})\bigg{)}=\sum_{A\subseteq G}\lambda_{A}\Bigg{(}\sum_{B\subseteq V(A)}\bigg{(}\sum_{A_{1}\subseteq A|B}\omega_{A_{1}}Z_{n_{1}}(A_{1})\bigg{)}\bigg{(}\sum_{A_{2}\subseteq A|B^{c}}\omega_{A_{2}}Z_{n_{2}}(A_{2})\bigg{)}\Bigg{)}=\sum_{A\subseteq G}\sum_{B\subseteq V(A)}\sum_{A_{1}\subseteq A|B}\sum_{A_{2}\subseteq A|B^{c}}\mu_{A_{1}A_{2}}Z_{n_{1}}(A_{1})Z_{n_{2}}(A_{2}).$
Let us complete each subgraph $A_{1}$ (each $A_{2}$) to the corresponding spanning subgraph of $G$ by adding isolated vertices:
$\displaystyle\sum_{A\subseteq G}\sum_{B\subseteq V(A)}\sum_{A_{1}\subseteq A|B}\sum_{A_{2}\subseteq A|B^{c}}\mu_{A_{1}A_{2}}Z_{n_{1}}(A_{1})Z_{n_{2}}(A_{2})=\sum_{A^{\prime}\subseteq G}\sum_{A^{\prime\prime}\subseteq G}\eta_{A^{\prime}A^{\prime\prime}}Z_{n_{1}}(A^{\prime})Z_{n_{2}}(A^{\prime\prime}),$
where $\eta_{A^{\prime}A^{\prime\prime}}$ are again constants that appear after the resummations. ∎
## 3 Star-triangle equation for Ising and Potts models
### 3.1 General properties
Let us rewrite the partition function of the anisotropic $n$-Potts models in
the so called Fortuin–Kasteleyn representation:
###### Proposition 3.1 (compare with the formula $(2.7)$ from [22]).
Consider the anisotropic $n$-Potts model $M(G;i_{e})$, then its partition
function could be expressed as follows
$\displaystyle Z_{n}(G)=\sum_{\sigma}\prod_{e\in
E}(\beta_{e}+(\alpha_{e}-\beta_{e})\delta(\sigma_{e}))=\prod_{e\in
E}\beta_{e}\sum_{\sigma}\prod_{e\in E}(1+(t_{e}-1)\delta(\sigma_{e})),$ (3.1)
where $\delta(\sigma_{e})$ is the standard Kronecker delta of the values of $\sigma$ at the endpoints of the edge $e$, and $t_{e}=\frac{\alpha_{e}}{\beta_{e}}$ is the reduced weight of the edge $e$.
###### Proof.
Indeed, it is easy to see that if $\sigma(v)=\sigma(w)$, then
$\displaystyle i_{e}(\delta(e))=i_{e}(\sigma(v)-\sigma(w))=\alpha_{e}=\beta_{e}+(\alpha_{e}-\beta_{e})\delta(\sigma_{e})=\beta_{e}+(\alpha_{e}-\beta_{e})\delta(\sigma(v),\sigma(w)),$
and if $\sigma(v)\neq\sigma(w)$, then
$\displaystyle i_{e}(\delta(e))=i_{e}(\sigma(v)-\sigma(w))=\beta_{e}=\beta_{e}+(\alpha_{e}-\beta_{e})\delta(\sigma_{e})=\beta_{e}+(\alpha_{e}-\beta_{e})\delta(\sigma(v),\sigma(w)).$ ∎
Also, we introduce the boundary partition function of the $n$-Potts models:
###### Definition 3.2.
Let $G$ be a graph (with possible loops and multiple edges) with the set of
vertices $V$, the set of edges $E$ and the boundary subset $S\subseteq V$ of
enumerated vertices: $S=\\{v_{1},v_{2},\dots,v_{k}\\}$. The boundary partition
function on $G$ is defined by the following expression
$\displaystyle Z_{n;S(\textbf{A})}(G)=\sum_{\sigma_{\textbf{A}}}\prod_{e\in
E}(\beta_{e}+(\alpha_{e}-\beta_{e})\delta(\sigma_{e})),$
where $\textbf{A}=\\{a_{1},a_{2},\dots,a_{k}\\}$, $\forall i\colon
a_{i}\in\mathbb{Z}_{n}$ is the set of fixed values, and the summation is over
such states $\sigma_{\textbf{A}}$ that $\sigma_{\textbf{A}}(v_{i})=a_{i}$.
###### Remark 3.3.
If $n=2$, we will omit the index $2$ and will write just
$Z_{S(\textbf{A})}(G)$.
The next lemma connects the boundary and ordinary partition functions:
###### Lemma 3.4.
Consider two graphs $G_{1}=(V_{1},E_{1})$ and $G_{2}=(V_{2},E_{2})$ with the
only common vertices in the boundary subset $S=\\{v_{1},v_{2},\dots,v_{n}\\}$
in $V_{1}$ and $V_{2}$. We can glue these graphs and obtain the third graph
$G=(V,E)$, where $E=E_{1}\sqcup E_{2}$, $V=V_{1}\cup_{S}V_{2}$. Then, the
following identity holds
$\displaystyle
Z_{n}(G)=\sum_{\textbf{A}}Z_{n;S(\textbf{A})}(G_{1})Z_{n;S(\textbf{A})}(G_{2}),$
where the summation is over all possible sets A.
Figure 3: $G$ is obtained by merging of $S=\\{v_{1},v_{2},v_{3},v_{4}\\}$.
###### Proof.
The formula follows directly from Proposition 3.1 and Definition 3.2. Indeed, by Definition 3.2 we can write down $Z_{n}(G)=\sum\limits_{\textbf{A}}Z_{n;S(\textbf{A})}(G),$ but also
$Z_{n;S(\textbf{A})}(G)=Z_{n;S_{1}(\textbf{A})}(G_{1})Z_{n;S_{2}(\textbf{A})}(G_{2})$.
∎
###### Remark 3.5.
The latter property of the partition function (Lemma 3.4) allows us to consider the $n$-Potts model partition function as a discrete version of a topological quantum field theory in the Atiyah formalism [1], where
$\displaystyle TQFT\colon\ {\rm Cob}\to{\rm Vect}$
is a functor from the category of cobordisms to the category of vector spaces.
### 3.2 The case $\boldsymbol{n=2}$
In this subsection we consider the case $n=2$. Our first goal is to find such
conditions that the partition function (3.1) is invariant under the star-
triangle transformation which changes the subgraph $\Omega$ to the subgraph
$\Omega^{\prime}$. We derive these conditions with the use of the boundary
partition functions: consider a graph $G$ with the subgraph $\Omega$, then
using Lemma 3.4 for graphs $\Omega$ and $G-\Omega$ we obtain the following
identity
$\displaystyle
Z(G)=\sum_{\textbf{A}}Z_{S(\textbf{A})}(\Omega)Z_{S(\textbf{A})}(G-\Omega),$
where $S=\\{v_{1},v_{2},v_{3}\\}$ (Figure 4). After the star-triangle
transformation, we obtain a graph $G^{\prime}$ with the following partition
function
$\displaystyle
Z(G^{\prime})=\sum_{\textbf{A}}Z_{S(\textbf{A})}(\Omega^{\prime})Z_{S(\textbf{A})}(G^{\prime}-\Omega^{\prime}).$
Figure 4: Star-triangle transformation.
Due to the fact that the star-triangle transformation does not change edges of
the graph $G-\Omega$, we deduce that $\forall\textbf{A}\colon
Z_{S(\textbf{A})}(G-\Omega)=Z_{S(\textbf{A})}(G^{\prime}-\Omega^{\prime})$.
Therefore, the necessary and sufficient conditions for the invariance of the
partition function are the following
$\displaystyle\forall\textbf{A}\colon\
Z_{S(\textbf{A})}(\Omega)=Z_{S(\textbf{A})}(\Omega^{\prime}).$ (3.2)
We write them down in detail. Let us note that these conditions do not depend
on the states of the vertices (see Figure 5), but depend on the number and the
positions of the vertices with equal states. Therefore, we have the following
possibilities:
Figure 5: Different possibilities.
* •
two states in the triangle are the same; the central vertex then either has the same state as them or a different one, which gives $\alpha_{1}\beta_{2}\beta_{3}+\beta_{1}\alpha_{2}\alpha_{3}\mapsto\alpha^{\prime}_{1}\beta^{\prime}_{2}\beta^{\prime}_{3}$ and two more relations after permuting indices,
* •
all states are the same, then
$\alpha_{1}\alpha_{2}\alpha_{3}+\beta_{1}\beta_{2}\beta_{3}\mapsto\alpha^{\prime}_{1}\alpha^{\prime}_{2}\alpha^{\prime}_{3}$.
In this way we obtain the following equations
$\displaystyle\begin{cases}\alpha_{1}\beta_{2}\beta_{3}+\beta_{1}\alpha_{2}\alpha_{3}=\alpha^{\prime}_{1}\beta^{\prime}_{2}\beta^{\prime}_{3},\\\
\alpha_{2}\beta_{1}\beta_{3}+\beta_{2}\alpha_{1}\alpha_{3}=\alpha^{\prime}_{2}\beta^{\prime}_{1}\beta^{\prime}_{3},\\\
\alpha_{3}\beta_{1}\beta_{2}+\beta_{3}\alpha_{1}\alpha_{2}=\alpha^{\prime}_{3}\beta^{\prime}_{1}\beta^{\prime}_{2},\\\
\alpha_{1}\alpha_{2}\alpha_{3}+\beta_{1}\beta_{2}\beta_{3}=\alpha^{\prime}_{1}\alpha^{\prime}_{2}\alpha^{\prime}_{3}.\end{cases}$
(3.3)
After the substitution
$\displaystyle t_{i}=\frac{\alpha_{i}}{\beta_{i}}$
we rewrite (3.3) as
$\displaystyle\begin{cases}\beta_{1}\beta_{2}\beta_{3}(t_{1}+t_{2}t_{3})=\beta^{\prime}_{1}\beta^{\prime}_{2}\beta^{\prime}_{3}t^{\prime}_{1},\\\
\beta_{1}\beta_{2}\beta_{3}(t_{2}+t_{1}t_{3})=\beta^{\prime}_{1}\beta^{\prime}_{2}\beta^{\prime}_{3}t^{\prime}_{2},\\\
\beta_{1}\beta_{2}\beta_{3}(t_{3}+t_{1}t_{2})=\beta^{\prime}_{1}\beta^{\prime}_{2}\beta^{\prime}_{3}t^{\prime}_{3},\\\
\beta_{1}\beta_{2}\beta_{3}(t_{1}t_{2}t_{3}+1)=\beta^{\prime}_{1}\beta^{\prime}_{2}\beta^{\prime}_{3}t^{\prime}_{1}t^{\prime}_{2}t^{\prime}_{3}.\end{cases}$
This set of equations defines a correspondence which preserves the Ising model
partition function if we mutate the graph $G$ to $G^{\prime}$. Let us denote
the product $\beta_{1}\beta_{2}\beta_{3}$ by $\beta$ and the product
$\beta^{\prime}_{1}\beta^{\prime}_{2}\beta^{\prime}_{3}$ by $\beta^{\prime}$.
Then, we obtain the following map from the $(t,\beta)$-variables to the
$(t^{\prime},\beta^{\prime})$ variables, we will call it $\widetilde{F}$,
$\displaystyle\widetilde{F}(t_{1},t_{2},t_{3},\beta)=(t^{\prime}_{1},t^{\prime}_{2},t^{\prime}_{3},\beta^{\prime})\colon$
$\displaystyle
t^{\prime}_{1}=\sqrt{\dfrac{(t_{1}+t_{2}t_{3})(t_{1}t_{2}t_{3}+1)}{(t_{2}+t_{1}t_{3})(t_{3}+t_{1}t_{2})}},$
$\displaystyle
t^{\prime}_{2}=\sqrt{\dfrac{(t_{2}+t_{1}t_{3})(t_{1}t_{2}t_{3}+1)}{(t_{1}+t_{2}t_{3})(t_{3}+t_{1}t_{2})}},$
$\displaystyle
t^{\prime}_{3}=\sqrt{\dfrac{(t_{3}+t_{1}t_{2})(t_{1}t_{2}t_{3}+1)}{(t_{1}+t_{2}t_{3})(t_{2}+t_{1}t_{3})}},$
$\displaystyle\beta^{\prime}=\beta\sqrt{\frac{(t_{1}+t_{2}t_{3})(t_{3}+t_{1}t_{2})(t_{2}+t_{1}t_{3})}{(t_{1}t_{2}t_{3}+1)}}.$
(3.4)
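For concreteness, the map $\widetilde{F}$ and the system it solves are easy to check numerically; in the sketch below (ours), equations (3.3) are verified in the $t$-variable form $\beta(t_{1}+t_{2}t_{3})=\beta^{\prime}t^{\prime}_{1}$, and so on:

```python
from math import sqrt

def F(t1, t2, t3, beta):
    """The map (3.4), positive square-root branch (cf. Remark 3.7 below)."""
    A, B, C, D = t1 + t2 * t3, t2 + t1 * t3, t3 + t1 * t2, t1 * t2 * t3 + 1
    return (sqrt(A * D / (B * C)), sqrt(B * D / (A * C)),
            sqrt(C * D / (A * B)), beta * sqrt(A * B * C / D))

t1, t2, t3, beta = 0.7, 1.3, 2.1, 1.0
s1, s2, s3, beta_p = F(t1, t2, t3, beta)
assert abs(beta * (t1 + t2 * t3) - beta_p * s1) < 1e-12
assert abs(beta * (t2 + t1 * t3) - beta_p * s2) < 1e-12
assert abs(beta * (t3 + t1 * t2) - beta_p * s3) < 1e-12
assert abs(beta * (t1 * t2 * t3 + 1) - beta_p * s1 * s2 * s3) < 1e-12
```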
###### Remark 3.6.
Formally speaking, to define a map on the space of edge weights adapted to the star-triangle transformation, we have to resolve the map $\widetilde{F}$ for the individual parameters $\beta_{i}$. For example, one can take
$\beta^{\prime}_{i}=\beta_{i}(\beta^{\prime}/\beta)^{1/3}.$
Actually, the choice of a resolution is not important in what follows.
###### Remark 3.7.
We choose the positive branch of the square root for real positive values of the variables $t_{i}$, for purposes emphasized below. This is relevant to the almost positive version of the orthogonal grassmannian. See [11] for more details about the connection between the Ising model and the positive orthogonal grassmannian.
### 3.3 The case $\boldsymbol{n\neq 2}$
Let us demonstrate how the method described above works for the star-triangle transformation in the case $n\geq 3$. Using the same ideas as in the previous subsection, we obtain the following conditions
$\begin{cases}\beta_{1}\beta_{2}\beta_{3}(t_{1}+t_{2}t_{3}+n-2)=\beta^{\prime}_{1}\beta^{\prime}_{2}\beta^{\prime}_{3}t^{\prime}_{1},\\\ \beta_{1}\beta_{2}\beta_{3}(t_{2}+t_{1}t_{3}+n-2)=\beta^{\prime}_{1}\beta^{\prime}_{2}\beta^{\prime}_{3}t^{\prime}_{2},\\\ \beta_{1}\beta_{2}\beta_{3}(t_{3}+t_{1}t_{2}+n-2)=\beta^{\prime}_{1}\beta^{\prime}_{2}\beta^{\prime}_{3}t^{\prime}_{3},\\\ \beta_{1}\beta_{2}\beta_{3}(t_{1}t_{2}t_{3}+n-1)=\beta^{\prime}_{1}\beta^{\prime}_{2}\beta^{\prime}_{3}t^{\prime}_{1}t^{\prime}_{2}t^{\prime}_{3},\\\ \beta_{1}\beta_{2}\beta_{3}(t_{1}+t_{2}+t_{3}+n-3)=\beta^{\prime}_{1}\beta^{\prime}_{2}\beta^{\prime}_{3}.\end{cases}$
(3.5)
Here the last equation follows from the extra case in which all states are
different.
Figure 6: The extra case.
In general, the system (3.5) does not have a solution, and the star-triangle transformation is not possible. But if the $t_{i}$ satisfy a special condition, the partition function of the $n$-Potts model is still invariant under the star-triangle transformation.
###### Proposition 3.8.
The system (3.5) together with equation
$\displaystyle
t_{1}t_{2}t_{3}=t_{1}t_{2}+t_{2}t_{3}+t_{3}t_{1}+(n-1)(t_{1}+t_{2}+t_{3})+n^{2}-3n+1$
(3.6)
has a solution for the primed variables.
###### Proof.
Using the first three and the last equations of (3.5) we immediately obtain
the expressions for $t^{\prime}_{i}$ and
$\frac{\beta^{\prime}_{1}\beta^{\prime}_{2}\beta^{\prime}_{3}}{\beta_{1}\beta_{2}\beta_{3}}$:
$\displaystyle\begin{cases}\dfrac{\beta^{\prime}_{1}\beta^{\prime}_{2}\beta^{\prime}_{3}}{\beta_{1}\beta_{2}\beta_{3}}=t_{1}+t_{2}+t_{3}+n-3,\\\\[8.61108pt]
t^{\prime}_{1}=\dfrac{t_{1}+t_{2}t_{3}+n-2}{t_{1}+t_{2}+t_{3}+n-3},\\\\[8.61108pt]
t^{\prime}_{2}=\dfrac{t_{2}+t_{1}t_{3}+n-2}{t_{1}+t_{2}+t_{3}+n-3},\\\\[8.61108pt]
t^{\prime}_{3}=\dfrac{t_{3}+t_{1}t_{2}+n-2}{t_{1}+t_{2}+t_{3}+n-3}.\end{cases}$
Substitute these expressions into the fourth equation of (3.5) and obtain the
equation
$\displaystyle
t_{1}t_{2}t_{3}+n-1=\frac{(t_{1}+t_{2}t_{3}+n-2)(t_{2}+t_{1}t_{3}+n-2)(t_{3}+t_{1}t_{2}+n-2)}{(t_{1}+t_{2}+t_{3}+n-3)^{2}}.$
(3.7)
By a straightforward computation, one checks that equation (3.7) reduces to the identity (3.6). ∎
###### Corollary 3.9.
The partition function of the $n$-Potts model $(n\geq 3)$ is invariant under the star-triangle transformation if and only if the system (3.5) together with equation (3.6) holds.
Below we present two nontrivial specializations of the partition function of the $n$-Potts model which agree with the system (3.5), (3.6).
###### Example 3.10.
Consider a graph $G$ and equip each $e\in E(G)$ with sign $+$ or $-$. Let us
consider the $n$-Potts model $M_{k}(G,\alpha_{e},\beta_{e})$ with following
parameters:
* •
for all $e\in E$ equipped with $+$ the parameters $\alpha_{e}$, $\beta_{e}$
equal $\alpha_{e}=A_{+}=-t^{-\frac{3}{4}}$, $\beta_{e}=B_{+}=t^{\frac{1}{4}}$,
* •
for all $e\in E$ equipped with $-$ the parameters $\alpha_{e}$, $\beta_{e}$
equal $\alpha_{e}=A_{-}=-t^{\frac{3}{4}}$, $\beta_{e}=B_{-}=t^{-\frac{1}{4}}$,
* •
and $n=t+\frac{1}{t}+2$ (we suppose that parameter $t$ is chosen such that
$n\in\mathbb{N}$).
Let the graph $G$ have a triangle subgraph whose edges have signs $+$, $-$, $-$. The reduced weights of the edges are
$t_{1}=\frac{A_{+}}{B_{+}}=-\frac{1}{t}$, $t_{2}=\frac{A_{-}}{B_{-}}=-t$,
$t_{3}=\frac{A_{-}}{B_{-}}=-t$. It is easy to see that these $t_{i}$ satisfy
the equation (3.6).
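This claim is easy to verify symbolically; the following sympy sketch (ours, not part of the original argument) checks that the condition (3.6) holds identically in the parameter $t$.

```python
# Symbolic check (not from the paper) that the reduced weights of this
# example satisfy the special condition (3.6) for every value of t.
import sympy as sp

t = sp.symbols('t', positive=True)
t1, t2, t3 = -1/t, -t, -t        # reduced weights of the signed triangle
n = t + 1/t + 2                  # number of Potts states

lhs = t1*t2*t3
rhs = t1*t2 + t2*t3 + t3*t1 + (n - 1)*(t1 + t2 + t3) + n**2 - 3*n + 1
assert sp.simplify(lhs - rhs) == 0
print("Example 3.10 satisfies (3.6) identically in t")
```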
We notice that the signed graph $G$ can be considered as the signed Tait graph for a diagram $D(K)$ of a knot $K$ ([24], the chapter “Knot invariants from edge-interaction models”). Moreover, the value of the Jones polynomial of the knot $K$ at the point $n$ is closely related to the partition function of the $n$-Potts model $M_{k}(G,\alpha_{e},\beta_{e})$ (see [24, equation (7.17)]). Thus, the identification of the third Reidemeister move of the diagram $D(K)$ with the star-triangle transformation of the signed graph $G$ is consistent with the star-triangle transformation defined by the system (3.5), (3.6) for the $n$-Potts model $M_{k}(G,\alpha_{e},\beta_{e})$.
Our second example concerns models of bond percolation. First, we briefly recall the definition:
###### Definition 3.11 (bond percolation [13]).
Consider a graph $G$. An edge $e\in E(G)$ is considered to be open with probability $p_{e}$ or closed with probability $1-p_{e}$. We suppose that all edges are open or closed independently of one another. One is interested in probabilistic properties of cluster formation (i.e., maximal connected sets of open edges of the graph $G$).
###### Example 3.12.
Bond percolation models can be considered as the limit $n\to 1$ of the $n$-Potts models at the level of the boundary partition functions [7]. This identification corresponds to the specialization of the system (3.5), (3.6) at $n\to 1$. Substituting $t_{i}=\frac{1}{p_{i}}$, $t^{\prime}_{i}=\frac{1}{p^{\prime}_{i}}$ and $n=1$ into (3.5) and (3.6), we obtain
$\displaystyle\begin{cases}(p_{1}+p_{2}p_{3}-p_{1}p_{2}p_{3})\alpha_{1}\alpha_{2}\alpha_{3}=p^{\prime}_{2}p^{\prime}_{3}\alpha^{\prime}_{1}\alpha^{\prime}_{2}\alpha^{\prime}_{3},\\\
(p_{2}+p_{1}p_{3}-p_{1}p_{2}p_{3})\alpha_{1}\alpha_{2}\alpha_{3}=p^{\prime}_{1}p^{\prime}_{3}\alpha^{\prime}_{1}\alpha^{\prime}_{2}\alpha^{\prime}_{3},\\\
(p_{3}+p_{1}p_{2}-p_{1}p_{2}p_{3})\alpha_{1}\alpha_{2}\alpha_{3}=p^{\prime}_{1}p^{\prime}_{2}\alpha^{\prime}_{1}\alpha^{\prime}_{2}\alpha^{\prime}_{3},\\\
\alpha_{1}\alpha_{2}\alpha_{3}=\alpha^{\prime}_{1}\alpha^{\prime}_{2}\alpha^{\prime}_{3},\\\\[4.30554pt]
\dfrac{1}{p_{1}p_{2}}+\dfrac{1}{p_{2}p_{3}}+\dfrac{1}{p_{1}p_{3}}-1=\dfrac{1}{p_{1}p_{2}p_{3}}.\end{cases}$
After simplifications we obtain the condition for the star-triangle
transformation of the bond percolation models (for instance, see [13])
$\displaystyle\begin{cases}p_{1}+p_{2}p_{3}-p_{1}p_{2}p_{3}=p^{\prime}_{2}p^{\prime}_{3},\\\
p_{2}+p_{1}p_{3}-p_{1}p_{2}p_{3}=p^{\prime}_{1}p^{\prime}_{3},\\\
p_{3}+p_{1}p_{2}-p_{1}p_{2}p_{3}=p^{\prime}_{1}p^{\prime}_{2},\\\
p_{1}+p_{2}+p_{3}-1=p_{1}p_{2}p_{3}.\end{cases}$
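As a concrete illustration (ours, not from the paper), consider the symmetric solution $p_{1}=p_{2}=p_{3}=p$ of the last constraint, $3p-1=p^{3}$. Its root in $(0,1)$ is the well-known critical probability $2\sin(\pi/18)$ of bond percolation on the triangular lattice, and the first three equations then give $p^{\prime}=1-p$ exactly, since $p+p^{2}-p^{3}=(1-p)^{2}$ is equivalent to the constraint.

```python
# Numerical illustration (not from the paper) of the symmetric solution of
# the star-triangle condition for bond percolation.
import numpy as np

# Solve p^3 - 3p + 1 = 0 and pick the root in (0, 1).
p = next(r.real for r in np.roots([1.0, 0.0, -3.0, 1.0])
         if abs(r.imag) < 1e-12 and 0 < r.real < 1)

A = p + p*p - p**3      # common right-hand side of the first three equations
p_prime = np.sqrt(A)    # symmetric solution: p'^2 = A

print(f"p  = {p:.6f}")  # ~0.347296, i.e. 2*sin(pi/18)
print(f"p' = {p_prime:.6f}, 1 - p = {1 - p:.6f}")
assert abs(p_prime - (1 - p)) < 1e-10
```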
## 4 Tetrahedron equation
The tetrahedron equation was first considered by A. Zamolodchikov [25], who constructed its solution in $S$-form. We consider the following form of the equation
$\displaystyle T_{123}T_{145}T_{246}T_{356}=T_{356}T_{246}T_{145}T_{123},$
(4.1)
where $T_{ijk}$ is an operator acting nontrivially in the tensor product of
three vector spaces $V_{i}$, $V_{j}$, $V_{k}$, indexed by $i$, $j$ and $k$.
The tetrahedron equation is a higher-order analog of the Yang–Baxter equation. Both equations are examples of $n$-simplex equations [18] and play an important role in hypercube combinatorics and higher Bruhat orders. For a complete introduction to the topic see, for example, [21]. In this section we present two proofs of the main theorem of the paper:
###### Theorem 4.1.
The change of variables (3.4) defines a solution of the tetrahedron equation (4.1).
The two proofs share many common points and ideas but differ crucially in their final stages. It is instructive to compare them, as together they combine arguments involving boundary partition functions with the technique of correlation functions in the Ising–Potts models. First, in the next subsection we prove that the change of variables (3.4) corresponds to the transformation of variables in the trigonometric solution of a local Yang–Baxter equation.
### 4.1 Local Yang–Baxter equation
Let us recall that the following change of variables
$(t_{1},t_{2},t_{3})\mapsto(t^{\prime}_{1},t^{\prime}_{2},t^{\prime}_{3})$
provides an invariance of the Ising model (3.1) under the star-triangle
transformation (3.4):
$\displaystyle\begin{cases}t^{\prime}_{1}=&\sqrt{\dfrac{(t_{1}+t_{2}t_{3})(t_{1}t_{2}t_{3}+1)}{(t_{2}+t_{1}t_{3})(t_{3}+t_{1}t_{2})}},\\\\[8.61108pt]
t^{\prime}_{2}=&\sqrt{\dfrac{(t_{2}+t_{1}t_{3})(t_{1}t_{2}t_{3}+1)}{(t_{1}+t_{2}t_{3})(t_{3}+t_{1}t_{2})}},\\\\[8.61108pt]
t^{\prime}_{3}=&\sqrt{\dfrac{(t_{3}+t_{1}t_{2})(t_{1}t_{2}t_{3}+1)}{(t_{1}+t_{2}t_{3})(t_{2}+t_{1}t_{3})}}\end{cases}$
$\displaystyle\qquad\qquad\Updownarrow$
$\displaystyle\begin{cases}t^{\prime}_{1}t^{\prime}_{2}=&\dfrac{t_{1}t_{2}t_{3}+1}{t_{3}+t_{1}t_{2}},\\\\[8.61108pt]
t^{\prime}_{2}t^{\prime}_{3}=&\dfrac{t_{1}t_{2}t_{3}+1}{t_{1}+t_{2}t_{3}},\\\\[8.61108pt]
t^{\prime}_{1}t^{\prime}_{3}=&\dfrac{t_{1}t_{2}t_{3}+1}{t_{2}+t_{1}t_{3}}.\end{cases}$
(4.2)
Following [17], we construct orthogonal hyperbolic $3\times 3$ matrices
$R_{ij}$ which solve the local Yang–Baxter equation
$\displaystyle
R_{12}(t_{3})R_{13}(S(t_{2}))R_{23}(t_{1})=R_{23}(S(t^{\prime}_{1}))R_{13}(t^{\prime}_{2})R_{12}(S(t^{\prime}_{3})),$
(4.3)
where $S(t)$ is the following involution
$\displaystyle S(t)=\frac{t-1}{t+1}.$ (4.4)
On the left hand side of (4.3) we have
$\displaystyle
R_{12}(t_{3})=\begin{pmatrix}\mathrm{i}\sinh(\log(t_{3}))&\cosh(\log(t_{3}))&0\\\
\cosh(\log(t_{3}))&-\mathrm{i}\sinh(\log(t_{3}))&0\\\ 0&0&1\end{pmatrix}\\!,$
(4.5) $\displaystyle
R_{13}(S(t_{2}))=\begin{pmatrix}\mathrm{i}\sinh(\log(S(t_{2})))&0&\cosh(\log(S(t_{2})))\\\
0&1&0\\\
\cosh(\log(S(t_{2})))&0&-\mathrm{i}\sinh(\log(S(t_{2})))\end{pmatrix}\\!,$
(4.6) $\displaystyle R_{23}(t_{1})=\begin{pmatrix}1&0&0\\\
0&\mathrm{i}\sinh(\log(t_{1}))&\cosh(\log(t_{1}))\\\
0&\cosh(\log(t_{1}))&-\mathrm{i}\sinh(\log(t_{1}))\end{pmatrix}\\!.$ (4.7)
###### Theorem 4.2.
Matrices (4.5), (4.6), (4.7) together with the rules (3.4), (4.4) give a
solution of (4.3).
###### Proof.
It can be proved by a straightforward computation. For example let us write
down the result of the product on the left hand side
$\displaystyle\begin{pmatrix}\dfrac{t_{2}\big{(}t_{3}^{2}-1\big{)}}{t_{3}(t_{2}^{2}-1)}&\dfrac{{\rm
i}\big{(}t_{1}^{2}t_{2}^{2}t_{3}^{2}-t_{1}^{2}-t_{2}^{2}+t_{3}^{2}\big{)}}{2t_{1}t_{3}\big{(}t_{2}^{2}-1\big{)}}&\dfrac{t_{1}^{2}t_{2}^{2}t_{3}^{2}-t_{1}^{2}+t_{2}^{2}-t_{3}^{2}}{2t_{1}t_{3}\big{(}t_{2}^{2}-1\big{)}}\\\\[12.91663pt]
-\dfrac{{\rm
i}t_{2}\big{(}t_{3}^{2}+1\big{)}}{t_{3}(t_{2}^{2}-1)}&\dfrac{t_{1}^{2}t_{2}^{2}t_{3}^{2}+t_{1}^{2}+t_{2}^{2}+t_{3}^{2}}{2t_{1}t_{3}\big{(}t_{2}^{2}-1\big{)}}&-\dfrac{{\rm
i}\big{(}t_{1}^{2}t_{2}^{2}t_{3}^{2}+t_{1}^{2}-t_{2}^{2}-t_{3}^{2}\big{)}}{2t_{1}t_{3}\big{(}t_{2}^{2}-1\big{)}}\\\\[12.91663pt]
\dfrac{t_{2}^{2}+1}{t_{2}^{2}-1}&\dfrac{{\rm
i}t_{2}\big{(}t_{1}^{2}+1\big{)}}{t_{1}\big{(}t_{2}^{2}-1\big{)}}&\dfrac{t_{2}\big{(}t_{1}^{2}-1\big{)}}{t_{1}\big{(}t_{2}^{2}-1\big{)}}\end{pmatrix}\\!.$
(4.8)
At first glance the product on the right-hand side looks much more cumbersome, but all terms eventually simplify, and the matrix on the right-hand side coincides with (4.8). ∎
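The theorem can also be checked numerically. The sketch below (ours, not part of the paper) uses $\sinh(\log t)=(t-1/t)/2$ and $\cosh(\log t)=(t+1/t)/2$ to build the matrices (4.5)–(4.7). Writing $F$ for the change of variables displayed above (with principal square roots), we take the primed triple in (4.3) to be $S^{3}\circ F\circ S^{3}(t_{1},t_{2},t_{3})$, which equals $F^{-1}$ by Lemma 4.3 below; this is our reading of the direction conventions behind (3.4).

```python
# Numerical sanity check of Theorem 4.2 (ours; conventions as stated above).
import numpy as np

def sh(t): return (t - 1/t) / 2       # sinh(log t)
def ch(t): return (t + 1/t) / 2       # cosh(log t)

def R(i, j, t):
    """The matrix (4.5)-(4.7) acting in the (i, j) plane of C^3."""
    M = np.eye(3, dtype=complex)
    M[i, i], M[j, j] = 1j * sh(t), -1j * sh(t)
    M[i, j] = M[j, i] = ch(t)
    return M

def S(t):                              # the involution (4.4)
    return (t - 1) / (t + 1)

def F(t1, t2, t3):                     # the change of variables in (4.2)
    num = t1*t2*t3 + 1
    a, b, c = t1 + t2*t3, t2 + t1*t3, t3 + t1*t2
    return (np.sqrt(a*num/(b*c)), np.sqrt(b*num/(a*c)), np.sqrt(c*num/(a*b)))

t1, t2, t3 = 2.0, 3.0, 5.0
p1, p2, p3 = map(S, F(S(t1), S(t2), S(t3)))   # primed triple, S^3 o F o S^3

lhs = R(0, 1, t3) @ R(0, 2, S(t2)) @ R(1, 2, t1)
rhs = R(1, 2, S(p1)) @ R(0, 2, p2) @ R(0, 1, S(p3))
print("max |LHS - RHS| =", np.abs(lhs - rhs).max())   # ~1e-14
```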
### 4.2 Tetrahedron equation, first proof
Let us encode the tetrahedron equation by the Figure 7.
Figure 7: Encoding the tetrahedron equation by the standard graph.
The standard graph encodes $R$-matrices in the following way: at each inner vertex numbered by $k$, which is the intersection of strands $i$ and $j$, we put the matrix $R_{ij}(t_{k})$, which acts as a $2\times 2$ block in the $4$-dimensional space with basis vectors indexed by $a$, $b$, $c$, $d$. For instance,
$\displaystyle
R_{ac}(t_{5})=\begin{pmatrix}\mathrm{i}\sinh(\log(t_{5}))&0&\cosh(\log(t_{5}))&0\\\
0&1&0&0\\\ \cosh(\log(t_{5}))&0&-\mathrm{i}\sinh(\log(t_{5}))&0\\\
0&0&0&1\end{pmatrix}\\!.$
Let us orient each strand from left to right and multiply the $R$-matrices in the order given by the orientation; for instance, for Figure 7 we have the following product of $R$-matrices
$\displaystyle
R_{cd}(t_{1})R_{bd}(S(t_{2}))R_{bc}(t_{3})R_{ad}(t_{4})R_{ac}(S(t_{5}))R_{ab}(t_{6}).$
(4.9)
We note that the orientation defines the product (4.9) uniquely.
Then let us apply four local Yang–Baxter equations successively to the inner triangles with vertices numbered $(1,2,3)$, $(1,4,5)$, $(2,4,6)$ and $(3,5,6)$, as in Figure 8. As a result, we obtain the same standard graph as in Figure 7, rotated by $\pi$.
Figure 8: Local Yang–Baxter equations applied to the standard graph.
At the same time, we could apply the local Yang–Baxter equations in the opposite order: first to the triangle $(3,5,6)$, then to $(2,4,6)$, $(1,4,5)$ and $(1,2,3)$. In this case we again arrive at the same standard graph.
As the reader may have already guessed, every local Yang–Baxter equation
applied to the triangle $A$, $B$, $C$ defines the factor $T_{ABC}$ in the
tetrahedron equation (4.1). For example we obtain
$\displaystyle T_{1,2,3}\colon\
(t_{1},S(t_{2}),t_{3},t_{4},t_{5},t_{6})\mapsto(S(t^{\prime}_{1}),t^{\prime}_{2},S(t^{\prime}_{3}),t_{4},t_{5},t_{6}).$
By Theorem 4.2, the product (4.9) is preserved by each local Yang–Baxter equation encoded in Figure 8. As a result of the two sequences of four local Yang–Baxter equations, we obtain an equality of two products of six $4\times 4$ $R$-matrices
$\displaystyle
R_{cd}(u_{1})R_{bd}(u_{2})R_{bc}(u_{3})R_{ad}(u_{4})R_{ac}(u_{5})R_{ab}(u_{6})$
$\displaystyle\qquad{}=R_{cd}(v_{1})R_{bd}(v_{2})R_{bc}(v_{3})R_{ad}(v_{4})R_{ac}(v_{5})R_{ab}(v_{6}),$
(4.10)
where the parameters $u_{i}$, $v_{j}$, $i,j=1,\ldots,6$ depend on the initial
variables $t_{i}$, on the mapping (3.4) and on the involution (4.4).
Let us consider this equation element-wise, and note that we can uniquely express the parameters on the right-hand side in terms of the parameters on the left-hand side
$\displaystyle U_{1,4}=b(t_{4}),\qquad U_{2,4}=-a(t_{4})b(t_{2}),\qquad
U_{1,3}=a(t_{4})b(t_{5}),$ $\displaystyle
U_{1,2}=a(t_{4})a(t_{5})b(t_{6}),\quad U_{3,4}=a(t_{2})a(t_{4})b(t_{1}),\quad
U_{2,3}=b(t_{2})b(t_{4})b(t_{5})-a(t_{2})a(t_{5})b(t_{3}).$
Here $U$ is the matrix on the left-hand side and $a$, $b$ are invertible functions coming from (4.5), (4.6), (4.7). Thus we can uniquely determine $t_{4}$ from the first equation, $t_{2}$ and $t_{5}$ from the second and the third, then $t_{1}$ and $t_{6}$, and finally $t_{3}$ from the element $U_{2,3}$.
Let us note that this algebraic proof can be formulated in terms of paths on the oriented standard graph (Figure 7). Thus equation (4.10) provides the coincidence of the parameters at the vertices produced by the two sides of the tetrahedron equation. This finishes the proof.
### 4.3 Tetrahedron equation, second proof
#### 4.3.1 Involution lemma
Let us consider the map $F(t_{1},t_{2},t_{3})=(t^{\prime}_{1},t^{\prime}_{2},t^{\prime}_{3})$, where the primed variables are defined by (3.4). First of all, we formulate a technical lemma:
###### Lemma 4.3.
The following identity holds for all $t_{1}$, $t_{2}$, $t_{3}$:
$\displaystyle S\times S\times S\circ F\circ S\times S\times S=F^{-1},$
where $S(t)=\frac{t-1}{t+1}$.
We present the proof in Appendix A.
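The lemma is also easy to confirm numerically; the sketch below (ours) checks that $F\circ S^{3}\circ F\circ S^{3}$ acts as the identity on a test point, taking $F$ in the form displayed in (4.2) with principal square roots and $t_{i}>1$ so that all intermediate values stay real and positive.

```python
# Numerical check of Lemma 4.3 (ours): S^3 o F o S^3 inverts F, i.e.
# F(S^3(F(S^3(t)))) = t.
import numpy as np

def S3(t):
    return tuple((x - 1) / (x + 1) for x in t)

def F(t):
    t1, t2, t3 = t
    num = t1*t2*t3 + 1
    a, b, c = t1 + t2*t3, t2 + t1*t3, t3 + t1*t2
    return (np.sqrt(a*num/(b*c)), np.sqrt(b*num/(a*c)), np.sqrt(c*num/(a*b)))

t = (3.0, 4.0, 5.0)                      # t_i > 1 keeps every branch real
print(np.allclose(F(S3(F(S3(t)))), t))   # True
```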
#### 4.3.2 Towards the tetrahedron equation
Let us consider any graph $G$ with a subgraph $\Gamma_{1}$ which coincides with the leftmost graph in Figure 9. We can transform the graph $G$ into the graph $G^{\prime}$ with the subgraph $\Gamma_{2}$ which coincides with the rightmost graph in Figure 9. We can perform this mutation by two different chains of star-triangle transformations: $F_{356}^{-1}F_{246}F_{145}^{-1}F_{123}$ and $F_{123}^{-1}F_{145}F_{246}^{-1}F_{356}$. Both are depicted in Figure 8. This observation leads us to the following hypothesis
$\displaystyle
F_{356}^{-1}F_{246}F_{145}^{-1}F_{123}=F_{123}^{-1}F_{145}F_{246}^{-1}F_{356}.$
(4.11)
Figure 9: The graphical representation of the left and right parts of (4.12).
This equality is equivalent to the Zamolodchikov equation
$\displaystyle\Phi_{356}\Phi_{246}\Phi_{145}\Phi_{123}=\Phi_{123}\Phi_{145}\Phi_{246}\Phi_{356},$
(4.12)
where $\Phi_{ijk}=S_{i}S_{k}F_{ijk}S_{j}$. Indeed, using Lemma 4.3 and the simple observation that $S_{l}F_{ijk}=F_{ijk}S_{l}$ for $l\notin\\{i,j,k\\}$, we can write down
$\displaystyle
F_{356}^{-1}F_{246}F_{145}^{-1}F_{123}=S_{3}S_{5}S_{6}F_{356}S_{3}S_{5}S_{6}F_{246}S_{1}S_{4}S_{5}F_{145}S_{1}S_{4}S_{5}F_{123}$
$\displaystyle\hphantom{F_{356}^{-1}F_{246}F_{145}^{-1}F_{123}}{}=S_{2}S_{5}(S_{3}S_{6}F_{356}S_{5})(S_{2}S_{6}F_{246}S_{4})(S_{1}S_{5}F_{145}S_{4})(S_{1}S_{3}F_{123}S_{2})S_{2}S_{5},$
and
$\displaystyle
F_{123}^{-1}F_{145}F_{246}^{-1}F_{356}=S_{1}S_{2}S_{3}F_{123}S_{1}S_{2}S_{3}F_{145}S_{2}S_{4}S_{6}F_{246}S_{2}S_{4}S_{6}F_{356}$
$\displaystyle\hphantom{F_{123}^{-1}F_{145}F_{246}^{-1}F_{356}}{}=S_{2}S_{5}(S_{1}S_{3}F_{123}S_{2})(S_{1}S_{5}F_{145}S_{4})(S_{2}S_{6}F_{246}S_{4})(S_{1}S_{3}F_{356}S_{2})S_{2}S_{5}.$
Conjugating both sides of (4.11) by $S_{2}S_{5}$ we obtain the Zamolodchikov
tetrahedron equation.
#### 4.3.3 Solution for the tetrahedron equation
###### Proposition 4.4.
The functions
* •
$\dfrac{\partial\ln(Z(G))}{\partial t_{e}}$, where $e$ is any edge belonging
to $G-\Omega$, and
* •
$\dfrac{Z_{S(\textbf{A})}(G)}{Z(G)}$, where $Z_{S(\textbf{A})}(G)$ is the boundary partition function and $S$ is any vertex subset of $G-\Omega$ or $G-\Omega^{\prime}$ (see the figure illustrating the star-triangle transformation),
are invariant under the star-triangle transformation. Moreover, these
functions do not depend on variables $\beta_{e}$.
###### Remark 4.5.
The function
$\displaystyle\dfrac{Z_{S(\textbf{A})}(G)}{Z(G)}$
can be interpreted as a probability of the fixed values A of spins in $S$,
related to the boundary partition function $Z_{S(\textbf{A})}(G)$.
###### Proof.
The crucial point in the demonstration of the first part of the statement is
the fact that the derivative $\frac{\partial\ln(Z(G))}{\partial t_{e_{i}}}$
does not depend on parameters $\beta$. Indeed, this follows from the explicit
form of the partition function
$\displaystyle Z(G)=\prod_{e\in E}\beta_{e}\sum_{\sigma}\prod_{e\in
E}(1+(t_{e}-1)\delta(\sigma_{e})).$
The proof of the second part of the statement is straightforward. It follows
from the Definition 3.2 and the condition (3.2). ∎
We will prove the Zamolodchikov equation in its equivalent form
$\displaystyle
F_{356}^{-1}F_{246}F_{145}^{-1}F_{123}=F_{123}^{-1}F_{145}F_{246}^{-1}F_{356}.$
(4.13)
Let us notice that, due to the local nature of the star-triangle transformation and the convolution property of the boundary partition function, we are free to choose a suitable graph for proving equation (4.13). So let us take the graph $\Gamma_{1}$ from Figure 10 with the following choice of boundary set $S_{0}$:
$\displaystyle S_{0}:=\\{v_{1},v_{2},v_{3},v_{4}\\}.$
Figure 10: The graphs $\Gamma_{1}$ and $\Gamma_{2}$.
We will prove that the values of the second-type invariant functions, which are preserved by both sides of equation (4.13), allow us to uniquely reconstruct the weights of all edges. To explain this idea in detail, let us consider the left hand side of equation (4.13) and the map $F_{123}$; then for any $\textbf{A}=\\{a_{1},\dots,a_{4}\\}$ the following identity holds
$\displaystyle\frac{Z_{S_{0}(\textbf{A})}(\Gamma_{1})}{Z(\Gamma_{1})}=\frac{Z_{S_{1}(\textbf{A}_{1})}(\Gamma_{1})}{Z(\Gamma_{1})}+\frac{Z_{S_{1}(\textbf{A}_{2})}(\Gamma_{1})}{Z(\Gamma_{1})},$
here $S_{1}=\\{v_{1},v_{2},v_{3},v_{4},v_{5}\\}$ (see Figure 11),
$\textbf{A}_{1}=\\{a_{1},\dots,a_{4},0\\}$,
$\textbf{A}_{2}=\\{a_{1},\dots,a_{4},1\\}$. Proposition 4.4 gives
$\displaystyle\frac{Z_{S_{1}(\textbf{A}_{1})}(\Gamma_{1})}{Z(\Gamma_{1})}=\frac{Z_{S_{1}(\textbf{A}_{1})}(\Gamma^{\prime})}{Z(\Gamma^{\prime})},\qquad\frac{Z_{S_{1}(\textbf{A}_{2})}(\Gamma_{1})}{Z(\Gamma_{1})}=\frac{Z_{S_{1}(\textbf{A}_{2})}(\Gamma^{\prime})}{Z(\Gamma^{\prime})},$
where $\Gamma^{\prime}$ is obtained from $\Gamma_{1}$ by the star-triangle transformation (see Figure 11). Therefore we deduce that
$\displaystyle\frac{Z_{S_{0}(\textbf{A})}(\Gamma_{1})}{Z(\Gamma_{1})}=\frac{Z_{S_{0}(\textbf{A})}(\Gamma^{\prime})}{Z(\Gamma^{\prime})}.$
Repeating these arguments for the remaining maps $F_{ijk}$ from the left hand
side of (4.13) we obtain
$\displaystyle\frac{Z_{S_{0}(\textbf{A})}(\Gamma_{1})}{Z(\Gamma_{1})}=\frac{Z_{S_{0}(\textbf{A})}(\Gamma_{2})}{Z(\Gamma_{2})}.$
(4.14)
In the same fashion, if we consider the right hand side of the equation
(4.13), we similarly obtain that
$\displaystyle\frac{Z_{S_{0}(\textbf{A})}(\Gamma_{1})}{Z(\Gamma_{1})}=\frac{Z_{S_{0}(\textbf{A})}(\Gamma_{2})}{Z(\Gamma_{2})}.$
Hence, in order to prove equation (4.13), it is sufficient to show that the parameters $t_{i}$, $i=1,\ldots,6$, can be reconstructed in a unique way from the values ${Z_{S_{0}(\textbf{A})}(\Gamma_{2})}/{Z(\Gamma_{2})}$ for different values of A.
Figure 11: The left hand side of (4.13).
Figure 12: The graph $\Gamma_{2}$.
We understand the identity (4.14) as a system of $2^{4}$ linear equations with
unknowns $Z_{S_{0}(\textbf{A})}(\Gamma_{2})$ of the following type
$\displaystyle
Z_{S(\textbf{A})}(\Gamma_{2})\colon\quad\frac{Z_{S(\textbf{A})}(\Gamma_{2})}{Z(\Gamma_{2})}=\alpha(\textbf{A}),\qquad\forall\textbf{A}=(a_{1},\ldots,a_{4})$
which is equivalent to
$\displaystyle\sum_{\textbf{A}^{\prime}}Z_{S(\textbf{A}^{\prime})}(\Gamma_{2})=Z_{S(\textbf{A})}(\Gamma_{2})/\alpha(\textbf{A}).$
The rank of the system is equal to 15. Indeed, the rank is $\geq 15$, and it cannot be $16$ because we know that there is a nontrivial solution, coming from the boundary partition functions for the graph $\Gamma_{2}$.
Hence any solution has the form
$\displaystyle
Z_{S_{0}(\textbf{A})}(\Gamma_{2})=C\cdot\alpha_{0}(a_{1},a_{2},a_{3},a_{4}),$
(4.15)
where $C$ is some constant and the $a_{i}$ are the states. Now we will show that the parameters $t_{1},\ldots,t_{6}$ can be reconstructed uniquely from equation (4.15).
Let us introduce some auxiliary variables and rewrite the partition function in the following way: we have 16 states of the boundary vertices $S_{0}=\\{v_{1},v_{2},v_{3},v_{4}\\}$. Each expression $Z_{S_{0}(\textbf{A})}(\Gamma_{2})$ is a sum of four terms corresponding to the states of the internal vertices $v_{5}$ and $v_{6}$. We consider in detail the case $S_{0}=\\{0,0,0,0\\}$. Let us denote the weights of the states of the square $\\{v_{1},v_{6},v_{5},v_{4}\\}$ by $v$, $z$, $y$ and $x$ (Figure 13).
Then, we obtain the following equations
$\displaystyle v=t_{3}t_{5}B,\qquad$ $\displaystyle v_{1}=t_{5}t_{6}B,$
$\displaystyle z=t_{3}t_{4}t_{5}t_{6}B,\qquad$ $\displaystyle
z_{1}=t_{4}t_{5}B,$ $\displaystyle y=t_{6}t_{3}B,\qquad$ $\displaystyle
y_{1}=B,$ $\displaystyle x=t_{3}t_{4}B,\qquad$ $\displaystyle
x_{1}=t_{4}t_{6}B,$ (4.16)
where $B=\beta_{3}\beta_{4}\beta_{5}\beta_{6}$, $B_{1}=\beta_{1}\beta_{2}$, and
$\displaystyle v+t_{1}t_{2}z+t_{2}y+t_{1}x=\frac{C}{B_{1}}\alpha_{0}(0,0,0,0).$
Figure 13: The auxiliary variables.
Similarly we obtain seven more equations (we omit the eighth equation, with $v_{1}=1$, due to the symmetry of the model with respect to the total involution of spins)
$\displaystyle
v_{1}+t_{1}t_{2}z_{1}+t_{2}y_{1}+t_{1}x_{1}=\frac{C}{B_{1}}b_{1}=\frac{C}{B_{1}}\alpha^{\prime}(0,0,0,1),$
$\displaystyle
t_{1}v_{1}+t_{2}z_{1}+y_{1}t_{1}t_{2}+x_{1}=\frac{C}{B_{1}}b_{2}=\frac{C}{B_{1}}\alpha^{\prime}(0,1,0,1),$
$\displaystyle
t_{1}t_{2}v_{1}+z_{1}+t_{1}y_{1}+t_{2}x_{1}=\frac{C}{B_{1}}b_{3}=\frac{C}{B_{1}}\alpha^{\prime}(0,1,1,1),$
$\displaystyle
t_{2}v_{1}+t_{1}z_{1}+y_{1}+t_{1}t_{2}x_{1}=\frac{C}{B_{1}}b_{4}=\frac{C}{B_{1}}\alpha^{\prime}(0,0,1,1),$
$\displaystyle
v+t_{1}t_{2}z+t_{2}y+t_{1}x=\frac{C}{B_{1}}a_{1}=\frac{C}{B_{1}}\alpha^{\prime}(0,0,0,0),$
$\displaystyle
t_{1}v+t_{2}z+yt_{1}t_{2}+x=\frac{C}{B_{1}}a_{2}=\frac{C}{B_{1}}\alpha^{\prime}(0,1,0,0),$
$\displaystyle
t_{1}t_{2}v+z+t_{1}y+t_{2}x=\frac{C}{B_{1}}a_{3}=\frac{C}{B_{1}}\alpha^{\prime}(0,1,1,0),$
$\displaystyle
t_{2}v+t_{1}z+y+t_{1}t_{2}x=\frac{C}{B_{1}}a_{4}=\frac{C}{B_{1}}\alpha^{\prime}(0,0,1,0).$
By a straightforward calculation we obtain
$\displaystyle
y_{1}=\frac{C}{B_{1}}\frac{-t_{2}b_{1}+t_{1}t_{2}b_{2}+b_{4}-t_{1}b_{3}}{-t_{2}^{2}+1+t_{1}^{2}t_{2}^{2}-t_{1}^{2}},\qquad
z_{1}=\frac{C}{B_{1}}\frac{t_{1}t_{2}b_{1}-t_{2}b_{2}+b_{3}-t_{1}b_{4}}{-t_{2}^{2}+1+t_{1}^{2}t_{2}^{2}-t_{1}^{2}},$
$\displaystyle
x_{1}=\frac{C}{B_{1}}\frac{t_{2}t_{1}b_{4}-t_{1}b_{1}-t_{2}b_{3}+b_{2}}{-t_{2}^{2}+1+t_{1}^{2}t_{2}^{2}-t_{1}^{2}},\qquad
v_{1}=\frac{C}{B_{1}}\frac{b_{1}+t_{1}t_{2}b_{3}-t_{2}b_{4}-t_{1}b_{2}}{-t_{2}^{2}+1+t_{1}^{2}t_{2}^{2}-t_{1}^{2}},$
(4.17) $\displaystyle
y=\frac{C}{B_{1}}\frac{-t_{2}a_{1}+t_{1}t_{2}a_{2}+a_{4}-t_{1}a_{3}}{-t_{2}^{2}+1+t_{1}^{2}t_{2}^{2}-t_{1}^{2}},\qquad
z=\frac{C}{B_{1}}\frac{t_{1}t_{2}a_{1}-t_{2}a_{2}+a_{3}-t_{1}a_{4}}{-t_{2}^{2}+1+t_{1}^{2}t_{2}^{2}-t_{1}^{2}},$
$\displaystyle
x=\frac{C}{B_{1}}\frac{t_{2}t_{1}a_{4}-t_{1}a_{1}-t_{2}a_{3}+a_{2}}{-t_{2}^{2}+1+t_{1}^{2}t_{2}^{2}-t_{1}^{2}},\qquad
v=\frac{C}{B_{1}}\frac{a_{1}+t_{1}t_{2}a_{3}-t_{2}a_{4}-t_{1}a_{2}}{-t_{2}^{2}+1+t_{1}^{2}t_{2}^{2}-t_{1}^{2}}.$
Using the auxiliary variables it is easy to see that
$\displaystyle\frac{z_{1}}{x_{1}}=\frac{v}{y}=\frac{-t_{2}a_{4}+t_{1}t_{2}a_{3}-t_{1}a_{2}+a_{1}}{t_{2}t_{1}a_{2}-t_{2}a_{1}-t_{1}a_{3}+a_{4}}=\frac{t_{1}t_{2}b_{1}-t_{2}b_{2}+b_{3}-t_{1}b_{4}}{t_{2}t_{1}b_{4}-t_{1}b_{1}-t_{2}b_{3}+b_{2}}.$
In the same fashion we obtain
$\displaystyle\frac{v_{1}}{x_{1}}=\frac{v}{x}.$
Then we can straightforwardly deduce expressions for the variables $t_{1}$ and
$t_{2}$ (equations (B.1), (B.2) in Appendix B), then obtain expressions for
the auxiliary variables from equations (4.17) and finally obtain variables
$t_{3}$, $t_{4}$, $t_{5}$, $t_{6}$ from the equations (4.16):
$\displaystyle\begin{cases}t_{3}=\sqrt{\dfrac{vyx}{zy^{2}_{1}}},\\\
t_{5}=\dfrac{v}{t_{3}y_{1}},\\\ t_{6}=\dfrac{y}{t_{3}y_{1}},\\\
t_{4}=\dfrac{x}{t_{3}y_{1}}.\end{cases}$
This completes the proof of the Zamolodchikov equation, due to the fact that there is a unique way to choose positive weights for the edges of the model that provide the expected values of the boundary partition functions for the graph $\Gamma_{2}$.
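The last reconstruction step is a simple round trip, which the following sketch (ours) confirms: generating the auxiliary variables from (4.16) and inverting them recovers the original weights.

```python
# Round-trip check (ours) of the reconstruction of t3, t4, t5, t6 from the
# auxiliary variables (4.16).
import math

t3, t4, t5, t6, B = 1.3, 0.7, 2.1, 0.4, 5.0

v, z = t3*t5*B, t3*t4*t5*t6*B           # forward definitions from (4.16)
y, x, y1 = t6*t3*B, t3*t4*B, B

t3r = math.sqrt(v*y*x / (z*y1**2))      # reconstruction formulas above
t4r, t5r, t6r = x/(t3r*y1), v/(t3r*y1), y/(t3r*y1)

assert all(math.isclose(a, b) for a, b in
           [(t3r, t3), (t4r, t4), (t5r, t5), (t6r, t6)])
```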
## 5 Star-triangle transformation, Biggs formula and conclusion
The main results of the paper are functional relations on the space of multivariate Tutte polynomials. This problem is a step in the program of investigation of framed graph structures and the related statistical models. We examined the Biggs formula in detail and applied it to the multivariate case. We also provided a new proof of the theorem of Matiyasevich as a partial case of such a formula. The second principal result is the revelation of the tetrahedral symmetry of the multivariate Tutte polynomial at the point $n=2$. Therefore, we have a connection between the multivariate Tutte polynomial, functions on Lusztig cluster manifolds [4] and their electrical analogues [12, 19]. We would like to interpret this property as the critical point of the model described by the multivariate Tutte polynomial, and the tetrahedral symmetry as an analog of the conformal symmetry of the Ising model at the critical point [8].
Both correspondences are related by the following observation. Let $G$ and
$G^{\prime}$ be two graphs related by the star-triangle transformation. Let us
consider the case when the partition function is invariant with respect to
this transformation ($n=2$ or $n>2$ and the system (3.5), (3.6) holds)
$Z_{n}(G^{\prime})=Z_{n}(G).$
On the other hand the star-triangle transformation provides a groupoid
symmetry on a wide class of objects, in our case on the space of Ising models.
The Biggs formula allows us to extend this action to the points of valency $1$
and $2$, and we obtain the 14-term relation (Theorem 5.1) by comparing the right-hand sides of the Biggs formulas for $G$ and $G^{\prime}$.
Let us explain this idea in detail. Consider two pairs of $n$-Potts models: $M_{1}(G,i^{1}_{e})$ and $M_{1}(G^{\prime},i^{1}_{e})$, $M_{2}(G,i^{2}_{e})$ and $M_{2}(G^{\prime},i^{2}_{e})$ (for simplicity we denote these models $M_{1}(G)$, $M_{2}(G)$, $M_{1}(G^{\prime})$, $M_{2}(G^{\prime})$, respectively). After multiplying both parts of the formula (2.1) for
$M_{1}(G)$ and $M_{2}(G)$ by $n^{v(G)}$ (by $n^{v(G^{\prime})}$ for
$M_{1}(G^{\prime})$ and $M_{2}(G^{\prime})$) we obtain
$\displaystyle Z^{1}_{n}(G)=\prod\limits_{e\in G}q_{e}\sum_{A\subseteq
G}\prod\limits_{e\in
A}\frac{p_{e}}{q_{e}}Z^{2}_{n}(A)=Z^{1}_{n}(G^{\prime})=\prod_{e\in
G^{\prime}}q^{\prime}_{e}\sum_{A^{\prime}\subseteq G^{\prime}}\prod_{e\in
A^{\prime}}\frac{p^{\prime}_{e}}{q^{\prime}_{e}}Z^{2}_{n}(A^{\prime}),$ (5.1)
here in both cases we take the sum over the set of all spanning subgraphs.
Let us rewrite the first part of the formula (5.1) by separating two kinds of
terms
$\displaystyle Z^{1}_{n}(G)=\prod_{e\in G}q_{e}\sum\limits_{A_{1}\subseteq
G}\prod_{e\in A_{1}}\frac{p_{e}}{q_{e}}Z^{2}_{n}(A_{1})+\prod_{e\in
G}q_{e}\sum\limits_{A_{2}\subseteq G}\prod_{e\in
A_{2}}\frac{p_{e}}{q_{e}}Z^{2}_{n}(A_{2}),$ (5.2)
where each subgraph $A_{1}$ contains the full triangle and each $A_{2}$
contains only a part of the triangle.
After the star-triangle transformation of the $M_{1}(G)$ and $M_{2}(G)$ we
obtain the following formula for the models $M_{1}(G^{\prime})$ and
$M_{2}(G^{\prime})$:
$\displaystyle Z^{1}_{n}(G^{\prime})=\prod_{e\in
G^{\prime}}q^{\prime}_{e}\sum\limits_{A_{1}^{\prime}\subseteq
G^{\prime}}\prod_{e\in
A^{\prime}_{1}}\frac{p^{\prime}_{e}}{q^{\prime}_{e}}Z^{2}_{n}(A_{1}^{\prime})+\prod_{e\in
G^{\prime}}q^{\prime}_{e}\sum\limits_{A^{\prime}_{2}\subseteq
G^{\prime}}\prod_{e\in
A^{\prime}_{2}}\frac{p^{\prime}_{e}}{q^{\prime}_{e}}Z^{2}_{n}(A_{2}^{\prime}),$
(5.3)
where each subgraph $A^{\prime}_{1}$ contains the full star and each
$A^{\prime}_{2}$ contains only a part of the star. Then, we compare the terms
of these formulas:
* •
We notice that due to the star-triangle transformation
$Z_{n}^{i}(G)=Z_{n}^{i}(G^{\prime})$ and
$Z_{n}^{i}(A_{1})=Z_{n}^{i}(A_{1}^{\prime})$ (here and below $A_{1}$ is
different from $A_{1}^{\prime}$ only by the star-triangle transformation).
* •
Also it is easy to see that $\prod_{e\in G}q_{e}\frac{1}{\prod_{e\in
A_{1}}q_{e}}=\prod_{e\in G^{\prime}}q^{\prime}_{e}\frac{1}{\prod_{e\in
A_{1}^{\prime}}q^{\prime}_{e}}$.
* •
If the model $M_{1}(G)$ is chosen such that
$p_{1}p_{2}p_{3}=p_{1}^{\prime}p_{2}^{\prime}p_{3}^{\prime}$, we conclude that
$\prod_{e\in A_{1}}p_{e}=\prod_{e\in A^{\prime}_{1}}p^{\prime}_{e}$.
Now, we are ready to formulate the following theorem:
###### Theorem 5.1.
Consider two $n$-Potts models $M_{2}(G)$ and $M_{2}(G^{\prime})$, which are
different from each other by the star-triangle transformation. Then, the
following formula holds
$\displaystyle
q_{1}q_{2}q_{3}\bigg{(}Z^{2}_{n}(G_{0})+\frac{p_{1}}{q_{1}}Z^{2}_{n}(G_{1})+\frac{p_{2}}{q_{2}}Z^{2}_{n}(G_{2})+\frac{p_{3}}{q_{3}}Z^{2}_{n}(G_{3})+\frac{p_{1}p_{2}}{q_{1}q_{2}}Z^{2}_{n}(G_{12})+\frac{p_{1}p_{3}}{q_{1}q_{3}}Z^{2}_{n}(G_{13})$
$\displaystyle\hphantom{q_{1}q_{2}q_{3}\bigg{(}}{}+\frac{p_{2}p_{3}}{q_{2}q_{3}}Z^{2}_{n}(G_{23})\bigg{)}=q_{1}^{\prime}q_{2}^{\prime}q_{3}^{\prime}\bigg{(}Z^{2}_{n}(G^{\prime}_{0})+\frac{p^{\prime}_{1}}{q^{\prime}_{1}}Z^{2}_{n}(G^{\prime}_{1})+\frac{p^{\prime}_{2}}{q^{\prime}_{2}}Z^{2}_{n}(G^{\prime}_{2})+\frac{p^{\prime}_{3}}{q^{\prime}_{3}}Z^{2}_{n}(G^{\prime}_{3})$
$\displaystyle\hphantom{q_{1}q_{2}q_{3}\bigg{(}}+\frac{p^{\prime}_{1}p^{\prime}_{2}}{q^{\prime}_{1}q^{\prime}_{2}}Z^{2}_{n}(G^{\prime}_{12})+\frac{p^{\prime}_{1}p^{\prime}_{3}}{q^{\prime}_{1}q^{\prime}_{3}}Z^{2}_{n}(G^{\prime}_{13})+\frac{p^{\prime}_{2}p^{\prime}_{3}}{q^{\prime}_{2}q^{\prime}_{3}}Z^{2}_{n}(G^{\prime}_{23})\bigg{)},$
(5.4)
where
$\displaystyle
p_{i}=\frac{\alpha^{1}_{i}-\beta^{1}_{i}}{\alpha^{2}_{i}-\beta^{2}_{i}},\qquad
q_{i}=\frac{\alpha^{2}_{i}\beta^{1}_{i}-\alpha^{1}_{i}\beta^{2}_{i}}{\alpha^{2}_{i}-\beta^{2}_{i}},\qquad
p^{\prime}_{i}=\frac{\alpha^{\prime 1}_{i}-\beta^{\prime
1}_{i}}{\alpha^{\prime 2}_{i}-\beta^{\prime 2}_{i}},\qquad
q_{i}^{\prime}=\frac{\alpha^{\prime 2}_{i}\beta^{\prime 1}_{i}-\alpha^{\prime
1}_{i}\beta^{\prime 2}_{i}}{\alpha^{\prime 2}_{i}-\beta^{\prime 2}_{i}},$
variables $\alpha^{k}_{i}$, $\beta^{k}_{i}$ and $\alpha^{\prime k}_{i}$,
$\beta^{\prime k}_{i}$ are related by the star-triangle transformation with
the condition $p_{1}p_{2}p_{3}=p_{1}^{\prime}p_{2}^{\prime}p_{3}^{\prime}$ and
graphs $G_{i}$ and $G_{ij}$, $i,j=0,1,2,3$, are depicted in Figure 14.
Figure 14: The 14-term relation.
###### Proof.
We will prove this theorem using induction on $ex(G):=e(G)-3$.
The base case $ex(G)=0$ is straightforward. Indeed, let us consider the $n$-Potts models $M_{2}(G)$ and $M_{2}(G^{\prime})$ and the special models $M_{1}(G)$ and $M_{1}(G^{\prime})$ such that $p_{1}p_{2}p_{3}=p_{1}^{\prime}p_{2}^{\prime}p_{3}^{\prime}$. Writing out the formulas (5.2) and (5.3) and comparing the terms using the reasoning above, we immediately obtain the result in the case $ex(G)=0$.
Next, we make the induction step. Again, let us write down the formulas (5.2) and (5.3):
$\displaystyle Z^{1}_{n}(G)=\prod_{e\in G}q_{e}\sum\limits_{A_{1}\subseteq
G}\prod_{e\in A_{1}}\frac{p_{e}}{q_{e}}Z^{2}_{n}(A_{1})+\prod_{e\in
G}q_{e}\sum\limits_{A_{2}\subseteq G}\prod_{e\in
A_{2}}\frac{p_{e}}{q_{e}}Z^{2}_{n}(A_{2})+\frac{\prod_{e\in
G}q_{e}}{q_{1}q_{2}q_{3}}S_{1},$
where each $A_{1}$ contains the full triangle, each $A_{2}$ contains only a part of the triangle and satisfies $ex(A_{2})\neq ex(G)$, and by $S_{1}$ we denote the left-hand side of (5.4),
$\displaystyle Z^{1}_{n}(G^{\prime})=\prod_{e\in
G^{\prime}}q_{e}^{\prime}\sum\limits_{A_{1}^{\prime}\subseteq
G^{\prime}}\prod_{e\in
A^{\prime}_{1}}\frac{p^{\prime}_{e}}{q^{\prime}_{e}}Z^{2}_{n}(A_{1}^{\prime})+\prod_{e\in
G^{\prime}}q_{e}^{\prime}\sum\limits_{A^{\prime}_{2}\subseteq
G^{\prime}}\prod_{e\in
A^{\prime}_{2}}\frac{p^{\prime}_{e}}{q^{\prime}_{e}}Z^{2}_{n}(A_{2}^{\prime})+\frac{\prod_{e\in
G^{\prime}}q^{\prime}_{e}}{q^{\prime}_{1}q^{\prime}_{2}q^{\prime}_{3}}S_{2},$
where each $A^{\prime}_{1}$ contains the full star, each $A^{\prime}_{2}$ contains only a part of the star and satisfies $ex(A^{\prime}_{2})\neq ex(G^{\prime})$, and by $S_{2}$ we denote the right-hand side of (5.4). The induction hypothesis then completes the proof. ∎
We consider these results in the context of numerous generalizations, both for other models of statistical physics and in a purely mathematical direction. In particular, we are interested in applying this technique to the Potts model in the presence of an external magnetic field [10], including an inhomogeneous one. In addition, we are going to develop these methods in a more general algebraic setting, in particular in a non-commutative situation. Partial results of this activity have already been obtained in [6].
## Appendix A The proof of Lemma 4.3
###### Proof.
We start by reformulating this statement in terms of equivalent rational
identities. Let us introduce the $x$-variables by the following formula
$\displaystyle F\circ S^{3}(t_{1},t_{2},t_{3})=(x_{1},x_{2},x_{3}),$
$\displaystyle
S^{3}(x_{1},x_{2},x_{3})=(t_{1}^{\prime},t_{2}^{\prime},t_{3}^{\prime}).$
(A.1)
Here $S^{3}(t_{1},t_{2},t_{3})=(S\times S\times
S)(t_{1},t_{2},t_{3})=(S(t_{1}),S(t_{2}),S(t_{3}))$. Then the statement of the
lemma is equivalent to
$\displaystyle(t_{1}^{\prime},t_{2}^{\prime},t_{3}^{\prime})=S^{3}(x_{1},x_{2},x_{3})=F^{-1}(t_{1},t_{2},t_{3}).$
This identity is equivalent to three algebraic relations (we will write down
only one of them, because the others differ just by replacing the indices)
$\displaystyle
t_{1}t_{2}=\frac{t_{1}^{\prime}t_{2}^{\prime}t_{3}^{\prime}+1}{t_{3}^{\prime}+t_{1}^{\prime}t_{2}^{\prime}}=\bigg{(}\frac{(x_{1}-1)(x_{2}-1)(x_{3}-1)}{(x_{1}+1)(x_{2}+1)(x_{3}+1)}+1\bigg{)}\bigg{/}\bigg{(}\frac{x_{3}-1}{x_{3}+1}+\frac{(x_{1}-1)(x_{2}-1)}{(x_{1}+1)(x_{2}+1)}\bigg{)}$
$\displaystyle\hphantom{t_{1}t_{2}}{}=\frac{x_{1}+x_{2}+x_{3}+x_{1}x_{2}x_{3}}{x_{3}-x_{2}-x_{1}+x_{1}x_{2}x_{3}}=\frac{(x_{1}+x_{2}+x_{3}+x_{1}x_{2}x_{3})x_{1}x_{2}x_{3}}{(x_{3}-x_{2}-x_{1}+x_{1}x_{2}x_{3})x_{1}x_{2}x_{3}}.$
(A.2)
Now let us introduce some additional variables
$\displaystyle t_{12}=x_{1}x_{2},\qquad t_{23}=x_{2}x_{3},\qquad
t_{13}=x_{1}x_{3},\qquad a_{1}=x_{1}^{2},\qquad a_{2}=x_{2}^{2},\qquad
a_{3}=x_{3}^{2}.$
We could rewrite (A.2) in the following way
$\displaystyle
t_{1}t_{2}=\frac{a_{1}t_{23}+a_{2}t_{13}+a_{3}t_{12}+t_{12}t_{23}t_{13}}{-a_{2}t_{13}-a_{1}t_{23}+a_{3}t_{12}+t_{12}t_{23}t_{13}}.$
The equations (4.2) and (A.1) with the identification $y_{i}:=S(t_{i})$
provide the following system
$\displaystyle
t_{12}=\frac{y_{1}y_{2}y_{3}+1}{y_{3}+y_{1}y_{2}}=\frac{t_{3}+t_{2}+t_{1}+t_{1}t_{2}t_{3}}{t_{3}-t_{2}-t_{1}+t_{1}t_{2}t_{3}},$
$\displaystyle
t_{13}=\frac{y_{1}y_{2}y_{3}+1}{y_{2}+y_{3}y_{1}}=\frac{t_{3}+t_{2}+t_{1}+t_{1}t_{2}t_{3}}{t_{2}-t_{1}-t_{3}+t_{1}t_{2}t_{3}},$
$\displaystyle
t_{23}=\frac{y_{1}y_{2}y_{3}+1}{y_{1}+y_{3}y_{2}}=\frac{t_{3}+t_{2}+t_{1}+t_{1}t_{2}t_{3}}{t_{1}-t_{2}-t_{3}+t_{1}t_{2}t_{3}},$
$\displaystyle
a_{1}=\frac{(y_{1}y_{2}y_{3}+1)(y_{1}+y_{2}y_{3})}{(y_{2}+y_{1}y_{3})(y_{3}+y_{1}y_{2})}=\frac{(t_{1}-t_{2}-t_{3}+t_{1}t_{2}t_{3})(t_{3}+t_{2}+t_{1}+t_{1}t_{2}t_{3})}{(t_{3}-t_{2}-t_{1}+t_{1}t_{2}t_{3})(t_{2}-t_{1}-t_{3}+t_{1}t_{2}t_{3})},$
$\displaystyle
a_{2}=\frac{(y_{1}y_{2}y_{3}+1)(y_{2}+y_{1}y_{3})}{(y_{1}+y_{2}y_{3})(y_{3}+y_{1}y_{2})}=\frac{(t_{2}-t_{1}-t_{3}+t_{1}t_{2}t_{3})(t_{3}+t_{2}+t_{1}+t_{1}t_{2}t_{3})}{(t_{3}-t_{2}-t_{1}+t_{1}t_{2}t_{3})(t_{1}-t_{2}-t_{3}+t_{1}t_{2}t_{3})},$
$\displaystyle
a_{3}=\frac{(y_{1}y_{2}y_{3}+1)(y_{3}+y_{2}y_{1})}{(y_{2}+y_{1}y_{3})(y_{1}+y_{3}y_{2})}=\frac{(t_{3}-t_{2}-t_{1}+t_{1}t_{2}t_{3})(t_{3}+t_{2}+t_{1}+t_{1}t_{2}t_{3})}{(t_{1}-t_{2}-t_{3}+t_{1}t_{2}t_{3})(t_{2}-t_{1}-t_{3}+t_{1}t_{2}t_{3})}.$
Using these expressions we can compute
$\displaystyle t_{12}a_{3}+t_{13}a_{2}+a_{1}t_{23}+t_{12}t_{23}t_{13}$
$\displaystyle\qquad{}=\frac{4(t_{3}+t_{2}+t_{1}+t_{1}t_{2}t_{3})^{2}t_{1}t_{2}t_{3}}{(t_{3}-t_{2}-t_{1}+t_{1}t_{2}t_{3})(t_{2}-t_{1}-t_{3}+t_{1}t_{2}t_{3})(t_{1}-t_{2}-t_{3}+t_{1}t_{2}t_{3})},$
$\displaystyle
t_{1}t_{2}(-a_{2}t_{13}-a_{1}t_{23}+a_{3}t_{12}+t_{12}t_{23}t_{13})$
$\displaystyle\qquad{}=\frac{4(t_{3}+t_{2}+t_{1}+t_{1}t_{2}t_{3})^{2}t_{1}t_{2}t_{3}}{(t_{3}-t_{2}-t_{1}+t_{1}t_{2}t_{3})(t_{2}-t_{1}-t_{3}+t_{1}t_{2}t_{3})(t_{1}-t_{2}-t_{3}+t_{1}t_{2}t_{3})}.$
In this way we observe that
$\displaystyle
t_{1}t_{2}(-a_{2}t_{13}-a_{1}t_{23}+a_{3}t_{12}+t_{12}t_{23}t_{13})=a_{3}t_{12}+a_{2}t_{13}+a_{1}t_{23}+t_{12}t_{23}t_{13}.$
This completes the proof. ∎
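The rational identity at the heart of this proof can also be delegated to a computer algebra system; the following sympy sketch (ours) substitutes the expressions above for $t_{12}$, $t_{13}$, $t_{23}$, $a_{1}$, $a_{2}$, $a_{3}$ and verifies the final equality.

```python
# Symbolic verification (ours) of the identity concluding the proof of
# Lemma 4.3.
import sympy as sp

t1, t2, t3 = sp.symbols('t1 t2 t3')
N  = t1 + t2 + t3 + t1*t2*t3
D1 = t1 - t2 - t3 + t1*t2*t3
D2 = t2 - t1 - t3 + t1*t2*t3
D3 = t3 - t2 - t1 + t1*t2*t3

t12, t13, t23 = N/D3, N/D2, N/D1
a1, a2, a3 = D1*N/(D3*D2), D2*N/(D3*D1), D3*N/(D1*D2)

lhs = t1*t2*(-a2*t13 - a1*t23 + a3*t12 + t12*t23*t13)
rhs = a3*t12 + a2*t13 + a1*t23 + t12*t23*t13
assert sp.simplify(lhs - rhs) == 0
```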
## Appendix B For the second proof of the tetrahedron equation
In this Appendix we present some technical part of the proof from Section 4.3.
We present closed formulas for $t_{1}$ and $t_{2}$ variables (see the
discussion after (4.17))
$\displaystyle
t_{1}=\big{(}{-}b_{3}a_{3}+a_{2}b_{2}+a_{1}b_{1}-b_{4}a_{4}+(b_{3}^{2}a_{3}^{2}-2b_{3}a_{3}a_{2}b_{2}-2b_{3}a_{3}a_{1}b_{1}-2b_{3}a_{3}b_{4}a_{4}$
$\displaystyle\hphantom{t_{1}=(}{}+a_{2}^{2}b_{2}^{2}-2a_{2}b_{2}a_{1}b_{1}-2a_{2}b_{2}b_{4}a_{4}+a_{1}^{2}b_{1}^{2}-2a_{1}b_{1}b_{4}a_{4}+b_{4}^{2}a_{4}^{2}+4b_{4}a_{3}a_{1}b_{2}$
$\displaystyle\hphantom{t_{1}=(}{}+4a_{2}b_{1}b_{3}a_{4})^{1/2}\big{)}/(2(-b_{4}a_{3}+a_{2}b_{1})),$
(B.1) $\displaystyle
t_{2}=(a_{2}b_{4}-b_{2}a_{4}-b_{3}a_{1}+a_{3}b_{1}+(a_{2}^{2}b_{4}^{2}-2a_{4}b_{4}a_{2}b_{2}-2a_{2}b_{4}b_{3}a_{1}-2a_{2}b_{1}a_{3}b_{4}$
$\displaystyle\hphantom{t_{2}=(}{}+b_{2}^{2}a_{4}^{2}-2b_{3}a_{4}a_{1}b_{2}-2b_{2}a_{4}a_{3}b_{1}+b_{3}^{2}a_{1}^{2}-2a_{3}b_{3}a_{1}b_{1}+a_{3}^{2}b_{1}^{2}+4a_{3}b_{4}a_{1}b_{2}$
$\displaystyle\hphantom{t_{2}=(}{}+4a_{2}b_{1}b_{3}a_{4})^{1/2})/(2(a_{3}b_{4}-b_{3}a_{4})).$
(B.2)
### Acknowledgements
We are grateful to V. Gorbounov for suggesting to us the strategy of the first proof of the tetrahedron equation in the trigonometric case in Section 4.2. The research was supported by the Russian Science Foundation (project 20-61-46005). The authors thank the anonymous referees for very useful comments which improved the paper a lot.
## References
* [1] Atiyah M., Topological quantum field theories, Inst. Hautes Études Sci. Publ. Math. 68 (1988), 175–186.
* [2] Baxter R.J., Exactly solved models in statistical mechanics, Academic Press, Inc., London, 1982.
* [3] Beaudin L., Ellis-Monaghan J., Pangborn G., Shrock R., A little statistical mechanics for the graph theorist, Discrete Math. 310 (2010), 2037–2053, arXiv:0804.2468.
* [4] Berenstein A., Fomin S., Zelevinsky A., Parametrizations of canonical bases and totally positive matrices, Adv. Math. 122 (1996), 49–149.
* [5] Biggs N., Interaction models, London Mathematical Society Lecture Note Series, Vol. 30, Cambridge University Press, Cambridge – New York – Melbourne, 1977.
* [6] Bychkov B., Kazakov A., Talalaev D., Tutte polynomials of vertex-weighted graphs and cohomology of groups, Theoret. and Math. Phys., to appear.
* [7] Cardy J.L., Critical percolation in finite geometries, J. Phys. A: Math. Gen. 25 (1992), L201–L206, arXiv:hep-th/9111026.
* [8] El-Showk S., Paulos M.F., Poland D., Rychkov S., Simmons-Duffin D., Vichi A., Solving the 3d Ising model with the conformal bootstrap II. $c$-minimization and precise critical exponents, J. Stat. Phys. 157 (2014), 869–914, arXiv:1403.4545.
* [9] Ellis-Monaghan J.A., Merino C., Graph polynomials and their applications I: The Tutte polynomial, in Structural Analysis of Complex Networks, Birkhäuser/Springer, New York, 2011, 219–255, arXiv:0803.3079.
* [10] Ellis-Monaghan J.A., Moffatt I., The Tutte–Potts connection in the presence of an external magnetic field, Adv. in Appl. Math. 47 (2011), 772–782, arXiv:1005.5470.
* [11] Galashin P., Pylyavskyy P., Ising model and the positive orthogonal Grassmannian, Duke Math. J. 169 (2020), 1877–1942, arXiv:1807.03282.
* [12] Gorbounov V., Talalaev D., Electrical varieties as vertex integrable statistical models, J. Phys. A: Math. Theor. 53 (2020), 454001, 28 pages, arXiv:1905.03522.
* [13] Grimmett G., Three theorems in discrete random geometry, Probab. Surv. 8 (2011), 403–441, arXiv:1110.2395.
* [14] Huang Y.-T., Wen C., Xie D., The positive orthogonal Grassmannian and loop amplitudes of ABJM, J. Phys. A: Math. Theor. 47 (2014), 474008, 48 pages, arXiv:1402.1479.
* [15] Kashaev R.M., On discrete three-dimensional equations associated with the local Yang–Baxter relation, Lett. Math. Phys. 38 (1996), 389–397, arXiv:solv-int/9512005.
* [16] Kook W., Reiner V., Stanton D., A convolution formula for the Tutte polynomial, J. Combin. Theory Ser. B 76 (1999), 297–300, arXiv:math.CO/9712232.
* [17] Korepanov I.G., Algebraic integrable dynamical systems, 2+1-dimensional models in wholly discrete space-time, and inhomogeneous models in 2-dimensional statistical physics, arXiv:solv-int/9506003.
* [18] Korepanov I.G., Sharygin G.I., Talalaev D.V., Cohomologies of $n$-simplex relations, Math. Proc. Cambridge Philos. Soc. 161 (2016), 203–222, arXiv:1409.3127.
* [19] Lam T., Pylyavskyy P., Inverse problem in cylindrical electrical networks, SIAM J. Appl. Math. 72 (2012), 767–788, arXiv:1104.4998.
* [20] Matiyasevich Yu.V., On a certain representation of the chromatic polynomial, arXiv:0903.1213.
* [21] Sergeev S.M., Solutions of the functional tetrahedron equation connected with the local Yang–Baxter equation for the ferro-electric condition, Lett. Math. Phys. 45 (1998), 113–119, arXiv:solv-int/9709006.
* [22] Sokal A.D., The multivariate Tutte polynomial (alias Potts model) for graphs and matroids, in Surveys in Combinatorics 2005, London Math. Soc. Lecture Note Ser., Vol. 327, Cambridge University Press, Cambridge, 2005, 173–226, arXiv:math.CO/0503607.
* [23] Tutte W.T., A ring in graph theory, in Classic Papers in Combinatorics, Modern Birkhäuser Classics, Birkhäuser, Boston, 2009, 124–138.
* [24] Wu F.Y., Knot theory and statistical mechanics, Rev. Modern Phys. 64 (1992), 1099–1131.
* [25] Zamolodchikov A.B., Tetrahedra equations and integrable systems in three-dimensional space, Soviet Phys. JETP 52 (1980), 325–336.
# Carrot and Stick: Inducing Self-Motivation with Positive & Negative Feedback
Jimin Sohn1, Jeihee Cho2, Junyong Lee2, Songmu Heo3, Ji-Eun Han4, David R.
Mortensen5,
1 GIST, South Korea 2 Yonsei University, South Korea
3 Korea University, South Korea, 4 KT, South Korea, 5 Carnegie Mellon
University, USA
<EMAIL_ADDRESS>
###### Abstract
Positive thinking is thought to be an important component of self-motivation
in various practical fields such as education and the workplace. Previous
work, including sentiment transfer and positive reframing, has focused on the
positive side of language. However, self-motivation that drives people to
reach their goals has not yet been studied from a computational perspective.
Moreover, negative feedback has not yet been explored, even though positive
and negative feedback are both necessary to grow self-motivation. To
facilitate self-motivation, we propose the CArrot and STICk (CASTIC) dataset, consisting of $12,590$ sentences with 5 different strategies for enhancing self-motivation. Our data and code are publicly available here.
## 1 Introduction
Interest in the positive psychological aspects of language has been growing in the field of NLP. Ziems et al. (2022), Sharma et al. (2023), and Maddela et al.
(2023) introduce the task of Positive Reframing, aiming to shift the negative
perspective of a statement into a positive one without altering the original
content. Njoo et al. (2023) proposed a new benchmark analyzing how empowerment
is conveyed in language.
Previous research has only focused on reframing negative thoughts into
positive ones, ignoring the value of non-positive language. In this work, we
propose a new approach to appropriately utilize negative (and positive)
language as feedback, so as to induce self-motivation via stimulation.
Figure 1: An example of positive reframing Ziems et al. (2022) and feedback
generated using our CASTIC framework.
Self-motivation is an internal drive that leads a person to take action
towards a goal, which is significant in various real-world domains such as
education and business. One popular theoretical approach to motivation is
Maslow (1958), proposing that motivation is derived from five basic needs:
physiological, safety, belongingness & love, esteem, and self-actualization,
which are hierarchically organized.
Researchers have attempted to enhance the motivation of people by giving
feedback relevant to their situations. In Kim and Lee (2019), it was found
that when students received negative feedback, they achieved more accurate
self-assessment of skills compared to positive feedback, while positive
feedback enhanced students’ self-efficacy and boosted confidence in their
ability to achieve goals. Hence, the findings from Kim and Lee (2019) suggest
that a balanced use of positive and negative feedback is necessary to optimize
self-motivation. Wisniewski et al. (2020) also concluded that positive
feedback was effective in enhancing confidence and motivation while negative
feedback helped to clearly identify areas of deficiency and motivate
improvement. Although negative feedback might seem demotivating, it helped in
identifying areas that need improvement, guiding future efforts, and avoiding
past mistakes. The analysis of the results for each type of feedback in
Wisniewski et al. (2020) also aligns with Kim and Lee (2019), indicating that
for optimal self-motivation, both positive and negative feedback should be
used in a balanced manner.
Figure 2: Overall procedure of generating CASTIC dataset
In this work, we introduce a new benchmark named CArrot and STICk (CASTIC), meant to measure the induction of self-motivation. We address the task by providing both positive and negative feedback. As far as we can determine, dealing with negative aspects of language in the context of motivation is methodologically novel within NLP. The proposed dataset is generated with a three-step procedure and evaluated with both quantitative and qualitative approaches.
## 2 CASTIC Dataset
Obstacle Types | Definition & Example | Severity score
---|---|---
Relationships | Conflict situations with nearby people | not serious
_fight with friends, mother’s nagging_
Health | Physical or mental illness | serious
_migraine, stomachache, burn out_
Fear, overwhelmed | Anxiety about what will happen in the future or previous failings | serious
_Anxiety about past failures_
Lack of resource | A lack of supplies needed for work | serious
_lack of internet connection, lost laptop_
Annoyance | Irritation by trivial, annoying situations | not serious
_noisy circumstance_
Rest, Entertainment | Lack of motivation due to desire for simple entertainment | not serious
_game, movie, dating_
etc | Any situation other than the above | serious
_Internet/banking system error, bad weather_
Table 1: The seven types of obstacles blocking someone from reaching their
goal and the corresponding severity score. The definition is indicated at the
top and a corresponding example is given in italics.
Large Language Models tend to generate sentences with a positive bias Chen et al. (2023). However, from the perspective of motivation, unconditional positive support is not always what is needed: stimulating feedback relevant to the person’s circumstances is more effective for achieving goals. Therefore, instead of always giving positive feedback, the model should provide either negative or positive feedback depending on how seriously the obstacle interferes with the task to be done. To give language models this ability, we propose the CASTIC dataset, which provides appropriate feedback for given sentences. In this section, we present our overall procedure for data generation and provide a taxonomy of the types of obstacles and strategies for giving feedback.
### 2.1 Data Collection
The overall procedure for generating CASTIC is provided in Fig. 2. The prompts for extracting the TODO and Obstacle and for generating the final feedback sentence are provided in Appendix A.3.
##### Input sentence
We use input sentences from Positive Psychology Frames (POSREF; Ziems et al. (2022)), collected from Twitter with the keyword #stressed. We use only the original text column from the dataset.
##### Obstacle and TODO Extraction Module
We use Orion-14B-Chat Chen et al. (2024), released under an Apache-2.0 License, to generate the dataset, as it is an open-source large language model (LLM) with outstanding performance in comprehensive evaluations and support for extremely long texts Chen et al. (2024). The model first extracts the TODO and the Obstacle from the input sentence. The TODO is the goal that the person aims to achieve, and the Obstacle is the challenge that hinders the person from achieving the
TODO. The model responds "None" when there is no specific TODO in a given sentence; in that case, the feedback is generated considering only the Obstacle.
##### Obstacle Type and Severity Score
We annotate which of the seven categories the extracted Obstacle belongs to. It is worth noting that a sentence can have multiple obstacles and can therefore have multiple types. The Severity Score is assigned according to the obstacle type, as in Table 1; it indicates how seriously the Obstacle blocks the person from the TODO. If a sentence has multiple obstacles, it is considered “not serious” only when all of its obstacle types are “not serious”. The categories and the corresponding severity scores were determined according to criteria by which we manually checked and classified all input data.
##### Feedback Generation
To generate feedback inducing self-motivation, we use five motivation strategies based on the “five needs” of the widely known motivation theory of Maslow (1958). A detailed explanation of each need is provided in Appendix A.2. Feedback is created with the LLM (Orion-14B-Chat) using the TODO, Obstacle, and Severity Score from the previous step together with each of the five needs. The severity score determines whether the feedback is positive or negative, and each of the five needs determines which aspects to emphasize in order to motivate the person. We reviewed each piece of feedback generated by the model.
### 2.2 Data Distribution
We report the statistics of the seven obstacle types in our CASTIC dataset in Table 2. As one sample can have multiple obstacle types, the total number does not indicate the number of distinct samples. Statistics of frequently appearing words in the dataset are provided in Appendix A.6.
Obstacle Types | Train # | Validation #
---|---|---
Relationships | 110 | 80
Health | 1,000 | 305
Fear | 5,165 | 1,205
Lack of resource | 235 | 80
Annoyance | 2,015 | 785
Rest | 20 | 50
etc | 1,455 | 135
serious | 7,855 | 1,725
not serious | 2,145 | 915
Total | 10,000 | 2,640
Table 2: Summary statistics of each obstacle type in the CASTIC dataset.
## 3 Self-Motivation Framework
Fine-tune | Model | Param. | R-1 | R-2 | R-L | BLEU | BScore | Sim | PPL
---|---|---|---|---|---|---|---|---|---
w/o Fine-tune | GPT | 116M | 11.79 | 0.47 | 8.35 | 0.08 | 82.20 | 0.121 | -
M2M-100 | 483M | 2.10 | 0.16 | 1.89 | 3.20 | 75.66 | 0.088 | 176.36
T5 | 60M | 0.49 | 0.00 | 0.49 | 1.66 | 84.52 | 0.452 | 295.87
Falcon | 7B | 10.59 | 0.76 | 7.49 | 0.19 | 82.07 | 0.19 | 106.02
Mistral | 7B | 12.47 | 1.04 | 8.76 | 0.29 | 82.77 | 0.16 | 202.30
BART-L | 406M | 18.10 | 3.76 | 13.00 | 1.81 | 84.94 | 0.458 | 188.65
w/ Fine-tune | GPT | 116M | 27.68 | 6.62 | 20.11 | 3.77 | 88.11 | 0.429 | 69.52
M2M-100 | 483M | 30.12 | 8.76 | 22.44 | 5.99 | 88.74 | 0.437 | 32.84
T5 | 60M | 30.28 | 10.04 | 23.74 | 5.5 | 88.76 | 0.480 | 25.40
Falcon | 7B | 29.63 | 8.59 | 20.84 | 4.59 | 88.30 | 0.522 | 33.49
Mistral | 7B | 27.96 | 13.19 | 23.66 | 2.64 | 88.63 | 0.496 | 27.61
BART-L | 406M | 33.93 | 10.04 | 23.59 | 7.00 | 88.98 | 0.498 | 24.11
Table 3: Self-Motivation results. Performance of models with and without fine-tuning on the CASTIC dataset, measured by ROUGE-1 (R-1), ROUGE-2 (R-2), ROUGE-L (R-L), BLEU, and BERTScore (BScore). Param., Sim, and PPL indicate the number of parameters of each model, cosine similarity, and perplexity, respectively.
### 3.1 Task Formulation
To generate feedback, we extract the Obstacle $O_{i}$ and TODO $T_{i}$ from
the input sentence. Then, the Severity Score $SS_{i}$ is assigned based on the
Obstacle Type $OT_{i}$ of the $O_{i}$. Then, Feedback Type $FT_{i}$ is labeled
as either positive or negative according to the severity score.
$FT_{i}=\begin{cases}\text{Positive if}\;SS_{i}=\text{serious},\\\
\text{Negative if}\;SS_{i}=\text{not serious}\end{cases}$
Based on Motivation Strategy $M_{i}$, the final Feedback $F_{i}$ to induce
self-motivation is generated.
$F_{i}=\\{FT_{i},M_{i},O_{i},T_{i}\\}$
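A minimal sketch of this labeling rule (ours, not the authors' released code), with the severity mapping taken from Table 1, reads as follows:

```python
# Severity-based feedback-type rule from Section 3.1 (a sketch; the type
# names follow Table 1).
SEVERITY = {
    "Relationships": "not serious", "Health": "serious",
    "Fear, overwhelmed": "serious", "Lack of resource": "serious",
    "Annoyance": "not serious", "Rest, Entertainment": "not serious",
    "etc": "serious",
}

def feedback_type(obstacle_types):
    """A sentence is 'not serious' only if all its obstacle types are;
    serious obstacles get positive (supportive) feedback, trivial ones
    get negative (stern) feedback."""
    not_serious = all(SEVERITY[t] == "not serious" for t in obstacle_types)
    return "Negative" if not_serious else "Positive"

print(feedback_type(["Health"]))                            # Positive
print(feedback_type(["Annoyance", "Rest, Entertainment"]))  # Negative
```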
### 3.2 Evaluation
#### 3.2.1 Experimental Setup
Dataset The CASTIC dataset contains 9,990 samples in the train split and 2,600
samples in the validation split. All the data are in English.
Model We use the BART-L Lewis et al. (2019), GPT-2 Radford et al. (2019), M2M-100 Fan et al. (2021), and T5 Raffel et al. (2020) models, as well as Falcon and Mistral, to test the dataset. The number of parameters of each model is given in Table 3.
#### 3.2.2 Evaluation Metric
Quantitative metric In various studies, the BLEU Papineni et al. (2002) and ROUGE Lin (2004) metrics are utilized to evaluate semantic similarity with the ground truth. In this paper, we report BLEU, ROUGE-1, ROUGE-2, ROUGE-L, and BERTScore Zhang et al. (2019) as quantitative results, following previous work Ziems et al. (2022). Cosine similarity between the generated output and the ground truth is measured using the sentence transformer all-MiniLM-L6-v2 Reimers and Gurevych (2019). Perplexity Bengio et al. (2000) is measured using GPT-2 Radford et al. (2019) from Hugging Face. It is worth noting that we discard empty generation samples when measuring the scores.
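For reference, the following sketch (ours, not the authors' evaluation code) shows how these metrics can be computed with the Hugging Face evaluate and sentence-transformers libraries; the example strings are illustrative only.

```python
# Sketch of the quantitative metrics (ours); preds/refs are toy examples.
import evaluate
from sentence_transformers import SentenceTransformer, util

preds = ["You can still finish the report tonight."]
refs  = ["Take a short break, then finish the report tonight."]

rouge  = evaluate.load("rouge").compute(predictions=preds, references=refs)
bleu   = evaluate.load("bleu").compute(predictions=preds,
                                       references=[[r] for r in refs])
bscore = evaluate.load("bertscore").compute(predictions=preds,
                                            references=refs, lang="en")
ppl    = evaluate.load("perplexity", module_type="metric").compute(
             predictions=preds, model_id="gpt2")

encoder = SentenceTransformer("all-MiniLM-L6-v2")
sim = util.cos_sim(encoder.encode(preds), encoder.encode(refs))

print(rouge["rouge1"], rouge["rouge2"], rouge["rougeL"], bleu["bleu"],
      bscore["f1"][0], float(sim[0, 0]), ppl["mean_perplexity"])
```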
Qualitative metric Following Chiang and Lee (2023), we use GPT-3.5 Brown et
al. (2020) to evaluate the effect of our dataset on inducing self-motivation.
The prompt is illustrated in Appendix A.4. From the model’s generated
feedback, we randomly sample 100 sentences and ask GPT-3.5 how motivating
(Motivation) and how fluent (Fluency) the feedback is. The rating scale is from 1 to 5, with 1 being the lowest.
### 3.3 Experimental Result
##### Overall Result
In Table 3, we present the results of the experiment. The models can learn each motivation strategy and generate feedback well. We illustrate examples of generated feedback in Table 7 of Appendix A.7. Overall, BART-L shows the best performance in both the zero-shot and fine-tuning experiments.
Strategy | GPT | GPT-2 | M2M-100 | T5 | BART-L
---|---|---|---|---|---
Physiological Needs | 62.49 | 76.17 | 74.9 | 77.51 | 74.45
Safety Needs | 72.23 | 75.26 | 73.31 | 74.64 | 71.94
Love and Belonging | 72.83 | 76.97 | 76.2 | 76.67 | 65.95
Self-actualization | 67 | 75.32 | 75.87 | 74.95 | 72.32
Esteem Needs | 78.21 | 80.41 | 79.01 | 80.8 | 68.88
AVG. | 70.55 | 76.83 | 75.86 | 76.91 | 70.71
STD. | 6.01 | 2.12 | 2.09 | 2.48 | 3.32
Table 4: F1 score (%) of motivation strategy classification.
##### LLM Evaluation Result
We compare the scores of models fine-tuned on the CASTIC dataset with those of pre-trained models without fine-tuning. In Table 5, we report the GPT-3.5 ratings of the self-motivation feedback. The models fine-tuned on our dataset show better performance than the others. Specifically, the average motivation score for the fine-tuned models is 2.7, whereas models without fine-tuning achieve an average motivation score of 1.66. Additionally, in terms of fluency, the fine-tuned models attain an average score of 4, outperforming the zero-shot results. These results indicate that models fine-tuned on our dataset generate fluent, motivating outputs.
Model | w/ Fine-tune | w/o Fine-tune
---|---|---
Motivation | Fluency | Motivation | Fluency
GPT | 2.3 | 3.57 | 1.17 | 2.03
M2M-100 | 2.32 | 3.85 | 1.00 | 1.00
BART-L | 2.54 | 4.34 | 1.82 | 3.06
T5 | 3.03 | 4.18 | 2.12 | 2.47
Mistral | 3.05 | 3.82 | 1.82 | 2.67
Falcon | 2.99 | 4.35 | 2.03 | 2.10
Table 5: LLM Evaluation results. The average ratings of generated feedback for models with and without fine-tuning, in terms of how motivating and how fluent the feedback is.
##### Motivation Strategy Classification
In Table 4, we report the F1 score for each of the 5 motivation strategies. We add one additional linear layer that outputs 5 classes corresponding to the motivation types. The results show that each model can learn and distinguish the characteristics of the motivation strategies, with an average F1 score above 70% for every model.
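A sketch of such a classifier (ours; the paper specifies only the extra linear layer, so the encoder backbone below is purely illustrative) is:

```python
# 5-way motivation-strategy classifier: a pretrained encoder plus one
# linear layer (a sketch; "bert-base-uncased" is an illustrative backbone).
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class StrategyClassifier(nn.Module):
    def __init__(self, backbone="bert-base-uncased", n_strategies=5):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(backbone)
        self.head = nn.Linear(self.encoder.config.hidden_size, n_strategies)

    def forward(self, **batch):
        hidden = self.encoder(**batch).last_hidden_state   # (B, T, H)
        return self.head(hidden[:, 0])                     # first-token logits

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = StrategyClassifier()
logits = model(**tok(["Rest when you need to."], return_tensors="pt"))
print(logits.shape)   # torch.Size([1, 5])
```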
## Limitation
We acknowledge that the severity score, which is determined based on the severity of the obstacle, can be subjective. However, just as earlier work in sentiment analysis judged the positive and negative polarity of words and annotated them to obtain sentiment scores, we present our standards by creating our own dataset and annotating it.
More significantly, we did not test the output of the trained models on human participants to determine, empirically, whether it induces greater levels of self-motivation.
## Ethics Statement
In this work, we used POSREF Ziems et al. (2022), which is a publicly available dataset. The creators of POSREF already considered ethical issues when creating the dataset, but we additionally checked every input sentence manually and filtered out inappropriate ones. We did not find any obvious ethical concerns, such as violent or offensive content. We used the dataset consistent with its intended use. We used an LLM in the process of creating and validating the dataset and performed the verification of the output ourselves, meaning there were no issues with human annotators.
## References
* Bengio et al. (2000) Yoshua Bengio, Réjean Ducharme, and Pascal Vincent. 2000. A neural probabilistic language model. _Advances in neural information processing systems_ , 13.
* Brown et al. (2020) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. _Advances in neural information processing systems_ , 33:1877–1901.
* Chen et al. (2024) Du Chen, Yi Huang, Xiaopu Li, Yongqiang Li, Yongqiang Liu, Haihui Pan, Leichao Xu, Dacheng Zhang, Zhipeng Zhang, and Kun Han. 2024. Orion-14b: Open-source multilingual large language models. _arXiv preprint arXiv:2401.12246_.
* Chen et al. (2023) Jiangjie Chen, Wei Shi, Ziquan Fu, Sijie Cheng, Lei Li, and Yanghua Xiao. 2023. Say what you mean! large language models speak too positively about negative commonsense knowledge. _arXiv preprint arXiv:2305.05976_.
* Chiang and Lee (2023) Cheng-Han Chiang and Hung-yi Lee. 2023. Can large language models be an alternative to human evaluations? _arXiv preprint arXiv:2305.01937_.
* Fan et al. (2021) Angela Fan, Shruti Bhosale, Holger Schwenk, Zhiyi Ma, Ahmed El-Kishky, Siddharth Goyal, Mandeep Baines, Onur Celebi, Guillaume Wenzek, Vishrav Chaudhary, et al. 2021. Beyond english-centric multilingual machine translation. _Journal of Machine Learning Research_ , 22(107):1–48.
* Kim and Lee (2019) Eun Jung Kim and Kyeong Ryong Lee. 2019. Effects of an examiner’s positive and negative feedback on self-assessment of skill performance, emotional response, and self-efficacy in korea: a quasi-experimental study. _BMC medical education_ , 19:1–7.
* Lewis et al. (2019) Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. _arXiv preprint arXiv:1910.13461_.
* Lin (2004) Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In _Text summarization branches out_ , pages 74–81.
* Maddela et al. (2023) Mounica Maddela, Megan Ung, Jing Xu, Andrea Madotto, Heather Foran, and Y-Lan Boureau. 2023. Training models to generate, recognize, and reframe unhelpful thoughts. In _Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 13641–13660, Toronto, Canada. Association for Computational Linguistics.
* Maslow (1958) Abraham Harold Maslow. 1958. A dynamic theory of human motivation.
* Njoo et al. (2023) Lucille Njoo, Chan Park, Octavia Stappart, Marvin Thielk, Yi Chu, and Yulia Tsvetkov. 2023. TalkUp: Paving the way for understanding empowering language. In _Findings of the Association for Computational Linguistics: EMNLP 2023_ , pages 9334–9354, Singapore. Association for Computational Linguistics.
* Papineni et al. (2002) Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In _Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics_ , pages 311–318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
* Radford et al. (2019) Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. _OpenAI blog_ , 1(8):9.
* Raffel et al. (2020) Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. _Journal of machine learning research_ , 21(140):1–67.
* Reimers and Gurevych (2019) Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing_. Association for Computational Linguistics.
* Sharma et al. (2023) Ashish Sharma, Kevin Rushton, Inna Lin, David Wadden, Khendra Lucas, Adam Miner, Theresa Nguyen, and Tim Althoff. 2023. Cognitive reframing of negative thoughts through human-language model interaction. In _Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 9977–10000, Toronto, Canada. Association for Computational Linguistics.
* Wisniewski et al. (2020) Benedikt Wisniewski, Klaus Zierer, and John Hattie. 2020. The power of feedback revisited: A meta-analysis of educational feedback research. _Frontiers in psychology_ , 10:487662.
* Zhang et al. (2019) Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. Bertscore: Evaluating text generation with bert. _arXiv preprint arXiv:1904.09675_.
* Ziems et al. (2022) Caleb Ziems, Minzhi Li, Anthony Zhang, and Diyi Yang. 2022. Inducing positive perspectives with text reframing. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 3682–3700, Dublin, Ireland. Association for Computational Linguistics.
## Appendix A Appendix
### A.1 Implementation Detail
Motivation Result We train BART-L and M2M-100 with a learning rate of 1e-4 and
a batch size of 32 for 5 epochs, with a maximum sequence length of 128. For
GPT and T5, we use a learning rate of 3e-5 and a weight decay of 0.01 for 5
epochs. For training and inference, we use an NVIDIA H100 80GB GPU. We run the
experiment once for all models; a minimal configuration sketch is given below.
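The following is a minimal sketch of the BART-L fine-tuning configuration
described above (learning rate 1e-4, batch size 32, 5 epochs, maximum length
128), assuming the HuggingFace `transformers` and `datasets` libraries; the
checkpoint name, output directory, and toy training pair are illustrative
stand-ins for the CASTIC training split:

```python
from datasets import Dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

MODEL_NAME = "facebook/bart-large"  # assumption: "BART-L" = bart-large
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

# A single toy (input -> feedback) pair standing in for the CASTIC split.
toy = Dataset.from_dict({
    "input": ["I already hate using computers, so applying online is not for me."],
    "feedback": ["Take it one step at a time; you have what it takes to apply."],
})

def preprocess(batch):
    enc = tokenizer(batch["input"], truncation=True, max_length=128)
    enc["labels"] = tokenizer(text_target=batch["feedback"],
                              truncation=True, max_length=128)["input_ids"]
    return enc

train_dataset = toy.map(preprocess, batched=True, remove_columns=toy.column_names)

args = Seq2SeqTrainingArguments(
    output_dir="castic-bart-l",
    learning_rate=1e-4,              # paper: lr 1e-4 for BART-L / M2M-100
    per_device_train_batch_size=32,  # paper: batch size 32
    num_train_epochs=5,              # paper: 5 epochs
)
trainer = Seq2SeqTrainer(
    model=model, args=args, train_dataset=train_dataset,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```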
Motivation Strategy Classification We train each model with a learning rate of
1e-4 and a batch size of 32 for 2 epochs, with the number of output classes
set to 5. For training and inference, we use an NVIDIA H100 80GB GPU. We run
the experiment once for all models.
### A.2 Maslow’s Motivation Theory
In this section, we provide a detailed explanation of the five needs in
Maslow’s motivation theory Maslow (1958).
Physiological Needs are desires to maintain a constant, normal bodily state,
for example with respect to homeostasis, hormones, and nutrients. We apply
this strategy to desires for rest, sleep, food, water, air, and homeostasis.
Safety Needs are desires for a safe environment, such as financial and health
security, stability in employment, and protection from accidents. The need
derives from humans’ nature of preferring a safe, orderly, predictable,
organized world and disfavoring unexpected, unmanageable, or otherwise
dangerous things.
Love & Belonging Needs are desires for love and affection from friends, lovers
or spouses. Humans strive to have affectionate relations with people.
Esteem Needs are desires for stable, firmly based, high evaluation of
themselves. Humans want to feel real respect from themselves and others for
their capacities and achievements.
Self-actualization refers to the desire for self-fulfillment in actualizing a
person’s full potential. Humans want to become everything that they are
capable of becoming.
### A.3 Prompt for Dataset Generation
#### A.3.1 Obstacle and TODO Extraction
We extract the Obstacle and TODO from the negative input sentence with
Orion-14B-Chat using the prompt in Figure 3.
Figure 3: Prompt for Obstacle and TODO extraction
#### A.3.2 Generating Feedback
We generate feedback that induces self-motivation with Orion-14B-Chat using
the prompt in Figure 4. The feedback type (positive or negative), determined
by the severity score, is applied when generating the feedback.
Figure 4: Prompt for Generating Feedback
### A.4 Prompt for LLM Evaluation
Motivation We evaluate how motivating the feedback generated by the models
trained on our dataset is with ChatGPT, using the prompt in Figure 5,
following Chiang and Lee (2023).
Fluency We evaluate how fluent the generated feedback is using the prompt in
Figure 6.
Figure 5: Prompt for evaluating motivation with LLM
Figure 6: Prompt for evaluating fluency with LLM
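A minimal sketch of such an LLM-based rating call is shown below, assuming the
`openai` Python client; the rating prompt here is a hypothetical placeholder,
since the actual prompts used in the paper are those shown in Figures 5 and 6:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def rate_feedback(feedback: str, dimension: str = "motivating") -> str:
    # Hypothetical placeholder prompt; the paper's prompts follow
    # Chiang and Lee (2023) and are given in Figures 5 and 6.
    prompt = (
        f"On a scale of 1-5, how {dimension} is the following feedback? "
        f"Answer with a single number.\n\nFeedback: {feedback}"
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content

print(rate_feedback("Take care of yourself by resting and doing your homework."))
```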
### A.5 Samples of CASTIC dataset
In Table 6, we illustrate examples from the dataset.
Figure 7: Word frequency analysis for the training split of CASTIC ((a) Input, (b) Feedback)
Figure 8: Word frequency analysis for the validation split of CASTIC ((a) Input, (b) Feedback)
### A.6 Word frequency analysis
The word frequency figures illustrate the distribution of terms within our
CASTIC dataset, providing insight into the linguistic patterns and content
characteristics of the corpus. As depicted in Figures 7(a), 7(b), 8(a), and
8(b), the x-axis lists individual words ordered by frequency, while the y-axis
shows the occurrence count, excluding prompt words. The figures reveal several
prominent trends: common words such as ‘goal’, ‘love’, and ‘focus’ dominate
the corpus, reflecting the dataset’s focus on emotional motivation. These
findings underscore the dataset’s thematic concentration and help characterize
its linguistic structure. Moreover, comparative analyses with benchmark
datasets reveal word usage patterns distinctive to our dataset, highlighting
its potential contribution to research on automated language understanding and
generation.
Input | Obstacle | TODO | Obstacle Type | Severity score | Motivation strategy | Feedback
---|---|---|---|---|---|---
Still home from school sick gonna pay big time when i get back should be doing homework all weekend on the plane :( | Sick | Take some rest and do homework all weekend on the plane. | Health | serious | physiological need | Take care of yourself by resting and doing your homework on the plane. Your body will thank you for it!
 | | | | | safety needs | Take care of yourself, prioritize your health and well-being. Rest and study on the plane will help you feel recharged and ready to tackle any obstacles that come your way.
 | | | | | love and belonging | Take care of yourself and focus on your studies. Your well-being is important for your future relationships.
 | | | | | self-actualization | Take care of yourself and prioritize your well-being. Rest and study will help you achieve your goals and grow as a person.
 | | | | | esteem needs | Rest and study will boost your health and increase your confidence, giving you the chance to show your true potential and achieve success.
Table 6: Examples of feedback in the dataset.
### A.7 Qualitative result
In Table 7, we examine one input text for each of the motivation strategies in
our dataset, along with the outputs of the BART model.
Input | Motivation strategy | Prediction | Ground truth
---|---|---|---
I already hate using computers, so having to apply for a job online is not for me. | physiological need | You have the capability and determination to succeed in applying for a job online. Take it one step at a time and focus on what you need to accomplish. | _Think about how much easier it will be to apply for a job online without having to leave your house! You can take care of business and still maintain a constant, normal state._
 | safety needs | You have the capability and determination to succeed in applying for a job online. Take it one step at a time and focus on what you can control. You can do it! | _Don’t let fear of computers hold you back. Apply for that job online and take control of your future._
 | love and belonging | Feeling lonely? Apply for a job online and make new connections. You’ll find love and belonging, which are important for your overall well-being. | _Feeling lonely? Finding a job online could connect you with new friends and a sense of community._
 | self-actualization | You have the potential and the desire to succeed. Let’s focus on applying for a job online and fulfilling your potential. | _Don’t let fear of computers hold you back. Take control and apply for that job online. You have the potential to succeed and this is your chance to show it._
 | esteem needs | You have the capability and determination to succeed. Let’s focus on applying for the job online and show your skills. | _Feeling capable and respected is important, but hating computers won’t get you far. It’s time to overcome your fear and apply for that job online._
Table 7: Examples of generated feedback of BART in a controlled setting. The predicted feedback for each motivation strategy is shown in the Prediction column and the ground truth follows in italics.
# Proof of a Stable Fixed Point for Strongly Correlated Electron Matter
Jinchao Zhao1, Gabriele La Nave2, and Philip W. Phillips1 1Department of
Physics and Institute for Condensed Matter Theory, University of Illinois 1110
W. Green Street, Urbana, IL 61801, U.S.A. 2Department of Mathematics,
University of Illinois, Urbana, IL 61801
(March 2023)
###### Abstract
We establish the Hatsugai-Kohmoto model as a stable quartic fixed point
(distinct from Wilson-Fisher) by computing the $\beta$-function in the
presence of perturbing local interactions. In the vicinity of the half-filled
doped Mott state, the $\beta$-function vanishes for all local interactions
regardless of their sign. The only flow away from the HK model is through the
superconducting channel, which lifts the spin degeneracy, as does any ordering
tendency. The superconducting instability is identical to that established
previously (Phillips et al., 2020). A corollary of this work is that repulsive
Hubbard interactions flow into the stable HK fixed point in the vicinity of
half-filling. Consequently, although the HK model has all-to-all interactions,
nothing local destroys it. The consilience with Hubbard arises because both
models break the $Z_{2}$ symmetry on a Fermi surface, the HK model being the
simplest to do so. Indeed, the simplicity of the HK model belies its
robustness and generality.
## I Introduction
Proving Mott’s claim, in any dimension we care about ($d\geq 2$), that strong
electron correlation opens a gap in a half-filled band without changing the
size of the Brillouin zone still stands as a key unsolved problem in
theoretical physics. Demonstrating that the spectral function contains no
states in momentum space that cross the chemical potential would suffice as
proof of Mott’s claim. However, the inherent problem is that the model
employed in this context, namely the Hubbard model, contains strong on-site
repulsion in real space, thereby preventing any exact statement about the
corresponding spectral function in momentum space. There is of course an easy
way around this problem: add to the non-interacting band model an energy
penalty, $U$,
$H=\sum_{{{\mathbf{p}}}\sigma}\xi_{{{\mathbf{p}}}\sigma}n_{{{\mathbf{p}}}\sigma}+U\sum_{{{\mathbf{p}}}}n_{{{\mathbf{p}}}\uparrow}n_{{{\mathbf{p}}}\downarrow},$
(1)
whenever two electrons doubly occupy the same momentum state. In the above
$\xi_{{{\mathbf{p}}}\sigma}$ is the non-interacting band dispersion and
$n_{{{\mathbf{p}}}\sigma}$ is the occupancy. Since such an interaction does
not mix the momenta, a gap must open in the spectrum should $U$ exceed the
bandwidth, as illustrated in Fig. 1. This is precisely the Hatsugai-Kohmoto
(HK) model (Hatsugai and Kohmoto, 1992), which was introduced in 1992 but
attracted essentially no attention (Li et al., 2022; Zhong, 2022; Yang, 2021;
Setty, 2021a; Zhu et al., 2021; Leeb and Knolle, 2023; Zhu and Han, 2021;
Setty, 2021b; Tzeng et al., 2023; Zhong, 2023; Zhao et al., 2023) until our
demonstration (Phillips et al., 2020; Huang et al., 2022) that this model
offers an exact way of treating the Cooper instability in a doped Mott
insulator. Despite this utility, the HK model faces an uphill battle to
replace the ingrained Hubbard model as the knee-jerk response to the
strong-correlation problem with local-in-space interactions. In bridging the gap from
the HK to the Hubbard model, three questions arise. 1) Does the HK model
remain stable to local-in-space interactions? If it does, then a
correspondence with the Hubbard model can be established. 2) Does the
resilience to local-in-space interactions give rise to a stable fixed point?
3) What about the obvious spin degeneracy that arises from the singly occupied
states in the spectrum? Are there ordering instabilities at $T=0$ that lift
the degeneracy?
Figure 1: The spectral functions of the HK model at different fillings and
interaction strengths. (a) Mott insulating state, with $U>W$ and
$\braket{n}=1$. (b) Strongly repulsive Mott metal, with $U>W$ and
$\braket{n}<1$. (c) Weakly repulsive Mott metal, with $U<W$ and
$\braket{n}=1$.
It is these three questions that we answer in this paper. Briefly, we
construct the $\beta$-function explicitly and demonstrate that the answer to
all three leading questions is yes. The degeneracy is lifted spontaneously by
superconductivity as in a Fermi liquid. Central to our construction is the
leading lesson from Fermi liquid theory. As Fermi liquids admit a purely local
description in momentum space, their eigenstates are indexed simply by
momentum and spin. Consequently, in real space, Fermi liquids exhibit long-
range entanglement. It is precisely this long-range entanglement that makes
them impervious to any local interaction, that is, short-range Coulomb
interactions. The renormalization group analysis of a Fermi liquid shows
distinctly (Polchinski, 1992; Shankar, 1994) that should the sign of the
interaction be reversed and the electrons reside on opposite sides of the Fermi
surface, a superconducting instability arises. The added stipulation of the
electrons residing on opposite sides of the Fermi surface changes the scaling
dimension of the generic 4-fermion interaction from being irrelevant to
marginal. Also of note is that in terms of the pair annihilation operator,
$b_{k}=c_{k\uparrow}c_{k\downarrow}$, the Cooper pairing interaction
$V_{\rm
pair}=-g\sum_{{{\mathbf{k}}},{{\mathbf{k}}}^{\prime}}b_{{{\mathbf{k}}}}^{\dagger}b_{{{\mathbf{k}}}^{\prime}},$
(2)
is completely non-local in real space as it involves an interaction between
all pairs of electrons regardless of their separation. Such non-locality does
not make the BCS model unphysical because this interaction is the only
relevant perturbation that destroys a Fermi liquid as indicated by the 1-loop
flow equation,
$\frac{dg}{dt}=\frac{g^{2}}{4\pi},$ (3)
which exhibits a breakdown at a finite energy scale (the transition
temperature) signaling the onset of pairing and the eventual non-perturbative
physics of the BCS superconducting state. All of this is summarized in the
flow diagram in Fig. (2): short-range interactions ($V_{\rm local}$)
regardless of their sign flow to the Fermi liquid state while the Cooper term,
($V_{\rm pair}$), inherently non-local in real space, flows to strong
coupling. Once again, it is the real-space long-range entanglement of a Fermi
liquid that accounts for its resilience to any short-range interaction.
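Integrating Eq. (3) makes the breakdown scale explicit (a one-line check):
$g(t)=\frac{g_{0}}{1-g_{0}t/4\pi},$
which diverges at $t^{*}=4\pi/g_{0}$, i.e., at the finite energy scale
$\Lambda_{c}=\Lambda_{0}e^{-4\pi/g_{0}}$ that sets the transition temperature.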
Figure 2: Perturbative flow diagram for interactions in a Fermi liquid. Short-
range interactions regardless of their sign do nothing. Pairing leads to a
flow to strong coupling and the ultimate destruction of the Fermi liquid and
the onset of a superconducting state. The nature of the superconducting state
cannot be established based on perturbative arguments but requires BCS theory.
As it is ultimately the conservation of the charge currents in momentum space,
$n_{{{\mathbf{p}}}\sigma}$, that is at play here, it is natural to rewrite the
HK model as $\sum_{p}h_{p}$ with $h_{p}$ implicitly defined from Eq. (1). From
this, it is clear that the HK model has a momentum-diagonal structure of a
Fermi liquid. However, unlike a Fermi liquid, this model describes a Mott
insulator (Phillips et al., 2020; Huang et al., 2022; Hatsugai and Kohmoto,
1992), as shown in Fig. (1). While it is natural to criticize this model as
being unphysical because of the non-local in real space interactions, non-
locality by itself does not dismiss a model from being physically relevant as
$V_{\rm pair}$ in a Fermi liquid is non-local but nonetheless is the only
relevant pairing term that leads to the BCS instability. The real question is:
Do local interactions lead to flow away from the Hatsugai-Kohmoto model? That
is, does the Mott insulator that HK describes represent a fixed point in
direct analogy with the stability of a Fermi liquid to local interactions? We
show here that the answer to this question is a resounding yes; HK is a stable
fixed point. Even adding local terms of the Hubbard kind does nothing.
Analogies with Fermi liquid aside, a more fundamental reason is operative for
the resilience of the HK model to local perturbations. Haldane and Anderson
(Anderson and Haldane, 2001) argued that Fermi liquids contain an added
$Z_{2}$ symmetry on the Fermi surface that amounts to a particle-hole
interchange for one of the spin species. Operationally, if only doubly
occupied or empty sectors are present, then adding or removing an electron of
either spin is kinematically equivalent. However, removing an electron from a
singly occupied k-state can only be done in one way, thereby breaking the
$Z_{2}$ symmetry. Note even in the Hubbard model, the on-site repulsion
creates single occupancy in real space which will give rise to single
occupancy also in momentum space. HK suffices as it is the simplest term that
does so (Huang et al., 2022). As long as this symmetry is already broken, adding
new interactions which also break this symmetry yields no relevant physics as
far as Wilsonian renormalization is concerned.
We carry out the full renormalization group analysis for the HK model and show
that as in a Fermi liquid, no local perturbation destroys the HK model,
thereby defining a stable fixed point. The only perturbation that destroys HK
physics is $V_{\rm pair}$ as in a Fermi liquid. We conclude then that the HK
model is more than a toy model. Rather it represents a stable fixed point for
a model non-Fermi liquid that describes a Mott insulator. It is a new fixed
point in quantum matter that describes a doped Mott insulator. The
superconducting state that ensues (Phillips et al., 2020) is the BCS analogue
for a doped Mott insulator.
## II RG approach for $d\geq 2$ HK model: Tree level analysis
Our goal is to establish the stability of the HK model to weak local
interactions. Central to the HK model are the two filling surfaces: the lower
filling surface (L-surface) separates singly occupied and empty states, and
the upper filling surface (U-surface) separates doubly and singly occupied
states. The key argument of the RG process is that at weak coupling, only
modes around these filling surfaces participate. At different filling and
interaction strengths, the number of filling surfaces varies from zero (Mott
insulating state, Fig.1(a)), to one (strongly repulsive Mott metal, Fig.1(b))
or two (weakly repulsive Mott metal, Fig.1(c)). For simplicity, we work with
$d=2$ and rotationally invariant filling surfaces. The higher dimensional
result can always be achieved by adding rotational degrees of freedom and the
relevance of interactions that define the fixed point is not changed.
### II.1 L-surface
We start with the analysis of the setup with only one spherical filling
surface, on either side of which the occupancy is singly occupied or empty
(the L-surface). This analysis follows the method of Weinberg (1996) and
Shankar (1994), though we use the notation of Polchinski (1992). We linearize
the dispersion around the L-surface with radius $K_{L}$ to
$E\left({{\mathbf{K}}}={{\mathbf{n}}}(K_{L}+k)\right)=v_{L}k,$ (4)
where ${{\mathbf{n}}}=\frac{{{\mathbf{K}}}}{|{{\mathbf{K}}}|}$ is the unit
vector in the direction of ${{\mathbf{K}}}$ and $v_{L}$ is the isotropic
L-surface velocity. Note linearization restricts our analysis to the vicinity
of metallic state where Mott physics is relevant. Near the bottom of the band,
our analysis fails as the band dispersion is inherently quadratic, and weak
coupling physics obtains. We write the zero temperature partition function,
$\displaystyle Z=\int\mathcal{D}[c,\bar{c}]e^{-S_{0}},$ (5)
$\displaystyle\begin{split}&S_{0}=\int_{\Lambda}\frac{d^{d}{{\mathbf{K}}}}{(2\pi)^{d}}\int_{-\infty}^{\infty}d\omega\\\
&\left[\sum_{\sigma}\bar{c}_{{{\mathbf{K}}}\sigma}(i\omega-
v_{L}k)c_{{{\mathbf{K}}}\sigma}+U\bar{c}_{{{\mathbf{K}}}\uparrow}\bar{c}_{{{\mathbf{K}}}\downarrow}{c}_{{{\mathbf{K}}}\downarrow}{c}_{{{\mathbf{K}}}\uparrow}\right].\end{split}$
(6)
The integral over momentum is confined in a thin shell around the filling
surface, with distance $\Lambda$ as the cut-off. The partition function
factorizes at each momentum $K$, in exactly the same way as for a Fermi
liquid, except that the up and down spins at the same momentum are mixed. After integrating
out the fast modes living within $s\Lambda<|k|<\Lambda$, we perform the
rescaling of the variables and fields:
$\displaystyle k^{\prime}$ $\displaystyle=s^{-1}k,$ (7)
$\displaystyle\omega^{\prime}$ $\displaystyle=s^{-1}\omega,$ (8)
$\displaystyle c^{\prime}_{{{\mathbf{K}}}^{\prime}\sigma}$
$\displaystyle=s^{3/2}c_{{{\mathbf{K}}}\sigma},$ (9) $\displaystyle
U^{\prime}$ $\displaystyle=s^{-2}U,$ (10)
to make the partition function invariant up to a constant. It is worth noting
that the HK repulsion has a scaling dimension of $-2$ and hence is strongly
relevant. This relevant term suppresses the contribution of spectral weight
from the other band that is $U$ away from the filling surface.
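As a consistency check, the scaling dimension of $U$ follows from simple power
counting (a sketch under the conventions above; since the HK vertex is
diagonal in momentum, it carries a single radial momentum integral but three
independent frequency integrals once the frequency-conserving delta function
is used):
$\int_{\Lambda}dk\,\prod_{i=1}^{3}\int d\omega_{i}\;U\,\bar{c}_{\uparrow}\bar{c}_{\downarrow}c_{\downarrow}c_{\uparrow}\;\longrightarrow\;s^{1}\cdot s^{3}\cdot\left(s^{-3/2}\right)^{4}U=s^{-2}U.$
Each rescaled field contributes $s^{-3/2}$, the radial momentum integral
contributes $s$, and the three frequency integrals contribute $s^{3}$,
reproducing $U^{\prime}=s^{-2}U$ in Eq. (10).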
Now we consider the effect of perturbations on this fixed point. First,
consider perturbations that are quadratic in the fields,
$\delta
S^{(2)}=\int_{\Lambda}\frac{d^{d}{{\mathbf{K}}}}{(2\pi)^{d}}\int_{-\infty}^{\infty}d\omega\sum_{\sigma}\mu({{\mathbf{K}}})\bar{c}_{{{\mathbf{K}}}\sigma}c_{{{\mathbf{K}}}\sigma}.$
(11)
This action separates into slow and fast pieces, and the effect of integrating
out the fast modes produces a constant. After rescaling the momenta and the
fields, we have
$\mu^{\prime}(k^{\prime})=s^{-1}\mu(k).$ (12)
This relevant term is the chemical potential, which should be included in the
action kinetic term. As a result, the location of the fixed point definitely
depends on the filling of the system. We shall adjust the position of the
filling surface according to the chemical potential so as to make the system
truly fixed.
Next, we consider the quartic interaction in the most general form,
$\begin{split}\delta
S_{4}&=\int_{K}\int_{\omega}\bar{c}_{4}(\tau)\bar{c}_{3}(\tau)c_{2}(\tau)c_{1}(\tau)u(4,3,2,1),\\\
\bar{c}_{i}&\equiv\bar{c}_{{{\mathbf{K}}}_{i}\sigma_{i}},\\\ c_{i}&\equiv
c_{{{\mathbf{K}}}_{i}\sigma_{i}},\\\
u&(4,3,2,1)=u({{\mathbf{K}}}_{4}\sigma_{4},{{\mathbf{K}}}_{3}\sigma_{3},{{\mathbf{K}}}_{2}\sigma_{2},{{\mathbf{K}}}_{1}\sigma_{1}),\\\
\int_{K}&\equiv\prod_{i=1}^{4}\int_{\Lambda}d{{\mathbf{K}}}_{i}\int
d\Omega_{i}\delta({{\mathbf{K}}}_{1}+{{\mathbf{K}}}_{2}-{{\mathbf{K}}}_{3}-{{\mathbf{K}}}_{4}),\\\
\int_{\omega}&\equiv\prod_{i=1}^{4}\int_{-\infty}^{\infty}d\omega_{i}\delta(\omega_{1}+\omega_{2}-\omega_{3}-\omega_{4}).\end{split}$
(13)
The delta functions put constraints on the integral region of momentum and
frequency. The delta function on frequency could be easily rescaled as
$\delta(\omega_{1}^{\prime}+\omega_{2}^{\prime}-\omega_{3}^{\prime}-\omega_{4}^{\prime})=s\delta(\omega_{1}+\omega_{2}-\omega_{3}-\omega_{4})$
since the integral over frequency extends to infinity. The delta function on
momentum, however, has a different scaling behavior as pointed out by
Polchinski (1992). The distance from the filling surface only gives
a contribution proportional to the cutoff $\Lambda$ which is a negligible
contribution compared with the filling momentum
$K_{L}:\delta({{\mathbf{K}}}_{1}+{{\mathbf{K}}}_{2}-{{\mathbf{K}}}_{3}-{{\mathbf{K}}}_{4})\approx\delta(K_{L}({{\mathbf{n}_{1}}}+{{\mathbf{n}}}_{2}-{{\mathbf{n}}}_{3}-{{\mathbf{n}_{4}}})+O(\Lambda))$.
When the momenta all point in different directions, the first term in the
delta function dominates and the delta function does not scale upon RG.
Following the same argument by Polchinski (1992), this quartic
operator is thus irrelevant at the tree level,
$u^{\prime}(4^{\prime},3^{\prime},2^{\prime},1^{\prime})=su(4,3,2,1).$ (14)
It is then easy to see that any further interactions are even more irrelevant.
This power-counting tree-level analysis already rules out the general short-
range interactions from being relevant. Upon decreasing the energy scale, we
find that the coupling becomes weaker and weaker while the HK repulsion gets
stronger and stronger. Consequently, Mott physics captured by the filling
surface is stable under local interactions.
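For completeness, the same counting applied to a generic quartic coupling (a
sketch, using the nonscaling momentum delta function discussed above) reads
$\prod_{i=1}^{4}\int_{\Lambda}dk_{i}\,\prod_{i=1}^{4}\int d\omega_{i}\,\delta\Big(\sum\omega\Big)\,u\,\bar{c}\bar{c}cc\;\longrightarrow\;s^{4}\cdot s^{3}\cdot s^{-6}u=s\,u,$
which is Eq. (14): the four radial momentum integrals contribute $s^{4}$ (the
angular measure and the momentum delta function do not scale), the four
frequency integrals with their delta function contribute $s^{3}$, and the four
fields contribute $s^{-6}$.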
There is an important subtlety in the kinematics and as a result, our
treatment of the delta function is not always valid. The first-term
contribution to the delta function could be set exactly to zero by putting an
additional constraint on the momentum directions. The second term which is
proportional to $\Lambda$ will be renormalized and generates a factor of $s$.
The rescaling of these direction-constrained interactions is
$u^{\prime}(4^{\prime},3^{\prime},2^{\prime},1^{\prime})=s^{0}u(4,3,2,1).$
(15)
Performing a Taylor expansion in $k$ and $\omega$ and comparing coefficients
of separate powers, we conclude that the leading term, with no dependence on
either variable, is marginal, while all the rest are irrelevant. Because there
are only two degrees of freedom in momentum, both of them are non-local by
definition. Consequently, to reach a definitive conclusion, we must proceed to
the 1-loop level to determine whether they are marginally irrelevant or
marginally relevant, or truly marginal. This calculation will be performed in
the next section.
### II.2 U-surface
In the case that the filling surface is the occupancy boundary of the doubly
and singly occupied regions, we have to perform an extra particle-hole
transformation before writing down the partition function in the path-integral
language, so as to obtain the correct linearized dispersion around the
U-surface with radius $K_{U}$,
$E\left({{\mathbf{K}}}={{\mathbf{n}}}(K_{U}+k)\right)=v_{U}k.$ (16)
This particle-hole asymmetry reflects the broken $Z_{2}$ symmetry of Mott
physics. The zero temperature partition function then becomes
$\displaystyle Z=\int\mathcal{D}[c,\bar{c}]e^{-S_{0}},$ (17)
$\displaystyle\begin{split}&S_{0}=\int_{\Lambda}\frac{d^{d}{{\mathbf{K}}}}{(2\pi)^{d}}\int_{-\infty}^{\infty}d\omega\\\
&\left[\sum_{\sigma}\bar{c}_{{{\mathbf{K}}}\sigma}(i\omega-
v_{U}k)c_{{{\mathbf{K}}}\sigma}+U\bar{c}_{{{\mathbf{K}}}\uparrow}\bar{c}_{{{\mathbf{K}}}\downarrow}{c}_{{{\mathbf{K}}}\downarrow}{c}_{{{\mathbf{K}}}\uparrow}\right].\end{split}$
(18)
Choosing the same setup with cut-off $\Lambda$ results in the same scaling
rules for the variables and fields as in Eq. (10). The identical analysis of
quadratic and quartic perturbations thus applies, and we have the same
irrelevant tree-level behavior around this U-surface fixed point.
### II.3 2 Filling Surfaces
In the weakly repulsive case, the two occupancy boundaries (L-surface and
U-surface) coexist. In order to discuss low-energy physics, we have to include
modes around both filling surfaces. Due to the factorizability of the
partition function in momentum space, we can safely obtain the bare partition
function as the product of Eq. (5) and Eq. (17). By setting the energy scale
$\Lambda$ around both filling surfaces to be the same, we arrive at the same
scaling of the variables and fields as in Eq.(10). In conclusion, regardless
of the number of filling surfaces, local interactions are always irrelevant at
the tree level and hence do not modify the fixed point defined by the filling
surfaces. We will move on to see the effect of marginal quartic interactions
on these fixed points.
## III The perturbative expansion for 1-loop level corrections
With the tree-level analysis in hand, we have already ruled out the local part
of any perturbations. It is interesting to see how we can obtain a collective
effect such as superconductivity in a low-energy theory.
The RG process is carried out by integrating out the fast modes and rescaling
the slow modes and the associated variables to keep the partition function
unchanged. Besides the terms that only contain slow modes (the tree-level
result), we also need to include the terms that have both slow and fast modes
and add their contribution to the scaling equations. This process is
mathematically equivalent to calculating multipoint correlation functions with
interactions. In the HK model, the correlation functions can be calculated
using a perturbative expansion at weak coupling, as demonstrated in the
Appendix.
Figure 3: The one-loop graphs for $\beta(u)$ for quartic interactions. The
loop momenta lie in the shell of width $d\Lambda$ being eliminated. The
external frequencies being all zero, the loop frequencies are (a) equal for
ZS, (b) equal for ZS’, and (c) equal and opposite for the BCS graph.
The increment in $u(4,3,2,1)$
$\begin{split}du(4,3,2,1)=&\int
u(6,3,5,1)u(4,5,2,6)G(5)G(6)\delta(3+6-1-5)d5d6\\\ &-\int
u(6,4,5,1)u(3,5,2,6)G(5)G(6)\delta(6+4-1-5)d5d6\\\ &-\frac{1}{2}\int
u(6,5,2,1)u(4,3,6,5)G(5)G(6)\delta(5+6-1-2)d5d6.\end{split}$ (19)
is given by the 3 one-loop diagrams discussed by Shankar, and we follow his
nomenclature in calling them the ZS, ZS’, and BCS diagrams, respectively.
We examine first the tree-level marginal interactions. We define the momentum
components on the filling surface of $K_{i}$ as $K_{i}^{F}$. Then a common
property of these marginal interactions is that the sum of the incoming and
outgoing $K_{i}^{F}$s is zero
${{\mathbf{K}}}_{1}^{F}+{{\mathbf{K}}}_{2}^{F}-{{\mathbf{K}}}_{3}^{F}-{{\mathbf{K}}}_{4}^{F}=0$.
The delta function on momentum thus scales as $s^{-1}$ and gives the marginal
power counting. This property reduces the marginal interactions into two
families: 1) forward scattering and 2) superconductivity or the Cooper
channel.
## IV Forward scatterings at 1-loop
The forward scatterings are defined by a non-vanishing
${{\mathbf{P}}}={{\mathbf{K}}}_{1}^{F}+{{\mathbf{K}}}_{2}^{F}$. This
nomenclature comes from the $d=2$ 1-filling surface case, where there are only
two solutions:
${{\mathbf{K}}}_{1}={{\mathbf{K}}}_{3},{{\mathbf{K}}}_{2}={{\mathbf{K}}}_{4}$
or
${{\mathbf{K}}}_{1}={{\mathbf{K}}}_{4},{{\mathbf{K}}}_{2}={{\mathbf{K}}}_{3}$.
These two setups are equivalent to one another up to a reordering of the
fermion operators.
### IV.1 1 Filling surface
When there is only a single filling surface, the forward scattering is
determined only by the solution to
${{\mathbf{K}}}_{1}={{\mathbf{K}}}_{3},{{\mathbf{K}}}_{2}={{\mathbf{K}}}_{4}$.
Including spins, there are explicitly 3 choices:
$\displaystyle
F^{1}({{\mathbf{n}}}_{1},{{\mathbf{n}}}_{2})=u({{\mathbf{K}}}_{2}\sigma,{{\mathbf{K}}}_{1}\sigma,{{\mathbf{K}}}_{2}\sigma,{{\mathbf{K}}}_{1}\sigma),$
(20) $\displaystyle
F^{2}({{\mathbf{n}}}_{1},{{\mathbf{n}}}_{2})=u({{\mathbf{K}}}_{2}\bar{\sigma},{{\mathbf{K}}}_{1}\sigma,{{\mathbf{K}}}_{2}\bar{\sigma},{{\mathbf{K}}}_{1}\sigma),$
(21) $\displaystyle
F^{3}({{\mathbf{n}}}_{1},{{\mathbf{n}}}_{2})=u({{\mathbf{K}}}_{2}\sigma,{{\mathbf{K}}}_{1}\bar{\sigma},{{\mathbf{K}}}_{2}\bar{\sigma},{{\mathbf{K}}}_{1}\sigma).$
(22)
Due to the fact that any higher order terms in the Taylor expansion on
$\omega$ and $k$ are irrelevant, we can freely choose the frequency and
momentum deviations. For simplicity, we set all external legs to zero
frequency and almost on the filling surface ($\omega=0,k=\epsilon\ll
d\Lambda$). The tiny value of $\epsilon$ will be set equal to zero at the last
step. We keep it during the calculation to keep the running momenta $K$ and
$K+Q$ distinct, as required by the weak-coupling expansion.
First, we consider the ZS diagram in Fig.3 given by the first integral in
Eq.(19). Since $Q=\epsilon\ll d\Lambda$, both $K$ and $K+Q$ lie on the same
side of the filling surface for all eligible choices of $K$. As a result, the
directional integral of $K$ is over the full range,
$\begin{split}dF({{\mathbf{n}}}_{1},{{\mathbf{n}}}_{2})=&\int\frac{d{{\mathbf{n}}}}{(2\pi)^{d-1}}F({{\mathbf{n}}}_{1},{{\mathbf{n}}})F({{\mathbf{n}}},{{\mathbf{n}}}_{2})\\\
&\int_{d\Lambda}\frac{dk}{2\pi}\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}G(\omega,k)G(\omega,k+\epsilon).\end{split}$
(23)
Here $F({{\mathbf{n}}}_{i},{{\mathbf{n}}}_{j})$ represents the appropriate
choice of $F$ that satisfies the momentum and spin conservation at each
vertex. The integral over $dk$ lies inside the thin shells to be integrated
out. However, there are two such shells. One of the shells lies inside the
filling surface while the other is outside the filling surface. For the outer
shell corresponding to $k\in[\Lambda-d\Lambda,\Lambda]>0$, the states belong
to the 0-occupancy region, which means we can replace the Green function by
$G(\omega,k)=\frac{1}{i\omega-v_{L}k}.$ (24)
For the inner shell, corresponding to $k\in[-\Lambda,-\Lambda+d\Lambda]<0$,
the states belong to the single-occupancy region, which means we can replace
the Green function by
$G(\omega,k)=\frac{1/2}{i\omega-v_{L}k}+\frac{1/2}{i\omega-v_{L}k-U}.$ (25)
The poles in the $\omega$ plane do not contribute if they lie on the same side
of the real axis. The only surviving contribution from ZS is thus
$\begin{split}dF(&{{\mathbf{n}}}_{1},{{\mathbf{n}}}_{2})=2\int\frac{d{{\mathbf{n}}}}{(2\pi)^{d-1}}F({{\mathbf{n}}}_{1},{{\mathbf{n}}})F({{\mathbf{n}}},{{\mathbf{n}}}_{2})\\\
&\int_{-\Lambda}^{-\Lambda+d\Lambda}\frac{dk}{2\pi}\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\frac{1/2}{i\omega-
v_{L}k}\cdot\frac{1/2}{i\omega-v_{L}k-U}.\end{split}$ (26)
The integral over $\omega$ and $k$ gives
$\int_{d\Lambda}\frac{dk}{2\pi}\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\frac{1/2}{i\omega-
v_{L}k}\cdot\frac{1/2}{i\omega-v_{L}k-U}=\frac{d\Lambda}{8\pi U}.$ (27)
As $U$ is strongly relevant, that is, $U^{\prime}=s^{-2}U$, this contribution
goes to zero much faster than $d\Lambda/\Lambda$. Thus the ZS diagram does not
yield any contribution under the RG analysis.
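For completeness, the frequency and radial integrals behind Eq. (27) can be
evaluated by contour methods (a sketch; the overall sign depends on the
contour conventions). Only the inner shell has one pole in each half of the
$\omega$ plane, and closing the contour in the upper half plane picks up the
pole at $\omega=-iv_{L}k$:
$\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}\frac{1/2}{i\omega-v_{L}k}\cdot\frac{1/2}{i\omega-v_{L}k-U}=\frac{1}{4U},\qquad\int_{-\Lambda}^{-\Lambda+d\Lambda}\frac{dk}{2\pi}\,\frac{1}{4U}=\frac{d\Lambda}{8\pi U},$
making explicit that the ZS contribution is suppressed by $1/U$, which itself
grows under the flow.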
Now consider the ZS’ diagram. Due to the momentum transfer $Q$ of order
$K_{L}$ at the left vertex, not only is the magnitude of the loop momentum
restricted to lie within the shell being eliminated but also its angle is
restricted to a range of order $d\Lambda/K_{L}$. The ZS’ contribution is
therefore of order $d\Lambda^{2}/\Lambda K_{L}$, and its contribution to the
$\beta$ function vanishes as we take the limit $d\Lambda/\Lambda\rightarrow 0$.
Finally, the same kinematic reasons used to establish that ZS’ vanishes can be
adopted to show that the BCS diagram also does not renormalize $F$ at one
loop. Hence, the coupling constants for the forward scattering do not flow in
this order because $\beta(F)=0$.
### IV.2 2 Filling surfaces
When there are 2 filling surfaces, there are at most 4 independent solutions
to ${{\mathbf{P}}}={{\mathbf{K}}}_{1}+{{\mathbf{K}}}_{2}$ as shown in Fig.4.
Figure 4: The solutions to
${{\mathbf{P}}}={{\mathbf{K}}}_{1}+{{\mathbf{K}}}_{2}$. The L-surface and
U-surface are plotted as dotted circles. The 4 solutions are labeled by the
surfaces on which the 2 momenta that sum to ${{\mathbf{P}}}$ lie.
The combined equation ${{\mathbf{P}}}={{\mathbf{K}}}_{1}+{{\mathbf{K}}}_{2}$
and ${{\mathbf{P}}}={{\mathbf{K}}}_{3}+{{\mathbf{K}}}_{4}$ thus have $4\times
4=16$ independent solutions. There are still explicitly 3 spin configurations
for each momentum solution. Thus the total number of marginal forward
scatterings is $16\times 3=48$. We will not enumerate them here since they
remain unchanged, in the same way as we discussed in the case of a single
filling surface. The 1-loop correction contributes a term proportional to
either $d\Lambda/U$ from the ZS diagrams or $d\Lambda/K_{L}$ from the ZS’ and
BCS diagrams. Once again, the forward scatterings remain fixed; that is,
they do not flow under RG.
## V Superconducting pairings at 1-loop
The incoming and outgoing momenta each summing to zero
${{\mathbf{K}}}_{1}=-{{\mathbf{K}}}_{2},{{\mathbf{K}}}_{3}=-{{\mathbf{K}}}_{4}$
constitute superconducting pairings. The 1-loop evolution of $V$s has a non-
vanishing contribution even for a Fermi liquid and hence we expect an
analogous contribution at the HK fixed point should one exist. We will analyze
the single and two filling surfaces separately.
### V.1 1 Filling surface
${{\mathbf{K}}}_{1}$ and ${{\mathbf{K}}}_{3}$ can lie freely on the filling
surface. For simplicity, we set all external legs to zero frequency and on the
filling surface ($\omega=k=0$). Including the spin-singlet and spin-triplet
pairings, we have 2 choices,
$\displaystyle
V^{1}({{\mathbf{n}}}_{1},{{\mathbf{n}}}_{3})=u(-{{\mathbf{K}}}_{3}\sigma,{{\mathbf{K}}}_{3}\sigma,-{{\mathbf{K}}}_{1}\sigma,{{\mathbf{K}}}_{1}\sigma),$
(28) $\displaystyle
V^{2}({{\mathbf{n}}}_{1},{{\mathbf{n}}}_{3})=u(-{{\mathbf{K}}}_{3}\bar{\sigma},{{\mathbf{K}}}_{3}\sigma,-{{\mathbf{K}}}_{1}\bar{\sigma},{{\mathbf{K}}}_{1}\sigma).$
(29)
These two spin configurations satisfy the antisymmetry for Fermions regardless
of the angular structure of the pairing interaction. The incoming momenta are
equal and opposite on the filling surface. The ZS and ZS’ diagrams are
suppressed by $d\Lambda^{2}/\Lambda K_{L}$ and hence do not contribute for the
same reason that ZS’ and BCS diagrams did not contribute to the flow of
forward scatterings. The BCS diagram now has a contribution since the running
momentum in the loop can now freely point in any direction, regardless of the
value of $K$. We rewrite Eq.(19) as
$\begin{split}dV^{1,2}({{\mathbf{n}}}_{1},{{\mathbf{n}}}_{3})=&-\frac{1}{2}\int\frac{d{{\mathbf{n}}}}{(2\pi)^{d-1}}V^{1,2}({{\mathbf{n}}}_{1},{{\mathbf{n}}})V^{1,2}({{\mathbf{n}}},{{\mathbf{n}}}_{3})\\\
&\int_{d\Lambda}\frac{dk}{2\pi}\int_{-\infty}^{\infty}d\omega
G(\omega,k)G(-\omega,k).\end{split}$ (30)
The integral over $dk$ again lies inside the two thin shells to be integrated
out, and the two shells contribute differently. For the outer shell,
corresponding to $k\in[\Lambda-d\Lambda,\Lambda]>0$, the second line of
Eq. (30) yields the finite value $\frac{d\Lambda}{4\pi\Lambda}$. For the inner
shell, corresponding to $k\in[-\Lambda,-\Lambda+d\Lambda]<0$, the integral
gives a value reduced by a factor of four, $\frac{d\Lambda}{16\pi\Lambda}$. In
all, the renormalization group equation for $V^{1,2}$ is
$\begin{split}\frac{dV({{\mathbf{n}}}_{1},{{\mathbf{n}}}_{3})}{dt}=&-\frac{5}{32\pi
v_{L}}\int\frac{d{{\mathbf{n}}}}{(2\pi)^{d-1}}V({{\mathbf{n}}}_{1},{{\mathbf{n}}})V({{\mathbf{n}}},{{\mathbf{n}}}_{3})\\\
\equiv&-\frac{5}{32\pi
v_{L}}(V*V)({{\mathbf{n}}}_{1},{{\mathbf{n}}}_{3}),\end{split}$ (31)
where $dt=|d\Lambda|/\Lambda$ is the RG transform step size, and $*$ defines
the generalized convolution in $d$-dimensions. This is the Cooper instability
in the HK model. For the case of $d=2$, we can simplify this by going to
momentum eigenfunctions
$V_{l}=\int_{0}^{2\pi}\frac{d\theta}{2\pi}e^{il\theta}V(\theta),$ (32)
where $V(\theta)=V({{\mathbf{n}}}_{0},R_{\theta}{{\mathbf{n}}}_{0})$ and
$R_{\theta}$ is the rotation by angle $\theta$. We can obtain the $\beta$
function for each angular momentum $l$
$\frac{dV_{l}}{dt}=-\frac{5}{32\pi v_{L}}V_{l}^{2}.$ (33)
The flow tells us that the couplings $V_{l}$ are marginally relevant if
negative, and marginally irrelevant if positive. By integrating the flow, we
obtain
$V_{l}(t)=\frac{V_{l}(0)}{1+5tV_{l}(0)/32\pi v_{L}},$ (34)
which implies an instability at the energy scale
$\Lambda_{c}=\Lambda_{0}e^{32\pi v_{L}/5V_{0}}$. The energy scale in a thermal
system can be proportional to the temperature; thus we propose that the
approximate transition temperature of this metallic state scales as
$\Lambda_{c}$.
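For an attractive coupling, $V_{l}(0)=-|V_{0}|$, the instability scale follows
directly from the flow solution (a one-line check): the denominator of
Eq. (34) vanishes at
$t^{*}=\frac{32\pi v_{L}}{5|V_{0}|},\qquad\Lambda_{c}=\Lambda_{0}e^{-t^{*}}=\Lambda_{0}e^{32\pi v_{L}/5V_{0}},$
where $t=\ln(\Lambda_{0}/\Lambda)$, reproducing the quoted $\Lambda_{c}$; the
exponent is negative for $V_{0}<0$, so $\Lambda_{c}<\Lambda_{0}$.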
### V.2 2 Filling surfaces
The two-filling surface case remains to be analyzed. The electrons around
different filling surfaces now have different contributions and can be grouped
into 3 different categories (intra-L, intra-U, and inter-LU)
$\displaystyle
V^{1}({{\mathbf{n}}}_{1},{{\mathbf{n}}}_{3})=u(-{{\mathbf{K}}}_{3}^{L}\bar{\sigma},{{\mathbf{K}}}_{3}^{L}\sigma,-{{\mathbf{K}}}_{1}^{L}\bar{\sigma},{{\mathbf{K}}}_{1}^{L}\sigma),$
(35) $\displaystyle
V^{2}({{\mathbf{n}}}_{1},{{\mathbf{n}}}_{3})=u(-{{\mathbf{K}}}_{3}^{U}\bar{\sigma},{{\mathbf{K}}}_{3}^{U}\sigma,-{{\mathbf{K}}}_{1}^{U}\bar{\sigma},{{\mathbf{K}}}_{1}^{U}\sigma),$
(36) $\displaystyle
V^{3}({{\mathbf{n}}}_{1},{{\mathbf{n}}}_{3})=u(-{{\mathbf{K}}}_{3}^{U}\bar{\sigma},{{\mathbf{K}}}_{3}^{U}\sigma,-{{\mathbf{K}}}_{1}^{L}\bar{\sigma},{{\mathbf{K}}}_{1}^{L}\sigma),$
(37)
for the spin-singlet configuration. Here the superscripts represent the
surface around which the momenta are located. The spin-triplet processes have
exactly the same RG structure and follow the same RG equations. We omit their
definition and work with only the spin-singlet processes.
The RG flow of the BCS couplings still follows Eq.(30). The running momentum
can be chosen freely on both filling surfaces. The integral of the Green
function around each filling surface follows the same process as Eq.(31). The
renormalization group equations are
$\displaystyle\frac{dV^{1}}{dt}=$
$\displaystyle-\frac{5}{32\pi}(\frac{V^{1}*V^{1}}{v_{L}}+\frac{V^{3\dagger}*V^{3}}{v_{U}}),$
(38) $\displaystyle\frac{dV^{2}}{dt}=$
$\displaystyle-\frac{5}{32\pi}(\frac{V^{3}*V^{3\dagger}}{v_{L}}+\frac{V^{2}*V^{2}}{v_{U}}),$
(39) $\displaystyle\frac{dV^{3}}{dt}=$
$\displaystyle-\frac{5}{32\pi}(\frac{V^{1}*V^{3}}{v_{L}}+\frac{V^{3}*V^{2}}{v_{U}}).$
(40)
Under the simplification $V^{1}=V^{2}=V^{3}=V$, the $\beta$
function reduces to a single equation,
$\frac{dV}{dt}=-\frac{5}{32\pi}(\frac{1}{v_{L}}+\frac{1}{v_{U}})V*V.$ (41)
For the case of $d=2$, the instability for attractive $V$ obtains at the
energy scale
$\Lambda_{c}=\Lambda_{0}\exp\left({\frac{32\pi
v_{L}v_{U}}{5(v_{L}+v_{U})V_{0}}}\right).$ (42)
The energy scale in a thermal system can be proportional to the temperature.
This increase in the critical energy scale is consistent with the exactly
calculated pair susceptibility (Zhao et al., 2022), which diverges at a higher
temperature for a small value of $U$ compared with the Fermi liquid result.
The comparison between the different $T_{c}$’s is shown in Fig. (5).
Figure 5: The dependence of $T_{c}$ on superconducting pairing strength
$g=-V$. The curves from top to bottom are: $U<W$ (2 filling surfaces), $U=0$
(Fermi liquid), and $U>W$ (1 filling surface). The inset shows the linear
dependence of $\log(T_{c})$ on $1/g$; the slopes of the curves are in the ratio
$0.8:1:1.6$. These $T_{c}$ exponents are consistent with the $\Lambda_{c}$
estimate.
Due to the fact that the RG analysis only deals with perturbative interactions
around the fixed point at each step, the first-order transition into a
superconductor at finite $U$ (Zhao et al., 2022) is absent.
## VI Final Remarks
We have shown that the analogies with Fermi liquid theory noted in the
introduction are borne out by a thorough RG analysis of the interactions that
can contribute to the HK model. For short-range repulsions, nothing flows away
from HK, thereby establishing it as the stable fixed point depicted in
Fig. (6). While the existence of a new quartic fixed point may seem
surprising, it is ultimately natural given that the momentum structure of the
HK model is identical to that of Fermi liquid theory. As a consequence, all of
the interactions which flow into Fermi liquid theory also flow into this
quartic fixed point. Once again, as in Fermi liquid theory, superconductivity
leads to flow away from the HK fixed point (see Fig. 6). Since Hubbard
interactions are also local in real space, they too cannot perturb away from
the HK fixed point in the metallic state. Consequently, the simplicity of the
HK model belies its true robustness and generality.
Figure 6: Perturbative flow diagram for interactions in a doped HK Mott
system. Short-range interactions regardless of their sign do nothing. Only
pairing leads to flow to strong coupling and the ultimate destruction of the
non-Fermi liquid HK metallic state and the onset of a superconducting state
distinct from that of a BCS superconductor. The nature of the superconducting
state cannot be established based on perturbative arguments but requires the
PYH theory (Phillips et al., 2020; Zhao et al., 2022).
While the similarity of the momentum structure of the theory with that of
Fermi liquid theory plays a key role in the stability of the fixed point, the
relation to Hubbard is ultimately driven by the $Z_{2}$ symmetry breaking of
the interaction in Eq. (1). As long as the interaction is repulsive, breaking
the $Z_{2}$ symmetry noted by Anderson and HaldaneAnderson and Haldane (2001)
leads to single occupancy. The phase diagram for the evolution of HK physics
from a simple Fermi liquid can then be plotted as a function of the singly
occupied region, $\Omega_{1}$, as shown in Fig. (7). Fermi liquids live only
in the region where $\Omega_{1}$ vanishes. The transition line for
superconductivity from a FL state is given by the green line. The entire
region in the $\Omega_{1}-g$ plane represents non-BCS superconductivity and is
governed by the quartic HK fixed point delineated here. In addition to
mediating non-Fermi liquid behaviour, single occupancy in real or momentum
space leads to degeneracy. Degeneracy is a tell-tale sign of competing
ordering tendencies that are well known to obtain in strongly correlated
systems. For example, the pure HK model has a divergent ferromagnetic
susceptibility (See Appendix). What the HK model does is separate the
bifurcation of the spectral function into low and high energy branches per
momentum state, the inherent physics of a Mott insulator, from ordering
tendencies. Such ordering is ultimately needed to free the HK model of the
spurious ground-state degeneracy without generating flow away from the stable
fixed point. Hence, what the HK model ultimately does is offer a way of
treating Mott’s original conception of the gapped half-filled band. Mottness
sets the gap and ordering is secondary as borne out experimentally in all Mott
systems ranging from the vanadates (Qazilbash et al., 2008) to the cuprates
(Cooper et al., 1990; Chen et al., 1991; Uchida et al., 1991).
Figure 7: The general phase diagram of the superconducting instability in the
HK fixed point. As long as the $\Omega_{1}$ region is non-zero, the system
breaks the Fermi Liquid $Z_{2}$ symmetry and flows into HK.
Acknowledgements We thank NSF DMR-2111379 for partially funding this work.
## Appendix
### VI.1 Detailed derivation of the Green function
Here we briefly review the exact partition function and Green function of the
HK model at any filling. We start with the HK Hamiltonian,
$H_{HK}=\sum_{k}\left[\xi_{k}(n_{k\uparrow}+n_{k\downarrow})+Un_{k\uparrow}n_{k\downarrow}\right],$
(43)
where $\xi_{k}=\epsilon_{k}-\mu$. For each momentum sector, the HK Hamiltonian
is built out of commuting parts. Thus, the partition function factorizes, and
for each momentum sector we have
$Z_{k}=1+2e^{-\beta\xi_{k}}+e^{-\beta(2\xi_{k}+U)}.$ (44)
The Heisenberg equations in imaginary time for Fermion $c_{k\sigma}$
annihilation and the number operator
$n_{k\sigma}=c^{\dagger}_{k\sigma}c_{k\sigma}$ are
$\displaystyle\dot{c}_{k\sigma}$
$\displaystyle=\left[H,c_{k,\sigma}\right]=-(\xi_{k}+Un_{k\bar{\sigma}})c_{k\sigma},$
(45) $\displaystyle\dot{n}_{k\sigma}$
$\displaystyle=\left[H,n_{k,\sigma}\right]=0.$ (46)
Thus we have the time evolution of the fermi operator,
$c_{k\sigma}(\tau)=e^{-(\xi_{k}+Un_{k\bar{\sigma}})\tau}c_{k\sigma}(0).$ (47)
The average particle number is
$\braket{n_{k\sigma}}=\frac{e^{-\beta\xi_{k}}+e^{-\beta(2\xi_{k}+U)}}{1+2e^{-\beta\xi_{k}}+e^{-\beta(2\xi_{k}+U)}}.$
(48)
The imaginary time Green function is
$\begin{split}G_{k\sigma}(\tau)&=-\braket{c_{k\sigma}(\tau)c^{\dagger}_{k\sigma}(0)}\\\
&=-\mathrm{Tr}\left[e^{-\beta(H-F)}e^{-(\xi_{k}+Un_{k\bar{\sigma}})\tau}c_{k\sigma}(0)c^{\dagger}_{k\sigma}(0)\right]\\\
&=-\mathrm{Tr}\left[e^{\beta F}e^{-(\xi_{k}+Un_{k\bar{\sigma}})\tau}e^{-\beta
H}(1-n_{k\sigma})\right]\\\ &=-e^{\beta
F}\left[e^{-\xi_{k}\tau}+e^{-\beta\xi_{k}}e^{-(\xi_{k}+U)\tau}\right].\end{split}$
(49)
Performing the Fourier transform with the anti-periodic boundary condition
$G(\tau+\beta)=-G(\tau)$ leads to
$\begin{split}G_{k\sigma}(i\omega_{n})&=\int_{0}^{\beta}d\tau\,G_{k\sigma}(\tau)e^{i\omega_{n}\tau}\\\
&=-e^{\beta
F}\left[\frac{-e^{-\beta\xi_{k}}-1}{i\omega_{n}-\xi_{k}}+e^{-\beta\xi_{k}}\frac{-e^{-\beta(\xi_{k}+U)}-1}{i\omega_{n}-\xi_{k}-U}\right]\\\
&=\frac{1-\braket{n_{k\bar{\sigma}}}}{i\omega_{n}-\xi_{k}}+\frac{\braket{n_{k\bar{\sigma}}}}{i\omega_{n}-\xi_{k}-U}.\end{split}$
(50)
This result is exact for any value of $\mu$ and $U$.
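As a quick numerical sanity check of these single-mode formulas (a minimal
sketch; the parameter values are arbitrary), one can compare Eqs. (44) and
(48) against direct Boltzmann sums over the four states of one momentum
sector:

```python
import numpy as np

# One momentum sector has the Hilbert space {|0>, |up>, |dn>, |updn>}
# with energies 0, xi, xi, 2*xi + U. Verify Z_k (Eq. 44) and <n_{k,sigma}>
# (Eq. 48) against direct sums over these four states.
beta, xi, U = 2.0, -0.3, 5.0

energies = np.array([0.0, xi, xi, 2 * xi + U])
n_up = np.array([0, 1, 0, 1])  # spin-up occupancy in each of the 4 states

Z_direct = np.exp(-beta * energies).sum()
Z_formula = 1 + 2 * np.exp(-beta * xi) + np.exp(-beta * (2 * xi + U))
assert np.isclose(Z_direct, Z_formula)

n_direct = (n_up * np.exp(-beta * energies)).sum() / Z_direct
n_formula = (np.exp(-beta * xi) + np.exp(-beta * (2 * xi + U))) / Z_formula
assert np.isclose(n_direct, n_formula)
print(Z_direct, n_direct)
```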
### VI.2 Weak coupling Expansion and Wick’s Theorem
According to the Gell-Mann-Low formula, the $2n$-point function under
interaction $V$ is given by
$\begin{split}&\braket{Tc_{k_{1}}^{\dagger}(\tau_{1})\cdots
c_{k_{n}}^{\dagger}(\tau_{n})c_{k^{\prime}_{1}}(\tau^{\prime}_{1})\cdots
c_{k^{\prime}_{n}}(\tau^{\prime}_{n})}_{I}\\\
&\quad=\frac{\braket{T\bar{c}_{k_{1}}(\tau_{1})\cdots\bar{c}_{k_{n}}(\tau_{n})c_{k^{\prime}_{1}}(\tau^{\prime}_{1})\cdots
c_{k^{\prime}_{n}}(\tau^{\prime}_{n})\exp\left(-\int_{0}^{\beta}d\tau
V\left[\bar{c},c\right]\right)}_{HK}}{\braket{T\exp\left(-\int_{0}^{\beta}d\tau
V\left[\bar{c},c\right]\right)}_{HK}},\end{split}$ (51)
where $\braket{}_{I}$ is the average using the full Hamiltonian $H_{HK}+H_{I}$
and $\braket{}_{HK}$ is the average under the HK Hamiltonian $H_{HK}$ only.
Since the HK path integral is decomposed into a series of products in the
momentum space, the creation and annihilation operators from different
momentum sectors can be safely factored out. For example, suppose
$k_{1},k_{2},\cdots,k_{n}$ are different from each other, and each
corresponding annihilation and creation operator appears
$l_{1},l_{2},\cdots,l_{n}$ times, then
$\begin{split}&\left\langle
T\prod_{j=1}^{l_{1}}c_{k_{1}\sigma_{1}^{j}}^{\dagger}(\tau_{1}^{j})\cdots\prod_{j=1}^{l_{n}}c_{k_{n}\sigma_{n}^{j}}^{\dagger}(\tau_{n}^{j})\prod_{j=1}^{l_{1}}c_{k_{1}\sigma_{1}^{j}}(\tau^{\prime}_{1}{}^{j})\cdots\prod_{j=1}^{l_{n}}c_{k_{n}\sigma_{n}^{j}}(\tau^{\prime}_{n}{}^{j})\right\rangle_{HK}\\\
&=(-1)^{P}\left\langle
T\prod_{j=1}^{l_{1}}c_{k_{1}\sigma_{1}^{j}}^{\dagger}(\tau_{1}^{j})\prod_{j=1}^{l_{1}}c_{k_{1}\sigma_{1}^{j}}(\tau^{\prime}_{1}{}^{j})\right\rangle_{HK}\cdots\left\langle
T\prod_{j=1}^{l_{n}}c_{k_{n}\sigma_{n}^{j}}^{\dagger}(\tau_{n}^{j})\prod_{j=1}^{l_{n}}c_{k_{n}\sigma_{n}^{j}}(\tau^{\prime}_{n}{}^{j})\right\rangle_{HK},\end{split}$
(52)
where $P$ is the number of permutations performed to separate the fermion
operators.
The deviation from Wick’s theorem can be observed in certain multi-point
correlation functions (e.g., the 4-point function) for which all the momenta
of the fermion operators are identical. This contribution, however, is
thermodynamically suppressed in our calculation. Thus we employ the Feynman
diagram rules and use the exact Green function to calculate the correlation
functions.
## VII Magnetic instability
The HK model at finite $U$ has a singly occupied region, which introduces a
spin degeneracy for each momentum sector. This ground state degeneracy is
extensive in the system size and is thus considered unphysical, although a
singly occupied region generically exists in Mott systems described by the
traditional Hubbard model. The resolution of the degeneracy at the HK fixed
point is ordering, such as superconductivity, ferromagnetism, or a
spin-density wave. The superconducting instability has been discussed in the
main body; here we provide an analysis of a possible magnetic ordering of the
HK model.
With an external magnetic field $B$, the partition function now reads,
$Z(B)=\prod_{k}\left(1+e^{-\beta(\xi_{k}-\mu B)}+e^{-\beta(\xi_{k}+\mu B)}+e^{-\beta(2\xi_{k}+U)}\right).$ (53)
The magnetic susceptibility is obtained from a double derivative of the free
energy,
$\begin{split}\chi&=\left.\frac{1}{\beta}\frac{\partial^{2}\ln Z}{\partial B^{2}}\right|_{B=0}\\\
&=\left.\mu^{2}\beta\sum_{k}\frac{e^{-\beta(\xi_{k}-\mu B)}+e^{-\beta(\xi_{k}+\mu B)}}{Z_{k}(B)}\right|_{B=0}\\\
&=\mu^{2}\Omega_{1}\beta,\end{split}$ (54)
where $\Omega_{1}$ is the extent of singly occupied region. The susceptibility
diverges as temperature goes to zero, signaling a ferromagnetic phase
transition at $T=0$, thus offering an avenue for resolving the ground state
degeneracy. As pointed out in the text, superconductivity also lifts the
degeneracy and leads to flow away from the HK fixed point.
|
progenitor must be compact, arguing for a progenitor stripped of its hydrogen
envelope (MacFadyen & Woosley, 1999). But this does not explain why the GRB-
associated supernovae appear to be stripped of their helium envelopes as well
(GRB-associated supernovae are Ic, not Ib supernovae). Progenitor models also
struggle to produce the needed high angular momenta (Woosley & Bloom, 2006;
Fryer et al., 2007). Many of the progenitor scenarios also predict a large
number of Type I and Type II BL supernovae with baryon-loaded jets that do not
produce prompt GRB emission, which is in tension with the rates of
optically-identified GRB afterglows (Ho et al., 2022). Of related interest is
whether the progenitor systems of cosmological collapsars are the same as
those of ultra-long GRBs and low-luminosity GRBs, as discussed below.
There is a sub-class of CCSN that are relativistic, identified by radio
emission from faster-moving ejecta powered by a central engine (Soderberg et
al., 2010), but the inferred rate appears to be on par with the GRB rate
rather than the large number predicted by these scenarios (Corsi et al.,
2022). Also, while the exact requirements (e.g., the baryon-loading fraction)
to produce a successful GRB jet are still not well understood, we now know GRB
980425 is part of a class of low-luminosity (or sub-energetic) GRBs (Bromberg
et al., 2011). In fact, many GRBs with associated supernovae have low
luminosities (Cano et al., 2017), as both can only be seen to limited
cosmological scales. Some of these low-luminosity GRBs are thought to be
emitted by relativistic shock breakout (Campana et al., 2006; Waxman et al.,
2007; Nakar et al., 2012) while others require a different, non-thermal
emission mechanism, likely the typical internal dissipation mechanism of GRBs
(Chand et al., 2020). To make things more complex, for normal CCSN,
relativistic CCSN, low-luminosity GRBs, and fully successful collapsars, the
ejecta velocities and kinetic energies of the slow-moving ejecta component are
comparable, while the same parameters for the faster-moving ejecta span a
continuum over orders of magnitude (Margutti et al., 2014).
One of the biggest unsolved mysteries of collapsar or collapsar-like GRBs is
the nature of the progenitor. Given the low rate of GRBs and BL supernovae,
jet-driven supernovae are clearly rare, so the process that produces them must
be uncommon. Spin-down of giant stars could explain why no Type II supernovae
are known to have relativistic ejecta (see Smith et al. 2012 for an argued
jet-driven Type II), but it is more difficult to explain why He-stars also do
not produce long GRBs. It has been argued that a large
fraction of jets are “choked” inside the star (or “failed”; Bromberg et al.
2012; Lazzati et al. 2012), which could be because they become so baryon-
contaminated that the shock is not relativistic; however, there is no decisive
observational evidence that any given supernova truly harbors a choked jet.
While the discovery of collapsars and low-luminosity GRBs is the domain of GRB
monitors, optical facilities are key to identifying supernovae of interest.
These include searching for BL Type Ib and Ic supernovae. While this began
nearly 20 years ago (Berger et al., 2003), modern optical facilities and the
TDAMM ecosystem for follow-up to characterize the fast ejecta have vastly
increased identification rates of these events (e.g. Corsi et al.,
2017; Ho et al., 2020a; Corsi et al., 2022).
To add another creature to this mix, the discovery of AT2018cow (Prentice et
al., 2018) and similar events (Coppejans et al., 2020; Ho et al., 2020b;
Perley et al., 2021; Yao et al., 2022) led to the recognition of “luminous
fast blue optical transients” (LFBOTs; Margutti et al. 2019; Metzger 2022). It
is generally accepted that these events involve an active compact object or
central engine (Ho et al., 2019; Margutti et al., 2019), with mildly
relativistic speeds in some cases (Coppejans et al., 2020; Ho et al., 2020b).
These LFBOTs have been argued to involve shocked jets and may also be key
neutrino and VHE sources (Fang et al., 2019; Guarini et al., 2022). However,
no prompt emission has been detected.
The nature of LFBOTs is uncertain. They may arise from massive stars, e.g.,
failed supernovae (Perley et al., 2019; Margutti et al., 2019) or stellar-mass
tidal disruption events (TDEs) (Kremer et al., 2021; Metzger, 2022).
Alternatively, they may represent intermediate mass black hole TDEs (Kuin et
al., 2019; Perley et al., 2019). In order to understand the origin of LFBOTs,
continuous and sufficiently sensitive coverage is necessary to recover their
shock breakout or low-luminosity GRB emission or to fully rule them out.
Probing the evident continuum from normal CCSN through relativistic CCSN, low-
luminosity GRBs, and ultrarelativistic collapsars must be a key goal in TDAMM
this decade given the broad range of facilities coming online to identify and
characterize these events. Understanding whether and where X-ray flashes,
LFBOTs, and other exotic supernovae fit into this picture will increase our
understanding of both supernovae and GRBs. The exact questions to be answered
remain vague given our limited knowledge of these sources. Mapping out the
relation between supernovae and GRBs explores one of only three known
supernova explosion mechanisms. Understanding why the lower-energy forms of
engine-driven supernovae appear similar to traditional CCSNe will undoubtedly
drive advances in modeling of these events. Observation and characterization of
shock breakout will provide the most direct diagnostic and understanding of
the structure of massive stars near the end of their life. Understanding when
the escaping jets transition from only shock breakout to the traditional GRB
emission gives a handle on the necessary conditions for jet formation,
propagation, and energy dissipation.
The high-energy monitors are responsible for the detection and
characterization of the initial prompt signatures from these events. For
ultrarelativistic collapsars, the signature is that of a traditional long GRB.
The signatures of low-luminosity GRBs are generally softer, longer, and less
luminous. For relativistic supernovae, the shock breakout will have peak
energies around a few keV and durations of hundreds or thousands of seconds.
For normal CCSNe, the shock breakout can peak as low as $\sim$0.1 keV.
Identifying a population of these events is critical to meeting the Decadal
recommended TDAMM science. The information carried in these signals is
impossible to recover through other means: observations of the shock breakout
in the UV (as it cools below the initial $\sim$X-ray temperature) will probe
the circumburst material surrounding the star, but only the X-ray signature
can probe the stellar structure itself. We do not understand massive stars at
the end of their lives, and this is the only known method to directly study
them.
Successful collapsars are well served by a range of a few tens of keV to a few
MeV. Low-luminosity GRBs often have peak energies between 10 and 100 keV.
X-ray flashes, likely part of the continuum towards less successful jets, have
been observed between $\sim$1–10 keV; this is also the range appropriate to
study the shock breakout of relativistic supernovae. Observing normal
supernovae pushes the requirement down to 0.1 keV. Ideally these events
are reported fast enough to recover the UV signatures of the shock-cooling as
well as the potential VHE signatures (discussed later), requiring
localizations of tens of deg$^2$. For characterization of more distant events,
this precision requirement is more stringent.
Sensitivity requirements are generally similar to that of studying short GRBs.
Existing sensitivity is sufficient, but deeper sensitivity, similar to BATSE
from 30 years ago, would provide far greater detection rates and deeper
characterization of detected events. The study of long GRBs requires greater
background stability on the order of their duration, with a median value of 30
s, but a tail extending to several hundred and sometimes thousands of seconds.
This timescale is well matched to the durations of the initial shock breakout
emission of supernovae.
As mapping out the GRB to supernova continuum is a key goal, the GRB monitors
must be designed as partners to the UV, optical, and infrared time-domain
surveys. While fortunate events discovered by these facilities may have rise
times constrained to tens of minutes, a more typical value is on the order of
a day, and a less optimistic scenario is an uncertainty of a few days. This
drives the need for all-sky coverage (or at least coverage of the full night
sky), sensitivity, and long contiguous viewing windows. The sensitivity must
be sufficient to exclude traditional long GRB emission of these events, which
is ideally an order of magnitude more sensitive than Konus-Wind, currently the
most sensitive all-sky instrument. The contiguous viewing windows should be
longer than a week, as quantified in Section 3.3. This additional dimension to
the observing fraction requirement is specific to longer-duration discovery
windows.
#### 2.3.2 Dirty Fireballs and Optically-identified Transients
Transients resembling the optical afterglows from collapsar GRBs are now
routinely detected via their afterglow signatures independently of prompt GRB
emission (Cenko et al., 2015; Stalder et al., 2017; Bhalerao et al., 2017; Ho
et al., 2020; Andreoni et al., 2021; Ho et al., 2022). Ho et al. (2022) show
that the majority of optically-identified GRBs either have associated prompt
GRB emission or the GRB upper limits are insufficient to exclude a prompt
signature. This implies that, if dirty fireballs have a similar energy per
solid angle as clean fireballs, the rate of dirty fireballs does not exceed
the rate of GRBs. However, it is possible that less energetic jets may be more
likely to become baryon loaded, and that it is more difficult for a
mass-loaded jet to escape the progenitor star. Therefore, absence of evidence
is not evidence of absence: dirty fireballs may exist, but with less luminous
optical afterglows than clean fireballs. Ongoing optical searches, and particularly
searches for soft X-ray transients (expected from a dirty fireball), will
resolve this question.
Matching the discussion for general engine-driven supernovae, the start times
of these transients are sometimes as fast as tens of minutes but more often
are on the timescale of a day, given the nightly observing cadence of many
facilities. Confirming an associated prompt GRB or providing constraining
upper limits requires contiguous viewing intervals over the full sky covering
the entire start time uncertainty. Determining if there is an associated GRB
requires checking the data of several facilities with varying amounts of
coverage, livetime, and sensitivity. Comparison of optically-identified with
gamma-ray-identified collapsar jets is informative on whether they arise from
the same underlying population.
#### 2.3.3 Origin of Short GRBs
As noted, recently there have been convincing cases of long GRBs that arose
from a merger origin. Conversely, there are short GRBs that arise from
collapsars, one of which is GRB 200826A (Ahumada et al., 2021). The burst was
discovered in the prompt phase by Fermi-GBM and in the afterglow phase by ZTF.
Despite the $\sim$1 s duration, the expectation was a merger origin, but
follow-up identified an excess inconsistent with a kilonova origin yet
consistent with a supernova.
The prompt duration is generally shorter than is expected to be possible from
collapsars. One explanation is the jet was only partially successful at
escaping. It is also feasible that the inferred duration is incorrect due to
the tip-of-the-iceberg effect. This drives a need for vastly improved
sensitivity over Fermi-GBM. These events cannot be particularly common
(otherwise Swift would have identified more), but it is unclear how to
prioritize follow-up of these events. However, the focus on characterizing short GRBs
will be beneficial in finding more of these events, which provide insight into
what makes a successful collapsar jet.
#### 2.3.4 Ultra-long GRBs
On the other end of the collapsar prompt GRB duration distribution, ultra-long
GRBs have particularly long durations (Levan et al., 2013). The exact
delineation from normal long GRBs is not agreed upon, but is on the order of
1,000-3,600 s. The most extreme event surpasses 10,000 s in duration as seen
by Konus-Wind (Golenetskii et al., 2011), with evidence from follow-up
observations for a prompt duration of up to 25,000 s (i.e., 7 hours; Gendre et
al., 2013). It is generally believed that normal long GRBs arise from compact
Wolf-Rayet progenitors, which should not be able to power the longest duration
bursts. It is an open question if ultra-long GRBs arise from different stellar
progenitors, including some cases where no supernova is identified to deep
limits. Intriguingly, there also appear to be different types of ultra-long
GRBs, e.g. a precursor pulse followed by quiescence before the main emission
(e.g. GRBs 160625B, 221009A) and others that show a single pulse with an
exponential decay, which may be indicative of external shock emission from jets that
did not have internal dissipation (see discussion in the appendix of Kann et
al., 2018).
Resolving these questions is key to understanding collapsars. That is, the
mechanism that powers their supernovae is the same, but identifying
differences in the prompt emission, shock breakout, or supernova signals will
lead to an understanding of all the types of progenitor stars capable of
ending their lives this way and which stars are incapable of doing so. For
example, why has a Type Ib supernova never been found following a long GRB? It
is perhaps related to the size of the star, which may suggest these events
appear as LFBOTs or other engine-driven supernovae. This is another step to
understanding the exotic zoo of relativistic transients.
These events are identified either in cases where they remain bright enough to
be recovered by Swift-BAT over multiple orbits (Lien et al., 2016) or through
the unbiased coverage by Konus-Wind (Svinkin et al., in prep). Having two
large GRB monitors with background stability over several hours (requiring a
non-LEO orbit) would allow for the discovery and confirmation of additional
GRBs. This would determine whether the ultra-long sample is fully distinct or
the extreme tail of the general long GRB population based on prompt properties
(i.e. via duration).
An additional capability that would be a large advancement in the study of
these sources would be rapid classification of a given burst as belonging to
this class. In a few cases, Swift-BAT has re-triggered on the same burst over
multiple orbits, but a dedicated search identifies additional events (Lien et
al., 2016). Konus is capable of identifying these events, but the high
downlink latency generally prevents follow-up at sufficiently early times.
#### 2.3.5 High Redshift GRBs
High-redshift GRBs are one of the most sought after events as they provide a
key cosmological probe of the high-redshift universe (Lamb & Reichart, 2000).
They have been detected out to a redshift of $\sim$9 (Salvaterra 2015; Figure
6), encompassing the era of reionization and beyond. In principle, they can be
recovered deeper into the universe than other objects, e.g., the BOAT could be
seen with existing facilities to a redshift of $\sim$15-20 (Frederiks et al.,
2023).
Figure 6: The observed redshift distribution of long GRBs (from De Cia, 2011,
updated through 2020).
The featureless, power-law behavior of GRB afterglows and their extreme
brightness make high-z GRBs the best probes to understand the epoch of
reionization, which could be mapped with tens of events (Tanvir et al., 2019).
They also uniquely allow follow-up observations to study small, early
galaxies. GRB afterglow measurements also provide a direct measure of the
optical depth of the host galaxy to Lyman continuum radiation and thereby
constrain the fraction of UV emission that escapes to reionize the
intergalactic medium (Kawai et al., 2006).
Spectral observations of the afterglow measure absorption lines from elements
in the host galaxy and the intervening intergalactic medium, probing the
chemical enrichment history of the universe (Hartoog et al., 2015). Follow-up
observations of the host galaxy after the afterglow has faded allow study of
the mass-metallicity relation, enabling tests of the earliest phases of galaxy
formation (Laskar et al., 2011). A sample of these events also allows
inferences on the earliest phases of star formation rate evolution (Robertson
& Ellis, 2011). Lastly, some of these events may arise from the first stars in
the universe.
Several missions have been conceptualized to specifically study and identify
high redshift GRBs. These concepts generally achieve a low-energy threshold of
a few keV with wide-field monitors to handle the effects of extreme
cosmological redshift and design toward extremely sensitive instruments,
necessary to identify events at these distances. High redshift GRBs comprise
perhaps 5-10% of the total GRB population. In order to trigger the large
near-infrared telescopes like JWST and the future extremely large telescopes,
these events must be identified rapidly, preferably within 15 minutes. Thus, these
concepts also generally have on-board infrared telescopes to make this
measurement and to recover the $\sim$arcsecond scale localization for spectral
follow-up.
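To illustrate why these concepts push to low energy thresholds and long trigger timescales, the sketch below applies the standard cosmological scalings (observed peak energy suppressed by $1+z$, duration stretched by $1+z$) to hypothetical rest-frame burst properties; the fiducial numbers are assumptions for illustration only.

```python
# Standard redshift scalings for prompt GRB observables; the rest-frame
# values below are assumed fiducial numbers, not measurements.
def observed(e_peak_rest_kev, t90_rest_s, z):
    """Observed peak energy and duration for a burst at redshift z."""
    return e_peak_rest_kev / (1.0 + z), t90_rest_s * (1.0 + z)

e_obs, t_obs = observed(e_peak_rest_kev=300.0, t90_rest_s=20.0, z=9.0)
print(f"E_peak,obs ~ {e_obs:.0f} keV, T90,obs ~ {t_obs:.0f} s")
# -> ~30 keV and ~200 s: high-z bursts appear softer and longer,
#    motivating few-keV thresholds and wide-field, sensitive monitors.
```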
#### 2.3.6 Very-High-Energy Emission
The search for Very-High-Energy (VHE) EM emission from GRBs has been a holy
grail in the field for decades. The first potential detection occurred in 2000
with Milagrito (Atkins et al., 2000) but it took nearly 20 years for
unambiguous proof. We separate our discussion into the two main types of VHE
detectors: Imaging Atmospheric Cherenkov Telescopes (IACTs) and water
Cherenkov arrays. We first focus on detections by the IACTs.
IACTs are pointed instruments with fields of view of a few degrees but
excellent sensitivity above tens of GeV. IACT arrays react to GRB detections
by pointing at or tiling the localization regions disseminated by satellites,
resulting in an irreducible delay with respect to the prompt emission. The
first publicly announced VHE GRB was GRB 190114C, discovered by Swift-BAT and
Fermi-GBM, which was detected in the TeV energy range by the MAGIC Telescopes
during the early afterglow (MAGIC Collaboration et al., 2019b). The TeV
emission provided the first potential evidence for inverse Compton radiation —
more specifically, synchrotron self-Compton — which has been long theorized to
be an emission channel during both the prompt and afterglow phases of GRBs.
This synchrotron self-Compton component would provide a measurement of
microphysical parameters of the jet which remain otherwise unknown after
decades of study. Significantly, this GRB has also opened a new channel for
high-redshift GRB identification.
Although GRB 190114C was a particularly bright GRB, the VHE detection of the
early afterglow phase was only possible due to a rapid autonomous IACT
response to the well-constrained localization provided by the _Swift_ -BAT
through GCN. However, late-time observations by IACTs can also be fruitful, as
GRB 190829A (Abdalla et al., 2021) remained detectable in the VHE regime up to
56 hours after the prompt emission, despite being a low-luminosity burst.
Curiously, the VHE emission was more consistent with a synchrotron than a
synchrotron self-Compton origin. If true, this would indicate a strong
incompatibility with the standard one-zone scenario. VHE GRBs therefore offer
a unique opportunity to move beyond the standard simplified scenario and
explore GRB jet properties such as the magnetic field structure, as well as
relativistic shocks in general.
The detection of high-energy emission by the LAT following GRB 211211A, one of
the long GRBs that arises from a compact merger origin, suggests an external
Inverse Compton (EIC) origin (Mei et al., 2022; Zhang et al., 2022). EIC is
where the electrons are from the jet but the seed photons are from another
source, in this case possibly from the kilonova. This detection brings up the
feasibility of recovering EIC from nearby GRBs, e.g. low-luminosity GRBs.
The next-generation IACT array, the Cherenkov Telescope Array (CTA), will be
roughly 10$\times$ as sensitive as the current IACTs and, in sub-hour
timescales, 10,000$\times$ more sensitive in the 75-250 GeV range than Fermi-
LAT (Fioretti et al., 2019). With $\sim$6 VHE detections of GRBs by the
existing IACTs in the past $\sim$5 years and a handful of detections of
$\sim$100 GeV photons by the Fermi-LAT in 15 years of observing, CTA could
detect tens of GRBs per year, so long as it has fast access to sufficiently
accurate localization information. The largest of the telescopes, which have
the lowest energy ranges (25-150 GeV) and are therefore more sensitive to
extragalactic sources (as the highest energy photons are preferentially
absorbed by the Extragalactic Background Light), will be able to repoint to
any position in the sky within 20 s of reception of an
alert (https://www.eoportal.org/other-space-activities/cta). These Large-Sized
Telescopes will have fields of view of 4.3$^{\circ}$ radius, and for poorly localized
sources, CTA can cover a larger region with lower depth by staggering the
pointings of its telescope array, allowing for an adaptive follow-up program.
Thus, CTA can follow-up initial localizations on the order of tens of square
degrees and have robust association of low significance VHE signals through
comparison with higher latency precise localizations of the prompt signal. For
the brightest GRBs, CTA will have the most sensitive view into the GeV energy
band; for the larger number of more moderate GRBs, CTA will provide the
statistics necessary to better characterize the population of VHE GRBs.
The water Cherenkov telescopes are wide-field survey telescopes, and are the
better instrument for VHE observations during the prompt emission. They cover
vast regions of the sky (few sr) at a much shallower depth than the IACTs, but
targeting a higher energy (above a hundred GeV) and with the advantage of
having a much larger duty cycle, as they do not require the dark observing
conditions that IACTs do. The premier telescopes are the High-Altitude Water
Cherenkov Observatory and the Large High Altitude Air Shower Observatory
(LHAASO), both in the Northern Hemisphere, with a proposed southern
installation called the Southern Wide-field Gamma-ray Observatory. Thus far,
the only detection of a GRB by a water Cherenkov telescope is of GRB 221009A
by LHAASO (LHAASO Collaboration et al., 2023), with confirmed photon energies
of up to 7 TeV and tentatively as high as 18 TeV (Huang et al., 2022). As the
GRB went off in the LHAASO field of view, this also marked the first VHE
detection of a GRB during the prompt emission, though the smooth nature of the
VHE emission was more consistent with being due to forward shock emission.
While this has been the sole detection of a GRB by a water Cherenkov telescope
to date, the enormous number of counts seen by LHAASO suggests other bright
events may be recovered if they are fortuitously observable by these arrays.
If all three water Cherenkov observatories are in operation, then a large
fraction of the sky will be simultaneously observed at any time. These arrays
do not require prompt localizations, only delayed localizations of sufficient
accuracy for association.
However, a VHE detection on its own — whether by IACTs or water Cherenkov
telescopes — is not sufficient to constrain GRB properties such as the
magnetic field structure or the microphysical parameters of the jet. Rather,
this is only achievable when there are longer wavelength observations to
provide the crucial context for the VHE measurements. In particular, sensitive
instruments operating in the UV to GeV range will be absolutely crucial to
characterize the true nature of the transition between the synchrotron and
inverse Compton components. Thus, the prompt localizations need to meet the
needs of telescopes across the EM spectrum.
#### 2.3.7 Origin of Neutrinos and Ultra-High Energy Cosmic Rays
There are strong connections of ultra-high energy cosmic rays (UHECRs) with
particle physics and high energy astrophysics. Three messengers, cosmic rays,
gamma-rays, and high-energy neutrinos (HEN), are linked, providing
complementary information about the same underlying physical phenomena in
astrophysical environments. A golden multimessenger era lies ahead of us and
with the next-generation cosmic ray experiments it will be crucial to seek out
neutral ultra-high-energy particles related to transient events in order to
acquire insights into a number of nature’s most energetic processes.
Among the outstanding questions in these interdisciplinary fields is the
origin of UHECRs, of which GRBs are prime suspects. Assuming they produce
UHECRs, then HEN emission is also expected as the UHECRs will collide with
photons (Waxman & Bahcall, 1997). Unlike UHECRs, HENs travel in straight
lines, undeflected by magnetic fields, making it possible to pinpoint their origin. A
HEN signal from a GRB would be considered “smoking gun” evidence for GRBs
being hadronic accelerators.
HEN emission from GRBs has yet to be detected and GRBs have been ruled out as
the main contributor to the all-sky HEN flux (Abbasi et al., 2022). The most
optimistic models for HEN emission have also been ruled out, but more
realistic models are yet to be excluded (Aartsen et al., 2015). Measurements
or limits on the HEN flux of a GRB can be used to constrain the parameter
space of model parameters, such as the baryon loading factor (Murase et al.,
2022). In cases where there is a detection of VHE gamma-rays from a nearby
GRB, a limit or measurement of the HEN flux can set a constraint on how much
of the VHE gamma-rays are from hadronic versus leptonic origins.
The IceCube Neutrino Observatory (Aartsen et al., 2017) is currently the most
sensitive experiment to transient sources of HENs. The most important factor
for improving the sensitivity is increasing the number of GRBs with
localizations near the angular resolution of IceCube’s through-going tracks,
${\cal O}(1^{\circ})$. IceCube is undergoing an upgrade and pursuing
support for a second generation instrument. Similarly, in Europe ANTARES is
being succeeded by KM3NeT. Baikal-GVD continues improvements in its analysis.
Future treatment of these telescopes as a single effective instrument may
further increase sensitivity. As these non-EM telescopes advance, they require
continued capability in the detection and localization of prompt GRBs.
Related to these works is exploration of whether GRBs can be the origin of
UHECRs even if they do not substantially contribute to the HEN flux. For
example, in the ICMART model of prompt GRB dissipation (Zhang & Yan, 2010) the
internal dissipation radius is substantially larger than in other models,
where the lower density will result in fewer proton interactions, allowing for
contribution to the UHECR flux without producing a luminous HEN signal.
Continued joint observations of the high-energy sky in neutrinos and gamma-
rays is key to understanding the prompt emission mechanism of GRBs, both in
population (Abbasi et al., 2022) and individual studies (Abbasi et al., 2023).
The IceCube observations of GRB 221009A and upper limits from analyses
designed for lower energy studies have also proven informative in new ways
(Abbasi et al., 2023). For example, Murase et al. (2022) show that the non-
detection of GeV neutrinos from IceCube places constraints on the bulk Lorentz
factor of the jet at an earlier phase than has been done before, or requires
an even purer (lower baryon content) jet than previously shown. As analysis
techniques improve in the existing and forthcoming neutrino telescopes, we
will continue to learn more about GRBs, provided sufficient discovery missions
are still observing.
While the above discussion focused on HENs from fully successful collapsars,
the most promising neutrino source here may be so-called “choked” GRBs. These
events would produce significant amounts of neutrinos given the far higher
proton interaction likelihood (Mészáros & Waxman, 2001). Choked GRBs likely
sit in the continuum from normal CCSN to ultrarelativistic collapsars, with
fully choked GRBs appearing as relativistic supernovae and partially choked
GRBs as low-luminosity GRBs. LFBOTs, X-ray flashes, and similar may also be
key neutrino sources. If true, optical identification of these events may
preclude robust association of related neutrinos due to the large temporal
uncertainty on the EM side. The precise timing and spatial information from
detection of shock breakout or GRB emission of a choked GRB will enable
association of even untracked high energy neutrinos. This should be a key goal
of the new multimessenger era.
When speaking strictly about understanding the origin of neutrinos, there is
no strong alert latency driver for high energy EM monitors because the HEN
telescopes are effectively continuously observing the entire sky. It is
sufficient to provide precise localizations, necessary for robust association
of any neutrino signature, in high latency. However, should such a joint
detection occur, it would be one of a few known multimessenger source types
and complete temporal and spectral characterization across the full EM spectrum
would be critical. The GRB monitor requirements are thus those described above
for general follow-up of collapsars. Follow-up would be aided by the neutrino
localizations, though tracked IceCube events require tiling from narrow field-
of-view facilities. It is reasonable to assume the first neutrino detections
of GRBs will occur for nearby events, which supports all-sky coverage even
with less sensitivity to GRBs, though low-luminosity events may be of
particular interest in this case.
#### 2.3.8 Gravitational Waves
The relationship between collapsars and long GRBs is in a state similar to
that between BNS mergers and short GRBs prior to GRB 170817A. A GW observation
associated with a collapsar, identified by the long GRB, would provide a
watershed of new insights. As the sensitivity of the GW network continues to
improve, such a possibility becomes more realistic. Some specific models for
GW emission, for example, the accretion driven instability model (van Putten
et al., 2014), suggest a long GRB at $\sim 50$ Mpc would already be observable
by the current GW networks (Abbott et al., 2022). Only a small sample of long
GRBs within this distance has been observed to date; however, the rate of
redshift determinations is very low. For example, of the 86 GRBs used in the
analysis for Abbott et al. (2022), none had a measured redshift. Moreover,
gamma-ray brightness is a poor indicator of distance; therefore, the GW
analyses follow up all GRBs, since any one of them may be within the
detectable horizon. While it is expected that detections will not be possible
until the 3G era, observations prior to that epoch will provide some of the
only direct information on what occurs in the interior of collapsars. An
improved sky localization from IPN or other mission could facilitate a follow
up redshift determination to bolster the case for a putative GW detection. The
requirements on high energy monitors are similar to those of multimessenger
searches with high energy neutrinos, as the IGWN also has archival all-sky
observations (when operating). Precise, high-latency alerts are sufficient for
GW-GRB specific science and the GW-detected events will likely be nearby,
favoring all-sky coverage even with lower sensitivity. Similarly,
identification of these events rapidly enough to allow successful follow-up
and detailed characterization is highly beneficial. We additionally note the
effects of failed jets may be detectable in GWs (Gottlieb et al., 2022), but
prompt EM identification likely requires X-ray recovery of the shock breakout.
#### 2.3.9 Polarization and the Blandford-Znajek Mechanism
An additional diagnostic in astronomy is polarization, providing insight into
geometry and ordered magnetic fields. Both geometric and intrinsic
polarization may be expected in prompt and afterglow emission. Geometric
polarization will arise when the viewing region crosses structure from the
jet. Intrinsic polarization may arise from the creation of ordered magnetic
fields at the jet launch site by the central engine; these will become
disordered due to turbulence as the jet propagates outwards.
From time-integrated measurements in the prompt phase alone it is not possible
to determine the origin of a significant polarization signal; however, if the
majority of GRB jets have highly ordered magnetic fields and their prompt
emission mechanism is synchrotron, it is possible to distinguish this
population from the rest of the GRBs through a population of prompt polarization detections
(Toma et al., 2009). This requires a GRB polarimeter and determination of the
peak energy for a sample of at least tens of GRBs. External measurement of
spectra is beneficial for a more confident determination of polarization.
To better constrain the prompt emission mechanism and understand jet physics
and evolution, time-resolved polarization is necessary. Historically, theory
has focused on time-integrated measurements as a substantial number of photons
are required for robust spectropolarimetric fits. With the next generation of
polarimeters being built and proposed it may be possible to perform time-
resolved fits for a number of GRBs. Theoretical and simulation work can
explore distinguishable time-resolved spectropolarimetric behaviors, allowing
for an advanced understanding of the prompt GRB emission mechanism and of
microphysical jet parameters (e.g. Deng et al., 2016; Gill & Granot, 2021;
Parsotan & Lazzati, 2022).
Very early observations of GRB afterglow may be able to capture intrinsic
polarization and its destruction as the jet propagates, as possibly seen in
GRB 120308A (e.g. Steele et al., 2017). At later times, afterglow polarization
measurements generally capture only geometric polarization, which can be used
to understand jet structure in concert with observations of jet breaks.
Bursts with sensitive polarization observations in both the prompt and
afterglow phase are of particular scientific interest (Negro et al., 2023). If
follow-up observations of the afterglow find low polarization, but the prompt
emission has high polarization, then the prompt phase must arise from a jet
with ordered magnetic fields. As ordered fields will only be destroyed during
propagation, they provide insight into the central engine. Jets formed by black
hole and neutrino-antineutrino annihilation central engines (Eichler et al.,
1989; Mészáros & Rees, 1992) will tend to carry most of their energy in the
matter entrained in the jet and will have low prompt polarization. Jets with
ordered magnetic fields arise either from magnetar central engines or black
holes whose energy is extracted to power the jet via the Blandford-Znajek
process (Blandford & Znajek, 1977).
Bursts with magnetar central engines have a maximal energy output set by the
rotation energy imparted on the object. Bursts whose total energy, being the
sum of the energy released as light in the prompt phase and supernova (or
kilonova) phases as well as the kinetic energy in the jet and omnidirectional
ejecta, exceeds this limit can only be powered by black holes. Thus, an energetic burst with
prompt polarization and no or little afterglow polarization would be direct
proof of the Blandford-Znajek mechanism. While it is known that black holes
are voracious consumers of surrounding material, such a result would be proof
that black holes occasionally return significant power to the universe, i.e.
the first proof of a Penrose process (Penrose & Floyd, 1971). This is one of
the few specific questions listed in the Astro2020 Decadal report.
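For scale, the magnetar energy ceiling invoked above can be estimated from the rotational energy of a millisecond neutron star. The sketch below uses standard fiducial values (moment of inertia $\sim 10^{45}$ g cm$^2$, near-breakup spin period of 1 ms), which are assumptions for illustration rather than values quoted in this document.

```python
import math

# Rough magnetar energy budget: E_rot = (1/2) I Omega^2, with assumed
# fiducial values for a near-breakup millisecond neutron star.
I = 1.0e45       # g cm^2, fiducial neutron-star moment of inertia
P = 1.0e-3       # s, near-breakup spin period
omega = 2.0 * math.pi / P
E_rot = 0.5 * I * omega**2  # erg
print(f"E_rot ~ {E_rot:.1e} erg")
# -> ~2e52 erg; a burst whose summed prompt, supernova/kilonova, and
#    kinetic energies exceed this cannot have a magnetar central engine.
```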
#### 2.3.10 Prompt Emission Mechanism
The prior discussions on understanding the origin of the VHE signals, seeking
neutrino counterparts, and applying polarimetry to the study of GRBs in the
prompt and afterglow phase are deeply related to understanding the prompt GRB
emission mechanism. The mechanism, or mechanisms, has escaped confident
determination for decades (see Zhang, 2018, for a general review relevant for
this section). The current state of the field is an on-going debate on whether
a low-energy excess is due to an additional thermal component or an additional
break in a synchrotron spectrum. Working out if the jets are Poynting flux or
matter dominated, how they are launched, whether they have ordered magnetic
fields, and understanding their microphysical parameters are all key
diagnostics to understand the viable prompt GRB emission mechanisms. A sample
of these well-studied events is necessary to understand if more than one
mechanism is viable in distinct GRBs, or even if multiple mechanisms
contribute to the signal seen in individual bursts. The GRB monitors need to
enable rapid, broadband follow-up of the EM spectrum, provide precise
localizations (even in high latency) for the non-EM messengers, and cover a
broad range of the prompt emission, ideally $\sim$1-10,000 keV. Coverage up to
20 GeV would be well-suited to determine the inverse Compton crossover region
in partnership with CTA.
Long GRBs are often easier to use than short GRBs for the study of the prompt
emission mechanism as they are generally brighter, as previously discussed.
Additionally, their durations allow for the possibility to observe the prompt
phase across the EM spectrum (noting it is inherently covered by the neutrino
monitors and IGWN when they are observing). This can occur for any GRB if it
is in the field of view of a given survey instrument, such as the LHAASO
observation of GRB 221009A or the TESS observation of GRB 230307A. However, it
is unlikely that this will occur for multiple survey instruments at the same
time. Swift and Fermi alerts can be received and distributed to the community
on the order of 10–30 s. If observations begin within a minute after trigger,
they can catch the prompt phase of the 20% of long GRBs that last longer than
this. The first success was with a wide-field
monitor observing GRB 990123 (Akerlof et al., 1999), with one case of early
polarimetry observations (Troja et al., 2017), and in total, on the order of tens of
events have been detected with prompt optical observations. Recovery of the
prompt signal from optical (or lower) up to TeV energies would be a stringent
test of any prompt emission model. This requires immediate alerts. Arcminute
localizations are the ideal scenario, and are perhaps the only way to recover
the prompt signature in more than one follow-up facility at a time.
#### 2.3.11 Lorentz Invariance Violation
Observations of GRBs provide some of the most sensitive search space for
Lorentz Invariance Violation (LIV), motivated by the goal to test General
Relativity and the search for a quantized field theory of gravity, and is thus
a path towards a grand unified theory (Burns, 2020). Here we use the language
of the Standard Model Extension Framework (Kosteleckỳ & Russell, 2011), which
separates potential LIV into sectors (gravity, neutrino, photon, matter),
dispersive and non-dispersive, birefringent and non-birefringent, and
directional dependent violations.
Perhaps the most well known LIV test using GRBs is using GRB 090510 where the
constraints fall into the dispersive, non-birefringent, photon case (e.g.
Vasileiou et al., 2013). However, the most precise test of LIV with GRBs
arises from the detection of polarization, as birefringent LIV would destroy a
coherent signal, with the constraining power scaling with distance (e.g. Stecker, 2011).
In the new age of multimessenger astronomy, these and other EM limits are key
to determination of LIV in the other sectors. For example, the speed of
gravity was determined from the known speed of light, and dispersive LIV tests
in gravity were determined from the far more precisely known limits on the photon
sector with GW170817 and GRB 170817A (Abbott et al., 2017). Similar results
have been calculated for SN 1987A (Ellis et al., 2008) and the neutrinos seen
in coincidence with a blazar flare from TXS 0506+056 (Ellis et al., 2019).
For dispersive tests, coverage of the prompt emission over several orders of
magnitude is critical. Improvements require a range broader than that achieved
for GRB 090510 ($\sim 10$ keV to $\sim$10 GeV) or a shorter timescale ($\sim$1
s), favoring the broadband recovery of prompt emission described in the past
subsections. For more precise birefringent tests, sensitive polarimetric
observations are required to recover polarization from events deeper into the
universe.
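To give a sense of the scales involved in dispersive tests, the sketch below evaluates the leading-order (linear, subluminal) LIV time lag $\Delta t \approx (\Delta E/E_{\rm QG})\,D/c$, neglecting the cosmological redshift integral that a rigorous analysis would include; the distance and photon energies are illustrative assumptions.

```python
# Order-of-magnitude linear (n = 1) dispersive LIV time lag, neglecting
# cosmological corrections. All input values are illustrative assumptions.
E_QG_GEV = 1.22e19                     # Planck energy in GeV
GPC_LIGHT_TIME_S = 3.086e25 / 2.998e8  # light-travel time of 1 Gpc, s

def liv_lag_s(delta_e_gev, distance_gpc):
    """Delta t ~ (Delta E / E_QG) * D / c for subluminal linear LIV."""
    return (delta_e_gev / E_QG_GEV) * distance_gpc * GPC_LIGHT_TIME_S

# Lag between a 10 GeV photon and a ~keV photon over 2 Gpc:
print(f"{liv_lag_s(10.0, 2.0):.2f} s")
# -> ~0.17 s, comparable to prompt variability timescales, which is why
#    broad energy coverage and short pulses yield the tightest limits.
```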
#### 2.3.12 Summary of Capability Requirements
As expected, a great deal of the science that can be achieved through
collapsar detections requires precise localizations. High-latency, precise
localizations allow deep multimessenger searches for neutrinos and GWs, and
can confirm or reject candidate counterparts identified in independent surveys
or from follow-up observations of earlier GRB localizations (e.g. optical,
VHE, or otherwise); if precise enough ($\sim$arcsecond scale), these can
enable late-time follow-up observations to identify host galaxies and thus
redshift determination. A best-case scenario certainly includes rapid
reporting of precise localizations from the high-energy monitor and is
scientifically well-motivated. It allows broadband observations of the prompt
phase to study LIV and the prompt emission mechanism, full characterization of
the spectral energy distribution for VHE detections, observations of the early
polarization decay, measurement of redshift directly from the afterglow, and
recovery of the full range of diagnostics of these events.
The field of view requirement depends on the specific science case. When
paired with other discovery facilities, true all-sky monitoring is preferred,
given the usual limited horizon of other diagnostics (optical, GW, neutrino,
etc). Studies of the high-redshift universe favor depth in a given direction.
These capabilities need not be satisfied with the same instruments.
Discovery of transients in UV, infrared, optical, and radio drives $\gtrsim$1
week contiguous observing intervals. The need to determine if ultra-long GRBs
are part of the long GRB population or are distinct motivates background
stability on the order of at least thousands of seconds, preferably hours. The
timing precision and temporal resolution of collapsar GRBs is generally less
stringent than that of short GRBs or magnetar flares.
Polarization in a GRB monitor is key to understanding the jets themselves and
the prompt GRB emission mechanism. Polarimetry of the afterglow of these
bursts may provide proof of the Blandford-Znajek mechanism in specific cases.
These drive instrument and precise localization requirements.
Fully successful collapsars require the typical historic energy range of GRB
monitors. To entirely map out the continuum through low-luminosity GRBs/X-ray
flashes, shock breakout from relativistic supernovae, and shock breakout from
normal CCSN requires low energy thresholds of approximately 10, 1, and 0.1
keV, respectively. A mission capable of achieving 0.1 keV would be a
revolutionary facility for both GRBs and supernovae.
#### 2.3.13 Requested Deliverables from GRB Monitors
Many communities, including IGWN, IceCube, and wide-field survey instruments,
would greatly benefit from collated GRB alert streams and catalogs. This aids
the discovery of multiwavelength and multimessenger events and vastly reduces
the work necessary for multidiagnostic studies. It would also enable real-time
multimessenger searches, e.g., for neutrino-GRB joint detections or comparison
with optically-identified transients. Reduced localization area increases
joint search sensitivity and reduces computational expense. Having the prompt
GRB community deliver this information will result in greater science return.
Providing access to the detections of the full GRB network, or coherent upper
limits for non-detections, will help prioritize follow-up observations of
externally identified relativistic transients.
Similarly, studying data as it arrives with the goal of determining whether a
given burst could be consistent with a shock breakout origin may aid the study
of low-luminosity GRBs, or shock breakouts from supernovae in the case of an
X-ray monitor. The automatic flagging of potential ultra-long GRBs while the
afterglow is still detectable would be a boon in the study of these sources.
Providing information on whether a given event may have broadband spectral
coverage, prompt polarization coverage, or is particularly bright would also
help the follow-up community determine which bursts to target. These are
similar to the findings for merger GRBs; we refer the reader to that section
for additional details.
### 2.4 Miscellany
In addition to the sources where GRB monitors have compiled large samples,
they are also important for rare and unusual events. This is a driver for
sustained coverage of the high-energy sky over the timescales of human
lifetimes, to ensure detection of once-in-a-lifetime events.
#### 2.4.1 Jetted Tidal Disruption Events
There have been four relativistic (“jetted”) TDEs discovered to date. Three
were discovered by Swift-BAT, one by the on-board trigger and two in high-
latency analyses by the BAT hard X-ray monitor (Levan, 2015). The fourth was
recovered in optical by ZTF (Andreoni et al., 2022; Pasham et al., 2023). Key
questions for these sources include: i) what is the connection between jetted
TDEs and other classes of TDEs found routinely by optical surveys (Van Velzen
et al., 2021; Charalampopoulos et al., 2022; Hammerstein et al., 2022; Yao et
al., 2023)? ii) are optically-identified and high-energy-identified jetted
TDEs distinct or of the same class? iii) how do their jets carry their energy?
Study of these sources allows for observation of a jet where both the
initiation and end of the jet can be observed, similar to GRBs, and provides a
link to active galactic nuclei. Study of the three source classes may allow
for understanding of how relativistic jets are powered, what distinguishes
ultrarelativistic from relativistic, and understanding of the ways these jets
carry their energy.
Discovery of more TDEs by high energy monitors likely requires instruments
beyond the basic scintillator design, which are non-imaging and background
dominated. They can be discovered at a low cadence with coded aperture masks
like Swift-BAT. Lobster-eye telescopes may prove to be more capable but will
generally detect more distant events. However, as optical surveys become ever
more capable it is evident that they will discover additional jetted TDEs.
Determination of a high-energy signature requires either early identification
in optical with rapid follow-up of pointed X-ray telescopes or a highly
sensitive X-ray monitor with at least a $\sim$daily all-sky cadence.
#### 2.4.2 Non-Astrophysical Sources
GRB monitors are often used for studies in heliophysics, helping to complete
the multiwavelength picture of solar flares. The discovery of terrestrial
gamma-ray flashes associated with lightning by BATSE has led to
characterization of these sources in X-rays and higher energy gamma-rays by
RHESSI and Fermi-LAT, as well as comparison with lightning detection by
ground-based optical networks. Thus, GRB monitors have a role to play in
multiwavelength studies of solar and Earth science events. As we utilize
instruments on planetary and heliophysics spacecraft, it would be appropriate to
ensure the deliverables from the prompt GRB monitors are also designed to
benefit other divisions and cross-divisional studies, e.g. in the study of
solar flares.
#### 2.4.3 Unexpected Discoveries
The discovery of GRB 211211A as a long GRB from a merger origin was
unexpected, as was the consistency of the prompt emission arising from a fast-
cooling synchrotron emission (Gompertz et al., 2023). The analysis of GRB
221009A identified a 10 MeV line in the prompt phase (Ravasio et al., 2023),
which had no previous theoretical expectation. Even after 10,000 prompt GRB
detections, new insights into GRBs are made through careful study of the
prompt phase of individual, rare events. As a further example we emphasize
the possibility of GRBs being created by the evaporation of local primordial
black holes (Ukwatta et al., 2016). Such a possibility is best studied with
multiple spacecraft at interplanetary distances. Maintaining various
capabilities of the prompt GRB monitors over decades is key to maximizing
science from rare events, either through the study of their prompt signatures
alone or with multidiagnostic studies common in the current era.
## 3 Capability Requirements
Each sub-topic for the sources discussed in Section 2 lays out the specific
capability requirements necessary to advance scientific understanding of that
topic. Each source class also has a Summary of Capability Requirements section
giving the broad needs specific to high energy monitor observations of that
source class, i.e. magnetars in Section 2.1.9, compact mergers in Section
2.2.16, and collapsars in Section 2.3.12. When seeking to understand the set
of requirements necessary for a specific question or source class we refer the
reader to the content of Section 2.
This section takes a more holistic view on the observational capabilities
themselves, commenting on the general importance of various requirement
thresholds. This is useful in considering which technologies to use or
develop. We discuss how easy it is to meet the various requirement levels. We
emphasize that not all capabilities need to be met with each instrument, i.e.
a polarimeter does not also need to provide arcminute-scale localizations on its own.
### 3.1 Localization Precision
The science possible with a given localization precision is summarized in
Table 1. For cosmological GRBs, i.e. those from mergers or collapsars, a
position determination to $\sim 1^{\prime\prime}$ accuracy is necessary to
robustly associate the event to a host galaxy and to determine the offset from
the host galaxy. This permits inference of the redshift (though this can be
measured directly from the afterglow in some cases), enabling measurements of
intrinsic energetics and the progenitor environments. This precision is beyond
the capability of wide-field transient monitors and is enabled through
successful recovery of the afterglow in follow-up instruments.
Localization Accuracy | Corresponding Result | Sections
---|---|---
4$\pi$ sr | Detection of gamma-ray transients by all-sky monitors |
| Chance joint detection of transients with other wide-field monitors |
$\sim$1000 deg$^2$ | Follow-up tiling of GRBs by the widest-field UV and optical telescopes | 3.1
| Robust association of GWs and GRBs | 2.2
$\sim$100 deg$^2$ | Identification of MGF candidates and potential host galaxy | 5.3.7
| Follow-up tiling of GRBs by wide-field optical telescopes | 2.2.3, 2.3.6
$\sim$30 deg$^2$ | Follow-up tiling of GRBs by wide-field radio, VHE, and IR facilities | 3.1
$\sim$10 deg$^2$ | Associate nearby extragalactic MGFs to ideal host galaxies | 2.1.1
| Robust association of GRBs to neutrinos | 2.3.7
$\sim$1 deg$^2$ | Associate SGR flares to specific magnetars | 2.1.3
| Robust association of UVOIR-identified transients to GRBs |
$\sim$100 arcminute$^2$ | Extragalactic MGF host galaxy association | 2.1.1
$\sim$30 arcminute$^2$ | Follow-up observations by the majority of telescopes |
$\sim$1 arcminute$^2$ | Follow-up observations by effectively all telescopes |
$\sim$100 arcsecond$^2$ | Follow-up identification of Galactic magnetars | 2.1.5
$\sim$10 arcsecond$^2$ | Robust associations of cosmological GRBs to host galaxy and measurement of offset | 2.3.12
Table 1: Brief summary of the follow-up observations and association strengths
for various gamma-ray transient localization precisions. Relevant sections
which detail specific rows are provided in the third column.
Thus, achieving arcsecond positions is tied to a mix of alert latency and
initial localization precision. Localizations on the $\sim 10^{\prime}$ scale
are sufficient to be observed with the vast majority of telescopes, though $\sim
1^{\prime}$ scale is necessary for some specialized telescopes like the large
optical and infrared spectrometers. Localizations at this scale are possible
with coded aperture masks (e.g. Barthelmy et al., 2005; Chattopadhyay et al.,
2018). Sub-degree localizations, achievable with Compton telescopes and pair-
conversion telescopes, can be followed-up with a small number of tiled
observations. A common example of this is Swift-XRT and UVOT tilings of Fermi-
LAT detections, which have a typical uncertainty of $\sim 0.5^{\circ}$.
Localizations on larger scales can be followed up with wide-field instruments.
Instruments similar to Fermi-GBM can achieve localizations on the scale of
$\sim 1-50^{\circ}$. As the localization area increases, the number of
facilities capable of observing a significant portion of the localization
decreases rapidly. With the advent of GW astronomy, which can have
localizations of comparable areas, more wide-field optical facilities have
been built (e.g. Groot et al., 2019; Bellm et al., 2018; Steeghs et al., 2022)
that are capable of covering thousands of square degrees per night (e.g.
Coughlin et al., 2019). Similarly, UV coverage by ULTRASAT, VHE coverage by
CTA and the water Cherenkov telescopes, infrared by WINTER, and several wide-
field radio facilities contribute to broad wavelength coverage. While these
facilities exist and unlock new, critical capabilities in the TDAMM ecosystem,
there is a cost to poor high-energy localizations. Successful identification
of the afterglow can take more than a day as the requirement for a fading
transient and observing cadence following Earth’s day/night cycle is a
fundamental limit for ground-based facilities. There is generally a trade-off
between sensitivity and field of view, limiting their depth to the nearer
universe. Especially in the optical, there must be vetting of enormous numbers
of sources.
The independent localizations from joint detections of prompt signals can be
combined to produce an improved localization compared to either independent
localization alone. A key example is the case of joint GW-GRB localizations
where GBM-like localizations will reduce single GW interferometer
localizations by factors of $10$–$100\times$ and double-interferometer triggers by a
median of $\sim 5\times$ due to the different localization morphologies. Joint
detections are also possible with IceCube, ULTRASAT, TESS (e.g. GRB 230307A),
the water Cherenkov telescopes, etc. Though, again, a precise localization on
the high energy signal is the best-case scenario for the entire TDAMM
community.
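To make the combination concrete, the sketch below multiplies two
assumed-independent localization probability maps pixel by pixel and
renormalizes, then compares 90% credible areas. The flat-sky grid and Gaussian
morphologies are illustrative stand-ins for real HEALPix products; all widths
are invented for demonstration.

```python
import numpy as np

# Toy flat-sky grid in degrees; a real analysis would use HEALPix sky maps.
ra, dec = np.meshgrid(np.linspace(0, 90, 300), np.linspace(-45, 45, 300))
PIX_AREA = (90 / 300) * (90 / 300)  # deg^2 per pixel (flat-sky approximation)

def gaussian_map(ra0, dec0, sigma_ra, sigma_dec):
    """Unnormalized 2D Gaussian localization probability map (illustrative)."""
    return np.exp(-0.5 * (((ra - ra0) / sigma_ra) ** 2
                          + ((dec - dec0) / sigma_dec) ** 2))

# A long, thin GW-like arc versus a broad GRB-monitor-like region.
gw_map = gaussian_map(45, 0, sigma_ra=30, sigma_dec=2)
grb_map = gaussian_map(50, 3, sigma_ra=8, sigma_dec=8)

# Independent measurements combine by multiplying posteriors and renormalizing.
joint = gw_map * grb_map

def area_90(prob):
    """Sky area (deg^2) enclosing 90% of the total probability."""
    p = np.sort(prob.ravel() / prob.sum())[::-1]
    return (np.searchsorted(np.cumsum(p), 0.9) + 1) * PIX_AREA

for name, m in [("GW alone", gw_map), ("GRB alone", grb_map), ("joint", joint)]:
    print(f"{name:>9}: {area_90(m):7.1f} deg^2")
```

The gain comes precisely from the mismatched morphologies: a compact region
intersecting a long arc removes most of the arc.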
The other main role of localization precision is in associating transient
events. For the association of two signals, the significance scales inversely
with the localization area. Localization precision can be relaxed in this role
if the temporal precision is sufficient to associate signals in time. The
temporal offset between the GW signal and GRB is on the order of seconds, so
even large (thousand square degree) localizations are sufficient to achieve
unambiguous association. For optically-identified transients with large
uncertainties on the time of initial explosion, these large localization
regions can be insufficient for associating signals. These often require final
localizations on the order of $\sim$1 deg$^2$, with discovery claims requiring
greater precision. The same is true for most expected neutrino signals from
GRBs and CTA follow-up observations.
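One way to see the inverse-area scaling is a toy chance-coincidence estimate:
the expected number of unrelated matches is the product of the background
event rate, the fractional sky area, and the temporal window. The rates and
windows below are illustrative assumptions, not measured values.

```python
# Toy chance-coincidence estimate for associating two transient signals.
ALL_SKY_DEG2 = 41_253  # total sky area in square degrees

def n_chance(rate_per_day, area_deg2, window_s):
    """Expected unrelated matches: rate x fractional sky area x time window."""
    return rate_per_day * (area_deg2 / ALL_SKY_DEG2) * (window_s / 86_400)

# A ~seconds temporal offset makes even thousand-deg^2 maps unambiguous:
print(n_chance(rate_per_day=1.0, area_deg2=1_000, window_s=10))      # ~3e-6
# A ~1 day start-time uncertainty with the same map is far less secure:
print(n_chance(rate_per_day=1.0, area_deg2=1_000, window_s=86_400))  # ~2e-2
# Restoring security then requires shrinking the localization to ~1 deg^2:
print(n_chance(rate_per_day=1.0, area_deg2=1.0, window_s=86_400))    # ~2e-5
```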
Figure 7: The 1-sigma width distribution of the third IPN timing annuli,
through 2020. Overlaid is a black vertical line denoting the typical 90%
confidence error for Swift-BAT. The narrowest 90% confidence width in the
distribution is $4^{\prime\prime}$ with a median width of $5^{\prime}$.
Additional larger annuli can be generated for satellite pairs in low Earth
orbit, which are not included in the figure.
Timing annuli from the IPN span a wide range of precision, as shown in Figure
7. Typical annuli widths for IPN bursts recovered by distant spacecraft (which
are the bright $\sim$half of GBM bursts) are a few arcminutes wide, though
precision down to $4^{\prime\prime}$ has been achieved. Note that a single
annulus with a few arcminute width can span significant total sky area, and at
least three spacecraft (two annuli) are required to achieve a localization
area comparable to Swift-BAT. IPN localizations have two drawbacks compared to
those possible from coded aperture masks. First, the reporting latency for
distant spacecraft is most often measured in hours to days, though fortuitous
timing is possible. Second, when the first data from distant spacecraft
arrive, the initial localization is very elongated, a shape that wide-field
monitors are not generally suited to tiling.
of interesting events has been demonstrated, most notably with GRB 230307A as
discussed in Section 5.3.2. Initial localizations sufficient for narrow-field
follow-up allowed for recovery of a candidate counterpart, which was confirmed
in part by the higher latency data providing a particularly precise
localization.
For the study of magnetars, the most stringent localization precision
requirement arises for associating extragalactic MGFs to their host galaxies,
where local events require precision of $\sim$10–100 arcminute$^2$ and distant
events require precision approaching a few arcseconds. Association of normal
SGR short bursts to active magnetars requires localizations of order $\sim$1
deg$^2$. New magnetars can be found by stacking localizations of several short
bursts, historically achieving accuracy on the order of $100$ arcsecond$^2$ (Hurley et
al., 1999).
### 3.2 Alert Latency
The ideal worldwide system for transient astronomy would provide sensitive
all-sky coverage across all wavelengths and messengers. Barring that, the best-case
scenario is immediate reporting of a precise localization from the first
detectable signal(s) of a given source class. For all types of GRBs, SGR/FRB
and magnetar giant flares, and jetted TDEs, this is usually the keV-MeV
signal. In the pursuit of multiple diagnostics from the same event, observing
the immediate aftermath is absolutely critical to fully understand these
sources and unlock multimessenger science. The definition of precise
localization in this subsection refers to $<10^{\prime}$, sufficient for
follow-up with a single pointing in most facilities. Achieving this accuracy
can be done directly from some high energy monitors, or in concert with high
energy monitors and follow-up tiling, as described above. Various telescopes
across the EM spectrum can observe with a wider field of view; see Section 2
for specific details. The results possible with a given timescale are
summarized in Table 2.
Alert Latency | Corresponding Result | Sections
---|---|---
$>$10,000,000 s | Origin of short GRBs, MGF candidate identification | 2.1.1
| Origin and physical mechanisms of FRBs | 2.1.4
| MGF QPOs and NS equation of state | 2.1.2
| Sources of GWs | 2.1.3 2.3.8
| Discovery of new magnetars | 2.1.5
| Magnetar formation channels, properties, and burst physics | 2.1.6 2.1.7 2.1.8
| Determination of SGRB progenitor fractions | 2.2.6
| GRB classification of GW sources | 2.2.8
| Speed of gravity measures | 2.2.15
| Determination of GRB counterpart to orphan afterglow, dirty fireballs | 2.3.1 2.3.2
| Origin of neutrinos, ultra-high energy cosmic rays | 2.3.7
1,000,000 s | Guide fast radio burst searches of active Galactic magnetars | 2.1.4
| Capture rise of supernova | 2.3
100,000 s | Follow-up classification of long GRBs from mergers | 2.2.7 2.2.9
| Latest reliable recovery of afterglow, potential redshift determination, cosmology | 2.2.10 2.2.11 2.2.13
| Guide follow-up of externally-identified transients based on prompt GRB signal | 2.3.1 2.3.2
| Capture rise of red kilonova | 2.2.4
10,000 s | Key diagnostic information on relativistic jets | 2.2.14
| X-ray recovery of plateau emission in afterglow | 2.2.1
| Tests of gravity parity violation | 2.2.15
| Follow-up observations for VHE emission | 2.3.6
| Capture rise of blue kilonova | 2.2.3
1000 s | X-ray observations of fading tail after Galactic MGFs | 2.1
| Discrimination of origin of early UV emission in mergers | 2.2.3
| Observation of prompt phase of ultra-long GRBs | 2.3.4
| Blandford-Znajek test via afterglow polarization observations | 2.3.9
100 s | Multiwavelength characterization of BNS merger classes and associated science | 2.2.4
| Critical early observations of EM-bright NSBH mergers | 2.2.5
| Prioritized follow-up based on GW merger classification | 2.2.8
| X-ray observations of fading tail after extragalactic MGFs | 2.1
| X-ray recovery of merger GRB extended emission | 2.2.1
| Full tests of dense matter, origin of heavy elements | 2.2.12
10 s | Recovery of higher radio frequency (low dispersion measure) precursors | 2.2.2
Table 2: Summary of the science possible with various alert latency
thresholds. Implicit in these timescales is the time to recover the precise
position for narrow-field telescopes to use, which may add delays for poorly
localized prompt GRB detections. Complete understanding of these transients
requires reporting on the timescale of 10-100 s, though some key science can
be done with high latency data access.
The fastest realistic latency is 10 s, commonly achieved by Swift-BAT. This
would allow for the recovery of effectively all signals discussed in Section
2, given that capable follow-up telescopes are ready. After $\sim$100 s the
radio precursor signature in NS mergers may be lost, as are observations of
the early plateau decays (though some plateaus are recoverable for $10^{5}$
s). Also after 100 s, the chance to observe prompt emission at other
wavelengths begins to decrease rapidly, continuing out to $\sim$10,000 s. After 1000 s the
chance to recover fading, pulsating X-ray tails from extragalactic MGFs with
current facilities becomes extremely unlikely. On a similar timescale, the
prospects of discriminating some of the potential sources of the early UV
emission in NS mergers begin to fade.
A latency of 10,000 s results in the loss of the brightest portion of
afterglow, making recovery of VHE emission and early polarimetric observations
of the afterglow less likely to succeed. Observations at this time are still
likely to recover follow-up signals, capture the rise of redder kilonova
emission, and observe the pre-break timescales of the jet. After 100,000 s
(i.e. about a day), each of these three is less likely but still possible;
beyond 1,000,000 s, successful observations become quite unlikely.
High latency reporting is still valuable for the purposes of confirming or
rejecting candidate counterparts found in other wavelengths and messengers.
Reporting latency is generally less critical for the monitoring of Galactic
magnetars. It is necessary to inform the radio community when specific
magnetars are active, but high-latency matching of SGR and FRB flares is
sufficient, provided other FRB counterparts are not identified. The exception
is the detection of magnetar giant flares, as noted, with possible recovery of
the fading tail in X-rays if observations begin within minutes of an
extragalactic event and within somewhat longer timescales for Galactic events.
### 3.3 Field of View, Livetime, and Contiguous Viewing Intervals
The study of nearby events presents critical diagnostics to provide leaps in
the understanding of a given source class. The detection of MeV neutrinos and
nuclear gamma-rays from SN 1987A set our current understanding of the
convective engine paradigm in typical CCSN explosions. The detection of SN
1998bw following GRB 980425 proved the theoretically predicted collapsar model
for GRBs and supernova explosions. The detection of the off-axis GRB 170817A
with GW170817 and AT2017gfo led to breakthroughs in several fields of physics
and still sets our standard for the study of NS mergers in astrophysics. The
association of an SGR 1935+2154 X-ray short burst to bright $\sim$ms radio
flashes proved that at least some cosmological FRBs originate from magnetars.
The limits from the IceCube non-detection of GRB 221009A are more informative
on prompt GRB models than the stacking of several thousand bursts. This is an
effect of a multi-diagnostic Malmquist bias, where the diagnostic with the
smallest maximum detection distance determines the rate of events that can have complete
observations. Whenever possible, a key component of the transient and
multimessenger ecosystem should approach complete and uniform coverage of the
sky. This is approximately met by IceCube and should be met by the IGWN
network beginning in $\sim$2027.
Sources | Field of View | Livetime | Contiguous Interval | Sections
---|---|---|---|---
Prompt GRBs | $\sim$100% | $\sim$100% | | 2.1 2.2 2.3
GWs, FRBs, High-energy neutrinos | $\sim$100% | $\sim$100% | | 2.1.3 2.2.8 2.3.8 2.1.4 2.3.7
MeV Neutrinos, Shock Breakout | $\sim$100% | $\sim$100% | | 2.3 2.3.1
Relativistic supernovae | $\sim$80% (Night Sky) | $\sim$100% | $\sim$1 week | 2.3.1
Orphan Afterglows, LFBOTs | $\sim$80% (Night Sky) | $\sim$100% | $\sim$1 week | 2.3.2
Unknown Unknowns | 100% | $\sim$100% | $\sim$1 week | 2.4
Table 3: The Field of View, Livetime, and Contiguous observing intervals
necessary for various sources of interest for GRB monitors.
The use of high-energy monitors in multi-diagnostic astronomy is necessary
for the vast majority of future astrophysical advances, and all-sky field of
view is often more important than depth for a given effective area of
background-dominated instruments, as summarized in Table 3. This is because
nearly all other diagnostics have a more limited detection horizon. NS mergers
can only be recovered by current GW interferometers to a few hundred
megaparsecs, to an even shorter distance by current UVOIR surveys, and to a
smaller volume when observing afterglows than when observing the prompt phase.
This effect for GW-GRB joint detections was shown directly in the NASA GW-EM
Task Report prior to the Astro2020 Decadal (Racusin et al., 2019). For
collapsars, low-luminosity GRBs are detectable to only a tiny fraction of the
observable volume of typical long GRBs. VHE observations are limited in
distance by attenuation due to pair production on the extragalactic
background light. The first transient neutrino sources will almost certainly
be (cosmologically) local. In these cases, all-sky sensitivity for the gamma-
ray transients is favored over depth. Exceptions include any hope of recovery
of SGR X-ray flares around extragalactic FRBs (perhaps possible with pointed
X-ray telescopes) and high-redshift GRBs.
Historically, discovery of relativistic transients was the domain of high-
energy monitors. As the survey capabilities of other wavelengths and
messengers advanced, they have begun to play a leading role in the discovery
of these events. Because the prompt gamma-ray signatures are reasonably
contemporaneous with, or precede, other signals, archival coverage of the full event
is necessary to make strong statements in the case of sensitive non-detection
and to maximize the chance of signal recovery.
For short-duration, well-localized transients, the chance that a GRB monitor
will have coverage is determined by its instantaneous field of view multiplied
by its livetime. An example of this is observation of a Swift-BAT detected GRB
by GBM. For spatially extended localizations, this fraction decreases further
when the field of view is under 100%. For example, the long arcs of two-
interferometer IGWN localizations generally precludes total coverage by
individual low Earth orbit satellites, driving the need for sensitive all-sky
instantaneous field of view. This, along with high livetime, is driven by the need to
provide complete coverage for neutrino events, FRBs, external prompt GRB
detections, and other short-duration transients.
For externally identified events with imprecise start times, this fraction
decreases even further when the livetime is under 100%. The lack of
sufficiently long contiguous observing intervals prevents direct statements on
the gamma-ray emission from individual relativistic transients identified by
optical surveys that have start time uncertainties on the order of $\sim$1
day. This may prevent significant statements on particularly rare events.
Additionally, it limits population comparisons of high-energy-identified and
optically-identified transients of the same source class (i.e. collapsar
jets, relativistic supernovae). With $N$-day fixed contiguous viewing
intervals and 1 day as the representative start time uncertainty, the
effective livetime scales as $(N-1)/(N+1)$; i.e., a 3-day window has full
coverage of only $\sim$50% of events and a 7-day window of $\sim$75% (see the
sketch after this paragraph). This is an additional
dimension to the observing fraction requirement, specific to longer-duration
transients. Thus, the development of ground-based wide-field survey telescopes
drives the need for long contiguous viewing intervals in GRB monitors.
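The sketch referenced above: a minimal Monte Carlo check of the $(N-1)/(N+1)$
scaling, under the simple assumed model of $N$ days of viewing followed by a
1-day gap, requiring the event’s full 1-day start-time uncertainty window to
be covered.

```python
import numpy as np

rng = np.random.default_rng(0)

def effective_livetime(n_days, t_unc=1.0, n_events=200_000):
    """Fraction of events whose full uncertainty window is covered.

    Assumed model: the source is viewable for n_days, then unviewable for
    t_unc days, repeating; an event counts only if its entire t_unc-wide
    start-time uncertainty window falls inside one viewing interval.
    """
    cycle = n_days + t_unc
    start = rng.uniform(0, cycle, n_events)  # window start within one cycle
    return np.mean(start + t_unc <= n_days)

for n in (3, 7):
    print(f"N = {n}: simulated {effective_livetime(n):.3f}, "
          f"analytic {(n - 1) / (n + 1):.3f}")
```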
Similar needs will arise as the capability to discover transients emerges
at new wavelengths and messengers. As one example we point to the millimeter
and sub-millimeter wavelengths where orphan afterglows can be found (Whitehorn
et al., 2016). The discovery space here is emphasized by the Decadal
recommendation for time-domain capability in CMB-S4.
True 100% all-sky coverage and livetime are ideal for prompt high-energy
monitors. This need is generally met by the IPN, albeit at a shallower
sensitivity than some of the more sensitive low Earth orbit satellites. The
best individual instrument, in this regard, is Konus-Wind, which has all-sky
coverage, near 100% livetime, and is more sensitive than the more distant IPN
spacecraft. Achieving these properties is effectively impossible with
satellites in low Earth orbit, short of a fleet. A mission dedicated to
detecting transients with complete multiwavelength observations could be
designed to focus on the $\sim$80% of the sky visible at night on Earth, as
most ground-based facilities cannot observe near the Sun.
There is one particularly notable exception to the need for a massive field of
view, which is the search for high redshift GRBs. Identifying a sample of
these events requires great sensitivity to recover them as far back in cosmic
time as they occur. This sensitivity is likely more easily achieved with a
narrower field of view.
### 3.4 Background Stability
A related requirement is background stability, referring to the length of time
over which the instrumental background remains stable enough to identify
transient emission. Stability over $\sim$100 s is key to identification of
extended emission following short GRBs, which is in turn key to much of the
science possible with NS mergers. This requirement can be met with inertially
pointed instruments in low Earth orbit as demonstrated by the greater recovery
of extended emission in BAT and BATSE and lower recovery rate in GBM. To
provide a definitive answer on the overlap of traditional and ultra-long GRBs,
and to promptly identify ultra-long GRBs while their emission is still
observable, background stability for $\gtrsim$6 hours is necessary. While
Swift-BAT has identified some ultra-long GRBs over multiple orbits, the
orbital timescale imprints an effective cut in the recoverable timescales.
For the study of ultra-long GRBs, having multiple instruments with sufficient sensitivity and
background stability is key for confirming and localizing these bursts.
Background Stability Interval | Source Type | Sections
---|---|---
10 s | SGR flares and FRB counterparts, short GRBs | 2.1, 2.2
100 s | Short GRBs with extended emission | 2.2.1
1,000 s | Long GRBs | 2.3
10,000 s | Ultra-long GRBs | 2.3.4
100,000 s | TDEs, longest GRBs | 2.4, 2.3.4
Table 4: Brief summary of the background stability necessary for various
source classes.
### 3.5 Timing Capabilities
Absolute timing refers to the precision of the data timestamps relative to a
given time standard, generally UTC, with accuracy limited by the capability of
the onboard clock. For scientific purposes, absolute timing of $\sim$1 ms is
sufficient for nearly all of the discussed outcomes, with the comparison of
the emission times of SGR X-ray flares and FRBs imposing the most stringent
requirement. This capability is lost at an absolute timing precision of
$\sim$10 ms. A precision of $\sim$100 ms is an important limitation on
measuring the speed of gravity and LIV. Achieving $\sim$1 ms timing precision
for low Earth orbit satellites is easy, given the capability to update onboard
clocks with pings from the Global Positioning System (GPS). It is more
difficult for satellites beyond the reach of GPS, but the adoption of atomic
clocks and/or tiny X-ray telescopes for pulsar timing and deep space
navigation (Ray et al., 2014; Winternitz et al., 2016; Ray et al., 2017)
should prove revolutionary in this regard. Additional approaches to more
precise absolute timing would also prove beneficial for these studies. The IPN
would benefit from the multi-decade development of high-precision optical
clocks in space for use in fundamental physics studies prioritized in the
Thriving in Space: Ensuring the Future of Biological and Physical Sciences
Research: A Decadal Survey for 2023-2032.
Relative Timing | Corresponding Result | Sections
---|---|---
10 $\mu$s | Measure shortest duration pulses observed in SGR flares | 2.1.8
100 $\mu$s | Measure shortest duration pulses observed in cosmological GRBs |
| Search for QPOs in the range of interest for neutron stars | 2.1.8
Absolute Timing | Corresponding Result | Sections
100 ms | Subdominant limitation to most speed of gravity measures | 2.2.15
1 ms | Enable minimal (statistical) timing annuli widths |
| Comparison of FRB and SGR arrival time, physical mechanisms | 2.1.4
Table 5: Brief summary of the relative timing and absolute timing
requirements, with relevant sections.
Relative timing precision sets how accurately the offset between specific
events can be measured. There are two main types of temporal data in high
energy monitors, time-tagged event (TTE) and binned data. TTE data is an array
of time and energy channel entries, one per registered event, with timing
precision limited by the instrument and electronics. Binned data are time-
series spectral histograms, with predefined temporal resolution, which is
nearly always larger than the native timing precision. Most onboard clocks
undergo a slow drift away from a reference time, meaning relative timing is
often far more precise than absolute timing.
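To make the distinction concrete, a schematic sketch of the two data types;
the rates, channel count, and bin widths are illustrative rather than taken
from any particular instrument.

```python
import numpy as np

rng = np.random.default_rng(1)

# Time-tagged event (TTE) data: one (time, energy channel) pair per count,
# limited only by the native timing precision of the instrument/electronics.
tte_times = np.sort(rng.uniform(0.0, 10.0, 5_000))   # seconds (illustrative)
tte_channels = rng.integers(0, 128, tte_times.size)  # energy channel index

# Binned data: time-series spectral histograms at a predefined resolution,
# here 64 ms x 128 channels -- coarser than the native timing precision.
dt = 0.064
time_edges = np.arange(0.0, 10.0 + dt, dt)
binned, _, _ = np.histogram2d(tte_times, tte_channels,
                              bins=[time_edges, np.arange(129)])

# Rebinning TTE onto any timescale is trivial; recovering TTE from binned
# data is impossible, which is why continuous TTE downlink matters.
fine, _ = np.histogram(tte_times, bins=np.arange(0.0, 10.0, 1e-4))
print(binned.shape, fine.max())
```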
An effective temporal resolution of 10 $\mu$s is sufficient for any high-
energy monitor. The shortest pulses ever observed have durations of 70 $\mu$s
and 100 $\mu$s (Bhat et al., 1992; Roberts et al., 2021), which may be lost
for instruments with only 100 $\mu$s timing. This relative timing precision
allows for QPO searches up to $\sim$10 kHz, beyond the range of most NS
oscillation eigenmodes. These searches will miss much of the QPO frequency
range of interest for these modes at 10 ms relative timing precision. At 10 ms
the complete searches for SGR flares will be negatively affected. At 100 ms
they will be severely limited, and there will be significant losses in the
detection of cosmological short GRBs.
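The search ceilings quoted above follow from the Nyquist limit
$f_{\max}=1/(2\Delta t)$ set by the effective temporal resolution (the
practical ceiling sits somewhat below this); a one-line check:

```python
# Nyquist frequency implied by the effective temporal resolution.
for dt in (10e-6, 100e-6, 10e-3):  # 10 us, 100 us, 10 ms
    print(f"dt = {dt * 1e6:8.0f} us -> f_max = {1 / (2 * dt):10,.0f} Hz")
```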
A key example of the importance of temporal resolution is the sub-threshold
searches for SGR flares and short GRBs in Fermi-GBM and Swift-BAT data. With
the planned continuous binned data resolutions of these instruments, searches
for untriggered SGR flares would be impossible, yet such searches provide a
more complete picture of the waiting time and energetics distributions of
these events. Many
short GRBs would also be lost, preventing associations between GRBs and GW
signals. The downlink of continuous TTE data for Fermi-GBM and the saving of
Swift-BAT TTE data around externally identified times of interest by GUANO
(Tohuvavohu et al., 2020) have allowed substantially more sensitive searches
for, and better localizations of, these types of events. This method of
requested TTE is an option that should be explored in all future GRB monitors
as the flexibility and resolution of TTE data is a critical asset for TDAMM
studies.
Timing capabilities are intimately tied to the operation of the IPN. The
instrument with the coarsest absolute timing and relative temporal
resolution will dominate the systematic uncertainty. Extragalactic MGFs can,
in principle, be timed to sub-millisecond accuracy, setting a floor on this
requirement. A level of $\sim$1-10 ms is likely sufficient for the majority of
bright GRBs. There are two hurdles implicit to IPN operation in this regard.
The first is the relatively inaccurate absolute timing precision of distant
spacecraft. In fact, inverting the IPN calculation using the positions of
known GRBs serves to map the timing accuracy of planetary spacecraft.
Second, the limited bandwidth through the Deep Space Network (DSN), and
similar communication solutions, places a floor on the temporal resolution of
the binned data, and has precluded unbinned data. These can be ameliorated in
future instruments, as discussed in Section 4.
TTE data has a critical use in the IPN. Timing annuli are generally derived
through a cross-correlation function comparison of two datasets. The ability
to analyze data on arbitrarily small timescales ensures minimal timing annuli
widths. For example, the Konus data has 2 ms binning, but the time offset
between GBM and Konus for GRB 200415A is 1.3 ms. There should always be LEO
satellites with TTE data. TTE could be recovered from distant spacecraft with
advanced communications solutions, or through limited TTE around intervals of
interest identified by an onboard trigger.
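A minimal sketch of the cross-correlation comparison using synthetic light
curves; the pulse shape, count rates, and the 1.3 ms offset are illustrative,
not instrument data.

```python
import numpy as np

rng = np.random.default_rng(2)
dt = 1e-4                                # 0.1 ms bins (illustrative)
t = np.arange(-0.1, 0.4, dt)

def light_curve(offset):
    """Poisson counts: fast-rise/exponential-decay pulse plus flat background."""
    pulse = np.where(t > offset, np.exp(-(t - offset) / 0.05), 0.0)
    return rng.poisson(1.0 + 20.0 * pulse)

a = light_curve(0.0)      # first instrument
b = light_curve(1.3e-3)   # second instrument sees the pulse 1.3 ms later

# Cross-correlate and read off the lag that maximizes the correlation.
lags = np.arange(-(len(t) - 1), len(t)) * dt
cc = np.correlate(a - a.mean(), b - b.mean(), mode="full")
print(f"recovered offset: {-lags[np.argmax(cc)] * 1e3:.1f} ms")
```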
Timing precision of $\sim$1 ms is necessary for joint SGR-FRB studies and
allows for more accurate triangulation of SGR X-ray flares. This drives both
the absolute timing precision and the minimum effective temporal resolution.
### 3.6 Energy Range
Standard GRBs typically have the highest signal-to-noise ratio over the
$\sim$50-300 keV energy range. Broadening this energy range to lower values
brings detection of additional relativistic transient classes. Broadening the
range in either direction brings greater characterization of detected
transients. These are summarized in Table 6.
Energy Range | Corresponding Result | Sections
---|---|---
50–300 keV | Detection and localization |
High Energy Threshold | |
1 MeV | Constraint on peak energy, total energetics for long GRBs | 2.3.10
10 MeV | Constraint on peak energy, total energetics for short GRBs and MGFs | 2.3.10
| Evidence for additional spectral components in prompt GRB emission | 2.3.10
1 GeV | New understanding of additional spectral prompt components | 2.3.10
Low Energy Threshold | |
10 keV | Increase in SGR flare detections, FRB counterparts |
| Identification of short GRBs with extended emission | 2.2.1
| Detection of low-luminosity GRBs | 2.3.1
| Identification of multiple prompt GRB spectral components | 2.3.10
1 keV | Insight on prompt GRB emission mechanism | 2.3.10
| Detection of X-ray plateaus | 2.2.1
| Detection of shock breakout of relativistic supernovae | 2.3.1
| Detection of X-ray flashes | 2.3.1
0.1 keV | Detection of shock breakout of most supernovae, full mapping of relativistic transients | 2.3.1
Table 6: Corresponding results possible with specific low and high energy
instrument thresholds. Each row implicitly assumes sufficient sensitivity
for the given topic.
Observing at higher energies improves the spectral characterization.
Understanding the spectrum and energetics is vastly improved when spectral
curvature can be measured. With a high-energy threshold of 1 MeV curvature
will be measured for most collapsar GRBs. At 10 MeV it will be measured for
the majority of all GRBs and MGFs, although the curvature has been measured
above 10 MeV in some rare cases. Extending the energy range higher is also
important to seek potential additional spectral components, in particular an
extra power-law component that is likely indicative of afterglow radiation.
Sensitivity up to 20 GeV would match the expected low-energy threshold of CTA
and thus provide complete spectral coverage to the VHE regime. This complete
spectral coverage would enable determination of the spectral turnover between
the synchrotron and inverse Compton humps.
Decreasing the low-energy threshold is also crucial for characterization of
GRBs. Sensitivity to $\sim$10 keV allows recovery of the observed low-energy
excess in some bright GRB spectra, the origin of which is still unknown. It
also enables recovery of extended emission and mapping of the low-energy
quasi-thermal tails seen after GRBs 170817A and 150101B (Goldstein et al.,
2017; Burns et al., 2018). Extension down to 0.1–1.0 keV would enable routine
measurement of three separate regimes in the prompt synchrotron
interpretation, increase the recovery rate and insight on extended emission,
and increase the recovery rate of the cooling of thermal tails and shock
breakout emission.
Additionally, lowering the low-energy threshold increases the capability of the
instrument to recover additional transient classes. If the threshold is at 50
keV, the majority of SGR short bursts are missed, while many are recovered
with a threshold of 10 keV. Given the peak flux-temperature connection for
SGRs, pushing to even lower values has limited return. A 10 keV threshold
allows recovery of many low-luminosity GRBs, with near-complete sampling if a
1 keV threshold can be achieved. This threshold would also allow recovery of
relativistic supernovae and X-ray flashes, likely helping to understand the
exotic zoo of relativistic transients. Pushing down to $\sim 0.1$ keV would
also capture shock breakout from normal CCSNe, providing direct insight into
most types of massive stars at their end, fulfilling an additional key science
goal of the Astro2020 Decadal. These capabilities would also unlock
identification of X-ray bright tidal disruption events and undoubtedly uncover
new transient classes.
Fermi GBM and LAT combine to provide unparalleled broadband observations of
GRBs. The GBM low-energy threshold of 8 keV has allowed for the identification
of structure in the low-energy end of GRB spectra ($\sim$10s of keV), which
may be evidence for an additional thermal component or an extra break in the
broader curvature. The high-energy response from Fermi shows an expected
turnover below the LAT energy range in several GRBs. Together they have shown
evidence for the onset of a power-law component during the prompt phase,
likely attributable to the onset of external shock. Pushing to even lower
energies and covering the gap of $\sim$10–50 MeV promises further advances
in understanding the prompt GRB components.
### 3.7 Sensitivity
The sensitivity of high energy monitors drives both the number of events they
will observe and the statistics available for detections of a given
brightness. Over typical GRB energy ranges, the order of magnitude sensitivity
of Fermi-GBM and Swift-BAT triggers is $\sim 1\times 10^{-7}$ erg s$^{-1}$
cm$^{-2}$. BATSE triggers and Swift-BAT subthreshold triggers reach a few
$\times 10^{-8}$ erg s$^{-1}$ cm$^{-2}$. An approximate sensitivity for the
distant instruments of the IPN (i.e. beyond Konus) is $\sim 1\times 10^{-6}$
erg s$^{-1}$ cm$^{-2}$.
Bright bursts are easily recoverable by the IPN instruments, but the IPN
sensitivity strongly limits scientific return. The non-detection of GRB
170817A by any spacecraft more distant than INTEGRAL demonstrates that GW-GRB
joint detections require deeper sensitivities. The current all-sky coverage of
optically-identified afterglows can exclude most of the viable parameter space
for prompt GRB counterparts, but it is not sufficient to fully exclude such a
signal, preventing direct inferences on the potential existence of dirty
fireballs.
The deeper sensitivities of GBM and BAT enable far higher detection rates and
recovery of events far deeper into the universe. They also allow recovery of
far greater numbers of magnetar bursts, though this is also enabled by high
temporal resolution data. Neither instrument, even with deep subthreshold
searches, is capable of recovering GRB 170817A beyond 100 Mpc, while IGWN can
already recover BNS mergers out to $\sim$300 Mpc, a $\sim$30$\times$ larger
volume. The deeper BATSE sensitivity resulted in a measurement of the turnover
in the cumulative fluence distribution of long GRBs, proving the cosmological
origin of GRBs. For temporal and spectral studies, the BATSE data is often far
superior to that of modern instruments (though it has a narrower energy range
than GBM). Achieving $\sim 1\times 10^{-8}$ erg s$^{-1}$ cm$^{-2}$ with modern
flight software detection algorithms and TTE data would enable breakthroughs.
Increases in sensitivity well beyond BAT and GBM are key to observing
high-redshift objects as well as to enhanced studies of magnetars in the
local universe. However, we emphasize that the characteristic sensitivity
values quoted here do not apply to X-ray transients, which generally have
lower absolute values as they are integrated over a narrower energy range and
we are quoting energy (not photon) fluxes.
### 3.8 Maximum Photon Rate
The most stringent requirement on a maximal photon rate comes from a potential
energetic Galactic MGF which, in the most extreme scenario, may exceed
$1\times 10^{8}$ photons s$^{-1}$ cm$^{-2}$, although the SGR 1806 event had a
peak flux of $1.5\times 10^{7}$ photons s$^{-1}$ cm$^{-2}$ (Frederiks et al.,
2007a) and the other two nearby MGFs were more than an order of magnitude less
luminous. The highest rates of extragalactic MGFs, normal Galactic SGR short
bursts, and the BOAT would be perfectly measured if an instrument could handle
$1\times 10^{4}$ photons s$^{-1}$ cm$^{-2}$; roughly one GRB per year exceeds
$\sim 1\times 10^{3}$ photons s$^{-1}$ cm$^{-2}$.
The maximum photon rate a given detector can handle is determined by the
instrument size and various aspects of its readout implementation. For
example, it is far easier to saturate BATSE than a smallsat with much smaller
detectors. It is easier to saturate slower scintillators than fast
scintillators. The electronics readout speed is also important. When the rate
limit is exceeded, various instrumental effects can occur, including data
losses, data gaps, and spectral distortion. These can sometimes
be modeled and corrected, but this process is often imperfect and model
dependent. Achieving full, unaffected data of a Galactic giant flare likely
requires a tiny dedicated detector built for this purpose. For all other
purposes, large instruments can likely make clean observations, but there is a
cost to accommodating sufficiently high photon rates; however, exceptionally
luminous and energetic events often provide greater physical insights,
strongly motivating this additional cost.
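As a simple illustration of why rate limits matter, a standard
non-paralyzable dead-time model, with an assumed and purely illustrative
per-count dead time, shows the measured rate rolling over as the true rate
climbs:

```python
# Non-paralyzable dead-time model: a detector busy for tau seconds after each
# registered count measures m = n / (1 + n * tau) for a true count rate n.
TAU = 2.6e-6  # assumed per-count dead time in seconds (illustrative)

def measured_rate(true_rate):
    return true_rate / (1.0 + true_rate * TAU)

for n in (1e3, 1e4, 1e6, 1e8):  # detector counts/s, spanning GRBs to an MGF
    m = measured_rate(n)
    print(f"true {n:.0e}/s -> measured {m:.2e}/s ({100 * (n - m) / n:5.1f}% lost)")
```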
### 3.9 Spectral Resolution
High-energy transients usually have smooth continuum spectra owing to their
origin from incoherent radiative processes in relativistic plasmas. Instrument
energy resolution is typically 10%–50% over $\sim$50–300 keV, which has proven
sufficient for nearly all purposes. For future study of physically motivated
models or possible lines in prompt GRB emission (Ravasio et al., 2023), higher
energy resolution may be beneficial. This will be informed by the forthcoming
COSI mission. We add that this requires spectral readout into a sufficient
number of energy channels in order to maximize sensitivity to transients and
enable spectral analysis. If a high-energy monitor low-energy threshold were
extended down to $\sim$1 keV, then spectral resolution capable of resolving
lines, e.g. iron lines, would be scientifically well motivated.
High spectral resolution benefits constraints on fundamental physics. For
instance, constraining populations of low-mass black holes for dark matter by
lensing of fast transients (such as MGFs) potentially requires higher spectral
resolution $\Delta E/E\ll 10\%$ (Katz et al., 2018).
### 3.10 Polarization
The study of polarization in prompt GRBs is still relatively new. With
properly calibrated instruments, scientific advancements can be made with a
minimum detectable polarization down to 10% for bright bursts. Recent works
have shown that tests of specific prompt GRB models are best done with time-
resolved spectropolarimetric analysis. This requires either studying only the
very brightest bursts or vastly increased sensitivity to polarization than
prior instruments, e.g. POLAR. Precise requirements will be more easily
defined after continued advancements in theory and simulation and with respect
to the specific specialized instruments designed for this purpose.
Magnetar bursts at all energies (but particularly above 50 keV) are expected
to be highly polarized. Models of such bursts are at an early stage but are
forthcoming.
## 4 Actionable Items for Missions and Instruments
Earlier sections detailed the scientific potential achievable through high-
energy monitor observations of transient events, as well as the requirements
needed to reach this potential. Several actions could be taken by NASA to
significantly augment this multi-diagnostic, interdisciplinary science. Some
specific actions are summarized below.
We emphasize that this report has implicitly assumed that space missions will
be capable of downlinking data and uplinking new commands with the typical
latencies currently in use today. The TDAMM Communications SAG will
investigate the space-based communications needs for broader science and
instruments than are considered here. We comment directly on how alterations
in the use of DSN may improve IPN operations below.
### 4.1 Active Missions
The Astrophysics Advisory Committee (APAC) recently recommended that NASA’s
Astrophysics Division (APD) perform a reanalysis of its current portfolio
to determine how to maximize TDAMM capabilities. Here we discuss some specific
actionable items where NASA APD could enhance the return of its own assets as well
as assets in other Divisions, similar to its funding a TDAMM enhancement of
the Near-Earth Object Surveyor mission. The key enhancements are all
effectively different ways to get more data accessed as fast as possible. We
note that Swift and Fermi have historic and planned improvements, per the last
round of Senior Review, to continue enhancing the TDAMM science return of
these workhorses. For example, the plan to lower the on-board triggering
threshold of GBM for short GRBs would allow for great scientific returns in
partnership with the IGWN.
Perhaps the most immediate example is the over-guide request from Swift for
additional ground-station passes in the last round of Senior Review. This
would reduce the latency of full data downlink from BAT (both standard and
GUANO data), XRT, and UVOT. Identifying counterparts of GWs from any of these
instruments $\sim$hours earlier would contribute to understanding of specific
NS mergers, and allow the location information to be passed to telescopes
worldwide that are ready to characterize these events. The lower latency of
data would be generally useful in the study of all Swift time-domain sources.
For example, analysis of the Swift UV grism observations of the recent
supernova in M101 (the closest in a decade) were key to the decision to
repoint Swift for additional observations and as guidance for other UV and
X-ray telescopes, but the existing system resulted in delays. The cost is
minimal compared to the scientific gain. The choice of the Senior Review to
avoid decisions on TDAMM-related over-guides, and the lack of strategic TDAMM
planning since then, have left this obvious improvement unrealized even as O4
has already begun.
This is perhaps the greatest immediate priority action NASA could take to
benefit TDAMM.
Additional ground-station (or other downlink) passes could benefit additional
NASA missions with respect to gamma-ray transients. This requires discussion
with the relevant instrument teams to ensure it is possible and beneficial
within the existing mission architecture. Fermi is a specific case where
additional downlink passes would accelerate spectral and temporal analysis of
on-board triggers and sub-threshold searches for additional events rapidly
enough to guide follow-up observations of its core transient classes.
One of the most important improvements would be additional downlinks for the
distant spacecraft of the IPN. Today the IPN includes 11 gamma-ray detectors,
of which four are beyond low Earth orbit. Additional types of detectors can be
used for particularly bright events (on the order of one per year). The most
distant gamma-ray detector alternates between the Mercury Gamma-ray and
Neutron Spectrometer (MGNS) on-board BepiColombo Mercury Planet Orbiter, and
the High Energy Neutron Detector (HEND) on-board Mars Odyssey. The recent
switch of the MGNS data resolution on BepiColombo, en route to Mercury, from
250 ms to 50 ms has enabled current IPN localizations to rival Swift-BAT
positions. HEND
on-board Mars Odyssey has 250 ms resolution. Konus on-board the US Wind
satellite has 2.944 s background resolution and 2 ms resolution for on-board
triggers. Additional downlinks would support more rapid IPN localization in
the multimessenger era and could possibly support higher temporal resolution.
However, we emphasize that any change is not trivial and must be done in
concert with interest from the instrument and mission teams. Potential issues
include aging flight recorders, the necessity to maneuver the spacecraft for
downlinks, and potential onboard processor or storage limitations.
Beyond decreasing the latency for data to arrive on the ground, improvements
can be made to data access. For Fermi-GBM, distribution of HEALPix maps that
account for systematics of the ground localization would be beneficial for
observations beginning under 15 minutes, as would removal of Earth occulted
positions from localization maps. Serializing the roboBA localization into the
alert itself rather than waiting for the delay added by HEASARC would allow
more rapid use of GBM data. Further, distribution of retraction Notices due to
the astrophysical nature or misclassification of events would be beneficial to
guide follow-up efforts. Some of these enhancements can be done outside of the
instrument team pipelines and could be handled by the IPN, as discussed in
Section 5.
Support for improving the robustness of existing IPN data pipelines and
enhancing the information shared is key to automating the process and reducing
ground-based contributions to alert latencies. Support for creating instrument
responses for the purposes of GRB studies is necessary for inclusion in any
multi-mission analysis.
### 4.2 Forthcoming Missions
There are numerous forthcoming missions of relevance. NASA APD has selected
COSI as an upcoming Small Explorer mission, StarBurst as an upcoming Pioneer,
and is funding additional technology demonstration missions through APRA.
The Space-based multi-band astronomical Variable Objects Monitor (SVOM) is a
French-Chinese mission designed with a similar ideology as Swift. China is
launching a series of GECAM satellites. Several gamma-ray spectrometers on
planetary and heliophysics spacecraft are planned to be built and launched by
American and Russian scientists over the next decade.
StarBurst will have several times the effective area of Fermi-GBM, launching
as a ride share. StarBurst is using silicon photomultipliers, which will limit
the overall instrument lifetime at high inclination orbits due to increased
radiation exposure. COSI will have a dedicated mission launch and requires a
$<2^{\circ}$ inclination orbit. Pairing StarBurst to the COSI launch would
significantly enhance the scientific return of StarBurst by increasing
observing livetime due to fewer passages through high rate particle background
regions. It would also increase the operating lifetime of StarBurst by
reducing the damage of the silicon photomultipliers. Additionally, increasing
the number of ground contacts and bandwidth would enable the downlink of all
StarBurst TTE data, instead of TTE limited to on-board triggers.
COSI is slated to launch in 2027, well-timed with the planned beginning of the
IGWN O5 observing run. The onboard COSI trigger for GRBs will be initiated by
count rate increases in the anticoincidence shield, which will trigger prompt
downlink of a limited number of Compton events allowing for a $\sim$deg scale
localization within an hour of the trigger. Enhancing the amount of data that
can be rapidly downlinked by COSI would improve that initial localization.
Enabling the downlink of the full dataset would result in final localizations
at a much higher cadence, enhancing the transient science return of COSI. We
also support the downlink of all possible data from the COSI shields and
single-site events in the main instrument, regardless of likely event class or
the likelihood of being recovered as Compton events. COSI will contribute to
the IPN with events detected by its shields and will help characterize the IPN
systematics through facilitating the recovery of arcsecond localizations.
There may also be possible TDAMM enhancements for the COSI student-led BTO
instrument, which will broaden the energy coverage and can also contribute to
the IPN.
The Johns Hopkins Applied Physics Laboratory is building the Psyche Gamma-Ray
and Neutron Spectrometer (GRNS), the Mars-moon Exploration with GAmma rays and
NEutrons (MEGANE) instrument for the JAXA Martian Moons eXploration (MMX)
mission, and the Dragonfly Gamma-ray and Neutron Spectrometer (DraGNS). All
three spectrometers use germanium which must be cooled. DraGNS will utilize
the temperature of the Titan atmosphere for passive cooling, preventing use
for the IPN.
The Psyche GRNS will contribute to the IPN for $\sim$6 years while MEGANE on
MMX should contribute for $\sim$2 years. These instrument and mission
operations have been adapted for use in the IPN. Each has an onboard trigger
that will achieve 50 ms resolution for a fixed time period. At Psyche’s
greatest distance of 3 AU this allows a statistical timing annulus width of
$\sim 10^{\prime\prime}$. Additional TDAMM support from NASA could ensure the
data is accessed as quickly, easily, and reliably as possible, and could
support the generation of GRB response matrices.
The Ioffe Institute in Russia is launching gamma-ray spectrometers on three
future spacecraft (Ulanov et al., 2019). Konus-UF will consist of two detector
units on the World Space Observatory Ultraviolet, providing all-sky coverage
and possibly TTE data from a geosynchronous orbit. If so, it would be the
first TTE data from beyond low Earth orbit. The twin InterhelioProbe
(Kuznetsov et al., 2016) spacecraft will each contain one detector unit and
will be among the few spacecraft to significantly diverge from the ecliptic,
which will be valuable for IPN localizations. US contributions to these
missions may bolster the IPN and meet the Decadal suggestion of supporting a
TDAMM program through international partnership.
As this report is being delivered, the review of the current Mission of
Opportunity call is underway. Both missions, LEAP and MoonBEAM, are dedicated
GRB monitors, with different focus areas. We support consideration of
additional TDAMM enhancements should one or both instruments be selected.
### 4.3 The Decadal-Recommended High Energy Monitor
The top priority sustaining activity recommended by the Astro2020 Decadal
Pathways to Discovery in Astronomy and Astrophysics for the 2020s (National
Academies of Sciences, Engineering, and Medicine, 2021) was: “NASA should
establish a time-domain program to realize and sustain the necessary suite of
space-based EM capabilities required to study transient and time-variable
phenomena, and to follow-up multi-messenger events. This program should
support the targeted development and launch of competed Explorer-scale or
somewhat larger missions and missions of opportunity.” While much of TDAMM can
be done from the ground, the wavelengths only observable from space are
critical for the TDAMM ecosystem, as noted by the Decadal: “The most important
of these are wide-field gamma-ray and X-ray monitoring, and rapid and flexible
imaging and spectroscopic follow-up in the X-ray, ultraviolet (UV), and far-
infrared (far-IR).” We also highlight the following quote to emphasize the
introduction of the concept of a workhorse mission to NASA APD: “NASA’s
workhorse hard X-ray and gamma ray transient facilities (Swift and Fermi,
respectively) are aging and their longevity is uncertain. Higher sensitivity
all-sky monitoring of the high-energy sky […] is a critical part of our vision
for the next decade in transient and multi-messenger astronomy.” Other NASA
science divisions have sustaining capabilities that are ensured across
generations of missions; for example, total and complete coverage of the solar
flares from the Sun. The message is clear: NASA astrophysics should ensure
continuous coverage of the sky with wide-field high-energy monitors.
Furthermore, the total NASA TDAMM program was recommended a budget of
\$500M–\$800M for use over the coming decade.
Relativistic transients are a major component of time-domain and
multimessenger astrophysics, containing the multi-diagnostic events of
supernovae, collapsars, NS mergers, TDEs, FRBs, magnetars, and some signals
whose origin is still unknown. The science and corresponding requirements
detailed in this document provide motivation and an outline for a strategic
high-energy monitor. In the context of a field where every other wavelength is
facing a major upgrade in TDAMM capabilities (e.g., the Decadal recommended
next-generation Very Large Array, the Square Kilometer Array, CHORD, Roman,
the Vera Rubin Observatory, ULTRASAT, Einstein Probe, and the Cherenkov
Telescope Array) including simultaneous upgrades of ground-based GW
interferometers and HEN telescopes, a TDAMM-focused high-energy monitor is the
missing critical link in the full ecosystem. It is important for bolstering
the return of NASA’s investment in other missions and infrastructure.
NASA’s current largest investment in a high-energy monitor is COSI. COSI will
achieve $\sim$degree scale localizations of GRBs that will be reported within
1 hour, meeting some goals outlined in Section 3 and the temporal and
polarization requirements listed below. COSI’s prime design driver; however,
is nuclear astrophysics, and although it will revolutionize this field, COSI
will not replace Fermi or Swift for TDAMM studies. As highlighted in the
Decadal, another future high-energy monitor is SVOM, a French-Chinese mission
being built with similar instrument designs and science goals as Swift, but
with worse sensitivity and localization. SVOM is set to launch in 2024 and
will provide localizations at $\sim 10^{\prime}$ precision for 50-60 GRBs per
year. Over the nominal 3 year mission and potential 2 year extension, SVOM
will produce a sample of well-studied events, but its capabilities will not
match that of Swift or Fermi. Additionally, as noted in Astro2020, the US has
no significant involvement.
A number of nations have built, or are in the process of building, low-cost
scintillator-based instruments to search for GRBs from GW sources. A veritable
fleet will be operating soon. The largest of these are Glowbug and StarBurst,
funded by NASA as a technology demonstration and Pioneer, respectively. These
missions will both be more sensitive than Fermi-GBM, a key piece towards
improving the number of GW-GRB detections, but have narrower energy ranges,
face the same limitations of operating in LEO, and cannot substantially
lengthen their $\sim$1 year mission lifetimes given the instrument design.
Additionally, they are designed for one area of
transient gamma-ray science. They will both provide new insight into the
universe in partnership with GW detectors, but they are not Fermi or Swift
replacements, which is not surprising given the orders of magnitude lower
total cost.
Based on the requirements outlined herein, and the currently planned future of
this field, there are some obvious missing capabilities. A high-
energy monitor covering the majority of the sky and capable of $\sim$arcminute
scale localizations, provided within $\sim$10 s, from an instrument that
continuously observes at a sufficient sensitivity would fill the missing gap
in the transient ecosystem. The operational mode of the IPN precludes this
possibility, given light travel time from distant spacecraft. Such a dedicated
mission is within the capability of existing technology. If the instrument
could achieve 10’ scale localizations, a low-energy threshold down to $\sim$1
keV and transient spectral sensitivity to several MeV, while maintaining
background stability, it would additionally bring the recovery of the initial
shock breakout (necessary to map the properties of the progenitor star) and
high-cadence monitoring of the X-ray sky (also highlighted in the Decadal).
This seems feasible with developing technologies (e.g. Chattopadhyay et al.,
2018) and achievable as the TDAMM mission at the scale recommended by the
Decadal. Such a mission would prove a critical partner to all other transient
facilities including nearly the entire astrophysics fleet of NASA (and the
rest of humanity) as well as the low and high energy neutrino facilities, low
and high frequency GW interferometers, and the vast numbers of ground-based
telescopes used in follow-up observations.
Such a mission, through collaboration with worldwide partners, would unlock
new knowledge on fundamental physics including gravity parity violation, dark
matter, antimatter, the speed of gravity, and Lorentz Invariance violation.
Some of these tests using astrophysics are orders of magnitude more sensitive
than any other proposed method. It would provide the most precise measures of
the behavior of dense matter by determining the maximum mass of NSs, the
lifetime of meta-stable NSs, the radius of NSs, insight into the proto-NSs
formed during core-collapse, and the compactness of NSs. The maximum mass of a
NS is an asymptotic value, providing a uniquely constraining test of dense
matter, and could be measured to 1% precision (Margalit & Metzger, 2019),
which is a factor of several better than non-asymptotic tests expected through
currently active or selected missions. This mission would probe the origin of
GRBs, FRBs, GWs, astrophysical neutrinos, low-luminosity GRBs, ultra-long
GRBs, unusual supernovae and LFBOTs, every other relativistic transient class
discovered with forthcoming surveys, and rare and unusual individual
transients. It will provide insight into how jets form, whether black holes
return power to the universe, the physics of magnetars, and the transition
regime between dense and condensed matter. It will support multiple
diagnostics for identifying different source classes, standardize NS mergers
for precision cosmology throughout the universe, support proper inference on
the origin of the elements, standardize CCSNe to understand how they explode,
and help resolve the neutrino mass hierarchy. The all-sky field of view is critical
to identify all nearby transients where the diagnostics with the most limited
detection distances can still be recovered, a capability that has never
existed with prompt and precise localizations. Implementing the requirements
will provide significant advances in astrophysics, cosmology, gravity, and
fundamental physics, and enable interdisciplinary work in nuclear, particle,
plasma, and atomic physics.
The level of support for space-based TDAMM missions recommended by the
Astro2020 Decadal is absolutely necessary to unlock this magnificent breadth
of science. While no mature mission concept is capable of these requirements,
informed estimation accounting for the cost of fiducial detectors to achieve
the requisite sensitivity and including the requirement for achieving at least
a high Earth orbit suggests it may not fit within the Small Explorer mission
budget cap. The Decadal also highlights the need for “…rapid and flexible
imaging and spectroscopic follow-up in the X-ray, ultraviolet (UV), and far-
infrared (far-IR)”, which are wavelengths only accessible by instruments in
space. Key to the modern TDAMM system is the decoupling of discovery monitors
from the follow-up telescopes onto separate spacecraft. This design principle
implies the necessity of dissemination of high-energy alerts to the full
community, which can route to low-latency commanding of the follow-up
telescopes, as shown by Swift. This allows the high-energy monitors to
maintain background stability and vastly reduces the moment of inertia of the
narrow-field telescopes. Together, the cost is likely to fall into the range
of the \$500M–\$800M recommended in Astro2020.
If the above-described monitor is sufficiently sensitive, it will also support
high-redshift GRB science; however, this may not be technically feasible, and
a dedicated $\sim$1 sr field of view instrument may be required. It is
critical for this instrument to overlap with JWST, allowing for follow-up
characterization of the low-mass galaxy hosts in the early universe. In the GW
3G era, it will be critical to recover merger GRBs deep into the universe to
build the precision Hubble diagram out to a redshift of a few. Thus, such a
mission should be designed to launch before the end of JWST and to overlap
with the future GW interferometers. The characterization of explosive
transients by the previously described near-universe monitor will prove
important to science of this far-universe mission.
### 4.4 The Future of the InterPlanetary Network
The IPN has utilized more than 50 instruments, and outlived 31 of them,
providing true all-sky coverage for nearly 50 years. It is the existing
example of a workhorse network in NASA Astrophysics. Regardless of dedicated
GRB missions, the use of all-sky monitoring with relatively precise
localizations will always be needed.
Note that precise localizations of GRBs cannot be routinely accomplished by a
network of detectors entirely contained in low Earth orbit. A given annulus is
described as (RA, Dec, radius, width). The radius is the angle $\theta$,
defined relative to the vector joining the two spacecraft, calculated as
$\cos(\theta)=c\,\delta T/D$, with $c$ the speed of light and $\delta T$ and
$D$ the arrival delay time and distance between the two spacecraft,
respectively. The width is calculated as
$c\,\sigma(\delta T)/\left(D\sin\theta\right)$, where $\sigma(\delta T)$ is
the uncertainty on the delay time (Hurley et al., 1999). The interplanetary
baselines are of order $10^{8}$ km, while LEO provides baselines
$\lesssim 10^{4}$ km. While the temporal resolution of distant IPN spacecraft
can limit timing annuli accuracy, GRB lightcurves are not delta functions,
which sets a floor on how precisely the arrival delay can be determined.
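Reading these relations numerically, with representative baselines and timing
uncertainties from the text:

```python
import numpy as np

C = 299_792.458  # speed of light, km/s

def annulus(delta_t, sigma_dt, baseline_km):
    """IPN annulus radius theta and 1-sigma width, both in degrees."""
    theta = np.arccos(C * delta_t / baseline_km)
    width = C * sigma_dt / (baseline_km * np.sin(theta))
    return np.degrees(theta), np.degrees(width)

# Interplanetary baseline (~1e8 km), 10 ms timing uncertainty: ~arcsecond width.
print(annulus(delta_t=100.0, sigma_dt=0.010, baseline_km=1e8))
# Best-case LEO-only baseline (~1e4 km), same 10 ms uncertainty: ~10 deg width.
print(annulus(delta_t=0.0, sigma_dt=0.010, baseline_km=1e4))
```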
A representative 10 ms arrival time uncertainty in LEO gives a best-case
1$\sigma$ annulus width of $\sim$10 deg. Even for particularly bright bursts
where 1 ms timing is possible, the annulus width is still $\sim$1 deg.
Improved methodologies cannot avoid this fundamental fact. Additionally, small
detectors will also generally result in poorer timing precision given their
smaller effective area. Careful simulations of such networks show that they will
achieve degree-scale localizations similar to those provided by Fermi-GBM
alone (e.g. Thomas et al., 2023; Hurley, 2020). Hurley (2020) also raises the
issue that increasing the size of the network will decrease the average
distance between viewing detectors, thereby limiting the spatial precision
gain possible.
What is required from LEO satellites is deep sensitivity and time-tagged event
(TTE) data, which are a critical complement to the more distant instruments in
the IPN. The more
distant instruments will launch predominantly on planetary and heliophysics
spacecraft, and perhaps on astrophysics satellites that are more often
venturing to the Sun-Earth L2. Often these instruments are enhanced beyond
their primary purpose for use in the IPN: planetary missions use gamma-ray
spectrometers to map elements on the surface of other bodies in the solar
system and heliophysics missions can use them to study the X-ray and gamma-ray
emission in solar flares. Enhancements have included increasing the temporal
resolution of all data and adding onboard GRB triggers (with a pre-trigger
buffer) for higher temporal resolution during specific times. Additional
enhancements that require very little modification to the existing systems
could include prioritization of the GRB data to be downlinked first during
scheduled communication passes and contribution of additional DSN time to
allow for greater temporal resolution.
Swift has demonstrated the recovery of TTE data around externally identified
times of interest, e.g. GW triggers. A valuable addition to missions that host
contributing IPN instruments would be large onboard storage, which would allow
days' worth of TTE data to be saved onboard and requested during specific
times of interest.
A more disruptive enhancement may be scheduling DSN downlinks with greater
frequency. Given the large distances, only directional antennae are viable,
requiring reorientation for communication during coast or acceleration times.
This can result in downlink delays of $\sim$1 week, which is a substantial
limitation. If DSN time can be scheduled and mission operations can
accommodate the change, this would be a vast improvement in IPN operations.
There are a few potential game changers identified in Origins, Worlds, and
Life: A Decadal Strategy for Planetary Science and Astrobiology 2023-2032 for
planetary science that would also enable capability leaps of the IPN. The
first is the planned Deep Space Optical Communications experiment on Psyche,
seeking the first demonstration of optical communications beyond the Moon.
Success would mean potential bandwidth increases of 10–100 fold over current
radio solutions. This would greatly enhance the temporal resolution of
distant IPN instruments and may additionally support more frequent contacts.
The other development is the potential success of Starship and similar mega-
rockets. These heavy launchers would alleviate the greatest limitation of
launching distant spacecraft: mass, potentially allowing the launch of
multiple planetary spacecraft with one rocket. This would allow for larger
gamma-ray spectrometers for the study of element distributions on planetary
bodies. Further, it may allow the return of dedicated GRB monitors on distant
spacecraft, which was routine in the field for decades but has not occurred on
U.S.-led missions since the launch of Ulysses more than 30 years ago. Russia
plans to contribute dedicated GRB monitors to three spacecraft; the US could
explore contributing downlinks in support of the IPN, in line with the
Astro2020 Decadal suggestion of partnering on foreign missions for TDAMM.
The last major change is in the reduction of systematic uncertainty that has
previously limited IPN precision. One advancement is the demonstration of
precise atomic clocks in space and, in particular, the plan to launch one on
the forthcoming VERITAS probe to Venus in 2028. Historically, the onboard
clocks of distant IPN spacecraft have been verified or their drift quantified
using the GRB instruments (e.g. Hurley & Sommer, 1994). The precise absolute
calibration is critical for IPN operation and key to knowledge of the true
spacecraft position. Inclusion of atomic clocks can solve this problem.
Alternatively, both timing and position could be enhanced with pulsar timing
positioning systems (e.g. Ray et al., 2014; Winternitz et al., 2016; Ray et
al., 2017). As discussed, the suggested clocks program in Thriving in Space:
Ensuring the Future of Biological and Physical Sciences Research: A Decadal
Survey for 2023-2032 would vastly exceed the precision needed for the IPN. The
other improvement since Ulysses is the external verification of true source
positions, which supports the verification of IPN precision or modeling of an
irreducible systematic uncertainty. The 90% annulus width of
$4^{\prime\prime}$ was derived from the collaboration between the GRB monitor
on Ulysses, a distant heliophysics satellite, and BATSE, but it was limited by
the temporal resolution of these instruments and the systematics of the IPN at
the time.
If optical communications, megarockets, and atomic clocks become pillars of
future distant spacecraft, then the future of IPN could be scientifically
fruitful, perhaps achieving arcsecond-scale localizations. There are $\sim$20
distant spacecraft launched by NASA or partner space agencies. If even half of
future distant spacecraft carried dedicated GRB monitors, $\sim$10 instruments
would be available. Even if each has an average downlink latency
of days, then a few will downlink data each day. This could support precise
localizations with latency sufficient for successful follow-up of most
sources. The addition of a GRB monitor to any of the future Venus probes would
be beneficial.
At a few AU, e.g. the distance achieved by Ulysses and expected for Psyche,
TTE data would enable statistical localizations on the order of
$0.1^{\prime\prime}$ for particularly bright and variable bursts;
$1^{\prime\prime}$ precision is possible with $\sim$10 ms temporal resolution.
If the atomic clocks and systematics are carefully modeled, the resulting
localizations are sufficiently precise that host galaxy association and
redshift determination could be made without the need for successful follow-up
recovery of the afterglow; only optical spectroscopic follow-up would be
desirable. This could produce a complete mapping of
Hubble diagram alongside 3G GW interferometers.
This range of enhancements, from non-disruptive to the intentional
construction of a dedicated IPN, requires inter-division discussions at NASA.
Elevation of the IPN to a strategic initiative could be a necessity, as
discussed in the next section. This could prioritize taking advantage of new
capabilities and options, such as a dedicated monitor onboard Gateway enabling
TTE data from $\sim$1 light-second away.
As an additional example, Origins, Worlds, and Life: A Decadal Strategy for
Planetary Science and Astrobiology 2023-2032 recommends the Uranus Orbiter and
Probe mission as the highest priority among new flagship missions. If $\sim$2%
of the total mission mass could be allocated to a dedicated GRB monitor and
NASA APD funds the instrument, an on-board trigger, and the technology for
precise timing, then sub-arcsecond timing annuli would be available to the IPN
for nearly two decades ($\sim$13-15 year cruise, $\sim$4.5 year prime
mission). This would be a foundation for a new era of the IPN. With an
appropriate paired instrument (preferably outside of the ecliptic), host
galaxies could be found without requiring an afterglow detection phase. Other
opportunities exist, including the Venus probes. Additional opportunities may
arise pending the outcome of the Heliophysics Decadal.
## 5 Actionable Items for the Gamma-ray Transient Network
The needs of the various communities interested in studying magnetars, compact
mergers, and collapsars also lead to a number of requested products from the
ideal gamma-ray transient network. We find that the best path to creating
these products is to enhance the operations of the IPN.
### 5.1 Support for Community-led Strategic Initiatives
Of critical importance to NASA’s implementation of the TDAMM aspects of the
Decadal is long-term, sustained funding in support of community-led strategic
initiatives, software development, and formalized renewal of mission-oriented
projects that are too small for a Senior Review. There is no general software
development call, preventing NASA from tapping expertise in our broad
community. These issues have precluded enhancement of the IPN due to limited,
sporadic, and fractured support.
Kevin Hurley’s operation of the IPN included half of the real-time
localization and reporting, archiving of the GRB and SGR flare catalogs,
maintenance of the entire system, and providing requested products to various
institutions (e.g. catalogs to LIGO). In his later years, this was supported
through Fermi and Swift GI proposals, submitted annually, providing total
support on the order of $100k per year. This was insufficient to support
modern development of an improved system, and the year-to-year nature of the
funding required time spent on proposals that could otherwise have been used
to deliver an obvious community project. Prior roles as participating
scientist in planetary missions fostered communication and support from the
relevant instrument and mission teams, but the proposed work had to be
directly for the individual funding missions, resulting in bifurcated
community products (e.g. Hurley et al., 2005, 2010, 2011a, 2011b, 2017).
Making the IPN, and the products described above, a community-led strategic-
initiative supported by NASA with review on a reasonable cadence would enable
far greater science with existing and forthcoming missions for a fraction of
the cost of a new mission. Such a decision would also allow the IPN to act in
a manner appropriate to its importance in the field, for example by allowing
the IPN to request NASA-led press releases directly.
The advancement of astronomy has necessitated ever greater group sizes. In
many cases, the lone astronomer has been replaced by groups that have
sometimes evolved into larger collaborations, and a few of those have
coalesced into consortia. This change has been particularly prevalent in the
area of TDAMM, requiring broad expertise to capture a full story of specific
events. Perhaps the most visible is the formalization of IGWN from the LIGO
Scientific Collaboration, Virgo Collaboration, and KAGRA Collaboration. This
integration has greatly benefited these groups and the astronomical community
as a whole. Similarly, the creation of the SNEWS from the global set of MeV
neutrino telescopes will be critical to discover and characterize the next
Galactic supernova. There are also initial discussions on potential
integration of the HEN telescopes.
The gamma-ray transient detectors have not integrated to the same degree. The
IPN and the GW-GRB Working Group are the two relevant multinational consortia.
The IPN has operated as a university-led project within the US for decades.
This setup has allowed continued operation through evolving geopolitical
restrictions and the use of private data that is unlikely to be accessible
outside of the IPN, and it is a key example of community-led, mission-oriented
work. The GW-GRB Working Group is now more than a decade old
and has been co-led by various individuals at universities, non-profits, and
labs.
The integration of these groups is ongoing. While leadership could rotate to
individuals at NASA institutions, it is preferable to maintain the broad
community-led nature of these consortia. However, no sufficient support
mechanism exists to facilitate growth of these projects to meet the needs of
the broader TDAMM community outlined in this report. NASA has supported key
aspects related to these goals through the Internal Scientist Funding Model,
but this mechanism is not accessible to university-led projects. Given that the
deliverables described herein would result in maximal TDAMM science return via
gamma-ray scientists making otherwise publishable results immediately
available to the community, long-term sustained support would act as a proper
reward system for these scientists. Otherwise the individual benefit is
maximized by restricting public release to ensure authorship on
multiwavelength papers.
The purpose of this report is to gather the needs of the wider TDAMM
community, and to ensure those needs are met. The specific products that
maximize science with these instruments are listed in the next subsection. NASA
has facilitated the technical capability for many of these projects through
various grants and internal funding; however, to fully implement the community
requests, support for a greater integration of the gamma-ray transient
community is necessary. The specific implementation likely requires working
through the existing consortia, which enable the sharing of otherwise private
data.
### 5.2 Products
The analysis of data from multiple missions is generally limited to joint
spectral analysis or localization via the IPN. The products listed here would
greatly benefit the community, as previously noted in the Multimessenger
Astronomy Science Analysis Group (Brandt et al., 2020).
#### 5.2.1 Event-based Real-time Alert Stream
The community has developed signal-based reporting to facilitate coordinated
observations and analysis of specific events. This is most evident in the
fixed naming conventions of supernovae, GRBs, FRBs, GWs, etc, which are
generally distinct from the internal identifiers of each observing instrument.
For example, GRB 170817A is the event-based GRB name for the GBM trigger
bn170817529. IGWN has created a single alert stream for GW alerts, allowing
the community to receive the best available real-time information, especially
spatial information, in a fixed format. SNEWS has similarly created this for
temporal and spatial information of MeV detections of neutrinos.
No such stream exists for GRBs. Creating one would be an important benefit the
high-energy monitors could deliver: a single access point to real-time GRB
information for the full follow-up community. It would unify the schema of the
data presentation, allowing a single script to be written to listen to the
alert stream rather than requiring separate work for each existing stream. The
greatest benefit would be improved spatial constraints to aid follow-up
efforts. We emphasize that this stream would support
equitable access to data in the field. Currently large institutions have an
advantage in developing frameworks to handle the vast heterogeneous alert
streams. A single access point allows institutions with less technical
capability to access the full information from a field with a single script.
The Unified Schema from GCN will be a great equalizer in this regard by
facilitating ease of access to information from all contributing TDAMM fields.
As currently implemented, the community is losing key early information on
rare and interesting events. This is perhaps best exemplified by GRB 230307A
(Section 5.3.2). It was iteratively localized by the IPN as data became
available, but limitations introduced by processing the real-time data and the
required manual steps delayed the alerts. Despite this, the localization was
tiled with both Swift, a space-based facility, and ULTRACAM, a large (3.5 m)
ground-based instrument.
# On the calculation of upper variance under multiple probabilities

Xinpeng Li (Research Center for Mathematics and Interdisciplinary Sciences,
Shandong University, 266237, Qingdao, China; <EMAIL_ADDRESS>), Miao Yu
(Research Center for Mathematics and Interdisciplinary Sciences, Shandong
University, 266237, Qingdao, China), Shiyi Zheng (School of Mathematics,
Shandong University, 250100, Jinan, China)

This work was supported by the NSF of Shandong Province (No. ZR2021MA018), the
NSF of China (No. 11601281), the National Key R&D Program of China
(No. 2018YFA0703900), and the Qilu Young Scholars Program of Shandong
University.
###### Abstract
The notion of upper variance under multiple probabilities is defined by a
corresponding minimax optimization problem. This paper proposes a simple
algorithm to solve the related minimax optimization problem exactly. As an
application, we provide the probabilistic representation for a class of
quadratic programming problems.
Keywords: Multiple probabilities; Quadratic programming; Sublinear
expectation; Upper variance
## 1 Introduction
The notion of upper and lower variances generalizes the classical notion of
variance to the setting of multiple probabilities, i.e., to problems with
uncertainty. For example, let $X$ be a random variable representing the daily
return of a stock. For illustration, assume $X\sim N(0.1,0.4)$ in a bull
market and $X\sim N(-0.1,0.4)$ in a bear market; in practice, the mean and
variance parameters can be estimated from the stock's historical data. We wish
to estimate the “risk” (variance) of the stock after one month, but we do not
know the future state (bull or bear) of the market. In this case, the notion
of upper and lower variances is a good measure of such risk. More precisely, let
$\mathcal{P}=\\{P_{1},\cdots,P_{K}\\}$ be a set of probability measures on
measurable space $(\Omega,\mathcal{F})$. For each random variable $X$ with
$\max_{1\leq i\leq K}E_{P_{i}}[X^{2}]<\infty$, the upper and lower variances
of $X$ under $\mathcal{P}$ are defined respectively by
$\overline{V}(X)=\min_{\mu\in\mathbb{R}}\max_{1\leq i\leq
K}E_{P_{i}}[(X-\mu)^{2}],\ \ \ \
\underline{V}(X)=\min_{\mu\in\mathbb{R}}\min_{1\leq i\leq
K}E_{P_{i}}[(X-\mu)^{2}].$
Walley [7] pointed out that it is quite easy to calculate the lower variance,
but more difficult to calculate the upper variance (see Appendix G in [7]).
This paper aims to provide a simple algorithm to calculate the upper variance
$\overline{V}(X)$.
The upper and lower variances can be widely used in many fields, especially in
mathematical finance. Li et al. [1] used upper and lower variances to obtain a
general $G$-VaR model with mean-uncertainty, which generalizes the zero-mean
$G$-VaR model of Peng et al. [5]. In this paper, we provide a new application
of the upper variance. Since the upper variance $\overline{V}(X)$ can
be regarded as the supremum of the variance over the convex hull of
$\mathcal{P}$ (see Theorem 2.3), it can be calculated by a quadratic
programming problem on the unit simplex in $\mathbb{R}_{+}^{K}$ (see
Proposition 4.1). Our algorithm is also useful for solving a class of
quadratic programming problems, and we can obtain a probabilistic
representation for such quadratic programming problems, i.e., the related
optimal value is the upper variance under multiple probabilities.
The structure of this paper is as follows. Section 2 recalls the notions and
properties of upper and lower variances. The algorithm for the calculation of
upper variance is presented in Section 3. Section 4 shows the application of
upper variance in quadratic programming.
## 2 Preliminaries
We first recall the preliminaries of sublinear expectation theory introduced
in Peng [4], which is a powerful tool for dealing with problems with
uncertainty, i.e., problems under multiple probabilities.
Let $\Omega$ be a given set and $(\Omega,\mathcal{F})$ be a measurable space.
Let $\mathcal{P}$ be a set of probability measures on $(\Omega,\mathcal{F})$
characterizing Knightian uncertainty. We define the corresponding upper
expectation ${\mathbb{E}^{\mathcal{P}}}$ by
${\mathbb{E}^{\mathcal{P}}}[X]=\sup_{P\in\mathcal{P}}E_{P}[X].$
Obviously, ${\mathbb{E}^{\mathcal{P}}}$ is a sublinear expectation satisfying
* (i)
Monotonicity:
${\mathbb{E}^{\mathcal{P}}}[X]\leq{\mathbb{E}^{\mathcal{P}}}[Y],$ if $X\leq
Y$;
* (ii)
Constant preserving : ${\mathbb{E}^{\mathcal{P}}}[c]=c,\forall
c\in\mathbb{R}$;
* (iii)
Sub-additivity :
${\mathbb{E}^{\mathcal{P}}}[X+Y]\leq{\mathbb{E}^{\mathcal{P}}}[X]+{\mathbb{E}^{\mathcal{P}}}[Y]$;
* (iv)
Positive homogeneity : ${\mathbb{E}^{\mathcal{P}}}[\lambda
X]=\lambda{\mathbb{E}^{\mathcal{P}}}[X],\forall\lambda\geq 0$.
We call $(\Omega,\mathcal{F},{\mathbb{E}^{\mathcal{P}}})$ the sublinear
expectation space.
In this paper, we denote by ${\rm{co}}(\mathcal{P})$ the convex hull of
$\mathcal{P}$. It is clear that
${\mathbb{E}^{\mathcal{P}}}[X]=\sup_{P\in\mathcal{P}}E_{P}[X]=\sup_{P\in{\rm{co}(\mathcal{P})}}E_{P}[X].$
The upper expectation ${\mathbb{E}^{\mathcal{P}}}[X]$ and the lower
expectation $-{\mathbb{E}^{\mathcal{P}}}[-X]$ of $X$ are denoted by
$\overline{\mu}_{X}$ and $\underline{\mu}_{X}$ respectively, called upper mean
and lower mean of $X$. The interval $[\underline{\mu}_{X},\overline{\mu}_{X}]$
characterizes the mean-uncertainty of $X$, denoted by $M_{X}$.
Now we recall the notion of upper and lower variances, which was first
introduced by Walley [7] for bounded random variables under coherent
previsions, and then generalized by Li et al. [1].
Let $(\Omega,\mathcal{F},P)$ be a probability space and $X$ be a random
variable with finite second moment. The variance of $X$, denoted by
$V_{P}(X)$, is defined as
$V_{P}(X)=E_{P}[(X-E_{P}[X])^{2}].$
Since
$E_{P}[(X-\mu)^{2}]=V_{P}(X)+(E_{P}[X]-\mu)^{2},$
we immediately have
$V_{P}(X)=\min_{\mu\in\mathbb{R}}E_{P}[(X-\mu)^{2}].$
Replacing $E_{P}$ by ${\mathbb{E}^{\mathcal{P}}}$ yields the following
definition.
###### Definition 2.1
For a random variable $X$ on sublinear expectation space
$(\Omega,\mathcal{F},{\mathbb{E}^{\mathcal{P}}})$ with
${\mathbb{E}^{\mathcal{P}}}[X^{2}]<\infty$, we define the upper variance of
$X$ as
$\overline{V}(X):=\min_{\mu\in
M_{X}}\\{{\mathbb{E}^{\mathcal{P}}}[(X-\mu)^{2}]\\},$
and the lower variance of $X$ as
$\underline{V}(X):=\min_{\mu\in
M_{X}}\\{-{\mathbb{E}^{\mathcal{P}}}[-(X-\mu)^{2}]\\}.$
###### Remark 2.2
Since ${\mathbb{E}^{\mathcal{P}}}[(X-\mu)^{2}]$ is a strictly convex function
of $\mu$, there exists a unique $\mu^{*}\in M_{X}$ such that
$\overline{V}(X)={\mathbb{E}^{\mathcal{P}}}[(X-\mu^{*})^{2}].$
It is not hard to prove that
$\overline{V}(X)=\min_{\mu\in\mathbb{R}}\\{{\mathbb{E}^{\mathcal{P}}}[(X-\mu)^{2}]\\},\
\ \
\underline{V}(X)=\min_{\mu\in\mathbb{R}}\\{-{\mathbb{E}^{\mathcal{P}}}[-(X-\mu)^{2}]\\}.$
More details can be found in Walley [7] and Li et al. [1].
Thanks to the minimax theorem, we have the following variance envelope theorem.
###### Theorem 2.3
Let $X$ be a random variable on
$(\Omega,\mathcal{F},{\mathbb{E}^{\mathcal{P}}})$ with
${\mathbb{E}^{\mathcal{P}}}[X^{2}]<\infty$. Then we have
* (i)
$\overline{V}(X)=\sup_{P\in{\rm{co}(\mathcal{P})}}V_{P}(X)$.
* (ii)
$\underline{V}(X)=\inf_{P\in{\rm{co}(\mathcal{P})}}V_{P}(X)=\inf_{P\in\mathcal{P}}V_{P}(X)$.
Proof. Since the interval $M_{X}$ is convex and compact,
${\rm{co}}(\mathcal{P})$ is convex, and $E_{P}[(X-\mu)^{2}]$ is linear in $P$
and convex in $\mu$, the minimax theorem of Sion [6] gives
$\displaystyle\overline{V}(X)$ $\displaystyle=\min_{\mu\in
M_{X}}\sup_{P\in\mathcal{P}}E_{P}[(X-\mu)^{2}]$ $\displaystyle=\min_{\mu\in
M_{X}}\sup_{P\in{\rm{co}(\mathcal{P})}}E_{P}[(X-\mu)^{2}]$
$\displaystyle=\sup_{P\in{\rm{co}(\mathcal{P})}}\min_{\mu\in
M_{X}}E_{P}[(X-\mu)^{2}]=\sup_{P\in{\rm{co}(\mathcal{P})}}V_{P}(X).$
It is obvious that
$\displaystyle\underline{V}(X)$ $\displaystyle=\min_{\mu\in
M_{X}}\inf_{P\in\mathcal{P}}E_{P}[(X-\mu)^{2}]$ $\displaystyle=\min_{\mu\in
M_{X}}\inf_{P\in{\rm{co}(\mathcal{P})}}E_{P}[(X-\mu)^{2}]$
$\displaystyle=\inf_{P\in\mathcal{P}}\min_{\mu\in
M_{X}}E_{P}[(X-\mu)^{2}]=\inf_{P\in\mathcal{P}}V_{P}(X)=\inf_{P\in{\rm{co}(\mathcal{P})}}V_{P}(X).$
###### Remark 2.4
Unlike the envelope theorems in Walley [7] and Li et al. [1], we do not
require $\mathcal{P}$ to be weakly compact. However, convexity of
$\mathcal{P}$ is necessary; see Example 2.5.
###### Example 2.5
Let $X$ be normally distributed with $X\sim N(0.1,0.4)$ under $P_{1}$ and
$X\sim N(-0.1,0.4)$ under $P_{2}$. Taking $\mathcal{P}=\\{P_{1},P_{2}\\}$, we
obtain
$\overline{V}(X)=V_{P^{*}}(X)=0.41>0.4=\max\\{V_{P_{1}}(X),V_{P_{2}}(X)\\},$
where $P^{*}=\frac{1}{2}(P_{1}+P_{2})$, and
$\underline{V}(X)=0.4=V_{P_{1}}(X)=V_{P_{2}}(X).$
In this paper, we only consider the case of $\mathcal{P}$ consisting of
finitely many probability measures, i.e.,
$\mathcal{P}=\\{P_{1},\cdots,P_{K}\\}$. In this case,
${\rm{co}(\mathcal{P})}=\\{P_{{\boldsymbol{\lambda}}}:P_{{\boldsymbol{\lambda}}}=\lambda_{1}P_{1}+\cdots+\lambda_{K}P_{K},\forall{{\boldsymbol{\lambda}}}=(\lambda_{1},\cdots,\lambda_{K})^{T}\in\Delta^{K}\\},$
where
$\Delta^{K}=\\{{{\boldsymbol{\lambda}}}\in\mathbb{R}^{K}:\lambda_{1}+\cdots+\lambda_{K}=1,\lambda_{i}\geq
0,1\leq i\leq K\\}$.
As pointed out by Walley [7], it is usually quite easy to calculate lower
variance $\underline{V}(X)$ by Theorem 2.3, because
$\underline{V}(X)=\min_{1\leq i\leq K}V_{P_{i}}(X)$. For the upper variance
$\overline{V}(X)$, there exists ${\boldsymbol{\lambda}}^{*}\in\Delta^{K}$ such
that $\overline{V}(X)=V_{P_{{\boldsymbol{\lambda}}^{*}}}(X)$, but
$P_{{\boldsymbol{\lambda}}^{*}}$ is not an extreme point in general (see
Example 2.5), so the calculation of upper variance $\overline{V}(X)$ is more
difficult.
## 3 Calculation of upper variance
In this section, we propose a simple algorithm to calculate the upper variance
$\overline{V}(X)$ under $\mathcal{P}=\\{P_{1},\cdots,P_{K}\\}$.
Our goal is to solve the following minimax problem:
$\overline{V}(X)=\min_{\mu\in M_{X}}\max_{1\leq i\leq
K}E_{P_{i}}[(X-\mu)^{2}].$ (1)
We rewrite (1) as
$\overline{V}(X)=\min_{\mu\in M_{X}}\max_{1\leq i\leq
K}(\mu^{2}-2\mu_{i}\mu+\kappa_{i}),$ (2)
where $\mu_{i}=E_{P_{i}}[X]$ and $\kappa_{i}=E_{P_{i}}[X^{2}]$, $1\leq i\leq
K$.
Without loss of generality, we assume
$\mu_{1}\leq\mu_{2}\leq\cdots\leq\mu_{K}$.
In order to prove our main theorem, we need the following two lemmas. The
first lemma characterizes the position of the optimal point of (1). The second
calculates the upper variance for two probability measures.
###### Lemma 3.1
The unique optimal $\mu^{*}\in M_{X}$ in (1) should satisfy one of the
following two conditions.
(1) $\mu^{*}$ is the minimum of some parabola
$f_{i}(\mu):=\mu^{2}-2\mu_{i}\mu+\kappa_{i}$.
(2) $\mu^{*}$ is the intersection of two parabolas $f_{i}$ and $f_{j}$, i.e.,
$f_{i}(\mu^{*})=f_{j}(\mu^{*})$.
Proof. If $\mu^{*}$ is not the intersection of two parabolas, without loss of
generality, we assume that
$f_{1}(\mu^{*})<f_{2}(\mu^{*})<\cdots<f_{K}(\mu^{*}).$
There exists a neighbourhood $O$ of $\mu^{*}$ such that
$f_{1}(x)<f_{2}(x)<\cdots<f_{K}(x),\ \ \forall x\in O.$
Thus
$\min_{x\in O}\max_{1\leq i\leq K}f_{i}(x)=\min_{x\in O}f_{K}(x),$
so $\mu^{*}$ is the minimum of $f_{K}$, i.e., Condition (1) holds.
###### Lemma 3.2
The upper variance of $X$ under $\mathcal{P}=\\{P_{1},P_{2}\\}$ can be
calculated by
$\overline{V}(X)=\max\\{\kappa_{1}-\mu_{1}^{2},\kappa_{2}-\mu_{2}^{2},h(\mu_{12})\\},$
where
$\displaystyle\mu_{12}=\begin{cases}(\mu_{1}\vee\frac{\kappa_{2}-\kappa_{1}}{2(\mu_{2}-\mu_{1})})\wedge\mu_{2},&\mu_{1}<\mu_{2},\\\
\mu_{1},&\mu_{1}=\mu_{2},\end{cases}$
and $h(x)=x^{2}-2\mu_{1}x+\kappa_{1}$.
Proof. We consider the optimal $\mu^{*}$ in two cases as in Lemma 3.1.
In Case (1), there exists $i_{0}\in\\{1,2\\}$ such that
$\overline{V}(X)=V_{P_{i_{0}}}(X)=\kappa_{i_{0}}-\mu_{i_{0}}^{2}$ and
$\mu^{*}=\mu_{i_{0}}$.
In Case (2), since $f_{1}(\mu^{*})=f_{2}(\mu^{*})$, we have
$\mu^{*}=\frac{\kappa_{2}-\kappa_{1}}{2(\mu_{2}-\mu_{1})},$
and
$\displaystyle\overline{V}(X)=h(\mu^{*})=\frac{\mu_{2}\kappa_{1}-\mu_{1}\kappa_{2}}{\mu_{2}-\mu_{1}}+\frac{(\kappa_{1}-\kappa_{2})^{2}}{4(\mu_{1}-\mu_{2})^{2}}.$
In this case, we further require that
$\mu^{*}\in[\mu_{1},\mu_{2}],\ \ \ \ \mu_{1}<\mu_{2}.$
In fact, if
$\frac{\kappa_{2}-\kappa_{1}}{2(\mu_{2}-\mu_{1})}\notin[\mu_{1},\mu_{2}]$,
then we have
$h(\mu_{1})\vee
h(\mu_{2})\leq\max\\{\kappa_{1}-\mu_{1}^{2},\kappa_{2}-\mu^{2}_{2}\\}.$
We take $\mu_{1}$ or $\mu_{2}$ instead of
$\frac{\kappa_{2}-\kappa_{1}}{2(\mu_{2}-\mu_{1})}$ in this case.
Finally, we obtain
$\overline{V}(X)=\max\\{\kappa_{1}-\mu_{1}^{2},\kappa_{2}-\mu_{2}^{2},h(\mu_{12})\\}.$
###### Theorem 3.3
The upper variance of $X$ under $\mathcal{P}=\\{P_{1},\cdots,P_{K}\\}$ can be
calculated by
$\overline{V}(X)=\max\left\\{\max_{1\leq i\leq
K}(\kappa_{i}-\mu_{i}^{2}),\max_{1\leq i<j\leq K}h_{ij}(\mu_{ij})\right\\},$
where
$\displaystyle\mu_{ij}=\begin{cases}(\mu_{i}\vee\frac{\kappa_{j}-\kappa_{i}}{2(\mu_{j}-\mu_{i})})\wedge\mu_{j},&\mu_{i}<\mu_{j},\\\
\mu_{i},&\mu_{i}=\mu_{j},\end{cases}\ \ \ \ \ 1\leq i<j\leq K$
and $h_{ij}(x)=x^{2}-2\mu_{i}x+\kappa_{i}$.
Proof.
For $1\leq i<j\leq K$, let $\overline{V}_{ij}(X)$ be the upper variance under
two probability measures $P_{i}$ and $P_{j}$. Then by Lemma 3.2, we have
$\overline{V}_{ij}(X)=\max\\{\kappa_{i}-\mu_{i}^{2},\kappa_{j}-\mu_{j}^{2},h_{ij}(\mu_{ij})\\}.$
It is obvious that $\overline{V}(X)\geq\overline{V}_{ij}(X)$ for $1\leq i<j\leq
K$, so we obtain
$\overline{V}(X)\geq\max\left\\{\max_{1\leq i\leq
K}(\kappa_{i}-\mu_{i}^{2}),\max_{1\leq i<j\leq K}h_{ij}(\mu_{ij})\right\\}.$
We note that $\kappa_{i}-\mu_{i}^{2}=\min_{\mu\in\mathbb{R}}f_{i}(\mu)$,
$1\leq i\leq K$ and $h_{ij}(\mu_{ij})=f_{i}(\mu_{ij})=f_{j}(\mu_{ij})$ if
$\mu_{ij}=\frac{\kappa_{j}-\kappa_{i}}{2(\mu_{j}-\mu_{i})}$, $1\leq i<j\leq
K$.
By Lemma 3.1,
$\overline{V}(X)\leq\max\left\\{\max_{1\leq i\leq
K}(\kappa_{i}-\mu_{i}^{2}),\max_{1\leq i<j\leq K}h_{ij}(\mu_{ij})\right\\}.$
###### Corollary 3.4
The upper variance of $X$ under $\mathcal{P}=\\{P_{1},\cdots,P_{K}\\}$ can be
calculated by
$\overline{V}(X)=\max_{1\leq i<j\leq K}\left\\{\overline{V}_{ij}(X)\right\\},$
where $\overline{V}_{ij}(X)$ is the upper variance under $P_{i}$ and $P_{j}$,
$1\leq i<j\leq K$.
Now we present our algorithm as follows. Given a random variable $X$ under
$\mathcal{P}=\\{P_{1},\cdots,P_{K}\\}$, we first compute $\mu_{i}=E_{P_{i}}[X]$
and $\kappa_{i}=E_{P_{i}}[X^{2}]$ for $1\leq i\leq K$.
Algorithm:
Step (1): We sort $\mu_{1}\leq\mu_{2}\leq\cdots\leq\mu_{K}$.
Step (2): For $1\leq i<j\leq K$, we calculate $\mu_{ij}$ as
$\displaystyle\mu_{ij}=\begin{cases}(\mu_{i}\vee\frac{\kappa_{j}-\kappa_{i}}{2(\mu_{j}-\mu_{i})})\wedge\mu_{j},&\mu_{i}<\mu_{j},\\\
\mu_{i},&\mu_{i}=\mu_{j}.\end{cases}$
Step (3): Output
$\overline{V}(X)=\max\left\\{\max_{1\leq i\leq
K}(\kappa_{i}-\mu_{i}^{2}),\max_{1\leq i<j\leq K}h_{ij}(\mu_{ij})\right\\},$
where $h_{ij}(x)=x^{2}-2\mu_{i}x+\kappa_{i}$.
In addition, the lower variance is calculated simply by
$\underline{V}(X)=\min_{1\leq i\leq K}(\kappa_{i}-\mu_{i}^{2}).$
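For concreteness, here is a minimal NumPy sketch of the three steps above (the
function name is ours); it reproduces Example 2.5.

```python
import numpy as np

def upper_lower_variance(mu, kappa):
    """Upper and lower variances under P = {P_1, ..., P_K} via Theorem 3.3.
    mu[i] = E_{P_i}[X], kappa[i] = E_{P_i}[X^2]."""
    mu, kappa = np.asarray(mu, float), np.asarray(kappa, float)
    order = np.argsort(mu)                      # Step (1): sort the means
    mu, kappa = mu[order], kappa[order]
    K = len(mu)
    candidates = list(kappa - mu ** 2)          # single-measure variances
    for i in range(K):
        for j in range(i + 1, K):
            if mu[i] < mu[j]:                   # Step (2): clipped intersection point
                m = (kappa[j] - kappa[i]) / (2.0 * (mu[j] - mu[i]))
                m = min(max(m, mu[i]), mu[j])
            else:                               # mu[i] == mu[j]
                m = mu[i]
            candidates.append(m ** 2 - 2.0 * mu[i] * m + kappa[i])  # h_ij(mu_ij)
    return max(candidates), min(kappa - mu ** 2)  # Step (3); lower variance

# Example 2.5: X ~ N(0.1, 0.4) under P_1 and X ~ N(-0.1, 0.4) under P_2,
# so kappa_i = 0.4 + 0.1^2 = 0.41 for both measures.
print(upper_lower_variance([0.1, -0.1], [0.41, 0.41]))  # (0.41, 0.4)
```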
In practice, $\mu_{i}$ and $\kappa_{i}$ can easily be estimated from data. For
example, let $X$ be the daily return of a stock. From the real market we can
obtain the daily return data $\\{x_{i}\\}_{i\in I}$ (resp. $\\{x_{j}\\}_{j\in
J}$) from the bull (resp. bear) market, where $I$ (resp. $J$) denotes the bull
(resp. bear) periods. Then we can estimate the sample means and variances as
$\hat{\mu}_{1}=\frac{\sum_{i\in I}x_{i}}{|I|},\ \ \
\hat{\mu}_{2}=\frac{\sum_{j\in J}x_{j}}{|J|},$
and
$\hat{\sigma}^{2}_{1}=\frac{\sum_{i\in I}(x_{i}-\hat{\mu}_{1})^{2}}{|I|-1},\ \
\ \hat{\sigma}^{2}_{2}=\frac{\sum_{j\in J}(x_{j}-\hat{\mu}_{2})^{2}}{|J|-1}.$
Then we take $\mu_{i}=\hat{\mu}_{i}$ and
$\kappa_{i}=\hat{\sigma}^{2}_{i}+\mu_{i}^{2}$ ($i=1,2$) to calculate the upper
and lower variances.
## 4 Application: Quadratic programming
By Theorem 2.3, (1) is equivalent to the following convex quadratic
programming problem.
###### Proposition 4.1
$\overline{V}(X)=\max_{{\boldsymbol{\lambda}}\in\Delta^{K}}({\boldsymbol{\lambda}}^{T}\boldsymbol{\kappa}-({\boldsymbol{\lambda}}^{T}\boldsymbol{\mu})^{2}),$
(3)
where $\boldsymbol{\kappa}=(E_{P_{1}}[X^{2}],\cdots,E_{P_{K}}[X^{2}])^{T}$ and
$\boldsymbol{\mu}=(E_{P_{1}}[X],\cdots,E_{P_{K}}[X])^{T}$.
Proof. By Theorem 2.3, we have
$\overline{V}(X)=\max_{{\boldsymbol{\lambda}}\in\Delta^{K}}V_{P_{\boldsymbol{\lambda}}}(X).$
It is easily seen that
$\displaystyle
V_{P_{\boldsymbol{\lambda}}}(X)=E_{P_{\boldsymbol{\lambda}}}[X^{2}]-(E_{P_{\boldsymbol{\lambda}}}[X])^{2}={\boldsymbol{\lambda}}^{T}\boldsymbol{\kappa}-({\boldsymbol{\lambda}}^{T}\boldsymbol{\mu})^{2}.$
It can be seen that (3) is a convex quadratic programming problem. There are
many numerical algorithms for such problems, e.g., polynomial-time
interior-point algorithms (Nesterov and Nemirovskii [3]) and accelerated
gradient methods (Nesterov [2]). However, most existing algorithms only yield
approximate solutions. In this section, we provide a simple method to solve
such a problem exactly by means of the upper variance under multiple
probabilities.
We have the following theorem.
###### Theorem 4.2
Given $\boldsymbol{\mu}=(\mu_{1},\cdots,\mu_{K})^{T}\in\mathbb{R}^{K}$ with
$\mu_{1}\leq\mu_{2}\leq\cdots\leq\mu_{K}$ and
$\boldsymbol{\kappa}=(\kappa_{1},\cdots,\kappa_{K})^{T}\in\mathbb{R}^{K}$, the
solution of the following maximization problem:
$V=\max_{{\boldsymbol{\lambda}}\in\Delta^{K}}({\boldsymbol{\lambda}}^{T}\boldsymbol{\kappa}-({\boldsymbol{\lambda}}^{T}\boldsymbol{\mu})^{2})$
is given by
$V=\max\left\\{\max_{1\leq i\leq K}(\kappa_{i}-\mu_{i}^{2}),\max_{1\leq
i<j\leq K}h_{ij}(\mu_{ij})\right\\},$
where
$\displaystyle\mu_{ij}=\begin{cases}(\mu_{i}\vee\frac{\kappa_{j}-\kappa_{i}}{2(\mu_{j}-\mu_{i})})\wedge\mu_{j},&\mu_{i}<\mu_{j},\\\
\mu_{i},&\mu_{i}=\mu_{j},\end{cases}\ \ \ \ \ 1\leq i<j\leq K$
and $h_{ij}(x)=x^{2}-2\mu_{i}x+\kappa_{i}$.
If there exists $i_{0}$ such that $V=\kappa_{i_{0}}-\mu_{i_{0}}^{2}$, then the
optimal ${\boldsymbol{\lambda}}^{*}$ is given by $\lambda_{i_{0}}^{*}=1$ and
$\lambda_{j}^{*}=0$, $j\neq i_{0}$. Otherwise, there exists $1\leq
i_{0}<j_{0}\leq K$ such that $V=h_{i_{0}j_{0}}(\mu_{i_{0}j_{0}})$, then the
optimal ${\boldsymbol{\lambda}}^{*}$ is given by
$\lambda_{i_{0}}^{*}=\frac{\mu_{j_{0}}}{\mu_{j_{0}}-\mu_{i_{0}}}+\frac{\kappa_{i_{0}}-\kappa_{j_{0}}}{2(\mu_{i_{0}}-\mu_{j_{0}})^{2}}$,
$\lambda_{j_{0}}^{*}=1-\lambda_{i_{0}}^{*}$ and $\lambda_{j}^{*}=0$, $j\neq
i_{0},j_{0}$.
Proof. We only give a sketch of the proof. Let $C=\min_{1\leq i\leq
K}\\{\kappa_{i}-\mu_{i}^{2}\\}$.
(1) $C>0$. In this case, by Remark 4.3 and Theorem 2.3, the problem coincides
with (1), so $V$ can be calculated by Theorem 3.3. We only need to identify
the optimal ${\boldsymbol{\lambda}}^{*}$.
If there exists $i_{0}$ such that $V=\kappa_{i_{0}}-\mu_{i_{0}}^{2}$, then it
is clear that $\lambda_{i_{0}}^{*}=1$ and $\lambda_{j}^{*}=0$ for $j\neq
i_{0}$.
If there exists $1\leq i_{0}<j_{0}\leq K$ such that
$V=h_{i_{0}j_{0}}(\mu_{i_{0}j_{0}})$, we consider the following maximization
problem:
$\max_{\lambda_{i_{0}}+\lambda_{j_{0}}=1}\left\\{\lambda_{i_{0}}\kappa_{i_{0}}+\lambda_{j_{0}}\kappa_{j_{0}}-(\lambda_{i_{0}}\mu_{i_{0}}+\lambda_{j_{0}}\mu_{j_{0}})^{2}\right\\}.$
(4)
The optimal solution of (4) is
$\lambda_{i_{0}}=\frac{\mu_{j_{0}}}{\mu_{j_{0}}-\mu_{i_{0}}}+\frac{\kappa_{i_{0}}-\kappa_{j_{0}}}{2(\mu_{i_{0}}-\mu_{j_{0}})^{2}},$
and
$V=\frac{\mu_{j_{0}}\kappa_{i_{0}}-\mu_{i_{0}}\kappa_{j_{0}}}{\mu_{j_{0}}-\mu_{i_{0}}}+\frac{(\kappa_{i_{0}}-\kappa_{j_{0}})^{2}}{4(\mu_{i_{0}}-\mu_{j_{0}})^{2}}=h_{i_{0}j_{0}}(\mu_{i_{0}j_{0}}).$
Similar to the proof of Case (2) in Theorem 3.3, we know in this case
$0\leq\lambda_{i_{0}}\leq 1$.
(2) $C\leq 0$. Consider
$\bar{\boldsymbol{\kappa}}=\boldsymbol{\kappa}+(1-C)\mathbf{1}$; then
$\bar{\kappa}_{i}-\mu_{i}^{2}\geq 1>0$ for $1\leq i\leq K$. Since
$\sum_{i}\lambda_{i}=1$, we have
${\boldsymbol{\lambda}}^{T}\bar{\boldsymbol{\kappa}}={\boldsymbol{\lambda}}^{T}\boldsymbol{\kappa}+1-C$,
so the objective is shifted by a constant and the optimal
${\boldsymbol{\lambda}}^{*}$ is the same as in the case $C>0$.
###### Remark 4.3
If $\kappa_{i}-\mu_{i}^{2}>0$, $1\leq i\leq K$, then $V$ can be regarded as
the upper variance of some random variable $X$ under the set of probability
measures $\\{P_{1},\cdots,P_{K}\\}$ with $E_{P_{i}}[X]=\mu_{i}$ and
$E_{P_{i}}[X^{2}]=\kappa_{i}$, $1\leq i\leq K$. Indeed, we can take $X$ to be
normally distributed under each $P_{i}$, with $X\sim
N(\mu_{i},\kappa_{i}-\mu_{i}^{2})$ under $P_{i}$, $1\leq i\leq K$.
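As a numerical sanity check, the following sketch solves (3) directly with a
generic solver and compares against the exact value from the algorithm of
Section 3 (reusing the upper_lower_variance sketch given there; the instance
is randomly generated and purely illustrative):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
K = 5
mu = np.sort(rng.normal(size=K))                 # mu_1 <= ... <= mu_K
kappa = mu ** 2 + rng.uniform(0.5, 2.0, size=K)  # ensures kappa_i - mu_i^2 > 0

# Maximize the concave objective lam @ kappa - (lam @ mu)^2 over the simplex.
res = minimize(lambda lam: -(lam @ kappa - (lam @ mu) ** 2),
               np.full(K, 1.0 / K), method="SLSQP",
               bounds=[(0.0, 1.0)] * K,
               constraints={"type": "eq", "fun": lambda lam: lam.sum() - 1.0})

print(-res.fun)                            # approximate optimum of (3)
print(upper_lower_variance(mu, kappa)[0])  # exact value from Theorem 4.2
```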
## References
* [1] Shan Li, Xinpeng Li, and George Xianzhi Yuan, Upper and lower variances under model uncertainty and their applications in finance. Int. J. Financ. Eng. 9 (2022), 2250007.
* [2] Yurii Nesterov, Introductory lectures on convex optimization: A basic course. Springer Science & Business Media, 2003.
* [3] Yurii Nesterov and Arkadii Nemirovskii, Interior-point polynomial algorithms in convex programming. Society for Industrial and Applied Mathematics, 1994.
* [4] Shige Peng, Nonlinear expectations and stochastic calculus under uncertainty. Springer, 2019.
* [5] Shige Peng, Shuzhen Yang, and Jianfeng Yao, Improving value-at-risk prediction under model uncertainty. J. Financ. Econ. 21 (2023), 228–259.
* [6] Maurice Sion, On general minimax theorems. Pacific J. Math. 8 (1958), 171–176.
* [7] Peter Walley, Statistical reasoning with imprecise probabilities. Chapman and Hall, 1991.
# High-Dimensional Sparse Linear Bandits

Botao Hao (DeepMind, <EMAIL_ADDRESS>), Tor Lattimore (DeepMind,
<EMAIL_ADDRESS>), Mengdi Wang (Department of Electrical Engineering,
Princeton University, <EMAIL_ADDRESS>)
<EMAIL_ADDRESS>
###### Abstract
Stochastic linear bandits with high-dimensional sparse features are a
practical model for a variety of domains, including personalized medicine and
online advertising (Bastani and Bayati, 2020). We derive a novel
$\Omega(n^{2/3})$ dimension-free minimax regret lower bound for sparse linear
bandits in the data-poor regime where the horizon is smaller than the ambient
dimension and where the feature vectors admit a well-conditioned exploration
distribution. This is complemented by a nearly matching upper bound for an
explore-then-commit algorithm showing that that $\Theta(n^{2/3})$ is the
optimal rate in the data-poor regime. The results complement existing bounds
for the data-rich regime and provide another example where carefully balancing
the trade-off between information and regret is necessary. Finally, we prove a
dimension-free $\mathcal{O}(\sqrt{n})$ regret upper bound under an additional
assumption on the magnitude of the signal for relevant features.
## 1 Introduction
Stochastic linear bandits generalize the standard reward model for multi-armed
bandits by associating each action with a feature vector and assuming the mean
reward is the inner product between the feature vector and an unknown
parameter vector (Auer, 2002; Dani et al., 2008; Rusmevichientong and
Tsitsiklis, 2010; Chu et al., 2011; Abbasi-Yadkori et al., 2011).
In most practical applications, there are many candidate features but no clear
indication about which are relevant. Therefore, it is crucial to consider
stochastic linear bandits in the high-dimensional regime but with low-
dimensional structure, captured here by the notion of sparsity. Previous work
on sparse linear bandits has mostly focused on the data-rich regime, where the
time horizon is larger than the ambient dimension (Abbasi-Yadkori et al.,
2012; Carpentier and Munos, 2012; Wang et al., 2018; Kim and Paik, 2019;
Bastani and Bayati, 2020). The reason for studying the data-rich regime is
partly justified by minimax lower bounds showing that for smaller time
horizons the regret is linear in the worst case.
Minimax bounds, however, do not tell the whole story. A crude maximisation
over all environments hides much of the rich structure of linear bandits with
sparsity. We study sparse linear bandits in the high-dimensional regime when
the ambient dimension is much larger than the time horizon. In order to
sidestep existing lower bounds, we refine the minimax notion by introducing a
dependence in our bounds on the minimum eigenvalue of a suitable exploration
distribution over the actions. Similar quantities appear already in the vast
literature on high-dimensional statistics (Bühlmann and Van De Geer, 2011;
Wainwright, 2019).
#### Contributions
Our first result is a lower bound showing that $\Omega(n^{2/3})$ regret is
generally unavoidable when the dimension is large, even if the action set
admits an exploration policy for which the minimum eigenvalue of the
associated data matrix is large. The lower bound is complemented by an
explore-the-sparsity-then-commit algorithm that first solves a convex
optimization problem to find the most informative design in the exploration
stage. The algorithm then explores for a number of rounds by sampling from the
design distribution and uses Lasso (Tibshirani, 1996) to estimate the unknown
parameters. Finally, it greedily chooses the action that maximizes the reward
given the estimated parameters. We derive an $\mathcal{O}(n^{2/3})$ dimension-
free regret that depends instead on the minimum eigenvalue of the covariance
matrix associated with the exploration distribution. Our last result is a
post-model-selection linear bandit algorithm that applies the phased
elimination algorithm (Lattimore et al., 2020) to the model selected by a
first-stage regularized estimator. Under a sufficient condition on the minimum
signal of
the feature covariates, we prove that a dimension-free $\mathcal{O}(\sqrt{n})$
regret is achievable, even if the data is scarce.
The analysis reveals a rich structure that has much in common with partial
monitoring, where $\Theta(n^{2/3})$ regret occurs naturally in settings for
which some actions are costly but highly informative (Bartók et al., 2014). A
similar phenomenon appears here when the dimension is large relative to the
horizon. There is an interesting transition as the horizon grows, since
$\mathcal{O}(\sqrt{dn})$ regret is optimal in the data rich regime.
Table 1: Comparison with existing regret upper and lower bounds for sparse
linear bandits. Here, $s$ is the sparsity, $d$ is the feature dimension, $n$
is the number of rounds, $K$ is the number of arms, $C_{\min}$ is the minimum
eigenvalue of the data matrix for an exploration distribution (Definition
3.1), and $\tau$ is a problem-dependent parameter that may have a complicated
form and varies across the literature.
Upper Bound | Regret | Assumptions | Regime
---|---|---|---
Abbasi-Yadkori et al. (2012) | $\mathcal{O}(\sqrt{sdn})$ | none | rich
Sivakumar et al. (2020) | $\mathcal{O}(\sqrt{sdn})$ | adver. + Gaussian noise | rich
Bastani and Bayati (2020) | $\mathcal{O}(\tau Ks^{2}(\log(n))^{2})$ | compatibility condition | rich
Wang et al. (2018) | $\mathcal{O}(\tau Ks^{3}\log(n))$ | compatibility condition | rich
Kim and Paik (2019) | $\mathcal{O}(\tau s\sqrt{n})$ | compatibility condition | rich
Lattimore et al. (2015) | $\mathcal{O}(s\sqrt{n})$ | action set is hypercube | rich
This paper (Thm. 4.2) | $\mathcal{O}(C_{\min}^{-2/3}s^{2/3}n^{2/3})$ | action set spans $\mathbb{R}^{d}$ | poor
This paper (Thm. 5.2) | $\mathcal{O}(C_{\min}^{-1/2}\sqrt{sn})$ | action set spans $\mathbb{R}^{d}$ + min. signal | rich
Lower Bound | Regret | Assumptions | Regime
---|---|---|---
Multi-task bandits (Section 24.3 of Lattimore and Szepesvári, 2020) | $\Omega(\sqrt{sdn})$ | N.A. | rich
This paper (Thm. 3.3) | $\Omega(C_{\min}^{-1/3}s^{1/3}n^{2/3})$ | N.A. | poor
#### Related work
Most previous work is focused on the data-rich regime. For an arbitrary action
set, Abbasi-Yadkori et al. (2012) proposed an online-to-confidence-set
conversion approach that achieves a $\mathcal{O}(\sqrt{sdn})$ regret upper
bound, where $s$ is a known upper bound on the sparsity. The algorithm is
generally not computationally efficient, which is believed to be unavoidable.
Additionally, a $\Omega(\sqrt{sdn})$ regret lower bound for data-rich regime
was established in Section 24.3 of Lattimore and Szepesvári (2020), which
means polynomial dependence on $d$ is generally not avoidable without
additional assumptions.
For this reason, it has recently become popular to study the contextual
setting, where the action set changes from round to round and careful
assumptions are made on the context distribution. The assumptions are chosen
so that techniques from high-dimensional statistics can be borrowed. In what
follows, $\tau$ denotes a problem-dependent parameter that may have a
complicated form and varies across the literature.
Lasso bandit approach with an $\mathcal{O}(\tau s\sqrt{n})$ upper bound but
required the average of the feature vectors for each arm satisfies the
compatibility condition (Bühlmann and Van De Geer, 2011). Bastani and Bayati
(2020) and Wang et al. (2018) considered a multi-parameter setting (each arm
has its own underlying parameter) and assumed the distribution of contexts
satisfies a variant of the compatibility condition as well as other separation
conditions. Bastani and Bayati (2020) derived an $\mathcal{O}(\tau
Ks^{2}(\log(n))^{2})$ upper bound, which was sharpened to $\mathcal{O}(\tau
Ks^{2}\log(n))$ by Wang et al. (2018), where $K$ is the number of arms.
Although those results are dimension-free, they require strong assumptions on
the context distribution that are hard to verify in practice. As a result, the
aforementioned regret bounds involved complicated problem-dependent parameters
that may be very large when the assumptions fail to hold.
Another thread of the literature is to consider specific action sets.
Lattimore et al. (2015) proposed a selective explore-then-commit algorithm
that only works when the action set is exactly the binary hypercube. They
derived an optimal $\mathcal{O}(s\sqrt{n})$ upper bound as well as an optimal
gap-dependent bound. Sivakumar et al. (2020) assumed the action set is
generated adversarially but perturbed artificially by some standard Gaussian
noise. They proposed a structured greedy algorithm to achieve an
$\mathcal{O}(s\sqrt{n})$ upper bound. Deshpande and Montanari (2012) studied
the data-poor regime in a Bayesian setting but did not consider sparsity.
Carpentier and Munos (2012) considered a special case where the action set is
the unit sphere and the noise is vector-valued so that the noise becomes
smaller as the dimension grows. We summarize the comparisons in Table 1.
## 2 Problem setting
In the beginning, the agent receives a compact action set
$\mathcal{A}\subseteq\mathbb{R}^{d}$, where $d$ may be larger than the number
of rounds $n$. At each round $t$, the agent chooses an action
$A_{t}\in\mathcal{A}$ and receives a reward
$Y_{t}=\langle A_{t},\theta\rangle+\eta_{t}\,,$ (2.1)
where $(\eta_{t})_{t=1}^{n}$ is a sequence of independent standard Gaussian
random variables and $\theta\in\mathbb{R}^{d}$ is an unknown parameter vector.
We make the mild boundedness assumption that for all $x\in\mathcal{A}$,
$\|x\|_{\infty}\leq 1$. The parameter vector $\theta$ is assumed to be
$s$-sparse:
$\|\theta\|_{0}=\sum_{j=1}^{d}\operatorname{\mathds{1}}\\{\theta_{j}\neq
0\\}\leq s.$
The Gaussian assumption can be relaxed to a conditional sub-Gaussian
assumption for the regret upper bound, but it is necessary for the regret
lower bound. The
performance metric is the cumulative expected regret, which measures the
difference between the expected cumulative reward collected by the omniscient
policy that knows $\theta$ and that of the learner. The optimal action is
$x^{*}=\mathop{\mathrm{argmax}}_{x\in\mathcal{A}}\langle x,\theta\rangle$ and
the regret of the agent when facing the bandit determined by $\theta$ is
$R_{\theta}(n)=\mathbb{E}\left[\sum_{t=1}^{n}\langle
x^{*},\theta\rangle-\sum_{t=1}^{n}Y_{t}\right]\,,$
where the expectation is over the interaction sequence induced by the agent
and environment. Our primary focus is on finite-time bounds in the data-poor
regime where $d\geq n$.
#### Notation
Let $[n]=\\{1,2,\ldots,n\\}$. For a vector $x$ and positive semidefinite
matrix $A$, we let $\|x\|_{A}=\sqrt{x^{\top}Ax}$ be the weighted
$\ell_{2}$-norm and $\sigma_{\min}(A),\sigma_{\max}(A)$ be the minimum
eigenvalue and maximum eigenvalue of $A$, respectively. The cardinality of a
set $\mathcal{A}$ is denoted by $|\mathcal{A}|$. The support of a vector $x$,
$\text{supp}(x)$, is the set of indices $i$ such that $x_{i}\neq 0$. And
$\operatorname{\mathds{1}}\\{\cdot\\}$ is an indicator function. The
suboptimality gap of action $x\in\mathcal{A}$ is $\Delta_{x}=\langle
x^{*},\theta\rangle-\langle x,\theta\rangle$ and the minimum gap is
$\Delta_{\min}=\min\\{\Delta_{x}:x\in\mathcal{A},\Delta_{x}>0\\}$.
## 3 Minimax lower bound
As promised, we start by proving a kind of minimax regret lower bound. We
first define a quantity that measures the degree to which good exploration
distributions over the actions exist.
###### Definition 3.1.
Let $\mathcal{P}(\mathcal{A})$ be the space of probability measures over
$\mathcal{A}$ with the Borel $\sigma$-algebra and define
$C_{\min}(\mathcal{A})=\max_{\mu\in\mathcal{P}(\mathcal{A})}\sigma_{\min}\Big{(}\mathbb{E}_{A\sim\mu}\big{[}AA^{\top}\big{]}\Big{)}\,.$
###### Remark 3.2.
$C_{\min}(\mathcal{A})>0$ if and only if $\mathcal{A}$ spans $\mathbb{R}^{d}$.
Two illustrative examples are the hypercube and probability simplex. Sampling
uniformly from the corners of each set shows that $C_{\min}(\mathcal{A})\geq
1$ for the former and $C_{\min}(\mathcal{A})\geq 1/d$ for the latter.
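Both claims are easy to verify numerically; a minimal NumPy sketch (the
uniform designs below are the ones described in the remark):

```python
import numpy as np

d = 4
# Hypercube corners {-1, 1}^d with the uniform design: E[A A^T] = I, so C_min >= 1.
H = np.array(np.meshgrid(*[[-1.0, 1.0]] * d)).reshape(d, -1).T  # (2^d, d)
print(np.linalg.eigvalsh(H.T @ H / len(H)).min())               # 1.0

# Simplex corners e_1, ..., e_d with the uniform design: E[A A^T] = I/d, so C_min >= 1/d.
print(np.linalg.eigvalsh(np.eye(d) / d).min())                  # 0.25
```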
The next theorem is a kind of minimax lower bound for sparse linear bandits.
The key steps of the proof follow, with details and technical lemmas deferred
to Appendix B.
###### Theorem 3.3.
Consider the sparse linear bandits described in Eq. (2.1). Then for any policy
$\pi$ there exists an action set $\mathcal{A}$ with $C_{\min}(\mathcal{A})>0$
and $s$-sparse parameter $\theta\in\mathbb{R}^{d}$ such that
$R_{\theta}(n)\geq\frac{\exp(-4)}{4}\min\Big{(}C_{\min}^{-\tfrac{1}{3}}(\mathcal{A})s^{\tfrac{1}{3}}n^{\tfrac{2}{3}},\sqrt{dn}\Big{)}.$
(3.1)
Theorem 3.3 holds for any data regime and suggests an intriguing transition
between $n^{2/3}$ and $n^{1/2}$ regret, depending on the relation between the
horizon and the dimension. When $d>n^{1/3}s^{2/3}$ the bound is
$\Omega(n^{2/3})$, which is independent of the dimension. On the other hand,
when $d\leq n^{1/3}s^{2/3}$, we recover the standard $\Omega(\sqrt{sdn})$
dimension-dependent lower bound up to a $\sqrt{s}$-factor. In Section 4, we
prove that the $\Omega(n^{2/3})$ minimax lower bound is tight by presenting a
nearly matching upper bound in the data-poor regime.
###### Remark 3.4.
Theorem 3.3 has a worst-case flavor. For each algorithm we construct a problem
instance with the given dimension, sparsity and value of $C_{\min}$ for which
the stated regret bound holds. The main property of this type of hard instance
is that it includes an informative but high-regret action set, forcing the
learning algorithm to balance the trade-off between information and regret.
This construction leaves open the possibility for others to create minimax
lower bounds for their own problems.
###### Proof of Theorem 3.3.
The proof uses the standard information-theoretic machinery, but with a novel
construction and KL divergence calculation.
Step 1: construct a hard instance. We first construct a low regret action set
${\mathcal{S}}$ and an informative action set $\mathcal{H}$ as follows:
$\begin{split}&{\mathcal{S}}=\Big{\\{}x\in\mathbb{R}^{d}\Big{|}x_{j}\in\\{-1,0,1\\}\
\text{for}\ j\in[d-1],\|x\|_{1}=s-1,x_{d}=0\Big{\\}}\,,\\\
&\mathcal{H}=\Big{\\{}x\in\mathbb{R}^{d}\Big{|}x_{j}\in\\{-\kappa,\kappa\\}\
\text{for}\ j\in[d-1],x_{d}=1\Big{\\}}\,,\end{split}$ (3.2)
where $0<\kappa\leq 1$ is a constant. The action set is the union
$\mathcal{A}={\mathcal{S}}\cup\mathcal{H}$ and let
$\theta=\big{(}\underbrace{\varepsilon,\ldots,\varepsilon}_{s-1},0,\ldots,0,-1\big{)}\,,$
where $\varepsilon>0$ is a small constant to be tuned later. Because
$\theta_{d}=-1$, actions in $\mathcal{H}$ are associated with a large regret.
On the other hand, actions in $\mathcal{H}$ are also highly informative, which
hints towards an interesting tradeoff between regret and information. Note
that $\mathcal{H}$ is nearly the whole binary hypercube, while actions in
${\mathcal{S}}$ are $(s-1)$-sparse. The optimal action is in the action set
$\mathcal{A}$:
$x^{*}=\mathop{\mathrm{argmax}}_{x\in\mathcal{A}}\langle
x,\theta\rangle=\big{(}\underbrace{1,\cdots,1}_{s-1},0,\ldots,0\big{)}\in\mathcal{A}\,.$
(3.3)
Step 2: construct an alternative bandit. The second step is to construct an
alternative bandit $\widetilde{\theta}$ that is hard to distinguish from
$\theta$ and for which the optimal action for $\theta$ is suboptimal for
$\widetilde{\theta}$ and vice versa. Denote $\mathbb{P}_{\theta}$ and
$\mathbb{P}_{\smash{\widetilde{\theta}}}$ as the measures on the sequence of
outcomes $(A_{1},Y_{1},\ldots,A_{n},Y_{n})$ induced by the interaction between
a fixed bandit algorithm and the bandits determined by $\theta$ and
$\widetilde{\theta}$ respectively. Let
$\mathbb{E}_{\theta},\mathbb{E}_{\widetilde{\theta}}$ be the corresponding
expectation operators. We denote a set ${\mathcal{S}}^{\prime}$ as
$\begin{split}{\mathcal{S}}^{\prime}=\Big{\\{}x\in\mathbb{R}^{d}\Big{|}&x_{j}\in\\{-1,0,1\\}\
\text{for}\ j\in\\{s,s+1,\ldots,d-1\\}\,,\\\ &x_{j}=0\ \text{for}\
j=\\{1,\ldots,s-1,d\\},\|x\|_{1}=s-1\Big{\\}}\,.\end{split}$ (3.4)
Clearly, ${\mathcal{S}}^{\prime}$ is a subset of ${\mathcal{S}}$ and for any
$x\in{\mathcal{S}}^{\prime}$, its support has no overlap with
$\\{1,\ldots,s-1\\}$. Then we denote
$\widetilde{x}=\mathop{\mathrm{argmin}}_{x\in{\mathcal{S}}^{\prime}}\mathbb{E}_{\theta}\left[\sum_{t=1}^{n}\langle
A_{t},x\rangle^{2}\right]\,,$ (3.5)
and construct the alternative bandit $\widetilde{\theta}$ as
$\widetilde{\theta}=\theta+2\varepsilon\widetilde{x}\,.$ (3.6)
Note that $\widetilde{\theta}$ is $(2s-1)$-sparse since $\widetilde{x}$
belongs to ${\mathcal{S}}^{\prime}$ that is a $(s-1)$-sparse set. This design
guarantees the optimal arm $x^{*}$ in bandit $\theta$ is suboptimal in
alternative bandit $\widetilde{\theta}$ and the suboptimality gap for $x^{*}$
in bandit $\widetilde{\theta}$ is $\max_{x\in\mathcal{A}}\langle
x-x^{*},\widetilde{\theta}\rangle=(s-1)\varepsilon.$ Define an event
$\mathcal{D}=\left\\{\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}\in{\mathcal{S}})\sum_{j=1}^{s-1}A_{tj}\leq\frac{n(s-1)}{2}\right\\}\,.$
The next claim shows that when $\mathcal{D}$ occurs, the regret is large in
bandit $\theta$, while if it does not occur, then the regret is large in
bandit $\smash{\widetilde{\theta}}$. The detailed proof is deferred to
Appendix B.1.
###### Claim 3.5.
Regret lower bounds with respect to event $\mathcal{D}$:
$\begin{split}R_{\theta}(n)\geq\frac{n(s-1)\varepsilon}{2}\mathbb{P}_{\theta}(\mathcal{D})\qquad\text{and}\qquad
R_{\widetilde{\theta}}(n)\geq\frac{n(s-1)\varepsilon}{2}\mathbb{P}_{\widetilde{\theta}}(\mathcal{D}^{c})\,.\end{split}$
By the Bretagnolle–Huber inequality (Lemma C.1 in the appendix),
$\begin{split}R_{\theta}(n)+R_{\widetilde{\theta}}(n)\geq\frac{n(s-1)\varepsilon}{2}\Big{(}\mathbb{P}_{\theta}(\mathcal{D})+\mathbb{P}_{\widetilde{\theta}}(\mathcal{D}^{c})\Big{)}\geq\frac{n(s-1)\varepsilon}{4}\exp\Big{(}-\text{KL}\big{(}\mathbb{P}_{\theta},\mathbb{P}_{\widetilde{\theta}}\big{)}\Big{)}\,,\end{split}$
where $\text{KL}(\mathbb{P}_{\theta},\mathbb{P}_{\smash{\widetilde{\theta}}})$
is the KL divergence between probability measures $\mathbb{P}_{\theta}$ and
$\mathbb{P}_{\smash{\widetilde{\theta}}}$.
Step 3: calculating the KL divergence. We make use of the following bound on
the KL divergence between $\mathbb{P}_{\theta}$ and
$\mathbb{P}_{\smash{\widetilde{\theta}}}$, which formalises the intuitive
notion of information. When the KL divergence is small, the algorithm is
unable to distinguish the two environments. The detailed proof is deferred to
Appendix B.2.
###### Claim 3.6.
Define
$T_{n}(\mathcal{H})=\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}\in\mathcal{H})$.
The KL divergence between $\mathbb{P}_{\theta}$ and
$\mathbb{P}_{\smash{\widetilde{\theta}}}$ is upper bounded by
$\text{KL}\left(\mathbb{P}_{\theta},\mathbb{P}_{\smash{\widetilde{\theta}}}\right)\leq
2\varepsilon^{2}\left(\frac{n(s-1)^{2}}{d}+\kappa^{2}(s-1)\mathbb{E}_{\theta}[T_{n}(\mathcal{H})]\right)\,.$
(3.7)
The first term in the right-hand side of the bound is the contribution from
actions in the low-regret action set ${\mathcal{S}}$, while the second term is
due to actions in $\mathcal{H}$. The fact that actions in ${\mathcal{S}}$ are
not very informative is captured by the presence of the dimension in the
denominator of the first term. When $d$ is very large, the algorithm simply
does not gain much information by playing actions in ${\mathcal{S}}$. When
$T_{n}(\mathcal{H})<1/(\kappa^{2}(s-1)\varepsilon^{2})$, it is easy to see
$R_{\theta}(n)+R_{\widetilde{\theta}}(n)\geq\frac{n(s-1)\varepsilon}{4}\exp\left(-\frac{2n\varepsilon^{2}(s-1)^{2}}{d}\right)\exp(-2)\,.$
(3.8)
On the other hand, when
$T_{n}(\mathcal{H})>1/(\kappa^{2}\varepsilon^{2}(s-1))$, we have
$\begin{split}R_{\theta}(n)\geq\mathbb{E}_{\theta}[T_{n}(\mathcal{H})]\min_{x\in\mathcal{H}}\Delta_{x}\geq\frac{1}{\kappa^{2}\varepsilon^{2}(s-1)}+\frac{1-\kappa}{\kappa^{2}\varepsilon}\,,\end{split}$
(3.9)
since $\min_{x\in\mathcal{H}}\Delta_{x}=1+(s-1)\varepsilon(1-\kappa)$ from the
definition of $\mathcal{H}$ and $\theta$.
Step 4: conclusion. Combining the above two cases together, we have
$\begin{split}R_{\theta}(n)+R_{\widetilde{\theta}}(n)\geq\min\left(\left(\frac{ns\varepsilon}{4}\right)\exp\left(-\frac{2\varepsilon^{2}s^{2}n}{d}\right)\exp(-2),\,\frac{1}{\kappa^{2}\varepsilon^{2}s}+\frac{1-\kappa}{\kappa^{2}\varepsilon}\right)\,,\end{split}$
(3.10)
where we replaced $s-1$ by $s$ in the final result for notational simplicity.
Consider a sampling distribution $\mu$ that uniformly samples actions from
$\mathcal{H}$. A simple calculation shows that $C_{\min}(\mathcal{A})\geq
C_{\min}(\mathcal{H})\geq\kappa^{2}>0$. This is due to
$\sigma_{\min}\left(\sum_{x\in\mathcal{H}}\mu(x)xx^{\top}\right)=\sigma_{\min}\Big{(}\mathbb{E}_{X\sim\mu}[XX^{\top}]\Big{)}=\kappa^{2}\,,$
where each coordinate of the random vector $X\in\mathbb{R}^{d}$ is sampled
independently and uniformly from $\\{-\kappa,\kappa\\}$. In the data-poor regime when $d\geq
n^{1/3}s^{2/3}$, we choose $\varepsilon=\kappa^{-2/3}s^{-2/3}n^{-1/3}$ such
that
$\begin{split}\max(R_{\theta}(n),R_{\widetilde{\theta}}(n))&\geq
R_{\theta}(n)+R_{\widetilde{\theta}}(n)\\\
&\geq\frac{\exp(-4)}{4}\kappa^{-\tfrac{2}{3}}s^{\tfrac{1}{3}}n^{\tfrac{2}{3}}\geq\frac{\exp(-4)}{4}C_{\min}^{-\tfrac{1}{3}}(\mathcal{A})s^{\tfrac{1}{3}}n^{\tfrac{2}{3}}\,.\end{split}$
Finally, in the data-rich regime when $d<n^{1/3}s^{2/3}$, we choose
$\varepsilon=\sqrt{d/(ns^{2})}$ such that the exponential term is a constant,
and then
$\max(R_{\theta}(n),R_{\widetilde{\theta}}(n))\geq
R_{\theta}(n)+R_{\widetilde{\theta}}(n)\geq\frac{\exp(-4)}{4}\sqrt{dn}\,.\qed$
## 4 Matching upper bound
We now propose a simple algorithm based on the explore-then-commit paradigm
(the explore-then-commit template is also considered by Deshmukh et al. (2018),
but both the exploration and exploitation stages there are very different:
Deshmukh et al. (2018) consider simple regret minimization, while we focus on
cumulative regret minimization) and show that the minimax lower
bound in Eq. (3.1) is more or less achievable. As one might guess, the
algorithm has two stages. First it solves the following optimization problem
to find the most informative design:
$\begin{split}\max_{\mu\in\mathcal{P}(\mathcal{A})}\
\sigma_{\min}\Big{(}\int_{x\in\mathcal{A}}xx^{\top}d\mu(x)\Big{)}\,.\end{split}$
(4.1)
In the exploration stage, the agent samples its actions from the solution
$\widehat{\mu}$ of this problem for $n_{1}$ rounds, collecting a dataset
$\\{(A_{1},Y_{1}),\ldots,(A_{n_{1}},Y_{n_{1}})\\}$. The agent uses the data
collected in the exploration stage to compute the Lasso estimator
$\widehat{\theta}_{n_{1}}$. In the commit stage, the agent executes the greedy
action for the remaining $n-n_{1}$ rounds. The detailed algorithm is summarized in
Algorithm 1.
###### Remark 4.1.
The minimum eigenvalue is a concave function of the matrix argument (Boyd et
al., 2004), which means that the solution to (4.1) can be approximated
efficiently using standard tools such as CVXPY (Diamond and Boyd, 2016).
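To make Remark 4.1 concrete, the following is a minimal CVXPY sketch of the design problem (4.1) for a finite action set; the function name, the SCS solver choice, and the toy action set are illustrative rather than part of the algorithm.

```python
import cvxpy as cp
import numpy as np

def most_informative_design(A):
    """Solve Eq. (4.1) for a finite action set.

    A: (K, d) array whose rows are the actions. Returns a sampling
    distribution mu maximizing the minimum eigenvalue of sum_x mu(x) x x^T.
    """
    K, d = A.shape
    mu = cp.Variable(K, nonneg=True)
    # sum_x mu(x) x x^T written compactly as A^T diag(mu) A.
    second_moment = A.T @ cp.diag(mu) @ A
    # lambda_min is concave, so maximizing it is a convex problem.
    problem = cp.Problem(cp.Maximize(cp.lambda_min(second_moment)),
                         [cp.sum(mu) == 1])
    problem.solve(solver=cp.SCS)
    return mu.value, problem.value

# Example: the hypercube {-1, 1}^2; the uniform design attains C_min = 1.
A = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)
mu_hat, c_min = most_informative_design(A)
```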
Algorithm 1 Explore the sparsity then commit (ESTC)
1: Input: time horizon $n$, action set $\mathcal{A}$, exploration length
$n_{1}$, regularization parameter $\lambda_{1}$;
2: Solve the optimization problem in Eq. (4.1) and denote the solution as
$\widehat{\mu}$.
3: for $t=1,\cdots,n_{1}$ do
4: Independently pull arm $A_{t}$ according to $\widehat{\mu}$ and receive a
reward: $Y_{t}=\langle A_{t},\theta\rangle+\eta_{t}.$
5: end for
6: Calculate the Lasso estimator (Tibshirani, 1996):
$\widehat{\theta}_{n_{1}}=\mathop{\mathrm{argmin}}_{\theta\in\mathbb{R}^{d}}\Big{(}\frac{1}{n_{1}}\sum_{t=1}^{n_{1}}\big{(}Y_{t}-\langle
A_{t},\theta\rangle\big{)}^{2}+\lambda_{1}\|\theta\|_{1}\Big{)}.$ (4.2)
7: for $t=n_{1}+1$ to $n$ do
8: Take the greedy action
$A_{t}=\mathop{\mathrm{argmax}}_{x\in\mathcal{A}}\langle\widehat{\theta}_{n_{1}},x\rangle.$
9: end for
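For illustration, here is a minimal Python sketch of Algorithm 1 on a finite action set with standard Gaussian reward noise. It uses scikit-learn's Lasso, whose objective is $\frac{1}{2n}\|Y-X\theta\|_{2}^{2}+\alpha\|\theta\|_{1}$, so setting $\alpha=\lambda_{1}/2$ matches Eq. (4.2) up to this convention; the helper names and the uniform stand-in for $\widehat{\mu}$ are our own.

```python
import numpy as np
from sklearn.linear_model import Lasso

def estc(A, theta, n, n1, lam1, mu_hat, rng):
    """Explore the sparsity then commit (Algorithm 1) on the rows of A.

    theta is the unknown parameter, used here only to simulate rewards.
    Returns the pseudo-regret of the whole run.
    """
    K, _ = A.shape
    # Exploration stage: sample n1 arms i.i.d. from the design mu_hat.
    idx = rng.choice(K, size=n1, p=mu_hat)
    X, Y = A[idx], A[idx] @ theta + rng.standard_normal(n1)
    # Lasso estimator of Eq. (4.2); sklearn's alpha = lam1 / 2.
    theta_hat = Lasso(alpha=lam1 / 2, fit_intercept=False).fit(X, Y).coef_
    # Commit stage: play the greedy arm for the remaining n - n1 rounds.
    greedy = A[np.argmax(A @ theta_hat)]
    opt = np.max(A @ theta)
    return n1 * opt - (X @ theta).sum() + (n - n1) * (opt - greedy @ theta)

rng = np.random.default_rng(0)
d, s, n, K = 50, 3, 10_000, 200
A = rng.choice([-1.0, 1.0], size=(K, d))
theta = np.zeros(d)
theta[:s] = 0.5
n1 = int(n ** (2 / 3))               # sparsity-agnostic choice of Remark 4.6
lam1 = 4 * np.sqrt(np.log(d) / n1)
mu_hat = np.full(K, 1 / K)           # stand-in for the design of Eq. (4.1)
print(estc(A, theta, n, n1, lam1, mu_hat, rng))
```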
The following theorem states a regret upper bound for Algorithm 1. The proof
is deferred to Appendix B.3.
###### Theorem 4.2.
Consider the sparse linear bandits described in Eq. (2.1) and assume the
action set $\mathcal{A}$ spans $\mathbb{R}^{d}$. Suppose $R_{\max}$ is an
upper bound on the maximum expected reward, i.e.,
$\max_{x\in\mathcal{A}}\langle x,\theta\rangle\leq R_{\max}$. In Algorithm 1,
we choose
$n_{1}=n^{2/3}(s^{2}\log(2d))^{1/3}R_{\max}^{-2/3}(2/C_{\min}^{2}(\mathcal{A}))^{1/3},$
(4.3)
and $\lambda_{1}=4\sqrt{\log(d)/n_{1}}$. Then the following regret upper bound
holds,
$R_{\theta}(n)\leq(2\log(2d)R_{\max})^{\tfrac{1}{3}}C_{\min}^{-\tfrac{2}{3}}(\mathcal{A})s^{\tfrac{2}{3}}n^{\tfrac{2}{3}}+3nR_{\max}\exp(-c_{1}n_{1}).$
(4.4)
Together with the minimax lower bound in Theorem 3.3, we can argue that the
ESTC algorithm is minimax optimal in the time horizon $n$ in the data-poor regime.
###### Remark 4.3.
The regret upper bound in Eq. (4.4) may still depend on $d$ because
$1/C_{\min}(\mathcal{A})$ can be as large as $d$. Indeed, if the action set
consists of the standard basis vectors, then the problem reduces to the
standard multi-armed bandit, for which the minimax regret is $\Theta(\sqrt{dn})$, even with
sparsity. If we restrict our attention to the class of action set such that
$1/C_{\min}(\mathcal{A})$ is dimension-free, then we have a dimension-free
upper bound.
###### Remark 4.4.
Another notion frequently appearing in high-dimensional statistics is the
restricted eigenvalue condition. Demanding a lower bound on the restricted
eigenvalue is a weaker requirement than demanding one on the minimum
eigenvalue, and can therefore lead to stronger results. As it happens, however, the two coincide in the lower bound
construction. The upper bound may also be sharpened, but the resulting
optimization problem would (a) depend on the sparsity $s$ and (b) the
objective would have a complicated structure for which an efficient algorithm
is not yet apparent.
###### Remark 4.5.
There is still an $(s/C_{\min}(\mathcal{A}))^{1/3}$ gap between the lower bound
(Eq. (3.1)) and the upper bound (Eq. (4.4)), ignoring logarithmic factors. We
conjecture that the use of the $\ell_{1}/\ell_{\infty}$ Hölder inequality when
proving Theorem 4.2 is quite conservative. Specifically, we bound the following using
the $\ell_{1}$-norm bound of Lasso (see Eq. (B.15) in the Appendix B.3 for
details),
$\big{\langle}\theta-\widehat{\theta}_{n_{1}},x^{*}-A_{t}\big{\rangle}\leq\big{\|}\theta-\widehat{\theta}_{n_{1}}\big{\|}_{1}\big{\|}x^{*}-A_{t}\big{\|}_{\infty}\lesssim\sqrt{\frac{s^{2}\log(d)}{n_{1}}}.$
The first inequality ignores the sign information of
$\widehat{\theta}_{n_{1}}$ and the correlation between $x^{*}-A_{t}$ and
$\widehat{\theta}_{n_{1}}$. A similar phenomenon has been observed by
Javanmard et al. (2018) and resolved by means of a delicate leave-one-out
analysis to decouple the correlation. An interesting question is whether or
not a similar technique could be used in our case to improve the above bound
to $\sqrt{s\log(d)/n_{1}}$, closing the gap between the regret upper and
lower bounds. On the other hand, surprisingly, even in the classical
statistical settings there are still gaps between upper and lower bounds in
terms of $C_{\min}(\mathcal{A})$ (Raskutti et al., 2011). We speculate that
the upper bound may be improvable, though at present we do not know how to do
it.
###### Remark 4.6.
The algorithm uses knowledge of the sparsity to tune the length of exploration
in Eq. (4.3). When the sparsity is not known, the length of exploration can be
set to $n_{1}=n^{2/3}$. The price is an additional factor of
$\mathcal{O}(s^{1/3})$ in the regret. This is an advantage relative to the
algorithm by Abbasi-Yadkori et al. (2012), for which knowledge of the sparsity
is apparently essential for constructing the confidence set.
###### Remark 4.7.
We do not expect explicit optimism-based algorithms (Dani et al., 2008;
Rusmevichientong and Tsitsiklis, 2010; Chu et al., 2011; Abbasi-Yadkori et
al., 2011) or implicit ones, such as Thompson sampling (Agrawal and Goyal,
2013), to achieve the minimax lower bound in the data-poor regime. The reason
is that the optimism principle does not balance the trade-off between
information and regret, a phenomenon that has been observed before in linear
and structured bandits (Lattimore and Szepesvari, 2017; Combes et al., 2017;
Hao et al., 2020).
## 5 Improved upper bound
In this section, we show that, under an additional minimum-signal condition,
the restricted phase elimination algorithm can achieve a sharper
$\mathcal{O}(\sqrt{sn})$ regret upper bound.
The algorithm shares a similar idea with Carpentier and Munos (2012): it
consists of a feature selection step and a restricted linear bandits step. In
the feature selection step, the agent pulls arms for $n_{2}$ rounds following
the design $\widehat{\mu}$ from Eq. (4.1), and Lasso is then used to conduct
feature selection. Based on the support that Lasso selects, the algorithm
invokes the phased elimination algorithm for linear bandits (Lattimore et al.,
2020) on the selected support.
Algorithm 2 Restricted phase elimination
1: Input: time horizon $n$, action set $\mathcal{A}$, exploration length
$n_{2}$, regularization parameter $\lambda_{2}$;
2: Solve the optimization problem Eq. (4.1) and denote the solution as
$\widehat{\mu}$.
3: for $t=1,\cdots,n_{2}$ do
4: Independently pull arm $A_{t}$ according to $\widehat{\mu}$ and receive a
reward: $Y_{t}=\langle A_{t},\theta\rangle+\eta_{t}.$
5: end for
6: Calculate the Lasso estimator $\widehat{\theta}_{n_{2}}$ as in Eq. (4.2)
with $\lambda_{2}$.
7: Identify the support: $\widehat{S}=\text{supp}(\widehat{\theta}_{n_{2}})$.
8: for $t=n_{2}+1$ to $n$ do
9: Invoke phased elimination algorithm for linear bandits on $\widehat{S}$.
10: end for
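The following is a minimal sketch of the feature selection stage of Algorithm 2, in the same setup as the ESTC sketch above; `phased_elimination` is a placeholder for the routine of Lattimore et al. (2020), which we do not reproduce here.

```python
import numpy as np
from sklearn.linear_model import Lasso

def selected_support(A, theta, n2, lam2, mu_hat, rng):
    """Steps 2-7 of Algorithm 2: explore for n2 rounds, fit Lasso with
    regularization lam2, and return the selected support S_hat."""
    K, _ = A.shape
    idx = rng.choice(K, size=n2, p=mu_hat)
    X, Y = A[idx], A[idx] @ theta + rng.standard_normal(n2)
    theta_hat = Lasso(alpha=lam2 / 2, fit_intercept=False).fit(X, Y).coef_
    return np.flatnonzero(theta_hat)

# Steps 8-9: run phased elimination on the restricted coordinates.
# S_hat = selected_support(A, theta, n2, lam2, mu_hat, rng)
# phased_elimination(A[:, S_hat], horizon=n - n2)  # placeholder routine
```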
###### Condition 5.1 (Minimum signal).
We assume there exists some known lower bound $m>0$ such that
$\min_{j\in\text{supp}(\theta)}|\theta_{j}|>m.$
###### Theorem 5.2.
Consider the sparse linear bandits described in Eq. (2.1). We assume the
action set $\mathcal{A}$ spans $\mathbb{R}^{d}$ as well as
$|\mathcal{A}|=K<\infty$ and suppose Condition 5.1 holds. Let
$n_{2}=C_{1}s\log(d)/(m^{2}C_{\min}(\mathcal{A}))$ for a suitable large
constant $C_{1}$ and choose $\lambda_{2}=4\sqrt{\log(d)/n_{2}}$. Denote
$\phi_{\max}=\sigma_{\max}(\sum_{t=1}^{n_{2}}A_{t}A_{t}^{\top}/n_{2})$. Then
the following regret upper bound of Algorithm 2 holds,
$R_{\theta}(n)\leq
C\Big{(}\frac{s\log(d)}{m^{2}C_{\min}(\mathcal{A})}+\sqrt{\frac{9\phi_{\max}\log(Kn)}{C_{\min}(\mathcal{A})}}\sqrt{sn}\Big{)},$
(5.1)
for universal constant $C>0$.
When $C_{\min}(\mathcal{A})$ is dimension-free and
$m\geq(s\log^{2}(d)/C_{\min}(\mathcal{A})n)^{1/4}$, we reach an
$\mathcal{O}(\sqrt{sn})$ regret upper bound. The proof is deferred to Appendix
B.4. It utilizes the sparsity and variable-screening properties of Lasso. More
precisely, under the minimum signal condition, the Lasso estimator identifies
all the important covariates, i.e.,
$\text{supp}(\widehat{\theta}_{n_{2}})\supseteq\text{supp}(\theta)$, and the
model Lasso selects is sufficiently sparse, i.e.,
$|\text{supp}(\widehat{\theta}_{n_{2}})|\lesssim s$. Therefore, it is enough
to run a linear bandits algorithm on $\text{supp}(\widehat{\theta}_{n_{2}})$.
###### Remark 5.3.
It is possible to remove the dependence on $\phi_{\max}$ in Eq. (5.1)
using a more delicate analysis based on Theorem 3 in Belloni et al. (2013). The
reason we choose a phased elimination type algorithm is that it has an optimal
regret guarantee when the size of the action set is moderately large. When the
action set has an infinite number of actions, we could switch to the linear
UCB algorithm (Abbasi-Yadkori et al., 2011) or appeal to a discretisation
argument.
## 6 Experiment
We compare ESTC (our algorithm) with LinUCB (Abbasi-Yadkori et al., 2011) and
doubly-robust (DR) lasso bandits (Kim and Paik, 2019). For ESTC, we use the
theoretically suggested length of exploration stage. For LinUCB, we use the
theoretically suggested confidence interval. For DR-lasso, we use the code
made available by the authors online.
* •
Case 1: linear contextual bandits. We use the setting in Section 5 of Kim and
Paik (2019) with $N=20$ arms, dimension $d=100$, and sparsity $s=5$. At round $t$,
we generate the action set from $N(0_{N},V)$, where $V_{ii}=1$ and
$V_{ik}=\rho^{2}$ for every $i\neq k$; larger $\rho$ corresponds to a
higher-correlation setting that is more favorable to DR-lasso. The noise is
drawn from $N(0,1)$ and $\|\theta\|_{0}=s$. (A data-generation sketch follows this list.)
* •
Case 2: hard problem instance. Consider the hard problem instance in the proof
of the minimax lower bound (Theorem 3.3), which includes an informative action
set and an uninformative action set. Since the size of the action set
constructed in the hard problem instance grows exponentially with $d$, we
sample uniformly at random 500 actions from the full informative action set
and 200 from the uninformative action set.
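Below is a minimal sketch of the Case 1 action-set generation referenced above, under one reading of the construction (for each coordinate, the arms' entries are drawn jointly from $N(0_{N},V)$); the function name and the default $\rho$ are illustrative.

```python
import numpy as np

def case1_actions(N=20, d=100, rho=0.7, rng=None):
    """Draw one round's action set: for each coordinate j, the N arms'
    j-th entries are sampled jointly from N(0_N, V), where V_ii = 1 and
    V_ik = rho**2 for i != k, so that larger rho correlates the arms."""
    rng = rng if rng is not None else np.random.default_rng(0)
    V = np.full((N, N), rho ** 2)
    np.fill_diagonal(V, 1.0)
    # Columns of the (N, d) action matrix are i.i.d. N(0_N, V) draws.
    return rng.multivariate_normal(np.zeros(N), V, size=d).T
```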
Conclusion: The experiments confirm our theoretical findings. Although our
theory focuses on the fixed action set setting, ESTC works well in the
contextual setting. DR-lasso bandits rely heavily on the context distribution
assumption and almost fail on the hard instance. LinUCB suffers in the
data-poor regime since it ignores the sparsity information.
Figure 1: The top two figures are for Case 1 and the bottom two figures are
for Case 2.
## 7 Discussion
In this paper, we provide a thorough investigation of high-dimensional sparse
linear bandits, and show that $\Theta(n^{2/3})$ is the optimal rate in the
data-poor regime. Our work leaves many open problems on how the shape of the
action set affects the regret, which reveals the subtle trade-off between
information and regret. For instance, it is unclear how the regret lower bound
depends on $C_{\min}(\mathcal{A})$ in the data-rich regime and if
$C_{\min}(\mathcal{A})$ is the best quantity to describe the shape of action
set $\mathcal{A}$.
On the other hand, the ESTC algorithm can only achieve the optimal regret
bound in the data-poor regime and becomes suboptimal in the data-rich regime.
It would be interesting to have an algorithm that achieves optimal regret in
the “best of both worlds”. Information-directed sampling (Russo and Van Roy,
2014) might be a good candidate since it delicately balances the trade-off
between information and regret, which is necessary in sparse linear bandits.
#### Broader Impact
We believe that presented research should be categorized as basic research and
we are not targeting any specific application area. Theorems may inspire new
algorithms and theoretical investigation. The algorithms presented here can be
used for many different applications, and a particular use may have either
positive or negative impacts. We are not aware of any immediate short-term
negative implications of this research and we believe that a broader impact
statement is not required for this paper.
## Acknowledgments and Disclosure of Funding
Mengdi Wang gratefully acknowledges funding from the U.S. National Science
Foundation (NSF) grant CMMI1653435, Air Force Office of Scientific Research
(AFOSR) grant FA9550-19-1-020, and C3.ai DTI.
## References
* Abbasi-Yadkori et al. [2011] Yasin Abbasi-Yadkori, Dávid Pál, and Csaba Szepesvári. Improved algorithms for linear stochastic bandits. In _Advances in Neural Information Processing Systems_ , pages 2312–2320, 2011.
* Abbasi-Yadkori et al. [2012] Yasin Abbasi-Yadkori, David Pal, and Csaba Szepesvari. Online-to-confidence-set conversions and application to sparse stochastic bandits. In _Artificial Intelligence and Statistics_ , pages 1–9, 2012.
* Agrawal and Goyal [2013] Shipra Agrawal and Navin Goyal. Thompson sampling for contextual bandits with linear payoffs. In _International Conference on Machine Learning_ , pages 127–135, 2013.
* Auer [2002] Peter Auer. Using confidence bounds for exploitation-exploration trade-offs. _Journal of Machine Learning Research_ , 3(Nov):397–422, 2002.
* Bartók et al. [2014] G. Bartók, D. P. Foster, D. Pál, A. Rakhlin, and Cs. Szepesvári. Partial monitoring—classification, regret bounds, and algorithms. _Mathematics of Operations Research_ , 39(4):967–997, 2014.
* Bastani and Bayati [2020] Hamsa Bastani and Mohsen Bayati. Online decision making with high-dimensional covariates. _Operations Research_ , 68(1):276–294, 2020.
* Belloni et al. [2013] Alexandre Belloni, Victor Chernozhukov, et al. Least squares after model selection in high-dimensional sparse models. _Bernoulli_ , 19(2):521–547, 2013.
* Bickel et al. [2009] Peter J Bickel, Ya’acov Ritov, Alexandre B Tsybakov, et al. Simultaneous analysis of lasso and dantzig selector. _The Annals of Statistics_ , 37(4):1705–1732, 2009.
* Boyd et al. [2004] Stephen Boyd, Stephen P Boyd, and Lieven Vandenberghe. _Convex optimization_. Cambridge university press, 2004.
* Bühlmann and Van De Geer [2011] Peter Bühlmann and Sara Van De Geer. _Statistics for high-dimensional data: methods, theory and applications_. Springer Science & Business Media, 2011.
* Carpentier and Munos [2012] Alexandra Carpentier and Rémi Munos. Bandit theory meets compressed sensing for high dimensional stochastic linear bandit. In _Artificial Intelligence and Statistics_ , pages 190–198, 2012.
* Chu et al. [2011] Wei Chu, Lihong Li, Lev Reyzin, and Robert Schapire. Contextual bandits with linear payoff functions. In _Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics_ , pages 208–214, 2011.
* Combes et al. [2017] Richard Combes, Stefan Magureanu, and Alexandre Proutiere. Minimal exploration in structured stochastic bandits. In _Advances in Neural Information Processing Systems_ , pages 1763–1771, 2017.
* Dani et al. [2008] Varsha Dani, Thomas P Hayes, and Sham M Kakade. Stochastic linear optimization under bandit feedback. 2008.
* Deshmukh et al. [2018] Aniket Anand Deshmukh, Srinagesh Sharma, James W Cutler, Mark Moldwin, and Clayton Scott. Simple regret minimization for contextual bandits. _arXiv preprint arXiv:1810.07371_ , 2018.
* Deshpande and Montanari [2012] Yash Deshpande and Andrea Montanari. Linear bandits in high dimension and recommendation systems. In _2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton)_ , pages 1750–1754. IEEE, 2012.
* Diamond and Boyd [2016] Steven Diamond and Stephen Boyd. Cvxpy: A python-embedded modeling language for convex optimization. _The Journal of Machine Learning Research_ , 17(1):2909–2913, 2016.
* Hao et al. [2020] Botao Hao, Tor Lattimore, and Csaba Szepesvari. Adaptive exploration in linear contextual bandit. _AISTATS_ , 2020.
* Javanmard and Montanari [2014] Adel Javanmard and Andrea Montanari. Confidence intervals and hypothesis testing for high-dimensional regression. _The Journal of Machine Learning Research_ , 15(1):2869–2909, 2014.
* Javanmard et al. [2018] Adel Javanmard, Andrea Montanari, et al. Debiasing the lasso: Optimal sample size for gaussian designs. _The Annals of Statistics_ , 46(6A):2593–2622, 2018.
* Kim and Paik [2019] Gi-Soo Kim and Myunghee Cho Paik. Doubly-robust lasso bandit. In _Advances in Neural Information Processing Systems_ , pages 5869–5879, 2019.
* Lattimore and Szepesvari [2017] Tor Lattimore and Csaba Szepesvari. The end of optimism? an asymptotic analysis of finite-armed linear bandits. In _Artificial Intelligence and Statistics_ , pages 728–737, 2017.
* Lattimore and Szepesvári [2020] Tor Lattimore and Csaba Szepesvári. _Bandit algorithms_. Cambridge University Press, 2020.
* Lattimore et al. [2015] Tor Lattimore, Koby Crammer, and Csaba Szepesvári. Linear multi-resource allocation with semi-bandit feedback. In _Advances in Neural Information Processing Systems_ , pages 964–972, 2015.
* Lattimore et al. [2020] Tor Lattimore, Csaba Szepesvari, and Gellert Weisz. Learning with good feature representations in bandits and in rl with a generative model. _International Conference on Machine Learning_ , 2020.
* Raskutti et al. [2011] G. Raskutti, M. J. Wainwright, and B. Yu. Minimax rates of estimation for high-dimensional linear regression over $\ell_{q}$ -balls. _IEEE Transactions on Information Theory_ , 57(10):6976–6994, 2011.
* Rudelson and Zhou [2013] Mark Rudelson and Shuheng Zhou. Reconstruction from anisotropic random measurements. _IEEE Transactions on Information Theory_ , 59(6):3434–3447, 2013.
* Rusmevichientong and Tsitsiklis [2010] Paat Rusmevichientong and John N Tsitsiklis. Linearly parameterized bandits. _Mathematics of Operations Research_ , 35(2):395–411, 2010.
* Russo and Van Roy [2014] Daniel Russo and Benjamin Van Roy. Learning to optimize via posterior sampling. _Mathematics of Operations Research_ , 39(4):1221–1243, 2014.
* Sivakumar et al. [2020] Vidyashankar Sivakumar, Zhiwei Steven Wu, and Arindam Banerjee. Structured linear contextual bandits: A sharp and geometric smoothed analysis. _International Conference on Machine Learning_ , 2020.
* Tibshirani [1996] R. Tibshirani. Regression shrinkage and selection via the lasso. _Journal of the Royal Statistical Society, Series B_ , 58:267–288, 1996.
* Tsybakov [2008] Alexandre B. Tsybakov. _Introduction to Nonparametric Estimation_. Springer Publishing Company, Incorporated, 1st edition, 2008. ISBN 0387790519, 9780387790510.
* Vershynin [2010] Roman Vershynin. Introduction to the non-asymptotic analysis of random matrices. _arXiv preprint arXiv:1011.3027_ , 2010.
* Wainwright [2019] Martin J Wainwright. _High-dimensional statistics: A non-asymptotic viewpoint_ , volume 48. Cambridge University Press, 2019.
* Wang et al. [2018] Xue Wang, Mingcheng Wei, and Tao Yao. Minimax concave penalized multi-armed bandit model with high-dimensional covariates. In _International Conference on Machine Learning_ , pages 5200–5208, 2018.
In Appendix A, we review some statistical results for sparse linear
regression. In Appendix B, we provide proofs of the main theorems as well as
the main claims. In Appendix C, we include some supporting lemmas for the sake
of completeness.
## Appendix A Sparse linear regression
We review some classical results in sparse linear regression. Consider the
following sparse linear regression model:
$y_{i}=\langle x_{i},\theta^{*}\rangle+\epsilon_{i},i=1,\ldots,n,$ (A.1)
where $\theta^{*}\in\mathbb{R}^{d}$ and $\|\theta^{*}\|_{0}=s\leq d$ and the
noise $\\{\epsilon_{i}\\}_{i=1}^{n}$ independently follows a zero-mean,
$\sigma$-sub-Gaussian distribution. Let the design matrix be
$X=(x_{1},\ldots,x_{n})^{\top}\in\mathbb{R}^{n\times d}$. Define the Lasso
estimator as follows:
$\widehat{\theta}_{n}=\mathop{\mathrm{argmin}}_{\theta}\Big{(}\frac{1}{n}\sum_{i=1}^{n}(y_{i}-\langle
x_{i},\theta\rangle)^{2}+\lambda\|\theta\|_{1}\Big{)}.$
###### Condition A.1 (Restricted eigenvalues).
Define the cone:
$\mathbb{C}(S):=\\{\Delta\in\mathbb{R}^{d}|\|\Delta_{S^{c}}\|_{1}\leq
3\|\Delta_{S}\|_{1}\\},$
where $S$ is the support set of $\theta^{*}$. Then there exists some positive
constant $\kappa$ such that the design matrix $X\in\mathbb{R}^{n\times d}$
satisfies the condition
$\frac{\|X\theta\|_{2}^{2}}{n}\geq\kappa\|\theta\|_{2}^{2},$
for all $\theta\in\mathbb{C}(S)$.
###### Condition A.2 (Column normalized).
Using $X_{j}\in\mathbb{R}^{n}$ to denote the $j$-th column of $X$, we say that
$X$ is column-normalized if for all $j=1,2,\ldots,d$,
$\frac{\|X_{j}\|_{2}}{\sqrt{n}}\leq 1.$
###### Theorem A.3.
Consider an $s$-sparse linear regression and assume the design matrix
$X\in\mathbb{R}^{n\times d}$ satisfies the RE condition (Condition A.1) and
the column normalization condition (Condition A.2). For the Lasso
estimator with regularization parameter $\lambda_{n}=4\sigma\sqrt{\log(d)/n}$,
with probability at least $1-\delta$,
* •
the estimation error under $\ell_{1}$-norm (Theorem 7.13 in Wainwright [2019])
of any optimal solution $\widehat{\theta}_{n}$ satisfies
$\big{\|}\widehat{\theta}_{n}-\theta^{*}\big{\|}_{1}\leq\frac{\sigma
s}{\kappa}\sqrt{\frac{2\log(2d/\delta)}{n}};$
* •
the mean square prediction error (Theorem 7.20 in Wainwright [2019]) of any
optimal solution $\widehat{\theta}_{n}$ satisfies
$\frac{1}{n}\sum_{i=1}^{n}\big{(}x_{i}^{\top}(\widehat{\theta}_{n}-\theta^{*})\big{)}^{2}\leq\frac{9}{\kappa}\frac{s\log(d/\delta)}{n}.$
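As a quick sanity check on the first bound, the following simulation, assuming a standard Gaussian design (which satisfies the RE condition with high probability) and unit noise, tracks the $\ell_{1}$ error against the $s\sqrt{\log(d)/n}$ rate; the constants and sample sizes are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
d, s = 200, 5
theta_star = np.zeros(d)
theta_star[:s] = 1.0
for n in [200, 800, 3200]:
    X = rng.standard_normal((n, d))               # Gaussian design
    y = X @ theta_star + rng.standard_normal(n)   # sigma = 1 noise
    lam = 4 * np.sqrt(np.log(d) / n)
    # sklearn's alpha = lam / 2 matches the (1/n)||.||^2 + lam||.||_1 objective.
    theta_hat = Lasso(alpha=lam / 2, fit_intercept=False).fit(X, y).coef_
    err = np.abs(theta_hat - theta_star).sum()
    print(n, err, err / (s * np.sqrt(np.log(d) / n)))  # ratio roughly constant
```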
## Appendix B Proofs of main theorems and claims
### B.1 Proof of Claim 3.5
We first prove the first part. By standard calculations, we have
$\begin{split}R_{\theta}(n)&=\mathbb{E}_{\theta}\Big{[}\sum_{t=1}^{n}\langle
x^{*},\theta\rangle\Big{]}-\mathbb{E}_{\theta}\Big{[}\sum_{t=1}^{n}\langle
A_{t},\theta\rangle\Big{]}\\\
&=\mathbb{E}_{\theta}\Big{[}n(s-1)\varepsilon-\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}\in\mathcal{H})\langle
A_{t},\theta\rangle-\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}\in{\mathcal{S}})\langle
A_{t},\theta\rangle\Big{]},\end{split}$
where the last equality follows from the definition of $x^{*}$ in Eq. (3.3). From
the definition of $\mathcal{H}$ in Eq. (3.2), the following holds for small
enough $\varepsilon$,
$\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}\in\mathcal{H})\langle
A_{t},\theta\rangle\leq T_{n}(\mathcal{H})(\kappa(s-1)\varepsilon-1)\leq 0,$
(B.1)
where
$T_{n}(\mathcal{H})=\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}\in\mathcal{H})$.
Since $\langle A_{t},\theta\rangle=\sum_{j=1}^{s-1}A_{tj}\varepsilon$ for
$A_{t}\in{\mathcal{S}}$, it holds that
$\begin{split}R_{\theta}(n)&\geq\mathbb{E}_{\theta}\Big{[}n(s-1)\varepsilon-\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}\in{\mathcal{S}})\sum_{j=1}^{s-1}A_{tj}\varepsilon\Big{]}\\\
&\geq\mathbb{E}_{\theta}\Big{[}\Big{(}n(s-1)\varepsilon-\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}\in{\mathcal{S}})\sum_{j=1}^{s-1}A_{tj}\varepsilon\Big{)}\operatorname{\mathds{1}}(\mathcal{D})\Big{]}\\\
&\geq\Big{(}n(s-1)\varepsilon-\frac{n(s-1)\varepsilon}{2}\Big{)}\mathbb{P}_{\theta}(\mathcal{D})\\\
&=\frac{n(s-1)\varepsilon}{2}\mathbb{P}_{\theta}(\mathcal{D}).\end{split}$
(B.2)
Second, we derive a regret lower bound for the alternative bandit
$\widetilde{\theta}$. Let $\widetilde{x}^{*}$ denote the optimal arm of bandit
$\widetilde{\theta}$. By a decomposition similar to that in Eq. (B.2),
$\begin{split}R_{\widetilde{\theta}}(n)&=\mathbb{E}_{\widetilde{\theta}}\Big{[}\sum_{t=1}^{n}\langle\widetilde{x}^{*},\widetilde{\theta}\rangle\Big{]}-\mathbb{E}_{\widetilde{\theta}}\Big{[}\sum_{t=1}^{n}\langle
A_{t},\widetilde{\theta}\rangle\Big{]}\\\
&=\mathbb{E}_{\widetilde{\theta}}\Big{[}2n(s-1)\varepsilon-\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}\in\mathcal{H})\langle
A_{t},\widetilde{\theta}\rangle-\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}\in{\mathcal{S}})\langle
A_{t},\widetilde{\theta}\rangle\Big{]}\\\
&\geq\mathbb{E}_{\widetilde{\theta}}\Big{[}2n(s-1)\varepsilon-\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}\in{\mathcal{S}})\langle
A_{t},\widetilde{\theta}\rangle\Big{]}.\end{split}$ (B.3)
where the inequality follows as in Eq. (B.1), which shows
$\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}\in\mathcal{H})\langle
A_{t},\widetilde{\theta}\rangle\leq 0$. Next, we will find an upper bound for
$\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}\in{\mathcal{S}})\langle
A_{t},\widetilde{\theta}\rangle$. From the definition of $\widetilde{\theta}$
in Eq. (3.6),
$\begin{split}\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}\in{\mathcal{S}})\langle
A_{t},\widetilde{\theta}\rangle&=\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}\in{\mathcal{S}})\langle
A_{t},\theta+2\varepsilon\widetilde{x}\rangle\\\
&=\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}\in{\mathcal{S}})\langle
A_{t},\theta\rangle+2\varepsilon\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}\in{\mathcal{S}})\langle
A_{t},\widetilde{x}\rangle\\\
&\leq\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}\in{\mathcal{S}})\langle
A_{t},\theta\rangle+2\varepsilon\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}\in{\mathcal{S}})\sum_{j\in\text{supp}(\widetilde{x})}|A_{tj}|,\end{split}$
(B.4)
where the last inequality is from the definition of $\widetilde{x}$ in Eq.
(3.5). To bound the first term, we have
$\begin{split}\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}\in{\mathcal{S}})\langle
A_{t},\theta\rangle&=\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}\in{\mathcal{S}})\sum_{j=1}^{s-1}A_{tj}\varepsilon\\\
&\leq\varepsilon\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}\in{\mathcal{S}})\sum_{j=1}^{s-1}|A_{tj}|.\end{split}$
(B.5)
If all the actions $A_{t}$ come from ${\mathcal{S}}$, which is a $(s-1)$-sparse
set, we have
$\sum_{t=1}^{n}\sum_{j=1}^{d}|A_{tj}|=(s-1)n,$
which implies
$\begin{split}&\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}\in{\mathcal{S}})\Big{(}\sum_{j=1}^{s-1}|A_{tj}|+\sum_{j\in\text{supp}(\widetilde{x})}|A_{tj}|\Big{)}\leq\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}\in{\mathcal{S}})\sum_{j=1}^{d}|A_{tj}|\leq(s-1)n,\\\
&\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}\in{\mathcal{S}})\sum_{j=1}^{s-1}|A_{tj}|\leq(s-1)n-\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}\in{\mathcal{S}})\sum_{j\in\text{supp}(\widetilde{x})}|A_{tj}|.\end{split}$
(B.6)
Combining with Eq. (B.5),
$\begin{split}\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}\in{\mathcal{S}})\langle
A_{t},\theta\rangle\leq\varepsilon\Big{(}(s-1)n-\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}\in{\mathcal{S}})\sum_{j\in\text{supp}(\widetilde{x})}|A_{tj}|\Big{)}\end{split}$
Plugging the above bound into Eq. (B.4), it holds that
$\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}\in{\mathcal{S}})\langle
A_{t},\widetilde{\theta}\rangle\leq\varepsilon(s-1)n+\varepsilon\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}\in{\mathcal{S}})\sum_{j\in\text{supp}(\widetilde{x})}|A_{tj}|.$
(B.7)
When the event $\mathcal{D}^{c}$ (the complement of $\mathcal{D}$)
happens, we have
$\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}\in{\mathcal{S}})\sum_{j=1}^{s-1}|A_{tj}|\geq\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}\in{\mathcal{S}})\sum_{j=1}^{s-1}A_{tj}\geq\frac{n(s-1)}{2}.$
Combining with Eq. (B.6), we have under event $\mathcal{D}^{c}$,
$\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}\in{\mathcal{S}})\sum_{j\in\text{supp}(\widetilde{x})}|A_{tj}|\leq\frac{n(s-1)}{2}.$
(B.8)
Putting Eqs. (B.3), (B.7), (B.8) together, it holds that
$R_{\widetilde{\theta}}(n)\geq\frac{n(s-1)\varepsilon}{2}\mathbb{P}_{\widetilde{\theta}}(\mathcal{D}^{c}).$
(B.9)
This ends the proof.
### B.2 Proof of Claim 3.6
From the divergence decomposition lemma (Lemma C.2 in the appendix), we have
$\begin{split}\text{KL}\big{(}\mathbb{P}_{\theta},\mathbb{P}_{\widetilde{\theta}}\big{)}&=\frac{1}{2}\mathbb{E}_{\theta}\Big{[}\sum_{t=1}^{n}\langle
A_{t},\theta-\widetilde{\theta}\rangle^{2}\Big{]}\\\
&=2\varepsilon^{2}\mathbb{E}_{\theta}\Big{[}\sum_{t=1}^{n}\langle
A_{t},\widetilde{x}\rangle^{2}\Big{]}.\end{split}$
To prove the claim, we use a simple argument: “the minimum is always smaller
than the average”. We decompose the following summation over the action set
${\mathcal{S}}^{\prime}$ defined in Eq. (3.4):
$\begin{split}\sum_{x\in{\mathcal{S}}^{\prime}}\sum_{t=1}^{n}\langle
A_{t},x\rangle^{2}&=\sum_{x\in{\mathcal{S}}^{\prime}}\sum_{t=1}^{n}\Big{(}\sum_{j=1}^{d}x_{j}A_{tj}\Big{)}^{2}\\\
&=\sum_{x\in{\mathcal{S}}^{\prime}}\sum_{t=1}^{n}\Big{(}\sum_{j=1}^{d}\big{(}x_{j}A_{tj}\big{)}^{2}+2\sum_{i<j}x_{i}x_{j}A_{ti}A_{tj}\Big{)}.\end{split}$
We bound the above two terms separately.
1. 1.
To bound the first term, we observe that
$\begin{split}&\sum_{x\in{\mathcal{S}}^{\prime}}\sum_{t=1}^{n}\sum_{j=1}^{d}\big{(}x_{j}A_{tj}\big{)}^{2}\\\
=&\sum_{x\in{\mathcal{S}}^{\prime}}\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}\in{\mathcal{S}})\sum_{j=1}^{d}|x_{j}A_{tj}|+\sum_{x\in{\mathcal{S}}^{\prime}}\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}\in\mathcal{H})\sum_{j=1}^{d}(x_{j}A_{tj})^{2},\end{split}$
(B.10)
since both $x_{j}$ and $A_{tj}$ can only take values in $\\{-1,0,+1\\}$ if $A_{t}\in{\mathcal{S}}$.
If all the $A_{t}$ come from ${\mathcal{S}}$, we have
$\sum_{t=1}^{n}\sum_{j=1}^{d}|A_{tj}|=(s-1)n.$
This implies
$\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}\in{\mathcal{S}})\sum_{j=1}^{d}|A_{tj}|\leq(s-1)n.$
Since $x\in{\mathcal{S}}^{\prime}$ is $(s-1)$-sparse, we have
$\sum_{j=1}^{d}|x_{j}A_{tj}|\leq s-1$. Therefore,
$\begin{split}\sum_{x\in{\mathcal{S}}^{\prime}}\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}\in{\mathcal{S}})\sum_{j=1}^{d}|x_{j}A_{tj}|\leq(s-1)n\binom{d-s-1}{s-2}.\end{split}$
(B.11)
In addition, since each action in ${\mathcal{S}}^{\prime}$ is $(s-1)$-sparse and
has $0$ in its last coordinate, we have
$\sum_{x\in{\mathcal{S}}^{\prime}}\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}\in\mathcal{H})\sum_{j=1}^{d}(x_{j}A_{tj})^{2}\leq\kappa^{2}|{\mathcal{S}}^{\prime}|T_{n}(\mathcal{H})(s-1).$
(B.12)
Putting Eqs. (B.10), (B.11) and (B.12) together,
$\sum_{x\in{\mathcal{S}}^{\prime}}\sum_{t=1}^{n}\sum_{j=1}^{d}\big{(}x_{j}A_{tj}\big{)}^{2}\leq(s-1)n\binom{d-s-1}{s-2}+\kappa^{2}|{\mathcal{S}}^{\prime}|T_{n}(\mathcal{H})(s-1).$
(B.13)
2. 2.
To bound the second term, we observe
$\sum_{x\in{\mathcal{S}}^{\prime}}\sum_{t=1}^{n}2\sum_{i<j}x_{i}x_{j}A_{ti}A_{tj}=2\sum_{t=1}^{n}\sum_{i<j}\sum_{x\in{\mathcal{S}}^{\prime}}x_{i}x_{j}A_{ti}A_{tj}.$
From the definition of ${\mathcal{S}}^{\prime}$, the product $x_{i}x_{j}$ can
only take values in $\\{+1,-1,0\\}$, with $+1$ and $-1$ appearing equally often
as $x$ ranges over ${\mathcal{S}}^{\prime}$. This symmetry implies
$\sum_{x\in{\mathcal{S}}^{\prime}}x_{i}x_{j}A_{ti}A_{tj}=0,$
which implies
$\sum_{x\in{\mathcal{S}}^{\prime}}\sum_{t=1}^{n}2\sum_{i<j}x_{i}x_{j}A_{ti}A_{tj}=0.$
(B.14)
Combining Eqs. (B.13) and (B.14) together, we have
$\begin{split}\sum_{x\in{\mathcal{S}}^{\prime}}\sum_{t=1}^{n}\langle
A_{t},x\rangle^{2}&=\sum_{x\in{\mathcal{S}}^{\prime}}\sum_{t=1}^{n}\sum_{j=1}^{d}\big{(}x_{j}A_{tj}\big{)}^{2}\\\
&\leq(s-1)n\binom{d-s-1}{s-2}+\kappa^{2}|{\mathcal{S}}^{\prime}|T_{n}(\mathcal{H})(s-1).\end{split}$
Therefore, we use the fact that the minimum of $n$ points is always smaller
than its average,
$\begin{split}\mathbb{E}_{\theta}\Big{[}\sum_{t=1}^{n}\langle
A_{t},\widetilde{x}\rangle^{2}\Big{]}&=\min_{x\in{\mathcal{S}}^{\prime}}\mathbb{E}_{\theta}\Big{[}\sum_{t=1}^{n}\langle
A_{t},x\rangle^{2}\Big{]}\\\
&\leq\frac{1}{|{\mathcal{S}}^{\prime}|}\sum_{x\in{\mathcal{S}}^{\prime}}\mathbb{E}_{\theta}\Big{[}\sum_{t=1}^{n}\langle
A_{t},x\rangle^{2}\Big{]}\\\
&=\mathbb{E}_{\theta}\Big{[}\frac{1}{|{\mathcal{S}}^{\prime}|}\sum_{x\in{\mathcal{S}}^{\prime}}\sum_{t=1}^{n}\langle
A_{t},x\rangle^{2}\Big{]}\\\
&\leq\frac{(s-1)n\binom{d-s-1}{s-2}+\kappa^{2}\mathbb{E}_{\theta}[T_{n}(\mathcal{H})](s-1)\binom{d-s}{s-1}}{\binom{d-s}{s-1}}\\\
&\leq\frac{(s-1)^{2}n}{d}+\kappa^{2}\mathbb{E}_{\theta}[T_{n}(\mathcal{H})](s-1).\end{split}$
This ends the proof of the claim of Eq. (3.7).
### B.3 Proof of Theorem 4.2: regret upper bound
Step 1: regret decomposition. Suppose $R_{\max}$ is an upper bound on the
maximum expected reward, so that $\max_{x\in\mathcal{A}}\langle x,\theta\rangle\leq
R_{\max}$. We decompose the regret of ESTC as follows:
$\begin{split}R_{\theta}(n)&=\mathbb{E}_{\theta}\Big{[}\sum_{t=1}^{n}\big{\langle}\theta,x^{*}-A_{t}\big{\rangle}\Big{]}\\\
&=\mathbb{E}_{\theta}\Big{[}\sum_{t=1}^{n_{1}}\big{\langle}\theta,x^{*}-A_{t}\big{\rangle}+\sum_{t=n_{1}+1}^{n}\big{\langle}\theta,x^{*}-A_{t}\big{\rangle}\Big{]}\\\
&\leq\mathbb{E}_{\theta}\Big{[}2n_{1}R_{\max}+\sum_{t=n_{1}+1}^{n}\big{\langle}\theta-\widehat{\theta}_{n_{1}},x^{*}-A_{t}\big{\rangle}+\sum_{t=n_{1}+1}^{n}\big{\langle}\widehat{\theta}_{n_{1}},x^{*}-A_{t}\big{\rangle}\Big{]}.\end{split}$
Since we take greedy actions when $t\geq n_{1}+1$, it holds that $\langle
x^{*},\widehat{\theta}_{n_{1}}\rangle\leq\langle
A_{t},\widehat{\theta}_{n_{1}}\rangle$. This implies
$\begin{split}R_{\theta}(n)&\leq\mathbb{E}_{\theta}\Big{[}2n_{1}R_{\max}+\sum_{t=n_{1}+1}^{n}\big{\langle}\theta-\widehat{\theta}_{n_{1}},x^{*}-A_{t}\big{\rangle}\Big{]}\\\
&\leq\mathbb{E}_{\theta}\Big{[}2n_{1}R_{\max}+\sum_{t=n_{1}+1}^{n}\big{\|}\theta-\widehat{\theta}_{n_{1}}\big{\|}_{1}\big{\|}x^{*}-A_{t}\big{\|}_{\infty}\Big{]}.\end{split}$
(B.15)
Step 2: fast sparse learning. It remains to bound the estimation error of
$\widehat{\theta}_{n_{1}}-\theta$ in $\ell_{1}$-norm. Denote the design matrix
$X=(A_{1},\ldots,A_{n_{1}})^{\top}\in\mathbb{R}^{n_{1}\times d}$, where
$A_{1},\ldots,A_{n_{1}}$ are independently drawn according to sampling
distribution $\widehat{\mu}$. To achieve a fast rate, one needs to ensure that $X$
satisfies the restricted eigenvalue condition (Condition A.1 in the appendix).
Denote the uncentered empirical covariance matrix
$\widehat{\Sigma}=X^{\top}X/n_{1}$. It is easy to see
$\Sigma=\mathbb{E}(\widehat{\Sigma})=\int_{x\in\mathcal{A}}xx^{\top}d\widehat{\mu}(x),$
where $\widehat{\mu}$ is the solution of optimization problem Eq. (4.1). To
lighten the notation, we write $C_{\min}=C_{\min}(\mathcal{A})$. Since action
set $\mathcal{A}$ spans $\mathbb{R}^{d}$, we know that
$\sigma_{\min}(\Sigma)=C_{\min}>0$. We also denote
$\sigma_{\max}(\Sigma)=C_{\max}$ and define the notion of restricted
eigenvalue as follows.
###### Definition B.1.
Given a symmetric matrix $H\in\mathbb{R}^{d\times d}$ and integer $s\geq 1$,
and $L>0$, the restricted eigenvalue of $H$ is defined as
$\phi^{2}(H,s,L):=\min_{{\mathcal{S}}\subset[d],|{\mathcal{S}}|\leq
s}\min_{\theta\in\mathbb{R}^{d}}\Big{\\{}\frac{\langle\theta,H\theta\rangle}{\|\theta_{{\mathcal{S}}}\|_{1}^{2}}:\theta\in\mathbb{R}^{d},\|\theta_{{\mathcal{S}}^{c}}\|_{1}\leq
L\|\theta_{{\mathcal{S}}}\|_{1}\Big{\\}}.$
It is easy to see $X\Sigma^{-1/2}$ has independent sub-Gaussian rows with sub-
Gaussian norm $\|\Sigma^{-1/2}A_{1}\|_{\psi_{2}}=C_{\min}^{-1/2}$ (see
Vershynin [2010] for a precise definition of sub-Gaussian rows and sub-
Gaussian norms). According to Theorem 10 in Javanmard and Montanari [2014]
(essentially from Theorem 6 in Rudelson and Zhou [2013]), if the population
covariance matrix satisfies the restricted eigenvalue condition, the empirical
covariance matrix satisfies it as well with high probability. Specifically,
suppose the number of rounds in the exploration phase satisfies $n_{1}\geq
4c_{*}mC_{\min}^{-2}\log(ed/m)$ for some $c_{*}\leq 2000$ and
$m=10^{4}sC^{2}_{\max}/\phi^{2}(\Sigma,s,9)$. Then the following holds:
$\mathbb{P}\Big{(}\phi(\widehat{\Sigma},s,3)\geq\frac{1}{2}\phi(\Sigma,s,9)\Big{)}\geq
1-2\exp(-n_{1}/(4c_{*}C_{\min}^{-1/2})).$
Noticing that $\phi(\Sigma,s,9)\geq C_{\min}^{1/2}$, it holds that
$\mathbb{P}\Big{(}\phi^{2}(\widehat{\Sigma},s,3)\geq\frac{C_{\min}}{2}\Big{)}\geq
1-2\exp(-c_{1}n_{1}),$
where $c_{1}=1/(4c_{*}C_{\min}^{-1/2})$. This guarantees $\widehat{\Sigma}$
satisfies Condition A.1 in the appendix with $\kappa=C_{\min}/2$. It is easy
to see Condition A.2 holds automatically. Applying Theorem A.3 in the appendix
of the Lasso error bound, it implies:
$\big{\|}\widehat{\theta}_{n_{1}}-\theta^{*}\big{\|}_{1}\leq\frac{2}{C_{\min}}\sqrt{\frac{2s^{2}(\log(2d)+\log(n_{1}))}{n_{1}}}.$
with probability at least $1-\exp(-n_{1})$.
Step 3: optimize the length of exploration. Define an event $\mathcal{E}$ as
follows:
$\mathcal{E}=\Big{\\{}\phi(\widehat{\Sigma},s,3)\geq\frac{C_{\min}^{1/2}}{2},\big{\|}\widehat{\theta}_{n_{1}}-\theta^{*}\big{\|}_{1}\leq\frac{2}{C_{\min}}\sqrt{\frac{2s^{2}(\log(2d)+\log(n_{1}))}{n_{1}}}\Big{\\}}.$
We know that $\mathbb{P}(\mathcal{E})\geq 1-3\exp(-c_{1}n_{1})$. Note that
$\|x^{*}-A_{t}\|_{\infty}\leq 2$. According to Eq. (B.15), we have
$\begin{split}R_{\theta}(n)&\leq\mathbb{E}_{\theta}\Big{[}\Big{(}2n_{1}R_{\max}+\sum_{t=n_{1}+1}^{n}\big{\|}\theta-\widehat{\theta}_{n_{1}}\big{\|}_{1}\big{\|}x^{*}-A_{t}\big{\|}_{\infty}\Big{)}\operatorname{\mathds{1}}(\mathcal{E})\Big{]}+nR_{\max}\mathbb{P}(\mathcal{E}^{c})\\\
&\leq
2n_{1}R_{\max}+(n-n_{1})\frac{4}{C_{\min}}\sqrt{\frac{2s^{2}(\log(2d)+\log(n_{1}))}{n_{1}}}+3nR_{\max}\exp(-c_{1}n_{1})\,.\end{split}$
By choosing
$n_{1}=n^{2/3}(s^{2}\log(2d))^{1/3}R_{\max}^{-2/3}(2/C_{\min}^{2})^{1/3}$, we
have
$R_{\theta}(n)\leq(sn)^{2/3}(\log(2d))^{1/3}R_{\max}^{1/3}(\frac{2}{C_{\min}^{2}})^{1/3}+3nR_{\max}\exp(-c_{1}n_{1}).$
This ends the proof.
### B.4 Proof of Theorem 5.2: improved regret upper bound
We start from a simple regret decomposition based on feature selection step
and restricted linear bandits step:
$\begin{split}R_{\theta}(n)&=\mathbb{E}_{\theta}\Big{[}\sum_{t=1}^{n}\big{\langle}\theta,x^{*}-A_{t}\big{\rangle}\Big{]}\\\
&=\mathbb{E}_{\theta}\Big{[}2n_{2}R_{\max}+\sum_{t=n_{2}+1}^{n}\big{\langle}\theta,x^{*}-A_{t}\big{\rangle}\Big{]}.\end{split}$
Step 1: sparsity property of Lasso. We first prove that the Lasso solution is
sufficiently sparse. The following proof is mainly from Bickel et al. [2009]
with minor changes. To be self-contained, we reproduce it here. Recall that
the Lasso estimator in the feature selection stage (we abbreviate $\widehat{\theta}\equiv\widehat{\theta}_{n_{2}}$ throughout this proof) is defined as
$\widehat{\theta}=\mathop{\mathrm{argmin}}_{\theta\in\mathbb{R}^{d}}\Big{(}\frac{1}{n_{2}}\sum_{t=1}^{n_{2}}\big{(}Y_{t}-\langle
A_{t},\theta\rangle\big{)}^{2}+\lambda_{2}\|\theta\|_{1}\Big{)}.$
Define random variables
$V_{j}=\frac{1}{n_{2}}\sum_{t=1}^{n_{2}}A_{tj}\eta_{t}$ for $j\in[d]$ and
$\eta_{t}$ is the noise. Since $\|A_{t}\|_{\infty}\leq 1$, standard
Hoeffding’s inequality (Proposition 5.10 in Vershynin [2010]) implies
$\mathbb{P}\Big{(}\big{|}\sum_{t=1}^{n_{2}}A_{tj}\eta_{t}\big{|}\geq\varepsilon\Big{)}\leq\exp\Big{(}-\frac{\varepsilon^{2}}{2n_{2}}\Big{)}.$
Define an event $\mathcal{E}$ as
$\mathcal{E}=\bigcap_{j=1}^{d}\Big{\\{}|V_{j}|\leq\sqrt{\frac{4\log(d)}{n_{2}}}\Big{\\}}.$
Using a union bound, we have
$\mathbb{P}(\mathcal{E}^{c})\leq 1/d.$
From the Karush–Kuhn–Tucker (KKT) condition, the solution $\widehat{\theta}$
satisfies
$\begin{split}&\frac{1}{n_{2}}\sum_{t=1}^{n_{2}}A_{tj}(Y_{t}-A_{t}^{\top}\widehat{\theta})=\lambda_{2}\text{sign}(\widehat{\theta}_{j}),\
\text{if}\ \widehat{\theta}_{j}\neq 0;\\\
&\Big{|}\frac{1}{n_{2}}\sum_{t=1}^{n_{2}}A_{tj}(Y_{t}-A_{t}^{\top}\widehat{\theta})\Big{|}\leq\lambda_{2},\
\text{if}\ \widehat{\theta}_{j}=0.\end{split}$ (B.16)
Therefore,
$\frac{1}{n_{2}}\sum_{t=1}^{n_{2}}A_{tj}(A_{t}^{\top}\theta-
A_{t}^{\top}\widehat{\theta})=\frac{1}{n_{2}}\sum_{t=1}^{n_{2}}A_{tj}(Y_{t}-A_{t}^{\top}\widehat{\theta})-\frac{1}{n_{2}}\sum_{t=1}^{n_{2}}A_{tj}\eta_{t}\,.$
Since $\lambda_{2}=4\sqrt{\log(d)/n_{2}}$, under event $\mathcal{E}$, we have
$\Big{|}\frac{1}{n_{2}}\sum_{t=1}^{n_{2}}A_{tj}(A_{t}^{\top}\theta-
A_{t}^{\top}\widehat{\theta})\Big{|}\geq\lambda_{2}/2,\ \text{if}\
\widehat{\theta}_{j}\neq 0.$
Consequently,
$\begin{split}\frac{1}{n_{2}^{2}}\sum_{j=1}^{d}\Big{(}\sum_{t=1}^{n_{2}}A_{tj}(A_{t}^{\top}\theta-
A_{t}^{\top}\widehat{\theta})\Big{)}^{2}&\geq\sum_{j:\widehat{\theta}_{j}\neq
0}\Big{(}\frac{1}{n_{2}}\sum_{t=1}^{n_{2}}A_{tj}(A_{t}^{\top}\theta-
A_{t}^{\top}\widehat{\theta})\Big{)}^{2}\\\
&\geq|\text{supp}(\widehat{\theta}_{n_{2}})|\lambda_{2}^{2}/4.\end{split}$
On the other hand, let
$X=(A_{1},\ldots,A_{n_{2}})^{\top}\in\mathbb{R}^{n_{2}\times d}$ and
$\phi_{\max}=\sigma_{\max}(XX^{\top}/n_{2})$. Then we have
$\begin{split}&\frac{1}{n_{2}^{2}}\sum_{j=1}^{d}\Big{(}\sum_{t=1}^{n_{2}}A_{tj}\Big{(}A_{t}^{\top}\theta-
A_{t}^{\top}\widehat{\theta}\Big{)}\Big{)}^{2}\\\
=&\frac{1}{n_{2}^{2}}\Big{(}X\theta-X\widehat{\theta}\Big{)}^{\top}XX^{\top}\Big{(}X\theta-X\widehat{\theta}\Big{)}\leq\phi_{\max}\frac{1}{n_{2}}\|X\widehat{\theta}-X\theta\|_{2}^{2}.\end{split}$
Therefore, with probability at least $1-1/d$,
$|\text{supp}(\widehat{\theta}_{n_{2}})|\leq\frac{4\phi_{\max}}{\lambda_{2}^{2}n_{2}}\|X\widehat{\theta}-X\theta\|_{2}^{2}.$
(B.17)
To lighten the notation, we write $C_{\min}=C_{\min}(\mathcal{A})$. As proven
in Section B.3, $X^{\top}X/n_{2}$ satisfies Condition A.1 with
$\kappa=C_{\min}/2$ when $n_{2}\gtrsim s\log(d)$. Applying the in-sample
prediction error bound in Theorem A.3, we have with probability at least
$1-1/d$,
$\frac{1}{n_{2}}\big{\|}X\widehat{\theta}-X\theta\big{\|}_{2}^{2}\leq\frac{9}{C_{\min}}\frac{s\log(d)}{n_{2}}.$
(B.18)
Putting Eqs. (B.17) and (B.18) together, we have, with probability at least
$1-2/d$,
$|\text{supp}(\widehat{\theta})|\leq\frac{9\phi_{\max}s}{C_{\min}}.$ (B.19)
Step 2: variable screening property of Lasso. Under Condition 5.1 and using
Theorem A.3, it holds that with probability at least $1-1/d$,
$\min_{j\in\text{supp}(\theta)}|\theta_{j}|>\big{\|}\widehat{\theta}-\theta\big{\|}_{2}\geq\big{\|}\widehat{\theta}-\theta\big{\|}_{\infty}.$
If there is a $j\in\text{supp}(\theta)$ but
$j\notin\text{supp}(\widehat{\theta})$, we have
$|\widehat{\theta}_{j}-\theta_{j}|=|\theta_{j}|>\big{\|}\widehat{\theta}-\theta\big{\|}_{\infty}.$
On the other hand,
$|\widehat{\theta}_{j}-\theta_{j}|\leq\big{\|}\widehat{\theta}-\theta\big{\|}_{\infty},$
which leads to a contradiction. We conclude that
$\text{supp}(\widehat{\theta})\supseteq\text{supp}(\theta)$. We reproduce
Theorem 22.1 in Lattimore and Szepesvári [2020] for the regret bound of the
phased elimination algorithm for stochastic linear bandits with finitely many arms.
###### Theorem B.2.
The $n$-step regret of the phased elimination algorithm satisfies
$R_{n}\leq C\sqrt{nd\log(Kn)},$
for an appropriately chosen universal constant $C>0$.
Together with Eq. (B.19), the regret of running the phased elimination
algorithm (Chapter 22 in Lattimore and Szepesvári [2020]) on
$\text{supp}(\widehat{\theta})$ for the remaining $n-n_{2}$ rounds can be upper
bounded by
$\mathbb{E}_{\theta}\Big{[}\sum_{t=n_{2}+1}^{n}\big{\langle}\theta,x^{*}-A_{t}\big{\rangle}\Big{]}\leq
C\sqrt{\frac{9\phi_{\max}}{C_{\min}}s(n-n_{2})\log(K(n-n_{2}))}.$
This ends the proof.
## Appendix C Supporting lemmas
###### Lemma C.1 (Bretagnolle–Huber inequality).
Let $\mathbb{P}$ and $\widetilde{\mathbb{P}}$ be two probability measures on
the same measurable space $(\Omega,\mathcal{F})$. Then for any event
$\mathcal{D}\in\mathcal{F}$,
$\mathbb{P}(\mathcal{D})+\widetilde{\mathbb{P}}(\mathcal{D}^{c})\geq\frac{1}{2}\exp\left(-\text{KL}(\mathbb{P},\widetilde{\mathbb{P}})\right)\,,$
(C.1)
where $\mathcal{D}^{c}$ is the complement event of $\mathcal{D}$
($\mathcal{D}^{c}=\Omega\setminus\mathcal{D}$) and
$\text{KL}(\mathbb{P},\widetilde{\mathbb{P}})$ is the KL divergence between
$\mathbb{P}$ and $\widetilde{\mathbb{P}}$, which is defined as $+\infty$, if
$\mathbb{P}$ is not absolutely continuous with respect to
$\widetilde{\mathbb{P}}$, and is
$\int_{\Omega}d\mathbb{P}(\omega)\log\frac{d\mathbb{P}}{d\widetilde{\mathbb{P}}}(\omega)$
otherwise.
The proof can be found in the book of Tsybakov [2008]. When
$\text{KL}(\mathbb{P},\widetilde{\mathbb{P}})$ is small, we may expect the
probability measure $\mathbb{P}$ to be close to the probability measure
$\widetilde{\mathbb{P}}$. Note that
$\mathbb{P}(\mathcal{D})+\mathbb{P}(\mathcal{D}^{c})=1$. If
$\widetilde{\mathbb{P}}$ is close to $\mathbb{P}$, we may expect
$\mathbb{P}(\mathcal{D})+\widetilde{\mathbb{P}}(\mathcal{D}^{c})$ to be large.
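As a concrete illustration, the following snippet checks the inequality for two unit-variance Gaussians and the event $\mathcal{D}=\{X\leq c\}$; the parameter values are arbitrary.

```python
import numpy as np
from scipy.stats import norm

mu1, mu2, c = 0.0, 1.0, 0.5
kl = (mu1 - mu2) ** 2 / 2                  # KL(N(mu1,1), N(mu2,1))
lhs = norm.cdf(c, loc=mu1) + 1 - norm.cdf(c, loc=mu2)
rhs = 0.5 * np.exp(-kl)
print(lhs, rhs, lhs >= rhs)                # the lemma guarantees lhs >= rhs
```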
###### Lemma C.2 (Divergence decomposition).
Let $\mathbb{P}$ and $\widetilde{\mathbb{P}}$ be two probability measures on
the sequence $(A_{1},Y_{1},\ldots,A_{n},Y_{n})$ for a fixed bandit policy
$\pi$ interacting with a linear contextual bandit with standard Gaussian noise
and parameters $\theta$ and $\widetilde{\theta}$ respectively. Then the KL
divergence of $\mathbb{P}$ and $\widetilde{\mathbb{P}}$ can be computed
exactly and is given by
$\text{KL}(\mathbb{P},\widetilde{\mathbb{P}})=\frac{1}{2}\sum_{x\in\mathcal{A}}\mathbb{E}[T_{x}(n)]\,\langle
x,\theta-\widetilde{\theta}\rangle^{2}\,,$ (C.2)
where $\mathbb{E}$ is the expectation operator induced by $\mathbb{P}$ and $T_{x}(n)=\sum_{t=1}^{n}\operatorname{\mathds{1}}(A_{t}=x)$ denotes the number of times action $x$ is chosen during the $n$ rounds.
This lemma appeared as Lemma 15.1 in the book of Lattimore and Szepesvári
[2020], where the reader can also find the proof.
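To illustrate the lemma, the following Monte Carlo snippet checks Eq. (C.2) in the simplest case of a policy that ignores the rewards and samples actions i.i.d. from a fixed distribution $\mu$, so that $\mathbb{E}[T_{x}(n)]=n\mu(x)$; the instance is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # three actions in R^2
theta, theta_t = np.array([0.5, 0.0]), np.array([0.3, 0.2])
mu, n, trials = np.array([0.5, 0.3, 0.2]), 50, 20_000

# Right-hand side of Eq. (C.2): E[T_x(n)] = n * mu(x) for this policy.
exact = 0.5 * n * np.sum(mu * (A @ (theta - theta_t)) ** 2)

# Monte Carlo estimate of KL = E_P[log dP/dQ] over trajectories under theta.
est = 0.0
for _ in range(trials):
    idx = rng.choice(len(A), size=n, p=mu)
    m, m_t = A[idx] @ theta, A[idx] @ theta_t
    y = m + rng.standard_normal(n)                    # standard Gaussian noise
    est += np.sum((y - m_t) ** 2 - (y - m) ** 2) / 2  # per-step log-ratio
print(exact, est / trials)                            # should agree closely
```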
|
# Frozen stars: Black hole mimickers sourced by a string fluid
Ram Brustein(1), A.J.M. Medved(2,3)
${}^{\textrm{\normalsize(1)\ Department of Physics, Ben-Gurion University,
Beer-Sheva 84105, Israel}}$ ${}^{\textrm{\normalsize(2)\ Department of Physics
\& Electronics, Rhodes University, Grahamstown 6140, South Africa}}$
${}^{\textrm{\normalsize(3)\ National Institute for Theoretical Physics
(NITheP), Western Cape 7602, South Africa}}$<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
The frozen star is a non-singular, ultracompact object that, to an external
observer, looks exactly like a Schwarzschild black hole, but with a different
interior geometry and matter composition. The frozen star needs to be sourced
by an extremely anisotropic fluid, for which the sum of the radial pressure
and energy density is either vanishing or perturbatively small. Here, we show
that this matter can be identified with the string fluid resulting from the
decay of an unstable $D$-brane or a brane-antibrane system at the end of open-
string tachyon condensation. The string fluid corresponds to flux tubes
emanating from the center and ending at the Schwarzschild radius of the star.
The effective Lagrangian for this fluid can be recast into a Born-Infeld form.
When the fluid Lagrangian is coupled to that of Einstein’s Gravity, the
static, spherically symmetric solutions of the equations of motion are shown
to be the same as those describing the frozen star model. Frozen stars can
therefore be viewed as gravitationally back-reacted BIons. The Born-Infeld
Lagrangian provides a complete set of equations that describe the dynamics of
the frozen star in a generic state, which is not necessarily static nor
spherically symmetric. Additionally, this description provides a new physical
perspective on the structure of the frozen star in terms of the corresponding
electric fields and charges. The electric field is sourced by a point-like
charge at the center of the star, while its outer layer is equal and
oppositely charged. The electric force between the charges is offset because
the mass of the star is fixed.
## 1 Introduction
The frozen star is a type of black hole (BH) mimicker: a static, spherically
symmetric solution of Einstein’s equations that is a regular and horizonless
alternative to the singular Schwarzschild solution [1, 2, 3, 4, 5].
Furthermore, the frozen star is able to reproduce all of the standard
thermodynamic properties of a Schwarzschild BH of the same mass [6]. It is
also possible to incorporate rotation into the frozen star model [7],
resulting in a regular and horizonless mimicker of the Kerr solution.
The motivation for the interior metric (1) follows from the initial impetus
for the model itself. As discussed at length in [1] (also see [2]), this
solution is meant to be an effective classical description of a highly quantum
state for the object’s interior that is known as the polymer model [8, 9]. In
this picture, the BH mimicker consists of an extremely hot collection of long,
closed, interacting, fundamental strings. One can view the polymer model as
the microscopic description of the frozen star. As the story unfolds, this
string picture will come around full circle, as we arrive at yet another
description of the interior that is motivated by string theory, albeit
differently from the polymer model.
We will show that the frozen star is sourced by a fluid of cold strings
resulting from the decay of an unstable $D$-brane or a brane–antibrane system
at the end of open-string tachyon condensation [10, 11, 12, 13, 14]. Gibbons,
Hori and Yi (GHY) [15] (also see [16]) reformulated Sen’s effective action
describing the state of unstable $D$-branes after the process of tachyon
condensation. Many more references to discussions on this process can be found
in [17]. The reformulated Lagrangian is of a specific Born–Infeld form which
describes a fluid of rigid electric-flux tubes. In [16], the possible bending
and stretching of the flux tubes were also considered. As was duly noted in
[15], a similar form of Lagrangian with a two-form field first appeared in
[18]. After that, the same Lagrangian was proposed in the context of the
cloud-of-strings model in [19] and the string-dust model in [20] and was
discussed in a cosmological context as a “hedgehog compactification” in [21,
22].
When gravity is neglected, the Born–Infeld theory of $Dp$-branes has been
shown to give rise to spherically symmetric, static, solitonic solutions of
finite energy that are known as BIons [23]. (See [24] for a later review.)
BIons were shown to have a point-like source in their core with strings, or
flux tubes, emanating from the core and going all the way to infinity. From
this perspective, frozen stars can be viewed as a specific form of
gravitationally back-reacted BIons whose energy and spatial extent are both
finite.
The Born–Infeld Lagrangian completes the Einstein equations and describes the
dynamics of the frozen star in a generic state, which is not necessarily
static nor spherically symmetric. We can thus show that the frozen star
solution is a consistent and complete model which describes an ultracompact
object whose equilibrium state is practically indistinguishable from that of a
Schwarzschild BH.
Although beyond the scope of the current investigation, the framework that is
developed here should also allow us to study extensions of our model to
rotating frozen stars [7] and to the “defrosted star” model, which allows for
deviations away from $\;\rho+p=0\;$ [5]. The latter is a necessary step for
the frozen star interior to support dynamical modes because of the
ultrastability of the undeformed model [1, 3, 5]. The current framework should
eventually enable us to study extensions of our model to cases in which the
departures from equilibrium physics are macroscopic — the importance of which
has been stressed in [25]. Out-of-equilibrium physics could prove to be
important in describing the dynamics of astrophysical BH mergers, which would
help in distinguishing the frozen star from the Schwarzschild solution, as
well as from other BH mimickers.
## 2 The frozen star
The simplest form of frozen star metric, which is the one that will be
considered in this paper, is as follows:
$ds^{2}\;=\;-\varepsilon^{2}dt^{2}+\frac{1}{\varepsilon^{2}}dr^{2}+r^{2}(d\theta^{2}+\sin^{2}\theta
d\phi^{2})\;,$ (1)
where $\varepsilon^{2}$ is a small, constant and dimensionless parameter.
(The same parameter was referred to as $\epsilon$ or $\varepsilon$ in our
earlier papers on this topic.) A recent paper that used data from the Event
Horizon Telescope constrained the parameter to be extremely small,
$\;\varepsilon^{2}\lesssim 10^{-22}\;$ [26]. The outermost surface of the star
is pushed out slightly from the location of the would-be horizon, $\;R\sim
2MG(1+\varepsilon^{2})\;$, where $R$ is the star’s radius and $M$ is its mass.
Additionally, just like the horizon-like outer surface, each radial slice of
the interior is a surface of exponentially large but finite redshift and thus
can be viewed, approximately, as a marginally trapped surface.
The stress–energy tensor $T^{a}_{\;\;b}$ that is needed to source this
simplest frozen star geometry is distinguished by having a radial component of
pressure, $\;p\equiv p_{r}\;$, that takes on the most negative value that is
allowed by causality: $\;p=-\rho\;$, where $\rho$ is the energy density.
Meanwhile, the transverse components of pressure $p_{\perp}\;$ are vanishing,
$\displaystyle\rho$ $\displaystyle=$
$\displaystyle-T^{t}_{\;\;t}\;=\;-\frac{1}{8\pi
G}G^{t}_{\;\;t}\;=\;\frac{1}{8\pi G}\frac{1-\varepsilon^{2}}{r^{2}}\;,$ (2)
$\displaystyle p$ $\displaystyle=$ $\displaystyle
T^{r}_{\;\;r}\;=\;\frac{1}{8\pi G}G^{r}_{\;\;r}\;=\;-\frac{1}{8\pi
G}\frac{1-\varepsilon^{2}}{r^{2}}\;,$ (3) $\displaystyle p_{\perp}$
$\displaystyle=$ $\displaystyle
T^{\theta}_{\;\;\theta}\;=\;T^{\phi}_{\;\;\phi}\;=\;\frac{1}{8\pi
G}G^{\theta}_{\;\;\theta}\;=\;0\;.$ (4)
It follows that
$2GM\;=\;2G\int^{R}dr\;4\pi r^{2}\rho\;=\;R(1-\varepsilon^{2})\;.$ (5)
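As a quick cross-check of Eqs. (2)-(4) (and hence Eq. (5)), the following sympy sketch computes the mixed Einstein tensor of the metric (1) directly from the Christoffel symbols; it is a verification aid of ours, not part of the original analysis.

```python
import sympy as sp

t, r, th, ph = sp.symbols('t r theta phi')
eps, G = sp.symbols('varepsilon G', positive=True)
x = [t, r, th, ph]
g = sp.diag(-eps**2, 1/eps**2, r**2, r**2 * sp.sin(th)**2)  # metric (1)
ginv = g.inv()
N = 4

# Christoffel symbols: Gamma^a_{bc} = (1/2) g^{ad} (g_{db,c} + g_{dc,b} - g_{bc,d})
Gam = [[[sp.simplify(sum(ginv[a, d] * (sp.diff(g[d, b], x[c])
                                       + sp.diff(g[d, c], x[b])
                                       - sp.diff(g[b, c], x[d]))
                         for d in range(N)) / 2)
         for c in range(N)] for b in range(N)] for a in range(N)]

# Ricci tensor: R_{bc} = Gamma^a_{bc,a} - Gamma^a_{ba,c}
#                        + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{cd} Gamma^d_{ba}
def ricci(b, c):
    e = sum(sp.diff(Gam[a][b][c], x[a]) - sp.diff(Gam[a][b][a], x[c])
            + sum(Gam[a][a][d] * Gam[d][b][c] - Gam[a][c][d] * Gam[d][b][a]
                  for d in range(N))
            for a in range(N))
    return sp.simplify(e)

Ric = sp.Matrix(N, N, ricci)
Rs = sp.simplify(sum(ginv[a, b] * Ric[a, b] for a in range(N) for b in range(N)))
G_mixed = (ginv * Ric - sp.Rational(1, 2) * Rs * sp.eye(N)).applyfunc(sp.simplify)

rho = -G_mixed[0, 0] / (8 * sp.pi * G)    # expect  (1 - eps^2)/(8 pi G r^2)
p_rad = G_mixed[1, 1] / (8 * sp.pi * G)   # expect -(1 - eps^2)/(8 pi G r^2)
p_perp = G_mixed[2, 2] / (8 * sp.pi * G)  # expect 0
print(sp.simplify(rho), sp.simplify(p_rad), sp.simplify(p_perp))
```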
The frozen star geometry can be viewed as a spherically symmetric collection
of straight, radially pointing, rigid strings (i.e., a “hedgehog”), each with
a constant tension of $\frac{1}{8\pi G}$ [4]. The geometry is mildly singular
near the center of the star, as are $\rho$ and $p$, so that a very small
concentric sphere must be regularized to ensure that these densities remain
finite. This process was described in [4]. Also, as detailed in [3], a
matching process is required near the outermost layer of the star so that the
metric in Eq. (1) and its corresponding stress tensor in Eqs. (2-4) match
smoothly to the exterior Schwarzschild geometry. Later in the paper, we will
offer a different perspective on the regularization and smoothing in terms of
a point-like charge at the center of the star and an equal and opposite charge
that is distributed uniformly over the surface of the star.
A perk of the maximally negative radial pressure is that the frozen star model
is able to evade the singularity theorems of Hawking and Penrose [27, 28]. For
a finite value of $\varepsilon^{2}$, a trapped surface is never actually
formed and having $\;p+\rho=0\;$ means that geodesics do not converge. The way
to understand this is to realize that the equation of state $\;p+\rho=0\;$ can
also be viewed as the saturation of the radial component of the null-energy
condition. Thus, the conditions under which the singularity theorems are valid
are not satisfied by the frozen star geometry. Similarly, a frozen star evades
the “Buchdahl-like” bounds which limit the compactness of matter [29, 30, 31,
32, 33] by having a large negative pressure throughout its interior.
An important characteristic of the frozen star geometry is that the deviations
from the Schwarzschild solution are large throughout the interior of the
object; that is, on horizon-length scales. This goes against common lore that
singularity resolution requires some quantum corrections only near the would-
be singularity, but such reasoning has been shown to lead to energy loss by
radiation that far exceeds the original mass of the object [34, 35].
## 3 Born-Infeld effective Lagrangian and BIons
Here, we review for completeness relevant portions of the analysis of Gibbons,
Hori and Yi (GHY) in [15] (also see [16]), where further details and
references can be found. Our conventions are that an index of 0 denotes time,
one of $a$,$b$,$\cdots$ denotes an arbitrary spacetime dimension and one of
$i$,$j$,$\cdots$ denotes an arbitrary spatial dimension. We assume three large
spatial dimensions for concreteness and, for spherical coordinates,
$\;(0,1,2,3)=(t,r,\theta,\phi)\;$.
The starting point for the analysis in [15] is Sen’s effective action for the
decay of unstable $D$-branes; specifically, at the end of tachyon condensation
and near the minimum of the tachyon potential, where it is vanishing [10, 11,
12, 13, 14]. Sen’s effective Lagrangian (density) can be expressed as
${\cal L}\;=\;-V(T)\sqrt{-Det\left(\eta+2\pi\alpha^{\prime}{\cal
F}\right)}\;+\;\sqrt{-\eta}A^{a}J_{a}\;,$ (6)
where $V(T)$ is the tachyon potential or, equivalently, the $D$-brane tension,
$2\pi\alpha^{\prime}$ is the inverse of the fundamental string tension,
$\;\eta^{a}_{\;\;b}=\delta^{a}_{\;\;b}\;$ is the Minkowski background metric,
$\;{\cal F}_{ab}=\partial_{a}A_{b}-\partial_{b}A_{a}\;$ is the field-strength
tensor for the gauge field $A_{a}$ and $J_{a}$ is the source, a 4-vector
current density. We have not included the kinetic terms for the tachyon field
and for the scalars that are associated with the additional transverse
dimensions. These fields are readily restored if need be (see [16] for formal
details) but are not necessary for the current analysis.
The Lagrangian (6) vanishes when $V(T)$ vanishes; however, the Hamiltonian
${\cal H}$ is well defined. In the case of no magnetic sources, this is
$\;{\cal H}\;=\;D^{i}{\cal F}_{0i}-{\cal L}\;=\;E_{i}D^{i}\;,$ (7)
where the last equality has used the fact that $\;{\cal L}=0\;$ at the minimum
of the potential, $\;E_{i}={\cal F}_{0i}\;$ is the electric field and
$\;D^{i}$ is the electric displacement which, in the current case, is the
canonical conjugate of the gauge field, $\;D^{i}=\frac{\delta{\cal
L}}{\delta(\partial_{0}A_{i})}\;$. The displacement $D^{i}$ is naturally
preferred over $E^{i}$ to play the role of the “electric field” because it is
the field in a Born–Infeld theory that always satisfies the Gauss’-law
constraint,
$\partial_{i}D^{i}\;=\;J_{0}\;=\;\rho_{e}\;.$ (8)
More generally, the Hamiltonian can be expressed as
${\cal H}\;=\;\frac{\delta{\cal L}}{\delta(\partial_{0}A_{i})}\,\partial_{0}A_{i}-{\cal L}\;=\;\frac{1}{2\pi\alpha^{\prime}}\sqrt{D^{i}D_{i}+{\cal P}^{i}{\cal P}_{i}}\;,$ (9)
where $\;{\cal P}_{i}=-{\cal
F}_{ij}D^{j}=\left(\vec{D}\times\vec{B}\right)_{i}\;$ is the conserved
momentum associated with spatial translations. The definition of the magnetic
induction is standard, $\;B^{i}=\frac{1}{2}\epsilon^{ijk}{\cal F}_{jk}\;$.
The sources can be included implicitly by imposing the Gauss’-law constraint
(8), Ampere’s law $\;\partial_{i}{\cal
F}^{i}_{\;\;j}-\partial_{0}{E}_{j}=J_{j}\;$, along with the Bianchi
identities, which include Faraday’s law
$\;{\nabla\times}\;\vec{E}+\partial_{0}{\vec{B}}=0\;$ and
$\partial_{i}B^{i}\;=\;0\;.$ (10)
The last equation assumes that there are no sources for magnetic monopoles.
The inline equations above Eq. (10) are of less importance here, as our main
concern is the case for which magnetic sources and time-dependent fields are
absent.
To obtain a useful Lagrangian, GHY follow techniques from [36] and regard the
magnetic fields ${\cal F}_{ij}$ as the conjugates with respect to a new set of
dual variables $K_{ij}$ such that $\;K^{ij}=2\frac{\delta{\cal H}}{\delta{\cal
F}_{ij}}$. The fields ${\cal F}_{ab}$ and ${\cal K}_{ab}$ — the latter being
an extension of $K_{ij}$ as defined in Eq. (14) — should be regarded as
independent variables. This will be relevant when we derive the equations of
motion (EOM) for the tensor field ${\cal K}$.
The resulting Lagrangian is then defined by an appropriate Legendre
transformation,
${\cal L}^{\prime}\;=\;{\cal H}-\frac{1}{2}{\cal
F}_{ij}K^{ij}\;=\;\frac{1}{2\pi\alpha^{\prime}}\sqrt{D^{i}D_{i}-\frac{1}{2}K^{ij}K_{ij}}\;,$
(11)
and is clearly non-vanishing in general. The latter equality can be obtained
using the relation
$\frac{1}{2}K^{ij}K_{ij}\;=\;\frac{1}{{\cal H}^{2}}{\cal P}^{i}{\cal
P}_{i}D^{j}D_{j}\;.$ (12)
GHY then define a two-form field ${\cal
K}_{ab}=\partial_{a}\widetilde{A}_{b}-\partial_{b}\widetilde{A}_{a}\;$ that
acts as an effective field strength,
$\displaystyle{\cal K}_{0i}\;=\;D_{i}\;,$ (13)
$\displaystyle{\cal K}_{ij}\;=\;K_{ij}\;.$ (14)
The new Lagrangian can now be written in a manifestly Born–Infeld form,
${\cal L}^{\prime}\;=\;\frac{1}{2\pi\alpha^{\prime}}\sqrt{-\frac{1}{2}{\cal
K}^{ab}{\cal K}_{ab}}\;,$ (15)
where the negative sign is a consequence of the time–time component of the
metric appearing in the contraction when $\;a=0\;$ and $\;b=i\;$ (and vice
versa). Here, the (effective) electric-field term is presumed to be the
dominant one. In the case of no magnetic sources, one finds that the new
Lagrangian is the same as the original Hamiltonian, $\;{\cal
L}^{\prime}\;=\;E_{i}D^{i}\;$.
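This identity can be checked directly: when $\;{\cal K}_{ij}=K_{ij}=0\;$,
Eq. (11) reduces to $\;{\cal L}^{\prime}={\cal H}=\frac{1}{2\pi\alpha^{\prime}}\sqrt{D^{i}D_{i}}\;$,
and using the Hamiltonian relation
$\;E_{i}=\frac{\delta{\cal H}}{\delta D^{i}}=\frac{1}{2\pi\alpha^{\prime}}\frac{D_{i}}{\sqrt{D^{j}D_{j}}}\;$,
it follows that $\;{\cal L}^{\prime}=E_{i}D^{i}\;$.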
There is a subtlety in this procedure in that Eq. (14) implies that $\;{\cal
K}\wedge{\cal K}=0\;$ but, given that this constraint is in effect, the
canonical analysis of ${\cal L}^{\prime}$ does not lead back to the same
Hamiltonian ${\cal H}$. As explained in [15], this situation can be rectified
by adding a Lagrange-multiplier term to ${\cal L}^{\prime}$ that imposes the
constraint explicitly.
To summarize, the Born–Infeld Lagrangian describing the string fluid is the
following:
${\cal L}^{\prime}\;=\;\frac{1}{2\pi\alpha^{\prime}}\sqrt{-\frac{1}{2}{\cal
K}^{ab}{\cal K}_{ab}}\;+\;\lambda_{1}\epsilon^{abcd}{\cal K}_{ab}{\cal
K}_{cd}\;+\sqrt{-\eta}\;A^{a}J_{a}\;,$ (16)
where $\lambda_{1}$ is the Lagrange multiplier and the constraint is
automatically satisfied (and the constraint term vanishes) by imposing Eq.
(14). The connection with strings follows from the fact that ${\cal K}^{ab}$
can be identified as a surface-forming bivector [15], which can then be
interpreted as a cross-sectional slice of the world sheet of an open string or
a flux tube.
The EOM for this Lagrangian, $\;d{\cal L}^{\prime}=0\;$, are most transparent
when expressed in terms of the original field-strength tensor as these are
equivalent to the original Bianchi identities [15],
$d{\cal F}\;=\;0\;.$ (17)
On the other hand, the Bianchi identities for the new field-strength tensor
$d{\cal K}\;=\;0\;$ (18)
are equivalent to the EOM for the original Lagrangian [15].
There is also a subtlety concerning the source term. The original source term
in Eq. (6) is unaffected by the Legendre transformation; however, $A_{a}$ is
not the gauge field for the field strength ${\cal K}_{ab}$. Let us denote its
gauge field as $\widetilde{A}_{a}$, then
$\;D_{i}=\partial_{i}\widetilde{A}^{0}\;$ whereas
$\;E_{i}=\partial_{i}A^{0}\;$. Fortunately, this tension is resolved because,
as we have noted, the two types of field strengths are independent, as must
also be true of their respective gauge fields. It follows that we can vary the
Lagrangian ${\cal L}^{\prime}$ with respect to either gauge field, and it
happens to be the variation with respect to $A^{a}$ that leads to the expected
Gauss’s-law constraint in Eq. (8). This is most clear in the case of no
magnetic sources, as can be seen from the form of the inline equation for
${\cal L}^{\prime}$ below Eq. (15).
If there are no bulk sources — our case of particular interest — the
stress–energy tensor is given by
$\displaystyle T_{ab}\;=\;2\frac{\delta{\cal
L}^{\prime}}{\delta\eta^{ab}}\;=\;\frac{1}{2\pi\alpha^{\prime}}\frac{{\cal
K}_{a}^{\;\;c}{\cal K}_{bc}}{\sqrt{-\frac{1}{2}{\cal K}^{ab}{\cal
K}_{ab}}}\;.$ (19)
That this is the appropriate definition of $T_{ab}$ is justified in [19]. In
terms of the effective electric fields, magnetic fields and momenta,
$\displaystyle T_{00}\;=\;\frac{1}{2\pi\alpha^{\prime}}\frac{D^{i}D_{i}}{\sqrt{D^{i}D_{i}-\frac{1}{2}K^{ij}K_{ij}}}\;=\;{\cal H}\;,$ (20)
$\displaystyle T_{0i}\;=\;\frac{1}{2\pi\alpha^{\prime}}\frac{D^{j}K_{ij}}{\sqrt{D^{i}D_{i}-\frac{1}{2}K^{ij}K_{ij}}}\;=\;-{\cal P}_{i}\;,$ (21)
$\displaystyle T_{ij}\;=\;\frac{1}{2\pi\alpha^{\prime}}\frac{-D_{i}D_{j}+K_{i}^{\;\;k}K_{jk}}{\sqrt{D^{i}D_{i}-\frac{1}{2}K^{ij}K_{ij}}}\;=\;\frac{1}{2\pi\alpha^{\prime}}\frac{-D_{i}D_{j}+{\cal P}_{i}{\cal P}_{j}}{{\cal H}}\;,$ (22)
where the right-most relations make use of Eq. (12).
The simplest solution of the EOM is a static, spherically symmetric
configuration, which occurs when $\;{\cal K}_{01}=-{\cal K}_{10}\;$ and all
other elements of this field strength vanish. In that case, $\;{\cal
K}^{ab}{\cal K}_{ab}=-2D_{1}D^{1}\;$. Then the only non-vanishing elements of
$T_{ab}$ are
$T_{00}\;=\;-T_{11}\;=\;\frac{1}{2\pi\alpha^{\prime}}\sqrt{D^{1}D_{1}}\;,$
(23)
implying that $\;p=-\rho\;$ and $\;p_{\perp}=0\;$. This is the so-called
string fluid [15], for which
$E_{1}\;=\;\frac{1}{2\pi\alpha^{\prime}}\frac{D_{1}}{\sqrt{(D_{1})^{2}}}\;=\;\frac{1}{2\pi\alpha^{\prime}}\;,$
(24)
and so
$T_{00}\;=\;E_{1}D^{1}\;.$ (25)
Importantly, the bulk portion (or first term) of the Lagrangian (16) is
exactly the same as $T_{00}$,
${\cal
L}^{\prime}_{bulk}\;=\;\frac{1}{2\pi\alpha^{\prime}}\sqrt{D^{1}D_{1}}\;=\;E_{1}D^{1}\;.$
(26)
The static and spherically symmetric case is also related to the BIon solution
[23], for which $\;D_{1}=\frac{q}{4\pi r^{2}}\;$, corresponding to a point-
like charge at the origin, $\;\partial_{i}D^{i}=q\,\delta^{(3)}(\vec{r})\;$.
BIons can therefore be viewed as being sourced by a
fluid of electric flux lines that are emanating radially from a point source
at the center and extending all the way to infinity.
## 4 Frozen stars as gravitationally backreacted BIons
In this section, we couple the BIons to Einstein’s gravity, discuss the
backreaction on the BIons and, finally, show that they correspond to solutions
of the frozen star model.
The complete gravitational and matter action includes, in addition to the
Einstein–Hilbert and Born–Infeld Lagrangians, the constraint and source terms
$S_{GBI}\;=\;\int
d^{4}x\left\\{\sqrt{-g}R+\frac{1}{2\pi\alpha^{\prime}}\sqrt{-\frac{1}{2}{\cal
K}^{ab}{\cal K}_{ab}}+\lambda_{1}\epsilon^{abcd}{\cal K}_{ab}{\cal
K}_{cd}+\sqrt{-g}J_{a}A^{a}\right\\}\;.$ (27)
As discussed at the end of this section, a second constraint which fixes the
mass is still to be included. But, as a surface term, it will not affect the
bulk EOM. We will also be assuming that there are no sources in the bulk.
The gravitational EOM are $\;T^{a}_{\;\;b}=\frac{1}{8\pi G}G^{a}_{\;\;b}\;$,
with $T^{a}_{\;\;b}$ given in Eq. (19), again because there are no bulk
sources. The Born–Infeld EOM are as discussed in the previous section, except
partial derivatives should be replaced by covariant derivatives (although
often inconsequential because of the anti-symmetry properties of the field-
strength tensor). The same applies to the Bianchi identities and source
constraints.
To reproduce the frozen star solution, we will need to choose specific source
terms and impose certain boundary conditions such that the total charge of the
frozen star vanishes and its total mass is fixed (the latter constraint will
be discussed further on). The source current $J_{a}$, for which only $J_{0}$
is non-vanishing, therefore includes a point-like positive electric charge at
the star’s center and a negative charge of equal magnitude which is spread
evenly over the outer surface of the star.
One might be concerned that the attractive electric force between the opposite
charges in the core and the outer layer will endanger its stability. However,
as discussed later, fixing the mass of the star exactly offsets this
attractive force. The reason is that a fixed mass for the solution
translates into a fixed radius, so that the outer layer cannot move inwards
(or outwards) in response to a supplementary force. This was a key ingredient
in the discussion of the stability of the frozen star [1, 3, 5].
The frozen star solution can be related to the BIon solution. The matching can
be made precise for the static, spherically symmetric case by using the
${}^{t}_{\ t}$ component of the Einstein equations along with Eq. (23),
$\rho_{FS}\;=\;\frac{1}{8\pi
G}\frac{1-\varepsilon^{2}}{r^{2}}\;=\;\frac{1}{2\pi\alpha^{\prime}}\sqrt{D^{1}D_{1}}\;,$
(28)
such that $\rho_{FS}$ is the energy density as expressed in Eq. (2). The
result is
$\sqrt{D^{1}D_{1}}\;=\;\frac{\alpha^{\prime}}{4G}\frac{1}{r^{2}}+{\cal
O}[\varepsilon^{2}]\;,$ (29)
corresponding to the existence of an electric point-like charge at the center
of the star, since the solution follows from
$\;\nabla_{i}D^{i}=\nabla_{1}D^{1}=q_{core}\,\delta^{(3)}(\vec{r})\;.$ It can
then be deduced that
$q_{core}\;=\;\pi\frac{\alpha^{\prime}}{G}\;.$ (30)
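To make the comparison explicit: a point charge at the origin sources
$\;D^{1}=\frac{q_{core}}{4\pi r^{2}}\;$, so matching to Eq. (29) gives
$\;\frac{q_{core}}{4\pi}=\frac{\alpha^{\prime}}{4G}\;$ and hence
$\;q_{core}=\pi\frac{\alpha^{\prime}}{G}\;$, up to ${\cal O}[\varepsilon^{2}]$
corrections.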
An interesting feature of the solution is that the ratio
${\alpha^{\prime}}/{G}$ in weakly coupled string theories scales as $\;\sim
1/g_{s}^{2}\;$, where $g_{s}^{2}$ is the closed string coupling. Moreover, the
description of the solution as radial flux lines fits in well with the
hedgehog picture of the frozen star that was recalled in Section 2.
The total charge of the frozen star needs to vanish, as its energy density,
and thus its electric field, vanishes in the exterior region where the
geometry is described by the vacuum Schwarzschild solution. Correspondingly,
there has to be a charge $q_{out}$ that is spread out uniformly over the
star’s outer surface layer and equal to $-q_{core}$. This leads to a source
term at the outer surface as follows:
$\left(J_{0}\right)_{out}\;=\;\frac{q_{out}}{4\pi
r^{2}}\delta(r-R)\;=\;-\frac{q_{core}}{4\pi r^{2}}\delta(r-R)\;.$ (31)
Figure 1: A frozen star sourced by a fluid of electric flux lines that are
emanating radially from a point-like charge in its core and ending on its
outer layer which is oppositely charged. The electric force between the
charges is offset because the mass of the star is fixed.
Let us now discuss the boundary condition that fixes the mass of the frozen
star. This needs to be formally imposed at infinity, as this is the location
where the mass is defined by way of a surface integral. However, because the
solution outside of the frozen star is strictly a vacuum, the mass can also be
evaluated as a surface integral at the outermost radius of the star, or,
alternatively, as a volume integral that enforces the constraint $\;\int
d^{3}x\;\sqrt{g_{22}g_{33}}\;\rho_{FS}=M\;$, as explained in some discussions
about BH mechanics [37, 38]. The latter option entails adding a second
Lagrange-multiplier term of the form (initially expressed as a scalar, not a
density)
$\Delta{L}\;=\;\lambda_{2}\left(\int_{0}^{R}\left[dr\;4\pi r^{2}\;\rho_{FS}\right]\;-\;M\right)\;.$ (32)
Using Eq. (28), we may rewrite the constraint term as
$\Delta{L}\;=\;\lambda_{2}\left(\int_{0}^{R}\left[dr\;4\pi r^{2}\;E_{1}D^{1}\right]\;-\;M\right)\;,$ (33)
which leads to the following Lagrangian density $\Delta{\cal L}$:
$\Delta{\cal L}\;=\;\lambda_{2}\frac{\delta\left(r-R\right)}{4\pi r^{2}}\left(\int_{0}^{R}\left[dr\;4\pi r^{2}\;E_{1}D^{1}\right]\;-\;M\right)\;.$ (34)
We have, for simplicity, assumed a static and spherically symmetric solution
in expressing the above constraint term, as this will allow us to show
explicitly how the electric force between the charges at the core and the
outer surface is canceled by the mass constraint. However, it is possible to
express the mass constraint covariantly, along the lines of discussions on
isolated and dynamical horizons such as [38]. The outer layer of the frozen
star can be treated as a marginally trapped surface and, therefore, the
formalism for calculating the mass as a covariant surface integral is
applicable. We leave this to a future investigation.
We can solve for $\lambda_{2}$ by varying $\Delta{\cal L}$ with respect to the
original gauge field $A^{0}(R)$, as described in the previous section.
(Note that the integrand factor of $\;4\pi r^{2}=4\pi\sqrt{-g}\;$ is unaffected
by a covariant derivative.) The equation of motion that results from varying by
$A^{0}(R)$ at the outer surface, $\;\frac{\delta\Delta{\cal L}}{\delta
A^{0}(R)}+J_{0}(R)=0\;$, along with the Gauss’-law constraint (8) and the
relation between the charges (31), leads to $\;\lambda_{2}=-1\;$, which simply
means that the constraint force exactly cancels the attractive electric force
on the outer charge distribution.
## 5 Summary and outlook
We have shown how our frozen star model for an ultracompact BH mimicker can be
described effectively as the spherically symmetric solution of Einstein’s
gravity coupled to the GHY form of Born–Infeld Lagrangian, which uses Sen’s
effective action for $D$-brane decay as its starting point.
The current framework can be extended in several directions. Including
rotation is straightforward. Let us recall the zero-angular-momentum-
observer’s form of the stress tensor for the rotating star, as presented in
[7]. In this case $\;p=-\rho\;$ and $\;p_{\perp}\sim{\cal
O}[\varepsilon^{2}]\;$, so that the corresponding axially symmetric solution
can be found with only an electric field and vanishing magnetic fields. The
only difference from the static, spherically symmetric case is the dependence
of the electric field on $r$ and $\theta$ rather than on $r$ alone.
To connect this framework with the defrosted star, for which $\;\rho+p\;$ and
$p_{\perp}$ are no longer vanishing but are perturbatively small, will require
more work. In addition to a spherical electric field, a weak spherical
magnetic field will be required. This can be realized by adding a magnetic
monopole source, meaning that the star is sourced by a dyon, as both its
electric and magnetic fluxes are in the radial direction.
It is also of interest that the Born–Infeld model can be thought of as
describing a fluid of open strings, given that the frozen star model has its
conceptual origins as a classical version of our earlier-proposed polymer
model, which contains an extremely hot fluid of closed strings. It is tempting
to conjecture that there is some sort of duality at work here. Nevertheless,
of the pair, it is the Born–Infeld description that has an emphatic advantage;
namely, a well-defined mathematical framework, which we look forward to
exploiting.
## Acknowledgments
We thank Yoav Zigdon for discussions, Suvendu Giri for pointing out to us the
similarity of the frozen star model to the clouds-of-string model and to the
Born-Infeld BIons, and Frans Pretorius for insisting that we find the source
matter Lagrangian of the frozen star. We extend our special thanks to Piljin
Yi, for helping us understand his work and its implications for the frozen star
model. The research is supported by the German Research Foundation through a
German-Israeli Project Cooperation (DIP) grant “Holography and the Swampland”
and by VATAT (Israel planning and budgeting committee) grant for supporting
theoretical high energy physics. The research of AJMM received support from an
NRF Evaluation and Rating Grant 119411 and a Rhodes Discretionary Grant
SD07/2022. AJMM thanks Ben Gurion University for their hospitality during past
visits.
## References
* [1] R. Brustein and A. J. M. Medved, “Resisting collapse: How matter inside a black hole can withstand gravity,” Phys. Rev. D 99, no.6, 064019 (2019) [arXiv:1805.11667 [hep-th]].
* [2] R. Brustein and A. J. M. Medved, “Non-Singular Black Holes Interiors Need Physics Beyond the Standard Model,” Fortsch. Phys. 67, no.10, 1900058 (2019) [arXiv:1902.07990 [hep-th]].
* [3] R. Brustein, A. J. M. Medved and T. Simhon, “Black holes as frozen stars,” Phys. Rev. D 105, no.2, 024019 (2022) [arXiv:2109.10017 [gr-qc]].
* [4] R. Brustein, A. J. M. Medved, T. Shindelman and T. Simhon, “Black Holes as Frozen Stars: Regular Interior Geometry,” Fortsch. Phys. 72, no.1, 2300188 (2024) [arXiv:2301.09712 [gr-qc]].
* [5] R. Brustein, A. J. M. Medved and T. Shindelman, “Defrosting frozen stars: spectrum of internal fluid modes,” Phys. Rev. D 108, no.4, 044058 (2023) [arXiv:2304.04984 [gr-qc]].
* [6] R. Brustein, A. J. M. Medved and T. Simhon, “Thermodynamics of frozen stars,” [arXiv:2310.11572 [gr-qc]].
* [7] R. Brustein and A. J. M. Medved, “Sourcing the Kerr geometry,” [arXiv:2310.16467 [gr-qc]].
* [8] R. Brustein and A. J. M. Medved, “Black holes as collapsed polymers,” Fortsch. Phys. 65, no. 1, 1600114 (2017) [arXiv:1602.07706 [hep-th]].
* [9] R. Brustein and A. J. M. Medved, “Emergent horizon, Hawking radiation and chaos in the collapsed polymer model of a black hole,” Fortsch. Phys. 65, 1600116 (2017) [arXiv:1607.03721 [hep-th]].
* [10] A. Sen, “Tachyon condensation on the brane anti-brane system,” JHEP 08, 012 (1998) [arXiv:hep-th/9805170 [hep-th]].
* [11] A. Sen, “BPS D-branes on nonsupersymmetric cycles,” JHEP 12, 021 (1998) [arXiv:hep-th/9812031 [hep-th]].
* [12] A. Sen, “NonBPS states and Branes in string theory,” [arXiv:hep-th/9904207 [hep-th]].
* [13] A. Sen, “Supersymmetric world volume action for nonBPS D-branes,” JHEP 10, 008 (1999) [arXiv:hep-th/9909062 [hep-th]].
* [14] A. Sen, “Universality of the tachyon potential,” JHEP 12, 027 (1999) [arXiv:hep-th/9911116 [hep-th]].
* [15] G. W. Gibbons, K. Hori and P. Yi, “String fluid from unstable D-branes,” Nucl. Phys. B 596, 136 (2001) [arXiv:hep-th/0009061 [hep-th]].
* [16] H. U. Yee and P. Yi, “Open / closed duality, unstable D-branes, and coarse grained closed strings,” Nucl. Phys. B 686, 31 (2004) [arXiv:hep-th/0402027 [hep-th]].
* [17] A. Sen, “Tachyon dynamics in open string theory,” Int. J. Mod. Phys. A 20, 5513 (2005) [arXiv:hep-th/0410103 [hep-th]].
* [18] H. B. Nielsen and P. Olesen, “Local field theory of the dual string,” Nucl. Phys. B 57, 367 (1973).
* [19] P. S. Letelier, “Clouds Of Strings In General Relativity,” Phys. Rev. D 20 1294 (1979).
* [20] J. Stachel, “Thickening The String. I. The String Perfect Dust,” Phys. Rev. D 21, 2171 (1980).
* [21] E. I. Guendelman and A. Rabinowitz, “The Gravitational field of a hedgehog and the evolution of vacuum bubbles,” Phys. Rev. D 44, 3152 (1991).
* [22] E. I. Guendelman and A. I. Rabinowitz, “Hedgehog compactification,” Phys. Rev. D 47, 3474 (1993) [erratum: Phys. Rev. D 48, 2961 (1993)].
* [23] G. W. Gibbons, “Born-Infeld particles and Dirichlet p-branes,” Nucl. Phys. B 514, 603 (1998) [arXiv:hep-th/9709027 [hep-th]].
* [24] G. W. Gibbons, “Aspects of Born-Infeld theory and string / M theory,” AIP Conf. Proc. 589, no.1, 324 (2001) [arXiv:hep-th/0106059 [hep-th]].
* [25] R. Brustein and A. J. M. Medved, “Quantum hair of black holes out of equilibrium,” Phys. Rev. D 97, no.4, 044035 (2018) [arXiv:1709.03566 [hep-th]].
* [26] R. Carballo-Rubio, F. Di Filippo, S. Liberati and M. Visser, “Constraints on thermalizing surfaces from infrared observations of supermassive black holes,” [arXiv:2306.17480 [astro-ph.HE]].
* [27] R. Penrose, “Gravitational Collapse and Space-Time Singularities,” Phys. Rev. Lett. 14, 57 (1965).
* [28] S. W. Hawking and R. Penrose, “The singularities of gravitational collapse and cosmology,” Proc. R. Soc. Lond. A 314, 529 (1970).
* [29] H. Buchdahl, “General Relativistic Fluid Spheres,” Phys. Rev. 116, 1027 (1959).
* [30] S. Chandrasekhar “Dynamical Instability of Gaseous Masses Approaching the Schwarzschild Limit in General Relativity,” Phys. Rev. Lett. 12, 114 (1964).
* [31] S. Chandrasekhar, “The Dynamical Instability of Gaseous Masses Approaching the Schwarzschild Limit in General Relativity,” Astrophys. J. 140, 417 (1964).
* [32] H. Bondi, “Massive spheres in general relativity,” Proc. Roy. Soc. Lond. A 282, 303 (1964).
* [33] P. O. Mazur and E. Mottola, “Surface tension and negative pressure interior of a non-singular ‘black hole’,” Class. Quant. Grav. 32, no. 21, 215024 (2015) [arXiv:1501.03806 [gr-qc]].
* [34] V. P. Frolov and A. Zelnikov, “Quantum radiation from an evaporating nonsingular black hole,” Phys. Rev. D 95, no. 12, 124028 (2017) [arXiv:1704.03043 [hep-th]].
* [35] R. Carballo-Rubio, F. Di Filippo, S. Liberati, C. Pacilio and M. Visser, “On the viability of regular black holes,” JHEP 1807, 023 (2018) [arXiv:1805.02675 [gr-qc]].
* [36] A. A. Tseytlin, “Selfduality of Born-Infeld action and Dirichlet three-brane of type IIB superstring theory,” Nucl. Phys. B 469, 51 (1996) [arXiv:hep-th/9602064 [hep-th]].
* [37] J. M. Bardeen, B. Carter and S. W. Hawking, “The Four laws of black hole mechanics,” Commun. Math. Phys. 31, 161 (1973).
* [38] A. Ashtekar and B. Krishnan, “Isolated and dynamical horizons and their applications,” Living Rev. Rel. 7, 10 (2004) [arXiv:gr-qc/0407042 [gr-qc]].
|
# A Systematic Bias of Machine Learning Regression Models and Its Correction:
an Application to Imaging-based Brain Age Prediction
Hwiyoung Lee Maryland Psychiatric Research Center, University of Maryland,
School of Medicine University of Maryland, Institute for Health Computing
Shuo Chen Maryland Psychiatric Research Center, University of Maryland,
School of Medicine University of Maryland, Institute for Health Computing
###### Abstract
Machine learning models for continuous outcomes often yield systematically
biased predictions, particularly for values that deviate substantially from the
mean. Specifically, predictions for large-valued outcomes tend to be
negatively biased, while those for small-valued outcomes are positively
biased. We refer to this linear, center-warping bias as the
“systematic bias of machine learning regression”. In this paper, we first
demonstrate that this issue persists across various machine learning models,
and then delve into its theoretical underpinnings. We propose a general
constrained optimization approach designed to correct this bias and develop a
computationally efficient algorithm to implement our method. Our simulation
results indicate that our correction method effectively eliminates the bias
from the predicted outcomes. We apply the proposed approach to the prediction
of brain age using neuroimaging data. In comparison to competing machine
learning models, our method effectively addresses the longstanding issue of
“systematic bias of machine learning regression” in neuroimaging-based brain
age calculation, yielding unbiased predictions of brain age.
## 1 Introduction
Constructing predictive models with continuous outcomes is a fundamental
aspect of modern data science. Numerous tools have been developed for this
purpose, including statistical methods such as ordinary linear regression,
regression shrinkage, and Generalized Additive Models (GAM), as well as
machine learning methods like random forests, XGBoost, and support vector
regression, among others (Hastie et al., 2009). A general objective of these
methods is to minimize the discrepancy between the predicted continuous
outcomes and true values, particularly in independent testing datasets. Using
this heuristic, the predicted outcome is unbiased under the classic linear
regression setting. In contrast, the predicted outcome from a machine learning
model for continuous outcomes is often systematically biased (Zhang and Lu,
2012; Belitz and Stackelberg, 2021). This systematic bias is problematic for
the applications of machine learning regression models, leading to inaccurate
conclusions and forecasts.
We first illustrate the bias introduced by machine learning regression using a
simple simulation study and multiple machine learning models, including Kernel
Ridge Regression (KRR), Lasso, XGBoost, and Random Forest. Specifically, we
first generate synthetic training and testing data, each comprising $n=1,000$
observations. The predictors $\mathbb{X}$ were generated from a multivariate
normal distribution with dimension $p=200$ (i.e., $\mathbb{X}\sim
N(0,\Sigma)$, where $\Sigma=I_{p}$). The response $\mathbb{y}$ is linearly
related to the predictor $\mathbb{X}$, i.e.,
$\mathbb{y}=\mathbb{X}\boldsymbol{\beta}+\epsilon$. Here,
$\boldsymbol{\beta}\in\mathbb{R}^{p}$ is a vector of regression coefficients,
with each coefficient set to 0.1, and $\epsilon$ denotes random noise, which
follows the standard normal distribution.
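As a minimal, self-contained sketch of this experiment (assuming the numpy and
scikit-learn packages; the lasso penalty value is illustrative rather than the
tuned value behind the figure), the bias can be reproduced as follows:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 1000, 200
beta = np.full(p, 0.1)  # all regression coefficients set to 0.1

def simulate(n_obs):
    X = rng.standard_normal((n_obs, p))        # X ~ N(0, I_p)
    y = X @ beta + rng.standard_normal(n_obs)  # y = X beta + standard-normal noise
    return X, y

X_train, y_train = simulate(n)
X_test, y_test = simulate(n)

model = Lasso(alpha=0.01).fit(X_train, y_train)  # illustrative penalty value
y_hat = model.predict(X_test)

# Slope of the linear regression of predictions on observed responses;
# a slope below 1 is the systematic central-tendency bias described above.
slope = np.polyfit(y_test, y_hat, 1)[0]
print(f"slope of y_hat on y: {slope:.3f}")
```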
Figure 1 presents a scatter plot of the predicted values against the observed
responses. A solid line representing the linear regression fit of the
predicted $\widehat{\mathbb{y}}$ to $\mathbb{y}$ is also shown, whereas the
dotted line has a slope of 1. When $\widehat{\mathbb{y}}$ is unbiased, the
slope of the solid line is also 1, otherwise biased. As demonstrated in this
figure, a systematic bias arises, characterized by a tendency to predict
larger values when $y$ is small (overestimation) and to predict lower values
when $y$ is large (underestimation). The sum of these biases for all
observations is approximately zero due to the optimization of the commonly
used objective function. This linear bias tendency is generally present across
all machine learning models and is more apparent in the testing dataset.
Since the systematic bias shows a consistent center-warping and linear
tendency across all machine learning models, we refer to this phenomenon as
the “systematic bias of machine learning regression” (SBMR). We denote the
linear trend of this systematic bias by $c=\sin(\theta)$, where $\theta$ is the
angle between the dotted line and the solid line. In the following Proposition 1, we show
that the mean squared error of the biased prediction from a machine learning
regression model is smaller than the unbiased prediction. This theoretically
explains why the aforementioned systematic bias is preferred by all machine
learning regression models with the objective function to minimize the mean
squared error.
###### Proposition 1.
Consider the outcome $Y$ with a mean of zero and variance of $\sigma^{2}$,
$\widehat{y}_{i}$ as the unbiased prediction of $y_{i}$ ($i=1,\cdots,n$), and
the systematically biased machine learning regression prediction
$\tilde{y_{i}}=\widehat{y_{i}}-c\widehat{y_{i}}$, where $0<c<1$. Then for $c$,
and the coefficient of determination $R^{2}$ satisfying
$c<\frac{2-2R^{2}}{2-R^{2}}$, we have
$\displaystyle\mathbb{E}\left[\sum_{i=1}^{n}\left(y_{i}-\widehat{y}_{i}\right)^{2}\right]\geq\mathbb{E}\left[\sum_{i=1}^{n}\left(y_{i}-\tilde{y}_{i}\right)^{2}\right].$
This can be straightforwardly checked by using
$\mathbb{E}[\sum_{i=1}^{n}\left(y_{i}-\widehat{y_{i}}\right)^{2}]=(n-1)\sigma^{2}(1-R^{2})$,
and
$\mathbb{E}[\sum_{i=1}^{n}\left(y_{i}-\tilde{y}_{i}\right)^{2}]=(n-1)c^{2}\sigma^{2}+(n-1)(1-c)^{2}\sigma^{2}(1-R^{2})$.
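As a concrete numerical check, take $R^{2}=0.5$, for which the condition
requires $c<\frac{2-2R^{2}}{2-R^{2}}=\frac{2}{3}$. Choosing $c=0.3$ gives, up
to the common factor $(n-1)$, an expected squared error of
$\sigma^{2}(1-R^{2})=0.5\sigma^{2}$ for the unbiased prediction versus
$c^{2}\sigma^{2}+(1-c)^{2}\sigma^{2}(1-R^{2})=(0.09+0.245)\,\sigma^{2}=0.335\,\sigma^{2}$
for the biased one, so the biased prediction indeed attains the smaller mean
squared error.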
Although this issue frequently arises during the regression of continuous
variables, it has received little attention in general statistics and machine
learning literature. However, in the neuroimaging community, especially in the
context of predicting brain age—calculated by regressing chronological age on
neuroimaging data—this phenomenon is widely acknowledged. Many studies have
endeavored to address this issue; the majority of proposed solutions focus on
the implementation of a post-bias correction step Smith et al., (2019) after
fitting the model, rather than addressing it at the level of the main
objective function. However, post-bias correction is often criticized Butler
et al., (2021) because this additional step relies on certain assumptions
about the distributions or relationships within the data. Thus, if these
assumptions are violated, the correction may not only fail to accurately
address the issue but could potentially worsen it. Nevertheless, to the best
of our knowledge, there are no other methodologies besides the bias correction
approach. To fill this gap and tackle the SBMR issue efficiently, we propose a
novel approach that addresses this problem at the objective function level,
without requiring any subsequent steps. This is achieved by imposing
constraints designed to prevent the fitted values from being forced toward the
mean.
Figure 1: Illustration of regression to the mean: We simulated 1,000
observations for each training and testing set. We compared our proposed
methods (represented in orange panels) with conventional machine learning
methods (shown in grey panels), including Kernel Ridge Regression (KRR), Lasso
Regression, XGBoost, and Random Forest (RF).
## 2 Method
In this section, we propose a solution that addresses the SBMR issue by
imposing a linear constraint on two existing methods: Lasso and KRR. This
additional constraint prevents the fitted values from being forced toward the
overall mean of the data.
Specifically, we partition the data into two groups based on whether the
response values are less than or greater than the overall mean of the training
set. We then adjust the fitted values so that the mean of the fitted values
for observations less than the overall mean equals the mean of these
observations, and similarly for observations greater than the overall mean.
This ensures that the group-wise means of the fitted values match those of the observed responses. We
impose these constraints for each method, which are elaborated in the
following subsections.
### 2.1 Constrained Lasso
We first impose this penalty on the linear model. The regularized linear
regression has the following form:
$\displaystyle\operatornamewithlimits{argmin}_{\beta\in\mathbb{R}^{p}}\sum_{i=1}^{n}\left(\mathbb{y}_{i}-\mathbb{x}_{i}^{\top}\boldsymbol{\beta}\right)^{2}+\lambda\Omega(\boldsymbol{\beta}),$
(1)
where $\Omega$ is a penalty function. One popular choice is $\ell_{1}$
penalty, i.e.,
$\|\boldsymbol{\beta}\|_{1}=\sum_{j=1}^{p}|\boldsymbol{\beta}_{j}|$. We denote
the overall mean of the response as
$\overline{\mathbb{y}}=\frac{1}{n}\sum_{i=1}^{n}\mathbb{y}_{i}$. The response
variable can then be divided into two groups based on the overall training
mean: (i) those less than the overall training mean, i.e.,
$\mathbb{I}_{<}=\\{1\leq i\leq n\,|\,\mathbb{y}_{i}<\overline{\mathbb{y}}\\}$, and (ii)
those greater than the overall training mean, i.e., $\mathbb{I}_{>}=\\{1\leq i\leq
n\,|\,\mathbb{y}_{i}>\overline{\mathbb{y}}\\}$. The corresponding means for each
group are
$\overline{\mathbb{y}}_{<}=\frac{1}{|\mathbb{I}_{<}|}\sum_{i\in\mathbb{I}_{<}}\mathbb{y}_{i}$,
and
$\overline{\mathbb{y}}_{>}=\frac{1}{|\mathbb{I}_{>}|}\sum_{i\in\mathbb{I}_{>}}\mathbb{y}_{i}$.
Accordingly, we can divide the predictors into two sets:
$\mathbb{X}_{<}=[\mathbb{x}_{i}:i\in\mathbb{I}_{<}]\in\mathbb{R}^{|\mathbb{I}_{<}|\times
p}$, and
$\mathbb{X}_{>}=[\mathbb{x}_{i}:i\in\mathbb{I}_{>}]\in\mathbb{R}^{|\mathbb{I}_{>}|\times
p}$.
As illustrated in Figure 1, there is a trend of overestimation to the left of
the overall mean and underestimation to the right. Therefore, we intend to
adjust this systematic bias through the imposition of constraints.
Specifically, predicted values corresponding to the set $\mathbb{I}_{<}$ will
be adjusted to have the same mean as the observed values within their group,
that is $\texttt{Avg}(\widehat{\mathbb{y}}_{<})=\overline{\mathbb{y}}_{<}$,
where Avg($\cdot$) denotes average, i.e.,
$\texttt{Avg}(\widehat{\mathbb{y}}_{<})=\frac{1}{|\mathbb{I}_{<}|}\sum_{i\in\mathbb{I}_{<}}\widehat{\mathbb{y}}_{i}$.
This constraint is applied to correct the overestimation bias associated with
lower response values. Similarly, for $\mathbb{I}_{>}$, we impose the
constraint $\texttt{Avg}(\widehat{\mathbb{y}}_{>})=\overline{\mathbb{y}}_{>}$
to correct the underestimation bias observed to the right of the overall mean.
Under the linear model, where the prediction for the $i$-th subject can be
calculated as $\widehat{y}_{i}=\mathbb{x}_{i}^{\top}\boldsymbol{\beta}$, the aforementioned
constraints can be expressed as linear equality constraints in matrix form.
Thus, we can formulate the lasso problem with an equality constraint:
$\displaystyle\operatornamewithlimits{argmin}_{\beta\in\mathbb{R}^{p}}\quad$
$\displaystyle\|\mathbb{y}-\mathbb{X}\boldsymbol{\beta}\|^{2}+\lambda\|\boldsymbol{\beta}\|_{1}$
subject to
$\displaystyle\begin{pmatrix}\bf{1}_{|\mathbb{I}_{<}|}^{\top}\mathbb{X}_{<}\\\
\bf{1}_{|\mathbb{I}_{>}|}^{\top}\mathbb{X}_{>}\end{pmatrix}\boldsymbol{\beta}=\begin{pmatrix}|\mathbb{I}_{<}|\cdot\overline{\mathbb{y}}_{<}\\\
|\mathbb{I}_{>}|\cdot\overline{\mathbb{y}}_{>}\end{pmatrix}$
The above form is the constrained lasso problem Gaines et al., (2018) with an
equality-only constraint. It can be solved by quadratic programming.
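As a sketch of how this program can be solved in practice, the
equality-constrained lasso can be written directly as a convex program. The
snippet below assumes the cvxpy modeling package as an illustrative solver
choice; Gaines et al., (2018) discuss dedicated algorithms, and the function
name and interface here are ours:

```python
import numpy as np
import cvxpy as cp

def constrained_lasso(X, y, lam):
    """Lasso with the two group-mean equality constraints of Section 2.1."""
    n, p = X.shape
    lo = y < y.mean()   # index set I_<
    hi = ~lo            # index set I_> (ties with the mean are ignored here)
    beta = cp.Variable(p)
    objective = cp.sum_squares(y - X @ beta) + lam * cp.norm1(beta)
    constraints = [
        cp.sum(X[lo] @ beta) == lo.sum() * y[lo].mean(),  # Avg(yhat_<) = ybar_<
        cp.sum(X[hi] @ beta) == hi.sum() * y[hi].mean(),  # Avg(yhat_>) = ybar_>
    ]
    cp.Problem(cp.Minimize(objective), constraints).solve()
    return beta.value
```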
### 2.2 Constrained KRR
A similar approach can be extended to nonlinear models. A general
nonparametric regression problem can be formulated as
$\mathbb{y}_{i}=f(\mathbb{x}_{i})+\epsilon_{i}$ where the goal is to estimate
a function $\widehat{f}$ from a function space
$\mathcal{F}:\mathcal{X}\rightarrow\mathbb{R}$. One popular choice for the
function space $\mathcal{F}$ is the Reproducing Kernel Hilbert Space (RKHS),
denoted by $\mathcal{H}$. Given that $\mathcal{H}$ is infinite-dimensional,
regularization is essential for estimation. The optimization problem known as
kernel ridge regression (KRR) Schölkopf and Smola, (2002) can be formulated as
follows:
$\displaystyle\widehat{f}=\operatornamewithlimits{argmin}_{f\in\mathcal{H}}\sum_{i=1}^{n}\left(\mathbb{y}_{i}-f(\mathbb{x}_{i})\right)^{2}+\lambda\|f\|_{\mathcal{H}}^{2},$
where the first term is goodness-of-fit to data, and the second term is the
regularization term which controls the trade-off between the fit of the model
to the data and the smoothness of the function $f$. By the representer theorem
Wahba, (1990), the function $f$ can be expressed as a linear combination of
kernel functions, i.e.,
$f(\mathbb{x})=\sum_{i=1}^{n}\alpha_{i}\kappa(\mathbb{x},\mathbb{x}_{i})$. The
function $\kappa:\mathcal{X}\times\mathcal{X}\rightarrow\mathbb{R}$ is called
a kernel function if it is symmetric and positive definite. The most popular
choice of the kernel is the Gaussian Radial Basis Function (RBF) kernel, i.e.,
$\kappa(\mathbb{x}_{i},\mathbb{x}_{j})=\exp\left(-\|\mathbb{x}_{i}-\mathbb{x}_{j}\|^{2}/\sigma^{2}\right)$.
By imposing the two previously used equality constraints for adjusting the
mean of the predictive values, the KRR objective function can be expressed as
follows:
$\displaystyle\widehat{f}=\operatornamewithlimits{argmin}_{f\in\mathcal{H}}\sum_{i=1}^{n}\left(\mathbb{y}_{i}-f(\mathbb{x}_{i})\right)^{2}+\lambda\|f\|_{\mathcal{H}}^{2}$
subject to
$\displaystyle\sum_{i\in\mathbb{I}_{<}}\left(\overline{\mathbb{y}}_{<}-f(\mathbb{x}_{i})\right)=0\;,\qquad\sum_{i\in\mathbb{I}_{>}}\left(\overline{\mathbb{y}}_{>}-f(\mathbb{x}_{i})\right)=0\;.$ (2)
One of the primary advantages of the kernel method is its ability to apply
nonlinear techniques within a linear algorithmic framework by utilizing the
kernel trick Hofmann et al., (2008). Consequently, unlike other nonlinear
regression methods, KRR benefits from the incorporation of the mean equality
constraints, which can be treated as linear constraints with respect to the
optimization variables. Specifically, by substituting the representation of
$f(\cdot)$ into (2), it can be expressed in a dual form:
$\displaystyle\widehat{\boldsymbol{\alpha}}=$
$\displaystyle\operatornamewithlimits{argmin}_{\alpha\in\mathbb{R}^{n}}$
$\displaystyle\|\mathbb{y}-\mathbb{K}\boldsymbol{\alpha}\|_{2}^{2}+\lambda\boldsymbol{\alpha}^{\top}\mathbb{K}\boldsymbol{\alpha}$
subject to
$\displaystyle\begin{pmatrix}\bf{1}_{|\mathbb{I}_{<}|}^{\top}\mathbb{K}_{<}\\\
\bf{1}_{|\mathbb{I}_{>}|}^{\top}\mathbb{K}_{>}\end{pmatrix}\boldsymbol{\alpha}=\begin{pmatrix}|\mathbb{I}_{<}|\cdot\overline{\mathbb{y}}_{<}\\\
|\mathbb{I}_{>}|\cdot\overline{\mathbb{y}}_{>}\end{pmatrix},$
where $\mathbb{K}\in\mathbb{R}^{n\times n}$ denotes the kernel gram matrix of
the complete data whose $(i,j)$ element is
$\kappa(\mathbb{x}_{i},\mathbb{x}_{j})$, and
$\mathbb{K}_{<}\in\mathbb{R}^{|\mathbb{I}_{<}|\times n}$,
$\mathbb{K}_{>}\in\mathbb{R}^{|\mathbb{I}_{>}|\times n}$ denote the kernel
matrices corresponding to datasets $\mathbb{I}_{<}$, $\mathbb{I}_{>}$,
respectively. Thus, for example, the $i$-th row of $\mathbb{K}_{<}$ is
$\mathbb{K}_{{<}(i,\cdot)}=\left(\kappa(\mathbb{x}_{{<}_{i}},\mathbb{x}_{1}),\cdots,\kappa(\mathbb{x}_{{<}_{i}},\mathbb{x}_{n})\right)$,
where $\mathbb{x}_{{<}_{i}}$ denotes the $i$-th subject in the set
$\mathbb{I}_{<}$. Because the main objective function is quadratic and the
constraint is linear in $\boldsymbol{\alpha}$, the above optimization problem
can be readily solved using quadratic programming.
By formulating the above problem in the Lagrangian form:
$\displaystyle\mathcal{L}(\boldsymbol{\alpha},{\boldsymbol{\rho}})=\frac{1}{2}\boldsymbol{\alpha}^{\top}\mathbb{K}\left(\mathbb{K}+\frac{\lambda}{2}{\bf{I}}\right)\boldsymbol{\alpha}-\mathbb{y}^{\top}\mathbb{K}\boldsymbol{\alpha}+{\boldsymbol{\rho}}^{\top}\left(\underbrace{\begin{pmatrix}\bf{1}_{|\mathbb{I}_{<}|}^{\top}\mathbb{K}_{<}\\\
\bf{1}_{|\mathbb{I}_{>}|}^{\top}\mathbb{K}_{>}\end{pmatrix}}_{\mathbf{B}}\boldsymbol{\alpha}-\underbrace{\begin{pmatrix}|\mathbb{I}_{<}|\cdot\overline{\mathbb{y}}_{<}\\\
|\mathbb{I}_{>}|\cdot\overline{\mathbb{y}}_{>}\end{pmatrix}}_{\mathbf{d}}\right)$
we can derive the following Algorithm 1.
Algorithm 1 Algorithm for constrained KRR
1: Update $\boldsymbol{\alpha}^{(t)}$ by finding the stationary point of
$\mathcal{L}(\boldsymbol{\alpha},\boldsymbol{\rho}^{(t-1)})$:
$\displaystyle\nabla_{\boldsymbol{\alpha}}\mathcal{L}(\boldsymbol{\alpha},{\boldsymbol{\rho}^{(t-1)}})=\mathbb{K}\left(\mathbb{K}+\frac{\lambda}{2}{\bf{I}}\right)\boldsymbol{\alpha}-\mathbb{K}\mathbb{y}+\mathbf{B}^{\top}{\boldsymbol{\rho}^{(t-1)}}\equiv 0$
2: Update $\boldsymbol{\rho}^{(t)}$:
$\displaystyle{\boldsymbol{\rho}^{(t)}}\leftarrow{\boldsymbol{\rho}^{(t-1)}}+s^{(t)}(\mathbf{B}\boldsymbol{\alpha}^{(t)}-\mathbf{d}),$
where $s^{(t)}$ is a step size, which can be chosen by line search.
3: Iterate the updates of $\boldsymbol{\alpha}$ and $\boldsymbol{\rho}$ until convergence.
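Because the Lagrangian is quadratic in $\boldsymbol{\alpha}$ and the
constraints are linear, the fixed point of Algorithm 1 can alternatively be
obtained in one step by solving the corresponding KKT linear system. The
following numpy sketch (our own naming; the RBF bandwidth and $\lambda$ values
are illustrative) implements this direct solve:

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian RBF kernel matrix between rows of A and rows of B."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / sigma ** 2)

def constrained_krr(X, y, lam=1.0, sigma=1.0):
    n = y.shape[0]
    K = rbf_kernel(X, X, sigma)
    lo = y < y.mean()
    hi = ~lo
    B = np.vstack([K[lo].sum(axis=0), K[hi].sum(axis=0)])  # constraint matrix (2 x n)
    d = np.array([lo.sum() * y[lo].mean(), hi.sum() * y[hi].mean()])
    # Stationarity: K (K + lam/2 I) alpha + B^T rho = K y;  feasibility: B alpha = d.
    top = np.hstack([K @ (K + 0.5 * lam * np.eye(n)), B.T])
    bottom = np.hstack([B, np.zeros((2, 2))])
    kkt = np.vstack([top, bottom])
    rhs = np.concatenate([K @ y, d])
    alpha = np.linalg.solve(kkt, rhs)[:n]
    return alpha  # predict at new points X_new via rbf_kernel(X_new, X, sigma) @ alpha
```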
## 3 Simulation
In this section, we benchmark our proposed methods (KRRConst and LassoConst)
against their unconstrained counterparts and other ML methods including
XGBoost (Chen and Guestrin, 2016) and Random Forest (RF; Breiman, 2001). We
consider both linear and nonlinear relationships between predictor
$\mathbb{X}$ and response $\mathbb{y}$. In the linear case, we first generate
$\mathbb{X}$ from the multivariate normal distribution with dimension $p=10$
and then generate the response
$\mathbb{y}=\mathbb{X}\boldsymbol{\beta}+\epsilon$, where $\epsilon\sim
N(0,\sigma_{e}^{2})$. In the nonlinear case, we generate
$\mathbb{y}=f(\mathbb{X})+\epsilon$, and for the form of $f(\mathbb{X})$, we
use two of Friedman’s test functions, which are commonly used for evaluating
machine learning algorithms (Friedman, 1991):
1. $f(\mathbb{X})=0.1\exp(4\mathbb{X}_{1})+4/(1+\exp(-20(\mathbb{X}_{2}-0.5)))+3\mathbb{X}_{3}+2\mathbb{X}_{4}+\mathbb{X}_{5}+0\cdot\sum_{p=6}^{10}\mathbb{X}_{p}$
2. $f(\mathbb{X})=10\sin(\pi\mathbb{X}_{1}\mathbb{X}_{2})+20(\mathbb{X}_{3}-0.5)^{2}+10\mathbb{X}_{4}+5\mathbb{X}_{5}$.
We fitted the model using only the training set and evaluated its performance
on both the training set and the testing set, which was unseen during the
training stage. For all settings, we use an equal number of samples for the
training and testing sets, with each set containing $n=100$.
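The nonlinear settings can be sketched as follows; we assume the predictors
are drawn uniformly on $[0,1]$, as is customary for Friedman's test functions,
and the noise level is illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def friedman_a(X):
    # First test function; X_6, ..., X_10 enter with zero coefficients (pure noise).
    return (0.1 * np.exp(4 * X[:, 0])
            + 4 / (1 + np.exp(-20 * (X[:, 1] - 0.5)))
            + 3 * X[:, 2] + 2 * X[:, 3] + X[:, 4])

def friedman_b(X):
    # Second test function.
    return (10 * np.sin(np.pi * X[:, 0] * X[:, 1])
            + 20 * (X[:, 2] - 0.5) ** 2 + 10 * X[:, 3] + 5 * X[:, 4])

def make_data(n=100, p=10, f=friedman_a, sigma_e=1.0):
    X = rng.uniform(size=(n, p))                # assumed U(0,1) design
    y = f(X) + sigma_e * rng.standard_normal(n)
    return X, y

X_train, y_train = make_data()
X_test, y_test = make_data()
```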
To assess whether the methods suffer from or resolve the SBMR in predictions,
we evaluate the bias by comparing the regression slope—obtained by regressing
predicted values on observed values—to the reference line set at 1 (i.e.,
$\widehat{\mathbb{y}}=\mathbb{y}$). In addition, to determine if there is a
linear relationship between the residuals and the predicted values, we
calculated the correlation between them. To measure the overall prediction
accuracy, we also compare the Root Mean Square Error (RMSE). To further
investigate the presence of systematic bias around the tail areas, we
calculated the average bias for observations falling below the lower quartile
(i.e., $<Q_{1}$) and above the upper quartile (i.e., $>Q_{3}$). The results
averaged over 100 replications are provided in Table 1.
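For reference, the evaluation measures just described can be computed as in
the following sketch; the sign conventions (predictions above the observed
values count as positive bias, and the slope bias is reported as one minus the
fitted slope) reflect our reading of Table 1:

```python
import numpy as np

def evaluate(y, y_hat):
    """Slope bias, prediction-residual correlation, RMSE, and tail biases."""
    resid = y - y_hat
    slope = np.polyfit(y, y_hat, 1)[0]          # slope of y_hat regressed on y
    q1, q3 = np.quantile(y, [0.25, 0.75])
    return {
        "bias_slope": 1.0 - slope,              # deviation from the ideal slope of 1
        "cor_pred_resid": np.corrcoef(y_hat, resid)[0, 1],
        "rmse": float(np.sqrt(np.mean(resid ** 2))),
        "bias_below_Q1": float(np.mean(y_hat[y < q1] - y[y < q1])),
        "bias_above_Q3": float(np.mean(y_hat[y > q3] - y[y > q3])),
    }
```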
Our proposed methods perform well, exhibiting only minimal bias in slope
across all settings and in both the training and testing datasets, with biases
consistently less than 0.1, except for KRRConst in the testing set under the
linear setting.
However, all conventional methods exhibit a biased slope, high linear
correlation between predicted values and residuals, and systematic bias in the
tail areas. Specifically, models tend to produce a positive bias for responses
below the lower quartile, indicating that the predicted values are generally
larger than the observed values. Conversely, responses above the upper
quartile typically result in negative biases, suggesting an underestimation
of the observed values. In addition, both XGBoost and random forest exhibit
overfitting, as evidenced by their superior performance on the training set,
particularly in terms of RMSE, compared to their poorer performance on the
testing set. The issue of overfitting with these ML methods is more pronounced
in the linear case, as evidenced by a bias in slope greater than 0.65. This
implies that the predicted values tend to be closer to the overall mean,
indicating that the SBMR occurs.
It should be noted that our proposed method does not achieve optimal results
in terms of RMSE. This is because our objective function does not globally
minimize the squared error loss, as it incorporates constraints designed to
adjust the bias. Consequently, this results in an increase in variance, which
is a trade-off we have to accept to eliminate systematic bias.
| Setting | Split | Measure | KRRConst | LassoConst | KRR | Lasso | XGBoost | RF |
|---|---|---|---|---|---|---|---|---|
| Linear | Train | Bias (Slope) | 0.0204 | 0.0243 | 0.2962 | 0.5068 | 0.0187 | 0.3227 |
| | | $\operatornamewithlimits{Cor}(\widehat{\mathbb{y}},\hat{\epsilon})$ | -0.0332 | -0.0357 | -0.5811 | -0.8034 | -0.6516 | -0.9187 |
| | | RMSE | 0.5765 | 0.6137 | 0.3985 | 0.4945 | 0.0228 | 0.2772 |
| | | Bias ($<Q_{1}$) | 0.0233 | 0.0260 | 0.2998 | 0.5076 | 0.0201 | 0.3186 |
| | | Bias ($>Q_{3}$) | -0.0202 | -0.0221 | -0.2959 | -0.5039 | -0.0157 | -0.3206 |
| | Test | Bias (Slope) | 0.1178 | 0.0835 | 0.3907 | 0.5599 | 0.6815 | 0.7547 |
| | | $\operatornamewithlimits{Cor}(\widehat{\mathbb{y}},\hat{\epsilon})$ | -0.1561 | -0.1061 | -0.5825 | -0.8021 | -0.7706 | -0.9281 |
| | | RMSE | 0.6351 | 0.6820 | 0.5403 | 0.5620 | 0.7142 | 0.6576 |
| | | Bias ($<Q_{1}$) | 0.1014 | 0.0748 | 0.3852 | 0.5592 | 0.6965 | 0.7584 |
| | | Bias ($>Q_{3}$) | -0.1349 | -0.0905 | -0.4079 | -0.5774 | -0.6818 | -0.7710 |
| Nonlinear 1 | Train | Bias (Slope) | 0.0292 | 0.0463 | 0.3069 | 0.3854 | 0.0112 | 0.2125 |
| | | $\operatornamewithlimits{Cor}(\widehat{\mathbb{y}},\hat{\epsilon})$ | -0.0592 | -0.0794 | -0.6380 | -0.7388 | -0.4957 | -0.8560 |
| | | RMSE | 1.3677 | 1.6200 | 1.2824 | 1.3899 | 0.0635 | 0.6638 |
| | | Bias ($<Q_{1}$) | 0.0122 | 0.0763 | 0.9644 | 1.2281 | 0.0100 | 0.6812 |
| | | Bias ($>Q_{3}$) | -0.1195 | -0.1781 | -1.0680 | -1.3374 | -0.0595 | -0.7311 |
| | Test | Bias (Slope) | 0.0781 | 0.0906 | 0.3344 | 0.4015 | 0.2943 | 0.4939 |
| | | $\operatornamewithlimits{Cor}(\widehat{\mathbb{y}},\hat{\epsilon})$ | -0.1376 | -0.1413 | -0.6289 | -0.7275 | -0.5324 | -0.8632 |
| | | RMSE | 1.5603 | 1.8139 | 1.4328 | 1.4856 | 1.4950 | 1.5449 |
| | | Bias ($<Q_{1}$) | 0.1544 | 0.1868 | 1.0477 | 1.2675 | 0.7899 | 1.5844 |
| | | Bias ($>Q_{3}$) | -0.3167 | -0.3578 | -1.1998 | -1.4338 | -1.1879 | -1.7578 |
| Nonlinear 2 | Train | Bias (Slope) | 0.0065 | 0.0123 | 0.3442 | 0.4350 | 0.0124 | 0.2497 |
| | | $\operatornamewithlimits{Cor}(\widehat{\mathbb{y}},\hat{\epsilon})$ | -0.0149 | -0.0244 | -0.6590 | -0.7790 | -0.4076 | -0.9011 |
| | | RMSE | 2.9920 | 3.9202 | 2.5692 | 2.7455 | 0.1558 | 1.3704 |
| | | Bias ($<Q_{1}$) | -0.0622 | -0.0512 | 2.0947 | 2.6742 | 0.0153 | 1.5576 |
| | | Bias ($>Q_{3}$) | -0.0987 | -0.1143 | -2.2178 | -2.7873 | -0.1281 | -1.5597 |
| | Test | Bias (Slope) | 0.0349 | 0.0625 | 0.3590 | 0.4495 | 0.2882 | 0.5220 |
| | | $\operatornamewithlimits{Cor}(\widehat{\mathbb{y}},\hat{\epsilon})$ | -0.0642 | -0.0932 | -0.6644 | -0.7789 | -0.5669 | -0.9125 |
| | | RMSE | 3.1257 | 4.1235 | 2.7400 | 2.9182 | 2.5776 | 2.9002 |
| | | Bias ($<Q_{1}$) | -0.0102 | 0.1719 | 2.1599 | 2.7855 | 1.6632 | 3.3347 |
| | | Bias ($>Q_{3}$) | -0.3796 | -0.5188 | -2.4279 | -2.9739 | -2.0054 | -3.3655 |
Table 1: Results of the experiments described in Section 3. The best-ranked
method for each measure is highlighted in bold.
## 4 Real-life example of SBMR (Estimating Brain Age)
Regression to the mean has been commonly reported in neuroimaging studies,
particularly in the context of estimating brain age to quantify an
individual’s brain condition by regressing chronological age on neuroimaging
measurements. In such cases, older individuals tend to be estimated with a
younger brain age, while younger individuals tend to be estimated with an
older brain age.
We applied the proposed methods to two neuroimaging studies: The Lifespan
Human Connectome Project Aging Study (HCP-A,
https://www.humanconnectome.org/study/hcp-lifespan-aging), and UK Biobank
(UKBB, https://www.ukbiobank.ac.uk/) data. In both the HCP-A and UKBB
datasets, we utilized fractional anisotropy (FA), a measure of white matter
integrity, measured across various brain regions as the neuroimaging
predictors. The dimensions of the predictors are $p=64$ for HCP-A and $p=39$
for UKBB, respectively. We use chronological age as the response variable. The
age range for the HCP-A dataset is 36-90 years, and for the UKBB dataset, it
is 45-82 years. The number of samples for which neuroimaging measurements are
available is 662 for HCP-A and 36,856 for UKBB.
We trained the model on randomly sampled subsets from the entire dataset—300
samples for HCP-A and 500 for UKBB. We then evaluated model performance using
the unseen testing set, which consists of the remaining data for HCP-A and
another 500 randomly selected samples from the remaining data for UKBB. We
compare the performance of our method with other ML methods. The results from
100 replications are summarized in Table 2.
| Dataset | Measure | KRRConst | LassoConst | KRR | Lasso | XGBoost | RF |
|---|---|---|---|---|---|---|---|
| HCP-A | Bias (Slope) | 0.0769 | 0.1057 | 0.4995 | 0.4194 | 0.4538 | 0.5046 |
| | $\operatornamewithlimits{Cor}(\widehat{\mathbb{y}},\hat{\epsilon})$ | -0.0874 | -0.1525 | -0.7553 | -0.6559 | -0.6086 | -0.7449 |
| | RMSE | 13.0182 | 10.0511 | 9.4118 | 9.0879 | 10.6284 | 9.6430 |
| | Bias ($<Q_{1}$) | 3.8303 | 3.8248 | 10.4100 | 8.9860 | 9.1444 | 10.3606 |
| | Bias ($>Q_{3}$) | -0.2421 | -0.8128 | -8.5090 | -7.0122 | -8.2300 | -8.6196 |
| UKBB | Bias (Slope) | 0.0659 | 0.1450 | 0.6705 | 0.7098 | 0.6298 | 0.6725 |
| | $\operatornamewithlimits{Cor}(\widehat{\mathbb{y}},\hat{\epsilon})$ | -0.0501 | -0.1147 | -0.8354 | -0.8679 | -0.7292 | -0.8390 |
| | RMSE | 10.8041 | 9.8240 | 6.1308 | 6.2405 | 6.6023 | 6.1208 |
| | Bias ($<Q_{1}$) | 1.5131 | 1.9598 | 7.2653 | 7.6575 | 6.3815 | 7.1399 |
| | Bias ($>Q_{3}$) | 0.1360 | -0.8688 | -6.3362 | -6.7404 | -6.3815 | -6.4899 |
Table 2: Results of the real data application. The best-ranked method for each
measure is highlighted in bold.
Overall, the constrained KRR outperforms other methods, with the only
exceptions being lower tail bias in the HCP-A data, where the proposed
constrained lasso performs the best with only a small margin, and RMSE for
both datasets, where conventional methods perform well. The small bias in
slope (all less than 0.1) of constrained KRR indicates that the predicted
values are closely aligned with the observed values. Additionally, the
correlation between residuals and response values is significantly weaker for
the proposed methods compared to other competing methods. This is also
illustrated in Figure 2, which displays a scatter plot of one sample result
from 100 replications. This lack of a linear trend indicates that the proposed
method removes systematic biases, leading to more accurate and reliable
predictions across the range of data. This removal of systematic bias is
further evidenced by the small biases observed in the tail areas, below the
first quartile ($<Q_{1}$) and above the third quartile ($>Q_{3}$).
As noted, the proposed method’s reduction in bias may lead to increased
variance in model predictions, potentially raising the RMSE. Consequently, our
method is not optimal in terms of RMSE, illustrating a trade-off between bias
and variance. However, the difference in RMSE between the proposed methods and
other ML methods is not critical, considering the substantial improvement in
tail bias.
Figure 2: Scatter plot (one result from 100 replications) showing the
observed response $\mathbb{Y}$ ($x$-axis) and the residuals ($y$-axis) from
various methods on the testing set. The reference dashed line represents
$y=0$.
## 5 Conclusion
The bias-variance trade-off is fundamental in machine learning research. In
order to reduce the mean squared error, many machine learning regression
models tolerate biases of predicted outcomes. When the prediction is
systematically biased, as in the scenario “Machine learning towards the mean”,
the foundation of machine learning regression models is undermined. We
proposed a simple remedy to the systematic bias of machine learning regression
models. We developed new algorithms for two regression shrinkage models
handling high-throughput predictors (i.e., $p>n$) by imposing two-sided
constraints to correct the systematic bias. In both simulation analyses and
real-world data applications, our methods yield unbiased predictions.
Although our correction methods can avoid systematic bias, they are not exempt
from the bias-variance trade-off. The mean squared error of corrected methods
is generally higher than the classic machine learning regression models. In
future research, we will integrate the proposed corrections into commonly used
machine learning regression models such as random forest, XGBoost, and others
and explore adaptive solutions to address the bias-variance trade-off.
## References
* Belitz and Stackelberg, (2021) Belitz, K. and Stackelberg, P. (2021). Evaluation of six methods for correcting bias in estimates from ensemble tree machine learning regression models. Environmental Modelling & Software, 139:105006.
* Breiman, (2001) Breiman, L. (2001). Random forests. Machine Learning, 45(1):5–32.
* Butler et al., (2021) Butler, E. R., Chen, A., Ramadan, R., Le, T. T., Ruparel, K., Moore, T. M., Satterthwaite, T. D., Zhang, F., Shou, H., Gur, R. C., Nichols, T. E., and Shinohara, R. T. (2021). Pitfalls in brain age analyses. Human Brain Mapping, 42(13):4092–4101.
* Chen and Guestrin, (2016) Chen, T. and Guestrin, C. (2016). Xgboost: A scalable tree boosting system. KDD ’16, page 785–794, New York, NY, USA. Association for Computing Machinery.
* Friedman, (1991) Friedman, J. H. (1991). Multivariate Adaptive Regression Splines. The Annals of Statistics, 19(1):1 – 67.
* Gaines et al., (2018) Gaines, B. R., Kim, J., and Zhou, H. (2018). Algorithms for fitting the constrained lasso. Journal of Computational and Graphical Statistics, 27(4):861–871.
* Hastie et al., (2009) Hastie, T., Tibshirani, R., and Friedman, J. (2009). The Elements of Statistical Learning. Springer New York, NY, 2 edition.
* Hofmann et al., (2008) Hofmann, T., Schölkopf, B., and Smola, A. J. (2008). Kernel methods in machine learning. The Annals of Statistics, 36(3):1171 – 1220.
* Schölkopf and Smola, (2002) Schölkopf, B. and Smola, A. J. (2002). Learning with Kernels:Support vector machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge, MA, USA.
* Smith et al., (2019) Smith, S. M., Vidaurre, D., Alfaro-Almagro, F., Nichols, T. E., and Miller, K. L. (2019). Estimation of brain age delta from brain imaging. NeuroImage, 200:528–539.
* Wahba, (1990) Wahba, G. (1990). Spline Models for Observational Data. Society for Industrial and Applied Mathematics.
* Zhang and Lu, (2012) Zhang, G. and Lu, Y. (2012). Bias-corrected random forests in regression. Journal of Applied Statistics, 39(1):151–160.
## Appendix A Proof of Proposition 1
###### Proof.
Let $\widehat{y}_{i}$ denote an unbiased prediction of $y_{i}$. Recall that
$R^{2}=1-\dfrac{\sum_{i=1}^{n}(y_{i}-\widehat{y}_{i})^{2}}{\sum_{i=1}^{n}(y_{i}-\overline{y})^{2}}$,
and
$\sum_{i=1}^{n}(y_{i}-\widehat{y}_{i})^{2}=\sum_{i=1}^{n}(y_{i}-\overline{y})^{2}\times(1-R^{2})$.
Without loss of generality, we have
$\mathbb{E}\left[\sum_{i=1}^{n}(y_{i}-\widehat{y}_{i})^{2}\right]=(n-1)\sigma^{2}(1-R^{2})$.
Suppose $\widetilde{y}_{i}=\widehat{y}_{i}-c\widehat{y}_{i}$ is the linear
trend of the systematic bias, then following holds.
$\displaystyle\mathbb{E}\left[\sum_{i=1}^{n}(y_{i}-\widetilde{y}_{i})^{2}\right]$
$\displaystyle=\mathbb{E}\left[\sum_{i=1}^{n}\left(y_{i}-\mathbb{E}(\widetilde{y}_{i})+\mathbb{E}(\widetilde{y}_{i})-\widetilde{y}_{i}\right)^{2}\right]$
$\displaystyle=\mathbb{E}\left[\sum_{i=1}^{n}(y_{i}-\mathbb{E}(\widetilde{y}_{i}))^{2}+\sum_{i=1}^{n}(\mathbb{E}(\widetilde{y}_{i})-\widetilde{y}_{i})^{2}+\sum_{i=1}^{n}2(y_{i}-\mathbb{E}(\widetilde{y}_{i}))(\mathbb{E}(\widetilde{y}_{i})-\widetilde{y}_{i})\right]$
$\displaystyle=\mathbb{E}\sum_{i=1}^{n}(cy_{i})^{2}+\mathbb{E}\sum_{i=1}^{n}[(1-c)(y_{i}-\widehat{y}_{i})]^{2}+2\mathbb{E}\sum_{i=1}^{n}(y_{i}-\mathbb{E}(\widetilde{y}_{i}))(\mathbb{E}(\widetilde{y}_{i})-\widetilde{y}_{i})$
$\displaystyle=(n-1)c^{2}\sigma^{2}+(n-1)(1-c)^{2}\sigma^{2}(1-R^{2}).$
It suffices to verify the condition under which
$\mathbb{E}\left[\sum_{i=1}^{n}(y_{i}-\widehat{y}_{i})^{2}\right]>\mathbb{E}\left[\sum_{i=1}^{n}(y_{i}-\widetilde{y}_{i})^{2}\right]$:
$\displaystyle(1-R^{2})$ $\displaystyle>c^{2}+(1-c)^{2}(1-R^{2})$
$\displaystyle(1-R^{2})$ $\displaystyle>1-2c+2c^{2}-R^{2}+2cR^{2}-c^{2}R^{2}$
$\displaystyle 2c-2c^{2}-2cR^{2}+c^{2}R^{2}$ $\displaystyle>0$
$\displaystyle 1-c-R^{2}+cR^{2}/2$ $\displaystyle>0\quad\text{(dividing by }2c>0\text{)}$
$\displaystyle\left(1-\frac{R^{2}}{2}\right)c$ $\displaystyle<1-R^{2}$
Thus, we have
$\mathbb{E}\left[\sum_{i=1}^{n}(y_{i}-\widehat{y}_{i})^{2}\right]>\mathbb{E}\left[\sum_{i=1}^{n}(y_{i}-\widetilde{y}_{i})^{2}\right]$,
when $c<\frac{1-R^{2}}{1-R^{2}/2}$. ∎
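For example, when $R^{2}=0.5$, any shrinkage factor $c<0.5/0.75=2/3$ makes the
expected squared error of the biased prediction $\widetilde{y}_{i}$ smaller
than that of the unbiased prediction $\widehat{y}_{i}$.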
Therefore, machine learning regression models whose objective is to minimize
mean squared error naturally incorporate the linear bias
$\widetilde{y}_{i}=\widehat{y}_{i}-c\widehat{y}_{i}$ in their predictions.
11institutetext: Faculty of EEMCS, University of Twente, 7500 AE Enschede, The Netherlands
11email:<EMAIL_ADDRESS>
22institutetext: Hospital Group Twente (ZGT), 7609 PP Almelo, The Netherlands
22email:<EMAIL_ADDRESS>
33institutetext: University of Marburg, 35037 Marburg, Germany
33email:<EMAIL_ADDRESS>
# Feature importance to explain multimodal prediction models. A clinical use
case
Jorn-Jan van de Beld 1122 0000-0001-6220-0508 Shreyasi Pathak 11
0000-0002-6984-8208 Jeroen Geerdink 22 0000-0001-6718-6653 Johannes H. Hegeman
1122 0000-0003-2188-2738 Christin Seifert 33 0000-0002-6776-3868
###### Abstract
Surgery to treat elderly hip fracture patients may cause complications that
can lead to early mortality. An early warning system for complications could
prompt clinicians to monitor high-risk patients more carefully, address
potential complications early, or inform the patient. In this work, we develop
a multimodal deep-learning model for post-operative mortality prediction using
pre-operative and per-operative data from elderly hip fracture patients.
Specifically, the pre-operative data include static patient data and hip and
chest images taken before surgery, and the per-operative data include vital
signs and medications administered during surgery. We extract features from
image modalities using ResNet and from vital signs using LSTM. Explainable model
outcomes are essential for clinical applicability; therefore, we compute
Shapley values to explain the predictions of our multimodal black box model.
We find that i) Shapley values can be used to estimate the relative
contribution of each modality both locally and globally, and ii) a modified
version of the chain rule can be used to propagate Shapley values through a
sequence of models, supporting interpretable local explanations. Our findings
imply that a multimodal combination of black box models can be explained by
propagating Shapley values through the model sequence.
###### Keywords:
clinical decision support, mortality prediction, multimodal machine learning, hip fractures
## 1 Introduction
The absolute number of hip fractures in the older Dutch population ($\geq$ 65
years) almost doubled between 1981 and 2008, from 7,614 to 16,049 [25]. In
1997, it was estimated that by 2050, 4.5 million people worldwide would suffer
a hip fracture [11]. Mortality is high during the early postoperative period,
with reported rates of up to 13.3% in the first 30 days after surgery [13]. An
accurate risk score computed using data known before surgery _(pre-
operative data)_ can lead to better information about the patient and be of
help in the treatment selection [14]. A risk score computed after surgery
additionally using data from surgery _(per-operative data)_ could provide an
early warning for complications, allowing for swift measures to mitigate the
consequences.
In recent years, machine learning (ML) has proven to be promising for clinical
practice to assist doctors in clinical decision-making and reduce workload
[3]. ML and deep neural network (DNN) models have been used to predict risks
of certain complications after surgery [33] and mortality using pre-operative
and per-operative data [19, 9], reporting ROC-AUC scores up to 0.91.
Multimodal models use multiple data sources to predict outcomes, while
unimodal models only consider a single data source. Clinicians use multiple
data sources in their decision-making; therefore, multimodal models should in
theory be better at making these complex predictions [32].
It is crucial that decisions made by an ML model can be understood by
clinicians [31]. Some model types like decision trees are intrinsically
explainable, however, more complex models trade performance at the cost of
explainability [23]. In this paper, we investigate to what extent
explainability is hampered by using multimodal black box models that take
multiple data sources (modalities) as input. We apply model agnostic
explainability methods to understand the contribution of i) each modality and
ii) individual features.
The contributions of our paper are as follows:
1. 1.
We present a neural model for predicting 30-day mortality of hip fracture
patients, combining three different data modalities (images, time-series, and
structured data).
2. 2.
We apply XAI techniques, specifically Shapley values, to validate the
mortality risk score given by our multimodal model.
3. 3.
We present the application of a novel method, specifically Shapley value
propagation, to provide local explanations for a complex multimodal clinical
prediction model.
## 2 Related Work
Table 1: Overview of related work on short-term complication prediction.
Abbreviations: convolutional neural network (CNN), auto-encoder (AE), random
forest (RF), k-nearest neighbor (KNN), logistic regression (LR), support
vector machine (SVM), gradient-boosted trees (GBT), decision tree (DT), naive
bayes (NB), deep neural network (DNN), fully connected network (FC), long
short-term memory (LSTM) network, XGBoost (XGB).
Paper | Study population | Prediction target(s) | Data | ML model(s)*
---|---|---|---|---
[24] | Septic patients | Short-term mortality | Pre-operative static | CNN, AE, RF, KNN, SVM
[10] | Total shoulder arthroplasty | Post-operative complications | Pre- and per-operative static | LR, GBT, RF, KNN, DT, NB
[27] | Spinal metastasis surgery | Short-term outcomes including mortality | Pre-operative static | LR
[19] | Any surgery | Post-operative mortality | Pre- and per-operative static | LR, DNN
[9] | Surgery with tracheal intubation | Post-operative short-term mortality | Pre- and per-operative and temporal | FC+LSTM+CNN
[5] | Adult1 hip fracture patients | Post-operative short-term mortality | Pre-operative static | LR, CNN
[17] | Adult2 hip fracture patients | Post-operative short-term mortality | Pre-operative static | LR
[22] | Elderly3 hip fracture patients | Post-operative short-term mortality | Pre-operative static | LR
[34] | Elderly3 hip fracture patients | Post-operative short-term mortality | Pre-operative static and images | LR, XGB, RF, SVM
1 Patients were at least 18 years old
2 Patients were at least 23 years old
3 Patients were at least 71 years old
* Models separated with a comma were compared and models connected with a plus sign were combined
In this section, we discuss the literature on three aspects: short-term
complication prediction after surgery, multimodal prediction models, and
explainability of ML models.
### 2.1 Short-term Complication Prediction
ML models have been applied for short-term complication prediction after
surgery, e.g. in septic [24] and hip fracture patients [5]. Table 1 summarises
a sample of studies in literature featuring short-term complication
prediction, showing pre-operative data as the most common input modality and
logistic regression (LR) models as the most common ML model used in existing
work.
Logistic regression models have been developed to predict early mortality
($<$30 days) after surgery in hip fracture patients using pre-operative static
patient data, where reported AUC scores range from 0.76 to 0.82 [22, 5, 17].
In addition to pre-operative data, Yenidogan et al. [34] also included pre-
operative hip and chest images in their multimodal model. They extracted
features from the images using convolutional neural networks (CNN) and trained
a random forest (RF) to predict early mortality, where they reported an AUC of
0.79. However, the effect of per-operative data on risk score prediction after
hip fracture surgery has not been investigated before.
Cao et al. developed a model for the prediction of 30-day mortality of adult
patients after hip fracture surgery using static data from 134,915 patients
[5]. They compared the performance of a CNN with LR and reported an AUC of
0.76 for both models. Yenidogan et al. addressed the 30-day mortality
prediction after hip fracture surgery specifically for elderly patients [34].
The authors exploited structured and image data available before surgery and
showed significant improvement compared to their baseline the Almelo hip
fracture score (AHFS) developed by [22]. They trained two CNNs to extract
features from hip and chest X-ray images, which were fed to a RF classifier
together with the structured data features. Their multimodal model (AUC=0.79)
outperformed the AHFS baseline (AUC=0.72). The authors thus concluded that
the additional information from multiple modalities is beneficial for model
performance. In this paper, we add more modalities to further improve the
prediction of postoperative mortality.
### 2.2 Multimodal Prediction Models
Clinical models that combine multiple modalities outperform models restricted
to a single modality [8]. Multimodal models are commonly used for video
classification tasks, where audio and image data are processed concurrently
[21]. The approaches differ in the way they share information between
modalities within the neural network. Two common approaches are late and early
fusion. Late fusion combines the predictions of the unimodal models with no
further cross-modal information flow, while early fusion combines modalities
very early in the pipeline [21].
While early fusion allows for full information flow between modalities, it has
a high computational cost, due to the high number of neuron connections. Late
fusion has a low computational cost but does not enable the model to learn
cross-modal interactions.
### 2.3 Explainability
Models are required to be explainable to gain the trust of clinicians, where
knowing what features are most important to the model for its prediction is
crucial [31]. Furthermore, clinicians need to be able to justify their
decision-making towards patients and colleagues. ML models can either be
explained locally or globally [7]. Local explanations in a clinical setting
focus on justifying the prediction for a single patient, while global
explanations provide insight into general prediction tendencies for a larger
population.
Multiple methods are available to compute the feature importance in deep
learning models, most notably: LIME [26], deepLIFT [30], layer-wise relevance
propagation [2] and Shapley values [20]. In this paper, we use SHAP (SHapley
Additive exPlanations) to estimate the importance of input features and
modalities. Theoretically, all possible input feature subsets are considered
during the computation of Shapley values, which makes them less sensitive to
multicollinearity and thereby suitable for medical data.
Feature attribution methods, like SHAP, were not designed to explain complex
multimodal clinical models [15]. However, feature attributions can be
propagated through a series of models according to a modified version of the
chain rule, which has been shown specifically for SHAP [6]. A multimodal model
can be viewed as a sequence of unimodal models, which might make it suitable
to be explained using the proposed chain rule. In this study, we apply this
method to provide local explanations for our multimodal clinical prediction
model.
## 3 Materials and Methods
In this section, we describe our dataset, our unimodal and multimodal models,
our training procedure, and our evaluation.
### 3.1 Dataset
Our private data set contains anonymized data from 1669 patients who underwent
hip fracture surgery at a public hospital between January 2013 and July 2021.
Patients were at least 71 years old at the time of surgery. The data set
contains 1189 females (71.2%) and 480 males (28.8%). Our goal is to predict
30-day mortality after hip fracture surgery, which occurs at a rate of 7.8%
(131/1669) in our dataset. We collected data from five modalities, which we
divided into two groups: pre-operative and per-operative data.
Pre-operative data encompasses information known before surgery, specifically:
a static patient data (Static) modality, an axial hip X-ray image (HipImg)
modality, and an anterior-posterior chest X-ray image (ChestImg) modality. The
static patient data has 76 features, which we further subdivided into seven
categories: demographics, daily living conditions, nutrition, surgery
information, lab results, medication, and comorbidities, for details see
Appendix 0.B.
Per-operative data was collected during surgery and comprises a vital signs
(Vitals) modality and a medication data (Med) modality. The vital signs are
heart rate, pulse, oxygen saturation, and blood pressure. We split blood
pressure into diastolic, systolic, and mean blood pressure resulting in a
total of six time series. The medication data includes 17 medication groups,
which are commonly administered during hip fracture surgery, see Appendix 0.B.
If a patient has missing medication data, then all values were set to zero,
thus assuming the patient did not receive any medication during surgery at
all.
Figure 1: Overview of how the unimodal models are fused to form the multimodal
models. Dimensions at the input and feature extraction layers are shown in
brackets, where the sequence length (seq_len) for the vitals varies between
patients. No feature transformation was performed for the per-operative
medication data. Data types: tabular/structured (S), image (I), time-series
(T).
#### 3.1.1 Data preprocessing:
The pre-operative static patient data contain missing values; specifically, 10
features had more than 10% missing values, including the Charlson comorbidity
index (66.6% missing) and living situation (30% missing). We imputed the data
iteratively with a K-Nearest Neighbor Regression model111We used the scikit-
learn implementation https://scikit-
learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsRegressor.html
($k=10$). We processed the vitals data, such that elements were spaced 15
seconds apart, where each element represents a single time step containing six
vital signs. We filled up gaps of up to 5 minutes (20 elements) in each vital
sign using linear interpolation. Furthermore, given the close similarity of
heart rate and pulse, we interchangeably replaced missing values if only one
value within the pair is missing. So, if the heart rate was missing at a
certain time step, then we took the pulse at that time step if available, and
vice versa for missing pulse values. We z-normalized the vital signs for each
patient separately because this made fewer assumptions about the data
population [16]. If after interpolation an element (time step) still contains
any missing data, it is skipped during training and inference.
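As an illustration, the preprocessing above could be implemented along the
following lines with pandas and scikit-learn; the column names, the timestamp
index on the vitals frame, and the data loading are assumptions made for this
sketch, not part of our pipeline.
```python
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.neighbors import KNeighborsRegressor

def impute_static(static_df: pd.DataFrame) -> pd.DataFrame:
    """Iteratively impute the static features with a KNN regressor (k=10)."""
    imputer = IterativeImputer(estimator=KNeighborsRegressor(n_neighbors=10))
    return pd.DataFrame(imputer.fit_transform(static_df),
                        columns=static_df.columns, index=static_df.index)

def preprocess_vitals(vitals: pd.DataFrame) -> pd.DataFrame:
    """Resample to 15 s, fill short gaps, cross-fill HR/pulse, z-normalize.

    Assumes `vitals` is indexed by timestamp with one column per vital sign.
    """
    vitals = vitals.resample("15s").mean()
    # Linear interpolation over gaps of at most 5 minutes (20 elements).
    vitals = vitals.interpolate(method="linear", limit=20, limit_area="inside")
    # Heart rate and pulse are close substitutes: fill one from the other.
    vitals["heart_rate"] = vitals["heart_rate"].fillna(vitals["pulse"])
    vitals["pulse"] = vitals["pulse"].fillna(vitals["heart_rate"])
    # Per-patient z-normalization.
    return (vitals - vitals.mean()) / vitals.std()
```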
### 3.2 Machine Learning Models
We built the multimodal model (cf. Figure 1), by developing a model for each
modality separately and fusing the representations of the single modalities.
We trained the unimodal models to predict mortality. Then, we transferred the
learned weights from unimodal models to the multimodal models to further train
the latter. Table 2 provides an overview of the models (abbreviation and
description) that we consider in this paper. The remainder of this section
details the unimodal models and the fusion approach to create the multimodal
model.
Table 2: Overview of models and their abbreviations. The per-operative vital signs have variable sequence length (_len_). Data types: tabular/structured (S), image (I), time-series (T).
Model | Pre- or per-operative | Modality | Architecture | Input size | Output size
---|---|---|---|---|---
Pre-Static${}^{\text{S}}$ | pre | static patient data (S) | 1 FC | 76$\times$1 | 16$\times$1
Pre-Hip${}^{\text{I}}$ | pre | hip (I) | ResNet50 | 224$\times$224 | 16$\times$1
Pre-Chest${}^{\text{I}}$ | pre | chest (I) | ResNet50 | 224$\times$224 | 16$\times$1
Pre${}^{\text{S+I}}$ | pre | static (S), hip (I), chest (I) | 2 FC | 48$\times$1 | 1
Per-Vitals${}^{\text{T}}$ | per | vital (T) | Bi-LSTM | 6$\times$_len_ | 16$\times$1
Per${}^{\text{S+T}}$ | per | vital (T), medication (S) | 2 FC | 33$\times$1 | 1
All${}^{\text{S+I+T}}$ | both | static (S), hip (I), chest (I), vital (T), medication (S) | 2 FC | 81$\times$1 | 1
#### 3.2.1 Pre-operative Models
We developed three pre-operative unimodal models: the Pre-
Static${}^{\text{S}}$ model for pre-operative static data and the Pre-
Hip${}^{\text{I}}$ and Pre-Chest${}^{\text{I}}$ models for hip and chest
images, respectively. For Pre-Static${}^{\text{S}}$, the main task was
dimensionality reduction: the static feature dimension is reduced to the same
number of features as the other modalities before being passed as input to
the multimodal prediction model. Therefore, we used a single fully connected
hidden layer with 16 neurons with the leaky-ReLu activation function222 In the
early stages of development, we observed that our models suffered from the
“dying ReLu” problem, therefore we chose to employ leaky-ReLu instead of the
regular ReLu activation function for all our fully connected layers [1].. For
Pre-Hip${}^{\text{I}}$ and Pre-Chest${}^{\text{I}}$, we used CNNs, which have
emerged as the de-facto standard for medical image classification [4]. A wide
range of CNN architectures are available, but given our small data set size,
we chose ResNet50 [12], a standard architecture with relatively few parameters
for which a pre-trained model is available. We added two fully connected
layers with 256 and 16 neurons between ResNet50’s global average pooling layer
and the classification layer. We used the same architecture for Pre-
Hip${}^{\text{I}}$ and Pre-Chest${}^{\text{I}}$.
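For concreteness, a minimal Keras sketch of these branches follows; the layer
sizes come from Table 2, while the function names, initialisation, and omitted
regularisation settings are our assumptions.
```python
import tensorflow as tf
from tensorflow.keras import layers

def build_static_branch():
    # 76 static features -> 16 extracted features (single hidden FC layer).
    inp = layers.Input(shape=(76,), name="static")
    feat = layers.LeakyReLU()(layers.Dense(16)(inp))
    return tf.keras.Model(inp, feat)

def build_image_branch(name):
    # ResNet50 backbone with global average pooling, then FC 256 and FC 16.
    backbone = tf.keras.applications.ResNet50(
        include_top=False, pooling="avg", weights="imagenet")
    inp = layers.Input(shape=(224, 224, 3), name=name)
    x = backbone(inp)
    x = layers.LeakyReLU()(layers.Dense(256)(x))
    feat = layers.LeakyReLU()(layers.Dense(16)(x))
    return tf.keras.Model(inp, feat)
```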
#### 3.2.2 Per-operative Models
The Per-Vitals${}^{\text{T}}$ model takes 6 temporal features with variable
sequence length as input, where the sequence lengths vary between 120 and 1337
elements, or 30 minutes to 5 hours, respectively. We used bidirectional long
short-term memory (Bi-LSTM) units [28] to extract information from the vital
signs. Our Per-Vitals${}^{\text{T}}$ model contains a single Bi-LSTM layer
with 2x128 units, followed by two hidden layers with 128 and 16 neurons.
Timesteps still containing missing data after imputation were skipped during
training and testing. Further, we encode the per-operative medication data
with binary encoding.333In preliminary experiments we investigated ordinal and
temporal encoding for the per-operative medication data but did not find a
difference in performance. This data contained only 17 features, so contrary
to the Pre-Static${}^{\text{S}}$ model we did not need to transform the input
features to a lower-dimensional feature space.
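A corresponding sketch of the vitals branch; masking is one way to realise the
timestep skipping described above, and the mask value is an assumption.
```python
import tensorflow as tf
from tensorflow.keras import layers

def build_vitals_branch(mask_value=-999.0):
    # Variable-length sequences of the six vital signs.
    inp = layers.Input(shape=(None, 6), name="vitals")
    # Timesteps equal to mask_value are skipped by the LSTM.
    x = layers.Masking(mask_value=mask_value)(inp)
    x = layers.Bidirectional(layers.LSTM(128))(x)     # 2 x 128 units
    x = layers.LeakyReLU()(layers.Dense(128)(x))
    feat = layers.LeakyReLU()(layers.Dense(16)(x))
    return tf.keras.Model(inp, feat)
```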
#### 3.2.3 Multimodal Models
We applied early fusion to generate three multimodal models:
Pre${}^{\text{S+I}}$, Per${}^{\text{S+T}}$, and All${}^{\text{S+I+T}}$ (see
Table 2). For each of these multimodal models, only the relevant modalities
shown in Figure 1 are included. We concatenated the pre-classification layer
outputs from all unimodal models, where each modality supplies 16 features,
except for the per-operative medication data, which yielded 17 features. The
concatenation layer is followed by a fully connected layer with 64 neurons and
a classification layer with a single neuron with sigmoid activation for
mortality prediction.
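Continuing the sketches above, the early-fusion head concatenates the 16
features from each branch with the 17 binary medication indicators (81
features in total):
```python
import tensorflow as tf
from tensorflow.keras import layers

def build_multimodal(static_b, hip_b, chest_b, vitals_b):
    med_inp = layers.Input(shape=(17,), name="medication")
    h = layers.Concatenate()([static_b.output, hip_b.output,
                              chest_b.output, vitals_b.output, med_inp])
    h = layers.LeakyReLU()(layers.Dense(64)(h))
    out = layers.Dense(1, activation="sigmoid")(h)    # 30-day mortality
    return tf.keras.Model([static_b.input, hip_b.input, chest_b.input,
                           vitals_b.input, med_inp], out)

static_b, hip_b = build_static_branch(), build_image_branch("hip")
chest_b, vitals_b = build_image_branch("chest"), build_vitals_branch()
model = build_multimodal(static_b, hip_b, chest_b, vitals_b)
```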
### 3.3 Training Procedure and Evaluation
We randomly split our data into a training (50%), validation (25%), and test
(25%) set. We used repeated (N=5) cross-validation (k=5) to measure
variability between training runs; furthermore, we used stratification to
ensure a similar proportion of positive cases in each set. Models were optimized
for maximum validation AUC, which was computed after each training epoch. We
set the maximum number of epochs to 100 and used early stopping with a
patience of 10. We halved the learning rate if there was no improvement for 5
epochs. We used the Adam optimizer and a batch size of 32. We tuned the
learning rate of each model by experimenting over a predefined grid from
$10^{-2}$ down to $10^{-5}$ in multiplicative steps of $10^{-1}$. The image models
were trained with a relatively small learning rate of $10^{-5}$ because higher
values led to low precision ($<$0.01). The specific learning rates for each
model can be found in Appendix 0.A. To prevent overfitting, we added a dropout
of 0.3 between fully connected layers and set the weight for L2 regularization
to $10^{-3}$.
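This schedule maps onto standard Keras callbacks, sketched below; `train_data`,
`val_data`, and `class_weights` are placeholder names, and `val_auc` assumes an
AUC metric registered at compile time.
```python
import tensorflow as tf

callbacks = [
    # Early stopping with a patience of 10 epochs on validation AUC.
    tf.keras.callbacks.EarlyStopping(monitor="val_auc", mode="max",
                                     patience=10, restore_best_weights=True),
    # Halve the learning rate after 5 epochs without improvement.
    tf.keras.callbacks.ReduceLROnPlateau(monitor="val_auc", mode="max",
                                         factor=0.5, patience=5),
]
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),  # per-model LR, Appendix 0.A
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
model.fit(train_data, validation_data=val_data, epochs=100, batch_size=32,
          class_weight=class_weights, callbacks=callbacks)
```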
During training, our models were tasked with minimizing the weighted binary
cross-entropy loss. The weight $w_{c_{i}}$ for class $c_{i}$ was computed
according to Equation 1 where $N_{total}$ is the total number of cases,
$N_{c}=2$ is the number of classes and $N_{c_{i}}$ is the number of cases with
class $c_{i}$ [18].
$w_{c_{i}}=\frac{N_{total}}{N_{c}\cdot N_{c_{i}}}$ (1)
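For example, with the 131 deaths among our 1669 cases, this gives
$w_{death}=1669/(2\cdot 131)\approx 6.4$ and
$w_{survived}=1669/(2\cdot 1538)\approx 0.54$, so misclassified deaths are
weighted roughly twelve times more heavily than misclassified survivors.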
During the training of the image models, we augmented training images with
random shift ($\pm$0.2), shear ($\pm$0.2), zoom ($\pm$0.2), and rotation
($\pm 20^{\circ}$) to mimic a more diverse training set. We used bicubic
interpolation to scale images to 224$\times$224, as required by ResNet50.
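Expressed with Keras' `ImageDataGenerator`, these settings look roughly as
follows (a sketch; note that Keras interprets `shear_range` in degrees, so the
correspondence to our $\pm$0.2 shear is an assumption):
```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    width_shift_range=0.2, height_shift_range=0.2,  # random shift (±0.2)
    shear_range=0.2,                                # shear (±0.2)
    zoom_range=0.2,                                 # zoom (±0.2)
    rotation_range=20)                              # rotation (±20 degrees)
```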
The multimodal models were initialized with the weights of the unimodal
models, which we kept frozen while we trained the weights of the post-
concatenation layers. Afterwards, we finetuned the model by unfreezing all
layers, except for the layers in the image models. The learning rates were
$5\cdot 10^{-2}$ and $5\cdot 10^{-3}$, for the first and second steps,
respectively.
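A sketch of this two-stage procedure, reusing the names from the earlier
sketches:
```python
# Stage 1: freeze the transferred unimodal weights, train the fusion head.
for branch in (static_b, hip_b, chest_b, vitals_b):
    branch.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(5e-2),
              loss="binary_crossentropy")
model.fit(train_data, validation_data=val_data, epochs=100, callbacks=callbacks)

# Stage 2: unfreeze all layers except the image branches and finetune.
static_b.trainable = True
vitals_b.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(5e-3),
              loss="binary_crossentropy")
model.fit(train_data, validation_data=val_data, epochs=100, callbacks=callbacks)
```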
We computed recall, precision, F1-score, and AUC for the mortality prediction
task. Each model architecture was trained 5 times with different randomly
initialized weights, to measure variability between training runs. We report
the mean and standard deviation of these runs.
The data are not publicly available due to privacy and ethical considerations,
but all relevant code is available on
GitHub444https://github.com/jornjan/mmshap-XAI2024.
### 3.4 Explanation
We applied post-hoc model explanations to one version of the final model
(All${}^{\text{S+I+T}}$) to better understand its predictions, thereby
increasing clinical relevancy. To estimate the importance of features we used
Shapley values555SHAP library https://github.com/slundberg/shap, which
represent the contribution of each extracted feature towards a predicted
outcome [20]. The contribution can be negative, indicating that the feature
value lowered the predicted value, or positive, indicating that the feature
value increased the predicted value.
Given the multimodal nature of our model, we split the explanation into two
steps. In the first step, we compute the contribution of each modality towards
the predicted outcome. In the second step, we compute the contribution of
individual features from a given modality, specifically, we report the
importance of individual pre-operative static features in the multimodal
prediction, because we found that these contributed most during our experiments. In
the first step, at the modality level, we provide local (single test case) and
global (all test cases) explanations, while we limit ourselves to local
explanations in the second step.
We define $H$ as the concatenation of the extracted hidden features from each
modality ($H_{static}$, $H_{hip}$, $H_{chest}$ and $H_{vitals}$) and the per-
operative medicine input values, $X_{med}$. We denote $f_{mm}$ as the function
that maps the multimodal features in the concatenation layer to a 30-day
mortality prediction ($\hat{y}$). The estimated Shapley values
($\hat{\phi}_{h_{i}}$) for each feature $h_{i}\in H$ have the property that
they sum up to the predicted outcome [20]. $\hat{\phi}_{0}$ equals the average
prediction based on the reference dataset, for which we took the training set.
$H=H_{static}\oplus H_{hip}\oplus H_{chest}\oplus H_{vitals}\oplus X_{med}.$
$\hat{y}=f_{mm}(H)=\hat{\phi}_{0}+\sum_{i=1}^{|H|}\hat{\phi}_{h_{i}}$ (2)
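A minimal sketch of this first step with the SHAP library; `f_mm` here is a
callable wrapping the post-concatenation layers, and `H_train`/`H_test` are
arrays of fused features (names are ours).
```python
import shap

# Background/reference data: the fused features of the training set.
explainer = shap.KernelExplainer(f_mm, H_train)
phi = explainer.shap_values(H_test)   # one Shapley value per fused feature
phi0 = explainer.expected_value       # average prediction on the reference set
```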
Global explanations follow from taking the average absolute contribution of
each feature across all cases. We report a global explanation for each modality
$m$ by calculating its absolute ($AC_{m}$) and relative ($RC_{m}$)
contributions as follows:
$\displaystyle AC_{m}=\sum_{j=1}^{|H_{m}|}\left|\hat{\phi}_{h_{j}}\right|,\mspace{20.0mu}h_{j}\in H_{m}$ (3)
$\displaystyle RC_{m}=\frac{AC_{m}}{\sum_{i=1}^{|H|}\left|\hat{\phi}_{h_{i}}\right|},\mspace{25.0mu}h_{i}\in H$ (4)
In the second step, we compute the contribution of individual static
features to the multimodal prediction. Shapley values can be
propagated through a sequence of models using a modified version of the chain
rule [6]. Equation 5 shows how we computed the contribution of individual
static features for a single case. We define $f_{static}$ as the function that
extracts features from the static input features ($x_{i}$) and $AC_{static}$
as the absolute contribution of the extracted static features.
$\hat{\phi}_{x_{i}}=\hat{\phi}(f_{static},x_{i})\mspace{5.0mu}\left(\hat{\phi}(f_{mm},H_{static})\mspace{5.0mu}\oslash\mspace{5.0mu}AC_{static}\right),\mspace{15.0mu}x_{i}\in X_{static}$ (5)
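Equations 3-5 can be expressed compactly in NumPy, as sketched below; the
variable names and the column layout of `phi` are assumptions for
illustration.
```python
import numpy as np

def modality_contributions(phi, slices):
    """Eqs. 3-4: phi is the (81,) Shapley vector for one case; `slices`
    maps each modality name to its column slice in phi."""
    ac = {m: np.abs(phi[s]).sum() for m, s in slices.items()}     # Eq. 3
    total = sum(ac.values())
    return ac, {m: v / total for m, v in ac.items()}              # Eq. 4

def propagate_static(phi_static_inner, phi, static_slice):
    """Eq. 5: phi_static_inner[i, j] is the Shapley value of static input
    feature i for extracted static feature j (e.g., a 76 x 16 matrix)."""
    outer = phi[static_slice]                  # modality-level Shapley values
    return phi_static_inner @ (outer / np.abs(outer).sum())
```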
## 4 Results
### 4.1 Model Performance
Table 3 presents the average mortality prediction performance on the test set
for all unimodal and multimodal models. The Pre${}^{\text{S+I}}$, Pre-
Static${}^{\text{S}}$, and All${}^{\text{S+I+T}}$ models all scored the
highest average test set AUC of 0.78, which suggests adding modalities to the
pre-operative static patient data does not improve performance. Specifically,
per-operative features seem to have low predictive power given the poor test
set performances of the Per-Vitals${}^{\text{T}}$ and Per${}^{\text{S+T}}$
models. All models tend to have a higher recall than precision, meaning the
difficulty lies in correctly identifying non-risk patients. _To summarize,
pre-operative data has more predictive power than per-operative data.
Specifically, the pre-operative static modality is the most important among
all pre-operative modalities. This suggests that risk of mortality can be
estimated pre-operatively by looking at static patient data and treatment
options can be discussed prospectively._
Table 3: Average performance of all models along with standard deviation over 25 runs. The best performance in each column is indicated in bold.
| AUC | Recall | Precision | F1-score
---|---|---|---|---
Unimodal | | | |
| Pre-Static${}^{\text{S}}$ | $\textbf{0.78}\pm 0.04$ | $\textbf{0.82}\pm 0.09$ | $0.17\pm 0.03$ | $0.28\pm 0.04$
| Pre-Hip${}^{\text{I}}$ | $0.50\pm 0.07$ | $0.55\pm 0.32$ | $0.09\pm 0.04$ | $0.14\pm 0.05$
| Pre-Chest${}^{\text{I}}$ | $0.50\pm 0.07$ | $0.55\pm 0.29$ | $0.10\pm 0.03$ | $0.15\pm 0.03$
| Per-Vitals${}^{\text{T}}$ | $0.49\pm 0.05$ | $0.49\pm 0.30$ | $0.09\pm 0.01$ | $0.15\pm 0.02$
Multimodal | | | |
| Pre${}^{\text{S+I}}$ | $\textbf{0.78}\pm 0.04$ | $0.80\pm 0.11$ | $\textbf{0.18}\pm 0.04$ | $\textbf{0.29}\pm 0.05$
| Per${}^{\text{S+T}}$ | $0.57\pm 0.06$ | $0.62\pm 0.23$ | $0.11\pm 0.02$ | $0.17\pm 0.02$
| All${}^{\text{S+I+T}}$ | $\textbf{0.78}\pm 0.04$ | $0.79\pm 0.10$ | $0.17\pm 0.02$ | $0.28\pm 0.03$
### 4.2 Explainability
Global explanation: Table 4 shows the average relative contribution of each
modality as well as the pre-operative and per-operative groups. The features
extracted from the static patient data contribute the most with a relative
contribution of 70.3%, while other modalities contribute much less. The second
highest contributor is the per-operative vital signs with 10.4% but with a
high standard deviation of 10.5%. Also, the pre-operative image modalities
have a standard deviation close to their mean value. On average the per-
operative medication data contributes the least towards the prediction.
Local explanation: Figure 2 shows a local explanation for two cases. Each case
is explained with two plots, where the first shows how much each modality
contributed towards the prediction, and the second shows feature-specific
contribution for the _static_ modality. Note that the contribution values of
the _Static_ features sum up to the _Static_ contribution on the modality
level. The local multimodal explanations start at $\mathbb{E}[f(x)]=0.362$,
which equals the average prediction for the training set. The plots show how
each modality influenced the patient prediction compared to the training set
average.
The positive case prediction is strongly influenced by static features with a
contribution of 46% towards the prediction of 80.7%, while the other
modalities contribute 3% or less. Help with bed transfer is the most
important static feature in this particular case with a contribution of 8%.
For the negative case prediction (plots at the bottom), the static features
are most important in reducing the mortality prediction by 31%. The fact that
this case concerned a femoral neck fracture reduced the prediction by 9%.
Table 4: Mean relative contribution for mortality prediction of input modalities taken over 25 runs; standard deviation in brackets. RC: relative contribution.
Modality | RC % (st. dev.)
---|---
Pre-operative | 84.6% (11.1)
| Static patient data | | 70.3% (10.6)
| Hip image | | 7.0% (6.0)
| Chest image | | 7.3% (8.3)
Per-operative | 15.4% (11.1)
| Vitals | | 10.4% (10.5)
| Medication | | 5.0% (2.0)
Figure 2: Shapley plots for single cases in the test set. First, the
contribution of each modality is shown, followed by the contribution of
individual static features.
## 5 Discussion
_Comparison to state-of-the-art:_ The addition of per-operative data did not
yield a significant performance improvement compared to our pre-operative
multimodal model. Our unimodal image models (Pre-Hip${}^{\text{I}}$ and Pre-
Chest${}^{\text{I}}$) score worse compared to the image models developed by
Yenidogan et al. [34], who reported an AUC of 0.67 and 0.70 on the hip and
chest images, respectively. The lower performance could be due to our smaller
data set and smaller CNN architecture. On the other hand, our Pre-
Static${}^{\text{S}}$ model performs on par with state-of-the-art models with
a smaller dataset [34, 22, 5].
_Usage of explanations:_ Using Shapley values, we explained the prediction of
the All${}^{\text{S+I+T}}$ model at the modality level. Globally, the pre-
operative static features are most important, with a relative importance of
70.3%; still, the relative importance of the per-operative features in our
All${}^{\text{S+I+T}}$ model was 15.4%. Further, we provide an example of a
local explanation in Figure 2. We explained our multimodal model in two steps.
First, the contribution of the extracted features from each modality was
computed with which we calculated the contribution of individual pre-operative
static features in the second step. We only show the explanation for the pre-
operative static data as we found this to be the most important modality.
Explanations for other unimodal models can be added as well, e.g. using Grad-
CAM [29] for images. Static features describing kidney function (urea level,
diuretic drug, and sodium level) are important in both single-patient local
explanations for predicting 30-day mortality. Furthermore, the level of
self-reliance matters, given the importance of help with bed-chair transfer,
help with dressing, and living situation.
_Limitations and future work:_ Our results show that predicting mortality for
hip fracture patients is difficult, and starting with _predicting_ whether any
_complication_ occurs within 30 days could improve the assessment of the data
set's predictive capabilities. Conveniently, this also resolves the class
imbalance issue, because in our data set 49% of the patients experience at
least one complication within 30 days after surgery. Furthermore, as an intermediate
step complications could be grouped by severity or cases could be scored on a
scale from no complication to mortality. It would be interesting to look into
this in future work. Clinically, this could help determine, whether a patient
is at risk after surgery and requires more attention.
Our _fusion method_ could be described as mid fusion, because we used the pre-
classification layer of each unimodal model; however, we did perform
dimensionality reduction before concatenation. This method ensured each modality contributed
the same number of features to the classification layer, however, this might
not be optimal for post-operative complication prediction. Future work could
include a deeper investigation of fusion methods, like late fusion and
bottleneck fusion. Bottleneck fusion restricts cross-modal information flow by
using a very limited amount of neurons for information to pass through. The
idea is that the model is forced to condense the most important cross-modal
features leading to better performance at negligible computational cost [21].
During our experiments, we found that the models that include static patient
data or vital signs overfit the data, suggesting that the model might be
memorizing the training data, with poor _generalization capability_ to the
test data. This might indicate that i) there is substantial variation across
instances with few common patterns, or ii) there is not enough data for
generalization. The addition of dropout layers and L2-regularization
did not solve this problem, therefore a different approach is required. It has
been shown that reducing the number of pre-operative static features based on
their importance can prevent overfitting [5]. We could use the Shapley values
to iteratively select the most important feature, up until we reach a certain
subset size. Additionally, if we prevent strong overfitting on the pre-
operative static data, this could incentivize the multimodal models to focus
more on the other modalities for information.
Our multimodal model is not robust against _missing data_: only the per-
operative medication data and part of the pre-operative static data are
allowed to be missing. In clinical practice, this would mean patients are
excluded if pre-operative images or per-operative vitals are missing. We
impute the pre-operative static patient data, so having some missing values
there does not lead to exclusion; however, the imputation quality depends
on the number of non-missing features. Therefore, for clinical applicability
future models should be robust against _missing modalities_ , to avoid patient
exclusion.
## 6 Conclusion
We conclude that the addition of per-operative data to pre-operative data does
not significantly improve 30-day mortality prediction. Further investigation
confirmed that pre-operative features are most important for mortality
prediction, while per-operative features contribute little except for a few
per-operative medications. We show that multimodal black box models can be
made explainable by applying the chain rule to the feature attributions of
each model in the sequence. Future work could restrict the model to pre-
operative data and further explain model predictions by providing
interpretable explanations for the contribution of the image modalities.
#### 6.0.1 Disclosure of Interests.
Christin Seifert is a member of the XAI 2024 World Conference committee. All
other authors have no competing interests to declare that are relevant to the
content of this article.
## References
* [1] Agarap, A.F.: Deep Learning using Rectified Linear Units (ReLU) (2019). https://doi.org/10.48550/arXiv.1803.08375
* [2] Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.R., Samek, W.: On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation. PLOS ONE 10(7), e0130140 (2015). https://doi.org/10.1371/journal.pone.0130140
* [3] Briganti, G., Le Moine, O.: Artificial Intelligence in Medicine: Today and Tomorrow. Frontiers in Medicine 7 (2020)
* [4] Cai, L., Gao, J., Zhao, D.: A review of the application of deep learning in medical image classification and segmentation. Annals of Translational Medicine 8(11), 713–713 (2020). https://doi.org/10.21037/atm.2020.02.44
* [5] Cao, Y., Forssten, M.P., Mohammad Ismail, A., Borg, T., Ioannidis, I., Montgomery, S., Mohseni, S.: Predictive Values of Preoperative Characteristics for 30-Day Mortality in Traumatic Hip Fracture Patients. J Pers Med 11(5), 353 (2021). https://doi.org/10.3390/jpm11050353
* [6] Chen, H., Lundberg, S.M., Lee, S.I.: Explaining a series of models by propagating shapley values. Nature Communications 13(1), 4512 (2022). https://doi.org/10.1038/s41467-022-31384-3, https://www.nature.com/articles/s41467-022-31384-3, number: 1 Publisher: Nature Publishing Group
* [7] Das, A., Rad, P.: Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey (2020). https://doi.org/10.48550/arXiv.2006.11371
* [8] de Munter, L., Polinder, S., Lansink, K.W.W., Cnossen, M.C., Steyerberg, E.W., de Jongh, M.A.C.: Mortality prediction models in the general trauma population: A systematic review. Injury 48(2), 221–229 (2017). https://doi.org/10.1016/j.injury.2016.12.009
* [9] Fritz, B.A., Cui, Z., Zhang, M., He, Y., Chen, Y., Kronzer, A., Ben Abdallah, A., King, C.R., Avidan, M.S.: Deep-learning model for predicting 30-day postoperative mortality. British Journal of Anaesthesia 123(5), 688–695 (2019). https://doi.org/10.1016/j.bja.2019.07.025
* [10] Gowd, A.K., Agarwalla, A., Amin, N.H., Romeo, A.A., Nicholson, G.P., Verma, N.N., Liu, J.N.: Construct validation of machine learning in the prediction of short-term postoperative complications following total shoulder arthroplasty. Journal of Shoulder and Elbow Surgery 28(12), e410–e421 (2019). https://doi.org/10.1016/j.jse.2019.05.017
* [11] Gullberg, B., Johnell, O., Kanis, J.: World-wide Projections for Hip Fracture. Osteoporos Int 7(5), 407–413 (1997). https://doi.org/10.1007/PL00004148
* [12] He, K., Zhang, X., Ren, S., Sun, J.: Deep Residual Learning for Image Recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 770–778 (2016). https://doi.org/10.1109/CVPR.2016.90
* [13] Hu, F., Jiang, C., Shen, J., Tang, P., Wang, Y.: Preoperative predictors for mortality following hip fracture surgery: A systematic review and meta-analysis. Injury 43(6), 676–685 (2012). https://doi.org/10.1016/j.injury.2011.05.017
* [14] Jones, H.J., de Cossart, L.: Risk scoring in surgical patients. Br J Surg 86(2), 149–157 (1999). https://doi.org/10.1046/j.1365-2168.1999.01006.x
* [15] Junaid, M., Ali, S., Eid, F., El-Sappagh, S., Abuhmed, T.: Explainable machine learning models based on multimodal time-series data for the early detection of Parkinson’s disease. Computer Methods and Programs in Biomedicine 234 (2023). https://doi.org/10.1016/j.cmpb.2023.107495
* [16] Karim, F., Majumdar, S., Darabi, H.: Insights Into LSTM Fully Convolutional Networks for Time Series Classification. IEEE Access 7, 67718–67725 (2019). https://doi.org/10.1109/ACCESS.2019.2916828
* [17] Karres, J., Kieviet, N., Eerenberg, J.P., Vrouenraets, B.C.: Predicting Early Mortality After Hip Fracture Surgery: The Hip Fracture Estimator of Mortality Amsterdam. Journal of Orthopaedic Trauma 32(1), 27–33 (2018). https://doi.org/10.1097/BOT.0000000000001025
* [18] King, G., Zeng, L.: Logistic Regression in Rare Events Data. Political Analysis 9(2), 137–163 (2001/ed). https://doi.org/10.1093/oxfordjournals.pan.a004868
* [19] Lee, C.K., Hofer, I., Gabel, E., Baldi, P., Cannesson, M.: Development and Validation of a Deep Neural Network Model for Prediction of Postoperative In-hospital Mortality. Anesthesiology 129(4), 649–662 (2018). https://doi.org/10.1097/ALN.0000000000002186
* [20] Lundberg, S.M., Lee, S.I.: A Unified Approach to Interpreting Model Predictions. In: Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R. (eds.) Advances in Neural Information Processing Systems. vol. 30. Curran Associates, Inc. (2017)
* [21] Nagrani, A., Yang, S., Arnab, A., Jansen, A., Schmid, C., Sun, C.: Attention Bottlenecks for Multimodal Fusion. In: Advances in Neural Information Processing Systems. vol. 34, pp. 14200–14213. Curran Associates, Inc. (2021)
* [22] Nijmeijer, W.S., Folbert, E.C., Vermeer, M., Slaets, J.P., Hegeman, J.H.: Prediction of early mortality following hip fracture surgery in frail elderly: The Almelo Hip Fracture Score (AHFS). Injury 47(10), 2138–2143 (2016). https://doi.org/10.1016/j.injury.2016.07.022
* [23] Pawar, U., O’Shea, D., Rea, S., O’Reilly, R.: Explainable AI in Healthcare. In: 2020 International Conference on Cyber Situational Awareness, Data Analytics and Assessment (CyberSA). pp. 1–2 (2020). https://doi.org/10.1109/CyberSA49311.2020.9139655
* [24] Perng, J.W., Kao, I.H., Kung, C.T., Hung, S.C., Lai, Y.H., Su, C.M.: Mortality Prediction of Septic Patients in the Emergency Department Based on Machine Learning. Journal of Clinical Medicine 8(11), 1906 (2019). https://doi.org/10.3390/jcm8111906
* [25] Pham, T., Tran, T., Phung, D., Venkatesh, S.: Predicting healthcare trajectories from medical records: A deep learning approach. Journal of Biomedical Informatics 69, 218–229 (2017). https://doi.org/10.1016/j.jbi.2017.04.001
* [26] Ribeiro, M.T., Singh, S., Guestrin, C.: "Why Should I Trust You?": Explaining the Predictions of Any Classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. pp. 1135–1144. KDD ’16, Association for Computing Machinery, New York, NY, USA (2016). https://doi.org/10.1145/2939672.2939778
* [27] Schoenfeld, A.J., Le, H.V., Marjoua, Y., Leonard, D.A., Belmont, P.J., Bono, C.M., Harris, M.B.: Assessing the utility of a clinical prediction score regarding 30-day morbidity and mortality following metastatic spinal surgery: The New England Spinal Metastasis Score (NESMS). The Spine Journal 16(4), 482–490 (2016). https://doi.org/10.1016/j.spinee.2015.09.043
* [28] Schuster, M., Paliwal, K.: Bidirectional recurrent neural networks. Signal Processing, IEEE Transactions on 45, 2673–2681 (1997). https://doi.org/10.1109/78.650093
* [29] Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., Batra, D.: Grad-cam: Visual explanations from deep networks via gradient-based localization. In: Proceedings of the IEEE international conference on computer vision. pp. 618–626 (2017)
* [30] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. In: International Conference on Machine Learning. pp. 3145–3153. PMLR (2017)
* [31] Tonekaboni, S., Joshi, S., McCradden, M.D., Goldenberg, A.: What Clinicians Want: Contextualizing Explainable Machine Learning for Clinical End Use. In: Proceedings of the 4th Machine Learning for Healthcare Conference. pp. 359–380. PMLR (2019)
* [32] Wang, W., Krishnan, E.: Big Data and Clinicians: A Review on the State of the Science. JMIR Medical Informatics 2 (Jan 2014). https://doi.org/10.2196/medinform.2913, https://medinform.jmir.org/2014/1/e1
* [33] Xue, B., Li, D., Lu, C., King, C.R., Wildes, T., Avidan, M.S., Kannampallil, T., Abraham, J.: Use of Machine Learning to Develop and Evaluate Models Using Preoperative and Intraoperative Data to Identify Risks of Postoperative Complications. JAMA Network Open 4(3), e212240–e212240 (2021). https://doi.org/10.1001/jamanetworkopen.2021.2240
* [34] Yenidogan, B., Pathak, S., Geerdink, J., Hegeman, J.H., van Keulen, M.: Multimodal Machine Learning for 30-Days Post-Operative Mortality Prediction of Elderly Hip Fracture Patients. In: 2021 International Conference on Data Mining Workshops (ICDMW). pp. 508–516 (2021). https://doi.org/10.1109/ICDMW53433.2021.00068
## Appendix 0.A Hyperparameters
Table 5: Hyperparameters for each model: learning rate (LR).
Model | LR
---|---
Pre-Static${}^{\text{S}}$ | $10^{-3}$
Pre-Hip${}^{\text{I}}$ | $10^{-5}$
Pre-Chest${}^{\text{I}}$ | $10^{-5}$
Per-Vitals${}^{\text{T}}$ | $5\cdot 10^{-4}$
Pre${}^{\text{S+I}}$ | $5\cdot 10^{-2}$
Per${}^{\text{S+T}}$ | $5\cdot 10^{-2}$
All${}^{\text{S+I+T}}$ | $5\cdot 10^{-2}$
## Appendix 0.B Detailed feature overview
Table 6: Available pre-operative features grouped by category
Demographics | Daily living activities | Nutrition | Surgery information
---|---|---|---
Age | Help with transfer from bed to chair | Malnutrition risk | Fracture type
Surgery start/end | Help with showering | Unintended weight loss | Surgery type
Falling risk | Help with dressing | Drink or tube feeding | Fracture laterality
Fall last year | Help with going to toilet | Decreased appetite | ASA score
Pre-fracture mobility | Help with eating | SNAQ score |
Living situation | Help with self-care | |
Prone to delirium | Katz ADL score | |
Memory problems | Incontinence material used | |
Delirium in the past | | |
CCI score | | |
Lab results | Medication (reason/effect) | Comorbidities
---|---|---
HB | Blood thinners | Chronic pulmonary disease
HT | Vitamin D | Congestive heart failure
CRP | Polypharmacy | Peripheral vascular disease
LEUC | A02 (acid related disorders) | Cerebrovascular disease
THR | A10 (diabetes) | Dementia
BLGR | B01 (antithrombotic) | Renal disease
IRAI | B02 (antihemorrhagics) | Rheumatological disease
ALKF | B03 (antianemic) | Cancer
GGT | C01 (cardiac therapy) | Cerebrovascular event
ASAT | C03 (diuretics) | Liver disease
ALAT | C07 (beta blockers) | Lymphoma
LDH1 | C08 (calcium channel blockers) | Leukemia
UREU | C09 (renin-angiotensin system) | Peptic ulcer disease
KREA | C10 (lipid modification) | Diabetes
GFRM | L04 (immunosuppressants) | Prior myocardial infarction
NA | M01 (anti-inflammatory) |
XKA | N05 (psycholeptics) |
GLUC | R03 (airway obstruction) |
Table 7: List of medications that were administered in at least 100 unique cases, together with the general reason for usage. Percentages are with respect to a total of 1616 cases for which medication data is available.
Medication | # unique cases | Effect
---|---|---
Bupivacaine | 649 (40.2%) | Anesthetic
Cefazolin | 692 (42.8%) | Antibiotic
Dexamethasone | 198 (12.3%) | Anti-inflammatory
Ephedrine | 445 (27.5%) | Increase blood pressure
Electrolytes | 468 (29.0%) | Minerals
Esketamine | 561 (34.7%) | Anesthetic
Lidocaine | 405 (25.1%) | Anesthetic
Metamizole | 187 (11.6%) | Painkiller
Midazolam | 475 (29.4%) | Anesthetic
Noradrenaline | 887 (54.9%) | Increase blood pressure
Ondansetron | 282 (17.5%) | Counter post-operative nausea
Piritramide | 446 (27.6%) | Painkiller
Propofol | 648 (40.1%) | Anesthetic
Rocuronium | 260 (16.1%) | Muscle relaxant
Sufentanil | 746 (46.2%) | Painkiller
Sugammadex | 112 (6.9%) | Reverse muscle relaxant
Tranexamic acid | 465 (28.8%) | Prevent blood loss
# Clinically Verified Hybrid Deep Learning System for Retinal Ganglion Cells
Aware Grading of Glaucomatous Progression
Hina Raja*, Taimur Hassan*$\dagger$, Muhammad Usman Akram, Naoufel Werghi
Copyright $\copyright$ 2020 IEEE. Personal use of this material is permitted.
However, permission to use this material for any other purposes must be
obtained from the IEEE by sending an email to<EMAIL_ADDRESS>
This work is supported by a research fund from Khalifa University, Ref: CIRA-2019-047.
H. Raja and M. U. Akram are with the Department of Computer and Software Engineering, National University of Sciences and Technology, Islamabad, Pakistan.
T. Hassan and N. Werghi are with the Center for Cyber-Physical Systems (C2PS), Department of Electrical Engineering and Computer Sciences, Khalifa University, Abu Dhabi, United Arab Emirates.
*Co-first authors. $\dagger$Corresponding author, Email:<EMAIL_ADDRESS>
###### Abstract
Objective: Glaucoma is the second leading cause of blindness worldwide.
Glaucomatous progression can be easily monitored by analyzing the degeneration
of retinal ganglion cells (RGCs). Many researchers have screened glaucoma by
measuring cup-to-disc ratios from fundus and optical coherence tomography
scans. However, this paper presents a novel strategy that pays attention to
the RGC atrophy for screening glaucomatous pathologies and grading their
severity. Methods: The proposed framework encompasses a hybrid convolutional
network that extracts the retinal nerve fiber layer, ganglion cell with the
inner plexiform layer and ganglion cell complex regions, allowing thus a
quantitative screening of glaucomatous subjects. Furthermore, the severity of
glaucoma in screened cases is objectively graded by analyzing the thickness of
these regions. Results: The proposed framework is rigorously tested on
publicly available Armed Forces Institute of Ophthalmology (AFIO) dataset,
where it achieved the $F_{1}$ score of 0.9577 for diagnosing glaucoma, a mean
dice coefficient score of 0.8697 for extracting the RGC regions and an
accuracy of 0.9117 for grading glaucomatous progression. Furthermore, the
performance of the proposed framework is clinically verified with the markings
of four expert ophthalmologists, achieving a statistically significant Pearson
correlation coefficient of 0.9236. Conclusion: An automated assessment of RGC
degeneration yields better glaucomatous screening and grading as compared to
the state-of-the-art solutions. Significance: An RGC-aware system not only
screens glaucoma but can also grade its severity and here we present an end-
to-end solution that is thoroughly evaluated on a standardized dataset and is
clinically validated for analyzing glaucomatous pathologies.
###### Index Terms:
Retinal Ganglion Cells (RGCs), Retinal Nerve Fiber Layer (RNFL), Glaucoma,
Deep Learning, Optical Coherence Tomography (OCT).
## I Introduction
The human retina is composed of various nerve cells. Among these, retinal
ganglion cells (RGCs) are responsible for transmitting visual information to
the brain. The axons of these ganglion cells collectively form the retinal
never fiber layer (RNFL), their cell bodies are enclosed in the ganglion cell
layer (GCL) and the inner plexiform layer (IPL) embodies their dendrites. The
composition of these layers is commonly termed as ganglion cell complex (GCC).
Glaucoma (a progressive optic neuropathy) severely degrades these RGCs,
resulting in a thinning of RNFL, ganglion cell with the inner plexiform layer
(GC-IPL), and the GCC profiles, as shown in Figure 1. This RGC atrophy can
cause permanent visual impairments and even blindness if left untreated [1].
Clinically, glaucoma can be identified through fundus and optical coherence
tomography (OCT)-based examinations. OCT imaging is generally preferred by
clinicians over other modalities due to its objective assessment of early- and
advanced-stage glaucoma, where early glaucoma refers to the condition in which
the RGCs start to degenerate due to increased intraocular pressure [2].
However, the progression of RGC dysfunction leads towards the advanced
glaucomatous stage, where total cupping of the optic nerve and severe visual
impairments can be observed.
Figure 1: Optic nerve head (ONH) OCT scan depicting (A) healthy and (B)
glaucomatous pathology. Inner Limiting Membrane (ILM), RNFL, GC-IPL, GCC, and
Retinal Pigment Epithelium (RPE) are highlighted along with the choroidal and
optic disc region. The thinning of RNFL, GC-IPL, and the GCC regions can be
observed in the glaucomatous scan (B) as compared to the healthy one (A). Both
scans are taken from publicly available Armed Forces Institute of
Ophthalmology (AFIO) dataset [3].
## II Related Work
Many researchers have worked on diagnosing glaucoma from retinal OCT images.
These studies either emphasize the clinical significance of retinal OCT
examination for analyzing glaucomatous severity, or they propose OCT-based
autonomous systems for analyzing the glaucomatous pathologies.
### II-A Clinical Studies
Development in retinal imaging modalities (especially OCT) is making rapid
strides in providing the objective visualization of early, mild, and severe
ocular complications [4], particularly for glaucoma [5], the second
leading cause of irreversible blindness worldwide. Moreover, the detection and
monitoring of glaucoma by measuring the velocity of RNFL thickness loss has
been significantly highlighted in many recent state-of-the-art studies [6, 7].
Leung et al. [8] demonstrated the importance of RNFL thickness (generated
through OCT and visual field tests) in determining the retinal pathological
variations within different glaucomatous stages. Ojima et al. [9] signified
the importance of RNFL thickness and macular volume, and declared that RNFL
thickness has higher diagnostic power than a complete macular volume to detect
glaucoma. Furthermore, Medeiros et al. [10] evaluated RGC loss using standard
automated perimetry (SAP) and spectral-domain OCT (SD-OCT) examinations. They
observed that the early pathological degeneration of RGC results in the
drastic thinning of RNFL as compared to the RGC changes in the late
glaucomatous stages. Likewise, El-Naby et al. [11] extracted the RNFL
thickness from SD-OCT scans and compared it with the visual field (VF) sensitivity to observe
their correlation in screening primary open-angle glaucoma. They concluded
that the mean RNFL thickness obtained through SD-OCT imagery is a very good
indicator of screening glaucomatous subjects and also for monitoring the
progression and severity of the disease.
### II-B Automated Glaucomatous Analysis
Initial methods developed for glaucomatous screening analyze cup-to-disc
ratios from macula-centered and disc-centered fundus images [12, 13, 14, 15,
16]. However, observing the degeneration of RGCs through optic nerve head
(ONH) SD-OCT scans can provide a superior and objective indication of early
glaucoma, resulting in the timely prevention of non-recoverable visual
impairments. Furthermore, due to the unprecedented clinical significance of
retinal OCT examination [5, 6, 7], many researchers have developed autonomous
systems to objectively screen glaucoma (especially in early-stage) using
retinal OCT scans [17]. Moreover, Almobarak et al. [18] manually segmented ONH
structures from SD-OCT scans to analyze pathological variations in healthy and
glaucomatous pathologies. Kromer et al. [19] extracted eight retinal
boundaries from 40 SD-OCT scans of healthy subjects using curve
regularization. In [20], a generative model was presented to segment retinal
layers from OCT images using a group-wise curve alignment. Niu et al. [21]
proposed an automated method to segment the six retinal layers using
correlation smoothness constraint and dual gradients. Apart from this, several
methods have been proposed to quantify retinal layer thickness from SD-OCT
scans depicting normal [22, 23], and abnormal retinal pathologies [24, 25, 26,
27, 28]. Ometto et al. [29] presented ReLayer, an automated framework to
segment and estimate the thickness of ILM, inner/outer segment, and RPE from
OCT scans to monitor retinal abnormalities. Gao et al. [30] extracted retinal
layers through graph decomposition from Topcon SD-OCT scans and evaluated the
mean macular thickness of RNFL regions. Afterward, they compared their
obtained results with the thickness measurements from Topcon’s built-in layer
extraction framework. Likewise, Mayer et al. [31] proposed an automated
framework for extracting the retinal layers and computing the RNFL thickness
by minimizing the energy obtained through scan gradients, local and regional
smoothing filters. They validated their framework on a dataset containing both
normal and glaucoma-affected OCT scans, and achieved a mean RNFL thickness
of 94.1$\pm$11.7$\mu m$ and 65.3$\pm$15.7$\mu m$ for normal and glaucomatous
pathologies, respectively. In addition to this, many researchers have proposed
computer-aided diagnostic systems to diagnose glaucomatous pathologies from
fundus [32, 13, 14, 15], OCT [17] and fused fundus and OCT imagery [12]. More
recently, deep learning has been applied to analyze the glaucomatous
pathologies through a segmentation-free retinal layer extraction framework
[33]. Zang et al. [34] used a convolutional neural network (CNN) and graph
search to delineate the retinal boundaries and the optic disc region from ONH
SD-OCT scans of normal and glaucomatous subjects, achieving the overall dice
coefficient of 0.91$\pm$0.04. Maetschke et al. [35] highlighted the
significance of RNFL and GC-IPL profiles for diagnosing and monitoring glaucoma
progression and used a 3D CNN model to classify healthy and glaucomatous ONH
SD-OCT scans. They outperformed conventional machine learning approaches by
achieving an area under the curve ($AUC$) score of 0.94. Furthermore, Devalla
et al. [36] proposed a dilated-residual U-Net architecture (DRUNET) for the
extraction of six ONH tissues from SD-OCT scans to aid experts in analyzing
glaucomatous progression. DRUNET achieved the overall dice coefficient of
0.91$\pm$0.05 when assessed against manual tissue segmentation by an expert
observer. In addition to this, a joint retinal segmentation and classification
pipeline was proposed in [37] to analyze healthy and glaucomatous pathologies
from 1,004 locally acquired circular OCT scans, and also severe diabetic
macular edema (DME) pathology from 110 selected macular OCT scans of the Duke
dataset [28].
Figure 2: The block diagram of the proposed framework. First of all, the input
scan is preprocessed to remove the background noise and vendor annotations.
Afterward, the processed scan is passed to the hybrid convolutional network
(RAG-Netv2) for the simultaneous extraction of RNFL, GC-IPL, and GCC regions,
and its classification against glaucoma. The screened glaucomatous scan is
further graded by SVM based on the RGC atrophy observed through RNFL, GC-IPL,
and GCC thickness profiles.
The pathological degeneration of RGCs, observed through RNFL, GC-IPL, and GCC
thickness profiles, can be used to objectively monitor the progression of
glaucomatous severity. However, manual extraction of these regions is a
subjective and time-consuming task. Several automated layer extraction methods
have been proposed in the literature to address this shortcoming
[23, 30, 31, 37, 36]. However, to the best
of our knowledge, there is no clinically validated framework that utilizes the
degraded RGC profiles to diagnose and grade glaucomatous progression using ONH
SD-OCT scans. Moreover, as the ONH SD-OCT scans are considered to be more
significant for detecting glaucoma progression [18], validating an automated
framework on a publicly available standardized ONH SD-OCT dataset adds
significant value to the body of knowledge.
In this paper, we present a fully automated framework for the diagnosis and
grading of glaucoma from ONH SD-OCT images by analyzing pathological variations
of the RNFL, GC-IPL,
and GCC regions. The proposed framework is unique as it employs a hybrid
convolutional network for the RGC-aware diagnosis and grading of glaucoma, and
it has been clinically validated with four expert clinicians. The main
features of this paper are:
* •
A novel strategy for the classification and grading of glaucomatous
progression by analyzing RNFL, GC-IPL, and GCC regions from ONH SD-OCT scans.
* •
A significantly improved and lightweight hybrid retinal analysis and grading
network (RAG-Netv2) for the simultaneous pixel-level segmentation of retinal
regions and scan-level classification of glaucomatous pathologies.
* •
Rigorous clinical validation of the proposed framework with four expert
ophthalmologists to screen, track, and grade glaucomatous progression from
high-quality ONH SD-OCT scans. The complete dataset and the annotations from
the expert observers are publicly available at
https://data.mendeley.com/datasets/2rnnz5nz74/2.
The rest of the paper is organized as follows. Section III describes the
proposed method. Section IV showcases the experimental setup. Section V
presents the results, followed by detailed discussion and concluding remarks
about the proposed framework in Section VI.
## III Proposed Method
We present a novel framework that gives an RGC-aware diagnosis of glaucoma
using ONH SD-OCT scans. Furthermore, it measures the severity of glaucomatous
progression by analyzing the RNFL, GC-IPL, and GCC thickness profiles. The
block diagram of the proposed framework is shown in Figure 2. First of all,
the input scan is preprocessed to retain the retinal area. Afterward, the
preprocessed scan (containing only the retina and the ONH) is passed to the
hybrid convolutional network that extracts the RNFL, GC-IPL, and GCC regions,
and screens the scan against glaucoma. The thickness profiles of these
extracted regions are computed and their mean values are passed as a feature
vector to the supervised support vector machines (SVM) for grading the
screened glaucomatous scan as either an early suspect or a severe case. The
detailed description of each block is presented below:
### III-A Preprocessing
The purpose of the preprocessing is to remove the background artifacts and
noisy content to obtain accurate extraction of RNFL, GC-IPL, and GCC regions.
The preprocessing is performed through the structure tensor [38], which
highlights the predominant orientations of the image gradients within a
specified neighborhood of a pixel. For each pixel of the input image, we obtain
a symmetric $2\times 2$ matrix $S$ defined by the smoothed outer products of
the image gradients:
$\small S=\begin{bmatrix}\varphi*(\nabla X\cdot\nabla X)&\varphi*(\nabla X\cdot\nabla Y)\\ \varphi*(\nabla Y\cdot\nabla X)&\varphi*(\nabla Y\cdot\nabla Y)\end{bmatrix}$ (1)
where the image gradients $\nabla X$ and $\nabla Y$ are oriented at
$0^{\circ}$ and $90^{\circ}$, respectively. $\varphi$ denotes the parametric
smoothing filter (typically a Gaussian). Because of the symmetry, three of the
four matrix elements are unique. Computing $S$ for each pixel, we obtain three
unique tensor components, from which we select the one with the maximum
coherency according to its norm. Afterward, the selected tensor is transformed
into an 8-bit grayscale image. Then, the ILM and choroidal boundaries are extracted
from it by detecting the first and last foreground-background transitions in
each column of the scan. To avoid outliers, we constrain the distance between
consecutive pixels in the ILM and choroidal boundaries to be below a threshold
$\tau$ determined empirically. Apart from this, the missing values in each
layer are estimated through linear interpolation and are smoothed through
median filtering. Then, a retinal mask is generated, which is multiplied with
the original scan to isolate the retinal and ONH regions, as shown in Figure 3.
The complete pseudo-code to extract the retina and ONH region is presented in
Algorithm 1.
### III-B Hybrid Convolution Framework
We propose a hybrid convolutional network (HCN) for extracting the retinal
regions and for classifying the candidate scan as normal or glaucomatous.
Using an HCN rather than a conventional classification model allows this study
to obtain an RGC-aware diagnosis of glaucoma. The HCN model
proposed here is an improved version of the Retinal Analysis and Grading
Network (RAG-Net) [39]. The RAG-Net and its improved version will be described
next.
Input: OCT Image $I$
Output: Preprocessed Image $I_{ONH}$
ILM $\leftarrow$ $\phi$
Choroid $\leftarrow$ $\phi$
$\tau$ $\leftarrow$ 20
$v_{1}$ $\leftarrow$ $\phi$
$v_{2}$ $\leftarrow$ $\phi$
$S$ $\leftarrow$ ComputeStructureTensor($I$)
$S_{u}$ $\leftarrow$ GetUniqueTensors($S$)
$S_{u}^{t}$ $\leftarrow$ GetCoherentTensor($S_{u}$)
$I_{u}^{t}$ $\leftarrow$ ConvertTensorToImage($S_{u}^{t}$)
[nRow, nCol] $\leftarrow$ GetSize($I_{u}^{t}$)
for $c$ $\leftarrow$ 1 to nCol do
    $p_{1}$ $\leftarrow$ FindFirstTransitionInRow($I_{u}^{t}$(:, $c$))
    $p_{2}$ $\leftarrow$ FindLastTransitionInRow($I_{u}^{t}$(:, $c$))
    if $c$ is 1 then
        ILM($c$) $\leftarrow$ $p_{1}$
        Choroid($c$) $\leftarrow$ $p_{2}$
        $v_{1}$ $\leftarrow$ $c$; $v_{2}$ $\leftarrow$ $c$
    else
        isP1Valid $\leftarrow$ CheckDistance($p_{1}$, ILM($v_{1}$), $\tau$)
        isP2Valid $\leftarrow$ CheckDistance($p_{2}$, Choroid($v_{2}$), $\tau$)
        if isP1Valid then
            $v_{1}$ $\leftarrow$ $c$
            ILM($v_{1}$) $\leftarrow$ $p_{1}$
        end if
        if isP2Valid then
            $v_{2}$ $\leftarrow$ $c$
            Choroid($v_{2}$) $\leftarrow$ $p_{2}$
        end if
    end if
end for
ILM $\leftarrow$ InterpolateGapsAndSmoothLayer(ILM)
Choroid $\leftarrow$ InterpolateGapsAndSmoothLayer(Choroid)
mask $\leftarrow$ GenerateMask($I_{u}^{t}$, ILM, Choroid)
$I_{ONH}$ $\leftarrow$ $I$ * mask
Algorithm 1 Retina and ONH Extraction
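To make this preprocessing stage concrete, a minimal Python sketch of Algorithm 1 is given below, assuming NumPy and SciPy. The Gaussian smoothing scale, the mean-based foreground threshold, and the median filter size are illustrative assumptions; the paper fixes only $\tau=20$ and determines the remaining details empirically.

```python
# A minimal sketch of Algorithm 1 (retina and ONH extraction).
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def extract_retina_onh(img, tau=20, sigma=2.0):
    gy, gx = np.gradient(img.astype(float))       # gradients at 90 and 0 degrees
    # Smoothed outer products of the gradients (Eq. 1); by symmetry only
    # three of the four tensor components are unique.
    jxx = gaussian_filter(gx * gx, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    comps = [jxx, jxy, jyy]
    # Select the component with maximum coherency (largest norm here),
    # then transform it into an 8-bit grayscale image.
    coherent = comps[int(np.argmax([np.linalg.norm(c) for c in comps]))]
    lo, hi = coherent.min(), coherent.max()
    t8 = np.uint8(255 * (coherent - lo) / (hi - lo + 1e-8))
    fg = t8 > t8.mean()                           # foreground/background split
    n_rows, n_cols = fg.shape
    ilm = np.full(n_cols, np.nan)
    choroid = np.full(n_cols, np.nan)
    last_ilm = last_cho = None
    for c in range(n_cols):
        rows = np.flatnonzero(fg[:, c])
        if rows.size == 0:
            continue
        p1, p2 = rows[0], rows[-1]                # first/last transitions
        # Distance constraint tau rejects outlier boundary points.
        if last_ilm is None or abs(p1 - last_ilm) < tau:
            ilm[c], last_ilm = p1, p1
        if last_cho is None or abs(p2 - last_cho) < tau:
            choroid[c], last_cho = p2, p2
    # Interpolate the gaps and smooth both boundaries with a median filter.
    xs = np.arange(n_cols)
    for b in (ilm, choroid):
        ok = ~np.isnan(b)
        b[:] = np.interp(xs, xs[ok], b[ok])
    ilm = median_filter(ilm, size=9)
    choroid = median_filter(choroid, size=9)
    # Generate the retinal mask and isolate the retina and ONH regions.
    rows_idx = np.arange(n_rows)[:, None]
    mask = (rows_idx >= ilm) & (rows_idx <= choroid)
    return img * mask
```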
#### III-B1 Retinal Analysis and Grading Network
RAG-Net is a hybrid convolutional network specifically designed to extract
retinal lesions and abnormalities from the macular OCT scans and give lesion-
aware grading of retinal diseases by performing simultaneous pixel-level
segmentation and scan-level classification. Architecturally, it contains 112
convolution layers, 111 batch normalization layers, 102 ReLUs, 6 pooling
layers, 5 lambda layers, 2 softmax, and a fully connected layer [39].
Furthermore, it contains around 62.3M parameters from which around 0.1M are
non-learnable. RAG-Net showed the capacity to generalize across multiple
scanner specifications for retinal lesion extraction and lesion-aware grading
of retinopathy and has also been applied to the multi-modal imagery for
retinal lesions extraction, achieving superior performance among its
competitors [40]. However, the original RAG-Net has been found limited in
discriminating similar texture objects, like retinal boundaries, when their
transitional variations are small. This is because RAG-Net possesses kernels
with smaller receptive fields (field of view) that do not retain accurately
the contextual information of small and similar textural regions. Although,
the feature pyramid module within the RAG-Net architecture tends to overcome
this limitation to some extent. But overall the performance is still capping
as will be shown in the results section (Section V). Moreover, the source code
of RAG-Net and its complete documentation is available at
http://biomisa.org/index.php/downloads/ [39].
Figure 3: Preprocessing stage (A) original ONH SD-OCT scan, (B) tensor with maximum coherency, (C) grayscale tensor from which the retinal and choroidal layer points are iteratively picked, (D) extracted ILM and choroidal layers, (E) retinal extraction mask, (F) extracted retina and ONH.
Figure 4: Illustration of dilated convolution with $3\times 3$ kernel and (A) dilation rate $r=1$, (B) $r=2$, and (C) $r=3$.
Table I: RAG-Netv2 hyper-parameters
Layers | Number of Layers | Parameters
---|---|---
Convolution | 16 | 4,847,369
Pooling | 4 Average, 10 Max | 0
Batch Normalization | 15 | 17,920
Activation | 13 ReLU, 2 Softmax | 0
Lambda | 5 | 0
Input | 2 | 0
Zero-Padding | 10 | 0
Concatenation | 1 | 0
Reshape | 1 | 0
Fully Connected and Flatten | 2 | 716,810
Classification | 1 | 22
Learnable Parameters | 5,573,161 |
Non-learnable Parameters | 8,960 |
Total Parameters | 5,582,121 |
#### III-B2 Modified Retinal Analysis and Grading Network
We propose a modified version of RAG-Net, dubbed RAG-Netv2, in which the
context-aware unit is built upon atrous convolutions (also known as dilated
convolutions). The atrous convolutions are formulated in a residual fashion,
which greatly enhances the kernels' receptive field and allows broader, more
context-aware filtering while maintaining the same spatial resolution [41].
The 2D atrous convolution is expressed as:
$g(x,y)=\sum_{i=1}^{N_{1}}\sum_{j=1}^{N_{2}}k(i,j)\,f(x-ri,\,y-rj)$ (2)
where $f$ denotes the input function (typically a feature map from the previous
layer), $k$ represents the $N_{1}\times N_{2}$ kernel, $r$ is the dilation
rate, and $g$ denotes the convolution output (a feature map produced in the
current layer). Note that in the above equation we have introduced a common
dilation rate $r$ in both image dimensions to ensure a consistent receptive
field along both of them. When $r=1$, the atrous convolution reduces to a
standard linear convolution, as shown in Figure 4 (A); when $r>1$, the
receptive field is enlarged so that the kernel captures a wider contextual
area from the input function and produces more distinctive feature maps.
However, increasing $r$ also introduces gridding artifacts [41] due to large
gaps between the convolving pixels in the input function, causing a cascading
effect in consecutive convolution layers that may significantly degrade
network performance. The gridding artifacts are illustrated in Figure 5 (top
row) for stacked convolution layers.
Figure 5: A block containing 5 consecutive atrous convolution layers with a
fixed dilation rate ($r=3$) in the top row and variable dilation rates ($r=k$
in the $k^{th}$ layer, where $k=1,2,…,5$) in the bottom row. A single pixel in
layer 5 (highlighted in light blue) is computed using the green pixels in
layer 4. Similarly, the green pixels (in layer 4) are computed through the
brown pixels in layer 3, the brown pixels are computed through the red pixels
in layer 2, and the red pixels are computed through the dark blue pixels in
layer 1. It can be observed that the receptive field of a $3\times 3$ kernel
is greatly enhanced in both the top and bottom rows as compared to standard
linear convolutions. However, the fixed dilation rate (top row) introduces
gridding artifacts, i.e., an output pixel in layer 5 is computed from totally
disjoint input pixels of layer 4, and so on. Also, as the layers are cascaded,
the effect of the gridding artifacts can be catastrophic; observe (in the top
row) how a pixel in layer 5 relates to the pixels in layer 1. Employing
variable dilation rates can effectively diminish these gridding artifacts
while preserving the increased field of view, as shown in the bottom row.
To compensate for this, we perform atrous convolution with variable
dilation factors [42] in the RAG-Netv2 architecture. For a block of $n$
consecutive convolution layers within the network having the dilation rate
‘$r$’ where $n>r$, the dilation factors in the proposed framework are
generated through $round(r-\frac{n}{2}+i)$ where $i$ varies from $0$ to $n-1$.
For example, for a block containing 5 cascaded convolution layers having the
dilation rate $r=3$, the dilation factors will be [1, 2, 3, 4, 5], meaning
that the first convolution layer within the block will perform standard
convolution (as $r=1$), the second layer will have $r=2$ and so on as shown in
Figure 5 (bottom row). Similarly, for $n=3$, $r=3$, the dilation factors will
be [2, 3, 4]. The second major benefit of RAG-Netv2 is that it is extremely
lightweight: it contains 91.04% fewer parameters than the original RAG-Net
architecture (which has 62,352,188 parameters in total) while achieving better
segmentation and classification performance. The detailed architectural
description and hyper-parameters of RAG-Netv2 are reported in Table I, from
which we can see that it contains 5,573,161 learnable and 8,960 non-learnable
parameters. Moreover, rather than training RAG-Netv2 from scratch, it is fine-
tuned on RAG-Net weights (which are already adjusted in [39] for lesion-aware
retinal image analysis) to achieve faster convergence.
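To illustrate the variable-dilation scheme, the sketch below builds such a block in Keras (the framework named in Section IV-B). The `dilation_schedule` helper reproduces the $round(r-\frac{n}{2}+i)$ rule with half-up rounding; the layer width, the $1\times 1$ shortcut projection, and the clamp to a minimum dilation rate of 1 are illustrative assumptions rather than the exact RAG-Netv2 design.

```python
# A sketch of a residual block of atrous convolutions with variable dilation rates.
import tensorflow as tf
from tensorflow.keras import layers

def dilation_schedule(r, n):
    # round(r - n/2 + i) with half-up rounding, for i = 0, ..., n-1.
    return [max(1, int(r - n / 2 + i + 0.5)) for i in range(n)]

def residual_atrous_block(x, filters, r=3, n=5):
    shortcut = layers.Conv2D(filters, 1, padding="same")(x)   # channel match
    for rate in dilation_schedule(r, n):                      # e.g. [1, 2, 3, 4, 5]
        x = layers.Conv2D(filters, 3, padding="same", dilation_rate=rate)(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
    return layers.Add()([shortcut, x])                        # residual fusion

inputs = layers.Input(shape=(256, 256, 1))
outputs = residual_atrous_block(inputs, filters=32, r=3, n=5)
model = tf.keras.Model(inputs, outputs)
```

For $r=3$ and $n=5$, `dilation_schedule` returns [1, 2, 3, 4, 5], and for $r=3$, $n=3$ it returns [2, 3, 4], matching the examples above.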
### III-C Estimation of Retinal Profiles
The severity of glaucoma can be effectively graded by analyzing the RGC
atrophy. RGCs primarily exist within the GCC region, which consists of the
RNFL, the ganglion cell layer (GCL), and the inner plexiform layer (IPL) [35].
To objectively evaluate the glaucoma progression, the proposed system computes
the RNFL, GC-IPL, and the GCC thickness profiles. The RNFL thickness is
computed by taking the absolute difference between the ILM and the GCL.
Moreover, GC-IPL thickness is computed by taking the absolute difference
between GCL and IPL. Also, the GCC thickness is computed by taking the
absolute difference between ILM and IPL. Afterward, the mean RNFL, GC-IPL, and
GCC thickness values are computed from the extracted thickness profiles which
are then passed to the supervised SVM model for grading the glaucomatous
progression. These thickness values are chosen as features for the SVM because
they reflect the pathological degeneration of RGCs and can therefore be used
for grading glaucoma (detected through the RAG-Netv2 classification unit) as
early or advanced (more severe).
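As a concrete illustration of this grading step, the sketch below computes the three mean thickness features and feeds them to an SVM. The boundary arrays, the RBF kernel, and the toy two-sample training set (built from the mean values later reported in Table VII) are assumptions for demonstration only; the paper does not state the SVM kernel it uses.

```python
# A sketch of the thickness-profile features and SVM-based grading.
import numpy as np
from sklearn.svm import SVC

def mean_thicknesses(ilm, gcl, ipl):
    """ilm, gcl, ipl: per-column boundary positions of one scan (in microns)."""
    rnfl = np.abs(ilm - gcl).mean()     # RNFL   = |ILM - GCL|
    gc_ipl = np.abs(gcl - ipl).mean()   # GC-IPL = |GCL - IPL|
    gcc = np.abs(ilm - ipl).mean()      # GCC    = |ILM - IPL|
    return np.array([rnfl, gc_ipl, gcc])

# Toy training set from the mean profiles in Table VII;
# label 0 = early suspect, 1 = advanced stage.
X_train = np.array([[93.50, 62.23, 155.73],
                    [69.46, 33.96, 103.42]])
y_train = np.array([0, 1])
grader = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)

ilm = np.zeros(100); gcl = ilm + 90.0; ipl = gcl + 60.0   # toy boundaries
grade = grader.predict(mean_thicknesses(ilm, gcl, ipl).reshape(1, -1))
```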
### III-D Modified Dice-Entropy Loss Function
In order to effectively extract the retinal regions and use them for the RGC-
aware classification of glaucomatous scans, RAG-Netv2 jointly optimizes the
dice-entropy loss, a weighted combination of the dice and cross-entropy
losses, as expressed below:
$\displaystyle L_{de}=\alpha_{1}L_{d}+\alpha_{2}L_{e}$ (3)
$\displaystyle L_{d}=\frac{1}{N}\sum_{i=1}^{N}\left(1-\frac{2\sum_{j=1}^{C}t_{i,j}p_{i,j}}{\sum_{j=1}^{C}t_{i,j}^{2}+\sum_{j=1}^{C}p_{i,j}^{2}}\right)$ (4)
$\displaystyle L_{e}=-\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{C}t_{i,j}\log(p_{i,j})$ (5)
where $L_{d}$ denotes the dice loss, $L_{e}$ represents the multi-category
cross-entropy loss, $t_{i,j}$ represents the true label of the $i^{th}$ sample
for the $j^{th}$ class, $p_{i,j}$ denotes the predicted probability of the
$i^{th}$ sample belonging to the $j^{th}$ class, $\alpha_{1}$ and $\alpha_{2}$
are the loss weights, $N$ is the total number of samples in a batch, and $C$
is the total number of classes. The dice-entropy loss drives RAG-Netv2 to
accurately segment the RNFL and GC-IPL pixels from the other retinal pixels
and, at the same time, enables RAG-Netv2 to robustly classify the ONH SD-OCT
scans as healthy or glaucomatous.
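A minimal Keras sketch of Eqs. (3)-(5) is shown below; the equal weights $\alpha_{1}=\alpha_{2}=1$ and the small $\epsilon$ added for numerical stability are assumptions, as the text does not report the weight values used.

```python
# A sketch of the dice-entropy loss of Eqs. (3)-(5).
import tensorflow as tf

def dice_entropy_loss(alpha1=1.0, alpha2=1.0, eps=1e-7):
    def loss(t, p):  # t: one-hot targets, p: softmax outputs
        # Flatten all but the batch axis, so pixels and classes are summed jointly.
        t_f = tf.reshape(t, (tf.shape(t)[0], -1))
        p_f = tf.reshape(p, (tf.shape(p)[0], -1))
        # Dice loss, Eq. (4).
        num = 2.0 * tf.reduce_sum(t_f * p_f, axis=-1)
        den = tf.reduce_sum(t_f ** 2, axis=-1) + tf.reduce_sum(p_f ** 2, axis=-1)
        l_d = tf.reduce_mean(1.0 - num / (den + eps))
        # Multi-category cross-entropy, Eq. (5).
        l_e = tf.reduce_mean(-tf.reduce_sum(t_f * tf.math.log(p_f + eps), axis=-1))
        return alpha1 * l_d + alpha2 * l_e   # Eq. (3)
    return loss

# Matching the training setup of Section IV-B:
# model.compile(optimizer=tf.keras.optimizers.Adadelta(learning_rate=1.0, rho=0.95),
#               loss=dice_entropy_loss())
```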
## IV Experimental Setup
This section presents a detailed description of the dataset and the training
protocol. It also presents the evaluation metrics which have been used to
validate the performance of the proposed framework.
### IV-A AFIO Dataset
The Armed Forces Institute of Ophthalmology (AFIO) dataset, first introduced in
[3], is a publicly available repository containing high-resolution ONH SD-OCT
scans of healthy and glaucomatous subjects. The dataset has been acquired from
AFIO Hospital, Rawalpindi, Pakistan. To the best of our knowledge, it is the
only dataset that contains OD-centered fundus and ONH-centered SD-OCT scans
for each subject along with the detailed cup-to-disc markings and annotations
from four expert ophthalmologists. The scans within the AFIO dataset were
acquired using a Topcon 3D OCT-1000 scanner over a period of four years.
Furthermore, all the scans have been thoroughly graded by four expert
ophthalmologists in a blind manner (i.e., each grader did not know the
gradings of his/her colleagues). All four clinicians are very senior, having
20 to 25 years of professional experience in clinical ophthalmology.
Moreover, the detailed specifications of the AFIO dataset are presented in
Table II.
Table II: AFIO Dataset Specifications
Acquisition Machine | Topcon 3D OCT 1000
---|---
Scan Reference | Optic Nerve Head (ONH) Centered
Examination | Dilated Pupil with Ø4.0mm (45º) Diameter
Images | 196 ONH SD-OCT Images
Scan Type | B-scan
Resolution | 951x456
Subjects | 101
Categories | Healthy: 50
| Glaucoma: 146
### IV-B Training Details
RAG-Netv2 in the proposed framework is implemented using Keras APIs on the
Anaconda Python 3.7.4 platform (the source code is available at
https://github.com/taimurhassan/rag-net-v2). The training is conducted for 40
epochs, where each epoch lasted for 512 iterations, on a machine with an Intel
Core processor and 32 GB RAM, equipped with a single NVIDIA RTX 2080 Max-Q GPU
using cuDNN v7.5 and CUDA Toolkit 10.1.243. The optimization during training
is performed through ADADELTA [43] with its default learning rate of one and a
decay factor of 0.95. Moreover, 70% of the dataset was used for training and
the remaining 30% for testing, as per the dataset standard [3]. To compensate
for the low number of training scans within the
AFIO dataset, we fine-tuned the weights of the original RAG-Net architecture
(obtained after training on more than 0.1 million macular OCT scans [39]) and
also performed augmentation of the training scans. The data augmentation we
performed is as follows: first, all the scans were horizontally flipped, and
then they were rotated between -5 and 5 degrees. Then, we added zero-mean
white Gaussian noise with 0.01 variance. The augmentation procedure resulted
in a total of 6,028 training scans, fulfilling the training requirements.
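The augmentation step can be sketched as follows; `scipy.ndimage.rotate`, the $[0,1]$ intensity range, and returning the three transforms as separate variants (rather than composing them) are our assumptions about the exact procedure.

```python
# A sketch of the data augmentation: flip, small rotation, and Gaussian noise.
import numpy as np
from scipy.ndimage import rotate

def augment(scan, rng=np.random.default_rng(0)):
    """Return augmented variants of a preprocessed scan with values in [0, 1]."""
    out = [np.fliplr(scan)]                                    # horizontal flip
    angle = rng.uniform(-5.0, 5.0)                             # -5 to 5 degrees
    out.append(rotate(scan, angle, reshape=False, mode="nearest"))
    noise = rng.normal(0.0, np.sqrt(0.01), scan.shape)         # variance 0.01
    out.append(np.clip(scan + noise, 0.0, 1.0))
    return out
```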
### IV-C Evaluation Metrics
The proposed framework has been evaluated using a number of metrics described
below:
#### IV-C1 Confusion Matrix
The performance of the proposed framework for accurately classifying and
grading glaucomatous subjects is measured through the confusion matrix and its
associated metrics such as accuracy
($A_{C}=\frac{T_{P}+T_{N}}{T_{P}+T_{N}+F_{P}+F_{N}}$), recall
($T_{PR}=\frac{T_{P}}{T_{P}+F_{N}}$), specificity
($T_{NR}=\frac{T_{N}}{T_{N}+F_{P}}$), false positive rate
($F_{PR}=\frac{F_{P}}{T_{N}+F_{P}}$), precision
($P_{PV}=\frac{T_{P}}{T_{P}+F_{P}}$) and F1 score
($F_{1}=\frac{2*P_{PV}*T_{PR}}{P_{PV}+T_{PR}}$), where $T_{P}$ denotes true
positives, $T_{N}$ denotes the true negatives, $F_{P}$ denotes the false
positives, and $F_{N}$ denotes the false negatives. To measure classification
performance, $T_{P}$, $F_{P}$, $T_{N}$ and $F_{N}$ are calculated scan-wise.
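For concreteness, these scan-wise metrics can be computed from a prediction vector as sketched below; the label vectors are hypothetical.

```python
# A sketch of the confusion-matrix metrics of Section IV-C1.
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 0, 1, 0, 0]   # 1 = glaucomatous, 0 = healthy (hypothetical)
y_pred = [1, 0, 0, 1, 0, 1]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy    = (tp + tn) / (tp + tn + fp + fn)                  # A_C
recall      = tp / (tp + fn)                                   # T_PR
specificity = tn / (tn + fp)                                   # T_NR
fpr         = fp / (tn + fp)                                   # F_PR
precision   = tp / (tp + fp)                                   # P_PV
f1          = 2 * precision * recall / (precision + recall)    # F_1
```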
#### IV-C2 Receiver Operating Characteristics (ROC) Curve
The ROC curve indicates the capacity of the proposed framework to correctly
classify and grade healthy and glaucomatous pathologies at various
classification thresholds. Moreover, the performance captured by the ROC
curves is quantitatively measured through $AUC$ scores.
#### IV-C3 Dice Coefficient ($D_{C}$)
The dice coefficient ($D_{C}$) measures how well the proposed framework
segments the RNFL, GC-IPL, and GCC regions as compared to their ground truths,
and it is computed as $D_{C}=\frac{2T_{P}}{2T_{P}+F_{N}+F_{P}}$. Here $T_{P}$,
$F_{P}$, and $F_{N}$ are calculated pixel-wise where $T_{P}$ indicates the
correct extraction of positives (RNFL, GC-IPL, and GCC regions), $F_{P}$
indicates the misclassified background pixels, and $F_{N}$ denotes those
positive pixels which have been missed by the proposed framework. Afterward,
the mean dice coefficient ($\mu_{DC}$) is computed by taking an average of
$D_{C}$ scores scan-wise across the whole dataset.
#### IV-C4 Mask Precision
To further validate the performance of the proposed framework for extracting
RNFL, GC-IPL, and GCC regions, we used the mask precision ($m_{p}$) metric.
Unlike the dice coefficient, $m_{p}$ measures both the capacity of the
proposed framework in accurately recognizing the RNFL, GC-IPL, and GCC regions
as well as extracting their corresponding masks. First of all, the dice
coefficient $D_{C}$ of the extracted regions is computed in each image using
their ground truths. If $D_{C}\geq 0.5$, then the $m_{p}$ (for each region) is
computed pixel-wise through $m_{p}=\frac{T_{P}}{T_{P}+F_{P}}$. However, if the
dice coefficient is below 0.5, then the whole region is considered as $F_{P}$,
resulting in an $m_{p}$ score of 0. Moreover, the mean mask precision
$\mu_{mp}$ is computed by averaging $m_{p}$ over the regions as
$\mu_{mp}=\frac{1}{c}\sum_{k=1}^{c}m_{p}(k)$, where $c$ denotes the number of
classes (regions).
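The two segmentation metrics can be sketched as follows from binary masks; the small stabilizer added to the denominators is an assumption to guard against empty masks.

```python
# A sketch of the dice coefficient (Section IV-C3) and mask precision (IV-C4).
import numpy as np

def dice(pred, gt):
    # pred, gt: boolean masks of one region.
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    return 2.0 * tp / (2.0 * tp + fn + fp + 1e-8)

def mask_precision(pred, gt):
    if dice(pred, gt) < 0.5:          # whole region counted as a false positive
        return 0.0
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    return tp / (tp + fp + 1e-8)

def mean_mask_precision(preds, gts):
    # Average m_p over the c regions (RNFL, GC-IPL, GCC).
    return float(np.mean([mask_precision(p, g) for p, g in zip(preds, gts)]))
```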
#### IV-C5 Clinical Validation
Apart from using performance metrics, we clinically validated the glaucomatous
screening and grading performance of the proposed framework with four expert
ophthalmologists using the standardized Pearson correlation coefficient
($r_{c}$) and its statistical significance measured through the $p$-value.
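The statistic itself is standard; a sketch using SciPy is shown below, with the grades of Table VIII encoded as integers (H = 0, EG = 1, AG = 2), an encoding we assume purely for illustration.

```python
# A sketch of the clinical-validation statistic of Section IV-C5.
from scipy.stats import pearsonr

framework  = [2, 2, 1, 1, 1, 0, 2, 2, 0, 0]   # PF column of Table VIII
clinician4 = [2, 2, 1, 1, 1, 0, 2, 1, 0, 0]   # C4 column of Table VIII
r_c, p_value = pearsonr(framework, clinician4)
```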
## V Results
The proposed framework has been thoroughly evaluated on the AFIO dataset for
the RGC-aware diagnosis and grading of glaucoma. First, we present an ablative
analysis to evaluate different segmentation models for the extraction of RNFL,
GC-IPL, and GCC regions. Afterward, we present a detailed comparison of the
proposed framework with state-of-the-art solutions for extracting the RGC
regions as well as screening glaucomatous subjects. Apart from this, we also
present the clinical validation of our RAG-Netv2 driven grading system with
four expert ophthalmologists.
### V-A Ablation Study
The ablative aspect of this research involves choosing the segmentation
framework that can accurately extract the retinal regions such as the RNFL and
the GC-IPL regions to compute the GCC profiles and grade glaucomatous subjects
accordingly. For this purpose, we compared the performance of the proposed
RAG-Netv2 segmentation unit with the popular state-of-the-art PSPNet [44],
SegNet [45], UNet [46], FCN-(8, 32) [47], as well as our original RAG-Net
architecture [39]. The extraction performance is shown in Table IV, where we
can observe that RAG-Netv2 achieved the overall best $\mu_{DC}$ score of
0.8697, leading the second-best FCN-8 [47] by 2.78%. Moreover, the
performances of PSPNet [44] and FCN-8 [47] are extremely comparable, as FCN-8
[47] leads PSPNet [44] by only 0.035%. Similarly, FCN-8 [47] leads RAG-Net
[39] by 7.12%. If we look at the performance of each model for extracting
individual regions, we can see that the best score for extracting RNFL is
achieved by FCN-8 [47], though it leads PSPNet [44] by only 0.011% and RAG-
Netv2 by 0.651%. For extracting GC-IPL, the best performance is achieved by
RAG-Netv2, leading the second-best FCN-8 [47] by 6.28%. The best performance
for extracting GCC regions is also achieved by the RAG-Netv2 i.e. $\mu_{DC}$
score of 0.8698. In terms of $\mu_{mp}$, the overall best performance is also
achieved by RAG-Netv2 as shown in Table V, leading the second-best FCN-8 by
1.78%. It leads the second-best FCN-8 by 3.19% and 3.38%, respectively for
extracting GC-IPL and GCC regions. However, for the RNFL extraction, it lags
from both FCN-8 and PSPNet by 1.05% and 0.76%. But overall, RAG-Netv2
outperforms FCN-8 and PSPNet with a larger margin in extracting RNFL, GC-IPL,
and GCC regions. Figure 6 showcases some qualitative comparison of all the
hybrid and conventional segmentation models. Here, scan (A), (J), and (S)
depict glaucomatous pathologies whereas scan (AB) depicts a healthy pathology.
Both RAG-Netv2 and FCN-8 produced good performance (comparing it with the
ground truths) in extracting the RNFL and GC-IPL regions. But RAG-Netv2 has an
upper hand in extracting GC-IPL boundaries from both healthy and glaucomatous
scans (highlighted in green color). Furthermore, the original RAG-Net [39] was
found limited in accurately extract RGC regions especially in scans (D), (M),
(V), and (AE). Such limitation is because RAG-Net [39] cannot well
differentiate similar textural patterns (often depicted in the retinal layers
and boundaries). Moreover, because RAG-Netv2 achieves the overall best
performance for extracting RGC regions (as evident from Table IV, Table V and
Figure 6), besides having the capacity to classify and grade glaucomatous
pathologies, we chose it in the proposed framework for further analysis.
### V-B Extraction of RNFL, GC-IPL and GCC Regions
To the best of our knowledge, all the existing methods (except [35]) which
have been proposed for the extraction of RNFL, GC-IPL, and GCC regions have
been validated either on local in-house datasets or on publicly available
datasets which only contain macular OCT scans. Moreover, the framework
proposed in [35] is designed for screening normal and glaucomatous pathologies
without paying attention to RGC atrophy. The comparison of RAG-Netv2 with
state-of-the-art literature [36, 37] for the extraction of RNFL, GC-IPL and
GCC regions is indirect as the experimental protocols and the datasets were
different (see Table III). Nonetheless, this indirect comparison highlights
the capacity of RAG-Netv2 for extracting the retinal regions while
simultaneously diagnosing and grading the glaucomatous pathologies as compared
to competitive methods. As reported in Table III, our proposed framework lags
behind DRUNET [36] by 5.49% in terms of $\mu_{DC}$. However, their dataset is
not public and its size is almost half that of the AFIO dataset. Furthermore,
DRUNET is only a conventional segmentation model that, unlike RAG-Netv2, does
not possess glaucomatous screening and grading capabilities. The joint
segmentation and classification pipeline, termed bi-decision [37], achieved a
mean dice coefficient of 0.72 for extracting the retinal regions. However, for
glaucoma screening, bi-decision [37] is only validated on a local dataset
(although the authors also verified bi-decision on 110 selected DME-affected
scans from the publicly available Duke dataset [28]).
Here, we want to highlight that we have already validated the original RAG-Net
[39] on 43,613 macular OCT scans from five publicly available datasets
(including the Duke dataset [28]); our emphasis here is on extending our
hybrid framework for the RGC-aware diagnosis and grading of glaucoma using
high-quality ONH SD-OCT scans from a publicly available dataset, and on making
the clinical trials reproducible.
Table III: Comparison of the proposed framework with DRUNET [36] and Bi-
decision [37]. The abbreviations are DS: Dataset Size, DPA: Dataset Publicly
Available, EP: Experimentation Protocol, GS: Glaucomatous Screening, GG:
Glaucomatous Grading, TR: Training, TE: Testing, CV: Cross-Validation.
| Proposed | DRUNET [36] | Bi-decision [37]
---|---|---|---
DS | 196# (ONH) | 100# (ONH) | 1,114*
DPA | Yes | No | No
EP | TR: 137, TE: 59 | TR: 40, TE: 60 | 3-Fold CV
GS | Yes | No | Yes
GG | Yes | No | No
$\mu_{DC}$ | 0.86 | 0.91 | 0.72
* 1,004 are circular OCT scans from the local dataset, and 110 selected macular scans are taken from Duke dataset [28] representing DME pathologies.
# This count represents the dataset size excluding the augmented scans.
Table IV: Comparison of segmentation models in terms of $\mu_{DC}$ for extracting the RNFL, GC-IPL, and GCC regions.
Framework | RNFL | GC-IPL | GCC | Mean
---|---|---|---|---
RAG-Netv2 | 0.8692 | 0.8703 | 0.8698 | 0.8697
RAG-Net [39] | 0.8192 | 0.7508 | 0.7860 | 0.7853
PSPNet [44] | 0.8748 | 0.8151 | 0.8457 | 0.8452
SegNet [45] | 0.8111 | 0.6945 | 0.7555 | 0.7537
UNet [46] | 0.8216 | 0.8253 | 0.8234 | 0.8234
FCN-32 [47] | 0.8638 | 0.7470 | 0.8083 | 0.8064
FCN-8 [47] | 0.8749 | 0.8156 | 0.8460 | 0.8455
Table V: Comparison of segmentation models in terms of $\mu_{mp}$ for extracting the RNFL, GC-IPL, and GCC regions.
Framework | RNFL | GC-IPL | GCC | Mean
---|---|---|---|---
RAG-Netv2 | 0.8410 | 0.8108 | 0.7915 | 0.8144
RAG-Net [39] | 0.7661 | 0.6701 | 0.6016 | 0.6792
PSPNet [44] | 0.8475 | 0.7685 | 0.7229 | 0.7796
SegNet [45] | 0.7402 | 0.6978 | 0.6630 | 0.7003
UNet [46] | 0.8082 | 0.7796 | 0.7291 | 0.7723
FCN-8 [47] | 0.8500 | 0.7849 | 0.7647 | 0.7999
FCN-32 [47] | 0.7623 | 0.7080 | 0.6867 | 0.7190
Figure 6: Performance comparison of deep segmentation models for the
extraction of RNFL (shown in red color) and GC-IPL regions (shown in green
color). Left to right, column-wise: Original scans, ground truths, the
performance of RAG-Netv2, RAG-Net, PSPNet, SegNet, UNet, FCN-8, and FCN-32.
### V-C Classification of Glaucomatous Scans
Since the thinning of the RNFL, GC-IPL, and GCC profiles highlights the
degeneration of RGCs, which directly reflects glaucomatous progression, we
have utilized the encoder end of RAG-Netv2 to perform RGC-aware
classification of healthy and glaucomatous pathologies. After training
RAG-Netv2 for extracting the RGC regions, the trained weights are used for
screening healthy and glaucomatous pathologies through the RAG-Netv2
classification unit. The performance of the RAG-Netv2 classification unit for
screening glaucoma is measured through standard metrics such as the $A_{C}$,
$T_{PR}$, $T_{NR}$, $P_{PV}$, $AUC$ and $F_{1}$ scores. As reported in Table
VI, the proposed framework achieves 0.958% better results as compared with
[17] and [12], and 12.5% better results as compared to [32] in terms of
accuracy. The comparison with [32] is indirect as the authors only used fundus
imagery for the classification of glaucomatous pathologies (we also evaluated
the RAG-Netv2 classification unit for screening glaucoma using fundus images;
please see the supplementary material for more details on these additional
experiments). However, the comparison with [17] and [12] is fair and direct as
both of these frameworks were tested on the AFIO dataset using the same
experimental protocols. We can further observe the capacity of the proposed
framework in screening glaucomatous pathologies through the $T_{PR}$ ratings
in Table VI, where RAG-Netv2 achieves 1.73% better performance than [17],
and also leads [12] by 4.41%. This performance gain is achieved because RAG-
Netv2 pays attention to the pathological variations of RGCs related to the
progression of glaucoma. The classification performance of RAG-Netv2 is also
evaluated through ROC curve as shown in Figure 7, and compared with [35] in
terms of $AUC$ as shown in Table VI, where we can see that RAG-Netv2 leads
[35] by 4.77%. However, this comparison is also indirect as both frameworks
are tested on different datasets.
Table VI: Comparison of classification performance of RAG-Netv2 with state-of-the-art solutions. Bold indicates the best scores while the second-best scores are underlined.
Metric | Proposed | [17] | [12] | [32] | [35] | [37]
---|---|---|---|---|---|---
$A_{C}$ | 0.9491 | 0.9400 | 0.9400 | 0.8300 | - | 0.8140
$T_{PR}$ | 0.9714 | 0.9545 | 0.9285 | 0.8846 | - | -
$T_{NR}$ | 0.9166 | 0.9285 | 0.9545 | 0.7708 | - | -
$F_{PR}$ | 0.0834 | 0.0714 | 0.0455 | 0.2292 | - | -
$P_{PV}$ | 0.9444 | 0.9629 | 0.9629 | 0.8604 | - | -
$F_{1}$ | 0.9577 | 0.9453 | 0.9453 | 0.8723 | - | -
$AUC$ | 0.9871 | - | - | - | 0.9400 | 0.8640
Figure 7: ROC curve highlighting the performance of RAG-Netv2 for classifying
and grading glaucoma.
### V-D Profile Extraction
The novel aspect of the proposed framework is that it can grade glaucomatous
progression by analyzing pathological variations of RGCs through RNFL, GC-IPL,
and GCC regions represented within the ONH SD-OCT scans. Table VII reports the
mean RNFL, GC-IPL, and GCC region thickness ranges for early and advanced
stage glaucomatous pathologies. We can observe here that RAG-Netv2 achieved a
mean RNFL thickness of 93.50$\mu m$ for early glaucomatous suspects and
69.46$\mu m$ for the advanced glaucomatous stage. These RNFL thickness ranges
were obtained from scans of the publicly available AFIO dataset and confirmed
by expert clinicians. They deviate from the ranges defined by [11] by 2.67%
and 3.25%, respectively, for early and advanced stage glaucoma. We can also
observe that the GC-IPL and GCC profiles provide a clear distinction between
glaucomatous severity levels, and can contribute positively
towards grading it. In Figure 8, we report some qualitative examples
highlighting the segmentation performance of RAG-Netv2. The scans in this
figure (except the original ones in the first column) are intentionally
converted to grayscale so that the extracted regions can be visualized. Scans
(A), (E), and (I) represent normal, early glaucomatous suspect, and advanced-
stage glaucoma, respectively, from which RAG-Netv2 has accurately extracted
the RNFL, GC-IPL, and GCC profiles. For example, we can see the RNFL thinning
in scans (F) and (J) as compared to (B), and how precisely it is captured by
RAG-Netv2. Also, the yellow color in the scans of Figure 8 (except the first
column) indicates the overlap of the extracted retinal regions with the ground
truth, whereas other colors indicate incorrectly segmented regions (these
regions are very small; please zoom in to best see them).
Table VII: Mean RNFL, GC-IPL, and GCC thickness profiles (in $\mu m$) extracted by the proposed framework for early and advanced glaucomatous pathologies.
Mean Thickness | Early | Advanced
---|---|---
RNFL | 93.50$\pm$9.84 | 69.46$\pm$5.17
GC-IPL | 62.23$\pm$5.67 | 33.96$\pm$7.53
GCC | 155.73$\pm$13.10 | 103.42$\pm$10.27
RNFL [11] | 91.00$\pm$7.28 | 67.20$\pm$7.06
Figure 8: Examples of normal, early and advanced stage glaucomatous scans from
which the RNFL, GC-IPL, and GCC regions are extracted. The scans are
intentionally made grayscale to highlight the extraction results as compared
to the ground truths. The yellow color indicates complete overlap with the
ground truths while other colors indicate incorrect extractions. Please zoom-
in to best see the results. Figure 9: Distinctive RNFL, GC-IPL, and GCC
thickness profiles to discriminate early and advanced stages glaucomatous
pathologies.
In Figure 9 we report plots of the class distribution for the early and
advanced glaucomatous classes in the feature space defined by RNFL, GC-IPL,
and GCC profiles. We can observe that the thinning of these profiles can
clearly distinguish glaucoma progression. We notice that the training samples
in Figure 9 are highly clustered compared to the testing samples. This is due
to the fact that these training samples are generated through data
augmentation and, thus, highly correlated with the actual samples. Overall,
these profiles show clear discrimination and great potential for use with an
SVM classifier for grading glaucoma progression.
### V-E RGC Aware Grading of Glaucoma
Using the RNFL, GC-IPL, and GCC profiles as distinctive features, the proposed
system provides RGC-aware grading of glaucoma using the SVM classification
model. Both the classification and grading performance of the proposed
framework are shown through confusion matrices in Figure 10 (A) and Figure 10
(B), respectively. Moreover, Figure 10 (C) reports the grading performance
using only the mean RNFL thickness threshold (see Table VII). The
classification between normal and glaucomatous scans is performed through the
RAG-Netv2 classification unit by directly passing the preprocessed ONH SD-OCT
scans, whereas the RGC-aware grading of glaucoma as early suspect or advanced
stage (reported in Figure 10 (B)) is performed through the SVM based on the
RNFL, GC-IPL, and GCC profiles. In Figure 10 (B), we can see that the proposed
framework achieved an accuracy of 0.9117 for screening early and advanced
stage glaucoma cases, which is 16.123% superior to the grading approach based
on analyzing only the RNFL thickness. Also, out of 34 test samples in Figure
10 (B), three are falsely graded, i.e., two early cases are graded as severe
and one severe case is graded as having early glaucoma symptoms. However,
these three misclassifications have little impact on the overall performance
of the proposed system because all of them have been correctly classified as
having glaucoma. In Figure 10 (A), we notice one misclassification (out of
35): a glaucomatous scan predicted as normal by our system. This is a
challenging boundary case showing very early glaucomatous symptoms, i.e.,
having a cup-to-disc ratio of 0.325. All four ophthalmologists considered this
scan an early suspect (please see the recommendations of the ophthalmologists
in the patient record sheet within the dataset for more detail; the
misclassified case is named ’149155_20150914_095855_Color_R_001.jpg’).
### V-F Clinical Trials
We have also performed a series of clinical trials in which we cross-validated
the predictions of the proposed framework with the recommendations from the
expert clinicians. Table VIII reports samples of the clinical trials along
with the recommendations of the four expert ophthalmologists (publicly
available in the dataset package [3]). Furthermore, we report in Table IX the
Pearson correlation analysis along with its statistical significance
showcasing the clinical validation of the proposed framework with each
clinician. $r_{c}$ ranges from -1 to +1, where -1 indicates a strong negative
association between the two entities, +1 a strong positive association, and 0
that the two entities are unrelated. Furthermore, a
$p$-value $<$ 0.05 indicates that the obtained $r_{c}$ score is statistically
significant. From Table IX, we can observe that although the recommendations
contradict each other (as they are marked by each clinician based on his/her
own experience), the proposed framework achieves the highest correlation with
Clinician-4 ($r_{c}$ = 0.9236, with a $p$-value of 4.40 $\times$ $10^{-58}$).
Moreover, the minimum $r_{c}$ score ($r_{c}=$ 0.6863) is achieved with the
grading of Clinician-3, but it is still quite significant. In the $8^{th}$ row
of Table VIII, we have an exception where all clinicians marked the scan as
having early glaucomatous symptoms, while the proposed framework grades it as
depicting advanced stage pathology. This indicates that the proposed framework
is tuned to prioritize advanced stage subjects that need immediate attention
and treatment to prevent vision
loss and blindness. We also report in Table VIII the fundus scan with each
case to help the readers (and other clinicians) in cross-verifying the
predictions made by the proposed framework by correlating the cup-to-disc
ratios. Through these fundus scans, we can also notice that the ONH SD-OCT
scans graded as having early or advanced stage glaucoma typically contain a
cup-to-disc ratio of 0.6 or above, which is normally considered indicative of
glaucoma [48].
Figure 10: Confusion matrices representing (A) classification of healthy and glaucomatous ONH SD-OCT scans, (B) grading of early suspects and advanced-stage glaucoma, and (C) glaucomatous grading using the mean RNFL thickness threshold.
Table VIII: Clinical validation of the proposed framework for glaucomatous screening with respect to the recommendations from four expert ophthalmologists. C1: Clinician-1, C2: Clinician-2, C3: Clinician-3, C4: Clinician-4, PF: Proposed Framework, AG: Advanced Glaucoma, EG: Early Glaucoma, H: Healthy. Fundus scans are provided with each ONH scan for reader cross-verification.
ONH Scan | Fundus | C1 | C2 | C3 | C4 | PF
---|---|---|---|---|---|---
| | AG | H | EG | AG | AG
| | AG | H | EG | AG | AG
| | EG | EG | EG | EG | EG
| | EG | EG | EG | EG | EG
| | EG | EG | H | EG | EG
| | H | H | H | H | H
| | AG | AG | AG | AG | AG
| | EG | EG | EG | EG | AG
| | H | H | EG | H | H
| | H | H | H | H | H
Table IX: Clinical validation of the proposed framework through the Pearson correlation coefficient ($r_{c}$) and its statistical significance ($p$-value).
Metric | Clinician-1 | Clinician-2 | Clinician-3 | Clinician-4
---|---|---|---|---
$r_{c}$ | 0.7260 | 0.8416 | 0.6863 | 0.9236
$p$-value | 1.04 $\times$ $10^{-23}$ | 6.10 $\times$ $10^{-38}$ | 2.10 $\times$ $10^{-20}$ | 4.40 $\times$ $10^{-58}$
## VI Discussion and Conclusion
In this work, we proposed a fully automated system for the classification and
grading of glaucoma from ONH SD-OCT scans. Unlike other frameworks that rely
on cup-to-disc ratios for screening glaucoma, the proposed system analyzes the
pathological changes related to the degeneration of RGCs through RNFL, GC-IPL,
and GCC thickness profiles. We propose a hybrid convolutional network (RAG-
Netv2) for the extraction of these profiles, coupled with an SVM classifier
for the classification and the grading of healthy and glaucomatous
pathologies. The experiments demonstrated the superiority of our framework in
screening early and advanced glaucoma cases as compared to state-of-the-art
solutions relying on cup-to-disc ratios, as evidenced by the $F_{1}$ score of
0.9577. The preeminence of our system stems from the newly proposed
architectural variants in RAG-Netv2, integrating context-aware modules, built
on residual atrous convolutions, along with the feature pyramid block. These
proposed variants boosted the capacity of RAG-Netv2 for discriminating the
retinal regions, as reflected by the $\mu_{DC}$ score of 0.8697, outperforming
popular deep segmentation models. Apart from this, RAG-Netv2 significantly
reduces the total number of trainable and non-trainable parameters by 91.04%
as compared to the original RAG-Net architecture. This improvement also
relates to the addition of context-aware modules replacing the standard
convolutional blocks with atrous convolutions, making RAG-Netv2 a lightweight
architecture for screening and monitoring glaucoma
progression. The introduction of context-aware modules in RAG-Netv2 addresses
the limitations of the original RAG-Net architecture which, while
outperforming state-of-the-art frameworks for lesion extraction, showed
limitations in differentiating similar textural patterns in retinal layers and
boundaries, as shown in Figure 6. However, the implications of introducing
context-aware modules need to be thoroughly tested for lesion extraction from
macular pathologies if we want to extend the modified RAG-Net architecture to
the analysis of lesion-aware maculopathy. This investigation will be part of
our future work.
## Acknowledgment
This work is supported by a research fund from Khalifa University: Ref:
CIRA-2019-047. We are also thankful to the four expert ophthalmologists for
providing the clinical validations of the retinal scans within the AFIO
dataset.
## References
* [1] R. N. Weinreb, T. Aung, and F. A. Medeiros, “The Pathophysiology and Treatment of Glaucoma: A Review,” JAMA, vol. 311, no. 18, pp. 1901-1911, 2014.
* [2] C. F. Burgoyne, J. C. Downs, A. J. Bellezza, J.-K. F. Suh, and R. T. Hart, “The optic nerve head as a biomechanical structure: A new paradigm for understanding the role of IOP-related stress and strain in the pathophysiology of glaucomatous optic nerve head damage,” Progress in Retinal and Eye Research, vol 24, pp. 39-73, 2005.
* [3] H. Raja, M. U. Akram, S. G. Khawaja, M. Arslan, A. Ramzan, and N. Nazir, “Data on OCT and fundus images for the detection of glaucoma,” Data in Brief, Volume 29, April 2020.
* [4] T. Hassan, M. U. Akram, B. Hassan, A. Nasim, and S. A. Bazaz, “Review of OCT and fundus images for detection of Macular Edema,” IEEE 12th International Conference on Imaging Systems and Techniques, 2015.
* [5] J. E. A. Majoor, K. A. Vermer, E. R. Andrinopoulou, and H. G. Lemij, “Contrast-to-Noise Ratios for Assessing the Detection of Progression in the Various Stages of Glaucoma,” Translational Vision Science & Technology, vol. 8, no. 3, pp. 1-12, 2019.
* [6] D. S. Grewal, M. Sehi, J. D. Paauw, and D. S. Greenfield, “Detection of Progressive Retinal Nerve Fiber Layer Thickness Loss With Optical Coherence Tomography Using 4 Criteria for Functional Progression,” Wolters Kluwer Journal of Glaucoma.
* [7] M. G. Wollstein, M. J. S. Schuman, M. L. L. Price, M. A. Aydin, S. P. C. Stark, M. E. Hertzmark, M. E. Lai, M. H. Ishikawa, M. C. Mattox, P. J. G. Fujimoto, and P. L. A. Paunescu, “Spectral-Domain Optical Coherence Tomography for Glaucoma Diagnosis,” Open Ophthalmol J, vol. 9, pp. 68-77, 2015.
* [8] C. K. S. Leung, M. Yu, R. N. Weinreb, G. Lai, G. Xu, and D. S. C. Lam, “Retinal nerve fiber layer imaging with spectral-domain optical coherence tomography: patterns of retinal nerve fiber layer progression,” Ophthalmology, vol. 119, no. 9, pp. 1856-66, 2012.
* [9] T. Ojima, T. Tanabe, M. Hangai, S. Yu, S. Morishita, and N. Yoshimura, “Measurement of Retinal Nerve Fiber Layer Thickness and Macular Volume for Glaucoma Detection Using Optical Coherence Tomography,” Jpn J Ophthalmol 2007, vol. 51, pp. 197-203, 2007.
* [10] F. A. Medeiros, L. M. Zangwill, C. Bowd, K. Mansouri, and R. N. Weinreb, “The Structure and Function Relationship in Glaucoma: Implications for Detection of Progression and Measurement of Rates of Change,” Investigative Ophthalmology & Visual Science, vol. 53, no. 11, pp. 6939-6946, 2012.
* [11] A. E. A. El-Naby, H. Y. Abouelkheir, H. T. Al-Sharkawy, and T. H. Mokbel, “Correlation of retinal nerve fiber layer thickness and perimetric changes in primary open-angle glaucoma,” Journal of the Egyptian Ophthalmological Society, vol. 111, issue 1, 2018.
* [12] T. Shehryar, M. U. Akram, S. Khalid, S. Nasreen, A. Tariq, A. Perwaiz, and A. Shaukat, “Improved automated detection of glaucoma by correlating fundus and SD‐OCT image analysis,” Int J Imaging Syst Technol., 1– 20, 2020.
* [13] X. Sun, Y. Xu, M. Tan, H. Fu, W. Zhao, T. You, and J. Liu, “Localizing Optic Disc and Cup for Glaucoma Screening via Deep Object Detection Networks,” Computational Pathology and Ophthalmic Medical Image Analysis, Springer, Cham, pp. 236-244, 2018.
* [14] X. Chen, Y. Xu, D. W. K. Wong, T. Y. Wong, and J. Liu, “Glaucoma detection based on deep convolutional neural network,” 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2015.
* [15] J. Cheng, J. Liu, Y. Xu, F. Yin, D. W. K. Wong, N.-M. Tan, D. Tao, C.-Y. Cheng, T. Aung, and T. Y. Wong, “Superpixel Classification Based Optic Disc and Optic Cup Segmentation for Glaucoma Screening,” IEEE Transactions on Medical Imaging, vol. 32, issue 6, pp. 1019-1032, 2013.
* [16] H. Fu, J. Cheng, Y. Xu, C. Zhang, D. W. K. Wong, J. Liu, and X. Cao, “Disc-Aware Ensemble Network for Glaucoma Screening From Fundus Image,” IEEE Transactions on Medical Imaging, vol 37, issue 11, pp. 2493-2501, 2018.
* [17] T. Khalil, M. U. Akram, H. Raja, A. Jameel, and I. Basit, “Improved automated detection of glaucoma from fundus image using hybrid structural and textural features,” IEEE Access, Vol. 6, pp. 4560-4576, 2018.
* [18] F. A. Almobarak, N. O’Leary, A. S. C. Reis, G. P. Sharpe, D. M. Hutchison, M. T. Nicolela, and B. C. Chauhan, “Automated segmentation of optic nerve head structures with optical coherence tomography,” Invest Ophthalmol Vis Sci., 26, 55(2), 1161-8, 2014.
* [19] R. Kromer, S. Rahman, F. Filev, and M. Klemm, “An Approach for Automated Segmentation of Retinal Layers In Peripapillary Spectralis SDOCT Images Using Curve Regularisation,” Insights in Ophthalmology, vol. 1, no. 10, pp. 1-6, 2017.
* [20] W. Duan, Y. Zheng, Y. Ding, S. Hou, Y. Tang, Y. Xu, M. Qin, J. Wu, and D. Shen, “A Generative Model for OCT Retinal Layer Segmentation by Groupwise Curve Alignment,” IEEE Access, vol. 6, pp. 25130-25141, 2018.
* [21] S. Niu, Q. Chena, L. D. Sisternesb, D. L. Rubinb, W. Zhangc, and Q. Liu, “Automated retinal layers segmentation in SD-OCT images using dual-gradient and spatial correlation smoothness constraint,” Computers in Biology and Medicine, vol. 54, pp. 116–128, 2014.
* [22] R. Kafieh, H. Rabbani, F. Hajizadeh, M. D. Abramoff, and M. Sonka, “Thickness Mapping of Eleven Retinal Layers Segmented Using the Diffusion Maps Method in Normal Eyes,” Hindawi Journal of Ophthalmology, vol. 2015, p. 14. Article ID 259123, March 2015.
* [23] A. M. Bagci, R. Ansari, and M. Shahid, “A Method for Detection of Retinal Layers by Optical Coherence Tomography Image Segmentation,” IEEE/NIH Life Science Systems and Applications Workshop, 2007.
* [24] M. K. Abdellatif, Y. A. M. Elzankalony, A. A. A. Ebeid, and W. M. Ebeid, “Outer Retinal Layers’ Thickness Changes in relation to Age and Choroidal Thickness in Normal Eyes,” Hindawi Journal of Ophthalmology, vol. 2019, pp. 8, Article ID 1698967, July 2019.
* [25] T. Hassan, M. U. Akram, B. Hassan, A. M. Syed, and S. A. Bazaz, “Automated segmentation of subretinal layers for the detection of macular edema,” Applied Optics Vol. 55, Issue 3, pp. 454-461, 2016.
* [26] T. Hassan, M. U. Akram, M. F. Masood, and U. Yasin, “Deep structure tensor graph search framework for automated extraction and characterization of retinal layers and fluid pathology in retinal SD-OCT scans,” Computers in Biology and Medicine, Volume 105, Pages 112-124, February 2019.
* [27] T. Hassan, M. U. Akram, A. Shaukat, S. G. Khawaja, and B. Hassan, “Structure Tensor Graph Searches Based Fully Automated Grading and Profiling of Maculopathy From Retinal OCT Images,” IEEE Access, vol. 6, pp. 44644 - 44658, 2018.
* [28] S. J. Chiu, M. J. Allingham, P. S. Mettu, S. W. Cousins, J. A. Izatt, and S. Farsiu, “Kernel regression based segmentation of optical coherence tomography images with diabetic macular edema,” Biomedical Optics Express, 6(4), pp. 1172–1194, 2015.
* [29] G. Ometto, I. Moghul, G. Montesano, A. Hunter, N. Pontikos, P. R. Jones, P. A. Keane, X. Liu, A. K. Denniston, and D. P. Crabb, “ReLayer: a Free, Online Tool for Extracting Retinal Thickness From Cross-Platform OCT Images,” Trans Vis Sci Tech, vol. 8, no. 3, p. Article 25, 2019.
* [30] E. Gao, B. Chen, J. Yang, F. Shi, W. Zhu, D. Xiang, H. Chen, M. Zhang, and X. Chen., “Comparison of Retinal Thickness Measurements between the Topcon Algorithm and a Graph-Based Algorithmin Normal and Glaucoma Eyes,” PLOS ONE, vol. 10, no. 6, p. e0128925., 2015.
* [31] M. A. Mayer, J. Hornegger, and T.-R. P. Mardin, C. Y., “Retinal Nerve Fiber Layer Segmentation on FD-OCT Scans of Normal Subjects and Glaucoma Patients,” Biomedical Optics Express, vol. 1, no. 5, pp. 1358-1381, 2010.
* [32] T. Khalil, M. U. Akram, S. Khalid, and A. Jameel, “Improved automated detection of glaucoma from fundus image using hybrid structural and textural features,” IET Image Processing, Volume: 11 , Issue: 9, 2017.
* [33] E. B. Mariottoni, A. A. Jammal, C. N. Urata, S. I. Berchuck, A. C. Thompson, T. Estrela, and F. A. Medeiros, “Quantification of Retinal Nerve Fibre Layer Thickness on Optical Coherence Tomography with a Deep Learning Segmentation-Free Approach,” Scientific Reports, vol. 10, p. Article 402, 2020.
* [34] P. Zang, J. Wang, T. T. Hormel, L. Liu, D. Huang, and Y. Jia, “Automated segmentation of peripapillary retinal boundaries in oct combining a convolutional neural network and a multi-weights graph search,” Biomedical Optics Express, Vol. 10, Issue 8, pp. 4340-4352, 2019.
* [35] S. Maetschke, B. Antony, H. Ishikawa, G. Wollstein, J. Schuman, and R. Garnavi, “A feature agnostic approach for glaucoma detection in oct volumes,” PLOS ONE, 14(7): e0219126, 2019.
* [36] S. K. Devalla, P. K. Renukanand, B. K. Sreedhar, G. Subramanian, L. Zhang, S. Perera, J.-M. Mari, K. S. Chin, T. A. Tun, N. G. Strouthidis, T. Aung, A. H. Thiéry, and M. J. A. Girard, “DRUNET: a dilated-residual U-Net deep learning network to segment optic nerve head tissues in optical coherence tomography images,” Biomedical Optics Express, Vol. 9, Issue 7, pp. 3244-3265, 2018.
* [37] J. Wang, Z. Wang, F. Li, G. Qu, Y. Qiao, H. Lv, and X. Zhang, “Joint retina segmentation and classification for early glaucoma diagnosis,” Biomedical Optics Express, Vol. 10, Issue 5, pp. 2639-2656, 2019.
* [38] J. Bigun, G. Granlund, and J. Wiklund, “Multidimensional Orientation Estimation with Applications to Texture Analysis and Optical Flow,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 1991.
* [39] T. Hassan, M. U. Akram, N. Werghi, and N. Nazir, “RAG-FW: A hybrid convolutional framework for the automated extraction of retinal lesions and lesion-influenced grading of human retinal pathology,” IEEE Journal of Biomedical and Health Informatics, 10.1109/JBHI.2020.2982914, March 2020.
* [40] T. Hassan, M. U. Akram, and N. Werghi, “Evaluation of Deep Segmentation Models for the Extraction of Retinal Lesions from Multi-modal Retinal Images,” arXiv:2006.02662, June 2020.
* [41] Z. Wang and S. Ji, “Smoothed Dilated Convolutions for Improved Dense Prediction,” 24th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2018.
* [42] P. Wang, P. Chen, Y. Yuan, D. Liu, Z. Huang, X. Hou, and G. Cottrell, “Understanding convolution for semantic segmentation,” IEEE Winter Conference on Applications of Computer Vision (WACV), 2018.
* [43] M. D. Zeiler, “ADADELTA: An Adaptive Learning Rate Method,” arXiv:1212.5701, 2012.
* [44] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, “Pyramid Scene Parsing Network,” IEEE International Conference on Computer Vision and Pattern Recognition, 2017.
* [45] V. Badrinarayanan, A. Kendall, and R. Cipolla, “SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation,” IEEE Transactions on Pattern Analysis and Machine Intelligence, December 2017.
* [46] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” Medical Image Computing and Computer Assisted Intervention (MICCAI), 2015.
* [47] J. Long, E. Shelhamer, and T. Darrell, “Fully Convolutional Networks for Semantic Segmentation,” in IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
* [48] “Eye Exam and tests for Glaucoma diagnosis,” The Eye Digest. The University of Illinois Eye and Ear Infirmary. Archived from the original on 8 July 2012.
# The KPZ equation and the directed landscape
Xuan Wu Department of Mathematics, University of Chicago, Eckhart Hall 5734 S
University Ave Chicago IL, 60637<EMAIL_ADDRESS>
###### Abstract.
This paper studies the convergence of the narrow wedge solutions of the KPZ
equation to the Airy sheet and the directed landscape. Our main approach is
inspired by the polymer free energy interpretation of the KPZ equation. We
derive a simple inequality relating Busemann functions and quantiles of the
polymer measure (Lemma 4.6), which is one of the main ingredients for
proving the convergence. Another main ingredient is Proposition 3.5, an
identity for the geometric RSK correspondence.
## 1\. Introduction
### 1.1. Kardar-Parisi-Zhang equation
This paper investigates the large-time asymptotics for solutions to the
Kardar-Parisi-Zhang (KPZ) equation. The KPZ equation was introduced in 1986 by
Kardar, Parisi and Zhang [KPZ86] as a model for random interface growth. In
one spatial dimension (sometimes called (1+1)-dimensional, to emphasize that
time contributes one more dimension), it describes the evolution of a function
$\mathcal{H}(t,y)$ recording the height of an interface at time $t$ above
position $y$. The KPZ equation is written formally as a stochastic partial
differential equation (SPDE),
(1.1)
$\partial_{t}\mathcal{H}=\frac{1}{2}\partial_{y}^{2}\mathcal{H}+\frac{1}{2}(\partial_{y}\mathcal{H})^{2}+\mathscr{\dot{W}},$
where $\mathscr{\dot{W}}$ is a space-time white noise on $\mathbb{R}^{2}$ (for
mathematical background or literature review, see [Cor12, QS15] for instance).
The KPZ equation (1.1) is a canonical member of the associated KPZ
universality class. Its large-time asymptotics is believed to be governed by a
universal Markov process called the KPZ fixed point. We will discuss
progress in this direction below and refer readers to [MQR21] for the
construction of the KPZ fixed point.
The KPZ equation is related to the stochastic heat equation (SHE) with
multiplicative noise through the Hopf–Cole transformation. Denote by
$\mathcal{Z}(t,y)$ the solution to the following SHE,
(1.2)
$\partial_{t}\mathcal{Z}=\frac{1}{2}\partial_{y}^{2}\mathcal{Z}+\mathcal{Z}\mathscr{\dot{W}}.$
The Hopf-Cole solution to the KPZ equation (1.1) is defined by taking
$\mathcal{H}(t,y)=\log\mathcal{Z}(t,y).$ It was first proved in [Mue91] that
$\mathcal{Z}(t,y)$ is almost surely strictly positive (with positive initial
conditions), which justifies the validity of the transform. The fundamental
solutions to the SHE (1.2) are of great importance. For fixed
$(s,x)\in\mathbb{R}^{2}$, we denote by $\mathcal{Z}(s,x;t,y)$ the solution to
(1.2) on $t>s$, $y\in\mathbb{R}$ with the delta initial condition
$\mathcal{Z}(s,x;s,y)=\delta(y-x)$. We take a logarithm and define the narrow
wedge solutions to (1.1):
(1.3) $\displaystyle\mathcal{H}(s,x;t,y)\coloneqq\log\mathcal{Z}(s,x;t,y).$
Let $\mathbb{R}^{4}_{+}\coloneqq\\{(s,x,t,y)\in\mathbb{R}^{4}\,|\,s<t\\}$. It
was recently proved in [Alb+22] that $\\{\mathcal{Z}(s,x;t,y),\
(s,x;t,y)\in\mathbb{R}^{4}_{+}\\}$ can be coupled in one probability space to
form a process on $\mathbb{R}_{+}^{4}$ with many desired features. In the
following theorem, we collect some of the results in [Alb+22]. We formulate
them in terms of narrow wedge solutions which are more suitable for our
purpose.
###### Theorem 1.1.
There exists a coupling of narrow wedge solutions $\\{\mathcal{H}(s,x;t,y),\
(s,x;t,y)\in\mathbb{R}^{4}_{+}\\}$ with the following properties.
1. (1)
$\mathcal{H}(s,x;t,y)$ is a random continuous function on $\mathbb{R}^{4}_{+}$.
2. (2)
Almost surely for all $(s,x,t,y)\in\mathbb{R}^{4}_{+}$ and $r\in(s,t)$, it
holds that
(1.4)
$\displaystyle\exp\big{(}\mathcal{H}(s,x;t,y)\big{)}=\int_{-\infty}^{\infty}\exp\big{(}\mathcal{H}(s,x;r,z)+\mathcal{H}(r,z;t,y)\big{)}\,dz.$
3. (3)
For any fixed $(r,u)\in\mathbb{R}^{2}$, as
$C(\mathbb{R}^{4}_{+},\mathbb{R})$-valued random variables,
$\mathcal{H}(s+r,x+u;t+r,y+u)\overset{d}{=}\mathcal{H}(s,x;t,y).$ Also,
$\mathcal{H}(-t,y;-s,x)\overset{d}{=}\mathcal{H}(s,x;t,y).$
4. (4)
Fix finitely many disjoint open intervals $\\{(s_{j},t_{j})\\}_{j=1}^{m}$. The
random functions $\\{\mathcal{H}(s_{j},\cdot;t_{j},\cdot)\\}_{j=1}^{m}$ are
independent.
For $t=T>s=0$ fixed, the marginal $\mathcal{H}(0,x;T,y)$, viewed as a random
continuous function on $\mathbb{R}^{2}$, is called a KPZ sheet. We denote it
by
(1.5) $\mathcal{H}^{T}(x,y)\coloneqq\mathcal{H}(0,x;T,y).$
### 1.2. The Airy line ensemble, Airy sheets and the directed landscape
In this subsection, we introduce several central objects in the KPZ
universality class: the Airy line ensemble, Airy sheets and the directed
landscape.
###### Definition 1.2.
The stationary Airy line ensemble
$\tilde{\mathcal{A}}=\\{\tilde{\mathcal{A}}_{1}>\tilde{\mathcal{A}}_{2}>\cdots\\}$
consists of countably many random functions indexed by $\mathbb{N}$. The law of
$\tilde{\mathcal{A}}$ is uniquely determined by its determinantal structure.
More precisely, for any finite set
$I=\\{u_{1},\cdots,u_{k}\\}\subset\mathbb{R}$, the point process on
$I\times\mathbb{R}$ given by
$\\{(s,\tilde{\mathcal{A}}_{i}(s)):i\in\mathbb{N},s\in I\\}$ is a
determinantal point process with kernel given by
$K(s_{1},x_{1};s_{2},x_{2})=\left\\{\begin{array}[]{cc}\int_{0}^{\infty}e^{-z(s_{1}-s_{2})}\textup{Ai}(x_{1}+z)\textup{Ai}(x_{2}+z)dz&\textrm{if}\quad
s_{1}\geq s_{2},\\\
-\int_{-\infty}^{0}e^{-z(s_{1}-s_{2})}\textup{Ai}(x_{1}+z)\textup{Ai}(x_{2}+z)dz&\textrm{if}\quad
s_{1}<s_{2},\end{array}\right.$
where Ai is the Airy function. The (parabolic) Airy line ensemble
${\mathcal{A}}=\\{{\mathcal{A}}_{1}>{\mathcal{A}}_{2}>\dots\\}$ is defined by
(1.6) ${\mathcal{A}}_{i}(x):=\tilde{\mathcal{A}}_{i}(x)-x^{2}.$
The finite-dimensional marginals of the stationary Airy line ensemble were
introduced by Prähofer and Spohn [PS02], where the object was called the “multi-line
Airy process.” Later, Corwin and Hammond [CH14] showed that $\mathcal{A}$ can
be realized as a random continuous function on $\mathbb{N}\times\mathbb{R}$
through the Brownian Gibbs property. The first indexed random function,
$\tilde{\mathcal{A}}_{1}$, is of particular importance and is known as the
$\textup{Airy}_{2}$ process.
In the monumental work [DOV18], Dauvergne, Ortmann and Virág constructed Airy
sheets and the directed landscape via the Airy line ensemble. The directed
landscape can be viewed as “fundamental solutions” to the KPZ fixed point and
Airy sheets are fixed time marginals of the directed landscape. We follow the
presentation in [DOV18] and define Airy sheets and the directed landscape
through their characterizing properties. We remark that it was proved in
[DOV18] that those properties uniquely determine the laws of Airy sheets and
the directed landscape respectively.
###### Definition 1.3.
The Airy sheet $\mathcal{S}(x,y)$ is a $C(\mathbb{R}^{2},\mathbb{R})$-valued
random variable which can be coupled with the Airy line ensemble $\mathcal{A}$
with the following properties.
1. (1)
For all $t\in\mathbb{R}$, $\mathcal{S}(\cdot+t,\cdot+t)$ has the same distribution as
$\mathcal{S}(\cdot,\cdot).$
2. (2)
$\mathcal{S}(0,\cdot)=\mathcal{A}_{1}(\cdot)$.
3. (3)
Almost surely for all $x>0$ and $y_{1},y_{2}$ in $\mathbb{R}$, we have
(1.7)
$\begin{split}\lim_{k\to\infty}\mathcal{A}[(-2^{-1/2}k^{1/2}x^{-1/2},k)\xrightarrow{\infty}(y_{2},1)]-\mathcal{A}[(-2^{-1/2}k^{1/2}x^{-1/2},k)\xrightarrow{\infty}&(y_{1},1)]\\\
&=\mathcal{S}(x,y_{2})-\mathcal{S}(x,y_{1}).\end{split}$
Here $\mathcal{A}[(x,k)\xrightarrow{\infty}(y,1)]$ is the last passage time
from $(x,k)$ to $(y,1)$ on the Airy line ensemble. We refer readers to (2.5)
in Section 2 for the detailed definition. For any $s>0$,
$s\mathcal{S}(s^{-2}x,s^{-2}y)$ is called an Airy sheet of scale $s$.
###### Definition 1.4.
The directed landscape $\mathcal{L}(s,x;t,y)$ is a
$C(\mathbb{R}^{4}_{+},\mathbb{R})$-valued random variable with the following
properties.
1. (1)
Given $s<t$, $\mathcal{L}(s,\cdot;t,\cdot)$ is distributed as an Airy sheet of
scale $(t-s)^{1/3}$.
2. (2)
For finitely many disjoint open intervals $\\{(s_{j},t_{j})\\}_{j=1}^{m}$, the
random functions $\\{\mathcal{L}(s_{j},\cdot;t_{j},\cdot)\\}_{j=1}^{m}$ are
independent.
3. (3)
Almost surely for all $s<r<t$ and $x,y\in\mathbb{R}$, it holds that
$\displaystyle\mathcal{L}(s,x;t,y)=\max_{z\in\mathbb{R}}\big{(}\mathcal{L}(s,x;r,z)+\mathcal{L}(r,z;t,y)\big{)}.$
As a direct consequence of Definition 1.4, we have the following description
of the marginal law of $\mathcal{L}$ when the time variables are restricted to
a finite set.
###### Corollary 1.5.
Fix a finite set $\\{t_{1}<t_{2}<\dots<t_{m}\\}$. Then the restriction of the
directed landscape, $\\{\mathcal{L}(t_{i},\cdot;t_{j},\cdot)\\}$ is uniquely
characterized by the following properties.
1. (1)
For all $1\leq i<j\leq m$, $\mathcal{L}(t_{i},\cdot;t_{j},\cdot)$ is
distributed as an Airy sheet of scale $(t_{j}-t_{i})^{1/3}$.
2. (2)
$\\{\mathcal{L}(t_{i},\cdot;t_{i+1},\cdot)\\}_{i=1}^{m-1}$ are independent.
3. (3)
Almost surely for all $x,y\in\mathbb{R}$ and $1\leq i<k<j\leq m$,
$\displaystyle\mathcal{L}(t_{i},x;t_{j},y)=\max_{z\in\mathbb{R}}\left(\mathcal{L}(t_{i},x;t_{k},z)+\mathcal{L}(t_{k},z;t_{j},y)\right).$
### 1.3. Main results
In this subsection, we apply the $1:2:3$ scaling to $\mathcal{H}(s,x;t,y)$
and state our main results concerning the large-time asymptotics. For $T>0$,
define
(1.8)
$\displaystyle\mathfrak{H}^{T}(s,x;t,y):=2^{1/3}T^{-1/3}\mathcal{H}(Ts,2^{1/3}T^{2/3}x;Tt,2^{1/3}T^{2/3}y)+(t-s)2^{1/3}T^{2/3}/24.$
For $t=1$ and $s=0$ fixed, we call the marginal $\mathfrak{H}^{T}(0,x;1,y)$
the scaled KPZ sheet and denote it by
(1.9) $\mathfrak{H}^{T}(x,y)\coloneqq\mathfrak{H}^{T}(0,x;1,y).$
Note that from (1.5), $\mathfrak{H}^{T}$ can be expressed in terms of the KPZ
sheet $\mathcal{H}^{T}$ as
(1.10) $\mathfrak{H}^{T}(x,y)=
2^{1/3}T^{-1/3}\mathcal{H}^{T}(2^{1/3}T^{2/3}x,2^{1/3}T^{2/3}y)+{2^{1/3}T^{2/3}}/{24}.$
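Indeed, setting $s=0$ and $t=1$ in (1.8) gives $\mathfrak{H}^{T}(0,x;1,y)=2^{1/3}T^{-1/3}\mathcal{H}(0,2^{1/3}T^{2/3}x;T,2^{1/3}T^{2/3}y)+2^{1/3}T^{2/3}/24$, and (1.10) follows since $\mathcal{H}(0,\cdot\,;T,\cdot)=\mathcal{H}^{T}(\cdot,\cdot)$ by (1.5).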
It is conjectured that the KPZ fixed point describes the large-time behavior
of solutions to the KPZ equation (1.1). In [ACQ11], Amir, Corwin and Quastel
gave strong evidence for this conjecture and proved that
$\mathfrak{H}^{T}(0,0;1,0)$ converges to the Tracy-Widom law. Equivalently,
using the notation introduced above, they showed that
$\mathfrak{H}^{T}(0,0;1,0)$ converges in distribution to
$\mathcal{L}(0,0;1,0)$. Recently, a breakthrough was made by two groups,
Quastel-Sarkar [QS23] and Virág [Vir20]. The authors independently proved that
$\mathfrak{H}^{T}(0,0;1,y)$, as a random function on $\mathbb{R}$, converges
in distribution to $\mathcal{L}(0,0;1,y)$. We remark that in [QS23] and
[Vir20], the convergence can be shown for a large class of initial data.
In this paper, we establish the convergence of $\mathfrak{H}^{T}(s,x;t,y)$ to
$\mathcal{L}(s,x;t,y)$ as a four-parameter process and this is the content of
Theorem 1.6. Before stating the theorem, we note that for a topological space
$\mathcal{T}$, we always equip $C(\mathcal{T},\mathbb{R})$, the collection of
continuous functions on $\mathcal{T}$, with the topology of uniform
convergence on compact subsets.
###### Theorem 1.6.
Fix a finite set $\Lambda=\\{t_{1}<t_{2}<\dots<t_{m}\\}$. Then
$\\{\mathfrak{H}^{T}(t_{i},x;t_{j},y)\\}$ converges in distribution to the
directed landscape $\\{\mathcal{L}(t_{i},x;t_{j},y)\\}$ as $T$ goes to
infinity. Here $\mathfrak{H}^{T}(t_{i},x;t_{j},y)$ and
$\mathcal{L}(t_{i},x;t_{j},y)$ are viewed as
$C(\Lambda^{2}_{+}\times\mathbb{R}^{2},\mathbb{R})$-valued random variables
with $\Lambda_{+}^{2}=\\{(s,t)\in\Lambda^{2}\,|\,s<t\\}$.
As a crucial intermediate step, we show the convergence of the scaled KPZ sheet to
the Airy sheet.
###### Theorem 1.7.
The scaled KPZ sheet $\mathfrak{H}^{T}(x,y)$ converges in distribution to the
Airy sheet $\mathcal{S}(x,y)$ as $T$ goes to infinity. Here
$\mathfrak{H}^{T}(x,y)$ and $\mathcal{S}(x,y)$ are viewed as
$C(\mathbb{R}^{2},\mathbb{R})$-valued random variables.
Our approach to Theorems 1.6 and 1.7 adopts the interpretation that
narrow wedge solutions are free energies for the continuum directed random
polymer (CDRP). We derive an identity for the polymer free energy under the
geometric RSK correspondence, Proposition 3.5, which is a main ingredient of
our argument. We consider Busemann functions on the KPZ line ensemble under
the 1:2:3 KPZ scaling. Another key to our argument is an inequality relating
Busemann functions and quantiles of the polymer measure, see Lemma 4.6. These
allow us to establish the characterizing property (1.7) for all subsequential
limits of the scaled KPZ sheet through the Busemann functions. We note that
Lemma 4.6 holds deterministically and can be applied to other polymer models
such as the log-gamma polymer [Sep].
### 1.4. KPZ line ensembles
Motivated by recent developments on solvable directed polymer models,
O’Connell and Warren [OW16] defined multi-layer extension of the SHE. More
precisely, they defined a hierarchy of partition functions
$\mathcal{Z}_{i}(t,y)$,
(1.11)
$\mathcal{Z}_{i}(t,y)=p(t,y)^{i}\sum_{k=0}^{\infty}\int_{\Delta_{k}(t)}\int_{\mathbb{R}^{k}}R_{k}^{(i)}\left((t_{1},x_{1}),\cdots,(t_{k},x_{k})\right)\mathscr{\dot{W}}(dt_{1}dx_{1})\cdots\mathscr{\dot{W}}(dt_{k}dx_{k}),$
where $p(t,y)=(2\pi t)^{-1/2}e^{-y^{2}/(2t)}$ is the heat kernel,
$\Delta_{k}(t)=\\{0<t_{1}<\cdots<t_{k}<t\\}$, and $R_{k}^{(i)}$ is the
$k$-point correlation function for a collection of $i$ non-intersecting
Brownian bridges each of which starts at $0$ at time $0$ and ends at $y$ at
time $t$. For notational simplicity, set $\mathcal{Z}_{0}(t,y)\equiv 1$.
###### Definition 1.8.
For $t>0$ fixed, the KPZ line ensemble is a continuous $\mathbb{N}$-indexed
line ensemble $\mathcal{X}^{t}=\\{\mathcal{X}^{t}_{i}\\}_{i\in\mathbb{N}}$ on
$\mathbb{R}$ given by
(1.12)
$\mathcal{X}^{t}_{i}(y):=\log\left(\frac{\mathcal{Z}_{i}(t,y)}{\mathcal{Z}_{i-1}(t,y)}\right),$
with convention $\mathcal{Z}_{0}\equiv 1$. Note that
$\mathcal{X}^{t}_{1}(\cdot)$ equals $\mathcal{H}(0,0;t,\cdot)$, the time $t$
spatial process of the solution to the KPZ equation (1.1).
The KPZ line ensemble also arises as a scaling limit of the O’Connell-Yor
semi-discrete polymer [OY01]. We will discuss this convergence in more detail in
Section 4. We note that in [CN17], the convergence to the KPZ line ensemble is
proved for a wide range of polymer models.
The nature of our proof of Theorem 1.7 leads us to the following conjecture
which relates the KPZ sheet and the Busemann functions on the KPZ line
ensemble. It can be viewed as a counterpart of (1.7).
###### Conjecture 1.9.
Fix $T>0$. There exists a coupling of the KPZ line ensemble
$\mathcal{X}^{T}(y)$ and the KPZ sheet $\mathcal{H}^{T}(x,y)$ such that the
following holds.
1. (1)
$\mathcal{H}^{T}(0,\cdot)=\mathcal{X}^{T}_{1}(\cdot)$.
2. (2)
For all $x>0$ and $y_{1},y_{2}$ in $\mathbb{R}$, we have
(1.13)
$\lim_{k\to\infty}\bigg{(}\mathcal{X}^{T}[(-kT/x,k)\to(y_{2},1)]-\mathcal{X}^{T}[(-kT/x,k)\to(y_{1},1)]\bigg{)}=\mathcal{H}^{T}(x,y_{2})-\mathcal{H}^{T}(x,y_{1}).$
Here $\mathcal{X}^{T}[(x,k)\to(y,1)]$ stands for the polymer free energy from
$(x,k)$ to $(y,1)$ on $\mathcal{X}^{T}$. We refer readers to (2.5) in Section
2 for the detailed definition.
Even though we do not have a proof for Conjecture 1.9, we are able to reduce
it to certain path regularity for the KPZ line ensemble.
###### Theorem 1.10.
Fix $T>0$. Suppose that for any $\varepsilon>0$ and $x>0$, it holds that
(1.14)
$\displaystyle\sum_{k=1}^{\infty}\mathbb{P}\bigg{(}\left|\mathcal{X}^{T}[(0,k+1)\to(x,1)]-k\log
x+\log k!\right|>\varepsilon k\bigg{)}<\infty.$
Then Conjecture 1.9 holds true.
### 1.5. Outline
Section 2 contains the definition of the semi-discrete polymer and some basic
deterministic properties. In Section 3, we derive a crucial identity related
to the geometric RSK correspondence, (3.3) in Proposition 3.5. In Section 4,
we introduce the O’Connell-Yor polymer and its scaled versions. We prove
Theorems 1.6 and 1.7 in Section 5 and Theorem 1.10 in Section 6. Appendix A
contains proofs for results in Section 2.
### 1.6. Notation
We collect some notation used throughout the paper. The natural numbers are defined
to be $\mathbb{N}=\\{1,2,...\\}$ and $\mathbb{N}_{0}=\mathbb{N}\cup\\{0\\}$.
The positive rational numbers are denoted by $\mathbb{Q}^{+}$. For
$m\leq\ell\in\mathbb{N}_{0}$, we write $\llbracket m,\ell\rrbracket$ for
$\\{m,m+1,m+2,\dots,\ell\\}$. We use a special font $\mathsf{t}$ to denote a
sequence of positive numbers $\\{T_{1}<T_{2}<\dots\\}$ which goes to infinity.
We also denote by $\mathsf{n}$ a sequence of positive integers
$\\{n_{1}<n_{2}<\dots\\}$ which goes to infinity.
For a topological space $\mathcal{T}$, we equip $C(\mathcal{T},\mathbb{R})$,
the collection of continuous functions on $\mathcal{T}$, with the topology of
uniform convergence on compact subsets. A family of
$C(\mathcal{T},\mathbb{R})$-valued random variables converges in distribution
if the corresponding family of measures on $C(\mathcal{T},\mathbb{R})$
converges weakly.
### 1.7. Acknowledgments
The author thanks Alan Hammond for discussions related to the temporal
correlation of the KPZ equation. The author thanks Ivan Corwin for advice on a
draft of this paper.
## 2\. Semi-Discrete Polymer
In this section, we introduce semi-discrete polymers with a deterministic
environment and record some basic properties. The proofs for those properties
can be found in Appendix A.
A semi-discrete environment is given by finitely or countably many continuous
functions defined on an interval.
###### Definition 2.1.
For $n\in\mathbb{N}$ and an interval $I\subset\mathbb{R}$, we define
$\displaystyle C^{n}(I)\coloneqq\left\\{(f_{1},f_{2},\dots,f_{n})\,|\,f_{i}\in
C(I,\mathbb{R})\ \textup{for}\ i\in\llbracket 1,n\rrbracket\right\\}.$
For the special cases $I=(0,T)$ or $I=[0,T)$ for some $T>0$, we denote
$\displaystyle C^{n}(T)\coloneqq C^{n}((0,T))\ \textup{and}\
\overline{C}^{n}(T)\coloneqq C^{n}([0,T)).$
We define the up/right paths connecting two points as follows.
###### Definition 2.2.
For real numbers $x<y$ and positive integers $\ell\geq m$, we denote by
$\mathcal{Q}[(x,\ell)\to(y,m)]$ the collection of non-increasing càdlàg
functions $\pi:[x,y]\to\mathbb{N}$ with $\pi(x)\leq\ell$ and $\pi(y)=m$. We
refer to a member of $\mathcal{Q}[(x,\ell)\to(y,m)]$ as a path from $(x,\ell)$
to $(y,m)$.
There is an injective map from $\mathcal{Q}[(x,\ell)\to(y,m)]$ to
$\mathbb{R}^{\ell-m}$, $\pi\mapsto(t_{\ell},\dots,t_{m+1})$, given by
(2.1) $\displaystyle t_{j}=\inf\\{t\in[x,y]\,|\,\pi(t)\leq j-1\\}\
\textup{for}\ j\in\llbracket m+1,\ell\rrbracket.$
It is convenient to set
(2.2) $\displaystyle t_{\ell+1}=x,\ t_{m}=y.$
In particular, it holds that
(2.3) $\displaystyle\pi(t)=j\ \textup{for}\ t\in(t_{j+1},t_{j})\ \textup{and}\
j\in\llbracket m,\ell\rrbracket.$
The image of $\mathcal{Q}[(x,\ell)\to(y,m)]$ is the closed convex subset
$\\{x\leq t_{\ell}\leq t_{\ell-1}\leq\dots\leq t_{m+1}\leq y\\}.$ We often
abuse notation and view $\mathcal{Q}[(x,\ell)\to(y,m)]$ as a subset of
$\mathbb{R}^{\ell-m}$.
For $f\in C^{n}([a,b])$ and $\pi\in\mathcal{Q}[(x,\ell)\to(y,m)]$ with
$(x,\ell),(y,m)\in[a,b]\times\llbracket 1,n\rrbracket$, define
(2.4) $\displaystyle
f(\pi)\coloneqq\sum_{j=m}^{\ell}\big{(}f_{j}(t_{j})-f_{j}(t_{j+1})\big{)},$
where $t_{j}$ are given by (2.1) and (2.2). Let $d\pi$ be the Lebesgue measure
on $\mathcal{Q}[(x,\ell)\to(y,m)]$. For $\beta>0$, the $\beta$-free energy
from $(x,\ell)$ to $(y,m)$ is defined by
(2.5) $\displaystyle
f[(x,\ell)\xrightarrow{\beta}(y,m)]\coloneqq\beta^{-1}\log\int_{\mathcal{Q}[(x,\ell)\to(y,m)]}\exp\left(\beta{f(\pi)}\right)\,d\pi.$
We also allow $\beta=\infty$ and set
$\displaystyle
f[(x,\ell)\xrightarrow{\infty}(y,m)]=\max_{\pi\in\mathcal{Q}[(x,\ell)\to(y,m)]}f(\pi).$
For $\beta=1$, we denote $f[(x,\ell)\xrightarrow{1}(y,m)]$ by
$f[(x,\ell)\to(y,m)]$.
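As a concrete illustration of (2.4) and (2.5), consider the simplest nontrivial case $\ell=m+1$. A path $\pi\in\mathcal{Q}[(x,m+1)\to(y,m)]$ is determined by its single jump time $t_{m+1}\in[x,y]$, so that
$\displaystyle f[(x,m+1)\to(y,m)]=\log\int_{x}^{y}\exp\big{(}f_{m+1}(t)-f_{m+1}(x)+f_{m}(y)-f_{m}(t)\big{)}\,dt,$
while $f[(x,m+1)\xrightarrow{\infty}(y,m)]$ is the maximum of the exponent above over $t\in[x,y]$.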
The following lemma follows directly from the definition above.
###### Lemma 2.3.
For any $k\in\llbracket m,\ell-1\rrbracket$, we have
$\displaystyle\exp\left(f[(x,\ell)\to(y,m)]\right)=\int_{x}^{y}\exp\left(f[(x,\ell)\to(z,k+1)]+f[(z,k)\to(y,m)]\right)\,dz.$
###### Lemma 2.4.
Fix $n\geq\ell\geq m\geq 1$, $a\leq x_{1}\leq x_{2}<b$ and $f\in
C^{n}([a,b])$. Then the function
$f[(x_{2},\ell)\to(y,m)]-f[(x_{1},\ell)\to(y,m)]$ is monotone non-decreasing
for $y\in(x_{2},b]$.
###### Lemma 2.5.
Fix $n\geq\ell\geq m\geq 1$, $a\leq x<y_{1}\leq y_{2}\leq b$ and $f\in
C^{n}([a,b])$. Then
$\displaystyle f[(x,\ell)\to(y_{1},m)]\leq
f[(x,\ell)\to(y_{2},m)]-f_{m}(y_{1})+f_{m}(y_{2}).$
###### Lemma 2.6.
Fix constants $a_{1},a_{2}>0$, $a_{3},a_{4}\in\mathbb{R}$ and
$\\{a_{5,i}\\}_{i\in\mathbb{N}}$. For $g$ defined by
$g_{i}(x)=a_{1}f_{i}(a_{2}x+a_{3})+a_{4}x+a_{5,i}$, it holds that
$\displaystyle g[(x,\ell)\xrightarrow{\beta}(y,k)]=a_{1}\cdot
f[(a_{2}x+a_{3},\ell)\xrightarrow{a_{1}\beta}(a_{2}y+a_{3},k)]+a_{4}(y-x)-\beta^{-1}(\ell-k)\log
a_{2}.$
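For instance, taking $a_{1}=a_{2}=1$ and $a_{3}=a_{5,i}=0$ in Lemma 2.6 shows that adding a common linear drift $a_{4}x$ to every line shifts each free energy by exactly $a_{4}(y-x)$; this is clear from (2.4), since every path from $(x,\ell)$ to $(y,m)$ traverses the total horizontal distance $y-x$.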
Next, we consider multiple paths that do not cross each other. Let $\pi_{1}$
and $\pi_{2}$ be two paths which belong to
$\mathcal{Q}[(x_{1},\ell_{1})\to(y_{1},m_{1})]$ and
$\mathcal{Q}[(x_{2},\ell_{2})\to(y_{2},m_{2})]$ respectively. We write
$\pi_{1}\prec\pi_{2}$ if $\pi_{1}(t)<\pi_{2}(t)$ for all
$t\in(x_{1},y_{1})\cap(x_{2},y_{2})$. In this case, we say $\pi_{1}$ and
$\pi_{2}$ are non-intersecting. The next lemma shows that non-intersecting
paths form a closed convex set.
###### Lemma 2.7.
For $i\in\\{1,2\\}$, let $(x_{i},\ell_{i})$ and $(y_{i},m_{i})$ be pairs with
$x_{i}<y_{i}$ and $\ell_{i}\geq m_{i}$. Further assume $x_{1}\leq x_{2}$ and
$y_{1}\leq y_{2}$. Then the collection of $(\pi_{1},\pi_{2})$ in
$\mathcal{Q}[(x_{1},\ell_{1})\to(y_{1},m_{1})]\times\mathcal{Q}[(x_{2},\ell_{2})\to(y_{2},m_{2})]$
with $\pi_{1}\prec\pi_{2}$ is a closed convex subset in
$\mathbb{R}^{\ell_{1}-m_{1}}\times\mathbb{R}^{\ell_{2}-m_{2}}$.
A pair of sequences in $\mathbb{R}\times\mathbb{N}$ which can be connected by
non-intersecting paths is called an endpoint pair. Its definition is given
below.
###### Definition 2.8.
Fix $k\in\mathbb{N}$. Let $U=\\{(x_{i},\ell_{i})\\}_{i\in\llbracket
1,k\rrbracket}\ \textup{and}\ V=\\{(y_{i},m_{i})\\}_{i\in\llbracket
1,k\rrbracket}$ be two sequences in $\mathbb{R}\times\mathbb{N}$ with
$x_{i}<y_{i}$ and $\ell_{i}\geq m_{i}$ for all $i$. We denote by
$\mathcal{Q}[U\to V]$ the collection of paths $\pi=(\pi_{1},\dots,\pi_{k})$ in
$\prod_{i=1}^{k}\mathcal{Q}[(x_{i},\ell_{i})\to(y_{i},m_{i})]$ that satisfy
$\pi_{1}\prec\pi_{2}\prec\dots\prec\pi_{k}$. We call $(U,V)$ an endpoint pair
if $\mathcal{Q}[U\to V]$ is non-empty and $x_{i}\leq x_{i+1}$, $y_{j}\leq
y_{j+1}$ for $i\in\llbracket 1,k-1\rrbracket$. We may call $(U,V)$ a
$k$-endpoint pair to emphasize that there are $k$ pairs of endpoints.
Let $(U,V)$ be a $k$-endpoint pair and $f\in C^{n}([a,b])$ with
$U,V\subset[a,b]\times\llbracket 1,n\rrbracket$. For
$\pi=(\pi_{1},\dots,\pi_{k})\in\mathcal{Q}[U\to V]$, we define
$\displaystyle f(\pi)\coloneqq\sum_{i=1}^{k}f(\pi_{i}),$
where $f(\pi_{i})$ are given in (2.4). In view of Lemma 2.7, $\mathcal{Q}[U\to
V]$ can be identified as a closed convex set in a Euclidean space. Let
$p\in\mathbb{N}_{0}$ be the smallest integer such that $\mathcal{Q}[U\to V]$
is contained in a $p$-dimensional subspace and let $d\pi$ be the
$p$-dimensional Hausdorff measure on $\mathcal{Q}[U\to V]$. We define
(2.6) $\displaystyle f[U\to V]\coloneqq\log\int_{\mathcal{Q}[U\to
V]}\exp\left({f(\pi)}\right)\,d\pi.$
The following reversing map will be used in Section 3. For
$f\in\overline{C}^{n}(T)$ and $z\in(0,T)$, we define
$R_{z}f\in\overline{C}^{n}([0,z])$ by
(2.7) $\displaystyle(R_{z}f)_{i}(t):=-f_{n+1-i}(z-t).$
Let $U=\\{(x_{i},\ell_{i})\\}_{i\in\llbracket 1,k\rrbracket}\ \textup{and}\
V=\\{(y_{i},m_{i})\\}_{i\in\llbracket 1,k\rrbracket}$ be an endpoint pair with
$U,V\subset[0,z]\times\llbracket 1,n\rrbracket$. Let
$\displaystyle\widetilde{V}\coloneqq$
$\displaystyle\\{(z-x_{k+1-i},n+1-\ell_{k+1-i})\\}_{i\in\llbracket
1,k\rrbracket},$ $\displaystyle\widetilde{U}\coloneqq$
$\displaystyle\\{(z-y_{k+1-i},n+1-m_{k+1-i})\\}_{i\in\llbracket
1,k\rrbracket}.$
###### Lemma 2.9.
Under the setting above, it holds that
$\displaystyle f[U\to V]=(R_{z}f)[\widetilde{U}\to\widetilde{V}].$
It is convenient to introduce certain special sequences in
$\mathbb{R}\times\mathbb{N}$. We use $(x,\ell)^{k}$ to denote the sequence
$\\{\underbrace{(x,\ell),(x,\ell),\dots,(x,\ell)}_{k\ \textup{terms}}\\}.$ For
$1\leq k\leq n$, we set
(2.8) $\begin{split}&V_{k}(x)\coloneqq\\{(x,1),(x,2),\dots,(x,k)\\},\\\
&V^{\prime}_{k}(x)\coloneqq\\{(x,2),(x,3),\dots,(x,k+1)\\},\\\
&U_{n,k}(x)\coloneqq\\{(x,n-k+1),(x,n-k+2),\dots,(x,n)\\}.\end{split}$
Moreover, we denote by $\mathcal{V}_{n,k}(x)$ the collection of
$\\{(x,\ell_{1}),(x,\ell_{2}),\dots,(x,\ell_{k})\\}$ that satisfies
$1\leq\ell_{1}<\ell_{2}<\dots<\ell_{k}\leq n$.
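For example, $\mathcal{V}_{3,2}(x)$ consists of the three sequences $\\{(x,1),(x,2)\\}$, $\\{(x,1),(x,3)\\}$ and $\\{(x,2),(x,3)\\}$; the first is $V_{2}(x)$ and the last is $U_{3,2}(x)$.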
For paths in $\mathcal{Q}[(x,n)^{k}\to V]$, because of the non-intersecting
requirement, the starting points need to pile up. Therefore, they belong to
$\mathcal{Q}[U_{n,k}(x)\to V]$. This is the content of the next lemma.
###### Lemma 2.10.
Fix $n\geq 2$, $1\leq k\leq n-1$, $0<x<y<z<T$ and $f\in{C}^{n}(T)$. Then the
following identities hold.
(2.9) $f[(x,n)^{k+1}\to(y,1)^{k+1}]=f[U_{n,k+1}(x)\to V_{k+1}(y)].$ (2.10)
$f[\\{(x,n)^{k},(y,n)\\}\to(z,1)^{k+1}]=f[\\{U_{n,k}(x),(y,n)\\}\to
V_{k+1}(z)].$ (2.11)
$f[(x,n)^{k+1}\to\\{(y,1),(z,1)^{k}\\}]=f[U_{n,k+1}(x)\to\\{(y,1),V_{k}(z)\\}].$
Lastly, we introduce the down/right paths which are analogous to the up/right
paths.
###### Definition 2.11.
For real numbers $x<y$ and positive integers $m\leq\ell$, we use the notation
$\mathcal{Q}[(x,m)\searrow(y,\ell)]$ to denote the collection of non-
decreasing càdlàg functions $\rho:[x,y]\to\mathbb{N}$ with $\rho(x)\geq m$ and
$\rho(y)=\ell$.
There is an injective map from $\mathcal{Q}[(x,m)\searrow(y,\ell)]$ to
$\mathbb{R}^{\ell-m}$ given by
(2.12) $\displaystyle t_{j}=\inf\\{t\in[x,y]\,|\,\rho(t)\geq j+1\\}\
\textup{for}\ j\in\llbracket m,\ell-1\rrbracket.$
The image of $\mathcal{Q}[(x,m)\searrow(y,\ell)]$ is a closed convex subset
and we often view $\mathcal{Q}[(x,m)\searrow(y,\ell)]$ as the subset of
$\mathbb{R}^{\ell-m}$.
For $f\in C^{n}([a,b])$ and $\rho\in\mathcal{Q}[(x,m)\searrow(y,\ell)]$ with
$(x,m),(y,\ell)\in[a,b]\times\llbracket 1,n\rrbracket$, we define
$\displaystyle f(\rho)\coloneqq\sum_{j=m}^{\ell}\big{(}f_{j}(t_{j})-f_{j}(t_{j-1})\big{)},$
where $t_{j},\ j\in\llbracket m,\ell-1\rrbracket$ are given by (2.12) and
$t_{m-1}=x,\ t_{\ell}=y$. Let $d\rho$ be the Lebesgue measure on
$\mathcal{Q}[(x,m)\searrow(y,\ell)]$. We define
(2.13) $\displaystyle
f[(x,m)\searrow(y,\ell)]\coloneqq-\log\int_{\mathcal{Q}[(x,m)\searrow(y,\ell)]}\exp\left(-{f(\rho)}\right)\,d\rho.$
We finish this section with the lemma below, which shows that
$f[V^{\prime}_{k}(x)\to V_{k}(y)]$ and $f[(x,1)\searrow(y,k+1)]$ complement
each other. Here $V^{\prime}_{k}(x)$ and $V_{k}(y)$ are given in (2.8).
###### Lemma 2.12.
Fix $n\geq 2$, $1\leq k\leq n-1$, $0<x<y<T$ and $f\in C^{n}(T)$. Then it holds
that
$\displaystyle f[V^{\prime}_{k}(x)\to
V_{k}(y)]+f[(x,1)\searrow(y,k+1)]=f[V_{k+1}(x)\to V_{k+1}(y)].$
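As a sanity check, consider $k=1$. The set $\mathcal{Q}[V_{2}(x)\to V_{2}(y)]$ consists of the single configuration of constant paths, so $f[V_{2}(x)\to V_{2}(y)]=f_{1}(y)-f_{1}(x)+f_{2}(y)-f_{2}(x)$. On the other hand, $f[V^{\prime}_{1}(x)\to V_{1}(y)]=f[(x,2)\to(y,1)]=\log\int_{x}^{y}\exp\big{(}f_{2}(t)-f_{2}(x)+f_{1}(y)-f_{1}(t)\big{)}\,dt$, and by (2.13),
$\displaystyle f[(x,1)\searrow(y,2)]=-\log\int_{x}^{y}\exp\big{(}f_{1}(x)-f_{1}(t)+f_{2}(t)-f_{2}(y)\big{)}\,dt.$
Since both integrands are proportional to $e^{f_{2}(t)-f_{1}(t)}$, the two integrals cancel in the sum and the identity follows.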
## 3\. Geometric RSK correspondence
In this section we define a geometric variant of the RSK correspondence
introduced in [OCo12]. The main goal of this section is to derive the identity
(3.3) in Proposition 3.5. This identity describes the polymer free energy for
an environment under the geometric RSK and plays a crucial role in the
convergence of the scaled KPZ sheet.
Fix $n\geq 2$, $1\leq i\leq n-1$ and $f\in C^{n}(T)$. Suppose
$\int_{0}^{t}\exp(f_{i+1}(s)-f_{i}(s))ds$ is well-defined as an improper
integral. Define $\mathcal{T}_{i}f\in C^{n}(T)$ by
(3.1) $\displaystyle(\mathcal{T}_{i}f)(t)\coloneqq
f(t)+\left(\log\int_{0}^{t}\exp(f_{i+1}(s)-f_{i}(s))ds\right)(e_{i}-e_{i+1}).$
For $1\leq r\leq n-1$, define
$\displaystyle\mathcal{K}_{r}f\coloneqq\mathcal{T}_{r}\mathcal{T}_{r+1}\cdots\mathcal{T}_{n-1}f.$
###### Definition 3.1.
Given $f\in\overline{C}^{n}(T)$, we define $\mathcal{W}f\in C^{n}(T)$ by
(3.2)
$\displaystyle\mathcal{W}f\coloneqq\mathcal{K}_{n-1}\mathcal{K}_{n-2}\cdots\mathcal{K}_{1}f.$
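To illustrate, take $n=2$, so that $\mathcal{W}f=\mathcal{K}_{1}f=\mathcal{T}_{1}f$ and
$\displaystyle(\mathcal{W}f)_{1}(t)=f_{1}(t)+\log\int_{0}^{t}e^{f_{2}(s)-f_{1}(s)}ds,\qquad(\mathcal{W}f)_{2}(t)=f_{2}(t)-\log\int_{0}^{t}e^{f_{2}(s)-f_{1}(s)}ds.$
In particular, $(\mathcal{W}f)_{1}(t)+(\mathcal{W}f)_{2}(t)=f_{1}(t)+f_{2}(t)$. Moreover, if $f_{1}(0)=f_{2}(0)=0$ (as holds for the Brownian environment $B^{n}$ in Section 4), then $(\mathcal{W}f)_{1}(t)=f[(0,2)\to(t,1)]$, consistent with Proposition 3.2 below.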
The following Greene’s theorem was proved in [OCo12].
###### Proposition 3.2.
Fix $n\geq 2$ and $f\in\overline{C}^{n}(T)$. Recall that $U_{n,k}(0)$ and
$V_{k}(t)$ are given in (2.8). Then it holds for all $t\in(0,T)$ and $1\leq
k\leq n$ that
$\displaystyle\sum_{i=1}^{k}(\mathcal{W}f)_{i}(t)=f[U_{n,k}(0)\to V_{k}(t)].$
The following invariance of the free energy was proved in [Cor21, Theorem
3.4].
###### Proposition 3.3.
Fix $n\geq 2$, $f\in\overline{C}^{n}(T)$ and an endpoint pair
$U=\\{(x_{i},n)\\}_{i\in\llbracket 1,k\rrbracket}$ and
$V=\\{(y_{i},1)\\}_{i\in\llbracket 1,k\rrbracket}$ with
$U,V\subset(0,T)\times\llbracket 1,n\rrbracket$. Then it holds that
$\displaystyle f[U\to V]=\left(\mathcal{W}f\right)[U\to V].$
###### Remark 3.4.
In [Cor21], Proposition 3.3 was proved under the condition
$x_{1}<x_{2}<\dots<x_{k}$ and $y_{1}<y_{2}<\dots<y_{k}$. This condition can be
removed through approximation.
The next proposition relates $(\mathcal{W}f)[(x,n)\to(z,k)]$ and
$(\mathcal{W}f)_{k}(z).$
###### Proposition 3.5.
Fix $n\geq 2$, $1\leq k\leq n-1$, $0<T$ and $f\in\overline{C}^{n}(T)$. For any
$0<x<z<T$, it holds that
(3.3)
$\displaystyle(\mathcal{W}f)[(x,n)\to(z,k+1)]+(\mathcal{W}R_{z}f)[(z-x,1)\searrow(z,k+1)]=(\mathcal{W}f)_{k+1}(z).$
The rest of this section is devoted to proving Proposition 3.5. We start with
a direct consequence of Lemma 2.10 and Proposition 3.3.
###### Corollary 3.6.
Fix $n\geq 2$, $1\leq k\leq n-1$, $0<x<y<z<T$ and $f\in\overline{C}^{n}(T)$.
The following identities hold.
$\displaystyle f[U_{n,k+1}(x)\to V_{k+1}(y)]$
$\displaystyle=(\mathcal{W}f)[U_{n,k+1}(x)\to V_{k+1}(y)],$ $\displaystyle
f[\\{U_{n,k}(x),(y,n)\\}\to V_{k+1}(z)]$
$\displaystyle=(\mathcal{W}f)[\\{U_{n,k}(x),(y,n)\\}\to V_{k+1}(z)],$
$\displaystyle f[U_{n,k+1}(x)\to\\{(y,1),V_{k}(z)\\}]$
$\displaystyle=(\mathcal{W}f)[U_{n,k+1}(x)\to\\{(y,1),V_{k}(z)\\}].$
###### Lemma 3.7.
Fix $n\geq 2$, $1\leq k\leq n-1$, $0<x<y<T$ and $f\in\overline{C}^{n}(T)$. It
holds that
$\displaystyle(\mathcal{W}f)[(x,n)\to(y,k+1)]+f[U_{n,k}(0)\to
V_{k}(y)]=f[\\{U_{n,k}(0),(x,n)\\}\to V_{k+1}(y)].$
###### Proof.
Let $g=\mathcal{W}f$. Because of the natural measure-preserving injection from
$\mathcal{Q}[\\{U_{n,k}(\varepsilon),(x,n)\\}\to V_{k+1}(y)]$ to
$\mathcal{Q}[U_{n,k}(\varepsilon)\to
V_{k}(y)]\times\mathcal{Q}[(x,n)\to(y,k+1)]$, we have
$\displaystyle g[\\{U_{n,k}(\varepsilon),(x,n)\\}\to V_{k+1}(y)]\leq
g[U_{n,k}(\varepsilon)\to V_{k}(y)]+g[(x,n)\to(y,k+1)].$
Taking $\varepsilon$ to zero and applying Corollary 3.6, we get
$\displaystyle f[\\{U_{n,k}(0),(x,n)\\}\to V_{k+1}(y)]\leq f[U_{n,k}(0)\to
V_{k}(y)]+g[(x,n)\to(y,k+1)].$
Because of the natural measure-preserving injection from
$\mathcal{Q}[U_{n,k}(\varepsilon)\to V_{k}(x)]\times\mathcal{Q}[V_{k}(x)\to
V_{k}(y)]\times\mathcal{Q}[(x,n)\to(y,k+1)]$
to $\mathcal{Q}[\\{U_{n,k}(\varepsilon),(x,n)\\}\to V_{k+1}(y)]$, we have
$\displaystyle g[\\{U_{n,k}(\varepsilon),(x,n)\\}\to V_{k+1}(y)]\geq
g[U_{n,k}(\varepsilon)\to V_{k}(x)]+g[V_{k}(x)\to
V_{k}(y)]+g[(x,n)\to(y,k+1)].$
Taking $\varepsilon$ to zero and applying Corollary 3.6 and Proposition 3.2
(note that $\mathcal{Q}[V_{k}(x)\to V_{k}(y)]$ consists of the single
configuration of constant paths, so $g[V_{k}(x)\to
V_{k}(y)]=\sum_{i=1}^{k}\big{(}g_{i}(y)-g_{i}(x)\big{)}$), we get
$\displaystyle f[\\{U_{n,k}(0),(x,n)\\}\to V_{k+1}(y)]\geq f[U_{n,k}(0)\to
V_{k}(y)]+g[(x,n)\to(y,k+1)].$
∎
###### Lemma 3.8.
Fix $n\geq 2$, $1\leq k\leq n$, $0<x<T$ and $f\in\overline{C}^{n}(T)$. Recall
that $U_{n,k}(\varepsilon)$ and $V_{k}(x)$ are given in (2.8). Then for any
$V\in\mathcal{V}_{n,k}(x)$ with $V\neq V_{k}(x)$, it holds that
$\displaystyle\lim_{\varepsilon\to
0}\exp\bigg{(}(\mathcal{W}f)[U_{n,k}(\varepsilon)\to V]\bigg{)}=0.$
###### Proof.
Let $g=\mathcal{W}f$ and take $y\in(x,T)$. By separating
$\mathcal{Q}[U_{n,k}(\varepsilon)\to V_{k}(y)]$ according to $\pi(x)$, we have
$\displaystyle\exp\bigg{(}g[U_{n,k}(\varepsilon)\to
V_{k}(y)]\bigg{)}=\sum_{V\in\mathcal{V}_{n,k}(x)}\exp\bigg{(}g[U_{n,k}(\varepsilon)\to
V]+g[V\to V_{k}(y)]\bigg{)}.$
Therefore,
$\displaystyle\sum_{V\in\mathcal{V}_{n,k}(x),V\neq
V_{k}(x)}\exp\bigg{(}g[U_{n,k}(\varepsilon)\to V]+g[V\to V_{k}(y)]\bigg{)}$
$\displaystyle=$ $\displaystyle\exp\bigg{(}g[U_{n,k}(\varepsilon)\to
V_{k}(y)]\bigg{)}-\exp\bigg{(}g[U_{n,k}(\varepsilon)\to
V_{k}(x)]+g[V_{k}(x)\to V_{k}(y)]\bigg{)}.$
From Corollary 3.6, the limit of the right hand side equals
$\displaystyle\exp\bigg{(}f[U_{n,k}(0)\to
V_{k}(y)]\bigg{)}-\exp\bigg{(}f[U_{n,k}(0)\to V_{k}(x)]+g[V_{k}(x)\to
V_{k}(y)]\bigg{)}.$
From Proposition 3.2, the above vanishes. Therefore for any
$V\in\mathcal{V}_{n,k}(x)$ with $V\neq V_{k}(x)$, we have
$\lim_{\varepsilon\to 0}\exp\big{(}g[U_{n,k}(\varepsilon)\to V]\big{)}=0.$
∎
###### Lemma 3.9.
Fix $n\geq 2$, $1\leq k\leq n-1$, $0<x<y<T$ and $f\in\overline{C}^{n}(T)$.
Then
$\displaystyle f[U_{n,k+1}(0)\to\\{(x,1),V_{k}(y)\\}]=f[U_{n,k+1}(0)\to
V_{k+1}(y)]-(\mathcal{W}f)[(x,1)\searrow(y,k+1)].$
###### Proof.
Let $g=\mathcal{W}f$. From Corollary 3.6,
$f[U_{n,k+1}(0)\to\\{(x,1),V_{k}(y)\\}]$ equals
$\lim_{\varepsilon\to 0}g[U_{n,k+1}(\varepsilon)\to\\{(x,1),V_{k}(y)\\}].$
From Lemma 3.8, the above becomes
$\lim_{\varepsilon\to 0}g[U_{n,k+1}(\varepsilon)\to
V_{k+1}(x)]+g[V^{\prime}_{k}(x)\to V_{k}(y)].$
From Corollary 3.6 and Lemma 2.12, the above equals
$f[U_{n,k+1}(0)\to V_{k+1}(x)]+g[V_{k+1}(x)\to
V_{k+1}(y)]-g[(x,1)\searrow(y,k+1)].$
In view of Proposition 3.2, the above becomes $f[U_{n,k+1}(0)\to
V_{k+1}(y)]-g[(x,1)\searrow(y,k+1)]$. ∎
###### Proof of Proposition 3.5.
From Lemma 3.7, $(\mathcal{W}f)[(x,n)\to(z,k+1)]$ equals
$\displaystyle f[\\{U_{n,k}(0),(x,n)\\}\to V_{k+1}(z)]-f[U_{n,k}(0)\to
V_{k}(z)].$
From Lemma 2.9, the above equals
$\displaystyle(R_{z}f)[U_{n,k+1}(0)\to\\{(z-x,1),V_{k}(z)\\}]-f[U_{n,k}(0)\to
V_{k}(z)].$
From Lemma 3.9, the above equals
$\displaystyle(R_{z}f)[U_{n,k+1}(0)\to
V_{k+1}(z)]-(\mathcal{W}R_{z}f)[(z-x,1)\searrow(z,k+1)]-f[U_{n,k}(0)\to
V_{k}(z)].$
Applying Lemma 2.9 and Proposition 3.2, it becomes
$\displaystyle(\mathcal{W}f)_{k+1}(z)-(\mathcal{W}R_{z}f)[(z-x,1)\searrow(z,k+1)].$
∎
## 4\. O’Connell-Yor polymer model
Let $B_{1},B_{2},\dots$ be i.i.d. standard two-sided Brownian motions. For
$n\in\mathbb{N}$, let $B^{n}=(B_{1},B_{2},\dots,B_{n})$ be their restriction to
$[0,\infty)$ and define
$Y^{n}=(Y^{n}_{1},\dots,Y^{n}_{n})\coloneqq\mathcal{W}B^{n}$. It was shown by
O’Connell [OCo12] that $Y^{n}$ is a diffusion process in $\mathbb{R}^{n}$ with
an infinitesimal generator related to the quantum Toda lattice and
$\mathfrak{gl}_{n}$-Whittaker functions. Corwin and Hammond [CH16] derived the
$\mathbf{H}$-Brownian Gibbs property for $Y^{n}$ and used this property to
construct the KPZ line ensemble $\mathcal{X}^{T}$ as a scaling limit of
$Y^{n}$. We discuss this convergence in more detail below.
For $n,i\in\mathbb{N}$ and $T>0$, let
$\displaystyle C_{1}(T,n)\coloneqq n^{1/2}T^{-1/2}+2^{-1},\qquad
C_{2}(T,n)\coloneqq n+2^{-1}n^{1/2}T^{1/2}-2^{-1}(n-1)\log(n^{1/2}T^{-1/2}),$
$\displaystyle C_{3,i}(T)\coloneqq-(i-1)\log T+\log(i-1)!\,.$
Define
$\mathcal{X}^{T,n}=\\{\mathcal{X}^{T,n}_{1},\mathcal{X}^{T,n}_{2},\dots,\mathcal{X}^{T,n}_{n}\\}$
on $(-T^{1/2}n^{1/2},\infty)$ by
(4.1) $\mathcal{X}^{T,n}_{i}(x)\coloneqq
Y^{n}_{i}(n^{1/2}T^{1/2}+x)-C_{1}(T,n)x-C_{2}(T,n)-C_{3,i}(T),\ i\in\llbracket
1,n\rrbracket.$
From Proposition 3.2,
$\displaystyle\mathcal{X}^{T,n}_{1}(x)=B^{n}[(0,n)\to(n^{1/2}T^{1/2}+x,1)]-C_{1}(T,n)x-C_{2}(T,n).$
For $T>0$, $n\geq 1$, $x\in\mathbb{R}$ and $y>-n^{1/2}T^{1/2}+x$, define
(4.2) $\displaystyle\mathcal{H}^{T,n}(x,y)\coloneqq
B^{n}[(x,n)\to(n^{1/2}T^{1/2}+y,1)]-C_{1}(T,n)(y-x)-C_{2}(T,n).$
Because $B_{i}(x+y)-B_{i}(x)\overset{d}{=}B_{i}(y)$, $\mathcal{H}^{T,n}(x,y)$
has the same distribution as $\mathcal{X}^{T,n}_{1}(y-x)$. In view of
Proposition 3.3, for $x>0$, we may rewrite $\mathcal{H}^{T,n}(x,y)$ as a
polymer free energy on $Y^{n}$:
(4.3) $\displaystyle\mathcal{H}^{T,n}(x,y)=
Y^{n}[(x,n)\to(n^{1/2}T^{1/2}+y,1)]-C_{1}(T,n)(y-x)-C_{2}(T,n).$
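To spell out the shift argument: with $\tilde{B}_{i}(s)\coloneqq B_{i}(s+x)-B_{i}(x)$, the energy of any path depends only on the increments of the environment, so
$\displaystyle B^{n}[(x,n)\to(n^{1/2}T^{1/2}+y,1)]=\tilde{B}^{n}[(0,n)\to(n^{1/2}T^{1/2}+(y-x),1)],$
and $\tilde{B}^{n}\overset{d}{=}B^{n}$ by the stationarity of Brownian increments. Comparing (4.2) with the formula for $\mathcal{X}^{T,n}_{1}$ above yields $\mathcal{H}^{T,n}(x,y)\overset{d}{=}\mathcal{X}^{T,n}_{1}(y-x)$.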
Combining [CH16, Theorem 3.10] and [Nic21, Theorem 1.2], we have the following
convergence result.
###### Theorem 4.1.
Fix $T>0$. When $n$ goes to infinity, $\mathcal{X}^{T,n}$ converges in
distribution to the KPZ line ensemble $\mathcal{X}^{T}$. Here
$\mathcal{X}^{T,n}$ and $\mathcal{X}^{T}$ are viewed as
$C(\mathbb{N}\times\mathbb{R},\mathbb{R})$-valued random variables.
Moreover, the random continuous function $\mathcal{H}^{T,n}$ also converges to
$\mathcal{H}^{T}$.
###### Proposition 4.2.
Fix $T>0$. When $n$ goes to infinity, the finite-dimensional marginal of
$\mathcal{H}^{T,n}$ converges in distribution to the finite-dimensional
marginal of the KPZ sheet $\mathcal{H}^{T}$.
###### Proof.
This is essentially proved in [Nic21]. In [Nic21, Theorem 1.2], the author
proved the finite-dimensional convergence for $\mathcal{H}^{T,n}(0,\cdot)$ and
the same argument applies for $\mathcal{H}^{T,n}(\cdot,\cdot)$. See [AKQ,
Section 6.2] for a similar result for discrete polymers.∎
For $1\leq k\leq n-1$, $T>0$, $x>0$ and $y>z>-n^{1/2}T^{1/2}+x$, we define
$\displaystyle F^{T,n}_{k}(x,z)\coloneqq$ $\displaystyle
Y^{n}[(x,n)\to(n^{1/2}T^{1/2}+z,k+1)]-Y^{n}_{k+1}(n^{1/2}T^{1/2}+z)+C_{1}(T,n)x,$
and
$\displaystyle G^{T,n}_{k}(z,y)\coloneqq$ $\displaystyle
Y^{n}[(n^{1/2}T^{1/2}+z,k)\to(n^{1/2}T^{1/2}+y,1)]+Y^{n}_{k+1}(n^{1/2}T^{1/2}+z)-C_{1}(T,n)y-C_{2}(T,n).$
From Lemma 2.3 and (4.3), we have
(4.4)
$\exp\left(\mathcal{H}^{T,n}(x,y)\right)=\int_{-n^{1/2}T^{1/2}+x}^{y}\exp\left(F^{T,n}_{k}(x,z)+G^{T,n}_{k}(z,y)\right)\,dz.$
We note that from Lemma 2.6 and (4.1), $F^{T,n}_{k}(x,z)$ and
$G^{T,n}_{k}(z,y)$ can be expressed in terms of $\mathcal{X}^{T,n}$ as
$\begin{split}F^{T,n}_{k}(x,z)=\mathcal{X}^{T,n}[(-n^{1/2}T^{1/2}+x,n)\to(z,k+1)]-&\mathcal{X}^{T,n}_{k+1}(z)+(n-1)\log(n^{1/2}T^{-1/2})-C_{3,k+1}.\end{split}$
(4.5) $\displaystyle
G^{T,n}_{k}(z,y)=\mathcal{X}^{T,n}[(z,k)\to(y,1)]+\mathcal{X}^{T,n}_{k+1}(z)+C_{3,k+1}.$
The next lemma concerns the distributional limit of $F^{T,n}_{k}(x,z)$ when
$n$ goes to infinity.
###### Lemma 4.3.
Fix $T>0$, $k\geq 1$, $x>0$ and $z\in\mathbb{R}$. Then when $n$ goes to
infinity, $F^{T,n}_{k}(x,z)$ converges in distribution to
$\mathcal{X}^{T}[(0,k+1)\to(x,1)]+T^{-1}zx$.
###### Proof.
From Proposition 3.5 and $Y^{n}=\mathcal{W}B^{n}$, $F^{T,n}_{k}(x,z)$ equals
$\displaystyle-(\mathcal{W}R_{n^{1/2}T^{1/2}+z}B^{n})[(n^{1/2}T^{1/2}+z-x,1)\searrow(n^{1/2}T^{1/2}+z,k+1)]+C_{1}(T,n)x.$
Because $\mathcal{W}R_{n^{1/2}T^{1/2}+z}B^{n}\overset{d}{=}Y^{n}$, the above
has the same distribution as
$\displaystyle-Y^{n}[(n^{1/2}T^{1/2}+z-x,1)\searrow(n^{1/2}T^{1/2}+z,k+1)]+C_{1}(T,n)x.$
From (4.1) and Lemma 2.6, the above equals
$-\mathcal{X}^{T,n}[(z-x,1)\searrow(z,k+1)]$. By Theorem 4.1 it converges in
distribution to $-\mathcal{X}^{T}[(z-x,1)\searrow(z,k+1)]$. Because
$\mathcal{X}^{T}(y)$ has the same distribution as $\mathcal{X}^{T}(-y)$,
$-\mathcal{X}^{T}[(z-x,1)\searrow(z,k+1)]\overset{d}{=}\mathcal{X}^{T}[(-z,k+1)\to(-z+x,1)].$
From the stationarity of $\mathcal{X}^{T}(y)+2^{-1}T^{-1}y^{2}$,
$\mathcal{X}^{T}(-z+y)\overset{d}{=}\mathcal{X}^{T}(y)+T^{-1}zy-2^{-1}T^{-1}z^{2}$.
Therefore,
$\displaystyle\mathcal{X}^{T}[(-z,k+1)\to(-z+x,1)]\overset{d}{=}\mathcal{X}^{T}[(0,k+1)\to(x,1)]+T^{-1}zx.$
∎
In the rest of the section, we derive a relation between Busemann functions and
quantiles of the polymer measure. In particular, Lemma 4.6, (4.10) and (4.11)
will be used in Sections 5 and 6. Those are deterministic properties and do
not rely on the laws of $\mathcal{X}^{T,n}$ or $\mathcal{H}^{T,n}$.
We start with a simple consequence of Lemma 2.4.
###### Lemma 4.4.
Fix $T>0$, $n\geq 2$, $1\leq k\leq n-1$, $0<x_{1}\leq x_{2}$ and
$-n^{1/2}T^{1/2}<y_{1}\leq y_{2}$. Then
$F^{T,n}_{k}(x_{2},z)-F^{T,n}_{k}(x_{1},z)$ is monotone non-decreasing in
$z\in(-n^{1/2}T^{1/2}+x_{2},\infty)$ and
$G^{T,n}_{k}(z,y_{2})-G^{T,n}_{k}(z,y_{1})$ is monotone non-decreasing in
$z\in(-n^{1/2}T^{1/2},y_{1})$.
We define the random probability measure on $\mathbb{R}$ which corresponds to
(4.4). It is the marginal of the polymer measure.
###### Definition 4.5.
Fix $T>0$, $n\geq 2$, $1\leq k\leq n-1$, $x>0$ and $y>-n^{1/2}T^{1/2}+x$. We
denote by $d\mu^{T,n}_{k,x,y}(z)$ the random probability measure with the
density
$\exp\left(-\mathcal{H}^{T,n}(x,y)+F^{T,n}_{k}(x,z)+G^{T,n}_{k}(z,y)\right)\mathbbm{1}(-n^{1/2}T^{1/2}+x<z<y).$
We also set
$A^{T,n}_{k}(x,y;z)\coloneqq\mu^{T,n}_{k,x,y}([z,\infty)),\
B^{T,n}_{k}(x,y;z)\coloneqq\mu^{T,n}_{k,x,y}((-\infty,z]).$
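Note that, by (4.4), the density in Definition 4.5 integrates to one, so $\mu^{T,n}_{k,x,y}$ is indeed a probability measure. Since it has a density, $A^{T,n}_{k}(x,y;z)+B^{T,n}_{k}(x,y;z)=1$ for every $z$; this fact is used in (4.10) and (4.11) below.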
###### Lemma 4.6.
Fix $T>0$, $n\geq 2$, $1\leq k\leq n-1$, $x_{2}\geq x_{1}>0$ and
$y>-n^{1/2}T^{1/2}+x_{2}$. Then for $z\in(-n^{1/2}T^{1/2}+x_{2},y)$, we have
(4.6) $\displaystyle F^{T,n}_{k}(x_{2},z)-F^{T,n}_{k}(x_{1},z)\leq$
$\displaystyle\mathcal{H}^{T,n}(x_{2},y)-\mathcal{H}^{T,n}(x_{1},y)-\log
A^{T,n}_{k}(x_{1},y;z),$
and
(4.7) $\displaystyle F^{T,n}_{k}(x_{2},z)-F^{T,n}_{k}(x_{1},z)\geq$
$\displaystyle\mathcal{H}^{T,n}(x_{2},y)-\mathcal{H}^{T,n}(x_{1},y)+\log
B^{T,n}_{k}(x_{2},y;z).$
###### Proof.
We start with (4.6).
$\displaystyle\exp\left(\mathcal{H}^{T,n}(x_{2},y)-\mathcal{H}^{T,n}(x_{1},y)\right)=$
$\displaystyle\int_{-n^{1/2}T^{1/2}+x_{2}}^{y}\exp\left(F^{T,n}_{k}(x_{2},z^{\prime})-F^{T,n}_{k}(x_{1},z^{\prime})\right)d\mu^{T,n}_{k,x_{1},y}(z^{\prime})$
$\displaystyle\geq$
$\displaystyle\int_{z}^{y}\exp\left(F^{T,n}_{k}(x_{2},z^{\prime})-F^{T,n}_{k}(x_{1},z^{\prime})\right)d\mu^{T,n}_{k,x_{1},y}(z^{\prime})$
$\displaystyle\geq$
$\displaystyle\exp\left(F^{T,n}_{k}(x_{2},z)-F^{T,n}_{k}(x_{1},z)\right)A^{T,n}_{k}(x_{1},y;z).$
We used Lemma 4.4 in the last inequality. Then (4.6) follows. For (4.7), we
derive similarly,
$\displaystyle\exp\left(\mathcal{H}^{T,n}(x_{1},y)-\mathcal{H}^{T,n}(x_{2},y)\right)\geq$
$\displaystyle\int_{-n^{1/2}T^{1/2}+x_{2}}^{y}\exp\left(F^{T,n}_{k}(x_{1},z^{\prime})-F^{T,n}_{k}(x_{2},z^{\prime})\right)d\mu^{T,n}_{k,x_{2},y}(z^{\prime})$
$\displaystyle\geq$
$\displaystyle\int_{-n^{1/2}T^{1/2}+x_{2}}^{z}\exp\left(F^{T,n}_{k}(x_{1},z^{\prime})-F^{T,n}_{k}(x_{2},z^{\prime})\right)d\mu^{T,n}_{k,x_{2},y}(z^{\prime})$
$\displaystyle\geq$
$\displaystyle\exp\left(F^{T,n}_{k}(x_{1},z)-F^{T,n}_{k}(x_{2},z)\right)B^{T,n}_{k}(x_{2},y;z).$
We again used Lemma 4.4 in the last inequality. Hence (4.7) follows. ∎
The lemma below is analogous to Lemma 4.6 and we omit the proof.
###### Lemma 4.7.
Fix $T>0$, $n\geq 2$, $1\leq k\leq n-1$, $x>0$ and $y_{2}\geq
y_{1}>-n^{1/2}T^{1/2}+x$. Then for $z\in(-n^{1/2}T^{1/2}+x,y_{1})$, we have
(4.8) $\displaystyle G^{T,n}_{k}(z,y_{2})-G^{T,n}_{k}(z,y_{1})\leq$
$\displaystyle\mathcal{H}^{T,n}(x,y_{2})-\mathcal{H}^{T,n}(x,y_{1})-\log
A^{T,n}_{k}(x,y_{1};z),$
and
(4.9) $\displaystyle G^{T,n}_{k}(z,y_{2})-G^{T,n}_{k}(z,y_{1})\geq$
$\displaystyle\mathcal{H}^{T,n}(x,y_{2})-\mathcal{H}^{T,n}(x,y_{1})+\log
B^{T,n}_{k}(x,y_{2};z).$
The following two inequalities will be applied in Section 5 and Section 6.
Combining (4.5), (4.8), (4.9) and $A^{T,n}_{k}(x,y;z)+B^{T,n}_{k}(x,y;z)=1$,
we have
(4.10)
$\begin{split}\mathcal{X}^{T,n}[(z,k)\to(y_{2},1)]-\mathcal{X}^{T,n}[(z,k)\to(y_{1},1)]-\mathcal{H}^{T,n}&(x,y_{2})+\mathcal{H}^{T,n}(x,y_{1})\\\
&\leq-\log\left(1-B^{T,n}_{k}(x,y_{1};z)\right),\end{split}$
and
(4.11)
$\begin{split}\mathcal{X}^{T,n}[(z,k)\to(y_{2},1)]-\mathcal{X}^{T,n}[(z,k)\to(y_{1},1)]-\mathcal{H}^{T,n}(x&,y_{2})+\mathcal{H}^{T,n}(x,y_{1})\\\
&\geq\log\left(1-A^{T,n}_{k}(x,y_{2};z)\right).\end{split}$
## 5\. Proof of Theorems 1.6 and 1.7
We present the proofs for Theorems 1.6 and 1.7 in this section. We begin with
the scaled KPZ line ensemble and record some convergence results.
Recall that in (1.10), the scaled KPZ sheet is given by
$\displaystyle\mathfrak{H}^{T}(x,y)\coloneqq
2^{1/3}T^{-1/3}\mathcal{H}^{T}(2^{1/3}T^{2/3}x,2^{1/3}T^{2/3}y)+{2^{1/3}T^{2/3}}/{24}.$
We accordingly define the scaled KPZ line ensemble
$\mathfrak{X}^{T}=\\{\mathfrak{X}^{T}_{1},\mathfrak{X}^{T}_{2},\dots\\}$ as
$\mathfrak{X}^{T}_{i}(x)\coloneqq
2^{1/3}T^{-1/3}\mathcal{X}^{T}_{i}(2^{1/3}T^{2/3}x)+{2^{1/3}T^{2/3}}/{24}.$
The pre-limit versions are given by
(5.1)
$\begin{split}\mathfrak{X}^{T,n}_{i}(x)\coloneqq&2^{1/3}T^{-1/3}\mathcal{X}^{T,n}_{i}(2^{1/3}T^{2/3}x)+{2^{1/3}T^{2/3}}/{24},\\\
\mathfrak{H}^{T,n}(x,y)\coloneqq&2^{1/3}T^{-1/3}\mathcal{H}^{T,n}(2^{1/3}T^{2/3}x,2^{1/3}{T}^{2/3}y)+2^{1/3}{T^{2/3}}/{24}.\end{split}$
The convergence of the scaled KPZ line ensemble to the Airy line ensemble is a
consequence of a series of works [QS23, Vir20, DM21, Wu22].
###### Theorem 5.1.
The scaled KPZ line ensemble $\mathfrak{X}^{T}(x)$ converges in distribution
to the Airy line ensemble $\mathcal{A}(x)$ when $T$ goes to infinity. Here
$\mathfrak{X}^{T}$ and $\mathcal{A}$ are considered as
$C(\mathbb{N}\times\mathbb{R},\mathbb{R})$-valued random variables.
Next, we record the tightness of the scaled KPZ sheet.
###### Proposition 5.2.
When $T$ goes to infinity, the scaled KPZ sheet $\mathfrak{H}^{T}(x,y)$ is
tight in $C(\mathbb{R}^{2},\mathbb{R})$.
###### Proof.
It suffices to prove that for all $b>0$, $\mathfrak{H}^{T}|_{[-b,b]^{2}}$ is
tight in $C([-b,b]^{2},\mathbb{R})$. The tightness of
$\mathfrak{H}^{T}(0,0)=\mathfrak{H}^{T}(0,0;1,0)$ follows directly from its
convergence [ACQ11]. It remains to obtain the modulus of continuity. From
[CGH21, Theorem 1.3], there exists $D_{1}>0$ depending only on $b$ such that for
all $T\geq 1$, $d\in(0,1]$, $K\geq 0$ and $x,x+d\in[-2b,2b]$, we have
(5.2)
$\displaystyle\mathbb{P}\left(|\mathfrak{X}_{1}^{T}(x+d)-\mathfrak{X}_{1}^{T}(x)|>Kd^{1/2}\right)\leq
D_{1}e^{-D_{1}^{-1}K^{3/2}}.$
Note that
$\mathfrak{H}^{T}(\cdot,y)\overset{d}{=}\mathfrak{X}^{T}_{1}(y-\cdot)$ and
$\mathfrak{H}^{T}(x,\cdot)\overset{d}{=}\mathfrak{X}^{T}_{1}(\cdot-x)$.
Therefore, for any $(x,y)\in[-b,b]^{2}$,
$\displaystyle\mathbb{P}\left(|\mathfrak{H}^{T}(x+d,y)-\mathfrak{H}^{T}(x,y)|>Kd^{1/2}\right)\leq
D_{1}e^{-D_{1}^{-1}K^{3/2}},$
provided $x+d\in[-b,b]$. Similarly, if $y+d\in[-b,b]$, then
$\displaystyle\mathbb{P}\left(|\mathfrak{H}^{T}(x,y+d)-\mathfrak{H}^{T}(x,y)|>Kd^{1/2}\right)\leq
D_{1}e^{-D_{1}^{-1}K^{3/2}}.$
From [DV21, Lemma 3.3], there exists a random constant $C^{T}$ such that
almost surely for all $(x,y),(x^{\prime},y^{\prime})\in[-b,b]^{2}$ with
$|x-x^{\prime}|,|y-y^{\prime}|\leq 1$, we have
$\displaystyle|\mathfrak{H}^{T}(x,y)-\mathfrak{H}^{T}(x^{\prime},y^{\prime})|\leq
C^{T}\left(|x-x^{\prime}|^{1/2}\log^{2/3}(2/|x-x^{\prime}|)+|y-y^{\prime}|^{1/2}\log^{2/3}(2/|y-y^{\prime}|)\right).$
Moreover, $\mathbb{P}(C^{T}>K)<D_{2}e^{-D_{2}^{-1}K^{3/2}}$ for some
constant $D_{2}$ depending only on $b$. By the Kolmogorov-Chentsov
criterion (see Corollary 14.9 in [Kal97]), this implies the tightness of
$\mathfrak{H}^{T}|_{[-b,b]^{2}}$. ∎
The next proposition shows that any subsequential limit of $\mathfrak{H}^{T}$
and the Airy line ensemble can be coupled together with desired properties.
###### Proposition 5.3.
Let $\mathfrak{H}$ be a distributional limit of $\mathfrak{H}^{T}$ along some
sequence. Then there exists a coupling of $\mathfrak{H}$ and the Airy line
ensemble $\mathcal{A}$ such that the following holds.
1. (1)
$\mathfrak{H}(0,\cdot)=\mathcal{A}_{1}(\cdot)$.
2. (2)
Almost surely for all $x>0$ and $y_{1},y_{2}$ in $\mathbb{R}$, we have
(5.3)
$\begin{split}\lim_{k\to\infty}\mathcal{A}[(-2^{-1/2}k^{1/2}x^{-1/2},k)\xrightarrow{\infty}(y_{2},1)]-\mathcal{A}[(-2^{-1/2}k^{1/2}x^{-1/2},k)&\xrightarrow{\infty}(y_{1},1)]\\\
&=\mathfrak{H}(x,y_{2})-\mathfrak{H}(x,y_{1}).\end{split}$
###### Proof of Theorem 1.7.
Let $\mathfrak{H}$ be a distributional limit of $\mathfrak{H}^{T}$ along some
sequence. From Proposition 5.3, (5.3) holds. Because of (3) in Theorem 1.1,
$\mathfrak{H}(\cdot+t,\cdot+t)$ has the same distribution as
$\mathfrak{H}(\cdot,\cdot)$. From Definition 1.3, $\mathfrak{H}$ has the same
law as the Airy sheet. As a result, $\mathfrak{H}^{T}$ converges to the Airy
sheet in distribution. ∎
It remains to prove Proposition 5.3. We begin with the rescaled version of
Lemma 4.6, (4.10) and (4.11). Define
(5.4) $\mathfrak{R}^{T,n}_{k}(x,z)\coloneqq
2^{1/3}T^{-1/3}F^{T,n}_{k}(2^{1/3}T^{2/3}x,2^{1/3}T^{2/3}z)-2^{3/2}k^{1/2}x^{1/2}-2zx.$
(5.5)
$\begin{split}\mathfrak{A}^{T,n}_{k}(x,y;z)\coloneqq&A^{T,n}_{k}(2^{1/3}T^{2/3}x,2^{1/3}T^{2/3}y;2^{1/3}T^{2/3}z),\\\
\mathfrak{B}^{T,n}_{k}(x,y;z)\coloneqq&B^{T,n}_{k}(2^{1/3}T^{2/3}x,2^{1/3}T^{2/3}y;2^{1/3}T^{2/3}z).\end{split}$
###### Lemma 5.4.
Fix $T\geq 2$, $n\geq 2$, and $1\leq k\leq n-1$. Then for all $x,\bar{x}>0$,
and $y>\max\\{-2^{-1/3}n^{1/2}T^{-1/6}+x,-2^{-1/3}n^{1/2}T^{-1/6}+\bar{x}\\}$,
the following statements hold. Let $\bar{z}=-2^{-1/2}k^{1/2}\bar{x}^{-1/2}$.
If $\bar{x}\geq x$, then
(5.6)
$\begin{split}\log\mathfrak{A}^{T,n}_{k}(x,y;\bar{z})+2^{1/2}k^{1/2}\bar{x}^{1/2}&\left(1-\bar{x}^{-1/2}x^{1/2}\right)^{2}\\\
&\leq\mathfrak{H}^{T,n}(\bar{x},y)-\mathfrak{H}^{T,n}(x,y)-\mathfrak{R}^{T,n}_{k}(\bar{x},\bar{z})+\mathfrak{R}^{T,n}_{k}(x,\bar{z}).\end{split}$
If $\bar{x}\leq x$, then
(5.7)
$\begin{split}\log\mathfrak{B}^{T,n}_{k}(x,y;\bar{z})+2^{1/2}k^{1/2}\bar{x}^{1/2}&\left(1-\bar{x}^{-1/2}x^{1/2}\right)^{2}\\\
&\leq\mathfrak{H}^{T,n}(\bar{x},y)-\mathfrak{H}^{T,n}(x,y)-\mathfrak{R}^{T,n}_{k}(\bar{x},\bar{z})+\mathfrak{R}^{T,n}_{k}(x,\bar{z}).\end{split}$
###### Proof.
First, we consider the case $\bar{x}\geq x$. From (4.6), (5.1) and (5.5), we
have
$\displaystyle
2^{-1/3}T^{1/3}\big{(}\mathfrak{H}^{T,n}(\bar{x},y)-\mathfrak{H}^{T,n}(x,y)\big{)}-\log\mathfrak{A}^{T,n}_{k}(x,y;\bar{z})\geq
F^{T,n}_{k}(2^{1/3}T^{2/3}\bar{x},2^{1/3}T^{2/3}\bar{z})-F^{T,n}_{k}(2^{1/3}T^{2/3}x,2^{1/3}T^{2/3}\bar{z}).$
From (5.4), the right hand side equals
$\displaystyle
2^{-1/3}T^{1/3}\bigg{(}2^{1/2}k^{1/2}\bar{x}^{1/2}\left(1-\bar{x}^{-1/2}x^{1/2}\right)^{2}+\mathfrak{R}^{T,n}_{k}(\bar{x},\bar{z})-\mathfrak{R}^{T,n}_{k}(x,\bar{z})\bigg{)}.$
Together with $T\geq 2$, (5.6) follows by rearranging terms. The proof of
(5.7) is analogous. ∎
###### Lemma 5.5.
Fix $T\geq 2$, $n\geq 2$, $1\leq k\leq n-1$, $x>0$, and $y_{2}\geq y_{1}\geq
z>-2^{-1/3}n^{1/2}T^{-1/6}+x$. Then it holds that
(5.8)
$\begin{split}\mathfrak{H}^{T,n}(x,y_{2})-\mathfrak{H}^{T,n}(x,y_{1})-\mathfrak{X}^{T,n}[(z,k)\xrightarrow{(T/2)^{1/3}}(y_{2},1)]+\mathfrak{X}^{T,n}[(z,k)&\xrightarrow{(T/2)^{1/3}}(y_{1},1)]\\\
\geq&\log\big{(}1-\mathfrak{B}^{T,n}_{k}(x,y_{1};z)\big{)}.\end{split}$
Similarly, it holds that
(5.9)
$\begin{split}\mathfrak{H}^{T,n}(x,y_{2})-\mathfrak{H}^{T,n}(x,y_{1})-\mathfrak{X}^{T,n}[(z,k)\xrightarrow{(T/2)^{1/3}}(y_{2},1)]+\mathfrak{X}^{T,n}[(z&,k)\xrightarrow{(T/2)^{1/3}}(y_{1},1)]\\\
\leq&-\log\big{(}1-\mathfrak{A}^{T,n}_{k}(x,y_{2};z)\big{)}.\end{split}$
###### Proof.
From (5.1) and Lemma 2.6,
$\mathfrak{X}^{T,n}[(z,k)\xrightarrow{(T/2)^{1/3}}(y_{2},1)]-\mathfrak{X}^{T,n}[(z,k)\xrightarrow{(T/2)^{1/3}}(y_{1},1)]$
equals
$\displaystyle
2^{1/3}T^{-1/3}\left(\mathcal{X}^{T,n}[(2^{1/3}T^{2/3}z,k){\longrightarrow}(2^{1/3}T^{2/3}y_{2},1)]-\mathcal{X}^{T,n}[(2^{1/3}T^{2/3}z,k){\longrightarrow}(2^{1/3}T^{2/3}y_{1},1)]\right).$
From (4.10), it is bounded from above by
$\displaystyle
2^{1/3}T^{-1/3}\bigg{(}\mathcal{H}^{T,n}(2^{1/3}T^{2/3}x,2^{1/3}T^{2/3}y_{2})-$
$\displaystyle\mathcal{H}^{T,n}(2^{1/3}T^{2/3}x,2^{1/3}T^{2/3}y_{1})$
$\displaystyle-\log
A^{T,n}_{k}(2^{1/3}T^{2/3}x,2^{1/3}T^{2/3}y_{1};2^{1/3}T^{2/3}z)\bigg{)}.$
From (5.1), the above equals
$\displaystyle\mathfrak{H}^{T,n}(x,y_{2})-\mathfrak{H}^{T,n}(x,y_{1})-2^{1/3}T^{-1/3}\log\left(1-\mathfrak{B}^{T,n}_{k}(x,y_{1};z)\right).$
Together with $T\geq 2$, (5.8) follows by rearranging terms. The proof for
(5.9) is similar. ∎
The next proposition provides the coupling which allows us to prove
Proposition 5.3.
###### Proposition 5.6.
Fix a sequence $\mathsf{t}_{0}$. Then there exists a sequence $\mathsf{n}$, a
subsequence $\mathsf{t}\subset\mathsf{t}_{0}$ and a coupling of
$\\{\mathfrak{X}^{T,n},\mathfrak{H}^{T,n}\\}_{(T,n)\in\mathsf{t}\times\mathsf{n}}$,
$\\{\mathfrak{X}^{T},\mathfrak{H}^{T}\\}_{T\in\mathsf{t}}$ and $\mathcal{A}$
such that the following statements hold.
First, fix any $T\in\mathsf{t}_{0}$. Almost surely $\mathfrak{X}^{T,n}$
converges to $\mathfrak{X}^{T}$ in $C(\mathbb{N}\times\mathbb{R},\mathbb{R})$,
$\mathfrak{H}^{T,n}(x,y)$ converges to $\mathfrak{H}^{T}(x,y)$ for all
$x,y\in\mathbb{Q}$, and
$\mathfrak{R}^{T,n}_{k}(\bar{x},-2^{-1/2}k^{1/2}x^{-1/2})$ converges for all
$k\geq 1$ and $x,\bar{x}\in\mathbb{Q}^{+}$. We denote the limits by
$\mathfrak{R}^{T}_{k}(\bar{x},-2^{-1/2}k^{1/2}x^{-1/2})$.
Second, almost surely $\mathfrak{X}^{T}$ converges to $\mathcal{A}$ in
$C(\mathbb{N}\times\mathbb{R},\mathbb{R})$, $\mathfrak{H}^{T}(x,y)$ converges
in $C(\mathbb{R}^{2},\mathbb{R})$, and
$\mathfrak{R}^{T}_{k}(\bar{x},-2^{-1/2}k^{1/2}x^{-1/2})$ converges for all
$k\geq 1$ and $x,\bar{x}\in\mathbb{Q}^{+}$. We denote the limits by
$\mathfrak{H}$ and $\mathfrak{R}_{k}(\bar{x},-2^{-1/2}k^{1/2}x^{-1/2})$
respectively.
Lastly, $\mathfrak{H}(0,\cdot)=\mathcal{A}_{1}(\cdot)$. For all
$x,\bar{x}\in\mathbb{Q}^{+}$, it holds almost surely
(5.10)
$\lim_{k\to\infty}|k^{-1/2}\mathfrak{R}_{k}(\bar{x},-2^{-1/2}k^{1/2}x^{-1/2})|=0.$
###### Proof.
Fix $T\in\mathsf{t}_{0}$ and an arbitrary sequence $\mathsf{n}_{0}$. From
Theorem 4.1, $\\{\mathfrak{X}^{T,n}\\}_{n\in\mathsf{n}_{0}}$ is tight in
$C(\mathbb{N}\times\mathbb{R},\mathbb{R})$. From Proposition 4.2, the finite-
dimensional distribution of $\\{\mathfrak{H}^{T,n}\\}_{n\in\mathsf{n}_{0}}$ is
tight. From (5.4) and Lemma 4.3, we have the convergence in distribution of
$\mathfrak{R}^{T,n}_{k}(\bar{x},-2^{-1/2}k^{1/2}x^{-1/2})$ to
$\mathfrak{X}^{T}[(0,k+1)\xrightarrow{(T/2)^{1/3}}(\bar{x},1)]-2^{3/2}k^{1/2}\bar{x}^{1/2}+2^{1/3}T^{-1/3}k\log(2^{1/3}T^{2/3}).$
By Skorokhod’s representation theorem [Bil99, Theorem 6.7], we may find a
subsequence $\mathsf{n}^{\prime}\subset\mathsf{n}_{0}$ and a coupling of
$\\{\mathfrak{X}^{T,n},\mathfrak{H}^{T,n}\\}_{n\in\mathsf{n}^{\prime}}$ such
that along $\mathsf{n}^{\prime}$, $\mathfrak{X}^{T,n}$,
$\mathfrak{H}^{T,n}(x,y)$ and
$\mathfrak{R}^{T,n}_{k}(\bar{x},-2^{-1/2}k^{1/2}x^{-1/2})$ converge almost
surely. We note that the convergences of the latter two hold at rational
points. From Theorem 4.1, the limit of $\mathfrak{X}^{T,n}$ is distributed as
the scaled KPZ line ensemble and we denote it by $\mathfrak{X}^{T}$. From
Proposition 4.2, we may augment the probability space to accommodate a scaled
KPZ sheet $\mathfrak{H}^{T}$ such that $\mathfrak{H}^{T,n}(x,y)$ converges
almost surely to $\mathfrak{H}^{T}(x,y)$ for all $x,y\in\mathbb{Q}$. We note
that since $\mathfrak{H}^{T,n}(0,\cdot)=\mathfrak{X}^{T,n}_{1}(\cdot)$, we may
further require $\mathfrak{H}^{T}(0,\cdot)=\mathfrak{X}^{T}_{1}(\cdot)$. The
limits of $\mathfrak{R}^{T,n}_{k}(\bar{x},-2^{-1/2}k^{1/2}x^{-1/2})$ are
denoted by $\mathfrak{R}^{T}_{k}(\bar{x},-2^{-1/2}k^{1/2}x^{-1/2})$ .
Moreover,
$\displaystyle\mathfrak{R}^{T}_{k}(\bar{x},-2^{-1/2}k^{1/2}x^{-1/2})\overset{d}{=}\mathfrak{X}^{T}[(0,k+1)\xrightarrow{(T/2)^{1/3}}(\bar{x},1)]-2^{3/2}k^{1/2}\bar{x}^{1/2}+2^{1/3}T^{-1/3}k\log(2^{1/3}T^{2/3}).$
By a diagonal argument, we can find a sequence $\mathsf{n}$ and couplings of
$\\{\mathfrak{X}^{T,n},\mathfrak{H}^{T,n}\\}_{n\in\mathsf{n}}$,
$\mathfrak{X}^{T}$ and $\mathfrak{H}^{T}$ for each $T\in\mathsf{t}_{0}$ such
that along $\mathsf{n}$, the convergences in the previous paragraph hold. From
now on we fix such a sequence $\mathsf{n}$. From Theorem 5.1,
$\\{\mathfrak{X}^{T}\\}_{T\in\mathsf{t}_{0}}$ is tight in
$C(\mathbb{N}\times\mathbb{R},\mathbb{R})$. From Proposition 5.2,
$\\{\mathfrak{H}^{T}\\}_{T\in\mathsf{t}_{0}}$ is tight in
$C(\mathbb{R}^{2},\mathbb{R})$. Similarly,
$\\{\mathfrak{R}^{T}_{k}(\bar{x},-2^{-1/2}k^{1/2}x^{-1/2})\\}_{T\in\mathsf{t}_{0}}$
is tight. By Skorokhod’s representation theorem, we can find a subsequence
$\mathsf{t}\subset\mathsf{t}_{0}$ and a coupling such that along $\mathsf{t}$,
$\mathfrak{X}^{T}$, $\mathfrak{H}^{T}$ and
$\mathfrak{R}^{T}_{k}(\bar{x},-2^{-1/2}k^{1/2}x^{-1/2})$ converge almost
surely. From Theorem 5.1, the limit of $\mathfrak{X}^{T}$ is distributed as an
Airy line ensemble and we denote it by $\mathcal{A}$. Denote by $\mathfrak{H}$
and $\mathfrak{R}_{k}(\bar{x},-2^{-1/2}k^{1/2}x^{-1/2})$ the limits of
$\mathfrak{H}^{T}$ and
$\mathfrak{R}^{T}_{k}(\bar{x},-2^{-1/2}k^{1/2}x^{-1/2})$ respectively. From
$\mathfrak{H}^{T}(0,\cdot)=\mathfrak{X}^{T}_{1}(\cdot)$, we have
$\mathfrak{H}(0,\cdot)=\mathcal{A}_{1}(\cdot)$. Moreover,
$\displaystyle\mathfrak{R}_{k}(\bar{x},-2^{-1/2}k^{1/2}x^{-1/2})\overset{d}{=}\mathcal{A}[(0,k+1)\xrightarrow{\infty}(\bar{x},1)]-2^{3/2}k^{1/2}\bar{x}^{1/2}.$
From [DOV18, Theorem 6.3], for all $\varepsilon>0$,
$\displaystyle\sum_{k=1}^{\infty}\mathbb{P}\left(|\mathfrak{R}_{k}(\bar{x},-2^{-1/2}k^{1/2}x^{-1/2})|>\varepsilon
k^{1/2}\right)<\infty.$
Then (5.10) follows from the Borel-Cantelli lemma. ∎
###### Proof of Proposition 5.3.
Let $\mathfrak{H}$ be the distributional limit of $\mathfrak{H}^{T}$ along
some sequence $\mathsf{t}_{0}$. From Proposition 5.6, we can find a sequence
$\mathsf{n}$, a subsequence $\mathsf{t}\subset\mathsf{t}_{0}$ and a coupling
of
$\\{\mathfrak{X}^{T,n},\mathfrak{H}^{T,n}\\}_{(T,n)\in\mathsf{t}\times\mathsf{n}}$,
$\mathfrak{H}$ and the Airy line ensemble $\mathcal{A}$ such that the
assertions in Proposition 5.6 hold. In particular,
$\mathfrak{H}(0,\cdot)=\mathcal{A}_{1}(\cdot)$. From Definition 1.3, we may
further augment the probability space to accommodate an Airy sheet
$\mathcal{S}$ such that on an event with probability one,
(5.11)
$\begin{split}\lim_{k\to\infty}\mathcal{A}[(-2^{-1/2}k^{1/2}x^{-1/2},k)\xrightarrow{\infty}(y_{2},1)]-\mathcal{A}[(-2^{-1/2}k^{1/2}x^{-1/2},k)&\xrightarrow{\infty}(y_{1},1)]\\\
&=\mathcal{S}(x,y_{2})-\mathcal{S}(x,y_{1}),\end{split}$
for all $x>0$ and $y_{1},y_{2}$ in $\mathbb{R}$. From now on, we fix an event
$\Omega_{0}$ such that for each element in $\Omega_{0}$, all assertions in
Proposition 5.6 and (5.11) hold. Our goal is to prove that when this event
$\Omega_{0}$ occurs,
(5.12)
$\displaystyle\mathfrak{H}(x,y_{2})-\mathfrak{H}(x,y_{1})=\mathcal{S}(x,y_{2})-\mathcal{S}(x,y_{1}),$
for all $x>0$ and $y_{1},y_{2}$ in $\mathbb{R}$.
Fix $x_{-}<x_{0}$ in $\mathbb{Q}^{+}$ and $y_{1}\leq y_{2}$ in $\mathbb{Q}$.
We want to show that
(5.13)
$\mathfrak{H}(x_{0},y_{2})-\mathfrak{H}(x_{0},y_{1})\geq\mathcal{S}(x_{-},y_{2})-\mathcal{S}(x_{-},y_{1}).$
Let $z_{k}=-2^{-1/2}k^{1/2}x_{-}^{-1/2}$. From (5.8), we have
(5.14)
$\begin{split}\mathfrak{H}^{T,n}(x_{0},y_{2})-\mathfrak{H}^{T,n}(x_{0},y_{1})-\mathfrak{X}^{T,n}[(z_{k},k)\xrightarrow{(T/2)^{1/3}}(y_{2},1)]+\mathfrak{X}^{T,n}&[(z_{k},k)\xrightarrow{(T/2)^{1/3}}(y_{1},1)]\\\
&\geq\log\big{(}1-\mathfrak{B}^{T,n}_{k}(x_{0},y_{1};z_{k})\big{)}.\end{split}$
By our construction of the coupling,
$\displaystyle\lim_{k\to\infty}\lim_{\begin{subarray}{c}T\in\mathsf{t}\\\
T\to\infty\end{subarray}}\lim_{\begin{subarray}{c}n\in\mathsf{n}\\\
n\to\infty\end{subarray}}\bigg{(}\textup{LHS\ of\
\eqref{equ:Airy_middle}}\bigg{)}=\mathfrak{H}(x_{0},y_{2})-\mathfrak{H}(x_{0},y_{1})-\mathcal{S}(x_{-},y_{2})+\mathcal{S}(x_{-},y_{1}).$
Therefore, to prove (5.13), it suffices to show
(5.15)
$\displaystyle\liminf_{k\to\infty}\liminf_{\begin{subarray}{c}T\in\mathsf{t}\\\
T\to\infty\end{subarray}}\liminf_{\begin{subarray}{c}n\in\mathsf{n}\\\
n\to\infty\end{subarray}}\left(\log\mathfrak{B}^{T,n}_{k}(x_{0},y_{1};z_{k})\right)=-\infty.$
Applying (5.7) with $x=x_{0}$ and $\bar{x}=x_{-}$,
$\log\mathfrak{B}^{T,n}_{k}(x_{0},y_{1};z_{k})$ is bounded from above by
$\displaystyle-2^{1/2}k^{1/2}x_{-}^{1/2}\left(1-x_{-}^{-1/2}x_{0}^{1/2}\right)^{2}+\mathfrak{H}^{T,n}(x_{-},y_{1})-\mathfrak{H}^{T,n}(x_{0},y_{1})-\mathfrak{R}^{T,n}_{k}(x_{-},z_{k})+\mathfrak{R}^{T,n}_{k}(x_{0},z_{k}).$
Because of (5.10), the above goes to $-\infty$. Therefore (5.15) holds. A
similar argument yields
(5.16)
$\mathfrak{H}(x_{0},y_{2})-\mathfrak{H}(x_{0},y_{1})\leq\mathcal{S}(x_{+},y_{2})-\mathcal{S}(x_{+},y_{1}),$
for all $x_{0}<x_{+}$ in $\mathbb{Q}^{+}$ and $y_{1}\leq y_{2}$ in
$\mathbb{Q}$. As a result, (5.12) holds for all $x\in\mathbb{Q}^{+}$ and
$y_{1},y_{2}\in\mathbb{Q}$. By continuity, (5.12) holds for all $x>0$ and
$y_{1},y_{2}\in\mathbb{R}$.
∎
In the rest of the section, we prove Theorem 1.6, which is actually a simple
consequence of Theorem 1.7. For $T>0$, recall that
$\displaystyle\mathfrak{H}^{T}(s,x;t,y)=2^{1/3}T^{-1/3}\mathcal{H}(Ts,2^{1/3}T^{2/3}x;Tt,2^{1/3}T^{2/3}y)+(t-s)2^{1/3}T^{2/3}/24.$
From (3) in Theorem 1.1 and (1.9), for fixed $s<t$ it holds that
(5.17)
$\displaystyle\mathfrak{H}^{T}(s,x;t,y)\overset{d}{=}(t-s)^{1/3}\mathfrak{H}^{(t-s)T}((t-s)^{-2/3}x,(t-s)^{-2/3}y).$
Both sides of (5.17) are viewed as $C(\mathbb{R}^{2},\mathbb{R})$-valued
random variables. The linearity (1.4) can be rewritten as
(5.18)
$\begin{split}\mathfrak{H}^{T}(s,x;t,y)=2^{1/3}T^{-1/3}\log\int_{-\infty}^{\infty}\exp\big{[}2^{-1/3}T^{1/3}\big{(}\mathfrak{H}^{T}(s,x;&\tau,z)+\mathfrak{H}^{T}(\tau,z;t,y)\big{)}\big{]}dz\\\
&+2^{1/3}T^{-1/3}\log(2^{1/3}T^{2/3}).\end{split}$
###### Proof of Theorem 1.6.
Fix a finite set $\\{t_{1}<t_{2}<\dots<t_{m}\\}$. From (5.17) and Theorem 1.7,
$\\{\mathfrak{H}^{T}(t_{i},\cdot;t_{j},\cdot)\\}_{i<j}$ is tight. Denote by
$\\{\mathfrak{H}(t_{i},\cdot;t_{j},\cdot)\\}_{i<j}$ a subsequential limit. By
Skorokhod’s representation theorem [Bil99, Theorem 6.7], we may take a
coupling such that $\mathfrak{H}^{T}(t_{i},\cdot;t_{j},\cdot)$ converges to
$\mathfrak{H}(t_{i},\cdot;t_{j},\cdot)$ almost surely in
$C(\mathbb{R}^{2},\mathbb{R})$. From (4) in Theorem 1.1,
$\\{\mathfrak{H}(t_{i},\cdot;t_{i+1},\cdot)\\}_{i=1}^{m-1}$ are independent.
Moreover, from (5.17) and Theorem 1.7, $\mathfrak{H}(t_{i},\cdot;t_{j},\cdot)$
is distributed as an Airy sheet of scale $(t_{j}-t_{i})^{1/3}$. In view of
Corollary 1.5, it remains to prove that for any $t_{i}<t_{j}<t_{k}$, it holds
almost surely
(5.19)
$\displaystyle\mathfrak{H}(t_{i},x;t_{k},y)=\max_{z\in\mathbb{R}}\left(\mathfrak{H}(t_{i},x;t_{j},z)+\mathfrak{H}(t_{j},z;t_{k},y)\right).$
From [DOV18, Proposition 9.2], the right hand side of (5.19) is well-defined
as a $C(\mathbb{R}^{2},\mathbb{R})$-valued random variable. Moreover, it is
distributed as an Airy sheet of scale $(t_{k}-t_{i})^{1/3}$, which is also the
law of the left hand side. Since two random variables with the same law that
are almost surely ordered must be almost surely equal, it suffices to show
that almost surely for all $x,y\in\mathbb{R}$,
(5.20)
$\displaystyle\mathfrak{H}(t_{i},x;t_{k},y)\geq\max_{z\in\mathbb{R}}\left(\mathfrak{H}(t_{i},x;t_{j},z)+\mathfrak{H}(t_{j},z;t_{k},y)\right).$
Let $\Omega_{0}$ be the event on which the following holds. First,
$\mathfrak{H}^{T}(t_{i},\cdot;t_{j},\cdot)$ converges to
$\mathfrak{H}(t_{i},\cdot;t_{j},\cdot)$ in $C(\mathbb{R}^{2},\mathbb{R})$ for
all $t_{i}<t_{j}$. Second, the right hand side of (5.19) defines a continuous
function in $x$ and $y$. We will show that (5.20) holds on $\Omega_{0}$.
Fix $t_{i}<t_{j}<t_{k}$ and $x,y\in\mathbb{R}$. Denote by $Z_{j}(t_{i},x;t_{k},y)$
the collection of maximum points of
$\mathfrak{H}(t_{i},x;t_{j},z)+\mathfrak{H}(t_{j},z;t_{k},y)$. Note that when
$\Omega_{0}$ occurs, $Z_{j}(t_{i},x;t_{k},y)\neq\varnothing$. For $M>0$,
consider the event
$\Omega_{0}\cap\\{Z_{j}(t_{i},x;t_{k},y)\cap[-M,M]\neq\varnothing\\}$. When
such an event occurs, we have
$\displaystyle\max_{z\in\mathbb{R}}\left(\mathfrak{H}(t_{i},x;t_{j},z)+\mathfrak{H}(t_{j},z;t_{k},y)\right)$
$\displaystyle=$
$\displaystyle\max_{z\in[-M,M]}\left(\mathfrak{H}(t_{i},x;t_{j},z)+\mathfrak{H}(t_{j},z;t_{k},y)\right)$
$\displaystyle=$
$\displaystyle\lim_{T\to\infty}2^{1/3}T^{-1/3}\log\int_{-M}^{M}\exp\big{[}2^{-1/3}T^{1/3}\big{(}\mathfrak{H}^{T}(t_{i},x;t_{j},z)+\mathfrak{H}^{T}(t_{j},z;t_{k},y)\big{)}\big{]}\,dz$
$\displaystyle\leq$
$\displaystyle\lim_{T\to\infty}\mathfrak{H}^{T}(t_{i},x;t_{k},y)-2^{1/3}T^{-1/3}\log(2^{1/3}T^{2/3})=\mathfrak{H}(t_{i},x;t_{k},y).$
This implies (5.20) and finishes the proof. ∎
## 6\. Proof of Theorem 1.10
In this section, we prove Theorem 1.10. That is, we show that Conjecture 1.9
is true provided (1.14) holds.
We begin by giving upper bounds for $A^{T,n}_{k}(x,y;z)$ and
$B^{T,n}_{k}(x,y;z)$ (see Definition 4.5). We need the following elementary
inequality. Fix $\varepsilon>0$ and $k\geq 1$. There exists a constant
$D=D(\varepsilon)>0$ such that for all $T>0$ and
$x,\bar{x}\in[\varepsilon,\varepsilon^{-1}]$, we have
(6.1) $\displaystyle k\log\bar{x}+T^{-1}\bar{z}\bar{x}-k\log
x-T^{-1}\bar{z}x\geq D^{-1}k|x-\bar{x}|^{2},$
where $\bar{z}=-kT/\bar{x}$. Define
(6.2) $\displaystyle R^{T,n}_{k}(x,z)\coloneqq F^{T,n}_{k}(x,z)-k\log
x-T^{-1}zx+\log k!.$
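The elementary inequality (6.1) above is a strong-concavity estimate; for the reader's convenience, one possible derivation is sketched below (the explicit constant $D=2\varepsilon^{-2}$ is our own bookkeeping, and any admissible $D(\varepsilon)$ suffices).

```latex
\text{Let } g(x)=k\log x+T^{-1}\bar{z}x \text{ with } \bar{z}=-kT/\bar{x}.\\
\text{Then } g'(x)=\frac{k}{x}-\frac{k}{\bar{x}}, \text{ so } g'(\bar{x})=0,
\text{ and } g''(x)=-\frac{k}{x^{2}}\le-k\varepsilon^{2}
\text{ on } [\varepsilon,\varepsilon^{-1}].\\
\text{A second-order Taylor expansion around } \bar{x} \text{ then gives }
g(\bar{x})-g(x)\ge\tfrac{1}{2}k\varepsilon^{2}\,|x-\bar{x}|^{2},
\text{ i.e. (6.1) with } D=2\varepsilon^{-2}.
```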
###### Lemma 6.1.
Fix $T,\varepsilon>0$, $n\geq 2$, and $1\leq k\leq n-1$. There exists
$D=D(\varepsilon)>0$ such that for all
$x,\bar{x}\in[\varepsilon,\varepsilon^{-1}]$, and
$y>\max\\{-n^{1/2}T^{1/2}+x,-n^{1/2}T^{1/2}-\bar{x}\\}$, the following
statements hold. Let $\bar{z}=-kT/\bar{x}$. If $\bar{x}\geq x$, then
(6.3) $\begin{split}\log
A^{T,n}_{k}(x,y;\bar{z})\leq&-D^{-1}k|x-\bar{x}|^{2}+\mathcal{H}^{T,n}(\bar{x},y)-\mathcal{H}^{T,n}(x,y)-R^{T,n}_{k}(\bar{x},\bar{z})+R^{T,n}_{k}(x,\bar{z}).\end{split}$
If $\bar{x}\leq x$, then
(6.4) $\begin{split}\log
B^{T,n}_{k}(x,y;\bar{z})\leq&-D^{-1}k|x-\bar{x}|^{2}+\mathcal{H}^{T,n}(\bar{x},y)-\mathcal{H}^{T,n}(x,y)-R^{T,n}_{k}(\bar{x},\bar{z})+R^{T,n}_{k}(x,\bar{z}).\end{split}$
###### Proof.
First, we consider the case $\bar{x}\geq x$. From (4.6), we have
$\displaystyle\mathcal{H}^{T,n}(\bar{x},y)-\mathcal{H}^{T,n}(x,y)-\log
A^{T,n}_{k}(x,y;\bar{z})\geq$ $\displaystyle
F^{T,n}_{k}(\bar{x},\bar{z})-F^{T,n}_{k}(x,\bar{z}).$
Using (6.2), the right hand side of the above equals
$\displaystyle(k\log\bar{x}+T^{-1}\bar{z}\bar{x})-(k\log
x+T^{-1}\bar{z}x)+R^{T,n}_{k}(\bar{x},\bar{z})-R^{T,n}_{k}(x,\bar{z}).$
Using (6.1), it is bounded from below by
$\displaystyle
D^{-1}k|x-\bar{x}|^{2}+R^{T,n}_{k}(\bar{x},\bar{z})-R^{T,n}_{k}(x,\bar{z}).$
Hence (6.3) follows by rearranging terms.
Next, we consider the case $\bar{x}\leq x$. From (4.7), we have
$\displaystyle\mathcal{H}^{T,n}(x,y)-\mathcal{H}^{T,n}(\bar{x},y)+\log
B^{T,n}_{k}(x,y;\bar{z})\leq
F^{T,n}_{k}(x,\bar{z})-F^{T,n}_{k}(\bar{x},\bar{z}).$
Using (6.2), the right hand side of the above equals
$\displaystyle(k\log
x+T^{-1}\bar{z}x)-(k\log\bar{x}+T^{-1}\bar{z}\bar{x})+R^{T,n}_{k}(x,\bar{z})-R^{T,n}_{k}(\bar{x},\bar{z}).$
Using (6.1), it is bounded from above by
$\displaystyle-D^{-1}k|x-\bar{x}|^{2}+R^{T,n}_{k}(x,\bar{z})-R^{T,n}_{k}(\bar{x},\bar{z}).$
Rearranging the terms gives (6.4). ∎
The next proposition provides the coupling needed to prove Theorem 1.10.
###### Proposition 6.2.
Fix $T>0$. There exists a sequence $\mathsf{n}$, and a coupling of
$\\{\mathcal{X}^{T,n},\mathcal{H}^{T,n}\\}_{n\in\mathsf{n}}$,
$\mathcal{X}^{T}$ and $\mathcal{H}^{T}$ such that the following statements
hold.
Almost surely, $\mathcal{X}^{T,n}$ converges to $\mathcal{X}^{T}$ in
$C(\mathbb{N}\times\mathbb{R},\mathbb{R})$, $\mathcal{H}^{T,n}(x,y)$ converges
to $\mathcal{H}^{T}(x,y)$ for all $x,y\in\mathbb{Q}$ and
${R}^{T,n}_{k}(x,-Tk/\bar{x})$ converges for all $k\geq 1$ and
$x,\bar{x}\in\mathbb{Q}^{+}$. The limits of ${R}^{T,n}_{k}(x,-Tk/\bar{x})$ are
denoted by ${R}^{T}_{k}(x,-Tk/\bar{x})$. It holds that
$\mathcal{H}^{T}(0,\cdot)=\mathcal{X}^{T}_{1}(\cdot)$.
Moreover, suppose (1.14) holds. Then for all $x,\bar{x}\in\mathbb{Q}^{+}$, it
holds almost surely
(6.5) $\lim_{k\to\infty}|R^{T}_{k}(x,-Tk/\bar{x})/k|=0.$
###### Proof.
From Theorem 4.1, $\\{\mathcal{X}^{T,n}\\}_{n\in\mathbb{N}}$ is tight in
$C(\mathbb{N}\times\mathbb{R},\mathbb{R})$. From Proposition 4.2, the finite-dimensional distributions of $\mathcal{H}^{T,n}$ are tight. From Lemma 4.3,
${R}^{T,n}_{k}(x,-Tk/\bar{x})$ has the distributional limit
$\mathcal{X}^{T}[(0,k+1)\to(x,1)]-k\log x+\log k!.$
By Skorokhod’s representation theorem [Bil99, Theorem 6.7], we may find a
sequence $\mathsf{n}$ and a coupling of
$\\{\mathcal{X}^{T,n},\mathcal{H}^{T,n}\\}_{n\in\mathsf{n}}$ such that along
$\mathsf{n}$, $\mathcal{X}^{T,n}$, $\mathcal{H}^{T,n}(x,y)$ and
${R}^{T,n}_{k}(x,-Tk/\bar{x})$ converge almost surely. We note that the
convergences of the latter two hold at rational points. From Theorem 4.1, the
limit of $\mathcal{X}^{T,n}$ is distributed as the KPZ line ensemble and we
denote it by $\mathcal{X}^{T}$. From Proposition 4.2, we may augment the
probability space to accommodate the KPZ sheet $\mathcal{H}^{T}$ such that
$\mathcal{H}^{T,n}(x,y)$ converges to $\mathcal{H}^{T}(x,y)$ for all
$x,y\in\mathbb{Q}$. From
$\mathcal{H}^{T,n}(0,\cdot)=\mathcal{X}^{T,n}_{1}(\cdot)$, we may further
require $\mathcal{H}^{T}(0,\cdot)=\mathcal{X}^{T}_{1}(\cdot)$. Denote the
limits of ${R}^{T,n}_{k}(x,-Tk/\bar{x})$ by ${R}^{T}_{k}(x,-Tk/\bar{x})$. From
Lemma 4.3,
$\displaystyle
R^{T}_{k}(x,-Tk/\bar{x})\overset{d}{=}\mathcal{X}^{T}[(0,k+1)\to(x,1)]-k\log
x+\log k!.$
Suppose (1.14) holds. This implies that for all $\varepsilon>0$,
$\displaystyle\sum_{k=1}^{\infty}\mathbb{P}\left(|R^{T}_{k}(x,-Tk/\bar{x})|>\varepsilon
k\right)<\infty.$
Then (6.5) follows from the Borel-Cantelli lemma. ∎
###### Proof of Theorem 1.10.
Throughout this proof, we assume (1.14) is valid. Fix $T>0$. From Proposition
6.2, we can find a sequence $\mathsf{n}$ and a coupling of
$\\{\mathcal{X}^{T,n},\mathcal{H}^{T,n}\\}_{n\in\mathsf{n}}$ with the
following property. There exists an event $\Omega_{0}$ with probability one on
which the statements below hold.
1. (1)
$\mathcal{X}^{T,n}$ converges to the KPZ line ensemble $\mathcal{X}^{T}$ in
$C(\mathbb{N}\times\mathbb{R},\mathbb{R})$.
2. (2)
$\mathcal{H}^{T,n}(x,y)$ converges to the KPZ sheet $\mathcal{H}^{T}(x,y)$ for
all $x,y\in\mathbb{Q}$.
3. (3)
$R^{T,n}_{k}(x,-Tk/\bar{x})$ converges to $R^{T}_{k}(x,-Tk/\bar{x})$ for all
$x,\bar{x}\in\mathbb{Q}^{+}$ and $k\in\mathbb{N}$.
4. (4)
(6.5) holds.
Our goal is to show that (1.13) holds on $\Omega_{0}$.
Fix arbitrary $x_{0}<x_{+}$ in $\mathbb{Q}^{+}$ and $y_{1}\leq y_{2}$ in
$\mathbb{Q}$. We claim that
(6.6)
$\begin{split}\limsup_{k\to\infty}\bigg{(}\mathcal{X}^{T}[(-Tk/x_{0},k)\to(y_{2},1)]-\mathcal{X}^{T}[(-Tk/x_{0},k)&\to(y_{1},1)]\bigg{)}\\\
&\leq\mathcal{H}^{T}(x_{+},y_{2})-\mathcal{H}^{T}(x_{+},y_{1}).\end{split}$
Let $z_{k}=-kT/x_{0}$. From (4.10), we have
$\begin{split}\mathcal{X}^{T,n}[(z_{k},k)\to(y_{2},1)]-\mathcal{X}^{T,n}[(z_{k},k)\to(y_{1},1)]-\mathcal{H}^{T,n}&(x_{+},y_{2})+\mathcal{H}^{T,n}(x_{+},y_{1})\\\
&\leq-\log\left(1-B^{T,n}_{k}(x_{+},y_{1},z_{k})\right).\end{split}$
Letting first $n$ and then $k$ go to infinity, we have
$\displaystyle\limsup_{k\to\infty}\bigg{(}\mathcal{X}^{T}[(z_{k},k)\to(y_{2},1)]-\mathcal{X}^{T}[(z_{k},k)\to$
$\displaystyle(y_{1},1)]\bigg{)}-\mathcal{H}^{T}(x_{+},y_{2})+\mathcal{H}^{T}(x_{+},y_{1})$
$\displaystyle\leq$
$\displaystyle-\log\bigg{(}1-\limsup_{k\to\infty}\limsup_{\begin{subarray}{c}n\in\mathsf{n}\\\
n\to\infty\end{subarray}}B^{T,n}_{k}(x_{+},y_{1};z_{k})\bigg{)}.$
To obtain (6.6), it suffices to show that the limit of
$B^{T,n}_{k}(x_{+},y_{1};z_{k})$ is zero. Equivalently,
(6.7) $\limsup_{k\to\infty}\limsup_{\begin{subarray}{c}n\in\mathsf{n}\\\
n\to\infty\end{subarray}}\log B^{T,n}_{k}(x_{+},y_{1};z_{k})=-\infty.$
Applying (6.4) with $\bar{x}=x_{0}$ and $x=x_{+}$, we have
$\displaystyle\log B^{T,n}_{k}(x_{+},y_{1};z_{k})\leq$
$\displaystyle-D^{-1}k|x_{+}-x_{0}|^{2}+\mathcal{H}^{T,n}(x_{0},y_{1})-\mathcal{H}^{T,n}(x_{+},y_{1})-R^{T,n}_{k}(x_{0},z_{k})+R^{T,n}_{k}(x_{+},z_{k}).$
Because of (6.5), the limit of the right hand side is $-\infty$. This proves
(6.7) and hence (6.6). For any $x_{-}<x_{0}$ in $\mathbb{Q}^{+}$, a similar
argument yields
(6.8)
$\begin{split}\liminf_{k\to\infty}\bigg{(}\mathcal{X}^{T}[(-Tk/x_{0},k)\to(y_{2},1)]-\mathcal{X}^{T}[(-Tk/x_{0},k)&\to(y_{1},1)]\bigg{)}\\\
&\geq\mathcal{H}^{T}(x_{-},y_{2})-\mathcal{H}^{T}(x_{-},y_{1}).\end{split}$
Combining (6.6) and (6.8), we obtain (1.13) for $x,y_{1},y_{2}\in\mathbb{Q}$.
Next, we show that (1.13) holds for $x\in\mathbb{Q}^{+}$ and
$y_{1},y_{2}\in\mathbb{R}$. Let $y_{1,j}$ and $y_{2,j}$ be sequences of
rational numbers that converge to $y_{1}$ and $y_{2}$ respectively, with
$y_{1,j}\leq y_{1}$ and $y_{2,j}\geq y_{2}$. From Lemma 2.5, we have
$\displaystyle\mathcal{X}^{T}[(-Tk/x,k)\to(y_{2},1)]\leq$
$\displaystyle\mathcal{X}^{T}[(-Tk/x,k)\to(y_{2,j},1)]-\mathcal{X}^{T}_{1}(y_{2,j})+\mathcal{X}^{T}_{1}(y_{2}),$
$\displaystyle\mathcal{X}^{T}[(-Tk/x,k)\to(y_{1},1)]\geq$
$\displaystyle\mathcal{X}^{T}[(-Tk/x,k)\to(y_{1,j},1)]-\mathcal{X}^{T}_{1}(y_{1,j})+\mathcal{X}^{T}_{1}(y_{1}).$
Therefore,
$\begin{split}&\limsup_{k\to\infty}\bigg{(}\mathcal{X}^{T}[(-Tk/x,k)\to(y_{2},1)]-\mathcal{X}^{T}[(-Tk/x,k)\to(y_{1},1)]\bigg{)}\\\
\leq&\mathcal{H}^{T}(x,y_{2,j})-\mathcal{H}^{T}(x,y_{1,j})-\mathcal{X}^{T}_{1}(y_{2,j})+\mathcal{X}^{T}_{1}(y_{2})+\mathcal{X}^{T}_{1}(y_{1,j})-\mathcal{X}^{T}_{1}(y_{1}).\end{split}$
Letting $j$ go to infinity, we get
$\begin{split}\limsup_{k\to\infty}\bigg{(}\mathcal{X}^{T}[(-Tk/x,k)\to(y_{2},1)]-\mathcal{X}^{T}[(-Tk/x,k)&\to(y_{1},1)]\bigg{)}\\\
&\leq\mathcal{H}^{T}(x,y_{2})-\mathcal{H}^{T}(x,y_{1}).\end{split}$
The other direction can be proved similarly.
Lastly, we can remove the condition $x\in\mathbb{Q}^{+}$ by noting that, from
Lemma 2.4,
$\mathcal{X}^{T}[(-Tk/x,k)\to(y_{2},1)]-\mathcal{X}^{T}[(-Tk/x,k)\to(y_{1},1)]$
is monotone non-decreasing in $x$. ∎
## Appendix A
In the appendix we provide proofs for basic results in Section 2.
###### Proof of Lemma 2.4.
We use an induction argument on $\ell-m$. The assertion holds when $\ell=m$
because $f[(x,m)\to(y,m)]=f_{m}(y)-f_{m}(x)$. From Lemma 2.3, we have
$\displaystyle\exp\big{(}f[(x,\ell)\to(y,m)]\big{)}=\int_{x}^{y}\exp\big{(}f[(x,\ell)\to(z,m+1)]+f_{m}(y)-f_{m}(z)\big{)}\,dz.$
Hence
$\displaystyle\frac{d}{dy}\bigg{(}f[(x_{2},\ell)\to(y,m)]-f[(x_{1},\ell)\to(y,m)]\bigg{)}$
equals
(A.1)
$\begin{split}&\left(\int_{x_{2}}^{y}\exp\big{(}f[(x_{2},\ell)\to(z,m+1)]-f_{m}(z)\big{)}\,dz\right)^{-1}\exp\bigg{(}f[(x_{2},\ell)\to(y,m+1)]-f_{m}(y)\bigg{)}\\\
-&\left(\int_{x_{1}}^{y}\exp\big{(}f[(x_{1},\ell)\to(z,m+1)]-f_{m}(z)\big{)}\,dz\right)^{-1}\exp\bigg{(}f[(x_{1},\ell)\to(y,m+1)]-f_{m}(y)\bigg{)}.\end{split}$
From the induction hypothesis,
$f[(x_{2},\ell)\to(z,m+1)]-f[(x_{1},\ell)\to(z,m+1)]$ is non-decreasing in
$z$. Therefore,
$\displaystyle\int_{x_{2}}^{y}\exp\big{(}f[(x_{2},\ell)\to(z,m+1)]-f_{m}(z)\big{)}\,dz$
$\displaystyle\leq$
$\displaystyle\exp\big{(}f[(x_{2},\ell)\to(y,m+1)]-f[(x_{1},\ell)\to(y,m+1)]\big{)}\times\int_{x_{2}}^{y}\exp\big{(}f[(x_{1},\ell)\to(z,m+1)]-f_{m}(z)\big{)}\,dz$
$\displaystyle\leq$
$\displaystyle\exp\big{(}f[(x_{2},\ell)\to(y,m+1)]-f[(x_{1},\ell)\to(y,m+1)]\big{)}\times\int_{x_{1}}^{y}\exp\big{(}f[(x_{1},\ell)\to(z,m+1)]-f_{m}(z)\big{)}\,dz.$
Applying the above inequality to (A.1), we obtain
$\displaystyle\frac{d}{dy}\bigg{(}f[(x_{2},\ell)\to(y,m)]-f[(x_{1},\ell)\to(y,m)]\bigg{)}\geq
0$. ∎
###### Proof of Lemma 2.5.
Consider the following measure-preserving injection from
$\mathcal{Q}[(x,\ell)\to(y_{1},m)]$ to $\mathcal{Q}[(x,\ell)\to(y_{2},m)]$. Given
$\pi\in\mathcal{Q}[(x,\ell)\to(y_{1},m)]$, let
$\displaystyle\bar{\pi}(t)=\left\\{\begin{array}[]{cc}\pi(t),&t\in[x,y_{1}],\\\
m,&t\in(y_{1},y_{2}].\end{array}\right.$
Then the assertion follows from $f(\pi)=f(\bar{\pi})-f_{m}(y_{2})+f_{m}(y_{1})$. ∎
###### Proof of Lemma 2.6.
There is a natural map from $\mathcal{Q}[(x,\ell)\to(y,m)]$ to
$\mathcal{Q}[(a_{2}x+a_{3},\ell)\to(a_{2}y+a_{3},m)]$ given by
$\pi(t)\mapsto\pi^{\prime}(t)=\pi(a_{2}^{-1}t-a_{2}^{-1}a_{3})$. Moreover,
$d\pi=a_{2}^{-(\ell-m)}d\pi^{\prime}$. Together with
$g(\pi)=a_{1}f(\pi^{\prime})+a_{4}(y-x)$, we derive
$\displaystyle g[(x,\ell)\xrightarrow{\beta}(y,m)]=$
$\displaystyle\beta^{-1}\log\int_{\mathcal{Q}[(x,\ell)\to(y,m)]}\exp(\beta
g(\pi))d\pi$ $\displaystyle=$
$\displaystyle\beta^{-1}\log\int_{\mathcal{Q}[(a_{2}x+a_{3},\ell)\to(a_{2}y+a_{3},m)]}a_{2}^{-(\ell-m)}\exp(a_{1}\beta
f(\pi^{\prime})+a_{4}\beta(y-x))d\pi^{\prime}$ $\displaystyle=$ $\displaystyle
a_{1}\cdot
f[(a_{2}x+a_{3},\ell)\xrightarrow{a_{1}\beta}(a_{2}y+a_{3},m)]+a_{4}(y-x)-\beta^{-1}(\ell-m)\log
a_{2}.$
∎
###### Proof of Lemma 2.7.
Fix $\pi_{i}\in\mathcal{Q}[(x_{i},\ell_{i})\to(y_{i},m_{i})]$ and let
$(t_{i,j})_{j\in\llbracket m_{i}+1,\ell_{i}\rrbracket}$ be the coordinates of
$\pi_{i}$ under the identification (2.1). We again follow the convention (2.2)
and set $t_{i,\ell_{i}+1}=x_{i}$ and $t_{i,m_{i}}=y_{i}.$ It suffices to show
that $\pi_{1}\prec\pi_{2}$ if and only if for all $j_{1}\in\llbracket
m_{1},\ell_{1}\rrbracket$ and $j_{2}\in\llbracket m_{2},\ell_{2}\rrbracket$
with $j_{1}\geq j_{2}$, it holds that
(A.2) $\displaystyle t_{1,j_{1}}\leq t_{2,j_{2}+1}.$
Suppose $\pi_{1}\prec\pi_{2}$ fails. There exists
$t_{0}\in(x_{1},y_{1})\cap(x_{2},y_{2})$ such that
$\pi_{1}(t_{0})\geq\pi_{2}(t_{0})$. Set $j_{i}=\pi_{i}(t_{0})$. Because
$\pi_{i}$ are càdlàg and integer-valued, there exists $\varepsilon>0$ such
that $\pi_{i}(t)=j_{i}$ for $t\in[t_{0},t_{0}+\varepsilon)$. In view of (2.3),
this implies $t_{1,j_{1}}>t_{2,j_{2}+1}$ and (A.2) is violated.
Suppose $t_{1,j_{1}}>t_{2,j_{2}+1}$ for some $j_{1}\in\llbracket
m_{1},\ell_{1}\rrbracket$ and $j_{2}\in\llbracket m_{2},\ell_{2}\rrbracket$
with $j_{1}\geq j_{2}$. Because $x_{1}\leq x_{2}$ and $y_{1}\leq y_{2}$, we
may assume $(t_{1,j_{1}+1},t_{1,j_{1}})$ and $(t_{2,j_{2}+1},t_{2,j_{2}})$ are
non-empty by increasing $j_{1}$ or decreasing $j_{2}$ if necessary. Moreover,
by further increasing $j_{1}$, we may assume
$(t_{1,j_{1}+1},t_{1,j_{1}})\cap(t_{2,j_{2}+1},t_{2,j_{2}})$ is non-empty. In
view of (2.3), this implies there exists $t\in(x_{1},y_{1})\cap(x_{2},y_{2})$
such that $\pi_{1}(t)=j_{1}\geq\pi_{2}(t)=j_{2}$ and hence
$\pi_{1}\prec\pi_{2}$ fails. ∎
###### Proof of Lemma 2.9.
For $i\in\llbracket 1,k\rrbracket$, let
$(\tilde{x}_{i},\tilde{\ell}_{i})=(z-y_{k+1-i},n+1-m_{k+1-i})$ and
$(\tilde{y}_{i},\tilde{m}_{i})=(z-x_{k+1-i},n+1-\ell_{k+1-i})$. There is a
natural measure-preserving bijection between
$\mathcal{Q}[(x_{k+1-i},\ell_{k+1-i})\to(y_{k+1-i},m_{k+1-i})]$ and
$\mathcal{Q}[(\tilde{x}_{i},\tilde{\ell}_{i})\to(\tilde{y}_{i},\tilde{m}_{i})]$.
Given
$\pi_{k+1-i}\in\mathcal{Q}[(x_{k+1-i},\ell_{k+1-i})\to(y_{k+1-i},m_{k+1-i})]$,
set
$\tilde{\pi}_{i}\in\mathcal{Q}[(\tilde{x}_{i},\tilde{\ell}_{i})\to(\tilde{y}_{i},\tilde{m}_{i})]$
as follows. Let $\tilde{\pi}_{i}(\tilde{y}_{i})=\tilde{m}_{i}$ and
$\tilde{\pi}_{i}(t)=n+1-\lim_{t^{\prime}\to t+}\pi_{k+1-i}(z-t^{\prime})$. It can be
checked that $f(\pi_{k+1-i})=(R_{z}f)(\tilde{\pi}_{i})$ and then
$f(\pi)=(R_{z}f)(\tilde{\pi})$. Hence the assertion follows.
∎
###### Proof of Lemma 2.10.
We give the proof for (2.9). The arguments for (2.10) and (2.11) are
analogous. Consider a map
$\mathbf{G}:\mathcal{Q}[(x,n)^{k+1}\to(y,1)^{k+1}]\to\mathcal{Q}[U_{n,k+1}(x)\to
V_{k+1}(y)]$ given by the following. For
$\pi=(\pi_{1},\dots,\pi_{k+1})\in\mathcal{Q}[(x,n)^{k+1}\to(y,1)^{k+1}]$, we
define $\mathbf{G}(\pi)=\bar{\pi}=(\bar{\pi}_{1},\dots,\bar{\pi}_{k+1})$
through
$\displaystyle\bar{\pi}_{j}(t)=\left\\{\begin{array}[]{cc}\pi_{j}(t),&x\leq
t<y,\\\ j,&t=y.\end{array}\right.$
It can be checked that $\bar{\pi}\in\mathcal{Q}[U_{n,k+1}(x)\to V_{k+1}(y)]$
and $\mathbf{G}$ is a bijection. Moreover, $\mathbf{G}$ is the restriction of
a projection map between Euclidean spaces. This implies $\mathbf{G}$ is
measure-preserving. Together with $f(\pi)=f(\mathbf{G}(\pi))$, (2.9) follows.
∎
###### Proof of Lemma 2.12.
Consider the map from $\mathcal{Q}[V^{\prime}_{k}(x)\to V_{k}(y)]$ to
$\mathcal{Q}[(x,1)\searrow(y,k+1)]$ given by the following. For
$\pi=(\pi_{1},\dots,\pi_{k})\in\mathcal{Q}[V^{\prime}_{k}(x)\to V_{k}(y)]$,
let $\rho$ be defined in the way such that for all $t\in[x,y]$,
$\\{\pi_{1}(t),\pi_{2}(t),\dots,\pi_{k}(t),\rho(t)\\}=\llbracket
1,k+1\rrbracket$. It is straightforward to check that $\rho$ belongs to
$\mathcal{Q}[(x,1)\searrow(y,k+1)]$. Moreover, this map is a measure-
preserving bijection. Together with
$f(\pi)+f(\rho)=\sum_{i=1}^{k+1}f_{i}(y)-f_{i}(x),$
we have
$\displaystyle f[V^{\prime}_{k}(x)\to V_{k}(y)]$
$\displaystyle=\log\int_{\mathcal{Q}[V^{\prime}_{k}(x)\to
V_{k}(y)]}\exp(f(\pi))\,d\pi$
$\displaystyle=\sum_{i=1}^{k+1}f_{i}(y)-f_{i}(x)+\log\int_{\mathcal{Q}[(x,1)\searrow(y,k+1)]}\exp(-f(\rho))\,d\rho$
$\displaystyle=f[V_{k+1}(x)\to V_{k+1}(y)]-f[(x,1)\searrow(y,k+1)].$
∎
## References
* [ACQ11] Gideon Amir, Ivan Corwin and Jeremy Quastel “Probability distribution of the free energy of the continuum directed random polymer in $1+1$ dimensions” In _Comm. Pure Appl. Math._ 64.4, 2011, pp. 466–537 DOI: 10.1002/cpa.20347
* [Alb+22] Tom Alberts, Christopher Janjigian, Firas Rassoul-Agha and Timo Seppäläinen “The Green’s function of the parabolic Anderson model and the continuum directed polymer” arXiv, 2022 DOI: 10.48550/ARXIV.2208.11255
* [Bil99] Patrick Billingsley “Convergence of probability measures” A Wiley-Interscience Publication, Wiley Series in Probability and Statistics: Probability and Statistics John Wiley & Sons, Inc., New York, 1999, pp. x+277 DOI: 10.1002/9780470316962
* [CGH21] Ivan Corwin, Promit Ghosal and Alan Hammond “KPZ equation correlations in time” In _Ann. Probab._ 49.2, 2021, pp. 832–876 DOI: 10.1214/20-aop1461
* [CH14] Ivan Corwin and Alan Hammond “Brownian Gibbs property for Airy line ensembles” In _Invent. Math._ 195.2, 2014, pp. 441–508 DOI: 10.1007/s00222-013-0462-3
* [CH16] Ivan Corwin and Alan Hammond “KPZ line ensemble” In _Probab. Theory Related Fields_ 166.1-2, 2016, pp. 67–185 DOI: 10.1007/s00440-015-0651-7
* [CN17] Ivan Corwin and Mihai Nica “Intermediate disorder directed polymers and the multi-layer extension of the stochastic heat equation” In _Electron. J. Probab._ 22, 2017, Paper No. 13, 49 pp. DOI: 10.1214/17-EJP32
* [Cor12] Ivan Corwin “The Kardar-Parisi-Zhang equation and universality class” In _Random Matrices Theory Appl._ 1.1, 2012, 1130001, 76 pp. DOI: 10.1142/S2010326311300014
* [Cor21] Ivan Corwin “Invariance of polymer partition functions under the geometric RSK correspondence” In _Stochastic analysis, random fields and integrable probability—Fukuoka 2019_ 87, Adv. Stud. Pure Math. Math. Soc. Japan, Tokyo, [2021] ©2021, pp. 89–136
* [DM21] Evgeni Dimitrov and Konstantin Matetski “Characterization of Brownian Gibbsian line ensembles” In _Ann. Probab._ 49.5, 2021, pp. 2477–2529 DOI: 10.1214/21-aop1513
* [DOV18] Duncan Dauvergne, Janosch Ortmann and Bálint Virág “The directed landscape”, 2018 arXiv:1812.00309
* [DV21] Duncan Dauvergne and Bálint Virág “Bulk properties of the Airy line ensemble” In _Ann. Probab._ 49.4, 2021, pp. 1738–1777 DOI: 10.1214/20-aop1492
* [Jan97] Svante Janson “Gaussian Hilbert spaces” 129, Cambridge Tracts in Mathematics Cambridge University Press, Cambridge, 1997, pp. x+340 DOI: 10.1017/CBO9780511526169
* [Kal97] Olav Kallenberg “Foundations of modern probability”, Probability and its Applications (New York) Springer-Verlag, New York, 1997, pp. xii+523
* [KPZ86] Mehran Kardar, Giorgio Parisi and Yi-Cheng Zhang “Dynamic Scaling of Growing Interfaces” In _Phys. Rev. Lett._ 56 American Physical Society, 1986, pp. 889–892 DOI: 10.1103/PhysRevLett.56.889
* [LW20] Chin Hang Lun and Jon Warren “Continuity and strict positivity of the multi-layer extension of the stochastic heat equation” In _Electron. J. Probab._ 25, 2020, Paper No. 109, 41 pp. DOI: 10.1214/20-ejp511
* [MQR21] Konstantin Matetski, Jeremy Quastel and Daniel Remenik “The KPZ fixed point” In _Acta Math._ 227.1, 2021, pp. 115–203 DOI: 10.4310/acta.2021.v227.n1.a3
* [Mue91] Carl Mueller “On the support of solutions to the heat equation with noise” In _Stochastics Stochastics Rep._ 37.4, 1991, pp. 225–245 DOI: 10.1080/17442509108833738
* [Nic21] Mihai Nica “Intermediate disorder limits for multi-layer semi-discrete directed polymers” In _Electron. J. Probab._ 26, 2021, Paper No. 62, 50 pp. DOI: 10.1214/21-ejp614
* [OCo12] Neil O’Connell “Directed polymers and the quantum Toda lattice” In _Ann. Probab._ 40.2, 2012, pp. 437–458 DOI: 10.1214/10-AOP632
* [OW16] Neil O’Connell and Jon Warren “A multi-layer extension of the stochastic heat equation” In _Comm. Math. Phys._ 341.1, 2016, pp. 1–33 DOI: 10.1007/s00220-015-2541-3
* [OY01] Neil O’Connell and Marc Yor “Brownian analogues of Burke’s theorem” In _Stochastic Process. Appl._ 96.2, 2001, pp. 285–304 DOI: 10.1016/S0304-4149(01)00119-3
* [PS02] Michael Prähofer and Herbert Spohn “Scale invariance of the PNG droplet and the Airy process” Dedicated to David Ruelle and Yasha Sinai on the occasion of their 65th birthdays In _J. Statist. Phys._ 108.5-6, 2002, pp. 1071–1106 DOI: 10.1023/A:1019791415147
* [QS15] Jeremy Quastel and Herbert Spohn “The one-dimensional KPZ equation and its universality class” In _J. Stat. Phys._ 160.4, 2015, pp. 965–984 DOI: 10.1007/s10955-015-1250-9
* [QS23] Jeremy Quastel and Sourav Sarkar “Convergence of exclusion processes and the KPZ equation to the KPZ fixed point” In _J. Amer. Math. Soc._ 36.1, 2023, pp. 251–289 DOI: 10.1090/jams/999
* [Vir20] Bálint Virág “The heat and the landscape I” arXiv, 2020 DOI: 10.48550/ARXIV.2008.07241
* [Wu22] Xuan Wu “Convergence of the KPZ Line Ensemble” In _International Mathematics Research Notices_ , 2022 DOI: 10.1093/imrn/rnac272
* [Wu23] Xuan Wu “Some convergence results for the O’Connell-Yor polymer” in preparation, 2023
11institutetext: Zhejiang University 22institutetext: ETH Zürich
33institutetext: Karlsruhe Institute of Technology 44institutetext: KU Leuven
# Event-Based Fusion for Motion Deblurring with Cross-modal Attention
Lei Sun 1122 Christos Sakaridis 22 Jingyun Liang 22 Qi Jiang 11 Kailun Yang 33
Peng Sun 11 Yaozu Ye 11 Kaiwei Wang 11 Luc Van Gool 2244
###### Abstract
Traditional frame-based cameras inevitably suffer from motion blur due to long
exposure times. As a kind of bio-inspired camera, the event camera records the
intensity changes in an asynchronous way with high temporal resolution,
providing valid image degradation information within the exposure time. In
this paper, we rethink the event-based image deblurring problem and unfold it
into an end-to-end two-stage image restoration network. To effectively fuse
event and image features, we design an event-image cross-modal attention
module applied at multiple levels of our network, which allows the network to
focus on relevant features from the event branch and to filter out noise. We also
introduce a novel symmetric cumulative event representation specifically for
image deblurring as well as an event mask gated connection between the two
stages of our network which helps avoid information loss. At the dataset
level, to foster event-based motion deblurring and to facilitate evaluation on
challenging real-world images, we introduce the Real Event Blur (REBlur)
dataset, captured with an event camera in an illumination-controlled optical
laboratory. Our Event Fusion Network (EFNet) sets the new state of the art in
motion deblurring, surpassing both the prior best-performing image-based
method and all event-based methods with public implementations on the GoPro
dataset (by up to 2.47dB) and on our REBlur dataset, even in extreme blurry
conditions. The code and our REBlur dataset will be made publicly available.
## 1 Introduction
Motion blur often occurs in images due to camera shake or object motion during
the exposure time. The goal of deblurring is to recover a sharp image with
clear edge structures and texture details from the blurry image. This is a
highly ill-posed problem because of the infinitely many feasible solutions [2,
9, 50]. Traditional methods explicitly utilize natural image priors and
various constraints [2, 10, 16, 17, 21, 22, 46]. To better generalize when
addressing the deblurring problem, modern learning-based methods choose to
train Convolutional Neural Networks (CNNs) on large-scale data to learn the
implicit relationships between blurry and sharp images [12, 26, 38, 39, 49].
Despite their high performance on existing public datasets, these learning-
based methods often fail when facing extreme or real-world blur. Their
performance heavily relies on the quality and scale of the training data,
which creates the need for a more general and reliable deblurring method.
Event cameras [5, 11, 29] are bio-inspired asynchronous sensors with high
temporal resolution (on the order of $\mu$s), and they operate well in
environments with high dynamic range. Different from traditional frame-based
cameras, event cameras capture the intensity change of each pixel (i.e.
_event_ information) independently, if the change surpasses a threshold. Event
cameras encode the intensity change information within the exposure time of
the image frame into an event stream, making it possible to deblur an image
frame with events [27]. However, because of sensor noise and uncertainty in
the aforementioned threshold, it is difficult to use a physical model to
deblur images based solely on events. Thus, some methods [14, 23, 34] utilize
CNNs to deal with noise corruption and threshold uncertainty. Nevertheless,
these methods only achieve slight performance gains compared to image-only
methods, due to rather ineffective event representations and fusion mechanisms
between events and images.
In this paper, we first revisit the mechanism of motion blur and how event
information is utilized in image reconstruction. To deal with the inherent
defect of the event-based motion deblurring equation, we propose EFNet, an
Event Fusion Network for image deblurring which effectively combines
information from event and frame-based cameras for image deblurring. Motivated
by the physical model of event-based image deblurring [27], we design a
symmetric cumulative event representation (SCER) specifically for deblurring
and formulate our network based on a two-stage image restoration model. Each
stage of the model has a U-Net-like architecture [32]. The first stage
consists of two branches, an image branch and an event branch, the features of
which are fused at multiple levels. In order to perform the fusion of the two
modalities, we propose an Event-Image Cross-modal Attention (EICA) fusion
module, which allows the network to attend to the event features that are relevant for
deblurring via a channel-level attention mechanism. To the best of our
knowledge, this is the first time that a multi-head attention mechanism is
applied to event-based image deblurring. We also enable information exchange
between the two stages of our network by applying Event Mask Gated Connections
(EMGC), which selectively transfer feature maps from the encoder and decoder
of the first stage to the second stage. A detailed ablation study shows the
effectiveness of our novel fusion module using cross-modal attention, our
gated connection module and our multi-level middle fusion design.
Additionally, we record a real-world event blur dataset named Real Event Blur
(REBlur) in an optical laboratory with stable illumination and a high-
precision electronically controlled slide-rail which allows various types of
motion. We conduct extensive comparisons against state-of-the-art deblurring
methods on the GoPro dataset [26] with synthetic events and on REBlur with
real events and demonstrate the superiority of our event-based image
deblurring method.
In summary, we make the following main contributions:
* •
We design a novel event-image fusion module which applies cross-modal channel-
wise attention to adaptively fuse event features with image features, and
incorporate it at multiple levels of a novel end-to-end deblurring network.
* •
We introduce a novel symmetric cumulative event voxel representation for
deblurring, which is inspired by the physical model that connects blurry image
formation and event generation.
* •
We present REBlur, a real-world dataset consisting of tuples of blurry images,
sharp images and event streams from an event camera, which provides a
challenging evaluation setting for deblurring methods.
* •
Our deblurring network, equipped with our proposed modules and event
representation, sets the new state of the art for image deblurring on the
GoPro dataset and our REBlur dataset.
## 2 Related Work
Image deblurring. Traditional approaches often formulate deblurring as an
optimization problem [10, 16, 17, 21, 22, 46]. Recently, with the success of
deep learning, image deblurring has achieved impressive performance thanks to
the usage of CNNs. CNN-based methods directly map the blurry image to the
latent sharp image. Several novel components and techniques have been
proposed, such as attention modules [37, 40], multi-scale fusion [26, 39],
multi-stage networks [7, 47], and coarse-to-fine strategies [8], improving the
accuracy and robustness of deblurring. Despite the benefits they have shown
for deblurring, all aforementioned deep networks operate solely on images, a
modality which does not explicitly capture _motion_ and thus inherently limits
performance when facing real-world blurry images, especially in extreme
conditions.
Event-based deblurring. Recently, events have been used for motion deblurring,
due to the strong connection they possess with motion information. Pan _et
al_. [27] proposed an Event-based Double Integral (EDI) deblurring model using the
double integral of event data. They established a mathematical event-based
model mapping blurry frames to sharp frames, which is a seminal approach to
deblurring with events. However, limited by the sampling mechanism of event
cameras, this method often introduces strong, accumulated noise. Jiang _et
al_. [14] extracted motion information and sharp edges from events to assist
deblurring. However, their early fusion approach, which merely concatenates
events into the main branch of the network, does not take full advantage of
events. Lin _et al_. [23] fused events with the image via dynamic filters
from STFAN [51]. In addition, Shang _et al_. [34] fused event information
into a weight matrix that can be applied to any state-of-the-art network. To
sum up, most of the above event-based learning methods did not use event
information effectively, achieving only minor improvements compared to image-
only methods on standard benchmarks.
Event representation. Different from synchronous signals such as images from
frame-based cameras, events are asynchronous and sparse. A key to extracting
information from events effectively is how the events are represented. Event
representation is an application-dependent problem and different
tasks admit different solutions. The event-by-event method is suitable for
spiking neural networks owing to their asynchronous architecture [28, 33, 44]. A
Time Surface, which is a 2D map that stores the time value deriving from the
timestamp of the last event, has proved suitable for event-based
classification [1, 20, 35]. Some modern learning-based methods convert events
to a 2D frame by counting events or accumulating polarities [24, 25, 34]. This
approach is compatible with conventional computer vision tasks but loses
temporal information. 3D space-time histograms of events, also called voxel
grids, preserve the temporal information of events better by accumulating
event polarity on a voxel [4, 52]. For image deblurring, most works utilized
2D event-image pairs [34] or borrowed 3D voxel grids like Stacking Based on
Time (SBT) from image reconstruction [41]. However, there still is no event
representation specifically designed for motion deblurring.
## 3 Method
Figure 1: (a): The architecture of our Event Fusion Network (EFNet). EFNet
consists of two UNet-like backbones [32] and an event extraction branch. After
each residual convolution block (“Res Block”), feature maps from the event
branch and the image branch are fused. The second UNet backbone refines the
deblurred image further. “SCER”: symmetric cumulative event representation,
“EICA”: event-image cross-modal attention, “SConv”: 4$\times$4 strided
convolution with stride 2, “TConv”: 2$\times$2 transposed convolution with
stride 2, “SAM”: supervision attention module [47]. (b): The Event Mask Gated
Connection module (EMGC) transfers features across stages guided by an event
mask.
We first introduce the mathematical model for the formation of blurry images
from sharp images that involves events in Sec. 3.1. Based on this model, we
pose the event-based deblurring problem as a deblurring-denoising problem and
base the high-level design of our network architecture on this formulation, as
explained in Sec. 3.2. We present our symmetric cumulative representation for
events, which constitutes a 3D voxel grid in which the temporal dimension is
discretized, in Sec. 3.3. This event representation is provided as input
together with the blurry image to our two-stage network. We then detail the
two main novel components of our network: our novel event-image cross-modal
attention fusion mechanism (Sec. 3.4), which adaptively fuses feature channels
associated with events and images, and our event mask gated connection module
between the two stages of our network (Sec. 3.5), which helps selectively
forward to the second stage the features at sharp regions of the input from
the encoder and the features at blurry regions from the decoder of the first
stage.
### 3.1 Problem Formulation
For an event camera, the $i$-th event $e_{i}$ is represented as a tuple
$e_{i}=(x_{i},y_{i},t_{i},p_{i})$, where $x_{i}$, $y_{i}$ and $t_{i}$
represent the pixel coordinates and the timestamp of the event respectively,
and $p_{i}\in\left\\{-1,+1\right\\}$ is the polarity of the event [5, 29]. An
event is triggered at time $t$ only when the change in pixel intensity
$\mathcal{I}$ surpasses the threshold compared to the pixel intensity at the
time of the last trigger. This is formulated as
$p_{i}=\begin{cases}+1,\text{if}\log\left(\frac{\mathcal{I}_{t}{(x_{i},y_{i})}}{\mathcal{I}_{t-\Delta
t}{(x_{i},y_{i})}}\right)>c,\\\
-1,\text{if}\log\left(\frac{\mathcal{I}_{t}{(x_{i},y_{i})}}{\mathcal{I}_{t-\Delta
t}{(x_{i},y_{i})}}\right)<-c,\end{cases}$ (1)
where $c$ is the contrast threshold of intensity change, which may differ
across the sensor plane.
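To make the triggering rule (1) concrete, here is a minimal, illustrative event generator of our own (not the paper's simulator): it scans a stack of intensity frames and emits at most one event per pixel per frame, resetting the per-pixel reference log-intensity whenever an event fires.

```python
import numpy as np

def emit_events(frames, c=0.2):
    """Toy event generation following Eq. (1).

    frames: (T, H, W) array of positive intensities.
    Returns a list of (x, y, t, p) tuples. A real sensor may emit several
    events for a large intensity change; this sketch emits at most one
    per pixel per frame.
    """
    ref = np.log(frames[0])                 # log-intensity at the last trigger
    events = []
    for t in range(1, frames.shape[0]):
        log_i = np.log(frames[t])
        diff = log_i - ref
        for p, mask in ((+1, diff > c), (-1, diff < -c)):
            ys, xs = np.nonzero(mask)
            events.extend((int(x), int(y), t, p) for x, y in zip(xs, ys))
            ref[mask] = log_i[mask]         # reset the reference where triggered
    return events
```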
Given the intensity of a latent sharp image $\mathbf{L}$, according to [27],
the corresponding blurred image $\mathbf{B}$ can be derived by the Event-based
Double Integral (EDI) model:
$\begin{split}\mathbf{B}&=\frac{1}{T}\int_{f-T/2}^{f+T/2}\mathbf{L}(t)dt\\\
&=\frac{\mathbf{L}(f)}{T}\int_{f-T/2}^{f+T/2}\exp\Big{(}c\int_{f}^{t}p(s)ds\Big{)}\
dt,\end{split}$ (2)
where $f$ is the middle point of the exposure time $T$, $p(s)$ is the polarity
component of the event stream and $\mathbf{L}(f)$ is the latent sharp image
corresponding to the blurred image $\mathbf{B}$. The discretized version of
(2) can be expressed as
$\mathbf{B}=\frac{\mathbf{L}(N)}{2N+1}\sum_{i=0}^{2N}\exp\left(c\operatorname*{sgn}(i-N)\sum_{j:\;m\leq
t_{j}\leq M}p_{j}\delta_{x_{j}y_{j}}\right),$ (3)
where $\operatorname*{sgn}$ is the signum function,
$m=\min\\{f+T/2(i/N-1),f\\}$, $M=\max\\{f+T/2(i/N-1),f\\}$ and $\delta$ is the
Kronecker delta, defined as
$\delta_{kl}(m,n)=\begin{cases}1,\text{ if }k=m\text{ and }l=n,\\\ 0,\text{
otherwise.}\end{cases}$ (4)
In (3), we partition the exposure time $T$ into $2N$ equal intervals.
Rearranging (3) yields:
$\mathbf{L}(N)=\frac{(2N+1)\mathbf{B}}{\sum_{i=0}^{2N}\exp\left(c\operatorname*{sgn}(i-N)\sum_{j:\;m\leq
t_{j}\leq M}p_{j}\delta_{x_{j}y_{j}}\right)}.$ (5)
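As a worked illustration of (5) (our own sketch, assuming a single fixed threshold $c$ and noise-free events), the latent image can be recovered from the blurry image once the inner sums of (3), i.e. the signed polarity accumulations per interval, are available:

```python
import numpy as np

def edi_reconstruct(blurry, polarity_sums, c=0.2):
    """Discrete EDI reconstruction, Eq. (5).

    blurry: (H, W) blurred image B.
    polarity_sums: (2N+1, H, W); entry i holds
        sgn(i - N) * sum_{j: m <= t_j <= M} p_j * delta_{x_j y_j},
    the inner sum of Eq. (3) (identically zero for i = N).
    """
    denom = np.exp(c * polarity_sums).sum(axis=0)   # sum over i of exp(c * inner sum)
    return polarity_sums.shape[0] * blurry / denom  # (2N+1) * B / denominator
```

This inversion is exact only in the idealized setting; the factors that break it in practice are listed in Sec. 3.2.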
### 3.2 General Architecture of EFNet
The formulation in (5) indicates that the latent sharp image can be derived
from the blurred image combined with the set of events
$\mathcal{E}=\\{e_{i}=(x_{i},y_{i},t_{i},p_{i}):f-T/2\leq t_{i}\leq f+T/2\\}$
(_i.e_., all the events which are triggered within the exposure time), when
events in this set are accumulated over time. We propose to learn this
relation with a deep neural network, named Event Fusion Network (EFNet), which
admits as inputs the blurred image and the events and maps them to the sharp
image. The generic form of the learned mapping is
$\mathbf{L}_{\text{initial}}=f_{3}\left(f_{1}(\mathbf{B};\Theta_{1}),f_{2}(\mathcal{E};\Theta_{2});\Theta_{3}\right),$
(6)
where the blurred image and the events are mapped individually to intermediate
representations via $f_{1}$ and $f_{2}$ respectively and these intermediate
representations are afterwards passed to a joint mapping $f_{3}$.
$\Theta_{1}$, $\Theta_{2}$ and $\Theta_{3}$ denote the respective parameters
of the three mappings. The main challenges we need to address given this
generic formulation of our model are (i) how to represent the set of events
$\mathcal{E}$ in a suitable way for inputting it to the network, and (ii) how
and when to fuse the intermediate representations that are generated for the
blurred image by $f_{1}$ and for the events by $f_{2}$, _i.e_., how to design
$f_{3}$. We address the issue of how to represent the events in Sec. 3.3 and
how to perform fusion in Sec. 3.4.
Equation (3) is the idealized formulation for event-based motion deblurring.
However, in real-world settings, three factors make it impossible to restore
the image based on this equation alone:
* •
Instead of being strictly equal to a fixed value, the threshold $c$ of a given
event camera is constant neither in time nor across the sensor plane [36, 43].
* •
Intensity changes that are lower than the threshold $c$ do not trigger an
event.
* •
Spurious events occur over the entire image.
Most of the restoration errors come from the first two factors, which result
in degradation of the restored image in regions with events. We denote these
regions as $R_{e}$. Taking the above factors into account, we design our
network so that it includes a final mapping of the initial deblurred image
$\mathbf{L}_{\text{initial}}$ to a denoised version of it, which can correct
potential errors in the values of pixels inside $R_{e}$:
$\mathbf{L}_{\text{final}}=f_{4}(\mathbf{L}_{\text{initial}};\Theta_{4}).$ (7)
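As a structural illustration of (6) and (7), the following minimal, runnable skeleton shows how the four mappings compose; all sub-modules are stand-in convolutions of ours, not the actual EFNet blocks.

```python
import torch
import torch.nn as nn

class TwoStageSkeleton(nn.Module):
    """Minimal skeleton of the two-stage design in Eqs. (6)-(7)."""
    def __init__(self, img_ch=3, evt_ch=6, feat=32):
        super().__init__()
        self.f1 = nn.Conv2d(img_ch, feat, 3, padding=1)      # image branch
        self.f2 = nn.Conv2d(evt_ch, feat, 3, padding=1)      # event branch
        self.f3 = nn.Conv2d(2 * feat, img_ch, 3, padding=1)  # fusion -> initial deblur
        self.f4 = nn.Conv2d(img_ch, img_ch, 3, padding=1)    # second stage: refinement

    def forward(self, blurry, scer):
        fused = torch.cat([self.f1(blurry), self.f2(scer)], dim=1)
        initial = blurry + self.f3(fused)     # Eq. (6): initial sharp estimate
        final = initial + self.f4(initial)    # Eq. (7): denoised/refined estimate
        return initial, final
```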
Two-stage backbone. Based on the generic architecture of the network presented
in (6) and (7), we design EFNet as a two-stage network to progressively
restore sharp images from blurred images and event streams, where the first
and second stage implement the mappings in (6) and (7) respectively. The
detailed architecture of EFNet is illustrated in Fig. 1. Both stages of EFNet
have an encoder-decoder structure, based on the UNet [32] architecture, and
each stage consists of two down-sampling and two up-sampling layers. Between
the encoder and decoder, we add a skip connection with $3\times 3$
convolution. The residual convolution block in the UNet consists of two
$3\times 3$ convolution layers and leaky ReLU functions with a $1\times 1$
convolution shortcut. Recently, the Supervised Attention Module (SAM) in
multi-stage methods demonstrated superior capacity in transferring features
between different sub-networks [7, 47]. Thus, we use SAM to connect the two
stages of our network. In the first stage, we fuse features from the event
branch and the image branch at multiple levels using a novel cross-modal
attention-based block. Between the two stages, we design an Event Mask Gated
Connection module to boost feature aggregation with blurring priors from
events. The details of the two aforementioned components of EFNet are given in
Sec. 3.4 and 3.5.
### 3.3 Symmetric Cumulative Event Representation
Figure 2: The proposed Symmetric Cumulative Event Representation (SCER). Red
and blue dots represent events with positive and negative polarity
respectively.
To feed the asynchronous events corresponding to synchronized image frames to
our network, we design a representation specifically suited for deblurring. In
(3), the accumulation of polarities via the inner sum on the right-hand side
indicates the relative intensity changes between the target latent sharp image
$\mathbf{L}(N)$ and each of the rest of latent sharp images in the exposure
time. The accumulation via the outer sum on the right-hand side represents the
sum of all latent sharp images. Based on this relationship, we propose the
Symmetric Cumulative Event Representation (SCER). As Fig. 2 shows, the
exposure time $T$ of the blurry image is divided equally into $2N$ intervals.
Assuming $2N+1$ latent sharp images in $T$, the polarity accumulation from the
central target latent image $\mathbf{L}(N)$ to a single latent image turns
into a 2D tensor with dimensions $(H,W)$:
$\mathbf{SCER}_{i}=\operatorname*{sgn}(i-N)\sum_{j:\;m\leq t_{j}\leq
M}p_{j}\delta_{x_{j}y_{j}}.$ (8)
For $i=N$, $\mathbf{SCER}_{N}=0$, so we discard this tensor. The remaining
$2N$ tensors are concatenated together, forming a tensor which indicates
intensity changes between the central latent sharp image $\mathbf{L}(N)$ and
each of the $2N$ other latent images. In this way,
$\mathbf{SCER}\in\mathbb{R}^{H\times W\times 2N}$ includes all the relative
intensity values corresponding to the center latent sharp frame and it becomes
suitable for feature extraction with our image deblurring model. Because the
accumulation limits vary across channels, SCER also contains both information
about the area in which blur occurs (channel 0 and channel $2N-1$) and
information about sharp edges (channel $N-1$ and channel $N$).
Our method discretizes $T$ into $2N$ parts, quantizing the temporal
information of events within each interval of length $\frac{T}{2N}$. However,
SCER still retains temporal information, as the endpoints of the intervals
over which events are accumulated differ across channels. The larger $N$ is,
the less temporal information is lost. In our implementation, we fix $N=3$.
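For concreteness, here is a minimal NumPy sketch of how SCER can be assembled from raw events under our reading of (8); the variable names and the event-tuple format are ours.

```python
import numpy as np

def build_scer(events, height, width, f, T, N=3):
    """Symmetric Cumulative Event Representation, Eq. (8).

    events: iterable of (x, y, t, p) with timestamps in [f - T/2, f + T/2].
    Returns a (2N, height, width) tensor; the all-zero channel i = N is dropped.
    """
    channels = []
    for i in range(2 * N + 1):
        if i == N:
            continue                                  # SCER_N is identically zero
        endpoint = f + (T / 2) * (i / N - 1)          # other end of the accumulation window
        m, M = min(endpoint, f), max(endpoint, f)
        ch = np.zeros((height, width), dtype=np.float32)
        for x, y, t, p in events:
            if m <= t <= M:
                ch[y, x] += p                         # accumulate polarity per pixel
        channels.append(np.sign(i - N) * ch)
    return np.stack(channels)                         # shape (2N, height, width)
```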
Figure 3: The proposed Event-Image Cross-modal Attention fusion module. The
size of the attention map is $c\times c$.
### 3.4 Event-Image Cross-modal Attention Fusion
How to jointly extract and fuse information from event streams and images is
the key to event-based deblurring. Previous work [14, 23] simply multiplies or
concatenates low-resolution feature maps from the two modalities, but this
simple fusion approach cannot fully model the relation between events and
images. Other methods estimate optical flow with events and use that for
deblurring [34]. However, the estimation of optical flow itself introduces
errors.
To utilize event data, we instead include a novel cross-modal attention block
at multiple levels of EFNet. Contrary to self-attention blocks, in which the
queries ($\mathbf{Q}$), keys ($\mathbf{K}$) and values ($\mathbf{V}$) all come
from the same branch of the network, our Event-Image Cross-modal Attention
(EICA) block admits as inputs the queries $\mathbf{Q}_{\text{image}}$ from the
image branch and the keys $\mathbf{K}_{\text{event}}$ and values
$\mathbf{V}_{\text{event}}$ from the event branch, as shown in Fig. 3. In
particular, the input features from the two branches are first fed to
normalization and $1\times 1$ convolution layers, where the latter have $c$
output channels. We then apply cross-modal attention between vectorized
features from the two modalities via
$\operatorname*{Attention}(\mathbf{Q}_{\text{image}},\mathbf{K}_{\text{event}},\mathbf{V}_{\text{event}})=\mathbf{V}_{\text{event}}\operatorname*{softmax}\left(\frac{\mathbf{Q}^{T}_{\text{image}}\mathbf{K}_{\text{event}}}{\sqrt{d_{k}}}\right).$
(9)
The reason for introducing the $1\times 1$ convolution layer is to reduce the
spatial complexity of the above attention operation. In particular, $c$ is
chosen to be much smaller than $hw$, where $h$ and $w$ are the height and
width of the input feature maps, and the soft indexing of
$\mathbf{K}_{\text{event}}$ by $\mathbf{Q}_{\text{image}}$ is performed at the
_channel_ dimension instead of the spatial dimensions. Thus, the resulting
soft attention map from (9) is $c\times{}c$ instead of $hw\times{}hw$,
reducing the spatial complexity from $\mathcal{O}(h^{2}w^{2})$ to
$\mathcal{O}(c^{2})$ and making the operation feasible even for features with
high spatial resolution, as in our case. Finally, the output of the attention
operation is added to the input image features and the result of this addition
is passed to a multi-layer perceptron (MLP) consisting of two fully connected
layers with a Gaussian Error Linear Unit (GELU) [13] in between. We use the
EICA module at multiple levels of EFNet to fuse event information aggregated
across receptive fields of varying size.
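The following PyTorch sketch captures the core of EICA as described above, single-head for brevity; the normalization type, the scaling constant $d_{k}$, and the back-projection to the input channel count are our assumptions, so treat this as an illustration rather than the released implementation.

```python
import torch
import torch.nn as nn

class EICA(nn.Module):
    """Event-Image Cross-modal Attention, Eq. (9): queries from the image
    branch, keys/values from the event branch, attention over channels."""

    def __init__(self, in_ch, c):
        super().__init__()
        self.norm_img = nn.GroupNorm(1, in_ch)      # normalization (illustrative choice)
        self.norm_evt = nn.GroupNorm(1, in_ch)
        self.to_q = nn.Conv2d(in_ch, c, 1)          # 1x1 convs reduce to c channels
        self.to_k = nn.Conv2d(in_ch, c, 1)
        self.to_v = nn.Conv2d(in_ch, c, 1)
        self.proj = nn.Conv2d(c, in_ch, 1)          # back to in_ch for the residual (our assumption)
        self.mlp = nn.Sequential(nn.Conv2d(in_ch, in_ch, 1), nn.GELU(),
                                 nn.Conv2d(in_ch, in_ch, 1))

    def forward(self, img_feat, evt_feat):
        b, _, h, w = img_feat.shape
        q = self.to_q(self.norm_img(img_feat)).flatten(2)   # (b, c, h*w)
        k = self.to_k(self.norm_evt(evt_feat)).flatten(2)
        v = self.to_v(self.norm_evt(evt_feat)).flatten(2)
        # Channel-wise attention map of size c x c, not (h*w) x (h*w).
        attn = torch.softmax(q @ k.transpose(1, 2) / q.shape[1] ** 0.5, dim=-1)
        out = (attn @ v).view(b, -1, h, w)                  # soft channel mixing of event values
        out = img_feat + self.proj(out)                     # residual with image features
        return out + self.mlp(out)                          # two-layer MLP with GELU
```

For example, EICA(in_ch=64, c=16) fuses (B, 64, H, W) feature maps from the two branches; keeping c much smaller than h*w is what keeps the attention map at the manageable size c x c.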
### 3.5 Event Mask Gated Connection Module
Previous work [30] predicts a mask indicating which areas of an image are
severely distorted, but this mask is not completely accurate. Apart from
information about intensity changes, event data also contain spatial
information about the blurred regions of the input image. Typically, regions
in which events occur are more severely degraded in the blurry image.
Motivated by this observation, we introduce an Event Mask Gated Connection
(EMGC) between the two stages of our network to exploit the spatial
information about blurred regions.
In particular, we binarize the sum of the first and last channels of SCER and
thus obtain a binary event mask, in which pixels where an event has occurred
are set to 0 and the rest are set to 1. As illustrated in Fig. 1(b), EMGC
masks out the feature maps of the encoder at regions where the event mask is
0, which are expected to be more blurry, and masks out the feature maps of the
decoder at regions where the event mask is 1 (using the complement of the
event mask), which are expected to be less blurry. A skip connection is added
beside the mask operation. Feature maps with fewer artifacts from the encoder
and better-restored feature maps from the decoder are combined through the
event mask gate. In this
way, we selectively connect feature maps from the encoder and the decoder of
the first stage to the second stage. Besides, EMGC eases the flow of
information through the network, as it creates a shortcut through which
features can be transferred directly from the first to the second stage.
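A compact sketch of EMGC, following Fig. 1(b) as we read it; the binarization rule and the way the two masked branches are merged for the second stage are our assumptions, and the mask is assumed to be pre-resized to the feature resolution.

```python
import torch
import torch.nn as nn

class EMGC(nn.Module):
    """Event Mask Gated Connection: keep encoder features where no events
    occurred (sharper regions) and decoder features where events occurred
    (blurrier, now-restored regions), each with a skip connection."""

    def __init__(self, ch):
        super().__init__()
        self.conv_enc = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv_dec = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, enc_feat, dec_feat, scer):
        # Binary event mask from the sum of the first and last SCER channels:
        # 0 where an event occurred, 1 elsewhere.
        mask = ((scer[:, :1] + scer[:, -1:]) == 0).float()
        enc = enc_feat + self.conv_enc(enc_feat * mask)          # gate + skip
        dec = dec_feat + self.conv_dec(dec_feat * (1.0 - mask))  # complementary gate + skip
        return enc + dec                                         # merged features for stage two
```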
## 4 REBlur Dataset
Almost all event-based motion deblurring methods [6, 14, 23, 34, 45] train
models on blurred image datasets, such as GoPro [26], with synthetic events
from ESIM [31]. Although the contrast threshold $c$ in the event simulator
varies across pixels as in reality, a domain gap between synthetic and real
events still exists because of the background activity noise, dark current
noise, and false negatives in refractory period [3, 36, 43]. Recently, Jiang
_et al_. [14] proposed BlurDVS by capturing an image plus events with slow
motion, and then synthesizing motion blur by averaging multiple nearby frames.
However, motion blur in the ground-truth images is inevitable in this setting,
and fast motion produces different events than slow motion does because of
false negatives during the refractory period of event cameras [3, 45]. Thus, a large-
scale real-world dataset with blurry images, reliable corresponding events,
and ground-truth sharp images is missing.
We present a new event-based dataset for deblurring, Real Event Blur (REBlur),
to provide ground truth for blurry images in a two-shot way. To collect
REBlur, we built an image collection system in a high-precision optical
laboratory with very stable illumination. We fixed an Insightness Seem 1 event
camera and a Dynamic and Active Pixel Vision Sensor (DAVIS) to the optical
table, outputting time-aligned event streams and $260\times 360$ gray images.
To obtain blurry-sharp image pairs under high-speed motion, we also fixed a
high-precision electronically controlled slide-rail system to the optical table.
In the first shot, we captured images with motion blur for the pattern on the
slide-rail and corresponding event streams. In the second shot, according to
the timestamp $t_{s}$ of the blurry images, we selected events within the time
range $[t_{s}-125\mu s,t_{s}+125\mu s]$ and visualized these events in the
preview of the sharp image capture program. Referring to the edge information
from high-temporal-resolution events, we could relocate the slide-rail to the
coordinate corresponding to the timestamp $t_{s}$ by an electronically controlled
stepping motor and then capture the latent sharp image. Between the two shots,
the background was kept static.
Figure 4: Distribution of different motion categories in our REBlur dataset.
To enhance the generalization of the network for different objects and moving
processes, our dataset includes 12 kinds of linear and nonlinear motions for
$3$ different moving patterns and for the camera itself, as detailed in Fig.
4. The dataset consists of $36$ sequences and $1469$ groups of blurry-sharp
image pairs with associated events, where $486$ pairs are used for training
and $983$ for testing. We also include an additional set of 4 sequences
including extreme blur, without ground truth. Please refer to the supplement
for more details on REBlur.
Table 1: Comparison of motion deblurring methods on the GoPro dataset [26].
${\dagger}$ denotes event-based methods. SRN+ and HINet+ denote event-enhanced
versions of SRN [39] and HINet [7] using our SCER.
Method | PSNR $\uparrow$ | SSIM $\uparrow$
---|---|---
DeblurGAN [18] | 28.70 (54.1%) | 0.858 (80.3%)
BHA† [27] | 29.06 (52.1%) | 0.940 (53.3%)
Nah _et al_. [26] | 29.08 (52.0%) | 0.914 (67.4%)
DeblurGAN-v2 [19] | 29.55 (49.4%) | 0.934 (57.6%)
SRN [39] | 30.26 (45.1%) | 0.934 (57.6%)
SRN+† [39] | 31.02 (40.0%) | 0.936 (56.3%)
DMPHN [48] | 31.20 (38.8%) | 0.940 (53.3%)
D2Nets† [34] | 31.60 (35.9%) | 0.940 (53.3%)
LEMD† [14] | 31.79 (34.5%) | 0.949 (45.1%)
Suin _et al_. [37] | 31.85 (34.0%) | 0.948 (46.2%)
SPAIR[30] | 32.06 (32.4%) | 0.953 (40.4%)
MPRNet [47] | 32.66 (27.6%) | 0.959 (31.7%)
HINet [7] | 32.71 (27.1%) | 0.959 (31.7%)
ERDNet† [6] | 32.99 (24.8%) | 0.935 (56.9%)
HINet+† [7] | 33.69 (18.4%) | 0.961 (28.2%)
EFNet (Ours)† | 35.46 | 0.972
## 5 Experiments
### 5.1 Datasets and Settings
GoPro dataset. We use the GoPro dataset [26], which is widely used in motion
deblurring, for training and evaluation. It consists of $3214$ pairs of blurry
and sharp images with a resolution of $1280\times 720$ and the blurred images
are produced by averaging several high-speed sharp images. We use 2103 pairs
for training and 1111 pairs for testing, following standard practice [26]. We
use ESIM [31], an open-source event camera simulator, to generate simulated
event data for GoPro. To make the results more realistic, we set the contrast
threshold $c$ randomly for each pixel, following a Gaussian distribution
$\mathit{N}(\mu=0.2,\sigma=0.03)$.
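For concreteness, the per-pixel threshold sampling can be sketched as follows;
the helper name and the clipping floor are our own, and the exact way ESIM
consumes the threshold map is not shown here.

```python
import numpy as np

def sample_contrast_thresholds(height: int, width: int, mu: float = 0.2,
                               sigma: float = 0.03, seed: int = 0) -> np.ndarray:
    """Draw one contrast threshold per pixel from N(mu, sigma); clip so that
    every threshold stays strictly positive before passing it to the simulator."""
    rng = np.random.default_rng(seed)
    c = rng.normal(loc=mu, scale=sigma, size=(height, width))
    return np.clip(c, 1e-3, None)
```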
REBlur dataset. To close the gap between simulated and real events, we fine-
tune models trained on GoPro on the training set of REBlur before evaluating
them on the test set of REBlur. More details on this fine-tuning follow.
Implementation details. Our network requires no pre-training. We train it on
$256\times 256$ crops of full images from GoPro. Full details about our
network configuration (numbers of channels, kernel sizes _etc_.) are given in
the supplement. For data augmentation, horizontal and vertical flipping,
random noise and hot pixels in event voxels [36] are applied. We use Adam [15]
with an initial learning rate of $2\times 10^{-4}$, and the cosine learning
rate strategy with a minimum learning rate of $10^{-7}$. The model is trained
with a batch size of $8$ for 300k iterations. Fine-tuning on REBlur involves
$600$ iterations, the initial learning rate is $2\times 10^{-5}$ and other
configurations are kept the same as for GoPro. We use the same training and
fine-tuning settings for our method and other methods for a fair comparison.
Evaluation protocol. All quantitative comparisons are performed using PSNR and
SSIM [42]. Apart from these, we also report the relative reduction in error of
the best-performing model for the GoPro benchmark. This is done by converting
PSNR to RMSE ($\textrm{RMSE}\propto\sqrt{10^{-\textrm{PSNR}/10}}$) and
translating SSIM to DSSIM ($\textrm{DSSIM}=(1-\textrm{SSIM})/2$).
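These two conversions are enough to reproduce the parenthesized percentages in
Tables 1 and 2. A minimal sketch (function names are ours):

```python
import math

def rel_rmse_reduction(psnr_other: float, psnr_best: float) -> float:
    """Relative RMSE reduction of the best model over another model.
    RMSE is proportional to sqrt(10**(-PSNR/10)); the constant cancels in the ratio."""
    return 1.0 - math.sqrt(10 ** (-psnr_best / 10)) / math.sqrt(10 ** (-psnr_other / 10))

def rel_dssim_reduction(ssim_other: float, ssim_best: float) -> float:
    """Relative DSSIM reduction, with DSSIM = (1 - SSIM) / 2; the factor 1/2 cancels."""
    return 1.0 - (1 - ssim_best) / (1 - ssim_other)

# E.g., DeblurGAN (28.70 dB, 0.858) vs. EFNet (35.46 dB, 0.972) in Table 1:
print(f"{rel_rmse_reduction(28.70, 35.46):.1%}")   # 54.1%
print(f"{rel_dssim_reduction(0.858, 0.972):.1%}")  # 80.3%
```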
Figure 5: Visual comparison on the GoPro dataset. SRN+ and HINet+ are event-
enhanced versions of SRN and HINet using our SCER. Compared to image-based and
event-based state-of-the-art methods, our method restores fine texture and
structural patterns better.
### 5.2 Comparisons with State-of-the-Art Methods
We compare our method with state-of-the-art image-only and event-based
deblurring methods on GoPro and REBlur. Since most learning-based methods
using events do not have publicly available implementations, in the
qualitative comparison part, apart from BHA [27], we compare our method with
SRN [39] and HINet [7], the latter being the current best model on the GoPro
benchmark. To have a fair comparison, we also include event-enhanced versions
of these two models by concatenating event voxel grids and images in the
input.
GoPro dataset. We report deblurring results in Table 1. Compared to the best
existing image-based method [7] and event-based method [6], our method
achieves improvements of 2.75 dB and 2.47 dB in PSNR and of 0.013 and 0.037 in
SSIM, respectively, with a low parameter count of 8.47M. Despite
utilizing an extra modality, other learning-based methods using events such as
D2Nets, LEMD, and ERDNet do not improve significantly upon image-only methods,
indicating that they do not take full advantage of event features. Our model
sets the new state of the art in image deblurring, showing that our principled
two-stage architecture with multi-level attentive fusion leverages event
information more effectively for this task. Note that simply adding our SCER
to HINet [7] yields an enhanced version that also surpasses the best previous
event-based method [6]. We show qualitative results on GoPro in
Fig. 5. Results from image-based methods are more blurry, losing sharp edge
information. BHA [27] restores edges better but suffers from noise around them
because of the factors described in Sec. 3.1. Learning-based methods using
events cannot fully exploit the motion information from events. By inputting
the concatenation of our SCER with the image to SRN+ and HINet+, they both
achieve large improvements. However, results from SRN+ include artifacts and
noise due to the absence of a second stage in the network that would refine
the result. HINet+ produces results with more artifacts, indicating that
simply concatenating events and images in the input is not effective. Based on
the physical model for event deblurring, EFNet achieves sharp and faithful
results. Both dominant structures and fine details are restored well thanks to
our attentive fusion at multiple levels.
Table 2: Comparison of motion deblurring methods on our REBlur dataset. The
notation is the same as in Table 1.
Method | PSNR $\uparrow$ | SSIM $\uparrow$ | Params (M) $\downarrow$
---|---|---|---
SRN [39] | 35.10 (29.4%) | 0.961 (35.9%) | 10.25
HINet [7] | 35.58 (25.4%) | 0.965 (28.6%) | 88.67
BHA† [27] | 36.52 (16.8%) | 0.964 (30.6%) | 0.51
SRN+† [39] | 36.87 (13.4%) | 0.970 (16.7%) | 10.43
HINet+† [7] | 37.68 (4.9%) | 0.973 (7.4%) | 88.85
EFNet (Ours)† | 38.12 | 0.975 | 8.47
Figure 6: Visual comparison on the REBlur dataset. The first two columns are
from the test set of the REBlur dataset, and the rest are from the additional
set, for which ground truth is not available. Our method shows superior
performance in cases with severe blur both due to object motion and due to
camera motion. Best viewed on a screen and zoomed in.
REBlur dataset. We report quantitative results on REBlur in Table 2. Our model
outperforms all other methods in this challenging real-world setting. Fig. 6
depicts qualitative results from the test set and the additional set. Even the
best image-based method, HINet, does not perform well on these severe cases of
real-world motion blur. Event-based methods are more robust to such adverse
conditions and less prone to overfitting on synthetic training data. Results
from BHA are sharper, but accumulation noise still exists. Simply adding
events with our SCER representation to the state-of-the-art image-based method
[7] improves performance significantly because of the physical basis of SCER,
but still leads to artifacts and ghost structures. Our EFNet restores both
smooth texture and sharp edges, demonstrating the utility of our two-stage
architecture and our cross-modal attention for fusion. Thanks to the selective
feature connection via EMGC, EFNet restores blurry regions well while also
maintaining the content of sharp regions. Results on more images are provided
in the supplement.
### 5.3 Ablation Study
We conduct two ablation studies on GoPro to analyze the contribution of
different components of our network (Table 3) and our event representation
(Table 4). First and foremost, our EICA fusion block fuses event and image
features effectively, improving PSNR by 0.6 dB or more and SSIM by 0.4%
compared to simple strategies for fusion such as multiplication or addition
(rows 6–8 and 10 of Table 3). Second, simply introducing middle fusion at
multiple levels and using simple strategies for fusion yields an improvement
of $\sim$1 dB in PSNR and 0.7% in SSIM over early fusion (rows 5–8),
evidencing the benefit of using multi-level fusion in our EFNet. Third, simply
adding events as input to the network via early fusion of our SCER voxel grids
with the images improves PSNR by 1.53 dB and SSIM by 0.6% (rows 3–4),
showcasing the informativeness of the event modality regarding motion, which
leads to better deblurring results. Fourth, adding a second stage in our
network to progressively restore the blurry image benefits deblurring
significantly, both in the image-only case (rows 1 and 3) and in the case
where our fully-fledged EFNet is used (rows 2 and 10). Fifth, connecting the
two stages of EFNet with our EMGC improves the selective flow of information
from the first stage to the second one, yielding an improvement of 0.15 dB
(rows 9–10). Finally, all our contributions put together yield a substantial
improvement of 6.4 dB in PSNR and 3.6% in SSIM over the image-only one-stage
baseline, setting the new state of the art in motion deblurring.
Table 3: Ablation study of various components of our method on GoPro [26].
“Early”: fusion by concatenation of event voxel grid and image, “Multi-level”:
fusion with our proposed architecture. SCER is used to represent events.
| Architecture | Events | Fusion type | EMGC | Fusion module | PSNR $\uparrow$ | SSIM $\uparrow$
---|---|---|---|---|---|---|---
1 | 1-Stage | ✗ | n/a | n/a | n/a | 29.06 | 0.936
2 | 1-Stage | ✓ | Multi-level | n/a | EICA | 34.90 | 0.968
3 | 2-Stage | ✗ | n/a | n/a | n/a | 32.15 | 0.954
4 | 2-Stage | ✓ | Early | ✗ | n/a | 33.68 | 0.960
5 | 2-Stage | ✓ | Early | ✓ | n/a | 33.79 | 0.961
6 | 2-Stage | ✓ | Multi-level | ✓ | Concat. | 34.80 | 0.968
7 | 2-Stage | ✓ | Multi-level | ✓ | Multiply | 34.86 | 0.968
8 | 2-Stage | ✓ | Multi-level | ✓ | Add | 34.78 | 0.968
9 | 2-Stage | ✓ | Multi-level | ✗ | EICA | 35.31 | 0.971
10 | 2-Stage | ✓ | Multi-level | ✓ | EICA | 35.46 | 0.972
Table 4: Comparison between different event representations on the GoPro [26]
dataset. “Stack”: temporal accumulation of events in a single channel.
Event representation | PSNR $\uparrow$ | SSIM $\uparrow$
---|---|---
None (image-only) | 32.15 | 0.954
Stack | 31.90 | 0.950
SBT [41] | 35.12 | 0.970
SCER (Ours) | 35.46 | 0.972
Event representation. Introducing events can improve performance due to the
high temporal resolution of the event stream, which provides a vital signal
for deblurring. Table 4 shows a comparison between SCER and other event
representations, including SBT [41], which accumulates polarities in fixed
time intervals. We use the same number of intervals (6) for SBT and SCER for a
fair comparison. Owing to its explicit physical grounding, SCER utilizes event
information for image deblurring (35.46 dB) more effectively than SBT (35.12
dB). Note that
simply accumulating all events across the exposure time (“Stack”) deteriorates
the performance compared to not using events at all, which demonstrates that
finding a suitable event representation for deblurring, such as SCER, is non-
trivial.
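To make the comparison concrete, the sketch below contrasts the “Stack”
baseline with an SBT-style grid of six temporal bins. It assumes events arrive
as a NumPy structured array with fields t, x, y, p (our convention), and it
deliberately omits SCER's physically motivated weighting, which is defined
earlier in the paper.

```python
import numpy as np

def stack_events(events: np.ndarray, height: int, width: int) -> np.ndarray:
    """'Stack' baseline: sum all polarities over the exposure into one channel
    (shown in Table 4 to perform worse than using no events at all)."""
    img = np.zeros((height, width), dtype=np.float32)
    np.add.at(img, (events["y"], events["x"]), events["p"])  # p in {-1, +1}
    return img

def binned_events(events: np.ndarray, height: int, width: int, n_bins: int = 6) -> np.ndarray:
    """SBT-style voxel grid: sum polarities within n_bins fixed time intervals,
    preserving the temporal ordering that the single-channel stack destroys."""
    t = events["t"].astype(np.float64)
    span = t.max() - t.min() + 1e-9
    bins = np.clip(((t - t.min()) / span * n_bins).astype(int), 0, n_bins - 1)
    vox = np.zeros((n_bins, height, width), dtype=np.float32)
    np.add.at(vox, (bins, events["y"], events["x"]), events["p"])
    return vox
```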
## 6 Conclusion
In this work, we have looked into single image motion deblurring from the
perspective of event-based fusion. Based on the common physical model which
describes both blurry image formation and event generation, we have introduced
EFNet, an end-to-end motion deblurring network with an attention-based event-
image fusion module applied at multiple levels of the network. In addition, we
have proposed a novel event voxel representation to best utilize events for
deblurring. We have captured a new real-world dataset, REBlur, including
several cases of severe motion blur, which provides a challenging evaluation
setting. Our EFNet significantly surpasses the prior state of the art in image
deblurring, both on the GoPro dataset and on our new REBlur dataset.
Acknowledgements. This work was partly supported by China Scholarship Council
and Sunny Optical Technology (Group) Co., Ltd.
## References
* [1] Ahad, M.A.R., Tan, J.K., Kim, H., Ishikawa, S.: Motion history image: its variants and applications. Machine Vision and Applications (2012)
* [2] Bahat, Y., Efrat, N., Irani, M.: Non-uniform blind deblurring by reblurring. In: ICCV (2017)
* [3] Baldwin, R., Almatrafi, M., Asari, V., Hirakawa, K.: Event probability mask (EPM) and event denoising convolutional neural network (EDnCNN) for neuromorphic cameras. In: CVPR (2020)
* [4] Bardow, P., Davison, A.J., Leutenegger, S.: Simultaneous optical flow and intensity estimation from an event camera. In: CVPR (2016)
* [5] Brandli, C., Berner, R., Yang, M., Liu, S.C., Delbruck, T.: A 240 × 180 130 dB 3 $\mu$s latency global shutter spatiotemporal vision sensor. IEEE Journal of Solid-State Circuits (2014)
* [6] Chen, H., Teng, M., Shi, B., Wang, Y., Huang, T.: Learning to deblur and generate high frame rate video with an event camera. arXiv preprint arXiv:2003.00847 (2020)
* [7] Chen, L., Lu, X., Zhang, J., Chu, X., Chen, C.: HINet: Half instance normalization network for image restoration. In: CVPRW (2021)
* [8] Cho, S.J., Ji, S.W., Hong, J.P., Jung, S.W., Ko, S.J.: Rethinking coarse-to-fine approach in single image deblurring. In: ICCV (2021)
* [9] Cho, S., Lee, S.: Fast motion deblurring. ACM Transactions on Graphics (2009)
* [10] Fergus, R., Singh, B., Hertzmann, A., Roweis, S.T., Freeman, W.T.: Removing camera shake from a single photograph. ACM Transactions on Graphics (2006)
* [11] Gallego, G., Delbruck, T., Orchard, G.M., Bartolozzi, C., Taba, B., Censi, A., Leutenegger, S., Davison, A., Conradt, J., Daniilidis, K., et al.: Event-based vision: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence (2020)
* [12] Gong, D., Yang, J., Liu, L., Zhang, Y., Reid, I., Shen, C., Van Den Hengel, A., Shi, Q.: From motion blur to motion flow: A deep learning solution for removing heterogeneous motion blur. In: CVPR (2017)
* [13] Hendrycks, D., Gimpel, K.: Gaussian error linear units (gelus). arXiv preprint arXiv:1606.08415 (2016)
* [14] Jiang, Z., Zhang, Y., Zou, D., Ren, J., Lv, J., Liu, Y.: Learning event-based motion deblurring. In: CVPR (2020)
* [15] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: ICLR (2015)
* [16] Kotera, J., Šroubek, F., Milanfar, P.: Blind deconvolution using alternating maximum a posteriori estimation with heavy-tailed priors. In: CAIP (2013)
* [17] Krishnan, D., Tay, T., Fergus, R.: Blind deconvolution using a normalized sparsity measure. In: CVPR (2011)
* [18] Kupyn, O., Budzan, V., Mykhailych, M., Mishkin, D., Matas, J.: DeblurGAN: Blind motion deblurring using conditional adversarial networks. In: CVPR (2018)
* [19] Kupyn, O., Martyniuk, T., Wu, J., Wang, Z.: DeblurGAN-v2: Deblurring (orders-of-magnitude) faster and better. In: ICCV (2019)
* [20] Lagorce, X., Orchard, G., Galluppi, F., Shi, B.E., Benosman, R.B.: HOTS: A hierarchy of event-based time-surfaces for pattern recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence (2017)
* [21] Levin, A., Weiss, Y., Durand, F., Freeman, W.T.: Understanding and evaluating blind deconvolution algorithms. In: CVPR (2009)
* [22] Levin, A., Weiss, Y., Durand, F., Freeman, W.T.: Efficient marginal likelihood optimization in blind deconvolution. In: CVPR (2011)
* [23] Lin, S., Zhang, J., Pan, J., Jiang, Z., Zou, D., Wang, Y., Chen, J., Ren, J.: Learning event-driven video deblurring and interpolation. In: ECCV (2020)
* [24] Liu, M., Delbruck, T.: Adaptive time-slice block-matching optical flow algorithm for dynamic vision sensors. In: BMVC (2018)
* [25] Maqueda, A.I., Loquercio, A., Gallego, G., García, N., Scaramuzza, D.: Event-based vision meets deep learning on steering prediction for self-driving cars. In: CVPR (2018)
* [26] Nah, S., Hyun Kim, T., Mu Lee, K.: Deep multi-scale convolutional neural network for dynamic scene deblurring. In: CVPR (2017)
* [27] Pan, L., Scheerlinck, C., Yu, X., Hartley, R., Liu, M., Dai, Y.: Bringing a blurry frame alive at high frame-rate with an event camera. In: CVPR (2019)
* [28] Paredes-Vallés, F., Scheper, K.Y.W., de Croon, G.C.H.E.: Unsupervised learning of a hierarchical spiking neural network for optical flow estimation: From events to global motion perception. IEEE Transactions on Pattern Analysis and Machine Intelligence (2020)
* [29] Lichtsteiner, P., Posch, C., Delbruck, T.: A 128×128 120 dB 15 $\mu$s latency asynchronous temporal contrast vision sensor. IEEE Journal of Solid-State Circuits (2008)
* [30] Purohit, K., Suin, M., Rajagopalan, A.N., Boddeti, V.N.: Spatially-adaptive image restoration using distortion-guided networks. In: ICCV (2021)
* [31] Rebecq, H., Gehrig, D., Scaramuzza, D.: ESIM: an open event camera simulator. In: CoRL (2018)
* [32] Ronneberger, O., Fischer, P., Brox, T.: U-Net: Convolutional networks for biomedical image segmentation. In: MICCAI (2015)
* [33] Scheerlinck, C., Barnes, N., Mahony, R.: Continuous-time intensity estimation using event cameras. In: ACCV (2018)
* [34] Shang, W., Ren, D., Zou, D., Ren, J.S., Luo, P., Zuo, W.: Bringing events into video deblurring with non-consecutively blurry frames. In: ICCV (2021)
* [35] Sironi, A., Brambilla, M., Bourdis, N., Lagorce, X., Benosman, R.: HATS: Histograms of averaged time surfaces for robust event-based object classification. In: CVPR (2018)
* [36] Stoffregen, T., Scheerlinck, C., Scaramuzza, D., Drummond, T., Barnes, N., Kleeman, L., Mahony, R.: Reducing the sim-to-real gap for event cameras. In: ECCV (2020)
* [37] Suin, M., Purohit, K., Rajagopalan, A.N.: Spatially-attentive patch-hierarchical network for adaptive motion deblurring. In: CVPR (2020)
* [38] Sun, J., Cao, W., Xu, Z., Ponce, J.: Learning a convolutional neural network for non-uniform motion blur removal. In: CVPR (2015)
* [39] Tao, X., Gao, H., Shen, X., Wang, J., Jia, J.: Scale-recurrent network for deep image deblurring. In: CVPR (2018)
* [40] Tsai, F.J., Peng, Y.T., Lin, Y.Y., Tsai, C.C., Lin, C.W.: BANet: Blur-aware attention networks for dynamic scene deblurring. arXiv preprint arXiv:2101.07518 (2021)
* [41] Wang, L., Mostafavi I., S.M., Ho, Y.S., Yoon, K.J.: Event-based high dynamic range image and very high frame rate video generation using conditional generative adversarial networks. In: CVPR (2019)
* [42] Wang, Z., Bovik, A.C., Sheikh, H.R., Simoncelli, E.P.: Image quality assessment: from error visibility to structural similarity. IEEE Transactions on Image Processing (2004)
* [43] Wang, Z., Ng, Y., van Goor, P., Mahony, R.: Event camera calibration of per-pixel biased contrast threshold. In: ACRA (2019)
* [44] Weikersdorfer, D., Conradt, J.: Event-based particle filtering for robot self-localization. In: ROBIO (2012)
* [45] Xu, F., Yu, L., Wang, B., Yang, W., Xia, G.S., Jia, X., Qiao, Z., Liu, J.: Motion deblurring with real events. In: ICCV (2021)
* [46] Xu, L., Zheng, S., Jia, J.: Unnatural L0 sparse representation for natural image deblurring. In: CVPR (2013)
* [47] Zamir, S.W., Arora, A., Khan, S., Hayat, M., Khan, F.S., Yang, M.H., Shao, L.: Multi-stage progressive image restoration. In: CVPR (2021)
* [48] Zhang, H., Dai, Y., Li, H., Koniusz, P.: Deep stacked hierarchical multi-patch network for image deblurring. In: CVPR (2019)
* [49] Zhang, J., Pan, J., Ren, J., Song, Y., Bao, L., Lau, R.W.H., Yang, M.H.: Dynamic scene deblurring using spatially variant recurrent neural networks. In: CVPR (2018)
* [50] Zhang, K., Zuo, W., Chen, Y., Meng, D., Zhang, L.: Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Transactions on Image Processing (2017)
* [51] Zhou, S., Zhang, J., Pan, J., Xie, H., Zuo, W., Ren, J.: Spatio-temporal filter adaptive network for video deblurring. In: ICCV (2019)
* [52] Zhu, A.Z., Yuan, L., Chaney, K., Daniilidis, K.: Unsupervised event-based learning of optical flow, depth, and egomotion. In: CVPR (2019)
# When do Generative Query and Document Expansions Fail?
A Comprehensive Study Across Methods, Retrievers, and Datasets
Orion Weller∗ι  Kyle Loα  David Waddenα  Dawn Lawrieι
Benjamin Van Durmeι  Arman Cohanγα  Luca Soldainiα
ιJohns Hopkins University  αAllen Institute for AI  γYale University
<EMAIL_ADDRESS> {kylel<EMAIL_ADDRESS>
###### Abstract
Using large language models (LMs) for query or document expansion can improve
generalization in information retrieval. However, it is unknown whether these
techniques are universally beneficial or only effective in specific settings,
such as for particular retrieval models, dataset domains, or query types. To
answer this, we conduct the first comprehensive analysis of LM-based
expansion. We find that there exists a strong negative correlation between
retriever performance and gains from expansion: expansion improves scores for
weaker models, but generally harms stronger models. We show this trend holds
across a set of eleven expansion techniques, twelve datasets with diverse
distribution shifts, and twenty-four retrieval models. Through qualitative
error analysis, we hypothesize that although expansions provide extra
information (potentially improving recall), they add additional noise that
makes it difficult to discern between the top relevant documents (thus
introducing false positives). Our results suggest the following recipe: use
expansions for weaker models or when the target dataset significantly differs
from the training corpus in format; otherwise, avoid expansions to keep the
relevance signal clear.¹

¹ Code and data are available at https://github.com/orionw/LM-expansions
## 1 Introduction
∗ Work performed during internship at AI2.
Neural information retrieval (IR) systems routinely achieve state-of-the-art
performance on tasks where labeled data is abundant Karpukhin et al. (2020);
Yates et al. (2021). When limited or no data is available, neural models fine-
tuned on data-rich domains are used in a zero-shot manner Thakur et al.
(2021); Rosa et al. (2022b). However, shifts in the distribution of queries
and documents can negatively impact their performance Lupart et al. (2023).
Figure 1: Methods like query expansion and document expansion typically
improve performance when used with weaker models but not for stronger models;
more accurate models generally lose relevance signal when expansions are
provided. Best expansion and model results taken from those in Table 1.
To mitigate this effect, large-scale Language Models (LMs) can be used to
expand queries or documents from unseen domains Gao et al. (2022); Wang et al.
(2023a); Dai et al. (2022); Jeronymo et al. (2023); Jagerman et al. (2023).
These methods generally work by providing either the original documents or
queries to the LM, which then generates additional expanded information to
facilitate relevance matching. For example, HyDE Gao et al. (2022) uses an LM
to generate a fictitious relevant document for a user query; the document is
then used alongside the user query to retrieve similar, and thus hopefully
relevant, real documents. As another example, Doc2Query Nogueira et al.
(2019c) uses an LM to generate likely queries for documents in the collection;
these queries are appended to the documents to increase the likelihood that
they match real user queries. As the LMs doing the expansion are typically
slower but more capable than the ranking models, they can provide additional
context and connections that the IR models cannot (e.g., specialized
vocabulary). This property is particularly desirable when ranking models
are used in unseen domains, as LMs can help close distribution shift gaps.
Although many works have shown that LM-based expansions provide improvements,
proposed approaches are generally tested on only a small subset of retrieval
techniques, such as small bi-encoder models or BM25 Gao et al. (2022);
Jagerman et al. (2023); Wang et al. (2023a). Further, as new models continue
to be developed in IR and natural language processing (NLP), there is a
pressing need to comprehensively analyze the relationship between expansion
techniques, ranking models, and distribution shifts. We seek to fill this gap
and aim to answer the following questions:
##### RQ1: How do different models impact query and document expansion (§ 3)?
Across all types of IR models and architectures, performance is negatively
correlated with gains from expansion: after a certain score threshold these
expansions generally hurt performance (as they blur the relevance signal from
the original documents).
##### RQ2: How do different distribution shifts impact these results (§ 4)?
Our main results hold for all types of shift – better models are harmed by
expansion – except for long query shift, where expansions generally help most
or all models.
##### RQ3: Why do expansions hurt stronger IR models (§ 5)?
We find that query and document expansions change the keywords that the
retrieval models focus on, obscuring the relevance signal of the original
texts.
Overall, this work aims at answering the following question: when should one
use LM-based expansions? Through our investigation, we provide evidence to
help practitioners answer this question. Our results run counter to the common
intuition that query and document expansion are helpful techniques in all
cases; instead, they show that LM expansions generally benefit weaker rankers,
but hurt more accurate rankers. Further, analysis over twelve datasets shows
that whether a given model benefits from expansion varies dramatically
depending on task; datasets with significant distributional shifts (e.g., very
long queries) are more likely to benefit from expansion.
## 2 Experimental Settings
In this section, we provide an overview of document and query expansion
methods used in the remainder of the manuscript, as well as key aspects of our
experimental setup.
We choose expansion techniques according to two criteria: (i) their overall
performance, as claimed in the paper introducing them, and (ii) their
applicability to a large set of retrieval models. We note that there exist
more specific expansion techniques for particular architectures, such as
ColBERT PRF Wang et al. (2023d, b). However, for generality we use text-based
expansions from LMs only and avoid model-specific techniques.
We generate expansions from gpt-3.5-turbo,² as it is inexpensive and shows
strong performance in previous work Wang et al. (2023a); Jagerman et al.
(2023). Since using LMs to generate expansions for large collections would be
prohibitive, we restrict our expansions to the reranking setting, i.e., the
top 100 documents per query retrieved by BM25, following Asai et al. (2022).³

² We use version gpt-3.5-turbo-0613. To show that our results generalize
beyond this specific language model, we include results using alternative LMs
(such as gpt-4-0613) in Appendix A that show the same conclusion. Prompts and
example input/output can be found in Appendices D and C. We also explore the
placement of these augmentations (should we prepend/append/replace the
original query?) in Appendix B and show that this also makes little
difference.

³ Using gpt-3.5-turbo for just Doc2Query on the MSMarco collection would cost
roughly $4,000 USD (8 million docs at 250 tokens each) as of September 2023.
Thus we adopt the reranking setting (top 100 docs per query) in order to
evaluate on many datasets.
Figure 2: Effect of expansion over twelve datasets. For each dataset, markers
show base performance for models, while the boxplot indicates the range of
changes in scores for document and/or query expansion. Across all datasets and
models, we note a consistent trend: models with lower base performance benefit
from expansion; higher performing rankers generally suffer when expansion
techniques are used.
| DL Track 2019 | FiQA | Arguana
---|---|---|---
Type | Model | Base | QE | DE | Both | Base | QE | DE | Both | Base | QE | DE | Both
First Stage | DPR | 38.4 | +6.6 | +3.1 | +10.8 | 14.4 | +4.7 | +1.7 | +5.7 | 34.9 | -7.1 | +1.6 | -4.4
Contriever | 49.0 | +3.5 | +4.0 | +8.1 | 21.3 | +3.6 | +1.6 | +5.1 | 45.8 | -0.1 | +2.9 | -3.2
Contriever FT | 62.3 | +1.6 | -0.2 | +0.6 | 29.6 | +3.2 | +0.6 | +3.8 | 48.8 | -3.6 | +2.0 | -2.5
E5 Base v2 | 67.3 | -3.4 | -0.9 | -3.7 | 37.8 | -0.6 | -3.8 | -2.5 | 51.1 | -8.4 | +2.6 | -5.7
MPNet Base v2 | 68.3 | -6.0 | -2.9 | -6.8 | 44.5 | -4.1 | -3.5 | -5.7 | 47.6 | -5.1 | +5.3 | -0.7
E5 Small v2 | 69.1 | -4.8 | -1.9 | -6.8 | 36.4 | +0.4 | -2.9 | -0.6 | 46.1 | -8.7 | +2.7 | -9.8
GTE Large | 70.0 | -4.5 | -1.3 | -4.5 | 41.2 | -2.0 | -4.1 | -3.2 | 56.8 | -8.8 | -0.9 | -9.0
E5 Large v2 | 70.1 | -5.7 | -1.7 | -7.6 | 38.6 | -0.9 | -2.7 | -3.2 | 48.9 | -5.9 | +3.2 | -3.4
Rerankers | MonoT5-Small | 66.6 | -2.0 | -2.8 | -2.8 | 34.3 | +0.1 | -0.6 | -0.3 | 21.1 | +22.7 | -3.0 | +22.2
MiniLM-2-v2 | 68.0 | -3.2 | -4.1 | -5.1 | 27.5 | -2.0 | +0.6 | -15.8 | 15.2 | +11.4 | +10.8 | +11.2
SPLADEv2 | 70.1 | -4.3 | -3.7 | -5.6 | 33.4 | +1.3 | -0.2 | +1.2 | 45.0 | -4.5 | -1.3 | -4.0
MonoBERT | 70.4 | -4.6 | -2.0 | -4.8 | 36.2 | +0.2 | -0.7 | +0.0 | 50.1 | -5.7 | +2.5 | -9.3
MiniLM-4-v2 | 70.6 | -3.0 | -2.5 | -4.9 | 33.8 | +1.5 | -0.3 | +1.2 | 43.4 | +0.4 | +1.0 | -0.8
MonoT5-Base | 71.5 | -3.2 | -1.4 | -5.2 | 39.2 | -1.2 | -1.2 | -0.9 | 27.0 | +20.0 | +0.7 | +18.7
MonoT5-3B | 71.7 | -2.8 | -2.0 | -5.0 | 45.9 | -3.8 | -3.2 | -5.6 | 42.4 | +6.8 | -1.9 | +5.2
ColBERTv2 | 71.8 | -4.2 | -2.8 | -6.4 | 33.8 | -0.4 | -0.3 | -0.7 | 47.4 | -5.2 | -0.6 | -4.8
| MiniLM-12-v2 | 72.0 | -4.3 | -4.5 | -5.6 | 35.5 | -0.4 | -0.5 | +0.0 | 33.2 | +12.0 | +1.1 | +9.8
| MonoT5-Large | 72.2 | -4.0 | -1.8 | -5.6 | 42.8 | -2.3 | -2.3 | -3.1 | 31.2 | +14.8 | -2.0 | +14.8
| LLAMA | 72.6 | -2.9 | -4.9 | -7.7 | 40.0 | -3.7 | -4.9 | -5.8 | 52.6 | -3.9 | -6.9 | -9.4
| LLAMAv2 | 72.8 | -4.2 | -4.9 | -9.3 | 41.1 | -3.6 | -7.4 | -7.9 | 52.3 | -1.5 | -8.2 | -7.0
| LLAMAv2-13B | 73.6 | -4.5 | -5.4 | -7.3 | 41.2 | -4.5 | -4.9 | -7.0 | 49.4 | -2.1 | -6.0 | -4.9
Table 1: Results for the best expansion strategies across different models. QE
stands for query expansion (Q-LM PRF), DE for document expansion (Doc2Query),
and Both for the combination (Q-LM PRF + Doc2Query). Colors indicate a
positive or negative delta from the non-augmented base score. Notice that
models with higher base scores are generally harmed by expansions while weaker
models benefit from them.
### 2.1 Query Expansion
We use three types of query expansion, selecting the best methods from
previous work. We note that although there are infinite strategies for
prompting LMs to develop terms for search, these three provide the strongest
candidates from the literature.
##### HyDE from Gao et al. (2022)
HyDE provides task-specific instructions for the LM to generate a document
that would answer the question. We use the prompts from their work when
available.⁴

⁴ We use similarly styled prompts for datasets not evaluated in the original
HyDE paper. We also append a phrase asking ChatGPT to be concise, to match the
original HyDE method, which used the much more concise Davinci-003 model (see
Appendix D for the full text of the prompts).
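Since we use text-based expansions throughout, HyDE reduces to a short
generate-and-append step. The sketch below is our own illustration, assuming a
placeholder `llm(prompt) -> str` helper; the actual prompts are listed in
Appendix D.

```python
def hyde_expand(question: str, llm) -> str:
    """Generate a hypothetical answer passage and attach it to the query."""
    prompt = ("Please write a concise passage that answers the question.\n"
              f"Question: {question}\nPassage:")
    passage = llm(prompt)
    # Placement (prepend/append/replace) makes little difference; see Appendix B.
    return f"{question} {passage}"
```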
##### Chain of Thought from Wang et al. (2023a)
Chain of Thought (CoT) for query expansion was inspired by Wei et al. (2022)
and asks the model to reason before giving the answer. As the reasoning
includes relevant information to the query, this additional text is used as
the query expansion. Similar techniques have been shown to be effective in
multiple works (Jagerman et al., 2023; Wang et al., 2023a; Trivedi et al.,
2022).
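A sketch of the CoT variant under the same assumed `llm` helper; the exact
prompt wording here is illustrative, not the one from Appendix D.

```python
def cot_expand(question: str, llm) -> str:
    """Ask the model to reason before answering and use the reasoning as expansion."""
    prompt = (f"Answer the following query: {question}\n"
              "Give the rationale before answering.")
    reasoning = llm(prompt)
    return f"{question} {reasoning}"
```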
##### LM-based Pseudo Relevance Feedback (Q-LM PRF)
PRF is a classical technique that shows retrieved documents to the model doing
the expansion. We provide the top 3 relevant documents found using a bi-
encoder model (Contriever) to the LM. It produces a list of expansion terms
and then updates the original question to include those terms in a new fluent
question. LM-aided PRF has been shown broadly effective Mackie et al. (2023);
Jagerman et al. (2023); Wang et al. (2023c).
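The Q-LM PRF loop can be sketched as below, assuming a placeholder
`retrieve(query, k)` for the first-pass bi-encoder (Contriever in our setup)
and the same `llm` helper; both names are ours.

```python
def qlm_prf_expand(question: str, retrieve, llm, k: int = 3) -> str:
    """LM-based pseudo-relevance feedback: rewrite the query using the top-k
    first-pass documents as context."""
    docs = retrieve(question, k=k)
    context = "\n\n".join(docs)
    prompt = (f"Query: {question}\n\nRetrieved documents:\n{context}\n\n"
              "List expansion terms relevant to the query, then rewrite the "
              "query as a single fluent question that includes those terms.")
    return llm(prompt)
```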
Axis | Dataset | # Queries | # Documents | Avg. D / Q | Q Len | D Len
---|---|---|---|---|---|---
In-Domain | TREC DL Track 2019 Craswell et al. (2020) | 43 | 8,841,823 | 212.5 | 5.4 | 56.6
TREC DL Track 2020 Craswell et al. (2021) | 54 | 8,841,823 | 207.9 | 6.0 | 56.6
Domain Shift | FiQA-2018 Maia et al. (2018) | 648 | 57,600 | 2.6 | 10.9 | 137.4
Gooaq Technical Khashabi et al. (2021) | 1,000 | 4,086 | 1.0 | 8.3 | 44.5
NFCorpus Boteva et al. (2016) | 323 | 3,633 | 38.2 | 3.3 | 233.5
Relevance Shift | Touché-2020 Bondarenko et al. (2020) | 49 | 382,545 | 19.0 | 6.6 | 293.7
SciFact Refute Wadden et al. (2020) | 64 | 5,183 | 1.2 | 12.1 | 214.8
Long Query Shift | Tip of My Tongue Lin et al. (2023) | 2,272 | 1,877 | 1.0 | 144.3 | 100.5
TREC Clinical Trials ’21 Roberts et al. (2021) | 75 | 375,580 | 348.8 | 133.3 | 919.5
ArguAna Wachsmuth et al. (2018) | 1,406 | 8,674 | 1.0 | 197.1 | 170.3
Short Doc Shift | WikiQA Yang et al. (2015) | 369 | 26,196 | 1.2 | 6.3 | 25.1
Quora Iyer et al. (2017) | 10,000 | 522,931 | 1.6 | 9.5 | 12.5
Table 2: Statistics of datasets used by type of generalization shift. Avg. D/Q indicates the number of relevant documents per query. Length is measured in words. The TREC DL Track uses MSMarco data Nguyen et al. (2016).
| DL 2019 Track | DL 2020 Track
---|---|---
Type | Model | DPR | Contriever FT | MonoT5-3B | DPR | Contriever FT | MonoT5-3B
$-$ | Base | 38.4 | 62.3 | 71.2 | 39.2 | 57.5 | 68.3
Query | HyDE | +18.8 | +9.3 | -4.0 | +13.2 | +7.4 | -5.8
CoT | +12.6 | +2.7 | -6.7 | +5.5 | +4.2 | -9.3
Q-LM PRF | +6.6 | +1.6 | -2.2 | +6.3 | +2.7 | -3.0
Doc | D2Q | +3.1 | -0.2 | -1.2 | +3.1 | +1.3 | -1.9
D-LM PRF | -1.1 | -15.5 | -23.6 | -2.6 | -9.1 | -19.3
Both | HyDE + D2Q | +21.9 | +9.0 | -4.5 | +15.0 | +6.2 | -5.4
CoT + D2Q | +15.1 | +0.8 | -7.3 | +7.2 | +4.2 | -8.1
Q-LM PRF + D2Q | +10.8 | +0.6 | -4.2 | +8.1 | +3.7 | -3.3
HyDE + D-LM PRF | +16.7 | -3.1 | -22.8 | +11.4 | +1.2 | -17.9
CoT + D-LM PRF | +10.9 | -10.9 | -25.0 | +4.1 | -4.4 | -21.8
Q+D LM PRF | +6.8 | -5.6 | -14.4 | +4.5 | -2.4 | -11.8
Table 3: In-Domain performance on the TREC Deep Learning Tracks, according to
various types of expansions, showing that expansion typically helps weaker
models (like DPR) but hurts stronger models (especially large reranker models
like MonoT5-3B). Colors indicate a positive or negative delta from the non-
augmented base score.
### 2.2 Document Expansion
##### Doc2Query
There are fewer widespread LM document expansion techniques, with the main one
being Doc2Query Nogueira et al. (2019c). Prior work has found that improving the
question generation model results in higher scores, hence we use ChatGPT
instead of T5 for our experiments Nogueira et al. (2019a). See Appendix A for
results using alternative LMs for document expansion.
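A sketch of the Doc2Query step with an instruction-following LM in place of
T5; the prompt and function name are illustrative, under the same assumed
`llm` helper.

```python
def doc2query_expand(document: str, llm, n_queries: int = 5) -> str:
    """Generate likely queries for a document and append them to its text."""
    prompt = (f"Document:\n{document}\n\n"
              f"Write {n_queries} search queries that this document answers, "
              "one per line.")
    queries = llm(prompt)
    return f"{document}\n{queries}"
```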
| FiQA-2018 | GooAQ Technical | NFCorpus
---|---|---|---
Type | Model | DPR | Contriever FT | MonoT5-3B | DPR | Contriever FT | MonoT5-3B | DPR | Contriever FT | MonoT5-3B
$-$ | Base | 14.4 | 29.6 | 45.9 | 42.5 | 71.0 | 80.2 | 24.1 | 34.6 | 39.1
Query | HyDE | +3.6 | -0.3 | -14.7 | +3.1 | +3.8 | -10.0 | +0.3 | +0.0 | -5.9
CoT | +3.6 | +0.4 | -13.2 | +2.0 | +2.1 | -9.7 | -0.7 | -0.6 | -4.5
Q-LM PRF | +4.7 | +3.2 | -3.8 | +6.4 | +1.9 | -3.4 | +0.2 | -0.4 | -2.7
Doc | D2Q | +1.7 | +0.6 | -3.2 | +6.4 | +3.0 | -1.1 | +1.3 | +0.6 | -0.5
D-LM PRF | +3.3 | +1.6 | -12.5 | +3.8 | +0.6 | -11.4 | +0.3 | -0.3 | -0.7
Both | HyDE + D2Q | +4.5 | +0.4 | -14.8 | +8.2 | +5.2 | -7.4 | +1.6 | +0.1 | -7.2
CoT + D2Q | +4.4 | +0.2 | -13.4 | +7.2 | +3.8 | -6.9 | +0.8 | +0.0 | -5.6
Q-LM PRF + D2Q | +5.7 | +3.8 | -5.6 | +10.9 | +4.2 | -4.1 | +1.4 | -0.1 | -3.0
HyDE + D-LM PRF | +5.8 | +1.2 | -14.8 | +5.3 | +2.7 | -14.2 | +0.8 | +0.1 | -6.3
CoT + D-LM PRF | +6.2 | +1.7 | -14.9 | +3.6 | +1.9 | -13.6 | -0.1 | -0.2 | -4.2
Q+D LM PRF | +7.3 | +4.6 | -8.4 | +7.9 | +3.5 | -6.4 | +0.2 | +0.0 | -2.8
Table 4: How different expansions affect results on datasets that measure Domain Shift. Colors indicate a positive or negative delta from the non-augmented base score. Notice that models with higher base scores are generally harmed by expansions while weaker models benefit from them.
| Touche-2020 | Scifact-Refute
---|---|---
Type | Model | DPR | Contriever FT | MonoT5-3B | DPR | Contriever FT | MonoT5-3B
$-$ | Base | 23.0 | 24.8 | 32.6 | 33.9 | 76.4 | 82.1
Query | HyDE | -0.3 | +4.8 | -5.9 | -9.1 | -0.9 | -12.3
CoT | +0.3 | +5.1 | -7.4 | -7.6 | +0.3 | -8.8
Q-LM PRF | +0.6 | +3.9 | -1.3 | +6.5 | +1.1 | -1.7
Doc | D2Q | -0.2 | +0.0 | -0.9 | +2.0 | -1.8 | +0.9
D-LM PRF | -0.2 | -1.2 | -8.3 | +2.5 | -4.6 | -16.5
Both | HyDE + D2Q | -0.1 | +5.0 | -3.0 | -6.1 | -1.0 | -16.6
CoT + D2Q | +0.3 | +2.6 | -5.4 | -6.5 | -1.1 | -16.9
Q-LM PRF + D2Q | -0.1 | +1.0 | -2.0 | +9.1 | +1.3 | -1.1
HyDE + D-LM PRF | +0.5 | +1.4 | -10.1 | -5.2 | -2.9 | -17.6
CoT + D-LM PRF | -0.2 | +0.8 | -8.4 | -7.2 | -1.5 | -19.3
Q+D LM PRF | +0.3 | +2.5 | -2.7 | +7.6 | -2.5 | -4.0
Table 5: How different expansions affect results on datasets that measure
Relevance Shift.
##### LM-based Document PRF (D-LM PRF)
Similar to the Q-LM PRF technique above, we propose a document expansion that
draws pseudo-relevance from related queries instead of related documents. In
this setting, where there exists a set of unjudged user queries, we show the
LM the top 5 relevant queries and ask it to expand the original document to
better answer them.
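A sketch of D-LM PRF under the same assumptions; how the set of unjudged
related queries is obtained is left abstract here.

```python
def dlm_prf_expand(document: str, related_queries: list[str], llm) -> str:
    """Expand a document so it better answers its top related user queries."""
    queries = "\n".join(related_queries[:5])
    prompt = (f"Document:\n{document}\n\nRelated user queries:\n{queries}\n\n"
              "Expand the document so that it better answers these queries "
              "while keeping its original content.")
    return llm(prompt)
```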
## 3 RQ1: How do different models impact query and document expansion?
##### Experimental Setting
To understand the effects of different models on the helpfulness of LM-based
expansions, we employ a wide variety of models from all major IR
architectures: DPR Karpukhin et al. (2020), ColBERT v2 Santhanam et al.
(2022), SPLADE v2 Formal et al. (2021a), MonoBERT Nogueira et al. (2019b), the
MonoT5 family of models Nogueira et al. (2020), the E5 family of models Wang
et al. (2022b), GTE Li et al. (2023), several MiniLM models with varying sizes
Wang et al. (2020), all-mpnet-v2-base Reimers and Gurevych (2019), and Llama
models Touvron et al. (2023a, b) that we fine-tune on MSMarco.⁵

⁵ Model information and weights are available at
https://github.com/orionw/LLM-expansions/llama_for_ranking.md.
Due to the large number of model-dataset combinations, we evaluate all
models on three representative datasets in Table 1 (see § 5 for details on
datasets and types of generalization) and use five representative models (DPR,
Contriever, ColBERTv2, MonoT5-small, and MonoT5-3B) on a larger suite of
datasets (see Figure 2).
We show results in comparison to the “base” version (colored grey), i.e., the
version without any expansion. Values above zero (i.e., greater than the no-
expansion version) are colored blue while values below the base are colored
red. Colors are scaled linearly according to the difference between the base
value and the min/max (i.e., the worst value in the column will be the max
red, while the best value will be max blue, all others will be shaded in
between).
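The shading rule amounts to the following arithmetic (a sketch; the function
name is ours):

```python
def cell_intensity(delta: float, column_deltas: list[float]) -> float:
    """Linear shade for one cell: 0.0 at no change from base, 1.0 at the most
    extreme gain (max blue) or loss (max red) within the column."""
    if delta > 0:
        return delta / max(d for d in column_deltas if d > 0)
    if delta < 0:
        return delta / min(d for d in column_deltas if d < 0)
    return 0.0
```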
| Tip of My Tongue | TREC CT 2021 | Arguana
---|---|---|---
Type | Model | DPR | Contriever FT | MonoT5-3B | DPR | Contriever FT | MonoT5-3B | DPR | Contriever FT | MonoT5-3B
| Base | 13.4 | 38.3 | 39.5 | 16.4 | 26.7 | 25.8 | 34.9 | 48.8 | 40.6
Query | HyDE | +3.0 | -9.4 | -26.8 | +0.3 | +2.1 | +4.2 | -4.5 | -5.4 | +15.8
CoT | +2.1 | -9.5 | -23.3 | +2.3 | +3.0 | +3.0 | -5.8 | -5.3 | +11.3
Q-LM PRF | -2.9 | -1.9 | +6.4 | +2.2 | +0.6 | -0.1 | -7.1 | -3.6 | +8.3
Doc | D2Q | +1.6 | -3.2 | -8.5 | +0.3 | -1.3 | -1.8 | +1.6 | +2.0 | -2.1
D-LM PRF | +5.5 | +2.9 | +0.9 | -0.7 | -0.9 | +0.6 | +2.3 | +3.5 | -2.5
Both | HyDE + D2Q | +3.6 | -10.7 | -29.7 | +0.4 | +2.1 | +2.7 | -2.8 | -2.5 | +12.9
CoT + D2Q | +2.2 | -10.6 | -25.3 | +2.3 | +1.5 | -0.1 | -4.3 | -3.0 | +10.6
Q-LM PRF + D2Q | -1.8 | -4.7 | +2.1 | +0.7 | -0.9 | -0.2 | -4.4 | -2.5 | +6.9
HyDE + D-LM PRF | +6.0 | -7.2 | -32.6 | +0.0 | +1.0 | +3.2 | -3.0 | +1.0 | +10.3
CoT + D-LM PRF | +5.3 | -7.4 | -25.8 | +1.9 | +2.7 | +1.0 | -4.0 | +0.9 | +8.8
Q+D LM PRF | +0.7 | +1.6 | +6.4 | +0.6 | -1.0 | +0.4 | -4.0 | -0.2 | +3.3
Table 6: How different expansions affect results on datasets that measure Long Query Format Shift. Colors indicate a positive or negative delta from the non-augmented base score. Unlike previous results, notice that all models benefit from some type of expansion on all three datasets.
| WikiQA | Quora
---|---|---
Type | Model | DPR | Contriever FT | MonoT5-3B | DPR | Contriever FT | MonoT5-3B
| Base | 47.2 | 68.6 | 75.9 | 68.4 | 86.7 | 83.9
Query | HyDE | +16.4 | +3.6 | -1.6 | -15.4 | -13.8 | -8.2
CoT | +9.8 | -0.9 | -6.1 | -32.3 | -31.5 | -35.4
Q-LM PRF | +11.9 | -2.2 | -4.2 | -13.8 | -11.4 | -7.0
Doc | D2Q | +5.4 | -1.8 | -1.7 | -6.2 | -3.7 | +0.0
D-LM PRF | -2.8 | -10.8 | -21.4 | -10.0 | -15.6 | -17.0
Both | HyDE + D2Q | +17.7 | +2.1 | -2.7 | -11.4 | -10.1 | -7.1
CoT + D2Q | +11.3 | -1.5 | -6.9 | -25.7 | -26.3 | -32.5
Q-LM PRF + D2Q | +13.0 | -1.1 | -6.2 | -9.4 | -8.7 | -6.9
HyDE + D-LM PRF | +12.6 | -6.2 | -18.0 | -21.1 | -22.1 | -20.2
CoT + D-LM PRF | +7.0 | -10.3 | -19.0 | -35.6 | -36.8 | -41.4
Q+D LM PRF | +9.5 | -6.1 | -10.8 | -19.4 | -19.6 | -17.8
Table 7: How different expansions affect results on datasets that measure
Short Document Format Shift. Colors indicate a positive or negative delta from
the non-augmented base score. Notice that models with higher base scores are
generally harmed by expansions while weaker models benefit from them.
##### Effect of Different Models
Our results with all models (Figure 1) show a consistent pattern: as base
performance on a task increases, the gains from expansion decrease. We also
see this trend in Table 1 (note that the ArguAna results are sorted by MSMarco
performance; when sorted by ArguAna score, they appear as in Figure 1).
Interestingly, these results do not depend on the model architecture: this is
true for bi-encoders, late-interaction models, neural sparse models, and
cross-encoders.
However, do these results hold for other datasets? Figure 2 answers this and
shows the distributions of score changes for models when using expansions
over a wide range of datasets. We find the same pattern: models that perform
better (such as MonoT5-3B) get less from expansions.
## 4 RQ2: How do different distribution shifts impact these results?
##### Experimental Setting
We evaluate how query and document expansion are impacted by different
distribution shifts: in-domain/no shift (MSMarco), domain shift (e.g. medical,
code, legal), relevance shift (finding the opposite or a counterargument), and
format shift (queries that are long documents or documents that are short).
The datasets we use and their descriptive statistics are in Table 2. We use
three representative models for these experiments.
Figure 3: An example of expansions obscuring the relevance signal. The non-
relevant document in red was ranked higher than the relevant blue document due
to the phrase “Home Equity Line of Credit” being added to the query. The left
side shows the original query and documents, while the right side shows the
query and document expansions.
##### In-Domain
We use two datasets that test performance on the MSMarco collection: TREC Deep
Learning Tracks 2019 and 2020 Craswell et al. (2020, 2021).⁶ Nearly all
retrieval models use MSMarco for training, hence these are in-domain.

⁶ Despite the different names, TREC DL 2019 and 2020 use the same document
collection as MSMarco, albeit with new queries and relevance judgements.
##### Domain Shift
In this setting models must generalize from their training on standard web
documents (e.g. MSMarco) to new domains, such as legal or medical text. This
type of shift is made difficult by specialized vocabulary in these domains. We
use NFCorpus (medical) Boteva et al. (2016), GooAQ Technical (code) Khashabi
et al. (2021), and FiQA-2018 (finance) Maia et al. (2018).
##### Relevance Shift
This setting is characterized by a difference in the way relevance is defined.
Standard retrieval models have learned to define relevance in terms of casual
web searches. However, there are other situations where this differs, such as
queries that are looking for opposites, counterarguments, or neutral
information. We use two datasets that search for refutations or
counterarguments: Touché-2020 Bondarenko et al. (2020) and a subset of SciFact
Wadden et al. (2020) whose gold documents refute the queries’ claims.
##### Format Shift
Another type of shift is the length of inputs: generally, queries are short
and documents are paragraph-sized. However, there are situations where queries
could be document-sized or the documents could be short. This shift tests
whether models can generalize to new length formats.
We consider two groups of datasets: for shift to long query we use Tip of My
Tongue Lin et al. (2023), TREC Clinical Trials Track 2021 Roberts et al.
(2021), and ArguAna Wachsmuth et al. (2018). For shift to short document, we
use two datasets: Quora Iyer et al. (2017) and WikiQA Yang et al.
(2015).⁷

⁷ Due to the Twitter API restrictions, we could not use Signal from BEIR.
### 4.1 Results by Type of Shift
Table 3 shows results for in-domain data on the 2019 and 2020 Deep Learning
TREC Tracks. We see that weaker models improve with different expansion types,
with DPR improving for almost every expansion and the stronger Contriever
showing minor improvements for some combinations. However, when we move to the
stronger models (e.g., MonoT5-3B), we find that all of these gains disappear
and expansions hurt the model.
We find that this trend holds in most other categories of shift: Table 4 for
domain shift, Table 5 for relevance shift, and Table 7 for short document
shift. Note that Figure 2 also shows this visually.
The exceptions to this pattern occur only in format shift: for Quora (Table 7)
where all models are harmed with expansion and for long query shift (Table 6)
where expansions generally help most models. When we examine why expansions
help for long query shift, we find that it transforms the query to become more
“standard” (i.e., short) for MSMarco trained models (e.g., for ArguAna the
query changes from a long document of an argument to one shorter question that
summarizes it).
As no model evaluated in this work is fine-tuned on long queries, it is an
open question whether additional training would make this category of
generalization easier for models and less reliant on expansions.
Figure 4: Effect of scale on the impact of expansions (Table 1, MonoT5).
Larger models benefit less from expansions.
## 5 RQ3: Why do expansions hurt stronger IR models?
Sections 3 and 4 show that strong IR models do not benefit from expansions.
But why is this true? One suggestion might be that larger models are better
able to take advantage of the information in the original documents. We test
this hypothesis and provide an error analysis to answer these questions.
### 5.1 Effect of Model Size
To show whether it is solely model size that impacts the gains from expansion,
we use two different families of models: MonoT5 and E5. If model size is the
cause, we would expect to see larger models gain less from expansions for both
families.
However, Figure 4 shows that model scale is inversely correlated with gains
from expansion for the MonoT5-family, but not the E5-family. The crucial
difference between them⁸ can be attributed to the E5 models having similar
performance scores across sizes, whereas T5 has a much wider range: T5 differs
by 21 nDCG@10 points on ArguAna from 3B to small, while E5 differs by only 3
points from large to small. Thus, we see that model size impacts gains from
expansions only in tandem with the correlation between model size and
performance.

⁸ Another obvious difference is that E5 is a bi-encoder while MonoT5 is not.
However, previous work Muennighoff (2022) has shown that bi-encoders also
improve with scale.
### 5.2 Error Analysis
If model size is not the reason for this phenomenon, what could be causing it?
To gain an intuition on possible failures of LM-based expansion, we annotate
30 examples from three datasets where performance declines when expanding both
queries and documents.
We find that out of the 30 examples, two are false negatives, i.e., relevant
documents that are unjudged and not labeled as relevant (both from FiQA). Of
the remaining 28, all errors are due to the expanded version including
keywords that hurt the ranking: deemphasizing pertinent keywords by shifting
focus to less salient keywords that were already present or to new keywords
added by the expansion. An example of this behavior is in Figure 3, where we
can see how query expansion added the term “Home Equity Line of Credit” and
distracted from the main focus of the question (using bitcoins as collateral).
On the other hand, when no irrelevant information is introduced by LMs, well-
tuned ranker models can accurately estimate the relevance of subtly different
documents.
## 6 Discussion
Our results indicate three phenomena regarding query expansion using LMs: (i)
expansions generally benefit weaker models, such as DPR, while better-
performing rankers, such as T5, are penalized; (ii) exceptions are observed in
cases of severe distribution shift, such as with very long queries; and
finally, (iii) when model performance is negatively impacted, the cause is
generally the expansion weakening the original relevance signal.
This implies that even though the LMs are orders of magnitude larger and more
powerful than smaller rerankers, they should not be used to augment strong
performing IR models without careful testing. The strong performance of
reranker models for generalization confirms previous work by Rosa et al.
(2022a). Further, Table 3 indicates this characterization of LM expansion also
holds even when models are tested on in-domain collections (no distribution
shift).
Interestingly, our experiments find that the only distribution shift that
consistently needs expansion is long query format shift; we found no
equivalent result for domain, document, or relevance shift. Future work may
examine whether improved training techniques on longer queries can overcome
this limitation or whether longer queries are innately more difficult for
ranking tasks.
## 7 Related Work
##### Large Scale Analyses in Neural IR
Comprehensive analyses have provided great insight into practical uses of
retrieval. These cover many aspects of information retrieval, including
interpretability MacAvaney et al. (2022), domain changes Lupart et al. (2023),
syntax phenomena Chari et al. (2023); Weller et al. (2023), and the
relationship between neural models and classical IR approaches Formal et al.
(2021b); Chen et al. (2022).
##### Generalization in Neural IR
As retrieval models have become more effective, attention has turned to
improving and evaluating the way that IR models generalize to out-of-
distribution datasets (e.g. not MSMarco-like corpora). One prominent example
of this is the BEIR dataset suite Thakur et al. (2021), which is commonly used
for retrieval evaluation. Much other work has proposed new datasets for types
of shift (e.g. MTEB Muennighoff et al. (2023) among others Han et al. (2023);
Ravfogel et al. (2023); Weller et al. (2023); Mayfield et al. (2023)), as well
as many new modeling strategies for better zero-shot retrieval Dai et al.
(2022); Wang et al. (2022a). We follow these works by studying different types
of generalization and whether these types of shift change the results for LM-
based expansion techniques.
##### Effect of Scale on Neural IR Models
As in Natural Language Processing (NLP), IR models typically improve with
scale Nogueira et al. (2020) but are also more heavily constrained, due to the
requirement of processing millions of documents in real-time for live search.
Thus, most first-stage IR models typically use a BERT backbone Santhanam et
al. (2022); Izacard et al. (2021) while reranker models have scaled to the
billions of parameters Nogueira et al. (2020). Previous work on scaling bi-
encoder architectures have also shown performance gains from scale Muennighoff
(2022), but scaling up first-stage retrieval is less common than scaling
cross-encoders.
Due to the effectiveness of larger models, recent work has even shown that a
better first-stage model does not lead to improvements over a BM25 + reranker
pipeline Rosa et al. (2022a). Thus, for our experiments we use BM25 for first-
stage retrieval and report results from reranking its output.
## 8 Conclusion
We conduct the first large-scale analysis of large language model (LM) based
query and document expansion, studying how model performance, architecture,
and size affect these results. We find that these expansions improve weaker
IR models while generally harming performance for the strongest models
(including large rerankers and heavily optimized first-stage models). We
further show that this negative correlation between model performance and
gains from expansion holds for a wide variety of out-of-distribution
datasets, except for long query shift, where the correlation is weaker.
Overall, our results indicate that LM expansion should not be used for
stronger IR models and should instead be confined to weaker retrieval models.
## Limitations
* •
This work does not train rankers to deal with augmentations. That might
mitigate negative effects of some expansions, although it requires having
access to supervised data, which might not be available for out-of-domain
tasks.
* •
Deciding whether to use augmentation requires having access to evaluation data
for the target domain; in some cases, such data might not be available.
* •
In the current version of the manuscript, we tested our approach with
commercial language models available via paid APIs. We feel this is justified
since our contributions are independent of the specific model used, as long
as it can follow the instructions given. Nevertheless, the use of commercial
APIs limits reproducibility and presents a significant barrier to those who
cannot get access to the model.
* •
Similarly, a replication of this work would require access to significant
computational resources, including GPUs. A rough estimate shows that
generating results for this paper required north of 10,000 A6000 GPU hours,
with a further 5,000 hours required to develop a stable experimental
platform.
* •
This work only studies datasets in English. While LM augmentations could play
an important role in improving non-English, cross-lingual, and multilingual
information retrieval, they require careful analysis.
## Ethical Considerations
* •
This work shows that LM augmentations make mistakes; while our system never
returns the output of the LM directly, inaccuracies might result in
non-relevant documents being presented to users.
## References
* Asai et al. (2022) Akari Asai, Timo Schick, Patrick Lewis, Xilun Chen, Gautier Izacard, Sebastian Riedel, Hannaneh Hajishirzi, and Wen-tau Yih. 2022. Task-aware retrieval with instructions. _ArXiv preprint_ , abs/2211.09260.
* Bondarenko et al. (2020) Alexander Bondarenko, Maik Fröbe, Meriem Beloucif, Lukas Gienapp, Yamen Ajjour, Alexander Panchenko, Chris Biemann, Benno Stein, Henning Wachsmuth, Martin Potthast, and Matthias Hagen. 2020. Overview of Touché 2020: Argument Retrieval. In _Working Notes Papers of the CLEF 2020 Evaluation Labs_ , volume 2696 of _CEUR Workshop Proceedings_.
* Boteva et al. (2016) Vera Boteva, Demian Gholipour, Artem Sokolov, and Stefan Riezler. 2016. A full-text learning to rank dataset for medical information retrieval. In _Proceedings of the 38th European Conference on Information Retrieval (ECIR 2016)_ , pages 716–722.
* Chari et al. (2023) Andreas Chari, Sean MacAvaney, and Iadh Ounis. 2023. On the effects of regional spelling conventions in retrieval models. _Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval_.
* Chen et al. (2022) Xilun Chen, Kushal Lakhotia, Barlas Oguz, Anchit Gupta, Patrick Lewis, Stan Peshterliev, Yashar Mehdad, Sonal Gupta, and Wen-tau Yih. 2022. Salient phrase aware dense retrieval: Can a dense retriever imitate a sparse one? In _Findings of the Association for Computational Linguistics: EMNLP 2022_ , pages 250–262, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
* Craswell et al. (2021) Nick Craswell, Bhaskar Mitra, Emine Yilmaz, and Daniel Campos. 2021. Overview of the trec 2020 deep learning track.
* Craswell et al. (2020) Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, and Ellen M Voorhees. 2020. Overview of the trec 2019 deep learning track. _ArXiv preprint_ , abs/2003.07820.
* Dai et al. (2022) Zhuyun Dai, Vincent Zhao, Ji Ma, Yi Luan, Jianmo Ni, Jing Lu, Anton Bakalov, Kelvin Guu, Keith B. Hall, and Ming-Wei Chang. 2022. Promptagator: Few-shot dense retrieval from 8 examples. _ArXiv preprint_ , abs/2209.11755.
* Formal et al. (2021a) Thibault Formal, Carlos Lassance, Benjamin Piwowarski, and Stéphane Clinchant. 2021a. Splade v2: Sparse lexical and expansion model for information retrieval. _ArXiv preprint_ , abs/2109.10086.
* Formal et al. (2021b) Thibault Formal, Benjamin Piwowarski, and Stéphane Clinchant. 2021b. Match your words! a study of lexical matching in neural information retrieval. In _European Conference on Information Retrieval_.
* Gao et al. (2022) Luyu Gao, Xueguang Ma, Jimmy Lin, and Jamie Callan. 2022. Precise zero-shot dense retrieval without relevance labels. _ArXiv preprint_ , abs/2212.10496.
* Han et al. (2023) Rujun Han, Peng Qi, Yuhao Zhang, Lan Liu, Juliette Burger, William Yang Wang, Zhiheng Huang, Bing Xiang, and Dan Roth. 2023. Robustqa: Benchmarking the robustness of domain adaptation for open-domain question answering. In _Annual Meeting of the Association for Computational Linguistics_.
* Iyer et al. (2017) Shankar Iyer, Nikhil Dandekar, and Kornél Csernai. 2017. Quora question pairs. _First Quora Dataset Release: Question Pairs_.
* Izacard et al. (2021) Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. 2021. Unsupervised dense information retrieval with contrastive learning. _ArXiv preprint_ , abs/2112.09118.
* Jagerman et al. (2023) Rolf Jagerman, Honglei Zhuang, Zhen Qin, Xuanhui Wang, and Michael Bendersky. 2023. Query expansion by prompting large language models. _ArXiv preprint_ , abs/2305.03653.
* Jeronymo et al. (2023) Vitor Jeronymo, Luiz Bonifacio, Hugo Abonizio, Marzieh Fadaee, Roberto Lotufo, Jakub Zavrel, and Rodrigo Nogueira. 2023. Inpars-v2: Large language models as efficient dataset generators for information retrieval. _ArXiv preprint_ , abs/2301.01820.
* Karpukhin et al. (2020) Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen Tau Yih. 2020. Dense passage retrieval for open-domain question answering. In _2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020_ , pages 6769–6781. Association for Computational Linguistics (ACL).
* Khashabi et al. (2021) Daniel Khashabi, Amos Ng, Tushar Khot, Ashish Sabharwal, Hannaneh Hajishirzi, and Chris Callison-Burch. 2021. GooAQ: Open question answering with diverse answer types. In _Findings of the Association for Computational Linguistics: EMNLP 2021_ , pages 421–433, Punta Cana, Dominican Republic. Association for Computational Linguistics.
* Li et al. (2023) Zehan Li, Xin Zhang, Yanzhao Zhang, Dingkun Long, Pengjun Xie, and Meishan Zhang. 2023. Towards general text embeddings with multi-stage contrastive learning. _ArXiv preprint_ , abs/2308.03281.
* Lin et al. (2023) Kevin Lin, Kyle Lo, Joseph E Gonzalez, and Dan Klein. 2023. Decomposing complex queries for tip-of-the-tongue retrieval. _ArXiv preprint_ , abs/2305.15053.
* Lupart et al. (2023) Simon Lupart, Thibault Formal, and Stéphane Clinchant. 2023. Ms-shift: An analysis of ms marco distribution shifts on neural retrieval. In _Advances in Information Retrieval_ , pages 636–652, Cham. Springer Nature Switzerland.
* MacAvaney et al. (2022) Sean MacAvaney, Sergey Feldman, Nazli Goharian, Doug Downey, and Arman Cohan. 2022. ABNIRML: Analyzing the behavior of neural IR models. _Transactions of the Association for Computational Linguistics_ , 10:224–239.
* Mackie et al. (2023) Iain Mackie, Shubham Chatterjee, and Jeffrey Stephen Dalton. 2023. Generative relevance feedback with large language models. _Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval_.
* Maia et al. (2018) Macedo Maia, Siegfried Handschuh, André Freitas, Brian Davis, Ross McDermott, Manel Zarrouk, and Alexandra Balahur. 2018. Www’18 open challenge: Financial opinion mining and question answering. In _Companion Proceedings of the The Web Conference 2018_ , WWW ’18, page 1941–1942, Republic and Canton of Geneva, CHE. International World Wide Web Conferences Steering Committee.
* Mayfield et al. (2023) James Mayfield, Eugene Yang, Dawn J Lawrie, Samuel Barham, Orion Weller, Marc Mason, Suraj Nair, and Scott Miller. 2023. Synthetic cross-language information retrieval training data. _ArXiv preprint_ , abs/2305.00331.
* Muennighoff (2022) Niklas Muennighoff. 2022. Sgpt: Gpt sentence embeddings for semantic search. _ArXiv preprint_ , abs/2202.08904.
* Muennighoff et al. (2023) Niklas Muennighoff, Nouamane Tazi, Loic Magne, and Nils Reimers. 2023. MTEB: Massive text embedding benchmark. In _Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics_ , pages 2014–2037, Dubrovnik, Croatia. Association for Computational Linguistics.
* Nguyen et al. (2016) Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. In _Proceedings of the Workshop on Cognitive Computation (CoCo@NIPS 2016)_.
* Nogueira et al. (2020) Rodrigo Nogueira, Zhiying Jiang, Ronak Pradeep, and Jimmy Lin. 2020. Document ranking with a pretrained sequence-to-sequence model. In _Findings of the Association for Computational Linguistics: EMNLP 2020_ , pages 708–718, Online. Association for Computational Linguistics.
* Nogueira et al. (2019a) Rodrigo Nogueira, Jimmy Lin, and AI Epistemic. 2019a. From doc2query to doctttttquery. _Online preprint_ , 6:2.
* Nogueira et al. (2019b) Rodrigo Nogueira, Wei Yang, Kyunghyun Cho, and Jimmy Lin. 2019b. Multi-stage document ranking with bert. _ArXiv preprint_ , abs/1910.14424.
* Nogueira et al. (2019c) Rodrigo Nogueira, Wei Yang, Jimmy Lin, and Kyunghyun Cho. 2019c. Document expansion by query prediction. _ArXiv preprint_ , abs/1904.08375.
* Ravfogel et al. (2023) Shauli Ravfogel, Valentina Pyatkin, Amir D. N. Cohen, Avshalom Manevich, and Yoav Goldberg. 2023. Retrieving texts based on abstract descriptions. _ArXiv preprint_ , abs/2305.12517.
* Reimers and Gurevych (2019) Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 3982–3992, Hong Kong, China. Association for Computational Linguistics.
* Roberts et al. (2021) Kirk Roberts, Dina Demner-Fushman, Ellen M Voorhees, Steven Bedrick, and Willian R Hersh. 2021. Overview of the trec 2021 clinical trials track. In _Proceedings of the Thirtieth Text REtrieval Conference (TREC 2021)_.
* Rosa et al. (2022a) Guilherme Rosa, Luiz Bonifacio, Vitor Jeronymo, Hugo Abonizio, Marzieh Fadaee, Roberto Lotufo, and Rodrigo Nogueira. 2022a. In defense of cross-encoders for zero-shot retrieval. _ArXiv preprint_ , abs/2212.06121.
* Rosa et al. (2022b) Guilherme Moraes Rosa, Luiz Henrique Bonifacio, Vitor Jeronymo, Hugo Abonizio, Marzieh Fadaee, Roberto de Alencar Lotufo, and Rodrigo Nogueira. 2022b. No parameter left behind: How distillation and model size affect zero-shot retrieval. _ArXiv preprint_ , abs/2206.02873.
* Santhanam et al. (2022) Keshav Santhanam, Omar Khattab, Jon Saad-Falcon, Christopher Potts, and Matei Zaharia. 2022. ColBERTv2: Effective and efficient retrieval via lightweight late interaction. In _Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 3715–3734, Seattle, United States. Association for Computational Linguistics.
* Thakur et al. (2021) Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, and Iryna Gurevych. 2021. BEIR: A heterogeneous benchmark for zero-shot evaluation of information retrieval models. In _Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2)_.
* Touvron et al. (2023a) Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023a. Llama: Open and efficient foundation language models. _ArXiv preprint_ , abs/2302.13971.
* Touvron et al. (2023b) Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. _ArXiv preprint_ , abs/2307.09288.
* Trivedi et al. (2022) Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot, and Ashish Sabharwal. 2022. Interleaving retrieval with chain-of-thought reasoning for knowledge-intensive multi-step questions. _ArXiv preprint_ , abs/2212.10509.
* Wachsmuth et al. (2018) Henning Wachsmuth, Shahbaz Syed, and Benno Stein. 2018. Retrieval of the best counterargument without prior topic knowledge. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 241–251, Melbourne, Australia. Association for Computational Linguistics.
* Wadden et al. (2020) David Wadden, Shanchuan Lin, Kyle Lo, Lucy Lu Wang, Madeleine van Zuylen, Arman Cohan, and Hannaneh Hajishirzi. 2020. Fact or fiction: Verifying scientific claims. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 7534–7550, Online. Association for Computational Linguistics.
* Wang et al. (2022a) Kexin Wang, Nandan Thakur, Nils Reimers, and Iryna Gurevych. 2022a. GPL: Generative pseudo labeling for unsupervised domain adaptation of dense retrieval. In _Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 2345–2360, Seattle, United States. Association for Computational Linguistics.
* Wang et al. (2022b) Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, and Furu Wei. 2022b. Text embeddings by weakly-supervised contrastive pre-training. _ArXiv preprint_ , abs/2212.03533.
* Wang et al. (2023a) Liang Wang, Nan Yang, and Furu Wei. 2023a. Query2doc: Query expansion with large language models. _ArXiv preprint_ , abs/2303.07678.
* Wang et al. (2020) Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, and Ming Zhou. 2020. Minilm: Deep self-attention distillation for task-agnostic compression of pre-trained transformers. In _Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual_.
* Wang et al. (2023b) Xiao Wang, Sean MacAvaney, Craig Macdonald, and Iadh Ounis. 2023b. Effective contrastive weighting for dense query expansion. In _Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 12688–12704.
* Wang et al. (2023c) Xiao Wang, Sean MacAvaney, Craig Macdonald, and Iadh Ounis. 2023c. Generative query reformulation for effective adhoc search. _ArXiv preprint_ , abs/2308.00415.
* Wang et al. (2023d) Xiao Wang, Craig Macdonald, Nicola Tonellotto, and Iadh Ounis. 2023d. Colbert-prf: Semantic pseudo-relevance feedback for dense passage and document retrieval. _ACM Transactions on the Web_ , 17(1):1–39.
* Wei et al. (2022) Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. _Advances in Neural Information Processing Systems_ , 35:24824–24837.
* Weller et al. (2023) Orion Weller, Dawn J Lawrie, and Benjamin Van Durme. 2023. Nevir: Negation in neural information retrieval. _ArXiv preprint_ , abs/2305.07614.
* Yang et al. (2015) Yi Yang, Wen-tau Yih, and Christopher Meek. 2015. WikiQA: A challenge dataset for open-domain question answering. In _Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing_ , pages 2013–2018, Lisbon, Portugal. Association for Computational Linguistics.
* Yates et al. (2021) Andrew Yates, Rodrigo Nogueira, and Jimmy Lin. 2021. Pretrained transformers for text ranking: BERT and beyond. In _Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Tutorials_ , pages 1–4, Online. Association for Computational Linguistics.
## Appendix A Different LMs for Expansion
Here we show results for GPT-4 expansions instead of ChatGPT in Table 8. We
can see that although the absolute numbers differ slightly, there is no
change to the trends discussed in the main paper: stronger models are harmed
by expansions while weaker models benefit. We use NFCorpus in place of FiQA
due to FiQA's larger collection size and the increased cost of annotating
with GPT-4.
|  |  | TREC DL 2019 |  |  | NFCorpus |  |  | Arguana |  |  |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Type | Model | DPR | Contriever FT | MonoT5-3B | DPR | Contriever FT | MonoT5-3B | DPR | Contriever FT | MonoT5-3B |
| $-$ | Base | 38.4 | 62.3 | 71.7 | 24.1 | 34.6 | 39.2 | 34.9 | 48.8 | 42.4 |
| ChatGPT | Q-LM PRF | +6.6 | +1.6 | -2.8 | +0.2 | -0.4 | -2.8 | -7.1 | -3.6 | +6.8 |
|  | D2Q | +3.1 | -0.2 | -2.0 | +1.3 | +0.6 | -0.5 | +1.6 | +2.0 | -1.9 |
|  | Q-LM PRF + D2Q | +10.8 | +0.6 | -5.0 | +1.4 | -0.1 | -3.0 | -4.4 | -2.5 | +5.2 |
| GPT-4 | Q-LM PRF | +13.3 | +5.2 | -0.6 | -7.8 | -17.5 | -22.6 | -6.2 | -4.5 | +4.5 |
|  | D2Q | -4.3 | -14.0 | -2.3 | +1.2 | +1.0 | -0.1 | +0.9 | +1.2 | +0.2 |
|  | Q-LM PRF + D2Q | +8.0 | -8.6 | -3.2 | -7.6 | -17.8 | -23.3 | -4.8 | -2.9 | +5.2 |

Table 8: How different LLMs used as the generator affect results. Each cell
shows the delta from the non-augmented base score (positive values indicate
improvement). Although there are small differences between models, the
overall trends are the same.
## Appendix B Placement of Expansions
In Table 9 we show different placements of expansions (i.e., do we prepend,
append, or replace the original text when doing expansion?). We find that the
placement does not make a significant difference to our overall results, as
the core conclusion of the paper remains the same; a sketch of the three
placements follows the table.
|  |  | MSMarco 2019 |  |  | FiQA |  |  | Arguana |  |  |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Type | Model | Contriever | MonoT5-small | MonoT5-3B | Contriever | MonoT5-small | MonoT5-3B | Contriever | MonoT5-small | MonoT5-3B |
| $-$ | Base | 49.0 | 66.6 | 71.2 | 21.3 | 34.3 | 45.9 | 45.8 | 21.0 | 40.6 |
| Query | Prepend | +8.1 | -2.8 | -4.2 | +5.1 | -0.3 | -5.6 | -3.2 | +22.2 | +6.9 |
|  | Append | +9.8 | -1.6 | -3.5 | +4.1 | +0.8 | -4.6 | -3.5 | +22.6 | +8.4 |
|  | Replace | +8.3 | -7.3 | -7.9 | +7.2 | -3.2 | -8.8 | -15.9 | +19.3 | +3.3 |
| Doc | Prepend | +8.5 | -2.2 | -1.9 | +5.9 | -2.0 | -3.1 | +1.4 | -5.4 | -12.4 |
|  | Append | +10.3 | -0.8 | -1.4 | +4.0 | -1.4 | -2.2 | +0.4 | -6.8 | -8.6 |
|  | Replace | +9.3 | -8.9 | -6.2 | +8.3 | -6.9 | -8.8 | -4.1 | -11.0 | -20.1 |
| Both | Prepend/Prepend | +9.4 | -2.2 | -2.0 | +5.9 | -4.0 | -4.6 | +1.5 | -9.7 | -19.8 |
|  | Prepend/Append | +11.0 | -0.9 | -1.9 | +4.1 | -3.3 | -2.8 | +0.5 | -8.7 | -18.3 |
|  | Prepend/Replace | +9.6 | -9.0 | -6.2 | +8.1 | -8.5 | -9.3 | -5.1 | -10.0 | -26.8 |
|  | Append/Prepend | +3.5 | -2.0 | -2.2 | +3.6 | +0.1 | -3.8 | -0.1 | +22.7 | +8.3 |
|  | Append/Append | +2.7 | -1.7 | -1.1 | +4.8 | -3.5 | -2.0 | -0.5 | -5.3 | -9.0 |
|  | Append/Replace | +3.0 | -1.7 | -1.3 | +4.6 | -5.6 | -2.2 | -0.3 | -8.0 | -18.8 |
|  | Replace/Prepend | +4.0 | -2.8 | -1.2 | +1.6 | -0.6 | -3.2 | +2.9 | -3.0 | -2.1 |
|  | Replace/Append | +5.9 | +0.2 | -0.7 | +0.9 | +0.6 | -1.2 | +1.2 | -1.5 | -0.9 |
|  | Replace/Replace | +5.7 | -11.8 | -8.7 | +4.4 | -5.3 | -10.4 | -1.0 | -5.0 | -9.1 |

Table 9: How different placements of the expansions affect results (e.g.
prepend/append/replace). Each cell shows the delta from the non-augmented
base score (positive values indicate improvement). Although there are small
differences, the overall trends are the same.
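The three placements referenced above are simple string operations; a minimal sketch follows (the function name is ours, for illustration only):

```python
def place_expansion(original: str, expansion: str, mode: str) -> str:
    """Combine a query or document with its LM expansion.

    mode: "prepend" puts the expansion before the original text,
    "append" puts it after, and "replace" discards the original.
    """
    if mode == "prepend":
        return expansion + " " + original
    if mode == "append":
        return original + " " + expansion
    if mode == "replace":
        return expansion
    raise ValueError(f"unknown placement mode: {mode}")

# e.g. place_expansion("why did the us enter ww1", hyde_passage, "append")
```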
## Appendix C Example Input/Output for expansions
Here we show example input and output for expansion for query and document
expansions.
### C.1 Document Expansion
We show example expansion from this document from MSMarco:
Whale Shark. The Whale Shark is the largest and the heaviest living shark
because its weight is more than 21 tons and its length is more than 12 meter.
It is found in open oceans and warm waters. It feeds on plankton but some
times it also uses other small fishes as food.
#### C.1.1 Doc2Query
What is the largest and heaviest living shark in the world? What are the
characteristics of the Whale Shark? Where can the Whale Shark be found? What
does the Whale Shark feed on? What is the size and weight of the Whale Shark?
#### C.1.2 D-LM PRF
The Whale Shark is the largest and heaviest living shark, weighing over 21
tons and measuring over 12 meters in length. It is found in open oceans and
warm waters. While it primarily feeds on plankton, it occasionally consumes
other small fishes as well.
### C.2 Query Expansion
For the MSMarco query “why did the us volunterilay enter ww1” the following
output was generated. For prompt formatting, see Appendix D.
#### C.2.1 HyDE
The United States voluntarily entered World War I primarily due to a
combination of factors. Firstly, the sinking of the British ocean liner RMS
Lusitania by a German U-boat in 1915, resulting in the loss of American lives,
created a strong public outcry for intervention. Additionally, Germany’s
resumption of unrestricted submarine warfare in 1917 further escalated
tensions. Furthermore, the interception of the Zimmermann Telegram, which
revealed Germany’s proposal to Mexico for a military alliance against the
United States, heightened fears of potential threats to American security.
These events, coupled with a desire to protect American interests and uphold
democratic principles, ultimately led the United States to voluntarily join
the war effort.
#### C.2.2 Q-LM PRF
What were the factors leading to the United States’ voluntary entry into WW1
and what were the consequences of their decision to join the Allies?
#### C.2.3 Chain of Thought
The United States voluntarily entered World War I primarily due to a
combination of factors. One of the key reasons was the German unrestricted
submarine warfare, which threatened American lives and interests at sea.
Additionally, the interception and decoding of the Zimmerman Telegram, in
which Germany proposed an alliance with Mexico against the US, further fueled
public sentiment for intervention. Moreover, there was a desire to protect
American economic interests, as the US had significant trade ties with the
Allied powers. Lastly, President Woodrow Wilson’s belief in promoting
democracy and his vision for a post-war world order based on the principles of
self-determination and collective security also played a role in the US
decision to enter the war.
## Appendix D Prompts for LM expansions
Here we show the prompts used for LM query and document expansion.
### D.1 Query Expansions
#### D.1.1 HyDE
We use the prompts from their paper, which are dataset specific, in this
manner:
<SPECIFIC PROMPT> <Type of Input>: QUERY <TYPE OF OUTPUT> (one short paragraph
max):
for example on Clinical Trials we use:
Please write a clinical trial summary that would apply to the following
patient. Patient Info: QUERY Trial Summary (one short paragraph max):
and on FiQA we use:
Please write a financial article passage to answer the question Question:
QUERY Passage (one short paragraph max):
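For illustration, filling the template and applying the expansion on the query side might look like the sketch below; `generate` stands for a call to the LM API and is hypothetical, and the template shown is the FiQA one from above.

```python
# Hypothetical sketch of HyDE-style query expansion; `generate` wraps an LM API call.
HYDE_FIQA_TEMPLATE = (
    "Please write a financial article passage to answer the question "
    "Question: {query} Passage (one short paragraph max):"
)

def hyde_expand(query: str, generate) -> str:
    # Generate a hypothetical passage and append it to the query
    # (the "append" placement from Appendix B).
    passage = generate(HYDE_FIQA_TEMPLATE.format(query=query))
    return query + " " + passage
```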
#### D.1.2 Q-LM PRF
You are a query expansion engine, primed and ready to take in text and output
additional keywords will provide new and expanded context behind the original
input. Your extensive world knowledge and linguistic creativity enables you to
provide questions that maximally optimize the new questions to find new
websites. You **always** provide creative synonyms and acronym expansions in
your new queries that will provide additional insight. Be sure to use new
words and spell out acronyms (or add new acronyms). Hint: think of ***new
synonyms and/or acronyms*** for “QUESTION" using these documents for
inspiration: DOCUMENTS Return the following information, filling it in: Input:
QUESTION Comma Separated List of 10 important New Keywords: “““NEW KEYWORDS
HERE""" New Question (combining Input and New Keywords, only **one** new
question that expands upon the Input): “““NEW QUESTION HERE""" Your output:
#### D.1.3 Chain of Thought
We use the same specific prompt for CoT as we do for HyDE. The format is as
follows:
<SPECIFIC PROMPT> QUESTION Give the rationale (one short paragraph max) before
answering.
### D.2 Document Expansions
#### D.2.1 D-LM PRF
Change the following document to answer these questions, if they are partially
answered by the document. If the queries are not relevant, ignore them. Your
new documents should be one concise paragraph following the examples. Example
1: Queries: 1\. “how much caffeine is in a 12 ounce cup of coffee?" 2\. “what
are the effects of alcohol and caffeine" 3\. “what can pregnant women not do?"
Document: “We don’t know a lot about the effects of caffeine during pregnancy
on you and your baby. So it’s best to limit the amount you get each day. If
you are pregnant, limit caffeine to 200 milligrams each day. This is about the
amount in 1½ 8-ounce cups of coffee or one 12-ounce cup of coffee." New
Document (similar to Document): “There is a lack of research about the effects
of caffeine during pregnancy on you and your baby. So it’s best to limit the
amount you get each day. If you are pregnant, limit caffeine to 200 milligrams
(mg) each day. This is about the amount in 1½ 8-ounce cups of coffee or one
12-ounce cup of coffee (e.g. 200 milligrams)." Example 2: Queries: QUERIES
Document: “DOCUMENT" New Document (similar to Document):
#### D.2.2 Doc2Query
You are an optimized query expansion model, ExpansionGPT. You will write 5
queries for the given document that help retrieval models better find this
document during search. Document: “QUESTION" Queries:
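Putting the document side together, a Doc2Query-style expansion pass before indexing might look like the following sketch; `generate` is again a hypothetical LM call using the prompt above.

```python
# Hypothetical sketch of Doc2Query-style document expansion before indexing.
D2Q_TEMPLATE = (
    "You are an optimized query expansion model, ExpansionGPT. You will write "
    "5 queries for the given document that help retrieval models better find "
    'this document during search. Document: "{doc}" Queries:'
)

def expand_documents(docs, generate, mode="append"):
    # `generate` wraps an LM API call; the expanded texts are what gets indexed.
    expanded = []
    for doc in docs:
        queries = generate(D2Q_TEMPLATE.format(doc=doc))
        expanded.append(doc + " " + queries if mode == "append" else queries)
    return expanded
```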
# Uryson width and pants decompositions of hyperbolic surfaces
Gregory R. Chambers<EMAIL_ADDRESS>Department of Mathematics, Rice
University, Houston, Texas, USA
###### Abstract.
Suppose that $M$ is a complete orientable hyperbolic surface of genus $g$ and
with $n$ cusps. Then we can find a pants decomposition of $M$ composed of
simple closed geodesics so that each curve is contained in a ball of diameter
at most $C\sqrt{g+n}$, where $C$ is a universal constant.
## 1\. Introduction
In this article, we will examine pants decompositions of hyperbolic surfaces.
In particular, suppose that $M$ is a hyperbolic surface with genus $g$ and
with $n$ cusps. A pants decomposition of such a surface is a finite sequence
$\gamma_{1},\dots,\gamma_{k}$ of simple closed smooth curves which are
pairwise disjoint, and so that if we remove their images from $M$, the
remainder is a finite union of thrice punctured hyperbolic spheres. If
$2g+n<3$, then no such decomposition exists; we will assume that $2g+n\geq 3$.
In general there are many possible ways to decompose such a hyperbolic surface; we
will be interested in the lengths of the curves $\gamma_{1},\dots,\gamma_{k}$.
In [3] and [4], Bers showed that if $M$ is closed, there is a choice so that
these lengths are all bounded by a constant that depends only on the genus;
these bounds are called Bers’ constants.
In [6], [7], and [8], Buser studied optimal bounds on these constants,
producing linear upper bounds and square root lower bounds (the square root of
$g+n$ if the surface has cusps). He made the following conjecture:
###### Conjecture 1 (Buser).
Suppose that $M$ is a hyperbolic surface with genus $g$ and $n$ cusps. Then it
has a pants decomposition in which each curve has length at most $C\sqrt{g+n}$
for some universal constant $C$.
A more in depth discussion of the background of his conjecture can be found in
the introduction of [1], in which Balacheff and Parlier prove Conjecture 1 if
$g=0$. In this article, we prove the following:
###### Theorem 1.
Suppose that $M$ is a hyperbolic surface with genus $g$ and $n$ cusps. Then
$M$ has a pants decomposition so that each curve is a simple closed geodesic,
and each has ambient diameter at most $C\sqrt{g+n}$.
A higher-dimensional version of this theorem, sweeping out closed Riemannian
manifolds using $1$-dimensional cycles, was proved by Nabutovsky, Rotman, and
Sabourau in [13].
The organization of the remainder of the article is as follows. In Section 2,
we prove Theorem 1, leaving several main components of the proof for later
discussion. These components involve bounds on the Uryson width of a closed
Riemannian manifold, which will be discussed in Section 3, and a curve
shortening process developed by Hass and Scott, which will be discussed in
Section 4.
We close this section with a few remarks. Throughout this article, we will use
the notation $A\lesssim B$ to mean that there is some universal constant $C$
so that $A\leq CB$. If $C$ depends on the dimension $n$ only, then we will
write $A\lesssim_{n}B$ to mean $A\leq C(n)B$. We define $\gtrsim$ and
$\gtrsim_{n}$ analogously. We define $A\approx B$ to mean $A\lesssim B$ and
$B\lesssim A$, and $A\approx_{n}B$ to mean $A\lesssim_{n}B$ and
$B\lesssim_{n}A$.
We will also state the following proposition containing several standard facts
about hyperbolic surfaces; these will be useful later on. Since these results
are standard, we omit their proofs:
###### Proposition 2.
Suppose that $M$ is a hyperbolic surface of genus $g$ and with $n$ cusps. Then
the following are true:
1. (1)
The area of $M$ is $\approx g+n$.
2. (2)
For every cusp of $M$, there is an open subset $U$ of $M$ which is isometric
to $D=\\{z\in\mathbb{R}^{2}:0<|z|<1\\}$ with a metric $G$ so that:
1. (a)
The length of the circle $C_{\rho}=\\{z\in\mathbb{R}^{2}:|z|=\rho\\}$ goes to
$0$ as $\rho\rightarrow 0^{+}$ (with $\rho\in(0,1)$).
2. (b)
For every $\rho\in(0,1)$, if $\\{x_{i}\\}$ and $\\{y_{i}\\}$ are sequences of
points in $D$ with $|x_{i}|>\rho$ and $|y_{i}|\rightarrow 0^{+}$, then the
distance from $x_{i}$ to $y_{i}$ goes to $\infty$.
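For orientation, we note that item (1) follows from the Gauss–Bonnet theorem: since the curvature is $K\equiv-1$ and $\chi(M)=2-2g-n$, we have
$\textrm{Area}(M)=-\int_{M}K\,dV=-2\pi\chi(M)=2\pi(2g-2+n)\approx g+n,$
where the last comparison uses that $2g-2+n\geq 1$ whenever $M$ carries a complete hyperbolic metric.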
Acknowledgments The author would like to thank Larry Guth for first
introducing him to this problem. He would also like to thank Alexander
Nabutovsky, Regina Rotman, Yevgeny Liokumovich, Robert Young, Arnaud de
Mesmay, and Stéphane Sabourau for many discussions about this problem and
surrounding literature. The author would also like to thank Alan Reid for
helpful comments on the initial draft of this article. Lastly, the author
would like to thank his mother, Catherine Chambers, for taking care of his son
William while this article was being written. The author would also like to
thank Maxime Fortier Bourque and Bram Petri for pointing out errors in a
previous version of this article. The research of the author was partially
supported by NSF Grant DMS-1906543.
## 2\. Proof of Theorem 1
In this section, we prove Theorem 1. As mentioned in the introduction, if
$2g+n<3$, then no such pants decomposition exists. The case $g=0$ is handled
by the result of Balacheff and Parlier stated below as Theorem 6, and we will
deal with the case $2g+n\geq 3$ and $g=1$ at the end of this section. If
$2g+n\geq 3$ and $g\geq 2$, then we proceed with the following definition:
###### Definition 1.
Suppose that $M$ is a hyperbolic surface of genus $g$ and $n$ cusps. A pseudo
pants decomposition is a finite sequence $\gamma_{1},\dots,\gamma_{k}$ of
simple closed pairwise disjoint curves so that $M$ with
$\gamma_{1},\dots,\gamma_{k}$ removed consists of a finite union of spheres,
each of which has three punctures formed from removing
$\gamma_{1},\dots,\gamma_{k}$, and between $0$ and $n$ of the original $n$
cusps.
The first part of the proof will involve finding a “good” pseudo pants
decomposition. Such a pants decomposition is one in which the ambient diameter
of each curve is $\lesssim\sqrt{g+n}$. This is stated in the following
proposition:
###### Proposition 3.
Suppose that $M$ is a hyperbolic surface with genus $g$ and $n$ cusps. If
$g\geq 2$, then there exists a pseudo pants decomposition so that the ambient
diameter of each curve is $\lesssim\sqrt{g+n}$.
To prove this proposition, we will use bounds on the Uryson width. We will
discuss the definition of Uryson width and some background in Section 3. The
main result that we will need, first proved in [10] by Guth in 2011, is as
follows:
###### Theorem 4.
If $M$ is a closed Riemannian $n$-manifold, then there is an
$(n-1)$-dimensional simplicial complex $\Gamma$ and a continuous function
$f:M\rightarrow\Gamma$ so that, for every $p\in\Gamma$, $f^{-1}(p)$ has
ambient diameter $\lesssim_{n}\textrm{Vol}(M)^{1/n}.$
The idea for the proof of Proposition 3 will be to first identify short curves
around each cusp, to cut the cusp out using this curve, and then to fill the
curve with a portion of a small sphere. After smoothing out the result, we use
Theorem 4, then map the simplicial complex to $\mathbb{R}$, and then finally
approximate the resulting function with a Morse function.
For a suitably small $C^{0}$ approximation, we will show that we can make the
same conclusion about the preimages of the Morse function, with a slightly
worse constant. We use the Morse function to find a pants decomposition of the
aforementioned closed Riemannian manifold. After arguing that these curves lie
outside the pieces that we glued in, we obtain our pseudo pants decomposition.
Since $\textrm{Area}(M)\approx g+n$, so that $\sqrt{\textrm{Area}(M)}\approx\sqrt{g+n}$, this yields a good bound on the
diameter.
The next step is to take this pseudo pants decomposition, and to replace the
curves with simple closed geodesics:
###### Proposition 5.
Suppose that $M$ is a hyperbolic surface and $\gamma_{1},\dots,\gamma_{k}$ is
a pseudo pants decomposition for $M$ so that the ambient diameter of each
$\gamma_{i}$ is $\leq D$. Then there exists a pseudo pants decomposition
$\tilde{\gamma}_{1},\dots,\tilde{\gamma}_{k}$ so that each curve is a simple
closed geodesic and has ambient diameter $\leq D$.
The method that we employ to prove Proposition 5 is a curve shortening process
developed by Hass and Scott in [11]. This process resembles the Birkhoff curve
shortening process; it involves replacing short segments of a given sequence
of curves $\alpha_{1},\dots,\alpha_{m}$ with geodesic segments, forming a new
sequence of collections of curves
$\tilde{\alpha_{1}},\dots,\tilde{\alpha}_{m}$. This process has the important
properties that if the original curves are simple and disjoint, then the new
curves are also simple and disjoint. In addition, they are (respectively)
homotopic to the original curves.
Lastly, and critically, if we continue to repeat this procedure, then we
obtain $m$ sequences of curves; each sequence of curves converges to a closed
geodesic. If we begin with a pseudo pants decomposition, then these final
closed geodesics are simple and disjoint, and also form a pseudo pants
decomposition.
Next, we argue that since this process replaces small segments with short
geodesic arcs, and since each of the original curves has ambient diameter
$\lesssim\sqrt{g+n}$, the new curves still have this diameter bound. This uses
the fact that the surface is hyperbolic. Furthermore, these curves are all
closed geodesics, as desired. To complete the proof, we use the results of
Balacheff and Parlier in [1]. Their main theorem is a proof of Conjecture 1
for $g=0$:
###### Theorem 6 (Balacheff and Parlier).
Suppose that $M$ is a hyperbolic sphere with $n$ cusps. Then $M$ has a pants
decomposition composed of curves each of length $\lesssim\sqrt{n}$.
We would like to apply this theorem to each of the components of the manifold
after we remove the $k$ simple closed geodesics that form the pseudo pants
decomposition that we found above. However, each component has $3+m$
punctures, where $3$ punctures come from three curves which are removed, and
the $m$ punctures come from cusps $(0\leq m\leq n)$. In the same article, they
prove a similar result for hyperbolic spheres with boundary geodesics. This is
stated in Lemma 8 of that article.
The approach is, for each geodesic boundary component $\gamma$, to glue a
hyperbolic pair of pants to it with two cusps and one geodesic boundary
component equal in length to $\gamma$. This produces a sphere with cusps, to
which we can then apply Theorem 6. Furthermore, Balacheff and Parlier show
that we can force the original closed geodesics to be in this pants
decomposition at the expense of adding the sum of the lengths of all of the
geodesics to the bound. We state this as follows; the proof follows from
Theorem 6 and Lemma 8 in [1] as described:
###### Theorem 7 (Balacheff and Parlier).
Suppose that $M$ is a hyperbolic sphere with $m$ cusps and $k$ boundary
components, each of which is a geodesic of length at most $\ell$. Then we can
find a pants decomposition of $M$ so that each curve has length at most
$\lesssim\sqrt{m}+k\ell.$
We then apply Theorem 7 to each of the components we found above; in this case
$k=3$, $0\leq m\leq n$, and $\ell\lesssim\sqrt{g+n}$. Thus, each curve that we
add is itself contained in a ball of radius $\lesssim\sqrt{g+n}$. We
then apply the approach of Hass and Scott described above to all of the curves
(the original curves and the new ones) to complete the proof of Theorem 1.
If $g=1$, then we find the systole $\gamma$ of $M$. This is the shortest
noncontractible curve; it is smooth and simple. Gromov proved in [9] that it
has length $\lesssim\sqrt{\textrm{Area}(M)}$ (for hyperbolic surfaces we can
actually do better, bounding the systole by a constant times the logarithm of
the genus). Since $\textrm{Area}(M)\approx g+n$, the systole has length
$\lesssim\sqrt{g+n}$. If we choose this curve $\gamma$ as the single curve and
then remove it, the result is a sphere with exactly two geodesic boundary
components (each of length $\lesssim\sqrt{g+n}$) and $n$ cusps. We then apply
the above results of Balacheff and Parlier to complete the proof of this
case. This concludes the proof of Theorem 1.
## 3\. Uryson Width and Proposition 3
The purpose of this section is to prove Proposition 3. To do this, we will use
bounds on the Uryson $1$-width of a closed Riemannian surface. We begin with
the definition of Uryson width, a method of measuring how closely an
$n$-manifold resembles a $k$-dimensional simplicial complex first introduced
by Uryson and popularized by Gromov in [9]:
###### Definition 2.
Suppose that $M$ is a closed Riemannian $n$-manifold, and $0\leq k\leq n$. We
say that the Uryson $k$-width is bounded by $\rho$ if there is some
$k$-dimensional simplicial complex $\mathcal{C}$ and a continuous function
$f:M\rightarrow\mathcal{C}$ so that the preimage of every point in
$\mathcal{C}$ has ambient diameter $\leq\rho$. The Uryson $k$-width is then
the infimum over all such $\rho$.
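As a simple illustration of the definition, consider the flat torus $T=S^{1}(\epsilon)\times S^{1}(L)$ with $\epsilon\ll L$, where $S^{1}(r)$ denotes the circle of length $r$. Projecting onto the long factor is a continuous map to a $1$-dimensional complex whose point preimages are circles of length $\epsilon$, and hence of ambient diameter at most $\epsilon/2$, so
$\textrm{Uryson }1\textrm{-width}(T)\ \leq\ \epsilon/2\ \ll\ \textrm{Diameter}(T)\approx L.$
The width measures the size of the fibers, not of the whole space.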
As stated in Theorem 4, Guth showed that if $M$ is a closed Riemannian
$n$-manifold, then the Uryson $(n-1)$-width is bounded by
$\lesssim_{n}\textrm{Vol}(M)^{1/n}$; this is the main result that we will use
to prove Proposition 3, using $n=2$. Guth’s proof was extended to Hausdorff
Content (instead of volume) by Liokumovich, Lishak, Nabutovsky, and Rotman in
[12], and a new proof of both of these results was given in 2020 by Papasoglu
[14].
We will now prove Proposition 3. Suppose that $M$ is a hyperbolic surface with
genus $g\geq 2$ and $n$ cusps. To apply Guth’s result, we need to work with a
closed Riemannian surface; to this end, we cut out the cusps. Since the
surface is hyperbolic, we can find $n$ curves $\alpha_{1},\dots,\alpha_{n}$
which are smooth, simple, closed and disjoint so that each $\alpha_{i}$
encloses a punctured disc that contains exactly one cusp, and $\alpha_{i}$ and
$\alpha_{j}$ enclose distinct cusps if $i\neq j$. For every $\epsilon>0$, we
can find such curves so that the sum of all of the lengths of $\alpha_{i}$ is
less than $\epsilon$. We can do this because the surface is hyperbolic, and so
we can use Proposition 2. We will choose $\epsilon>0$ later.
We then delete the enclosed discs and cusps with respect to
$\alpha_{1},\dots,\alpha_{n}$. For each one of these boundary components
$\alpha_{i}$, we cap it with a hemisphere whose equator has length equal to
that of $\alpha_{i}$. We smooth out this gluing, and call the resulting
manifold $\tilde{M}$. We observe that, assuming that this smoothing is done on
a sufficiently small scale,
$\textrm{Area}(\tilde{M})\leq\textrm{Area}(M)+100n\epsilon^{2}$.
We now denote $\tilde{M}$ by $N$, and apply Theorem 4 to it. This results in a
$1$-complex $\Gamma$ along with a continuous function $f:N\rightarrow\Gamma$
so that, for every $p\in\Gamma$, $f^{-1}(p)$ has diameter
$\lesssim\sqrt{\textrm{Area}(N)}$.
We begin by embedding $\Gamma$ continuously into $\mathbb{R}^{3}$ so that, for
every plane $H_{z}=\\{(x,y,z):x,y\in\mathbb{R}\\}$ (with $z$ fixed), $H_{z}$ intersects the
image of $\Gamma$ in a finite number of points. Clearly, such an embedding
exists. For example, we can embed the vertices of $\Gamma$ distinctly, and
then join these vertices by smooth curves according to how they are joined in
$\Gamma$. After a small perturbation, the result has the desired properties.
Note that we can assume that $\Gamma$ is finite since $N$ is closed.
We will fix such an embedding $h:\Gamma\rightarrow\mathbb{R}^{3}$. Lastly, we
define the projection $\pi:\mathbb{R}^{3}\rightarrow\mathbb{R}$ by
$\pi(x,y,z)=z.$
These maps fit together as
$N\xrightarrow{\ f\ }\Gamma\xrightarrow{\ h\ }\mathbb{R}^{3}\xrightarrow{\ \pi\ }\mathbb{R}.$
We can now state the main lemma that we will need:
###### Lemma 8.
Suppose that $f$, $h$, and $\pi$ are as above, and let $C$ be the constant
from Theorem 4 applied with $n=2$. There is some $\rho>0$ so that the
following holds. For every closed interval $I=[a,b]$ with $b\geq a$ and
$b-a\leq\rho$, and for every connected component $X_{I}$ of
$(\pi\circ h)^{-1}(I),$
the diameter of $f^{-1}(X_{I})$ is $<2C\sqrt{\textrm{Area}(N)}$.
###### Proof.
Since $N$ is closed, $(\pi\circ h\circ f)(N)$ is a closed interval
$[\alpha,\beta]$. Suppose that the conclusion of the lemma were not true. We
could then find a sequence of intervals $I_{1},I_{2},\dots$ and, for each
$j$, a connected component $X_{I_{j}}$ of $(\pi\circ
h)^{-1}(I_{j})\subset\Gamma$ with the following properties:
1. (1)
The length of $I_{j}$ goes to $0$ as $j$ goes to $\infty$.
2. (2)
The diameter of $f^{-1}(X_{I_{j}})$ is at least $2C\sqrt{\textrm{Area}(N)}$
for every $j$.
Let $p_{j}$ be the center point of $I_{j}$; since $X_{I_{j}}$ has positive
diameter, $I_{j}$ must intersect $[\alpha,\beta]$, and so $p_{j}$ has a
convergent subsequence which converges to some $p\in[\alpha,\beta]$. For the
remainder of the proof, we will assume that we have already passed to such a
subsequence.
For every $\rho>0$, consider the interval $I_{\rho}=[p-\rho,p+\rho]$. We claim
that there is a connected component $X_{\rho}$ of $(\pi\circ h)^{-1}(I_{\rho})$
with $\textrm{Diameter}(f^{-1}(X_{\rho}))\geq 2C\sqrt{\textrm{Area}(N)}$. Fix a
$\rho>0$; there is some $j$ so that $I_{j}\subset I_{\rho}$, and so there is a
connected component $X_{\rho}$ of $(\pi\circ h)^{-1}(I_{\rho})$ with
$X_{I_{j}}\subset X_{\rho}$. As a result,
$\textrm{Diameter}(f^{-1}(X_{\rho}))\geq\textrm{Diameter}(f^{-1}(X_{I_{j}}))\geq
2C\sqrt{\textrm{Area}(N)}.$
Since $\Gamma$ is compact, every $X_{\rho}$ is compact. In addition, due to
this compactness, we can find $\rho_{1},\rho_{2},\dots$ with
1. (1)
$\rho_{i}\geq\rho_{i+1}$.
2. (2)
$\rho_{i}\rightarrow 0^{+}$ as $i\rightarrow\infty$.
3. (3)
$X_{\rho_{i+1}}\subset X_{\rho_{i}}$.
Thus,
$\bigcap_{i=1}^{\infty}X_{\rho_{i}}$
is compact and connected; it is a connected component of $(\pi\circ
h)^{-1}(p)$. Since the plane $H_{p}$ meets the image of $\Gamma$ in finitely
many points, this component is a single point of $\Gamma$. We denote it by
$X_{0}$.
From the above, for each $\rho_{i}$, there are $x_{\rho_{i}},y_{\rho_{i}}\in
f^{-1}(X_{\rho_{i}})$ so that the distance from $x_{\rho_{i}}$ to
$y_{\rho_{i}}$ is $\geq 2C\sqrt{\textrm{Area}(N)}$ (this uses the fact that
$f^{-1}(X_{\rho_{i}})$ is compact). Thus, passing to a subsequence twice,
$x_{\rho_{i}}\rightarrow x$ and $y_{\rho_{i}}\rightarrow y$ as
$i\rightarrow\infty$, and
1. (1)
$x\in f^{-1}(X_{0})$ and $y\in f^{-1}(X_{0})$.
2. (2)
The distance from $x$ to $y$ is at least $2C\sqrt{\textrm{Area}(N)}$.
However, this means that the diameter of $f^{-1}(X_{0})$, the preimage of a
single point of $\Gamma$, is at least $2C\sqrt{\textrm{Area}(N)}$, which
contradicts Theorem 4, completing the proof. ∎
We now continue with the proof of Proposition 3. We use Lemma 8 to produce a
pants decomposition of $N$. We begin with the functions $f$, $h$, and $\pi$
defined in the proof of Lemma 8, and let $\rho>0$ be as in its conclusion.
$\pi\circ h\circ f$ is a continuous function from $N$ to $\mathbb{R}$. We can
find a Morse function $\tilde{f}$ from $N$ to $\mathbb{R}$ so that, for every
$p\in N$,
$|(\pi\circ h\circ f)(p)-\tilde{f}(p)|\leq\rho/10.$
This is because we can approximate (in $C^{0}$) every continuous function on a
compact manifold with a smooth function, and every such smooth function can be
approximated (in $C^{\infty}$) by a Morse function.
Consider now $\gamma$, a connected component of $\tilde{f}^{-1}(q)$ for some
$q\in\mathbb{R}$. If $x,y\in\gamma$, then $|(\pi\circ h\circ
f)(x)-\tilde{f}(x)|\leq\rho/10$ and $|(\pi\circ h\circ
f)(y)-\tilde{f}(y)|\leq\rho/10$. Since $\tilde{f}(x)=\tilde{f}(y)=q$, we have
$|(\pi\circ h\circ f)(x)-(\pi\circ h\circ f)(y)|\leq\rho/5$. Hence, if we let
$a=(\pi\circ h\circ f)(x)$, then $\gamma$ is entirely contained in $(\pi\circ
h\circ f)^{-1}(I)$, where
$I=[a-\rho/5,a+\rho/5],$
and so is contained in the preimage under $f$ of a connected component of
$(\pi\circ h)^{-1}(I)$. By Lemma 8, this means that $\gamma$ has diameter at
most $2C\sqrt{\textrm{Area}(N)}$. Once we have this Morse function, it is
straightforward to produce our pants decomposition of $N$. To do this, if
$s_{1},\dots,s_{k}$ are the critical values of $\tilde{f}$, then we choose
$2k$ values $s_{1}-\kappa,s_{1}+\kappa,\dots,s_{k}-\kappa,s_{k}+\kappa$, where
$\kappa>0$ is so small that
$[s_{i}-\kappa,s_{i}+\kappa]\cap[s_{j}-\kappa,s_{j}+\kappa]=\emptyset$ for
every $i\neq j$. We then cut $N$ along
$\tilde{f}^{-1}(s_{1}-\kappa),\tilde{f}^{-1}(s_{1}+\kappa),\dots,\tilde{f}^{-1}(s_{k}-\kappa),\tilde{f}^{-1}(s_{k}+\kappa).$
Each of these is a collection of smooth curves, each of which has ambient
diameter $\lesssim\sqrt{\textrm{Area}(N)}$. After we remove these curves, we
are left with a set of once, twice, and thrice punctured spheres. Since the
genus of $N$ is at least $2$, there is at least one thrice punctured sphere.
We reglue each once punctured sphere along its single boundary component; this
results in a set of twice and thrice punctured spheres. Each twice punctured
sphere we then glue back along one of its boundary components. The result is a
collection of thrice punctured spheres; this is our pants decomposition.
We now choose $\epsilon>0$ so small that the following two properties are
true:
1. (1)
$\textrm{Area}(N)\leq 2\textrm{Area}(M)$
2. (2)
For each $\alpha_{i}$, there is a simple closed smooth curve
$\tilde{\alpha}_{i}$ so that:
1. (a)
$\tilde{\alpha}_{i}$ bounds a punctured disc which contains both $\alpha_{i}$
and its cusp.
2. (b)
For any curve $\gamma$ which goes from $\alpha_{i}$ to $\tilde{\alpha}_{i}$,
$\gamma$ has length $\geq 10C\sqrt{\textrm{Area}(M)}$.
We can find $\epsilon>0$ so that the first inequality is satisfied because of
the relationship between the areas of $M$ and $N$ described above. For the
second, we use Proposition 2, since $M$ is hyperbolic. Let
$\gamma_{1},\dots,\gamma_{k}$ be the pants decomposition of $N$ that we obtain
above.
No $\gamma_{i}$ can intersect one of the regions $U$ that we modified from
$M$. Indeed, since $\gamma_{i}$ is noncontractible, to enter such a region it
would have to cross from $\tilde{\alpha}_{j}$ to $\alpha_{j}$ for some $j$,
and so its diameter would be at least
$10C\sqrt{\textrm{Area}(M)}\geq 5C(2\sqrt{\textrm{Area}(M)})\geq
5C\sqrt{\textrm{Area}(N)},$
which is a contradiction, as each $\gamma_{i}$ has diameter at most
$2C\sqrt{\textrm{Area}(N)}$.
which is a contradiction. Thus, $\\{\gamma_{i}\\}$ lie entirely in the portion
of $N$ which agrees with $M$, and so we can consider them as curves in $M$. As
such, the diameter of each is $\lesssim\sqrt{\textrm{Area}(M)}$; from
Proposition 2, $\textrm{Area}(M)\approx g+n$, which yields the desired bound.
Furthermore, since they constitute a pants decomposition of $N$, when we
consider the cusps of $M$, we observe that they constitute a pseudo pants
decomposition of $M$.
## 4\. Geodesic pseudo pants decompositions and the curve shortening process
In this section, we prove Proposition 5. The idea is to employ the disk curve
shortening process developed by Hass and Scott in [11]. This process is
defined for closed Riemannian surfaces; we will explain shortly how to work
around the issue of cusps. The idea is to start with a finite sequence of
piecewise smooth closed curves $\alpha_{1},\dots,\alpha_{n}$ on the closed
Riemannian surface $S$. We then choose a finite sequence of closed convex
discs $D_{1},\dots,D_{m}$ on the surface $S$ of radius at most
$\rho<\textrm{inj rad}(S)$ in general position (here $\textrm{inj rad}(S)$ is
the injectivity radius of $S$). Roughly speaking, the idea of Hass and Scott
is to move through the sequence of discs. For each disc $D_{i}$ in the
sequence, we replace each segment of each $\alpha_{j}$ which passes through
$D_{i}$ with the unique length minimizing geodesic between the same endpoints;
since we chose the radii of $D_{i}$ to be sufficiently small, these arcs are
unique and also lie in $D_{i}$.
After doing this for all discs, we obtain a new sequence of piecewise smooth
curves $\alpha_{1,2},\alpha_{2,2},\dots,\alpha_{n,2}$. Hass and Scott observed
the following:
1. (1)
The length of $\alpha_{i,2}$ is no larger than the length of $\alpha_{i}$.
2. (2)
If $\\{\alpha_{i}\\}$ are simple, then $\\{\alpha_{i,2}\\}$ are simple.
3. (3)
If $\\{\alpha_{i}\\}$ are all disjoint, then $\\{\alpha_{i,2}\\}$ are
disjoint.
4. (4)
$\alpha_{i}$ is homotopic to $\alpha_{i,2}$.
The procedure involves repeating this operation with $\\{\alpha_{i,2}\\}$ to
form $\\{\alpha_{i,3}\\}$. Repeating this procedure, we obtain $n$ sequences
of curves $\\{\alpha_{i,j}\\}$ for $i\in\\{1,\dots,n\\}$ and
$j\in\\{1,2,\dots\\}$. Hass and Scott also proved that, for a fixed
$i\in\\{1,\dots,n\\}$, $\\{\alpha_{i,j}\\}$ converges to a closed geodesic.
Note that in their procedure there is an upper bound on the number of smooth
segments of all curves involved (depending on the number of such segments in
the original curves and the number of discs); hence we can take this
convergence to be $C^{\infty}$ at every point which is smooth in the sequence.
If $\alpha_{i}$ is simple and noncontractible, then all $\\{\alpha_{i,j}\\}$
are simple and noncontractible, and the resulting closed geodesic is simple.
Let $\\{\alpha_{i}^{*}\\}$ be the resulting closed geodesics. We also have
that, as a result of the uniqueness of geodesics, if all original curves are
homotopically distinct and noncontractible, then $\\{\alpha_{i}^{*}\\}$ are
simple closed curves which are disjoint and noncontractible.
In [11], Hass and Scott generalize this curve shortening procedure to work
on families of curves. For this article, however, we will just need the
results described above. We are now in the situation that we have a hyperbolic
surface $M$ with genus $g$ and $n$ cusps, and a pseudo pants decomposition
$\gamma_{1},\dots,\gamma_{k}$ so that each curve has ambient diameter $\leq
C\sqrt{g+n}$.
We begin by choosing open punctured discs $X_{1},\dots,X_{n}$ around the $n$
cusps so that, if $\alpha$ is a closed curve which is noncontractible relative
to the cusps, and if it intersects $X_{i}$, then $\alpha$ has diameter at least
$100C\sqrt{g+n}$. $M\setminus\\{\bigcup X_{i}\\}$ is compact; we can find a
finite number of closed convex discs $D_{1},\dots,D_{m}$ all of radius
$\leq\frac{\eta}{100}$ on $M$ which are in general position, and which cover
$M\setminus\\{\bigcup X_{i}\\}$. We cannot choose $\eta$ to be the injectivity
radius of $M$, since it is $0$ if there is at least one cusp. Instead, we
choose $\eta>0$ to be
$\inf_{x\in M\setminus\\{\bigcup X_{i}\\}}\textrm{inj rad}_{x}(M),$
where $\textrm{inj rad}_{x}(M)$ is the injectivity radius of $M$ at the point
$x$. This infimum is positive because $M\setminus\\{\bigcup X_{i}\\}$ is
compact; this is also why we can cover it with _finitely_ many convex closed
discs with this radius bound. Since the diameter of each $\gamma_{i}$ is $\leq
C\sqrt{g+n}$, and since each is noncontractible relative to the cusps (since
they form a pseudo pants decomposition), $\gamma_{i}$ cannot intersect any
$X_{j}$, and so lies entirely in the union of $D_{1},\dots,D_{m}$.
Fix a disc $D_{i}$ and a segment $\beta$ of $\gamma_{i}$ which starts and ends
on $\partial D_{i}$, and which lies entirely in $D_{i}$. If we replace $\beta$
with $\tilde{\beta}$, the unique shortest geodesic joining the endpoints of
$\beta$, then the resulting curve $\gamma_{i}^{\prime}$ is no longer than
$\gamma_{i}$ and is homotopic to $\gamma_{i}$, and so still lies in the union
of all the discs. In addition, we have the following lemma, which implies that
$\gamma_{i}^{\prime}$ has diameter
$\lesssim\sqrt{g+n}$.
###### Lemma 9.
Let $M$ be a hyperbolic surface, and let $\gamma$ be a closed curve in $M$.
Suppose that $\gamma$ has diameter $\leq D$, and suppose that $x$ and $y$ are
points on $\gamma$ which are of distance $\leq\frac{\eta}{100}$ from each
other, where $\eta$ is as chosen above. If we delete a segment of $\gamma$
from $x$ to $y$ and replace it with the unique length minimizing geodesic from
$x$ to $y$, then the resulting curve $\tilde{\gamma}$ also has diameter $\leq
D$.
###### Proof.
If $a$ and $b$ are on the geodesic segment added, then they are closer than
$x$ and $y$, and so the distance between $a$ and $b$ is $\leq D$. If $a$ and
$b$ are both not on the geodesic segment added, then the distance between them
is $\leq D$. Lastly, if $a$ is not on the segment and $b$ is on the segment,
then the distance from $a$ to $x$ is $\leq D$, and the distance from $a$ to
$y$ is $\leq D$. If $a$ is on the segment and $b$ is not on the segment, then
the argument works in the same manner as below, just with the labels “$a$” and
“$b$” reversed.
Since the surface is hyperbolic, there is a covering map
$F:\mathbb{H}\rightarrow M$ which is a local isometry. We can use $F$ to lift
$a$ to $a^{\prime}\in\mathbb{H}$, then we can consider the ball $B$ of radius
$D$ in $\mathbb{H}$ around $a^{\prime}$. We can lift $x$ to a point
$x^{\prime}$ in $B$, and we can also lift the point $y$ to a point
$y^{\prime}$ in $B$. The geodesic segment between $x$ and $y$ that we add
lifts to the unique geodesic joining $x^{\prime}$ and $y^{\prime}$; this
follows from the fact that the segment has length less than
$\frac{\eta}{100}$. Since balls in $\mathbb{H}$ are geodesically convex, that
is, if two points are in the ball, then the unique geodesic joining them is
also in the ball, this geodesic arc is contained in $B$. Thus, its image under
$F$, which corresponds to the curve which $b$ is on, is also within a distance
$D$ of $a$. This completes the proof. ∎
To summarize, we move through the list of discs, then for each disc, we move
through the list of curves, then for each curve, we move through the list of
arcs that pass through the disc, and then for each arc we perform the relevant
replacement. We can apply the curve shortening process of Hass and Scott to
$\gamma_{1},\dots,\gamma_{k}$ with respect to discs $D_{1},\dots,D_{m}$; this
forms curves $\\{\gamma_{i,2}\\}$. By continuing to apply their procedure
(with $D_{1},\dots,D_{m}$ fixed at the outset), we obtain curves
$\\{\gamma_{i,j}\\}$ with $i\in\\{1,\dots,k\\}$ and $j\in\\{1,2,\dots\\}$.
Furthermore, since the original curves were simple and disjoint, all curves
are simple and disjoint, and converge to closed geodesics
$\gamma_{1}^{*},\dots,\gamma_{k}^{*}$.
Since all $\gamma_{1}^{*},\dots,\gamma_{k}^{*}$ are in different homotopy
classes (since they form a pseudo pants decomposition), they are all disjoint
(this also uses the uniqueness of geodesics). Hence, they also form a pseudo
pants decomposition, are simple closed curves, and have the desired diameter
bound. This completes the proof.
## References
* [1] Florent Balacheff and Hugo Parlier, _Bers’ constants for punctured spheres and hyperelliptic surfaces_ , Journal of Topology and Analysis 4 (2012), no. 3, 271–296.
* [2] Florent Balacheff and Stéphane Sabourau, _Diastolic and isoperimetric inequalities on surfaces_ , Annales scientifiques de l’École Normale Supérieure 43 (2010), 579–605.
* [3] Lipman Bers, _Spaces of degenerating Riemann surfaces_ , no. 79, Princeton University Press, 1974.
* [4] Lipman Bers, _An inequality for Riemann surfaces_ , Differential geometry and complex analysis (1985), 87–93.
* [5] Robert Brooks, _The spectral geometry of a tower of coverings_ , J. Differential Geometry 23 (1986), 97–107.
* [6] Peter Buser, _Riemannsche Flächen und Längenspektrum vom trigonometrischen Standpunkt_ , Ph.D. thesis, University of Bonn, 1981.
* [7] Peter Buser, _Geometry and spectra of compact Riemann surfaces_ , vol. 106, Birkhäuser Boston Inc., 1992.
* [8] Peter Buser and Mika Seppälä, _Symmetric pants decompositions of Riemann surfaces_ , Duke Math. J. 1 (1992), no. 67, 39–55.
* [9] Mikhail Gromov, _Filling Riemannian manifolds_ , J. Differential Geometry 18 (1983), 1 – 147.
* [10] Larry Guth, _Volumes of balls in Riemannian manifolds and Uryson width_ , Journal of Topology and Analysis 9 (2017), no. 2, 195–219.
* [11] Joel Hass and Peter Scott, _Shortening curves on surfaces_ , Topology 33 (1994), no. 1.
* [12] Yevgeny Liokumovich, Boris Lishak, Alexander Nabutovsky, and Regina Rotman, _Filling metric spaces_ , Duke Math. J. 171 (2022), 595–632.
* [13] Alexander Nabutovsky, Regina Rotman, and Stéphane Sabourau, _Sweepouts of closed Riemannian manifolds_ , arXiv:2007.14954 (2020).
* [14] Panos Papasoglu, _Uryson width and volume_ , Geometric and Functional Analysis 30 (2020), 574–587.
# On the existence of solutions of the Dirichlet problem for $p$-Laplacian on
Riemannian manifolds
S. M. Bakiev Department of Differential Equations, Faculty of Mechanics and
Mathematics, Moscow Lomonosov State University, Vorobyovy Gory, Moscow, 119992
Russia<EMAIL_ADDRESS>and A. A. Kon’kov Department of Differential
Equations, Faculty of Mechanics and Mathematics, Moscow Lomonosov State
University, Vorobyovy Gory, Moscow, 119992 Russia<EMAIL_ADDRESS>
###### Abstract.
We obtain a criterion for the existence of solutions of the problem
$\Delta_{p}u=0\quad\mbox{in }M\setminus\partial
M,\quad\left.u\right|_{\partial M}=h,$
with the bounded Dirichlet integral, where $M$ is an oriented complete
Riemannian manifold with boundary and $h\in W_{p,loc}^{1}(M)$, $p>1$.
## 1\. Introduction
Let $M$ be an oriented complete Riemannian manifold with boundary. We consider
solutions of the problem
$\Delta_{p}u=0\quad\mbox{in }M\setminus\partial M,$ (1.1)
$\left.u\right|_{\partial M}=h,$ (1.2)
where $\Delta_{p}u=\nabla_{i}(g^{ij}|\nabla u|^{p-2}\nabla_{j}u)$ is the
$p$-Laplacian and $h\in W_{p,loc}^{1}(M)$, $p>1$.
As a condition at infinity, we assume that the Dirichlet integral is bounded,
i.e.
$\int_{M}|\nabla u|^{p}\,dV<\infty.$ (1.3)
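As a model example of condition (1.3): for $p=2$, $n\geq 3$, and $M=\\{x\in\mathbb{R}^{n}:|x|\geq 1\\}$ with the Euclidean metric, the harmonic function $u(x)=|x|^{2-n}$ satisfies
$\int_{M}|\nabla u|^{2}\,dV=(n-2)^{2}\,\omega_{n-1}\int_{1}^{\infty}r^{2(1-n)}r^{n-1}\,dr=(n-2)\,\omega_{n-1}<\infty,$
where $\omega_{n-1}$ is the area of the unit sphere $S^{n-1}$.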
As is customary, by $g_{ij}$ we denote the metric tensor consistent with the
Riemannian connection and by $g^{ij}$ we denote the tensor dual to the metric
one. In so doing, $|\nabla u|=(g^{ij}\nabla_{i}u\nabla_{j}u)^{1/2}$. As in
[10], by $W_{p,loc}^{1}(\omega)$, where $\omega\subset M$ is an open set, we
mean the space of measurable functions belonging to
$W_{p}^{1}(\omega^{\prime}\cap\omega)$ for any open set
$\omega^{\prime}\subset M$ with compact closure. The space $L_{p,loc}(\omega)$
is defined analogously.
A function $u\in W_{p,loc}^{1}(M)$ is called a solution of (1.1) if
$\int_{M}g^{ij}|\nabla u|^{p-2}\nabla_{j}u\nabla_{i}\varphi\,dV=0$ (1.4)
for all $\varphi\in C_{0}^{\infty}(M\setminus\partial M)$, where $dV$ is the
volume element of the manifold $M$. In its turn, condition (1.2) means that
$(u-h)\psi\in{\stackrel{{\scriptstyle\rm\scriptscriptstyle
o}}{{W}}\\!\\!{}_{p}^{1}(M\setminus\partial M)}$ for all $\psi\in
C_{0}^{\infty}(M)$.
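For smooth $u$, definition (1.4) recovers the pointwise equation (1.1): integrating by parts (the divergence theorem produces no boundary term since $\varphi$ has compact support in $M\setminus\partial M$),
$0=\int_{M}g^{ij}|\nabla u|^{p-2}\nabla_{j}u\,\nabla_{i}\varphi\,dV=-\int_{M}\varphi\,\nabla_{i}\bigl(g^{ij}|\nabla u|^{p-2}\nabla_{j}u\bigr)\,dV=-\int_{M}\varphi\,\Delta_{p}u\,dV$
for all $\varphi\in C_{0}^{\infty}(M\setminus\partial M)$, and hence $\Delta_{p}u=0$ pointwise in $M\setminus\partial M$.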
Boundary value problems for differential equations in unbounded domains and on
smooth manifolds have been studied by a number of authors [1]–[8], [12]. In
the case where $M$ is a domain in ${\mathbb{R}}^{n}$ bounded by a surface of
revolution, a criterion for the existence of solutions of (1.1)–(1.3) was
obtained in [12]. However, the method used in [12] cannot be generalized to
the case of an arbitrary Riemannian manifold. Theorem 2.1 proved in our
article does not have this shortcoming.
Let $K\subset M$ be a compact set. We denote by $C_{0}^{\infty}(M,K)$ the set
of functions from $C^{\infty}(M)$ that are equal to zero in a neighborhood of
$K$. In its turn, by ${\stackrel{{\scriptstyle\rm\scriptscriptstyle
o}}{{W}}\\!\\!{}_{p}^{1}(\omega,K)}$, where $\omega$ is an open subset of $M$,
we denote the closure of $C_{0}^{\infty}(M,K)\cap W_{p}^{1}(\omega)$ in
$W_{p}^{1}(\omega)$. By definition, a function $\varphi\in W_{p,loc}^{1}(M)$
satisfies the condition
$\left.\varphi\right|_{K}=\psi,$ (1.5)
where $\psi\in W_{p,loc}^{1}(M)$, if
$\varphi-\psi\in{\stackrel{{\scriptstyle\rm\scriptscriptstyle
o}}{{W}}\\!\\!{}_{p}^{1}(\omega,K)}$ for some open set $\omega$ containing
$K$.
###### Proposition 1.1.
A function $u\in W_{p,loc}^{1}(M)$ satisfies (1.2) if and only if
$\left.u\right|_{K}=h$ (1.6)
for any compact set $K\subset\partial M$.
###### Proof.
First, let (1.2) hold and let $K$ be a compact subset of $\partial M$. Take an
open pre-compact set $\omega$ containing $K$ and a function $\psi\in
C_{0}^{\infty}(M)$ such that
$\left.\psi\right|_{\omega}=1.$
By (1.2), the function $(u-h)\psi$ belongs to the closure of
$C_{0}^{\infty}(M\setminus\partial M)$ in the space
$W_{p}^{1}(M\setminus\partial M)$. Assuming that functions from
${C_{0}^{\infty}(M\setminus\partial M)}$ are extended by zero to $\partial M$,
we obtain $u-h\in{\stackrel{{\scriptstyle\rm\scriptscriptstyle
o}}{{W}}\\!\\!{}_{p}^{1}(\omega,K)}.$
Now, assume that condition (1.6) is valid and let $\psi\in C_{0}^{\infty}(M)$.
We consider the compact set $K=\operatorname{supp}\psi\cap\partial M$. In view
of (1.6), there exists an open set $\omega$ such that $K\subset\omega$ and,
moreover, $u-h\in{\stackrel{{\scriptstyle\rm\scriptscriptstyle
o}}{{W}}\\!\\!{}_{p}^{1}(\omega,K)}$ or, in other words,
$\|u-h-\varphi_{i}\|_{W_{p}^{1}(\omega)}\to 0\quad\mbox{as }i\to\infty$ (1.7)
for some sequence of functions $\varphi_{i}\in C_{0}^{\infty}(M,K)\cap
W_{p}^{1}(\omega)$, $i=1,2,\ldots$. We denote
$\tilde{K}=\operatorname{supp}\psi\setminus\omega$. Since $\tilde{K}$ is a
compact set belonging to $M\setminus\partial M$, there is a function $\tau\in
C_{0}^{\infty}(M\setminus\partial M)$ equal to one in a neighborhood of
$\tilde{K}$. It is easy to see that $(1-\tau)\psi\varphi_{i}\in
C_{0}^{\infty}(\omega\setminus\partial M),$ $i=1,2,\ldots$. At the same time,
by (1.7), we have
$\|(1-\tau)\psi(u-h-\varphi_{i})\|_{W_{p}^{1}(M)}=\|(1-\tau)\psi(u-h-\varphi_{i})\|_{W_{p}^{1}(\omega)}\to
0\quad\mbox{as }i\to\infty;$
therefore, one can assert that
$(1-\tau)\psi(u-h)\in{\stackrel{{\scriptstyle\rm\scriptscriptstyle
o}}{{W}}\\!\\!{}_{p}^{1}(M\setminus\partial M)}.$ It is also obvious that
$\tau\psi(u-h)\in{\stackrel{{\scriptstyle\rm\scriptscriptstyle
o}}{{W}}\\!\\!{}_{p}^{1}(M\setminus\partial M)}.$ Thus, we obtain
$\psi(u-h)=(1-\tau)\psi(u-h)+\tau\psi(u-h)\in{\stackrel{{\scriptstyle\rm\scriptscriptstyle
o}}{{W}}\\!\\!{}_{p}^{1}(M\setminus\partial M)}.$ ∎
Let $\Omega$ be an open subset of $M$. The capacity of a compact set $K\subset
M$ associated with a function $\psi\in W_{p,loc}^{1}(M)$ is defined as
$\operatorname{cap}_{\psi}(K,\Omega)=\inf_{\varphi}\int_{\Omega}|\nabla\varphi|^{p}dV,$
where the infimum is taken over all functions
$\varphi\in{\stackrel{{\scriptstyle\rm\scriptscriptstyle
o}}{{W}}\\!\\!{}_{p}^{1}(\Omega)}$ for which (1.5) is valid. In so doing, we
assume that the functions from ${\stackrel{{\scriptstyle\rm\scriptscriptstyle
o}}{{W}}\\!\\!{}_{p}^{1}(\Omega)}$ are extended by zero beyond $\Omega$. For
an arbitrary closed set $E\subset M$, we put
$\operatorname{cap}_{\psi}(E,\Omega)=\sup_{K}\operatorname{cap}_{\psi}(K,\Omega),$
where the supremum is taken over all compact sets $K\subset E$. If $\Omega=M$,
we write $\operatorname{cap}_{\psi}(K)$ instead of
$\operatorname{cap}_{\psi}(K,M)$. In the case of $\psi=1$ and $p=2$, the
capacity $\operatorname{cap}_{\psi}(K)$ coincides with the well-known Wiener
capacity [9].
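For orientation, we record a classical example (included only as an
illustration and not used in the sequel; the constant depends on the chosen
normalization): for $M={\mathbb{R}}^{n}$ with $n\geq 3$, $p=2$, $\psi=1$, and
$K=\overline{B}_{r}$ a closed ball of radius $r$, one has
$\operatorname{cap}_{1}(\overline{B}_{r})=(n-2)\,\sigma_{n-1}\,r^{n-2},$
where $\sigma_{n-1}$ is the area of the unit sphere in ${\mathbb{R}}^{n}$,
with the extremal function $\varphi(x)=\min(1,(r/|x|)^{n-2})$.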
It is not difficult to verify that the capacity introduced above has the
following natural properties.
1. (a)
Let $K_{1}\subset K_{2}$ and $\Omega_{2}\subset\Omega_{1}$, then
$\operatorname{cap}_{\psi}(K_{1},\Omega_{1})\leq\operatorname{cap}_{\psi}(K_{2},\Omega_{2}).$
2. (b)
Suppose that $\lambda$ is a real number, then
$\operatorname{cap}_{\lambda\psi}(K,\Omega)=|\lambda|^{p}\operatorname{cap}_{\psi}(K,\Omega).$
3. (c)
Let $\psi_{1},\psi_{2}\in W_{p,loc}^{1}(M)$, then
$\operatorname{cap}_{\psi_{1}+\psi_{2}}^{1/p}(K,\Omega)\leq\operatorname{cap}_{\psi_{1}}^{1/p}(K,\Omega)+\operatorname{cap}_{\psi_{2}}^{1/p}(K,\Omega);$
this follows from Minkowski's inequality in $L_{p}(\Omega)$ applied to the sums $\varphi_{1}+\varphi_{2}$ of near-optimal test functions for $\psi_{1}$ and $\psi_{2}$.
We say that $u\in W_{p,loc}^{1}(M)$ is a solution of (1.1) under the condition
$\left.\frac{\partial u}{\partial\nu}\right|_{\partial M}=0$ (1.8)
if the integral identity (1.4) holds for all $\varphi\in C_{0}^{\infty}(M)$.
The set of solutions of problem (1.1), (1.8) with bounded Dirichlet integral
(1.3) is denoted by $\mathfrak{H}$.
## 2\. Main result
###### Theorem 2.1.
Problem (1.1)–(1.3) has a solution if and only if
$\operatorname{cap}_{h-w}(\partial M)<\infty$ (2.1)
for some $w\in\mathfrak{H}$.
The proof of Theorem 2.1 is based on the following two lemmas known as
Poincaré’s inequalities.
###### Lemma 2.1.
Let $G\subset M$ be a pre-compact Lipschitz domain and $\omega$ be a subset of
$G$ of non-zero measure. Then
$\int_{G}|u|^{p}dV\leq C\left(\int_{G}|\nabla
u|^{p}dV+\left|\int_{\omega}u\,dV\right|^{p}\right)$
for all $u\in W_{p}^{1}(G)$, where the constant $C>0$ does not depend on $u$.
###### Lemma 2.2.
Let $\omega\subset M$ be a pre-compact Lipschitz domain. Then
$\int_{\omega}|\varphi-\alpha|^{p}\,dV\leq
C\int_{\omega}|\nabla\varphi|^{p}\,dV,$
for all $\varphi\in W_{p}^{1}(\omega)$, where
$\alpha=\frac{1}{\operatorname{mes}\omega}\int_{\omega}\varphi\,dV$
and the constant $C>0$ does not depend on $\varphi$.
###### Proof of Theorem 2.1.
We show that the existence of a solution of (1.1)–(1.3) implies the validity
of (2.1). Consider a sequence of functions $\varphi_{i}\in C_{0}^{\infty}(M)$,
$i=1,2,\ldots$, such that
$\int_{M}|\nabla(u-\varphi_{i})|^{p}dV\to\inf_{\varphi\in
C_{0}^{\infty}(M)}\int_{M}|\nabla(u-\varphi)|^{p}dV\quad\mbox{as }i\to\infty.$
Since the sequence $\nabla\varphi_{i}$, $i=1,2,\ldots$, is bounded in
$L_{p}(M)$, there is a subsequence $\nabla\varphi_{i_{j}}$, $j=1,2,\ldots$,
that converges weakly in $L_{p}(M)$ to some vector-function ${\mathbf{r}}\in
L_{p}(M)$. Let $R_{m}$ be the convex hull of the set
$\\{\varphi_{i_{j}}\\}_{j\geq m}$. By Mazur’s theorem, there exists a sequence
$r_{m}\in R_{m}$, $m=1,2,\ldots$, such that
$\|\nabla r_{m}-{\mathbf{r}}\|_{L_{p}(M)}\to 0\quad\mbox{as }m\to\infty.$
(2.2)
In view of the convexity of the functional
$\varphi\mapsto\int_{M}|\nabla(u-\varphi)|^{p}dV,\quad\varphi\in{\stackrel{{\scriptstyle\rm\scriptscriptstyle
o}}{{W}}\\!\\!{}_{p}^{1}(M)},$
we have
$\int_{M}|\nabla(u-r_{m})|^{p}dV\leq\sup_{j\geq
m}\int_{M}|\nabla(u-\varphi_{i_{j}})|^{p}dV;$
therefore,
$\int_{M}|\nabla(u-r_{m})|^{p}dV\to\inf_{\varphi\in
C_{0}^{\infty}(M)}\int_{M}|\nabla(u-\varphi)|^{p}dV\quad\mbox{as }m\to\infty.$
Let $\omega\subset M$ be a pre-compact Lipschitz domain. Denoting
$\alpha_{m}=\frac{1}{\operatorname{mes}\omega}\int_{\omega}r_{m}\,dV,$
we obtain in accordance with Lemma 2.2 that the sequence $r_{m}-\alpha_{m}$,
$m=1,2,\ldots$, is fundamental in $W_{p}^{1}(\omega)$. By Lemma 2.1, this
sequence is also fundamental in $W_{p}^{1}(G)$ for any pre-compact Lipschitz
domain $G\subset M$.
First, assume that the sequence $\alpha_{m}$, $m=1,2,\ldots$, is
bounded. Extracting from it a convergent subsequence $\alpha_{m_{j}}$,
$j=1,2,\ldots$, we have that the sequence of functions $r_{m_{j}}$,
$j=1,2,\ldots$, is fundamental in $W_{p}^{1}(G)$ for any pre-compact Lipschitz
domain $G\subset M$. Hence, there exists $v\in W_{p,loc}^{1}(M)$ such that
$\|r_{m_{j}}-v\|_{W_{p}^{1}(G)}\to 0\quad\mbox{as }j\to\infty$
for any pre-compact Lipschitz domain $G\subset M$. In view of (2.2), we have
$\nabla v={\mathbf{r}};$ therefore,
$\int_{M}|\nabla(u-v)|^{p}dV=\inf_{\varphi\in
C_{0}^{\infty}(M)}\int_{M}|\nabla(u-\varphi)|^{p}dV.$ (2.3)
Thus, by the variational principle, the function $w=u-v$ belongs to
$\mathfrak{H}$.
Let us show the validity of inequality (2.1). Let $K\subset\partial M$ be
some compact set. It is easy to see that
$\left.v\right|_{K}=h-w.$ (2.4)
Take a function $\tau\in C_{0}^{\infty}(M)$ equal to one in a neighborhood of
$K$. Putting $\psi_{j}=\tau v+(1-\tau)r_{m_{j}},$ $j=1,2,\ldots,$ we obtain a
sequence of functions from ${\stackrel{{\scriptstyle\rm\scriptscriptstyle
o}}{{W}}\\!\\!{}_{p}^{1}(M)}$ satisfying the condition
$\left.\psi_{j}\right|_{K}=h-w,\quad j=1,2,\ldots.$
In so doing, we obviously have
$\displaystyle\int_{M}|\nabla(v-\psi_{j})|^{p}dV=\int_{M}|\nabla((1-\tau)(v-r_{m_{j}}))|^{p}dV$
$\displaystyle\quad{}\leq
2^{p}\int_{\operatorname{supp}\tau}|\nabla\tau(v-r_{m_{j}})|^{p}dV+2^{p}\int_{M}|(1-\tau)\nabla(v-r_{m_{j}})|^{p}dV\to
0\;\mbox{as }j\to\infty,$
whence it follows immediately that
$\operatorname{cap}_{h-w}(K)\leq\lim_{j\to\infty}\int_{M}|\nabla\psi_{j}|^{p}dV=\int_{M}|\nabla
v|^{p}dV.$ (2.5)
In view of the arbitrariness of the compact set $K\subset\partial M$, the
last formula implies the estimate
$\operatorname{cap}_{h-w}(\partial M)\leq\int_{M}|\nabla v|^{p}dV<\infty.$
(2.6)
Now, assume that the sequence $\alpha_{m}$, $m=1,2,\ldots$, is not bounded.
Without loss of generality, we can also assume that $|\alpha_{m}|\to\infty$ as
$m\to\infty$. If this is not the case, then we replace $\alpha_{m}$,
$m=1,2,\ldots$, with a suitable subsequence. Applying Lemma 2.2, we arrive at
the inequality
$\int_{\omega}|r_{m}-\alpha_{m}|^{p}\,dV\leq C\int_{\omega}|\nabla
r_{m}|^{p}\,dV$
for all $m=1,2,\ldots$, where the constant $C>0$ does not depend on $m$,
whence we have
$\int_{\omega}\left|\frac{r_{m}}{\alpha_{m}}-1\right|^{p}\,dV\leq\frac{C}{|\alpha_{m}|^{p}}\int_{\omega}|\nabla
r_{m}|^{p}dV\to 0\quad\mbox{as }m\to\infty.$
For any positive integer $m$ we take a positive integer $s_{m}\geq m$ such
that
$\int_{\omega}\left|\alpha_{m}-\frac{\alpha_{m}r_{s_{m}}}{\alpha_{s_{m}}}\right|^{p}dV=|\alpha_{m}|^{p}\int_{\omega}\left|\frac{r_{s_{m}}}{\alpha_{s_{m}}}-1\right|^{p}dV<\frac{1}{2^{m}}$
(2.7)
and
$\left|\frac{\alpha_{m}}{\alpha_{s_{m}}}\right|<\frac{1}{2^{m}}.$ (2.8)
Putting further
$v_{m}=r_{m}-\frac{\alpha_{m}r_{s_{m}}}{\alpha_{s_{m}}},\quad m=1,2,\ldots,$
we obtain
$\displaystyle\int_{\omega}|v_{m}-v_{l}|^{p}dV\leq{}$ $\displaystyle
2^{p}\int_{\omega}|r_{m}-r_{l}-\alpha_{m}+\alpha_{l}|^{p}dV$
$\displaystyle{}+2^{p}\int_{\omega}\left|\alpha_{m}-\frac{\alpha_{m}r_{s_{m}}}{\alpha_{s_{m}}}-\alpha_{l}+\frac{\alpha_{l}r_{s_{l}}}{\alpha_{s_{l}}}\right|^{p}dV,\quad
m,l=1,2,\ldots.$
By Lemma 2.2, the estimate
$\int_{\omega}|r_{m}-r_{l}-\alpha_{m}+\alpha_{l}|^{p}dV\leq
C\int_{\omega}|\nabla(r_{m}-r_{l})|^{p}dV,\quad m,l=1,2,\ldots,$
is valid, where the constant $C>0$ does not depend on $m$ and $l$. At the same
time, condition (2.7) allows us to assert that
$\displaystyle\int_{\omega}\left|\alpha_{m}-\frac{\alpha_{m}r_{s_{m}}}{\alpha_{s_{m}}}-\alpha_{l}+\frac{\alpha_{l}r_{s_{l}}}{\alpha_{s_{l}}}\right|^{p}dV\leq
2^{p}\int_{\omega}\left|\alpha_{m}-\frac{\alpha_{m}r_{s_{m}}}{\alpha_{s_{m}}}\right|^{p}dV$
$\displaystyle\qquad{}+2^{p}\int_{\omega}\left|\alpha_{l}-\frac{\alpha_{l}r_{s_{l}}}{\alpha_{s_{l}}}\right|^{p}dV<\frac{2^{p}}{2^{m}}+\frac{2^{p}}{2^{l}},\quad
m,l=1,2,\ldots.$
Hence, the sequence $v_{m}$, $m=1,2,\ldots$, is fundamental in
$L_{p}(\omega)$. According to Lemma 2.1, this sequence is also fundamental in
$W_{p}^{1}(G)$ for any pre-compact Lipschitz domain $G\subset M$. Let us
denote by $v$ the limit of this sequence. In view of (2.2) and (2.8), we have
$\|\nabla v_{m}-{\mathbf{r}}\|_{L_{p}(M)}\to 0\quad\mbox{as }m\to\infty;$
therefore, $v$ satisfies relation (2.3) and in accordance with the variational
principle the function $w=u-v$ belongs to $\mathfrak{H}$. In so doing, for any
compact set $K\subset\partial M$ condition (2.4) is obviously valid. Thus,
putting $\psi_{j}=\tau v+(1-\tau)v_{j},$ $j=1,2,\ldots,$ where $\tau\in
C_{0}^{\infty}(M)$ is some function equal to one in a neighborhood of $K$, we
obtain
$\displaystyle\int_{M}|\nabla(v-\psi_{j})|^{p}dV=\int_{M}|\nabla((1-\tau)(v-v_{j}))|^{p}dV$
$\displaystyle\quad{}\leq
2^{p}\int_{\operatorname{supp}\tau}|\nabla\tau(v-v_{j})|^{p}dV+2^{p}\int_{M}|(1-\tau)\nabla(v-v_{j})|^{p}dV\to
0\quad\mbox{as }j\to\infty,$
whence we again arrive at relation (2.5) from which (2.6) follows.
It remains to show that condition (2.1) implies the existence of a solution of
problem (1.1)–(1.3). Let (2.1) be valid for some $w\in\mathfrak{H}$. We take
pre-compact Lipschitz domains $\Omega_{i}\subset\Omega_{i+1}$, $i=1,2,\ldots$,
whose union coincides with the entire manifold $M$. Consider the functions
$\varphi_{i}\in{\stackrel{{\scriptstyle\rm\scriptscriptstyle
o}}{{W}}\\!\\!{}_{p}^{1}(M)}$ such that
$\left.\varphi_{i}\right|_{\overline{\Omega}_{i}\cap\partial
M}=h-w\quad\mbox{and}\quad\int_{M}|\nabla\varphi_{i}|^{p}dV<\operatorname{cap}_{h-w}(\overline{\Omega}_{i}\cap\partial
M)+\frac{1}{2^{i}},\quad i=1,2,\ldots.$
In view of (2.1), the sequence $\nabla\varphi_{i}$, $i=1,2,\ldots$, is bounded
in the space $L_{p}(M)$. Hence, there exists a subsequence
$\nabla\varphi_{i_{j}}$, $j=1,2,\ldots$, of this sequence that weakly
converges in $L_{p}(M)$ to some vector-function ${\mathbf{r}}\in L_{p}(M)$. As
above, we denote by $R_{m}$ the convex hull of the set
$\\{\varphi_{i_{j}}\\}_{j\geq m}$. By Mazur’s theorem, there exists a sequence
$r_{m}\in R_{m}$, $m=1,2,\ldots$, such that (2.2) holds. Since the functional
$\varphi\mapsto\int_{M}|\nabla\varphi|^{p}dV,\quad\varphi\in{\stackrel{{\scriptstyle\rm\scriptscriptstyle
o}}{{W}}\\!\\!{}_{p}^{1}(M)},$
is convex, we obtain
$\int_{M}|\nabla r_{m}|^{p}dV<\operatorname{cap}_{h-w}(\partial
M)+\frac{1}{2^{m}},\quad m=1,2,\ldots.$ (2.9)
Also, it can be seen that
$\left.r_{m}\right|_{\overline{\Omega}_{m}\cap\partial M}=h-w,\quad
m=1,2,\ldots.$ (2.10)
One can assume without loss of generality that $\Omega_{1}\cap\partial
M\neq\emptyset$. In this case, we have
$\int_{\Omega_{1}}|\varphi|^{p}dV\leq C\int_{\Omega_{1}}|\nabla\varphi|^{p}dV$
for all $\varphi\in{\stackrel{{\scriptstyle\rm\scriptscriptstyle
o}}{{W}}\\!\\!{}_{p}^{1}(\Omega_{1},\overline{\Omega}_{1}\cap\partial M)},$
where the constant $C>0$ does not depend on $\varphi$. In particular,
$\int_{\Omega_{1}}|r_{i}-r_{j}|^{p}dV\leq
C\int_{\Omega_{1}}|\nabla(r_{i}-r_{j})|^{p}dV$
for all $i,j=1,2,\ldots$, whence it follows that the sequence $r_{i}$,
$i=1,2,\ldots$, is fundamental in $L_{p}(\Omega_{1})$. Lemma 2.1 implies that
this sequence is also fundamental in $W_{p}^{1}(G)$ for any pre-compact
Lipschitz domain $G\subset M$. Let us denote by $u_{1}$ the limit of this
sequence. In view of (2.9) and (2.10), we obtain
$\int_{M}|\nabla u_{1}|^{p}dV<\operatorname{cap}_{h-w}(\partial M)$ (2.11)
and
$\left.u_{1}\right|_{\partial M}=h-w.$ (2.12)
Let us construct a solution of problem (1.1)–(1.3). This time we take a
sequence of functions $\varphi_{i}\in{C_{0}^{\infty}(M\setminus\partial M)},$
$i=1,2,\ldots$, such that
$\int_{M}|\nabla(u_{1}+w-\varphi_{i})|^{p}dV\to\inf_{\varphi\in
C_{0}^{\infty}(M\setminus\partial
M)}\int_{M}|\nabla(u_{1}+w-\varphi)|^{p}dV\quad\mbox{as }i\to\infty.$
By (2.11), the sequence $\nabla\varphi_{i}$, $i=1,2,\ldots$, is bounded in
$L_{p}(M)$. Thus, it has a subsequence $\nabla\varphi_{i_{j}}$,
$j=1,2,\ldots$, that weakly converges in $L_{p}(M)$ to some vector-function
${\mathbf{r}}\in L_{p}(M)$. According to Mazur’s theorem, there exists a
sequence $r_{m}\in R_{m}$, $m=1,2,\ldots$, satisfying relation (2.2). Since
$r_{m}\in C_{0}^{\infty}(M\setminus\partial M)$, $m=1,2,\ldots$, this sequence
is fundamental in $W_{p}^{1}(G)$ for any pre-compact domain $G\subset M$.
Denoting by $u_{0}$ the limit of this sequence, we have
$\left.u_{0}\right|_{\partial
M}=0\quad\mbox{and}\quad\int_{M}|\nabla(u_{1}+w-u_{0})|^{p}dV=\inf_{\varphi\in
C_{0}^{\infty}(M\setminus\partial M)}\int_{M}|\nabla(u_{1}+w-\varphi)|^{p}dV.$
To complete the proof, it remains to note that, in view of (2.12) and the
variational principle, the function $u=u_{1}+w-u_{0}$ is a solution of
(1.1)–(1.3). ∎
## References
* [1] V. V. Brovkin, A. A. Kon’kov, Existence of solutions to the second boundary-value problem for the $p$-Laplacian on Riemannian manifolds, Math. Notes 109:2 (2021) 171–183.
* [2] R. R. Gadyl’shin, G. A. Chechkin, A boundary value problem for the Laplacian with rapidly changing type of boundary conditions in a multi-dimensional domain, Siberian Math. J. 40:2 (1999) 229–244.
* [3] A. A. Grigor’yan, Dimension of spaces of harmonic functions, Math. Notes 48:5 (1990) 1114–1118.
* [4] A. A. Kon’kov, On the solution space of elliptic equations on Riemannian manifolds, Differential Equations 31:5 (1995) 745–752.
* [5] A. A. Kon’kov, On the dimension of the solution space of elliptic systems in unbounded domains, Sbornik Mathematics 1995, 80:2, 411–434.
* [6] S. A. Korolkov, A. G. Losev, Generalized harmonic functions of Riemannian manifolds with ends, Math. Z. 272:1–2 (2012) 459–472.
* [7] A. G. Losev, E. A. Mazepa, On solvability of the boundary value problems for harmonic function on noncompact Riemannian manifolds, Issues Anal. 8(26):3 (2019) 73–82.
* [8] L. D. Kudrjavcev, Solution of the first boundary value problem for self-adjoint elliptic equations in the case of an unbounded region, Izv. Akad. Nauk SSSR Ser. Mat. 31 (1967) 1179–1199 (Russian).
* [9] N. S. Landkof, Foundations of Modern Potential Theory, Springer-Verlag, Berlin, 1972.
* [10] O. A. Ladyzhenskaya, N. N. Ural’tseva, Linear and quasilinear elliptic equations, Academic Press, New York-London, 1968.
* [11] V.G. Maz’ya, Sobolev spaces, Springer Ser. Soviet Math., Springer-Verlag, Berlin 1985.
* [12] V. G. Maz’ya, S. V. Poborchi, Existence and uniqueness of an energy solution to the Dirichlet problem for the Laplace equation in the exterior of a multi-dimensional paraboloid, J. Math. Sci. 172:4 (2011) 532–554.
# Efficient Neural Network Training via
Forward and Backward Propagation Sparsification
Xiao Zhou1, Weizhong Zhang∗1, Zonghao Chen2, Shizhe Diao1, Tong Zhang1
1 Hong Kong University of Science and Technology, 2 Tsinghua University
<EMAIL_ADDRESS><EMAIL_ADDRESS>
<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
∗Equal contribution. Jointly with Google Research.
###### Abstract
Sparse training is a natural idea to accelerate the training speed of deep
neural networks and reduce memory usage, especially since large modern
neural networks are significantly over-parameterized. However, most of the
existing methods cannot achieve this goal in practice because the chain rule
based gradient (w.r.t. structure parameters) estimators adopted by previous
methods require dense computation at least in the backward propagation step.
This paper solves this problem by proposing an efficient sparse training
method with completely sparse forward and backward passes. We first formulate
the training process as a continuous minimization problem under global
sparsity constraint. We then separate the optimization process into two steps,
corresponding to weight update and structure parameter update. For the former
step, we use the conventional chain rule, which can be made sparse by exploiting
the sparse structure. For the latter step, instead of using the chain rule
based gradient estimators as in existing methods, we propose a variance
reduced policy gradient estimator, which only requires two forward passes
without backward propagation, thus achieving completely sparse training. We
prove that the variance of our gradient estimator is bounded. Extensive
experimental results on real-world datasets demonstrate that compared to
previous methods, our algorithm is much more effective in accelerating the
training process, up to an order of magnitude faster.
## 1 Introduction
In the last decade, deep neural networks (DNNs) [38, 13, 41] have proved their
outstanding performance in various fields such as computer vision and natural
language processing. However, training such large-sized networks is still very
challenging, requiring huge computational power and storage. This hinders us
from exploring larger networks, which are likely to have better performance.
Moreover, it is a widely-recognized property that modern neural networks are
significantly overparameterized, which means that a fully trained network can
always be sparsified dramatically by network pruning techniques [12, 10, 27,
49, 22] into a small sub-network with negligible degradation in accuracy.
After pruning, the inference efficiency can be greatly improved. Therefore, a
natural question is can we exploit this sparsity to improve the training
efficiency?
The emerging technique of sparse network training [11], which obtains sparse
networks by training from scratch, is closely related to our question.
We can divide existing methods into two categories, i.e., parametric and non-
parametric, based on whether they explicitly parameterize network structures
with trainable variables (termed structure parameters). Empirical results [26,
37, 47, 25] demonstrate that the sparse networks they obtain have comparable
accuracy with those obtained from network pruning. However, most of them
narrowly aim at finding a sparse subnetwork instead of simultaneously
sparsifying the computation of training by exploiting the sparse structure. As
a consequence, it is hard for them to effectively accelerate the training
process in practice on general platforms, e.g., Tensorflow [1] and Pytorch
[33]. Detailed reasons are discussed below:
* •
Non-parametric methods find the sparse network by repeating a two-stage
procedure that alternates between weight optimization and pruning [11, 8], or
by adding a proper sparsity-inducing regularizer on the weights to the
objective [24, 44]. The two-stage methods prune the networks in weight space
and usually require retraining the obtained subnetwork from scratch every time
new weights are pruned, which makes the training process even more
time-consuming. Moreover, the computation of the regularized methods is dense
since the gradients of zero-valued weights/filters are still nonzero.
* •
All the parametric approaches estimate the gradients based on the chain rule. The
gradient w.r.t. the structure parameters can be nonzero even when the
corresponding channel/weight is pruned. Thus, to calculate the gradient via
backward propagation, the error has to be propagated through all the
neurons/channels. This means that the computation of backward propagation has
to be dense. Concrete analysis can be found in Section 3.
We notice that some existing methods [5, 30] can achieve training speedup by
careful implementation. For example, the dense to sparse algorithm [30]
removes some channels if the corresponding weights are quite small for a long
time. However, these methods always need to work with a large model at the
beginning epochs and consume huge memory and heavy computation in the early
stage. Therefore, even with such careful implementations, the speedups they
can achieve are still limited.
In this paper, we propose an efficient channel-level parametric sparse neural
network training method, which is comprised of completely sparse (see Remark
1) forward and backward propagation. We adopt channel-level sparsity since
such sparsity can be efficiently implemented on the current training platforms
to save the computational cost. In our method, we first parameterize the
network structure by associating each filter with a binary mask modeled as an
independent Bernoulli random variable, which can be continuously parameterized
by the probability. Next, inspired by the recent work [50], we globally
control the network size during the whole training process by controlling the
sum of the Bernoulli distribution parameters. Thus, we can formulate the
sparse network training problem into a constrained minimization problem on
both the weights and structure parameters (i.e., the probability). The main
novelty and contribution of this paper lies in our efficient training method
called completely sparse neural network training for solving the minimization
problem. Specifically, to fully exploit the sparse structure, we separate
training iteration into two parts, i.e., weight update and structure parameter
update. For weight update, the conventional backward propagation is used to
calculate the gradient, which can be sparsified completely because the
gradients of the filters with zero valued masks are also zero. For structure
parameter update, we develop a new variance reduced policy gradient estimator
(VR-PGE). Unlike the conventional chain rule based gradient estimators (e.g.,
straight through[3]), VR-PGE estimates the gradient via two forward
propagations, which is completely sparse because of the sparse subnetwork.
Finally, extensive empirical results demonstrate that our method can
significantly accelerate the training process of neural networks.
The main contributions of this paper can be summarized as follows:
* •
We develop an efficient sparse neural network training algorithm with the
following three appealing features:
* –
In our algorithm, the computation in both forward and backward propagations is
completely sparse, i.e., they do not need to go through any pruned channels,
making the computational complexity significantly lower than that in standard
training.
* –
During the whole training procedure, our algorithm works on small sub-networks
with the target sparsity instead of following a dense-to-sparse scheme.
* –
Our algorithm can be implemented easily on widely-used platforms, e.g.,
Pytorch and Tensorflow, to achieve practical speedup.
* •
We develop a variance reduced policy gradient estimator VR-PGE specifically
for sparse neural network training, and prove that its variance is bounded.
* •
Experimental results demonstrate that our methods can achieve significant
speed-up in training sparse neural networks. This implies that our method can
enable us to explore larger-sized neural networks in the future.
###### Remark 1.
We call a sparse training algorithm completely sparse if neither its forward
nor its backward propagation needs to go through any pruned channels. For such
algorithms, the computational cost of forward and backward propagation can be
roughly reduced to $\rho^{2}*100\%$, with $\rho$ being the ratio of remaining
unpruned channels (e.g., for $\rho=0.3$, to roughly $9\%$ of the dense cost).
## 2 Related Work
In this section, we briefly review the studies on neural network pruning,
which refers to the algorithms that prune DNNs after they are fully trained,
and the recent works on sparse neural network training.
### 2.1 Neural Network Pruning
Network Pruning [11] is a promising technique for reducing the model size and
inference time of DNNs. The key idea of existing methods [11, 10, 49, 22, 29,
15, 51, 43, 46, 35, 18] is to develop effective criteria (e.g., weight
magnitude) to identify and remove the massive unimportant weights contained in
networks after training. To achieve practical speedup on general devices, some
of them prune networks in a structured manner, i.e., remove the weights in a
certain group (e.g., filter) together, while others prune the weights
individually. It has been reported in the literature [10, 27, 49, 22] that
they can improve inference efficiency and reduce memory usage of DNNs by
orders of magnitudes with minor loss in accuracy, which enables the deployment
of DNNs on low-power devices.
We notice that although some pruning methods can be easily extended to train
sparse networks, they cannot accelerate, or could even slow down, the training
process. One reason is that they are developed for the scenario in which a
fully trained dense network is given, and they do not work well on the models
learned in the early stage of training. Another reason is that after each
pruning iteration, one has to fine-tune or even retrain the network for many
epochs to compensate for the resulting accuracy degradation.
### 2.2 Sparse Neural Network Training
The research on sparse neural network training has emerged in the recent
years. Different from the pruning methods, they can find sparse networks
without pre-training a dense one. Existing works can be divided into four
categories based on their granularity in pruning and whether the network
structures are explicitly parameterized. To the best of our knowledge, no
significant training speedups achieved in practice have been reported in the
literature. Table 1 summarizes some representative works.
Table 1: Some representative works in sparse neural network training.
granularity | non-parametric | parametric
---|---|---
weight-level | [6, 8, 51, 24, 20, 31, 42, 32, 5] | [45, 40, 28, 50, 20]
channel-level | [44, 14] | [21, 26, 47, 28, 18]
Weight-level non-parametric methods, e.g., [8, 11, 51, 31, 32], always adopt a
two-stage training procedure that alternates between weight optimization and
pruning. They differ in the schedules of tuning the prune ratio over training
and layers. [11] prunes the weights with the magnitude below a certain
threshold and [51, 8] gradually increase the pruning rate during training.
[32, 6] automatically reallocate parameters across layers during training via
controlling the global sparsity.
Channel-level non-parametric methods [14, 44] are proposed to achieve a
practical acceleration in inference. [44] is a structured sparse learning
method, which adds a group Lasso regularization into the objective function of
DNNs with each group comprised of the weights in a filter. [14] proposes a
soft filter pruning method. It zeroizes, instead of hard pruning, the filters
with small $\ell_{2}$ norm, after which these filters are treated the same as
the other filters in training. It is obvious that these methods cannot
achieve significant speedup in training since they need to calculate the full
gradient in backward propagation although the forward propagation could be
sparsified if implemented carefully.
Parametric methods multiply each weight/channel with a binary [50, 47, 40, 45]
or continuous [26, 28, 21, 20] mask, which can be either deterministic [26,
45] or stochastic [50, 47, 28, 40, 21, 20]. The mask is always parameterized
via a continuous trainable variable, i.e., structure parameter. The sparsity
is achieved by adding sparsity-inducing regularizers on the masks. The
novelties of these methods lie in estimating the gradients w.r.t. the structure
parameters in training. To be precise,
* •
Deterministic Binary Mask. [45] parameterizes its deterministic binary mask as
a simple step function and estimates the gradients via sigmoid straight
through estimator (STE) [3].
* •
Deterministic Continuous Mask. [26] uses the linear coefficients of batch
normalization (BN) as a continuous mask and enforces most of them to 0 by
penalizing the objective with $\ell_{1}$ norm of the coefficients. [20]
defines the mask as a soft threshold function with learnable threshold. These
methods can estimate the gradients via standard backward propagation.
* •
Stochastic Binary Mask. [47, 40] model the mask as a Bernoulli random variable
and the gradients w.r.t. the parameters of the Bernoulli distributions are
estimated via STE. [50] estimates the gradients via the Gumbel-Softmax trick [17],
which is more accurate than STE.
* •
Stochastic Continuous Mask. [28, 21] parameterize the mask as a continuous
function $g(c,\epsilon)$, which is differentiable w.r.t. $c$, and $\epsilon$
is a parameter-free noise, e.g., Gaussian noise $\mathcal{N}(0,1)$. In this
way, the gradients can be calculated via conventional backward propagation.
Therefore, we can see that all of these parametric methods estimate the
gradients of the structure parameters based on the chain rule in backward
propagation. As a result, the training iteration cannot be sparsified by
exploiting the sparse network structure. For the details, please refer to
Section 3.
Figure 1: A fully connected network. $\boldsymbol{w}$ is the weight matrix of
$1$st layer, $m_{i}$ is the mask of $i$-th neuron; $\hat{y}$,
$\hat{\boldsymbol{x}}_{in}$ and $\hat{x}_{i}$ are the output, input and
preactivation. The $3$rd neuron (in grey) is pruned.
## 3 Why Existing Parametric Methods Cannot Achieve Practical Speedup
In this section, we reformulate existing parametric channel-level methods into
a unified framework to explain why they cannot accelerate the training process
in practice.
Notice that a convolutional layer can be viewed as a generalized fully
connected layer, i.e., viewing the channels as neurons and the convolution of
two matrices as a generalized multiplication (see [9]). Hence, for simplicity,
we consider the fully connected network in Figure 1. Moreover, since the
channels in CNNs correspond to the neurons in fully connected networks, we
consider neuron-level instead of weight-level sparse training in our example.
As discussed in Section 2, existing methods parameterize the 4 kinds of mask
in the following ways:
$\displaystyle\textup{(i): }m_{i}=\phi(s_{i});\quad$
$\displaystyle\textup{(ii): }m_{i}=\psi(s_{i});\quad\textup{(iii):
}m_{i}=g(s_{i},\epsilon),\epsilon\sim\mathcal{N}(0,1);\quad\textup{(iv):
}m_{i}\sim\textup{Bern}(p_{i}(s)),$
where the function $\phi(s_{i})$ is binary, e.g., step function; $\psi(s_{i})$
is a continuous function; $g(s_{i},\epsilon)$ is differentiable w.r.t.
$s_{i}$. All existing methods estimate the gradient of the loss
$\ell(\hat{y},y)$ w.r.t. $s_{i}$ based on chain rule, which can be formulated
into a unified form below.
Specifically, we take the pruned neuron $x_{3}$ in Figure 1 as an example, the
gradient is calculated as
$\displaystyle\nabla_{s_{3}}\ell(\hat{y},y)=\underbrace{\frac{\partial{\ell(\hat{y},y)}}{\partial\hat{x}_{3}}}_{a}\underbrace{\left(\boldsymbol{w}^{\top}_{:,3}\boldsymbol{x}_{in}\right)}_{forward}\frac{\partial
m_{3}}{\partial s_{3}}.$ (1)
Existing parametric methods developed different ways to estimate
$\frac{\partial m_{3}}{\partial s_{3}}$. Actually, for cases (ii) and (iii),
the gradients are well-defined and thus can be calculated directly. STE is
used to estimate the gradient in case (i) [45]. For case (iv), [47, 40, 50]
adopt STE and Gumbel-Softmax.
In Eqn.(1), the term (a) is always nonzero especially when $\hat{x}_{3}$ is
followed by BN. Hence, we can see that even for the pruned neuron $x_{3}$, the
gradient $\nabla_{s_{3}}\ell(\hat{y},y)$ can be nonzero in all four
cases. This means the backward propagation has to go through all the
neurons/channels, leading to dense computation.
Finally, Eqn.(1) also shows that the forward propagation in existing methods
cannot be completely sparse. Although
$\boldsymbol{w}^{\top}_{:,3}\boldsymbol{x}_{in}$ can be computed sparsely,
since $\boldsymbol{x}_{in}$ could be the sparse output of a layer with some
channels pruned, we still need to compute it for each neuron via forward
propagation in order to evaluate the RHS of Eqn.(1). Thus, even if carefully
implemented, the computational cost of forward propagation can only be reduced
to $\rho*100\%$ instead of $\rho^{2}*100\%$ as in inference.
That is why we argue that existing methods need dense computation at least in
backward propagation, and hence cannot effectively speed up the training
process in practice.
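To make this concrete, the following minimal PyTorch sketch (our own toy
example, not taken from any of the cited methods; the shapes and values are
arbitrary) shows that with a straight-through binary mask, the gradient w.r.t.
the structure parameter of a pruned neuron is generically nonzero, so the
backward pass must visit pruned neurons:

```python
import torch

# A one-layer network with binary masks m = step(s), trained with the
# straight-through estimator (STE), i.e., dm/ds is taken to be 1.
torch.manual_seed(0)
x = torch.randn(4)                                     # input
w = torch.randn(3, 4)                                  # weights of 3 neurons
s = torch.tensor([1.0, 0.5, -2.0], requires_grad=True)

m = (s > 0).float() + s - s.detach()                   # STE: forward value is step(s)
loss = ((m * (w @ x)).sum() - 1.0) ** 2                # masked pre-activations
loss.backward()

print((s > 0).float())  # tensor([1., 1., 0.]) -> neuron 3 is pruned
print(s.grad)           # all entries generically nonzero, including s_3,
                        # matching Eqn.(1): a * (w^T_{:,3} x) * dm_3/ds_3
```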
###### Remark 2.
The authors of GrowEfficient [47] confirmed that they also calculate the
gradient of $q_{c}$ w.r.t. $\boldsymbol{s}_{c}$ in their Eqn.(6) via STE even
if $q_{c}=0$, and thus need dense backward propagation.
## 4 Channel-level Completely Sparse Neural Network Training
Below, we present our sparse neural network training framework and the
efficient training algorithm.
### 4.1 Framework of Channel-level Sparse Training
Given a convolutional network $f(x;\boldsymbol{w})$, let
$\\{\mathcal{F}_{c}:c\in\mathcal{C}\\}$ be the set of filters with
$\mathcal{C}$ being the set of indices of all the channels. To parameterize
the network structure, we associate each $\mathcal{F}_{c}$ with a binary mask
$m_{c}$, which is an independent Bernoulli random variable. Thus, each channel
is computed as
${\small\boldsymbol{x}_{\text{out,
c}}=\boldsymbol{x}_{in}*\left(\mathcal{F}_{c}m_{c}\right),}$
with $*$ being the convolution operation. Inspired by [50], to avoid problems
such as vanishing gradients, we parameterize $m_{c}$ directly by the
probability $s_{c}$, i.e., $m_{c}$ equals 1 and 0 with probabilities
$s_{c}$ and $1-s_{c}$, respectively. Thus, we can control the channel size by
the sum of $s_{c}$. Following [50], we can formulate channel-level sparse
network training into the following framework:
$\displaystyle\min_{\boldsymbol{w},\boldsymbol{s}}~{}\mathbb{E}_{p(\boldsymbol{m}|\boldsymbol{s})}~{}\mathcal{L}(\boldsymbol{w},\boldsymbol{m}):=\frac{1}{N}\sum_{i=1}^{N}\ell\left(f\left(\mathbf{x}_{i};\boldsymbol{w},\boldsymbol{m}\right),\mathbf{y}_{i}\right)$
(2) $\displaystyle
s.t.~{}\boldsymbol{w}\in\mathbb{R}^{n},\boldsymbol{s}\in\mathcal{S}:=\\{\boldsymbol{s}\in[0,1]^{|\mathcal{C}|}:\boldsymbol{1}^{\top}\boldsymbol{s}\leq
K\\},$
where $\left\\{\left(\mathbf{x}_{i},\mathbf{y}_{i}\right)\right\\}_{i=1}^{N}$
is the training dataset, $\boldsymbol{w}$ is the weights of the original
network, $f\left(\cdot;\cdot,\cdot\right)$ is the pruned network, and
$\ell(\cdot,\cdot)$ is the loss function, e.g, cross entropy loss.
$K=\rho|\mathcal{C}|$ controls the remaining channel size with $\rho$ being
the remaining ratio of the channels.
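For intuition, the channel gating in this framework can be sketched in
PyTorch as follows (an illustration of ours, not the released implementation;
in an actual sparse implementation only the unpruned filters would be
computed):

```python
import torch
import torch.nn as nn

class MaskedConv2d(nn.Module):
    """Convolution whose output channels are gated by m_c ~ Bern(s_c)."""
    def __init__(self, c_in, c_out, k, rho):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, padding=k // 2)
        # channel keep-probabilities s, initialized to the target ratio rho
        self.s = nn.Parameter(torch.full((c_out,), float(rho)))

    def forward(self, x):
        m = torch.bernoulli(self.s.detach().clamp(0.0, 1.0))  # binary masks
        # dense conv for clarity; a sparse implementation would skip filters
        # with m_c = 0 entirely
        return self.conv(x) * m.view(1, -1, 1, 1)

layer = MaskedConv2d(3, 16, 3, rho=0.5)
y = layer(torch.randn(2, 3, 8, 8))   # (2, 16, 8, 8); ~half the channels zeroed
```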
Discussion. We would like to point out that although our framework is
inspired by [50], our main contribution is the efficient solver, comprised of
completely sparse forward/backward propagation, for Problem (2). Moreover, our
framework can also prune the weights in fully connected layers, since we can
associate each weight with an independent mask.
### 4.2 Completely Sparse Training with Variance Reduced Policy Gradient
Now we present our completely sparse training method, which can solve Problem
(2) via completely sparse forward and backward propagation. The key idea is to
separate the training iteration into filter update and structure parameter
update so that the sparsity can be fully exploited.
#### 4.2.1 Filter Update via Completely Sparse Computation
It is easy to see that the computation of the gradient w.r.t. the filters can
be sparsified completely. To prove this point, we just need to clarify the
following two things:
* •
We do not need to update the filters corresponding to the pruned channels.
Consider a pruned channel $c$, i.e., $m_{c}=0$, then due to the chain rule, we
can have
$\displaystyle\frac{\partial\ell\left(f\left(\mathbf{x}_{i};\boldsymbol{w},\boldsymbol{m}\right)\right)}{\partial\mathcal{F}_{c}}=\frac{\partial\ell\left(f\left(\mathbf{x}_{i};\boldsymbol{w},\boldsymbol{m}\right)\right)}{\partial\boldsymbol{x}_{out,c}}\frac{\partial\boldsymbol{x}_{out,c}}{\partial\mathcal{F}_{c}}\equiv
0,$
where the last equality holds since $\boldsymbol{x}_{out,c}\equiv 0$. This indicates
that the gradient w.r.t the pruned filter $\mathcal{F}_{c}$ is always 0, and
thus $\mathcal{F}_{c}$ does not need to be updated.
* •
The error cannot pass the pruned channels via backward propagation. Consider a
pruned channel $c$, we denote its output before masking as
$\hat{\boldsymbol{x}}_{out,c}=\boldsymbol{x}_{in}*\mathcal{F}_{c}$, then the
error propagating through this channel can be computed as
$\displaystyle\frac{\partial\ell\left(f\left(\mathbf{x}_{i};\boldsymbol{w},\boldsymbol{m}\right)\right)}{\partial\hat{\boldsymbol{x}}_{out,c}}=\frac{\partial\ell\left(f\left(\mathbf{x}_{i};\boldsymbol{w},\boldsymbol{m}\right)\right)}{\partial\boldsymbol{x}_{out,c}}\frac{\partial\boldsymbol{x}_{out,c}}{\partial\hat{\boldsymbol{x}}_{out,c}}\equiv
0.$
This demonstrates that to calculate the gradient w.r.t. the unpruned filters,
the backward propagation does not need to go through any pruned channels.
Therefore, the filters can be updated via completely sparse backward
propagation (a toy autograd check is given below).
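The toy autograd check referred to above (our own illustration; the shapes
are arbitrary) confirms that a pruned filter receives exactly zero gradient:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(1, 2, 8, 8)                          # one input image
filt = torch.randn(3, 2, 3, 3, requires_grad=True)   # three filters
m = torch.tensor([1.0, 0.0, 1.0])                    # channel 2 is pruned

out = F.conv2d(x, filt) * m.view(1, -1, 1, 1)        # masked channel outputs
out.sum().backward()
print(filt.grad[1].abs().max())                      # tensor(0.): the pruned
                                                     # filter gets zero gradient
```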
#### 4.2.2 Structure Parameter Update via Variance Reduced Policy Gradient
We notice that the policy gradient estimator (PGE) can estimate the gradient
via forward propagation, avoiding the pathology of chain rule based estimators
discussed in Section 3. For brevity, we write
$\mathcal{L}(\boldsymbol{w},\boldsymbol{m})$ as $\mathcal{L}(\boldsymbol{m})$
since $\boldsymbol{w}$ can be viewed as a constant here. The objective can be
written as
$\displaystyle\Phi(\boldsymbol{s})=\mathbb{E}_{p(\boldsymbol{m}|\boldsymbol{s})}~{}\mathcal{L}(\boldsymbol{m}),$
which can be optimized using gradient descent:
$\boldsymbol{s}\leftarrow\boldsymbol{s}-\eta\nabla\Phi(\boldsymbol{s}),$
with learning rate $\eta$. One can obtain a stochastic unbiased estimate of
the gradient $\nabla\Phi(\boldsymbol{s})$ using PGE:
$\displaystyle\nabla\Phi(\boldsymbol{s})=\mathbb{E}_{p(\boldsymbol{m}|\boldsymbol{s})}~{}\mathcal{L}(\boldsymbol{m})\nabla_{\boldsymbol{s}}\ln{p(\boldsymbol{m}|\boldsymbol{s})},$
(PGE)
leading to the policy gradient method, which may be regarded as a stochastic
gradient descent algorithm:
$\displaystyle\boldsymbol{s}\leftarrow\boldsymbol{s}-\eta\mathcal{L}(\boldsymbol{m})\nabla_{\boldsymbol{s}}\ln{p(\boldsymbol{m}|\boldsymbol{s})}.$
(3)
In Eqn.(3), $\mathcal{L}(\boldsymbol{m})$ can be computed via completely
sparse forward propagation and the computational cost of
$\nabla_{\boldsymbol{s}}\ln{p(\boldsymbol{m}|\boldsymbol{s})}=\frac{\boldsymbol{m}-\boldsymbol{s}}{\boldsymbol{s}(1-\boldsymbol{s})}$
is negligible; therefore, PGE is computationally efficient.
However, in accordance with the empirical results reported in [36, 17], we
found that standard PGE suffers from high variance and does not work in
practice. Below we develop a Variance Reduced Policy Gradient Estimator
(VR-PGE), starting from a theoretical analysis of the variance of PGE.
First, the variance of PGE is
$\displaystyle\mathbb{E}_{p(\boldsymbol{m}|\boldsymbol{s})}~{}\mathcal{L}^{2}(\boldsymbol{m})\|\nabla_{\boldsymbol{s}}\ln{p(\boldsymbol{m}|\boldsymbol{s})}\|_{2}^{2}-\|\nabla\Phi(\boldsymbol{s})\|_{2}^{2},$
which can be large because $\mathcal{L}(\boldsymbol{m})$ is large.
Mean field theory [39] indicates that, while $\mathcal{L}(\boldsymbol{m})$ can
be large, the difference
$\mathcal{L}(\boldsymbol{m})-\mathcal{L}(\boldsymbol{m}^{\prime})$ is small
when $\boldsymbol{m}$ and $\boldsymbol{m}^{\prime}$ are two independent masks
sampled from the same distribution $p(\boldsymbol{m}|\boldsymbol{s})$ (see the
appendix for details). This suggests the following variance reduced
preconditioned policy gradient estimator:
$\displaystyle\mathbb{E}_{\boldsymbol{m}^{\prime}\sim
p(\boldsymbol{m}^{\prime}|\boldsymbol{s})}\mathbb{E}_{\boldsymbol{m}\sim
p(\boldsymbol{m}|\boldsymbol{s})}~{}\left(\mathcal{L}(\boldsymbol{m})-\mathcal{L}(\boldsymbol{m}^{\prime})\right)H^{\alpha}(\boldsymbol{s})\nabla_{\boldsymbol{s}}\ln{p(\boldsymbol{m}|\boldsymbol{s})},$
(VR-PGE)
where $H^{\alpha}(\boldsymbol{s})$ is a specific diagonal preconditioning
matrix
$H^{\alpha}(\boldsymbol{s})=\textup{diag}\left(\boldsymbol{s}\circ(1-\boldsymbol{s})\right)^{\alpha},$
(4)
with $\alpha\in(0,1)$ and $\circ$ being the element-wise product. It plays the
role of an adaptive step size, and we show that this term reduces the
variance of the stochastic PGE term
$\nabla_{\boldsymbol{s}}\ln{p(\boldsymbol{m}|\boldsymbol{s})}$ (details in the
appendix). Thus $\Phi(\boldsymbol{s})$ can be optimized
via:
$\displaystyle\boldsymbol{s}\leftarrow\boldsymbol{s}-\eta\left(\mathcal{L}\left(\boldsymbol{m}\right)-\mathcal{L}\left(\boldsymbol{m}^{\prime}\right)\right)H^{\alpha}(\boldsymbol{s})\nabla_{\boldsymbol{s}}\ln{p(\boldsymbol{m}|\boldsymbol{s})}.$
(5)
In our experiments, we set $\alpha=\frac{1}{2}$ for our estimator VR-PGE. The
theorem below demonstrates that VR-PGE has bounded variance.
###### Theorem 1.
Suppose $\boldsymbol{m}$ and $\boldsymbol{m}^{\prime}$ are two independent
masks sampled from the Bernoulli distribution
$p(\boldsymbol{m}|\boldsymbol{s})$. Then for any $\alpha\in[\frac{1}{2},1)$
and $\boldsymbol{s}\in(0,1)^{|\mathcal{C}|}$, the variance of
$\left(\mathcal{L}(\boldsymbol{m})-\mathcal{L}(\boldsymbol{m}^{\prime})\right)H^{\alpha}(\boldsymbol{s})\nabla_{\boldsymbol{s}}\ln{p(\boldsymbol{m}|\boldsymbol{s})}$
is bounded.
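Before presenting the complete algorithm, one VR-PGE update of
$\boldsymbol{s}$ (Eqn. (5)) can be sketched in PyTorch as follows; `loss_fn`,
the learning rate, and the clamping constant are illustrative assumptions of
ours rather than the authors' implementation, and the projection onto
$\mathcal{S}$ is applied afterwards as in Algorithm 1 below:

```python
import torch

# A sketch of one VR-PGE update of the probabilities s (Eqn. (5)). `loss_fn`
# is assumed to evaluate L(w, m) for a given mask via a sparse forward pass.
def vr_pge_step(s, loss_fn, lr=0.1, alpha=0.5, eps=1e-6):
    s = s.clamp(eps, 1.0 - eps)          # keep probabilities strictly in (0, 1)
    m1 = torch.bernoulli(s)              # two independent Bernoulli masks
    m2 = torch.bernoulli(s)
    with torch.no_grad():
        diff = loss_fn(m1) - loss_fn(m2) # two completely sparse forward passes
    H = (s * (1.0 - s)).pow(alpha)       # diagonal preconditioner, Eqn. (4)
    score = (m1 - s) / (s * (1.0 - s))   # gradient of ln p(m|s) for Bernoulli
    return s - lr * diff * H * score     # Eqn. (5); project onto S afterwards
```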
Finally, we provide a complete view of our sparse training algorithm in
Algorithm 1, which is essentially a projected stochastic gradient descent
equipped with our efficient gradient estimators above. The projection operator
in Algorithm 1 can be computed efficiently using Theorem 1 of [50].
Algorithm 1 Completely Sparse Neural Network Training
0: target remaining ratio $\rho$, a dense network $\boldsymbol{w}$, the step
size $\eta$, and the parameter $\alpha$ in (4).
1: Initialize $\boldsymbol{w}$, let $\boldsymbol{s}=\rho\mathbf{1}$.
2: for training epoch $t=1,2\ldots T$ do
3: for each training iteration do
4: Sample mini batch of data
$\mathcal{B}=\left\\{\left(\mathbf{x}_{1},\mathbf{y}_{1}\right),\ldots,\left(\mathbf{x}_{B},\mathbf{y}_{B}\right)\right\\}$.
5: Sample $\boldsymbol{m}^{(i)}$ from $p(\boldsymbol{m}|\boldsymbol{s})$,
$i=1,2$.
6: Update $\boldsymbol{s}$ and $\boldsymbol{w}$:
$\boldsymbol{s}\leftarrow\operatorname{proj}_{\mathcal{S}}(\boldsymbol{z})\mbox{ with }\boldsymbol{z}=\boldsymbol{s}-\eta\left(\mathcal{L}_{\mathcal{B}}(\boldsymbol{w},\boldsymbol{m}^{(1)})-\mathcal{L}_{\mathcal{B}}(\boldsymbol{w},\boldsymbol{m}^{(2)})\right)H^{\alpha}(\boldsymbol{s})\frac{\boldsymbol{m}^{(1)}-\boldsymbol{s}}{\boldsymbol{s}(1-\boldsymbol{s})},$
$\boldsymbol{w}\leftarrow\boldsymbol{w}-\eta\nabla_{\boldsymbol{w}}\mathcal{L}_{\mathcal{B}}\left(\boldsymbol{w},\boldsymbol{m}^{(1)}\right).$
7: end for
8: end for
9: return A pruned network $\boldsymbol{w}\circ\boldsymbol{m}$ by sampling a
mask $\boldsymbol{m}$ from the distribution
$p(\boldsymbol{m}|\boldsymbol{s})$.
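As an aside, by the KKT conditions the projection onto $\mathcal{S}$ in step 6
reduces to clipping a shifted vector; the following bisection sketch reflects
our reading of the result cited from [50] (the exact scheme there may differ):

```python
import torch

# Project z onto S = {s in [0,1]^C : sum(s) <= K}: by the KKT conditions the
# solution is clip(z - lam, 0, 1) for some multiplier lam >= 0, found by
# bisection. This is our own sketch of the projection cited from [50].
def project_to_S(z, K, iters=50):
    s = z.clamp(0.0, 1.0)
    if s.sum() <= K:                     # constraint inactive: lam = 0
        return s
    lo, hi = 0.0, z.max().item()         # clip(z - hi, 0, 1) sums to 0 <= K
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        if (z - lam).clamp(0.0, 1.0).sum() > K:
            lo = lam                     # still too large: increase lam
        else:
            hi = lam                     # feasible: decrease lam
    return (z - hi).clamp(0.0, 1.0)
```

Fifty bisection steps shrink the bracket on the multiplier by a factor of
$2^{50}$, which is ample for typical channel counts.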
Discussion. In our algorithm, thanks to the constraint on $\boldsymbol{s}$,
the channel size of the neural network during training can be strictly
controlled. This is in contrast with GrowEfficient [47], which uses a
regularizer to control the model size and can let the model size drift far
from the desired value. Such drift places a larger demand on GPU memory and
increases the risk that memory usage explodes, especially when we use sparse
learning to explore larger models. Moreover, our forward and backward
propagations are completely sparse, i.e., they do not need to go through any
pruned channels. Therefore, the computational cost of each training iteration
can be roughly reduced to $\rho^{2}*100\%$ of that of the dense network.
## 5 Experiments
In this section, we conduct a series of experiments to demonstrate the
outstanding performance of our method. We divide the experiments into five
parts. In part one, we compare our method with several state-of-the-art
methods on CIFAR-10 [19] using VGG-16 [38], ResNet-20 [13] and
WideResNet-28-10 [48] to directly showcase the superiority of our method. In
part two, we directly compare with state-of-the-art method GrowEfficient [47]
especially on extremely sparse regions, and on two high capacity networks
VGG-19 [38] and ResNet-32 [13] on CIFAR-10/100 [19]. In part three, we conduct
experiments on a large-scale dataset ImageNet [4] with ResNet-50 [13] and
MobileNetV1 [16] and compare with GrowEfficient [47] across a wide sparsity
region. In part four, we present the train-computational time as a
supplementary to the conceptual train-cost savings to justify the
applicability of sparse training method into practice. In part five, we
present further analysis on epoch-wise train-cost dynamics and experimental
justification of variance reduction of VR-PGE. Due to the space limitation, we
postpone the experimental configurations, calculation schemes on train-cost
savings and train-computational time and additional experiments into appendix.
### 5.1 VGG-16, ResNet-20 and WideResNet-28-10 on CIFAR-10
Table 2: Comparison with the channel pruning methods L1-Pruning [22], SoftNet [14], ThiNet [29], Provable [23] and one channel sparse training method GrowEfficient [47] on CIFAR-10.
Model | Method | Val Acc(%) | Params(%) | FLOPs(%) | Train-Cost Savings($\times$)
---|---|---|---|---|---
VGG-16 | Original | 92.9 | 100 | 100 | 1$\times$
| L1-Pruning | 91.8 | 19.9 | 19.9 | -
| SoftNet | 92.1 | 36.0 | 36.1 | -
| ThiNet | 90.8 | 36.0 | 36.1 | -
| Provable | 92.4 | 5.7 | 15.0 | -
| GrowEfficient | 92.5 | 5.0 | 13.6 | 1.22$\times$
| Ours | 92.5 | 4.4 | 8.7 | 8.69$\times$
ResNet-20 | Original | 91.3 | 100 | 100 | 1$\times$
| L1-Pruning | 90.9 | 55.6 | 55.4 | -
| SoftNet | 90.8 | 53.6 | 50.6 | -
| ThiNet | 89.2 | 67.1 | 67.3 | -
| Provable | 90.8 | 37.3 | 54.5 | -
| GrowEfficient | 90.91 | 35.8 | 50.2 | 1.13$\times$
| Ours | 90.93 | 35.1 | 36.1 | 2.09$\times$
WRN-28-10 | Original | 96.2 | 100 | 100 | 1$\times$
| L1-Pruning | 95.2 | 20.8 | 49.5 | -
| GrowEfficient | 95.3 | 9.3 | 28.3 | 1.17$\times$
| Ours | 95.6 | 8.4 | 7.9 | 9.39$\times$
Table 2 presents the Top-1 validation accuracy, parameters, FLOPs, and
train-cost savings in comparison with the channel pruning methods L1-Pruning
[22], SoftNet [14], ThiNet [29], Provable [23] and the sparse training method
GrowEfficient [47]. SoftNet can train from scratch but requires completely
dense computation. The other pruning methods all require pretraining a dense
model and multiple rounds of pruning and finetuning, which makes them slower
than vanilla dense training. Therefore the train-cost savings of these methods
are below 1$\times$ and are shown as ("-") in Table 2.
GrowEfficient [47] is a recently proposed state-of-the-art channel-level
sparse training method showing train-cost savings compared with dense
training. As described in Section 3, GrowEfficient features a completely dense
backward and a partially sparse forward pass, capping its train-cost savings
at $\frac{3}{2}\times$. By contrast, the train-cost savings of our method are
not subject to such a cap. The details of how train-cost savings are computed
can be found in the appendix.
Table 2 shows that our method generally exhibits better performance in terms
of validation accuracy, parameters and particularly FLOPs. In terms of train-
cost savings, our method shows at least 1.85$\times$ speed-up against
GrowEfficient [47] and up to 9.39$\times$ speed-up against dense training.
### 5.2 Wider Range of Sparsity on CIFAR-10/100 on VGG-19 and ResNet-32
Figure 2: Comparison of Top-1 Validation Accuracy and Train-cost Savings on
CIFAR-10/100.
In this section, we explore sparser regions of training efficiency to present
a broader comparison with the state-of-the-art channel sparse training method
GrowEfficient [47].
We plot eight figures demonstrating the relationships between the Top-1
validation accuracy, FLOPs, and train-cost savings. We find that our method
generally achieves higher accuracy under the same FLOPs. Notably, the
train-cost savings of our method are drastically higher than those of
GrowEfficient [47], reaching up to 58.8$\times$ when the sparsity approaches
1.56% on ResNet-32 on CIFAR-100, while the speed-up of GrowEfficient is capped
at $\frac{3}{2}\times$.
Table 3: Comparison with the channel pruning methods L1-Pruning [22], SoftNet [14], Provable [23] and one channel sparse training method GrowEfficient [47] on ImageNet-1K.
Model | Method | Val Acc(%) | Params(%) | FLOPs(%) | Train-Cost Savings($\times$)
---|---|---|---|---|---
ResNet-50 | Original | 77.0 | 100 | 100 | 1$\times$
| L1-Pruning | 74.7 | 85.2 | 77.5 | -
| SoftNet | 74.6 | - | 58.2 | -
| Provable | 75.2 | 65.9 | 70.0 | -
| GrowEfficient | 75.2 | 61.2 | 50.3 | 1.10$\times$
| Ours | 76.0 | 48.2 | 46.8 | 1.60$\times$
| Ours | 73.5 | 27.0 | 24.7 | 3.02$\times$
| Ours | 69.3 | 10.8 | 10.1 | 7.36$\times$
### 5.3 ResNet-50 and MobileNetV1 on ImageNet-1K
In this section, we present the performance boost obtained by our method on
ResNet-50 and MobileNetV1 on ImageNet-1K [4]. Our method finds a model with
76.0% Top-1 accuracy, 48.2% parameters and 46.8% FLOPs, beating all compared
state-of-the-art methods. The train-cost saving reaches 1.60$\times$; it is
modest here because of the accuracy constraint needed to match the compared
methods. Therefore we impose a harder limit on the channel size and present
sparser results in the same Table 3, reaching up to 7.36$\times$ speed-up
while still preserving 69.3% Top-1 accuracy. For the already compact model
MobileNetV1, we plot two figures in Figure 3 comparing with GrowEfficient
[47]. We find that our method is much more stable in sparse regions and
obtains much higher train-cost savings.
Figure 3: Top-1 Validation Accuracy and Train-cost Savings on MobileNetV1 on
ImageNet. Epoch-wise Train-cost and Variance Comparison on VGG-19 on CIFAR-10.
### 5.4 Actual Training Computational Time Testing
In this section, we provide the actual training computational time on VGG-19
and CIFAR-10. The GPU in the test is an RTX 2080 Ti and the deep learning
framework is Pytorch [33]. The intent of this section is to justify the
feasibility of our method in reducing the actual computational time, rather
than stopping at conceptual reductions in training FLOPs. The computational
time is measured by wall-clock time, focusing on forward and backward
propagation. We present the training computational time in Table 4 with the
same varying sparsity as in Figure 2.
It shows that the computational time savings increase steadily with the
sparsity. We also notice a gap between the savings in FLOPs and in
computational time. The gap comes from the difference between FLOPs and actual
forward/backward time. More specifically, forward/backward time is slowed down
by data-loading processes and is generally affected by hardware latency and
throughput, network architecture, etc. In extremely sparse regions, the pure
computational time of sparse networks accounts for only a small fraction of
the forward/backward time, and the cost of data management and hardware
latency dominates the wall-clock time. Despite this gap, it can be expected
that our train-cost savings will translate better into real speed-up when
exploring large models, where the pure computational time dominates the
forward/backward time; this promises to make the training of currently
infeasible large models practical.
Table 4: Train-computational time on VGG-19 with CIFAR-10. The computational time saving is not as prominent as the train-cost savings, yet still reaches nearly an order of magnitude while preserving 87.97% accuracy.
Model | Val Acc(%) | Params(%) | FLOPs(%) | Train-Cost Savings($\times$) | Train-Computational Time(min)
---|---|---|---|---|---
VGG-19 | 93.84 | 100.00 | 100.00 | 1.00$\times$ | 21.85 (1.00$\times$)
| 93.46 | 23.71 | 28.57 | 2.64$\times$ | 14.04 (1.55$\times$)
| 93.11 | 12.75 | 19.33 | 3.89$\times$ | 10.43 (2.09$\times$)
| 92.23 | 6.69 | 10.27 | 7.30$\times$ | 6.83 (3.20$\times$)
| 90.82 | 3.06 | 4.94 | 15.28$\times$ | 4.86 (4.50$\times$)
| 87.97 | 0.80 | 1.70 | 44.68$\times$ | 2.95 (7.41$\times$)
### 5.5 Further Analysis
[Epoch-wise Train-cost Dynamics of the Sparse Training Process] We plot the
train-cost dynamics in Figure 3. The vertical axis shows the ratio of the
train cost to that of dense training, i.e., the inverse of the train-cost
savings. This demonstrates a huge difference between our method and
GrowEfficient [47]. The model found by our method exhibits 92.73% Top-1
accuracy, 16.68% parameters, 14.28% FLOPs with 5.28$\times$ train-cost
savings, while the model found by GrowEfficient exhibits 92.47% Top-1
accuracy, 18.08% parameters, 22.74% FLOPs with 1.21$\times$ train-cost
savings.
[Experimental Verification of the Variance Reduction of VR-PGE against PGE] We
plot the mean of the variance of the gradients of channels from different
layers. The model checkpoint and input data are selected randomly. The
gradients are calculated in two ways, with VR-PGE and with PGE. From the
rightmost graph of Figure 3, we find that VR-PGE reduces the variance
significantly, by up to 3 orders of magnitude.
## 6 Conclusion
This paper proposes an efficient sparse neural network training method with
completely sparse forward and backward passes. A novel gradient estimator
named VR-PGE is developed for updating the structure parameters; it estimates
the gradient via two sparse forward propagations. We theoretically prove that
VR-PGE has bounded variance. In this way, we can separate the weight and
structure updates in training and make the whole training process completely
sparse. Empirical results demonstrate that the proposed method can
significantly accelerate the training process of DNNs in practice. This
enables us to explore larger-sized neural networks in the future.
## Acknowledgments and Disclosure of Funding
This work is supported by GRF 16201320.
Supplemental Material: Efficient Neural Network Training via Forward and
Backward Propagation Sparsification
This appendix is divided into four parts:
* 1.
Section A gives the detailed proof of Theorem 1 and discusses the convergence
of our method.
* 2.
Section B presents the experimental configurations of this work.
* 3.
Section C presents the calculation schemes for the train-cost savings and the
train-computational time.
* 4.
Section D discusses the potentials and limitations of this work.
## Appendix A Proof of Theorem 1
### A.1 Properties of Overparameterized Deep Neural Networks
Before giving the detailed proof, we would like to present the following two
properties of overparameterized deep neural networks, which are implied by
recent studies based on mean field theory. We verify these properties
empirically in Section A.4 and adopt them as assumptions in our proof.
###### Property 1.
Given the probability $\boldsymbol{s}$ and the weights $\boldsymbol{w}$ for an
overparameterized deep neural network, then for two independent masks
$\boldsymbol{m}$ and $\boldsymbol{m}^{\prime}$ sampled from
$p(\cdot|\boldsymbol{s})$,
$\mathcal{L}(\boldsymbol{m})-\mathcal{L}(\boldsymbol{m}^{\prime})$ is always
small. That is
$\displaystyle V(\boldsymbol{s}):=\mathbb{E}_{\boldsymbol{m}\sim
p(\cdot|\boldsymbol{s})}\mathbb{E}_{\boldsymbol{m}^{\prime}\sim
p(\cdot|\boldsymbol{s})}\left(\mathcal{L}(\boldsymbol{m})-\mathcal{L}(\boldsymbol{m}^{\prime})\right)^{2}$
(6)
is small.
Mean-field-theory-based studies [39, 7] proved that discrete deep neural
networks can be viewed as sampling neurons/channels from continuous networks
according to certain distributions. As the number of neurons/channels
increases, the output of the discrete network converges to that of the
continuous network (see Theorem 3 in [39] and Theorem 1 in [7]). Although
standard neural networks lack the scaling operator of [39, 7] for computing
the expectation, the batch normalization layers largely eliminate the effect
of this difference. The subnetworks $\boldsymbol{m}$ and
$\boldsymbol{m}^{\prime}$ here can therefore be roughly viewed as sampled
from a common continuous network, so
$\mathcal{L}(\boldsymbol{m})-\mathcal{L}(\boldsymbol{m}^{\prime})$ is always
small. This is why Property 1 holds.
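Property 1 can also be checked directly by Monte Carlo sampling. Below is a minimal sketch (our illustration, not the paper's code) of estimating $V(\boldsymbol{s})$; the linear `loss` is a hypothetical stand-in for a forward pass of the sparse network on a mini-batch.

```python
import numpy as np

def estimate_V(loss, s, num_pairs=2000, rng=None):
    """Monte Carlo estimate of V(s) = E_{m,m'~p(.|s)} (L(m) - L(m'))^2,
    where m and m' are independent Bernoulli(s) channel masks."""
    rng = np.random.default_rng() if rng is None else rng
    diffs = []
    for _ in range(num_pairs):
        m = (rng.random(s.shape) < s).astype(float)
        m_prime = (rng.random(s.shape) < s).astype(float)
        diffs.append((loss(m) - loss(m_prime)) ** 2)
    return float(np.mean(diffs))

# Hypothetical toy loss; in the paper, loss(m) would be the network loss
# evaluated with the channel mask m applied.
rng = np.random.default_rng(0)
w = rng.normal(size=64) / 64.0
s = np.full(64, 0.3)
print(estimate_V(lambda m: float(w @ m), s, rng=rng))
```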
###### Property 2.
Given the probability $\boldsymbol{s}$ and the weights $\boldsymbol{w}$ for an
overparameterized deep neural network, consider a mask $\boldsymbol{m}$
sampled from $p(\cdot|\boldsymbol{s})$; if we flip one component of
$\boldsymbol{m}$, then the network would not change too much. Combined with
Property 1, this can be stated as: for any $j\in\mathcal{C}$, we denote
$\boldsymbol{m}_{-j}$ and $\boldsymbol{s}_{-j}$ to be all the components of
$\boldsymbol{m}$ and $\boldsymbol{s}$ except the $j$-th component, and define
$\displaystyle
V_{\max}(\boldsymbol{s}):=\max_{\boldsymbol{m}_{j}\in\\{0,1\\},j\in\mathcal{C}}\mathbb{E}_{\boldsymbol{m}_{-j}\sim
p(\cdot|\boldsymbol{s}_{-j})}\mathbb{E}_{\boldsymbol{m}^{\prime}\sim
p(\cdot|\boldsymbol{s})}\left(\mathcal{L}(\boldsymbol{m})-\mathcal{L}(\boldsymbol{m}^{\prime})\right)^{2},$
then
$\displaystyle V_{\max}(\boldsymbol{s})\approx V(\boldsymbol{s}).$ (7)
The mean-field-based studies [39, 7] model the output of a neuron/channel as
an expectation of a weighted sum of the neurons/channels in the previous
layer w.r.t. a certain distribution. The effect of flipping one component of
the mask on this expectation is therefore negligible, which is why Property 2
holds.
### A.2 Detailed Proof
###### Proof.
In this proof, we denote
$\displaystyle\left(\mathcal{L}(\boldsymbol{m})-\mathcal{L}(\boldsymbol{m}^{\prime})\right)H^{\alpha}(\boldsymbol{s})\nabla_{\boldsymbol{s}}\ln{p(\boldsymbol{m}|\boldsymbol{s})}$
as
$\mathcal{G}^{\alpha}(\boldsymbol{m},\boldsymbol{m}^{\prime}|\boldsymbol{s})$.
Note that the total variance
$\displaystyle\mbox{Var}(\mathcal{G}^{\alpha}(\boldsymbol{m},\boldsymbol{m}^{\prime}|\boldsymbol{s}))$
$\displaystyle=$ $\displaystyle\mathbb{E}_{\boldsymbol{m}\sim
p(\cdot|\boldsymbol{s})}\mathbb{E}_{\boldsymbol{m}^{\prime}\sim
p(\cdot|\boldsymbol{s})}\|\mathcal{G}^{\alpha}(\boldsymbol{m},\boldsymbol{m}^{\prime}|\boldsymbol{s})\|_{2}^{2}-\|\mathbb{E}_{\boldsymbol{m}\sim
p(\cdot|\boldsymbol{s})}\mathbb{E}_{\boldsymbol{m}^{\prime}\sim
p(\cdot|\boldsymbol{s})}\mathcal{G}^{\alpha}(\boldsymbol{m},\boldsymbol{m}^{\prime}|\boldsymbol{s})\|_{2}^{2},$
so it suffices to prove that the term $\mathbb{E}_{\boldsymbol{m}\sim
p(\cdot|\boldsymbol{s})}\mathbb{E}_{\boldsymbol{m}^{\prime}\sim
p(\cdot|\boldsymbol{s})}\|\mathcal{G}^{\alpha}(\boldsymbol{m},\boldsymbol{m}^{\prime}|\boldsymbol{s})\|_{2}^{2}$
is bounded.
We let $\boldsymbol{m}_{-j}$ and $\boldsymbol{s}_{-j}$ be all the components
of $\boldsymbol{m}$ and $\boldsymbol{s}$ except the $j$-th component with
$j\in\mathcal{C}$. We consider the $j$-th component of
$\mathcal{G}^{\alpha}(\boldsymbol{m},\boldsymbol{m}^{\prime}|\boldsymbol{s})$,
i.e.,
$\mathcal{G}_{j}^{\alpha}(\boldsymbol{m},\boldsymbol{m}^{\prime}|\boldsymbol{s})$,
then $\mathbb{E}_{\boldsymbol{m}\sim
p(\cdot|\boldsymbol{s})}\mathbb{E}_{\boldsymbol{m}^{\prime}\sim
p(\cdot|\boldsymbol{s})}\left(\mathcal{G}_{j}^{\alpha}(\boldsymbol{m},\boldsymbol{m}^{\prime}|\boldsymbol{s})\right)^{2}$
can be estimated as
$\displaystyle\mathbb{E}_{\boldsymbol{m}\sim
p(\cdot|\boldsymbol{s})}\mathbb{E}_{\boldsymbol{m}^{\prime}\sim
p(\cdot|\boldsymbol{s})}\left(\mathcal{G}_{j}^{\alpha}(\boldsymbol{m},\boldsymbol{m}^{\prime}|\boldsymbol{s})\right)^{2}$
$\displaystyle=$ $\displaystyle\mathbb{E}_{\boldsymbol{m}\sim
p(\cdot|\boldsymbol{s})}\mathbb{E}_{\boldsymbol{m}^{\prime}\sim
p(\cdot|\boldsymbol{s})}\left(\mathcal{L}(\boldsymbol{m})-\mathcal{L}(\boldsymbol{m}^{\prime})\right)^{2}[H^{\alpha}(\boldsymbol{s})\nabla_{\boldsymbol{s}}\ln{p(\boldsymbol{m}|\boldsymbol{s})}]_{j}^{2}$
$\displaystyle=$ $\displaystyle\mathbb{E}_{\boldsymbol{m}\sim
p(\cdot|\boldsymbol{s})}\mathbb{E}_{\boldsymbol{m}^{\prime}\sim
p(\cdot|\boldsymbol{s})}\left(\mathcal{L}(\boldsymbol{m})-\mathcal{L}(\boldsymbol{m}^{\prime})\right)^{2}\left(\boldsymbol{s}_{j}^{2\alpha}(1-\boldsymbol{s}_{j})^{2\alpha}\frac{(\boldsymbol{m}_{j}-\boldsymbol{s}_{j})^{2}}{\boldsymbol{s}^{2}_{j}(1-\boldsymbol{s}_{j})^{2}}\right)$
(8) $\displaystyle=$ $\displaystyle\mathbb{E}_{\boldsymbol{m}\sim
p(\cdot|\boldsymbol{s})}\mathbb{E}_{\boldsymbol{m}^{\prime}\sim
p(\cdot|\boldsymbol{s})}\left(\mathcal{L}(\boldsymbol{m})-\mathcal{L}(\boldsymbol{m}^{\prime})\right)^{2}\left(\boldsymbol{s}_{j}^{2(\alpha-1)}(1-\boldsymbol{s}_{j})^{2(\alpha-1)}(\boldsymbol{m}_{j}-\boldsymbol{s}_{j})^{2}\right)$
$\displaystyle=$ $\displaystyle\mathbb{E}_{\boldsymbol{m}_{j}\sim
p(\cdot|\boldsymbol{s}_{j})}\left(\mathbb{E}_{\boldsymbol{m}_{-j}\sim
p(\cdot|\boldsymbol{s}_{-j})}\mathbb{E}_{\boldsymbol{m}^{\prime}\sim
p(\cdot|\boldsymbol{s})}\left(\mathcal{L}(\boldsymbol{m})-\mathcal{L}(\boldsymbol{m}^{\prime})\right)^{2}\right)\left(\boldsymbol{s}_{j}^{2(\alpha-1)}(1-\boldsymbol{s}_{j})^{2(\alpha-1)}(\boldsymbol{m}_{j}-\boldsymbol{s}_{j})^{2}\right)$
$\displaystyle\mathop{\leq}_{\eqref{eqn:property-flip}}$ $\displaystyle
V_{\max}(\boldsymbol{s})\mathbb{E}_{\boldsymbol{m}_{j}\sim
p(\cdot|\boldsymbol{s}_{j})}\left(\boldsymbol{s}_{j}^{2(\alpha-1)}(1-\boldsymbol{s}_{j})^{2(\alpha-1)}(\boldsymbol{m}_{j}-\boldsymbol{s}_{j})^{2}\right)$
(9) $\displaystyle=$
$\displaystyle\left(\boldsymbol{s}_{j}^{2\alpha}(1-\boldsymbol{s}_{j})^{(2\alpha-1)}+\boldsymbol{s}_{j}^{2\alpha-1}(1-\boldsymbol{s}_{j})^{2\alpha}\right)V_{\max}(\boldsymbol{s}).$
(10)
Thus $\mathbb{E}_{\boldsymbol{m}\sim
p(\cdot|\boldsymbol{s})}\mathbb{E}_{\boldsymbol{m}^{\prime}\sim
p(\cdot|\boldsymbol{s})}\|\mathcal{G}^{\alpha}(\boldsymbol{m},\boldsymbol{m}^{\prime}|\boldsymbol{s})\|_{2}^{2}$
can be estimated as follows:
$\displaystyle\mathbb{E}_{\boldsymbol{m}\sim
p(\cdot|\boldsymbol{s})}\mathbb{E}_{\boldsymbol{m}^{\prime}\sim
p(\cdot|\boldsymbol{s})}\|\mathcal{G}^{\alpha}(\boldsymbol{m},\boldsymbol{m}^{\prime}|\boldsymbol{s})\|_{2}^{2}$
$\displaystyle=$
$\displaystyle\sum_{j\in\mathcal{C}}\mathbb{E}_{\boldsymbol{m}\sim
p(\cdot|\boldsymbol{s})}\mathbb{E}_{\boldsymbol{m}^{\prime}\sim
p(\cdot|\boldsymbol{s})}\left(\mathcal{G}_{j}^{\alpha}(\boldsymbol{m},\boldsymbol{m}^{\prime}|\boldsymbol{s})\right)^{2}$
$\displaystyle\leq$ $\displaystyle
V_{\max}(\boldsymbol{s})\sum_{j\in\mathcal{C}}\boldsymbol{s}_{j}^{2\alpha}(1-\boldsymbol{s}_{j})^{(2\alpha-1)}+\boldsymbol{s}_{j}^{2\alpha-1}(1-\boldsymbol{s}_{j})^{2\alpha}.$
(11)
Thus, when $\alpha\in[\frac{1}{2},1)$, we have
$\displaystyle\mathbb{E}_{\boldsymbol{m}\sim
p(\cdot|\boldsymbol{s})}\mathbb{E}_{\boldsymbol{m}^{\prime}\sim
p(\cdot|\boldsymbol{s})}\|\mathcal{G}^{\alpha}(\boldsymbol{m},\boldsymbol{m}^{\prime}|\boldsymbol{s})\|_{2}^{2}$
$\displaystyle\leq$ $\displaystyle
V_{\max}(\boldsymbol{s})\sum_{j\in\mathcal{C}}\boldsymbol{s}_{j}^{2\alpha}(1-\boldsymbol{s}_{j})^{(2\alpha-1)}+\boldsymbol{s}_{j}^{2\alpha-1}(1-\boldsymbol{s}_{j})^{2\alpha}$
$\displaystyle\leq$ $\displaystyle|\mathcal{C}|V_{\max}(\boldsymbol{s}).$
The last inequality holds since the term
$\boldsymbol{s}_{j}^{2\alpha}(1-\boldsymbol{s}_{j})^{(2\alpha-1)}+\boldsymbol{s}_{j}^{2\alpha-1}(1-\boldsymbol{s}_{j})^{2\alpha}=\left(\boldsymbol{s}_{j}(1-\boldsymbol{s}_{j})\right)^{2\alpha-1}$
is monotonically decreasing in $\alpha\in[\frac{1}{2},1)$ and equals $1$ at $\alpha=\frac{1}{2}$.
Therefore, from Property 1 and 2, we can see that the variance is bounded for
any $\boldsymbol{s}$.
∎
###### Remark 3.
Eqns. (8) and (9) indicate that $H^{\alpha}(\boldsymbol{s})$ is introduced to
reduce the variance of the stochastic PGE term
$\nabla_{\boldsymbol{s}}\ln{p(\boldsymbol{m}|\boldsymbol{s})}$. Without
$H^{\alpha}(\boldsymbol{s})$ (i.e., $\alpha=0$), Eqn. (11) shows that the
total variance bound would be
$\displaystyle
V_{\max}(\boldsymbol{s})\sum_{j\in\mathcal{C}}\frac{1}{1-\boldsymbol{s}_{j}}+\frac{1}{\boldsymbol{s}_{j}}.$
Because of the sparsity constraint, many of the $\boldsymbol{s}_{j}$ are
close to $0$, so the total variance in this case could be very large.
###### Remark 4.
Our preconditioning matrix $H^{\alpha}(\boldsymbol{s})$ plays the role of an
adaptive step size. The hyperparameter $\alpha$ tunes its variance-reduction
effect: the larger the variance of
$\nabla_{\boldsymbol{s}}\ln{p(\boldsymbol{m}|\boldsymbol{s})}$, the larger an
$\alpha$ we can use. In our experiments, we find that simply setting
$\alpha=\frac{1}{2}$ works well.
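The variance-reduction effect of $H^{\alpha}(\boldsymbol{s})$ can also be seen numerically. The sketch below (our illustration, with a hypothetical linear loss) compares the total variance of $\mathcal{G}^{\alpha}$ for $\alpha=0$ (plain PGE scaling) and $\alpha=\frac{1}{2}$ when most $\boldsymbol{s}_{j}$ are small, mirroring Eqn. (11) and Remark 3.

```python
import numpy as np

def g_alpha_sample(loss, s, alpha, rng):
    """One draw of G^alpha(m, m'|s) =
    (L(m) - L(m')) * H^alpha(s) * grad_s log p(m|s)."""
    m = (rng.random(s.shape) < s).astype(float)
    m_prime = (rng.random(s.shape) < s).astype(float)
    score = (m - s) / (s * (1.0 - s))          # grad_s log p(m|s)
    H = (s * (1.0 - s)) ** alpha               # diagonal of H^alpha(s)
    return (loss(m) - loss(m_prime)) * H * score

rng = np.random.default_rng(0)
s = np.clip(rng.random(32) * 0.1, 1e-3, 1.0)   # mostly small s_j, as under sparsity
w = rng.normal(size=32) / 32.0
loss = lambda m: float(w @ m)

for alpha in (0.0, 0.5):
    draws = np.stack([g_alpha_sample(loss, s, alpha, rng) for _ in range(5000)])
    print(alpha, draws.var(axis=0).sum())      # total variance; alpha=0.5 is far smaller
```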
### A.3 Convergence of Our Method
For the weight update, the convergence can be guaranteed since we use the
standard stochastic gradient descent with the gradient calculated via backward
propagation.
For the parameter $\boldsymbol{s}$, as stated in Section 4.2.2, we update it
as:
$\displaystyle\boldsymbol{s}\leftarrow\boldsymbol{s}-\eta\left(\mathcal{L}\left(\boldsymbol{m}\right)-\mathcal{L}\left(\boldsymbol{m}^{\prime}\right)\right)H^{\alpha}(\boldsymbol{s})\nabla_{\boldsymbol{s}}\ln{p(\boldsymbol{m}|\boldsymbol{s})}.$
(12)
Let
$\Delta\boldsymbol{s}(\boldsymbol{m},\boldsymbol{m}^{\prime}|\boldsymbol{s})$
denote
$\left(\mathcal{L}\left(\boldsymbol{m}\right)-\mathcal{L}\left(\boldsymbol{m}^{\prime}\right)\right)H^{\alpha}(\boldsymbol{s})\nabla_{\boldsymbol{s}}\ln{p(\boldsymbol{m}|\boldsymbol{s})}$.
Then we have
$\displaystyle\mathbb{E}_{\boldsymbol{m}\sim
p(\cdot|\boldsymbol{s})}\mathbb{E}_{\boldsymbol{m}^{\prime}\sim
p(\cdot|\boldsymbol{s})}\Delta\boldsymbol{s}(\boldsymbol{m},\boldsymbol{m}^{\prime}|\boldsymbol{s})$
$\displaystyle=$ $\displaystyle\mathbb{E}_{\boldsymbol{m}\sim
p(\cdot|\boldsymbol{s})}\mathbb{E}_{\boldsymbol{m}^{\prime}\sim
p(\cdot|\boldsymbol{s})}\left(\mathcal{L}\left(\boldsymbol{m}\right)-\mathcal{L}\left(\boldsymbol{m}^{\prime}\right)\right)H^{\alpha}(\boldsymbol{s})\nabla_{\boldsymbol{s}}\ln{p(\boldsymbol{m}|\boldsymbol{s})}$
$\displaystyle=$ $\displaystyle\mathbb{E}_{\boldsymbol{m}\sim
p(\cdot|\boldsymbol{s})}\mathcal{L}\left(\boldsymbol{m}\right)H^{\alpha}(\boldsymbol{s})\nabla_{\boldsymbol{s}}\ln{p(\boldsymbol{m}|\boldsymbol{s})}-\mathbb{E}_{\boldsymbol{m}\sim
p(\cdot|\boldsymbol{s})}\mathbb{E}_{\boldsymbol{m}^{\prime}\sim
p(\cdot|\boldsymbol{s})}\mathcal{L}\left(\boldsymbol{m}^{\prime}\right)H^{\alpha}(\boldsymbol{s})\nabla_{\boldsymbol{s}}\ln{p(\boldsymbol{m}|\boldsymbol{s})}$
$\displaystyle=$ $\displaystyle
H^{\alpha}(\boldsymbol{s})\mathbb{E}_{\boldsymbol{m}\sim
p(\cdot|\boldsymbol{s})}\mathcal{L}\left(\boldsymbol{m}\right)\nabla_{\boldsymbol{s}}\ln{p(\boldsymbol{m}|\boldsymbol{s})}-H^{\alpha}(\boldsymbol{s})\mathbb{E}_{\boldsymbol{m}^{\prime}\sim
p(\cdot|\boldsymbol{s})}\mathcal{L}\left(\boldsymbol{m}^{\prime}\right)\underbrace{\mathbb{E}_{\boldsymbol{m}\sim
p(\cdot|\boldsymbol{s})}\nabla_{\boldsymbol{s}}\ln{p(\boldsymbol{m}|\boldsymbol{s})}}_{I}$
$\displaystyle=$ $\displaystyle
H^{\alpha}(\boldsymbol{s})\mathbb{E}_{\boldsymbol{m}\sim
p(\cdot|\boldsymbol{s})}\mathcal{L}\left(\boldsymbol{m}\right)\nabla_{\boldsymbol{s}}\ln{p(\boldsymbol{m}|\boldsymbol{s})}$
(13) $\displaystyle=$ $\displaystyle
H^{\alpha}(\boldsymbol{s})\nabla_{\boldsymbol{s}}\mathbb{E}_{\boldsymbol{m}\sim
p(\cdot|\boldsymbol{s})}\mathcal{L}\left(\boldsymbol{m}\right),$
where Eqn. (13) holds since the term
$I=\nabla_{\boldsymbol{s}}\mathbb{E}_{\boldsymbol{m}\sim
p(\cdot|\boldsymbol{s})}\boldsymbol{1}\equiv 0$.
Therefore,
$\Delta\boldsymbol{s}(\boldsymbol{m},\boldsymbol{m}^{\prime}|\boldsymbol{s})$
is an unbiased gradient estimator combined with an adaptive step size; that
is, our VR-PGE is a standard preconditioned stochastic gradient descent
method, and its convergence is therefore guaranteed.
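For concreteness, a minimal sketch of one VR-PGE update of $\boldsymbol{s}$ per Eq. (12) is given below. This sketch uses plain SGD with clipping to keep $\boldsymbol{s}$ a valid probability vector; the actual implementation in the paper updates $\boldsymbol{s}$ with Adam and enforces the sparsity constraint, which this sketch omits.

```python
import numpy as np

def vr_pge_step(loss, s, eta=0.05, alpha=0.5, rng=None, eps=1e-6):
    """One VR-PGE update of the structure parameters s (Eq. (12)):
    two independent masks, a loss difference, and a preconditioned
    score-function gradient."""
    rng = np.random.default_rng() if rng is None else rng
    m = (rng.random(s.shape) < s).astype(float)
    m_prime = (rng.random(s.shape) < s).astype(float)
    score = (m - s) / (s * (1.0 - s))              # grad_s log p(m|s)
    H = (s * (1.0 - s)) ** alpha                   # preconditioner H^alpha(s)
    grad = (loss(m) - loss(m_prime)) * H * score
    return np.clip(s - eta * grad, eps, 1.0 - eps)
```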
### A.4 Experiments Verifying Properties 1 and 2 in A.1
Figure 4 presents the values of $\mathbb{E}_{\boldsymbol{m}\sim
p(\cdot|\boldsymbol{s})}\mathcal{L}^{2}(\boldsymbol{m})$, $V(\boldsymbol{s})$
and $V_{\max}(\boldsymbol{s})$ during the training process of ResNet-32 on
CIFAR-10. We can see that $V(\boldsymbol{s})$ and $V_{\max}(\boldsymbol{s})$
are very close during the whole training process and they are smaller than
$\mathbb{E}_{\boldsymbol{m}\sim
p(\cdot|\boldsymbol{s})}\mathcal{L}^{2}(\boldsymbol{m})$ by four orders of
magnitude. This verifies Properties 1 and 2.
Figure 4: Experiments on ResNet32 on CIFAR-10. $V(\boldsymbol{s})$ and
$V_{\max}(\boldsymbol{s})$ are very close during the whole training process
and they are smaller than $\mathbb{E}_{\boldsymbol{m}\sim
p(\cdot|\boldsymbol{s})}\mathcal{L}^{2}(\boldsymbol{m})$ by four orders of
magnitude.
## Appendix B Experimental Configurations
[CIFAR-10/100 Experiments] GPUs: 1 for VGG and ResNet and 2 for WideResNet.
Batch Size: 256. Weight Optimizer: SGD. Weight Learning Rate: 0.1. Weight
Momentum: 0.9. Probability Optimizer: Adam. Probability Learning Rate: 12e-3.
WarmUp: ✗. Label Smoothing: ✗.
[ImageNet-1K Experiments] GPUs: 4. Batch Size: 256. Weight Optimizer: SGD.
Weight Learning Rate: 0.256. Weight Momentum: 0.875. Probability Optimizer:
Adam. Probability Learning Rate: 12e-3. WarmUp: ✓. Label Smoothing: 0.1.
###### Remark 5.
The probability learning rate of 12e-3 is the only hyperparameter obtained by
grid search (on the CIFAR-10 experiments), and it is applied directly to the
larger datasets and networks. The other hyperparameters follow the same
practice as previous works [34, 20, 27, 51]. The channels of ResNet32 in the
CIFAR experiments are doubled, following the practice of [42].
Table 5: Forward/backward time of dense/sparse networks and accompanying properties.
Model | Val Acc(%) | Params(%) | Forward(min) | Backward(min) | Train-Computational Time(min)
---|---|---|---|---|---
VGG-19 | 93.84 | 100.00 | 6.89 | 14.96 | 21.85 (1.00$\times$)
| 93.46 | 23.71 | 6.41 | 7.63 | 14.04 (1.55$\times$)
| 93.11 | 12.75 | 4.89 | 5.54 | 10.43 (2.09$\times$)
| 92.23 | 6.69 | 3.10 | 3.73 | 6.83 (3.20$\times$)
| 90.82 | 3.06 | 2.15 | 2.71 | 4.86 (4.50$\times$)
| 87.97 | 0.80 | 1.27 | 1.68 | 2.95 (7.41$\times$)
## Appendix C Calculation Schemes on Train-cost Savings and Train-
computational Time
### C.1 Train-cost Savings
The train-cost of vanilla dense training consists of two parts: in forward
propagation, computing the loss, and in backward propagation, computing the
gradients of the weights and of the activations of the previous layers. The
FLOPs of backward propagation are about 2–3 times those of forward
propagation [2]. In the following, we compute the FLOPs of forward
propagation concretely and, for simplicity, take the FLOPs of backward
propagation to be 2 times those of forward propagation.
[GrowEfficient] The forward-propagation FLOPs of the dense network are
$f_{D}$. The forward propagation of GrowEfficient is partially sparse with
FLOPs $f_{S}$, while its backward propagation is dense. Therefore the
train-cost saving is
$\frac{f_{D}+2f_{D}}{f_{S}+2f_{D}}=\frac{3}{2+f_{S}/f_{D}}$, which is
upper-bounded by $\frac{3}{2}$.
[Ours] The forward-propagation FLOPs of the dense network are $f_{D}$. In our
method, both forward and backward propagation are completely sparse: the
forward propagation costs $f_{S}$ FLOPs, the backward propagation costs
$2f_{S}$ FLOPs, and the forward propagation has to be computed twice.
Therefore the train-cost saving is
$\frac{f_{D}+2f_{D}}{2f_{S}+2f_{S}}=\frac{3}{4f_{S}/f_{D}}$. Since
$f_{S}/f_{D}$ is roughly equal to $\rho^{2}$, this leads to drastically
higher train-cost savings.
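The two formulas above can be tabulated directly. A small sketch (our illustration) evaluating both savings as a function of the remain ratio $\rho$, using $f_{S}/f_{D}\approx\rho^{2}$:

```python
def train_cost_savings(ratio):
    """Train-cost savings given ratio = f_S / f_D, with backward FLOPs
    counted as 2x forward FLOPs (Appendix C.1)."""
    grow_efficient = 3.0 / (2.0 + ratio)   # sparse forward, dense backward
    ours = 3.0 / (4.0 * ratio)             # two sparse forwards + sparse backward
    return grow_efficient, ours

for rho in (0.5, 0.25, 0.1):
    ge, ours = train_cost_savings(rho ** 2)
    print(f"rho={rho}: GrowEfficient {ge:.2f}x (capped at 1.5x), ours {ours:.1f}x")
```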
### C.2 Train-computational Time
The calculation of the train-computational time focuses on the forward and
backward propagation of the dense/sparse networks. For both the dense and the
sparse networks, we sum the computation time of all forward and backward
propagations during training to obtain the train-computational time. The
detailed time costs are presented in Table 5, which shows that we achieve
significant speedups in computational time.
## Appendix D Potentials and Limitations of This Work
[On Computational Cost Saving] Although our method needs two forward
propagations in each iteration, it still achieves significant computational
cost savings. The reason is that our forward and backward passes are
completely sparse, with computational complexity roughly $\rho^{2}\cdot
100\%$ of that of conventional training algorithms, where $\rho$ is the
remain ratio of the channels.
[On Exploring Larger Networks] Regarding the potential of our method for
exploring larger networks, we would like to clarify the following three
points:
* 1\.
The memory cost of the structure parameters $\boldsymbol{s}$ is negligible
compared with that of the original weights $\boldsymbol{w}$, since each
filter is associated with only one structure parameter; hence
$\boldsymbol{s}$ hardly increases the total memory usage.
* 2\.
Although our method needs to store the parameters of the full model, this
does not hinder us from exploring larger networks. The reason is that, in
each iteration, we essentially perform forward and backward propagation only
on the sparse subnetwork. More importantly, we find that reducing the
frequency of subnetwork sampling, e.g., sampling a new subnetwork every 50
iterations, does not affect the final accuracy. We can therefore store the
parameters of the full model in CPU memory, keep only the current subnetwork
on the GPU, and synchronize the parameter updates back to the full model only
when a new subnetwork is resampled (see the sketch after this list). Hence,
our method has great potential for exploring larger deep neural networks. We
leave such engineering implementations as future work and welcome the
community to implement our method more efficiently.
* 3\.
When exploring larger networks, the channel remain ratio $\rho$ can be much
smaller than in the experiments of the main text. Since our method reduces
the computational complexity to $\rho^{2}\cdot 100\%$ of that of the full
network, the potential of our method is further amplified in this scenario.
We leave this evaluation as future work, after the more efficient
implementation discussed above.
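The CPU/GPU scheme of point 2 is sketched below. This is our hypothetical illustration (names such as `sample_channels` and the resampling period of 50 are illustrative, not the paper's code): the full weights live in CPU memory, only the sampled subnetwork lives on the accelerator, and updates are synchronized back at every resampling.

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
full_w = torch.randn(512, 256, 3, 3)               # full filter bank, kept on CPU

def sample_channels(w, rho=0.1):
    idx = torch.rand(w.shape[0]) < rho             # keep a rho-fraction of filters
    return idx, w[idx].to(device)

idx, sub_w = sample_channels(full_w)
for step in range(200):
    # ... forward/backward and an optimizer step on sub_w only ...
    if (step + 1) % 50 == 0:                       # sync, then resample a subnetwork
        full_w[idx] = sub_w.detach().cpu()
        idx, sub_w = sample_channels(full_w)
```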
## References
* Abadi et al. [2016] Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. Tensorflow: A system for large-scale machine learning. In _12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16)_, pages 265–283, 2016.
* Baydin et al. [2018] Atilim Gunes Baydin, Barak A Pearlmutter, Alexey Andreyevich Radul, and Jeffrey Mark Siskind. Automatic differentiation in machine learning: a survey. _Journal of machine learning research_ , 18, 2018.
* Bengio et al. [2013] Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. _arXiv preprint arXiv:1308.3432_ , 2013.
* Deng et al. [2009] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In _2009 IEEE conference on computer vision and pattern recognition_ , pages 248–255. Ieee, 2009.
* Dettmers and Zettlemoyer [2019] Tim Dettmers and Luke Zettlemoyer. Sparse networks from scratch: Faster training without losing performance. _arXiv preprint arXiv:1907.04840_ , 2019.
* Evci et al. [2020] Utku Evci, Trevor Gale, Jacob Menick, Pablo Samuel Castro, and Erich Elsen. Rigging the lottery: Making all tickets winners. In _International Conference on Machine Learning_ , pages 2943–2952. PMLR, 2020.
* Fang et al. [2019] Cong Fang, Yihong Gu, Weizhong Zhang, and Tong Zhang. Convex formulation of overparameterized deep neural networks. _arXiv preprint arXiv:1911.07626_ , 2019.
* Frankle et al. [2019] Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M Roy, and Michael Carbin. Stabilizing the lottery ticket hypothesis. _arXiv preprint arXiv:1903.01611_ , 2019.
* Gu et al. [2020] Yihong Gu, Weizhong Zhang, Cong Fang, Jason D Lee, and Tong Zhang. How to characterize the landscape of overparameterized convolutional neural networks. In _Advances in Neural Information Processing Systems_ , volume 33, pages 3797–3807, 2020.
* Guo et al. [2016] Yiwen Guo, Anbang Yao, and Yurong Chen. Dynamic network surgery for efficient dnns. In _Advances in neural information processing systems_ , pages 1379–1387, 2016.
* Han et al. [2015] Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections for efficient neural network. In _Advances in neural information processing systems_ , pages 1135–1143, 2015.
* Han et al. [2016] Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. _International Conference on Learning Representations_ , 2016.
* He et al. [2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pages 770–778, 2016.
* He et al. [2018] Y He, G Kang, X Dong, Y Fu, and Y Yang. Soft filter pruning for accelerating deep convolutional neural networks. In _IJCAI International Joint Conference on Artificial Intelligence_ , 2018.
* He et al. [2017] Yihui He, Xiangyu Zhang, and Jian Sun. Channel pruning for accelerating very deep neural networks. In _Proceedings of the IEEE International Conference on Computer Vision_ , pages 1389–1397, 2017.
* Howard et al. [2017] Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. _arXiv preprint arXiv:1704.04861_ , 2017.
* Jang et al. [2017] Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. _International Conference on Learning Representations_ , 2017.
* Kang and Han [2020] Minsoo Kang and Bohyung Han. Operation-aware soft channel pruning using differentiable masks. In _International Conference on Machine Learning_ , pages 5122–5131. PMLR, 2020.
* Krizhevsky [2009] A Krizhevsky. Learning multiple layers of features from tiny images. _Master’s thesis, University of Toronto_ , 2009.
* Kusupati et al. [2020] Aditya Kusupati, Vivek Ramanujan, Raghav Somani, Mitchell Wortsman, Prateek Jain, Sham Kakade, and Ali Farhadi. Soft threshold weight reparameterization for learnable sparsity. In _Proceedings of the International Conference on Machine Learning_ , July 2020.
* Lemaire et al. [2019] Carl Lemaire, Andrew Achkar, and Pierre-Marc Jodoin. Structured pruning of neural networks with budget-aware regularization. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 9108–9116, 2019.
* Li et al. [2017] Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. Pruning filters for efficient convnets. _International Conference on Learning Representations_ , 2017.
* Liebenwein et al. [2019] Lucas Liebenwein, Cenk Baykal, Harry Lang, Dan Feldman, and Daniela Rus. Provable filter pruning for efficient neural networks. In _International Conference on Learning Representations_ , 2019.
* Liu et al. [2015] Baoyuan Liu, Min Wang, Hassan Foroosh, Marshall Tappen, and Marianna Pensky. Sparse convolutional neural networks. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pages 806–814, 2015.
* Liu et al. [2021] Shiwei Liu, Tianlong Chen, Xiaohan Chen, Zahra Atashgahi, Lu Yin, Huanyu Kou, Li Shen, Mykola Pechenizkiy, Zhangyang Wang, and Decebal Constantin Mocanu. Sparse training via boosting pruning plasticity with neuroregeneration. _arXiv preprint arXiv:2106.10404_ , 2021.
* Liu et al. [2017] Zhuang Liu, Jianguo Li, Zhiqiang Shen, Gao Huang, Shoumeng Yan, and Changshui Zhang. Learning efficient convolutional networks through network slimming. In _Proceedings of the IEEE International Conference on Computer Vision_ , pages 2736–2744, 2017.
* Liu et al. [2018] Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, and Trevor Darrell. Rethinking the value of network pruning. In _International Conference on Learning Representations_ , 2018.
* Louizos et al. [2018] Christos Louizos, Max Welling, and Diederik P Kingma. Learning sparse neural networks through l_0 regularization. In _International Conference on Learning Representations_ , 2018.
* Luo et al. [2017] Jian-Hao Luo, Jianxin Wu, and Weiyao Lin. Thinet: A filter level pruning method for deep neural network compression. In _Proceedings of the IEEE international conference on computer vision_ , pages 5058–5066, 2017.
* Lym et al. [2019] Sangkug Lym, Esha Choukse, Siavash Zangeneh, Wei Wen, Sujay Sanghavi, and Mattan Erez. Prunetrain: fast neural network training by dynamic sparse model reconfiguration. In _Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis_ , pages 1–13, 2019.
* Mocanu et al. [2018] Decebal Constantin Mocanu, Elena Mocanu, Peter Stone, Phuong H Nguyen, Madeleine Gibescu, and Antonio Liotta. Scalable training of artificial neural networks with adaptive sparse connectivity inspired by network science. _Nature communications_ , 9(1):1–12, 2018.
* Mostafa and Wang [2019] Hesham Mostafa and Xin Wang. Parameter efficient training of deep convolutional neural networks by dynamic sparse reparameterization. In _International Conference on Machine Learning_ , pages 4646–4655. PMLR, 2019.
* Paszke et al. [2019] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. _Advances in neural information processing systems_ , 32:8026–8037, 2019.
* Ramanujan et al. [2020] Vivek Ramanujan, Mitchell Wortsman, Aniruddha Kembhavi, Ali Farhadi, and Mohammad Rastegari. What’s hidden in a randomly weighted neural network? In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 11893–11902, 2020.
* Renda et al. [2019] Alex Renda, Jonathan Frankle, and Michael Carbin. Comparing rewinding and fine-tuning in neural network pruning. In _International Conference on Learning Representations_ , 2019.
* Rezende et al. [2014] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In _International conference on machine learning_ , pages 1278–1286. PMLR, 2014.
* Savarese et al. [2020] Pedro Savarese, Hugo Silva, and Michael Maire. Winning the lottery with continuous sparsification. _Advances in Neural Information Processing Systems_ , 33, 2020.
* Simonyan and Zisserman [2015] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. _International Conference on Learning Representations_ , 2015.
* Song et al. [2018] Mei Song, Andrea Montanari, and P Nguyen. A mean field view of the landscape of two-layers neural networks. _Proceedings of the National Academy of Sciences_ , 115:E7665–E7671, 2018.
* Srinivas et al. [2017] Suraj Srinivas, Akshayvarun Subramanya, and R Venkatesh Babu. Training sparse neural networks. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops_ , pages 138–145, 2017.
* Vaswani et al. [2017] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In _Advances in Neural Information Processing Systems_ , 2017.
* Wang et al. [2019] Chaoqi Wang, Guodong Zhang, and Roger Grosse. Picking winning tickets before training by preserving gradient flow. In _International Conference on Learning Representations_ , 2019.
* Wang et al. [2020] Ziheng Wang, Jeremy Wohlwend, and Tao Lei. Structured pruning of large language models. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 6151–6162, 2020.
* Wen et al. [2016] Wei Wen, Chunpeng Wu, Yandan Wang, Yiran Chen, and Hai Li. Learning structured sparsity in deep neural networks. In _Advances in neural information processing systems_ , pages 2074–2082, 2016.
* Xiao et al. [2019] Xia Xiao, Zigeng Wang, and Sanguthevar Rajasekaran. Autoprune: Automatic network pruning by regularizing auxiliary parameters. In _Advances in Neural Information Processing Systems_ , pages 13681–13691, 2019.
* Ye et al. [2020] Mao Ye, Chengyue Gong, Lizhen Nie, Denny Zhou, Adam Klivans, and Qiang Liu. Good subnetworks provably exist: Pruning via greedy forward selection. In _International Conference on Machine Learning_ , pages 10820–10830. PMLR, 2020.
* Yuan et al. [2020] Xin Yuan, Pedro Henrique Pamplona Savarese, and Michael Maire. Growing efficient deep networks by structured continuous sparsification. In _International Conference on Learning Representations_ , 2020.
* Zagoruyko and Komodakis [2016] Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. In _British Machine Vision Conference 2016_. British Machine Vision Association, 2016.
* Zeng and Urtasun [2019] Wenyuan Zeng and Raquel Urtasun. MLPrune: Multi-layer pruning for automated neural network compression, 2019. URL https://openreview.net/forum?id=r1g5b2RcKm.
* Zhou et al. [2021] Xiao Zhou, Weizhong Zhang, Hang Xu, and Tong Zhang. Effective sparsification of neural networks with global sparsity constraint. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 3599–3608, 2021.
* Zhu and Gupta [2017] Michael Zhu and Suyog Gupta. To prune, or not to prune: exploring the efficacy of pruning for model compression. _arXiv preprint arXiv:1710.01878_ , 2017.
$\displaystyle\leq\mu_{i}^{3}+\alpha_{i}^{\lfloor{\mu_{i}}\rfloor}\mathbb{E}\left[{\mathcal{Y}_{i}^{3}}\right]\leq\mu_{i}^{3}+\mathbb{E}\left[{\mathcal{Y}_{i}^{3}}\right]\;.$
In the first inequality we have used that the geometric distribution is
memoryless. Now simple calculations give that
$\mathbb{E}\left[{\mathcal{Y}_{i}^{3}}\right]\leq 6\mu_{i}(1+\mu_{i})^{2}\leq
24\mu_{i}$, so we get that
$\mathbb{E}\left[{\left|{\mathcal{Y}_{i}-\mu_{i}}\right|^{3}}\right]\leq\mu_{i}^{3}+24\mu_{i}\leq
25\mu_{i}$.
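The moment bound can be sanity-checked by simulation. A short sketch (our illustration, assuming the regime $\mu_{i}\leq 1$ used here) for geometric variables with $\Pr[\mathcal{Y}_{i}=t]=\alpha_{i}^{t}(1-\alpha_{i})$:

```python
import numpy as np

rng = np.random.default_rng(0)
for alpha in (0.1, 0.3, 0.5):                      # mu = alpha/(1-alpha) <= 1
    # numpy's geometric counts trials, so subtract 1 to start the support at 0
    y = rng.geometric(1.0 - alpha, size=1_000_000) - 1
    mu = alpha / (1.0 - alpha)
    third = np.mean(np.abs(y - mu) ** 3)
    assert third <= 25 * mu, (alpha, third)        # E|Y - mu|^3 <= 25 mu
```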
Depending on $\mathbb{E}\left[{S_{d}}\right]$ we will prove different bounds
on $f_{d}$. Let $M>0$ be a large constant. We will prove that if
$\mathbb{E}\left[{S_{d}}\right]<C+M\sqrt{C}$ then $f_{d}\geq L$, if
$C+M\sqrt{C}\leq\mathbb{E}\left[{S_{d}}\right]<C+\tfrac{1}{4}C$ then
$f_{d}\geq
L\varepsilon\sqrt{C\log\left(\tfrac{1}{\varepsilon\sqrt{C}}\right)}$, and if
$C+\tfrac{1}{4}C\leq\mathbb{E}\left[{S_{d}}\right]$ then $f_{d}\geq
L\varepsilon C$. This will prove the result since
$\varepsilon\sqrt{C\log\left(\tfrac{1}{\varepsilon\sqrt{C}}\right)}\geq\varepsilon
C$ if and only if $C\leq\log\left(\tfrac{1}{\varepsilon\sqrt{C}}\right)$, and
since $C\leq\gamma/\varepsilon^{2}$ for a small constant $\gamma$, we have
$1\geq\min\left\{\varepsilon\sqrt{C\log\left(\tfrac{1}{\varepsilon\sqrt{C}}\right)},\varepsilon
C\right\}$.
If $\mathbb{E}\left[{S_{d}}\right]<C+M\sqrt{C}$ then we will show that
$f_{d}\geq L$. This will follow from the Berry–Esseen theorem.
###### Theorem 18 (Berry–Esseen theorem).
Let $X_{1},\ldots,X_{d}$ be independent random variables with
$\mathbb{E}\left[{X_{i}}\right]=0$,
$\mathbb{E}\left[{X_{i}^{2}}\right]=\sigma_{i}^{2}>0$, and
$\mathbb{E}\left[{\left|{X_{i}}\right|^{3}}\right]=\rho_{i}<\infty$. Let
$F_{d}$ be the cumulative distribution function of $\sum_{i=1}^{d}X_{i}$, let
$\Phi$ be the cumulative distribution function of the standard normal
distribution, and let $\sigma^{2}=\sum_{i=1}^{d}\sigma_{i}^{2}$. Then,
$\displaystyle\sup_{x\in\mathbb{R}}\left|{F_{d}(x)-\Phi(x/\sigma)}\right|\leq
K_{1}\frac{\sum_{i=1}^{d}\rho_{i}}{\sigma^{3}}\;.$
where $K_{1}$ is a universal constant.
Since $\mathbb{E}\left[{S_{d}}\right]<C+M\sqrt{C}$ and
$\sigma_{i}^{2}\geq\mu_{i}$ for all $i$, we have
$C\geq\mathbb{E}\left[{S_{d}}\right]-M\sqrt{\mathbb{E}\left[{S_{d}}\right]}\geq\mathbb{E}\left[{S_{d}}\right]-M\sqrt{\sum_{i=1}^{d}\sigma_{i}^{2}}$,
and we get that
$f_{d}=\Pr\left[S_{d}<C\right]\geq\Pr\left[S_{d}<\mathbb{E}\left[{S_{d}}\right]-M\sqrt{\sum_{i=1}^{d}\sigma_{i}^{2}}\right]$.
Now the Berry–Esseen theorem gives us that,
$\displaystyle
f_{d}\geq\Pr\left[S_{d}<\mathbb{E}\left[{S_{d}}\right]-M\sqrt{\sum_{i=1}^{d}\sigma_{i}^{2}}\right]\geq\Phi(-M)-K_{1}\frac{\sum_{i=1}^{d}\mathbb{E}\left[{\left|{\mathcal{Y}_{i}-\mu_{i}}\right|^{3}}\right]}{\left(\sum_{i=1}^{d}\sigma_{i}^{2}\right)^{3/2}}\;.$
We know that
$\mathbb{E}\left[{\left|{\mathcal{Y}_{i}-\mu_{i}}\right|^{3}}\right]\leq
25\mu_{i}$ and that $\sigma_{i}^{2}\geq\mu_{i}$ for all $1\leq i\leq d$, so we
get that,
$\displaystyle
f_{d}\geq\Phi(-M)-25K_{1}\frac{\mathbb{E}\left[{S_{d}}\right]}{\mathbb{E}\left[{S_{d}}\right]^{3/2}}\geq\Phi(-M)-25K_{1}\sqrt{8L}\geq
L\;.$
Here we have used that $\mathbb{E}\left[{S_{d}}\right]\geq
C/2\geq\frac{1}{8L}$ and that $L$ is sufficiently small.
Now we consider the case where $\mathbb{E}\left[{S_{d}}\right]\geq
C+M\sqrt{C}$. We define $\beta=\mathbb{E}\left[{S_{d}}\right]-C$ (suppressing
the dependence on $d$) and note that $\beta\geq M\sqrt{C}$. We will need the
following claim.
###### Claim 3.
For all $1\leq d\leq k$ and all integers $t\geq 1$ we have that,
$\frac{\Pr\left[S_{d}=t+1\right]}{\Pr\left[S_{d}=t\right]}\leq\frac{\Pr\left[S_{d}=t\right]}{\Pr\left[S_{d}=t-1\right]}\;.$
###### Proof.
We define the sets
$A_{t}=\left\\{(a_{1},\ldots,a_{d})\in\mathbb{N}_{0}^{d}\;\middle|\;\sum_{i=1}^{d}a_{i}=t\right\\}$
and get that
$\Pr\left[\sum_{i=1}^{d}\mathcal{Y}_{i}=t\right]=\sum_{(a_{1},\ldots,a_{d})\in
A_{t}}\prod_{i=1}^{d}\ \alpha_{i}^{a_{i}}(1-\alpha_{i})\;.$
We note that the result is equivalent to showing that
$\Pr\left[S_{d}=t+1\right]\Pr\left[S_{d}=t-1\right]\leq\Pr\left[S_{d}=t\right]^{2}$,
which in turn is equivalent to
$\sum_{(a,b)\in A_{t+1}\times
A_{t-1}}\prod_{i=1}^{d}\alpha_{i}^{a_{i}+b_{i}}(1-\alpha_{i})^{2}\leq\sum_{(a,b)\in
A_{t}\times A_{t}}\prod_{i=1}^{d}\alpha_{i}^{a_{i}+b_{i}}(1-\alpha_{i})^{2}.$
To see that this latter inequality holds, let $s\in A_{2t}$ and define the map
$g_{s}:\mathbb{N}_{0}\to\mathbb{N}_{0}$ by $g_{s}(i)=|\{(a,b)\in
A_{i}\times A_{2t-i}\mid a+b=s\}|$. We note that $g_{s}(i)>0$ exactly when
$i\in\\{0,1,\dots,2t\\}$. The desired inequality is then equivalent to
$\sum_{s\in A_{2t}}g_{s}(t+1)\prod_{i=1}^{d}\alpha_{i}^{s_{i}}\leq\sum_{s\in
A_{2t}}g_{s}(t)\prod_{i=1}^{d}\alpha_{i}^{s_{i}}.$
We will show that $g_{s}$ is log-concave for each $s\in A_{2t}$. As $g_{s}$ is
clearly symmetric around $i=t$, it will in particular follow that
$g_{s}(t+1)\leq g_{s}(t)$ which then leads to the desired inequality. To show
that $g_{s}$ is log-concave, we note that it is a convolution of log-concave
functions. Indeed, fix $s$ and define for $1\leq j\leq d$, the map
$h_{j}:\mathbb{N}_{0}\to\mathbb{N}_{0}$ by $h_{j}(i)=1$ if $0\leq i\leq s_{j}$
and $h_{j}(i)=0$ otherwise. Then each $h_{j}$ is log-concave, and moreover,
$g_{s}$ is the convolution $g_{s}=h_{1}*\cdots*h_{d}$, i.e.,
$g_{s}(i)=\sum_{\begin{subarray}{c}a\in\mathbb{Z}^{d}\\
a_{1}+\cdots+a_{d}=i\end{subarray}}\prod_{j=1}^{d}h_{j}(a_{j}).$
It is a standard fact that the convolution of log-concave functions is again
log-concave, and the desired inequality follows. ∎
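Claim 3 (log-concavity of the law of $S_{d}$) is also easy to confirm numerically. The following sketch (our illustration, not part of the proof) convolves geometric pmfs and checks that the consecutive ratios are non-increasing:

```python
import numpy as np

def geometric_pmf(alpha, tmax):
    t = np.arange(tmax + 1)
    return (alpha ** t) * (1.0 - alpha)            # Pr[Y = t] = alpha^t (1 - alpha)

def sum_pmf(alphas, tmax):
    p = np.array([1.0])
    for a in alphas:                               # pmf of a sum = convolution
        p = np.convolve(p, geometric_pmf(a, tmax))[: tmax + 1]
    return p

p = sum_pmf([0.2, 0.5, 0.7, 0.35], tmax=60)
ratios = p[1:] / p[:-1]                            # Pr[S = t+1] / Pr[S = t]
assert np.all(np.diff(ratios) <= 1e-9)             # non-increasing, as in Claim 3
```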
Now let $\ell_{d}\in\mathbb{N}$ be the minimal integer satisfying that
$\Pr\left[S_{d}=C-1\right]/\Pr\left[S_{d}=C-1-\ell_{d}\right]\geq 2$.
Combining Claim 3 with the definition of $\ell_{d}$ we get that,
$\displaystyle\Pr\left[S_{d}<C\right]$
$\displaystyle=\sum_{t=1}^{C}\Pr\left[S_{d}=C-t\right]\geq\sum_{t=1}^{\ell_{d}}\Pr\left[S_{d}=C-t\right]\geq\frac{\ell_{d}}{2}\Pr\left[S_{d}=C-1\right]\;,$
$\displaystyle\Pr\left[S_{d}<C\right]$
$\displaystyle=\sum_{t=1}^{C}\Pr\left[S_{d}=C-t\right]\leq\sum_{r=0}^{\lceil{C/\ell_{d}}\rceil}\ell_{d}\Pr\left[S_{d}=C-1-r\ell_{d}\right]$
$\displaystyle\leq\ell_{d}\Pr\left[S_{d}=C-1\right]\sum_{r=0}^{\infty}2^{-r}=2\ell_{d}\Pr\left[S_{d}=C-1\right]\;,$
$\displaystyle\mathbb{E}\left[{(C-S_{d})[S_{d}<C]}\right]$
$\displaystyle=\sum_{t=1}^{C}t\Pr\left[S_{d}=C-t\right]\leq\sum_{r=0}^{\lceil{C/\ell_{d}}\rceil}\left(r\ell_{d}^{2}+\frac{\ell_{d}(\ell_{d}+1)}{2}\right)\Pr\left[S_{d}=C-1-r\ell_{d}\right]$
$\displaystyle\leq\sum_{r=0}^{\lceil{C/\ell_{d}}\rceil}\left(r\ell_{d}^{2}+\frac{\ell_{d}(\ell_{d}+1)}{2}\right)2^{-r}\Pr\left[S_{d}=C-1\right]\leq
4\ell_{d}^{2}\Pr\left[S_{d}=C-1\right]\;,$
$\displaystyle\mathbb{E}\left[{(C-S_{d})[S_{d}<C]}\right]$
$\displaystyle=\sum_{t=1}^{C}t\Pr\left[S_{d}=C-t\right]\geq\sum_{t=1}^{\ell_{d}}t\Pr\left[S_{d}=C-1-t\right]\geq\frac{\ell_{d}^{2}}{4}\Pr\left[S_{d}=C-1\right]\;.$
From this we get that
$\frac{\mathbb{E}\left[{(C-S_{d})[S_{d}<C]}\right]}{8\ell_{d}}\leq
f_{d}\leq\frac{8\mathbb{E}\left[{(C-S_{d})[S_{d}<C]}\right]}{\ell_{d}}$. Now
it is clear that
$\mathbb{E}\left[{(C-S_{d})[S_{d}<C]}\right]\geq\mathbb{E}\left[{(C-S_{k})[S_{k}<C]}\right]$
and we will argue that
$\mathbb{E}\left[{(C-S_{k})[S_{k}<C]}\right]\geq\tfrac{\varepsilon\mu}{2}\geq\tfrac{\varepsilon
C}{4}$. This will imply that $f_{d}\geq\tfrac{\varepsilon C}{32\ell_{d}}$.
Using Theorem 12 we get that $\Pr\left[S_{k}\leq
C-t\right]\geq\frac{\sum_{j\in[m]}\left[{X^{(j)}_{k}\leq
t}\right]}{m}-m^{-1/2+o(1)}$ for all $1\leq t\leq C$ with probability
$1-m^{-\gamma}$, and we know that
$\sum_{t=1}^{C}\frac{\sum_{j\in[m]}\left[{X^{(j)}_{k}\leq
t}\right]}{m}=\varepsilon\mu$, so fixing such an event gives us that,
$\displaystyle\mathbb{E}\left[{(C-S_{k})[S_{k}<C]}\right]$
$\displaystyle=\sum_{t=1}^{C}\Pr\left[S_{k}\leq C-t\right]$
$\displaystyle\geq\sum_{t=1}^{C}\left(\frac{\sum_{j\in[m]}\left[{X^{(j)}_{k}\leq
t}\right]}{m}-m^{-1/2+o(1)}\right)$ $\displaystyle=\varepsilon\mu-
Cm^{-1/2+o(1)}$ $\displaystyle\geq\tfrac{\varepsilon C}{2}-Cm^{-1/2+o(1)}$
$\displaystyle\geq\tfrac{\varepsilon C}{4}\;.$
Here we have used that $\varepsilon\leq 1$ and that $1/\varepsilon=m^{o(1)}$.
Now we just need to upper bound $\ell_{d}$. By Claim 3 we get that
$\ell_{d}\leq\lceil{\frac{\log(2)}{\log\left(\tfrac{\Pr\left[S_{d}=C\right]}{\Pr\left[S_{d}=C-1\right]}\right)}}\rceil$,
so we want to lower bound
$\frac{\Pr\left[S_{d}=C\right]}{\Pr\left[S_{d}=C-1\right]}$. To do this we
will define exponentially tilted variables $(V_{i})_{1\leq i\leq d}$. Let
$\lambda\in\mathbb{R}$ satisfying $\mathbb{E}[e^{\lambda S_{d}}]<\infty$ be a
parameter which will be determined later. We define $V_{i}$ by
$\Pr\left[V_{i}=t\right]=\frac{\Pr\left[\mathcal{Y}_{i}=t\right]e^{\lambda
t}}{\mathbb{E}\left[{e^{\lambda\mathcal{Y}_{i}}}\right]}$ for $1\leq i\leq d$.
Clearly, this is well-defined since
$\sum_{t=0}^{\infty}\Pr\left[\mathcal{Y}_{i}=t\right]e^{\lambda
t}=\mathbb{E}\left[{e^{\lambda\mathcal{Y}_{i}}}\right]$. As pointed out in
[AAKT21], each $V_{i}$ is also a geometric random variable (with parameter
$\alpha_{i}e^{\lambda}$) and,
$\displaystyle\Pr\left[S_{d}=C-t\right]=\frac{\mathbb{E}\left[{e^{\lambda\sum_{i=1}^{d}\mathcal{Y}_{i}}}\right]}{e^{\lambda
C}}e^{\lambda t}\Pr\left[\sum_{i=1}^{d}V_{i}=C-t\right].$ (23)
for all integers $t$. Moreover, there is a unique $\lambda$ maximizing
$\lambda
C-\log\mathbb{E}\left[{e^{\lambda\sum_{i=1}^{d}\mathcal{Y}_{i}}}\right]$, and
with this choice of $\lambda$, it holds that
$\sum_{i=1}^{d}\mathbb{E}\left[{V_{i}}\right]=C$. It is easy to see that
$\lambda<0$ since $\beta>0$. We start by noticing that
$\sum_{i=1}^{d}\operatorname*{Var}\left[{V_{i}}\right]\geq\sum_{i=1}^{d}\mathbb{E}\left[{V_{i}}\right]=C$,
and that
$\mathbb{E}\left[{\left|{V_{i}-\mathbb{E}\left[{V_{i}}\right]}\right|^{3}}\right]\leq
25\mathbb{E}\left[{V_{i}}\right]$ by the same reasoning that gave us that
$\mathbb{E}\left[{\left|{\mathcal{Y}_{i}-\mu_{i}}\right|^{3}}\right]\leq
25\mathbb{E}\left[{\mathcal{Y}_{i}}\right]$ since $V_{i}$ is geometrically
distributed with parameter $\alpha_{i}e^{\lambda}<\alpha_{i}$.
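The exponential tilting identity is easy to verify numerically: tilting a geometric distribution by $e^{\lambda t}$ yields another geometric distribution with parameter $\alpha e^{\lambda}$. A short sketch (our illustration):

```python
import numpy as np

alpha, lam, tmax = 0.6, -0.3, 120                  # lambda < 0, as in the proof
t = np.arange(tmax + 1)
p = (alpha ** t) * (1.0 - alpha)                   # Pr[Y = t]
tilted = p * np.exp(lam * t)
tilted /= tilted.sum()                 # Pr[V = t] = Pr[Y = t] e^{lam t} / E[e^{lam Y}]
a2 = alpha * np.exp(lam)                           # claimed parameter of V
q = (a2 ** t) * (1.0 - a2)
assert np.allclose(tilted, q, atol=1e-12)
```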
We will also need the following lemma by Aamand et al. [AAKT21]. We state a
simplified version of their lemma which covers our use case.
###### Lemma 19.
Let $X_{1},\ldots,X_{d}$ be independent geometric distributed random variables
with $\operatorname*{Var}\left[{X_{i}}\right]=\sigma_{i}^{2}>0$ and
$\mathbb{E}\left[{\left|{X_{i}-\mathbb{E}\left[{X_{i}}\right]}\right|^{3}}\right]=\rho_{i}<\infty$,
and let $\sigma^{2}=\sum_{i=1}^{d}\sigma_{i}^{2}$. Then for every $t$ where
$\mu+t\sigma$ is an integer,
$\displaystyle\left|{\Pr\left[X=\mu+t\sigma\right]-\frac{1}{\sqrt{2\pi}\sigma}e^{-t^{2}/2}}\right|\leq
K_{2}\left(\frac{\sum_{i=1}^{d}\rho_{i}}{\sigma^{3}}\right)^{2}\;.$
where $K_{2}$ is a universal constant.
We will also need the following claim. The proof is a bit technical, so we
defer it to the end of the section.
###### Claim 4.
If $\beta\geq C$ then,
$\displaystyle\frac{\Pr\left[S_{d}=C\right]}{\Pr\left[S_{d}=C-1\right]}\geq
e^{1/8}\;,$
and if $\beta<C$ then,
$\displaystyle\frac{\Pr\left[S_{d}=C\right]}{\Pr\left[S_{d}=C-1\right]}\geq
e^{\tfrac{1}{8}\beta/C}\;,$
and
$\displaystyle\frac{\Pr\left[S_{d}=C-1\right]}{\Pr\left[S_{d}=C-1-\tfrac{1}{4}\lceil{\tfrac{C}{\beta}}\rceil\right]}<2\;.$
If $\beta\geq\tfrac{1}{4}C$ then using Claim 4 we get that
$\ell_{d}\leq\lceil{32\log(2)}\rceil$, which implies that
$f_{d}\geq\frac{\varepsilon C}{32\lceil{32\log(2)}\rceil}\geq L\varepsilon C$.
So we only need to consider the case where $\beta<\tfrac{1}{4}C$. We use
Claim 4 to get that $\ell_{d}\leq\lceil{8\log(2)\tfrac{C}{\beta}}\rceil\leq
7\tfrac{C}{\beta}$, which implies that $f_{d}\geq\frac{\varepsilon\beta}{224}$.
We now just need to lower bound $\beta$. From Claim 4 we know that
$\ell_{d}>\tfrac{1}{4}\lceil{\tfrac{C}{\beta}}\rceil\geq\tfrac{C}{4\beta}>1$,
so by log-concavity $\Pr\left[S_{d}=C\right]\leq 2\Pr\left[S_{d}=C-1\right]$,
and we get that
$\mathbb{E}\left[{(C-S_{d})[S_{d}<C]}\right]\geq\frac{\ell_{d}^{2}}{8}\Pr\left[S_{d}=C\right]$.
We will argue that $\mathbb{E}\left[{(C-S_{d})[S_{d}<C]}\right]\leq
2\varepsilon C$. Using Theorem 12 we get that $\Pr\left[S_{d}\leq
C-t\right]\leq\frac{\sum_{j\in[m]}\left[{X^{(j)}_{k}\leq
t}\right]}{m}+m^{-1/2+o(1)}$ for all $1\leq t\leq C$ with probability
$1-m^{-\gamma}$, and we know that
$\sum_{t=1}^{C}\frac{\sum_{j\in[m]}\left[{X^{(j)}_{k}\leq
t}\right]}{m}=\varepsilon\mu$, so fixing such an event gives us that,
$\displaystyle\mathbb{E}\left[{(C-S_{d})[S_{d}<C]}\right]$
$\displaystyle=\sum_{t=1}^{C}\Pr\left[S_{d}\leq C-t\right]$
$\displaystyle\leq\sum_{t=1}^{C}\left(\frac{\sum_{j\in[m]}\left[{X^{(j)}_{k}\leq
t}\right]}{m}+m^{-1/2+o(1)}\right)$
$\displaystyle=\varepsilon\mu+Cm^{-1/2+o(1)}$ $\displaystyle\leq\varepsilon
C+Cm^{-1/2+o(1)}$ $\displaystyle\leq 2\varepsilon C\;.$
Here we have used that $1/\varepsilon=m^{o(1)}$. Combining it all, we have that,
$\displaystyle\Pr\left[S_{d}=C\right]\leq 16\frac{\varepsilon
C}{\ell_{d}^{2}}\leq 256\frac{\varepsilon\beta^{2}}{C}\;.$
We will now prove that
$\displaystyle\Pr\left[S_{d}=C\right]\geq\frac{\exp\left(-\tfrac{\beta^{2}}{C}\right)}{4\sqrt{C}}\;.$
(24)
This will lead to the desired result. Indeed, combining with the bounds above,
we then obtain that
$\frac{\exp\left(-\tfrac{\beta^{2}}{C}\right)}{\sqrt{C}}\leq 1024\,\frac{\varepsilon\beta^{2}}{C}\;,$
or $\Delta e^{\Delta}\geq\frac{1}{1024\varepsilon\sqrt{C}}$, where we have put
$\Delta=\beta^{2}/C$. Then
$\Delta\geq\tfrac{1}{1024}\log\left(\frac{1}{\varepsilon\sqrt{C}}\right)$, so
that
$\beta\geq\tfrac{1}{32}\sqrt{C\log\left(\frac{1}{\varepsilon\sqrt{C}}\right)}$,
and finally
$f_{d}=\Pr\left[S_{d}<C\right]\geq\tfrac{\varepsilon\beta}{224}\geq\tfrac{1}{7168}\varepsilon\sqrt{C\log\left(\frac{1}{\varepsilon\sqrt{C}}\right)}\geq
L\varepsilon\sqrt{C\log\left(\frac{1}{\varepsilon\sqrt{C}}\right)}\;,$
as desired.
We thus turn to prove Eq. 24. By Eq. 23 we have that,
$\displaystyle\Pr\left[S_{d}=C\right]=\frac{\mathbb{E}\left[{e^{\lambda\sum_{i=1}^{d}\mathcal{Y}_{i}}}\right]}{e^{\lambda
C}}\Pr\left[\sum_{i=1}^{d}V_{i}=C\right]\;.$ (25)
We start by focusing on bounding $\lambda
C-\log\mathbb{E}\left[{e^{\lambda\sum_{i=1}^{d}\mathcal{Y}_{i}}}\right]$.
First write
$\psi_{d}(p)=\log\mathbb{E}\left[{e^{p\sum_{i=1}^{d}\mathcal{Y}_{i}}}\right]$
and define the function $g_{d}(t)=\sup_{p}(pt-\psi_{d}(p))$ which is the
Fenchel-Legendre transform of $\psi_{d}(p)$. By our choice of $\lambda$,
$g_{d}(C)=\lambda
C-\log\mathbb{E}\left[{e^{\lambda\sum_{i=1}^{d}\mathcal{Y}_{i}}}\right]$. It
is easy to check that $g_{d}(C+\beta)=0$ and $g_{d}^{\prime}(C+\beta)=0$, and
a standard result on the Fenchel-Legendre transformations is that
$g_{d}^{\prime\prime}(t)=\frac{1}{\psi_{d}^{\prime\prime}(p_{d}(t))}$ where
$p_{d}(t)$ is the unique number such that
$g_{d}(t)=p_{d}(t)t-\psi_{d}(p_{d}(t))$. Now by Taylor’s expansion formula we
have that
$\displaystyle g_{d}(C)\leq\left(\sup_{C\leq t\leq
C+\beta}g_{d}^{\prime\prime}(t)\right)\frac{\beta^{2}}{2}=\left(\frac{1}{\inf_{C\leq
t\leq C+\beta}\psi_{d}^{\prime\prime}(p_{d}(t))}\right)\frac{\beta^{2}}{2}$
(26)
We have that
$\psi_{d}^{\prime}(p)=\sum_{i=1}^{d}\frac{\mathbb{E}\left[{\mathcal{Y}_{i}e^{p\mathcal{Y}_{i}}}\right]}{\mathbb{E}\left[{e^{p\mathcal{Y}_{i}}}\right]}$
and
$\displaystyle\psi_{d}^{\prime\prime}(p)=\sum_{i=1}^{d}\left(\frac{\mathbb{E}\left[{\mathcal{Y}_{i}^{2}e^{p\mathcal{Y}_{i}}}\right]}{\mathbb{E}\left[{e^{p\mathcal{Y}_{i}}}\right]}-\left(\frac{\mathbb{E}\left[{\mathcal{Y}_{i}e^{p\mathcal{Y}_{i}}}\right]}{\mathbb{E}\left[{e^{p\mathcal{Y}_{i}}}\right]}\right)^{2}\right)\geq\sum_{i=1}^{d}\frac{\mathbb{E}\left[{\mathcal{Y}_{i}e^{p\mathcal{Y}_{i}}}\right]}{\mathbb{E}\left[{e^{p\mathcal{Y}_{i}}}\right]}=\psi_{d}^{\prime}(p).$
Now, $p_{d}(t)\geq\lambda$ when $C\leq t\leq C+\beta$. This implies that
$\psi_{d}^{\prime\prime}(p_{d}(t))\geq\psi_{d}^{\prime}(\lambda)=C$ when $C\leq
t\leq C+\beta$. Combining this with Eq. 25 and Eq. 26 we get that
$\displaystyle\Pr\left[S_{d}=C\right]\geq
e^{-\tfrac{\beta^{2}}{2C}}\Pr\left[\sum_{i=1}^{d}V_{i}=C\right]\geq
e^{-\tfrac{\beta^{2}}{C}}\Pr\left[\sum_{i=1}^{d}V_{i}=C\right]\;.$ (27)
To complete the proof of Eq. 24, it thus suffices to show that
$\Pr\left[\sum_{i=1}^{d}V_{i}=C\right]\geq\tfrac{1}{4\sqrt{C}}$. We use Lemma
19 to get that,
$\displaystyle\Pr\left[\sum_{i=1}^{d}V_{i}=C\right]\geq\frac{1}{\sqrt{2\pi\sum_{i=1}^{d}\operatorname*{Var}\left[{V_{i}}\right]}}-K_{2}\left(\frac{\sum_{i=1}^{d}\mathbb{E}\left[{\left|{V_{i}-\mathbb{E}\left[{V_{i}}\right]}\right|^{3}}\right]}{\left(\sum_{i=1}^{d}\operatorname*{Var}\left[{V_{i}}\right]\right)^{3/2}}\right)^{2}$
Now we use that
$\mathbb{E}\left[{V_{i}}\right]\leq\operatorname*{Var}\left[{V_{i}}\right]\leq
2\mathbb{E}\left[{V_{i}}\right]$,
$\left|{V_{i}-\mathbb{E}\left[{V_{i}}\right]}\right|^{3}\leq
25\mathbb{E}\left[{V_{i}}\right]$, and
$\sum_{i=1}^{d}\mathbb{E}\left[{V_{i}}\right]=C$ to get that,
$\displaystyle\Pr\left[\sum_{i=1}^{d}V_{i}=C\right]\geq\frac{1}{\sqrt{4\pi
C}}-25^{2}K_{2}\frac{1}{C^{2}}$
We know that $C\geq\tfrac{1}{4L}$ so if we choose $L$ sufficiently small we
get that,
$\displaystyle\Pr\left[\sum_{i=1}^{d}V_{i}=C\right]\geq\frac{1}{4\sqrt{C}}\;.$
This leads to the desired bound.
We finish the section by proving Claim 4.
###### Proof of Claim 4.
We start by using Eq. 23 to get that,
$\displaystyle\frac{\Pr\left[S_{d}=C\right]}{\Pr\left[S_{d}=C-1\right]}=e^{-\lambda}\frac{\Pr\left[\sum_{i=1}^{d}V_{i}=C\right]}{\Pr\left[\sum_{i=1}^{d}V_{i}=C-1\right]}$
We want to argue that
$\frac{\Pr\left[\sum_{i=1}^{d}V_{i}=C\right]}{\Pr\left[\sum_{i=1}^{d}V_{i}=C-1\right]}\geq\max\left\\{e^{-1/8},e^{-\tfrac{1}{8}\beta/C}\right\\}$.
First we use Lemma 19 to get that,
$\displaystyle\frac{\Pr\left[\sum_{i=1}^{d}V_{i}=C\right]}{\Pr\left[\sum_{i=1}^{d}V_{i}=C-1\right]}\geq\frac{\tfrac{1}{\sqrt{2\pi\sum_{i=1}^{d}\operatorname*{Var}\left[{V_{i}}\right]}}-K_{2}\left(\tfrac{\sum_{i=1}^{d}\mathbb{E}\left[{\left|{V_{i}-\mathbb{E}\left[{V_{i}}\right]}\right|^{3}}\right]}{\left(\sum_{i=1}^{d}\operatorname*{Var}\left[{V_{i}}\right]\right)^{3/2}}\right)^{2}}{\tfrac{1}{\sqrt{2\pi\sum_{i=1}^{d}\operatorname*{Var}\left[{V_{i}}\right]}}e^{-1/(2\sum_{i=1}^{d}\operatorname*{Var}\left[{V_{i}}\right])}+K_{2}\left(\tfrac{\sum_{i=1}^{d}\mathbb{E}\left[{\left|{V_{i}-\mathbb{E}\left[{V_{i}}\right]}\right|^{3}}\right]}{\left(\sum_{i=1}^{d}\operatorname*{Var}\left[{V_{i}}\right]\right)^{3/2}}\right)^{2}}$
Now we use that
$\mathbb{E}\left[{V_{i}}\right]\leq\operatorname*{Var}\left[{V_{i}}\right]\leq
2\mathbb{E}\left[{V_{i}}\right]$,
$\left|{V_{i}-\mathbb{E}\left[{V_{i}}\right]}\right|^{3}\leq
25\mathbb{E}\left[{V_{i}}\right]$, and
$\sum_{i=1}^{d}\mathbb{E}\left[{V_{i}}\right]=C$ to get that,
$\displaystyle\frac{\tfrac{1}{\sqrt{2\pi\sum_{i=1}^{d}\operatorname*{Var}\left[{V_{i}}\right]}}-K_{2}\left(\tfrac{\sum_{i=1}^{d}\mathbb{E}\left[{\left|{V_{i}-\mathbb{E}\left[{V_{i}}\right]}\right|^{3}}\right]}{\left(\sum_{i=1}^{d}\operatorname*{Var}\left[{V_{i}}\right]\right)^{3/2}}\right)^{2}}{\tfrac{1}{\sqrt{2\pi\sum_{i=1}^{d}\operatorname*{Var}\left[{V_{i}}\right]}}e^{-1/(2\sum_{i=1}^{d}\operatorname*{Var}\left[{V_{i}}\right])}+K_{2}\left(\tfrac{\sum_{i=1}^{d}\mathbb{E}\left[{\left|{V_{i}-\mathbb{E}\left[{V_{i}}\right]}\right|^{3}}\right]}{\left(\sum_{i=1}^{d}\operatorname*{Var}\left[{V_{i}}\right]\right)^{3/2}}\right)^{2}}\geq\frac{1-25^{2}K_{2}\sqrt{8\pi\tfrac{1}{C}}}{e^{-1/(4C)}+25^{2}K_{2}\sqrt{8\pi\tfrac{1}{C}}}$
Using that $e^{-1/(4C)}\leq 1-\tfrac{1}{8C}$ we then get that,
$\displaystyle\frac{1-25^{2}K_{2}\sqrt{8\pi\tfrac{1}{C}}}{e^{-1/(4C)}+25^{2}K_{2}\sqrt{8\pi\tfrac{1}{C}}}\geq\frac{1-25^{2}K_{2}\sqrt{8\pi\tfrac{1}{C}}}{1-\tfrac{1}{8C}+25^{2}K_{2}\sqrt{8\pi\tfrac{1}{C}}}=1-\frac{2\cdot
25^{2}K_{2}\sqrt{8\pi\tfrac{1}{C}}-\tfrac{1}{8C}}{1-\tfrac{1}{8C}+25^{2}K_{2}\sqrt{8\pi\tfrac{1}{C}}}$
We know that $C\geq\tfrac{1}{4L}$, so choosing $L$ sufficiently small it holds
that
$\displaystyle\frac{2\cdot
25^{2}K_{2}\sqrt{8\pi\tfrac{1}{C}}-\tfrac{1}{8C}}{1-\tfrac{1}{8C}+25^{2}K_{2}\sqrt{8\pi\tfrac{1}{C}}}\leq\frac{2\cdot
25^{2}K_{2}\sqrt{8\pi}}{\sqrt{C}}$
This implies that,
$\displaystyle\frac{\Pr\left[\sum_{i=1}^{d}V_{i}=C\right]}{\Pr\left[\sum_{i=1}^{d}V_{i}=C-1\right]}\geq
1-\frac{2\cdot 25^{2}K_{2}\sqrt{8\pi}}{\sqrt{C}}$
Clearly, $1-\frac{2\cdot 25^{2}K_{2}\sqrt{8\pi}}{\sqrt{C}}\geq e^{-1/8}$ by
choosing $L$ small enough. We also note that, by choosing $M$ large enough,
$\displaystyle 1-\frac{2\cdot 25^{2}K_{2}\sqrt{8\pi}}{\sqrt{C}}\geq
e^{-\tfrac{4\cdot 25^{2}K_{2}\sqrt{8\pi}}{\sqrt{C}}}\geq
e^{-\tfrac{M}{8\sqrt{C}}}\geq e^{-\tfrac{\beta}{8C}}\;;$
the last inequality follows since $\beta\geq M\sqrt{C}$.
We now have to bound $\lambda$. We define the function
$h(x)=\sum_{i=1}^{d}\alpha_{i}e^{x}(1-\alpha_{i}e^{x})^{-1}\;.$
We note that
$h(0)=\sum_{i=1}^{d}\mathbb{E}\left[{\mathcal{Y}_{i}}\right]=C+\beta$. We take
the derivative of $h$ twice and get that,
$\displaystyle h^{\prime}(x)$
$\displaystyle=\sum_{i=1}^{d}\alpha_{i}e^{x}(1-\alpha_{i}e^{x})^{-2}$
$\displaystyle h^{\prime\prime}(x)$
$\displaystyle=\sum_{i=1}^{d}\alpha_{i}e^{x}(1+\alpha_{i}e^{x})(1-\alpha_{i}e^{x})^{-3}$
We note that $h^{\prime}(x)\geq 0$ and $h^{\prime\prime}(x)\geq 0$ for all $x$
so $h$ is a monotonically increasing convex function, and
$h^{\prime}(0)=\sum_{i=1}^{d}\operatorname*{Var}\left[{\mathcal{Y}_{i}}\right]\leq\sum_{i=1}^{d}2\mu_{i}=2(C+\beta)$.
If $\beta\geq C$ then, again using that $h$ is convex, we get that
$\displaystyle h(-\tfrac{1}{4})$ $\displaystyle\geq
h(0)-\tfrac{1}{4}h^{\prime}(0)\geq C+\beta-\tfrac{1}{2}(C+\beta)\geq C\;.$
Since $h$ is increasing, this implies that $\lambda\leq-\tfrac{1}{4}$, and we
get that
$\displaystyle\frac{\Pr\left[S_{d}=C\right]}{\Pr\left[S_{d}=C-1\right]}=e^{-\lambda}\frac{\Pr\left[\sum_{i=1}^{d}V_{i}=C\right]}{\Pr\left[\sum_{i=1}^{d}V_{i}=C-1\right]}\geq
e^{\tfrac{1}{4}}e^{-\tfrac{1}{8}}=e^{\tfrac{1}{8}}\;.$
If $\beta<C$ then, using that $h$ is convex, we get that
$\displaystyle h(-\tfrac{1}{4}\beta/C)$ $\displaystyle\geq
h(0)-\tfrac{\beta}{4C}h^{\prime}(0)\geq
C+\beta-\tfrac{\beta}{2C}(C+\beta)=C+\left(\tfrac{1}{2}-\tfrac{\beta}{2C}\right)\beta\geq
C\;.$
Since $h$ is increasing, this implies that $\lambda\leq-\tfrac{1}{4}\beta/C$,
and we get that
$\displaystyle\frac{\Pr\left[S_{d}=C\right]}{\Pr\left[S_{d}=C-1\right]}=e^{-\lambda}\frac{\Pr\left[\sum_{i=1}^{d}V_{i}=C\right]}{\Pr\left[\sum_{i=1}^{d}V_{i}=C-1\right]}\geq
e^{\tfrac{1}{4}\beta/C}e^{-\tfrac{1}{8}\beta/C}=e^{\tfrac{1}{8}\beta/C}\;.$
We now focus on the upper bound. We use Eq. 23 to get that,
$\displaystyle\frac{\Pr\left[S_{d}=C-1\right]}{\Pr\left[S_{d}=C-1-\tfrac{1}{4}\lceil{\tfrac{C}{\beta}}\rceil\right]}=e^{-\lambda\tfrac{1}{4}\lceil{\tfrac{C}{\beta}}\rceil}\frac{\Pr\left[\sum_{i=1}^{d}V_{i}=C-1\right]}{\Pr\left[\sum_{i=1}^{d}V_{i}=C-1-\tfrac{1}{4}\lceil{\tfrac{C}{\beta}}\rceil\right]}$
We start by lower bounding $\lambda$. We first note that $h^{\prime}(x)\geq
h(x)$ for all $x$. Using that $h$ is convex we get that,
$\displaystyle C+\beta=h(0)\geq
h(-\beta/C)+\tfrac{\beta}{C}h^{\prime}(-\beta/C)\geq\frac{C+\beta}{C}h(-\beta/C)\;.$
This implies that $h(-\beta/C)\leq C$ and hence $\lambda\geq-\beta/C$. Now we
will bound
$\frac{\Pr\left[\sum_{i=1}^{d}V_{i}=C-1\right]}{\Pr\left[\sum_{i=1}^{d}V_{i}=C-1-\tfrac{1}{4}\lceil{\tfrac{C}{\beta}}\rceil\right]}$.
We will again use Lemma 19.
$\displaystyle\frac{\Pr\left[\sum_{i=1}^{d}V_{i}=C-1\right]}{\Pr\left[\sum_{i=1}^{d}V_{i}=C-1-\tfrac{1}{4}\lceil{\tfrac{C}{\beta}}\rceil\right]}$
$\displaystyle\leq\frac{\tfrac{1}{\sqrt{2\pi\sum_{i=1}^{d}\operatorname*{Var}\left[{V_{i}}\right]}}e^{-1/(2\sum_{i=1}^{d}\operatorname*{Var}\left[{V_{i}}\right])}+K_{2}\left(\tfrac{\sum_{i=1}^{d}\mathbb{E}\left[{\left|{V_{i}-\mathbb{E}\left[{V_{i}}\right]}\right|^{3}}\right]}{\left(\sum_{i=1}^{d}\operatorname*{Var}\left[{V_{i}}\right]\right)^{3/2}}\right)^{2}}{\tfrac{1}{\sqrt{2\pi\sum_{i=1}^{d}\operatorname*{Var}\left[{V_{i}}\right]}}e^{-(1+\tfrac{1}{4}\lceil{\tfrac{C}{\beta}}\rceil)^{2}/(2\sum_{i=1}^{d}\operatorname*{Var}\left[{V_{i}}\right])}-K_{2}\left(\tfrac{\sum_{i=1}^{d}\mathbb{E}\left[{\left|{V_{i}-\mathbb{E}\left[{V_{i}}\right]}\right|^{3}}\right]}{\left(\sum_{i=1}^{d}\operatorname*{Var}\left[{V_{i}}\right]\right)^{3/2}}\right)^{2}}$
$\displaystyle\leq\frac{\tfrac{1}{\sqrt{2\pi
C}}e^{-1/(2\sum_{i=1}^{d}\operatorname*{Var}\left[{V_{i}}\right])}+25^{2}K_{2}\tfrac{1}{C}}{\tfrac{1}{\sqrt{2\pi
C}}e^{-(1+\tfrac{1}{4}\lceil{\tfrac{C}{\beta}}\rceil)^{2}/(2\sum_{i=1}^{d}\operatorname*{Var}\left[{V_{i}}\right])}-25^{2}K_{2}\tfrac{1}{C}}$
Now we note that, since $\beta\geq M\sqrt{C}$, we have
$(1+\tfrac{1}{4}\lceil{\tfrac{C}{\beta}}\rceil)^{2}/(2\sum_{i=1}^{d}\operatorname*{Var}\left[{V_{i}}\right])\leq\tfrac{1}{12}$.
We then get that,
$\displaystyle\frac{\Pr\left[\sum_{i=1}^{d}V_{i}=C-1\right]}{\Pr\left[\sum_{i=1}^{d}V_{i}=C-1-\tfrac{1}{4}\lceil{\tfrac{C}{\beta}}\rceil\right]}$
$\displaystyle\leq\frac{\tfrac{1}{\sqrt{2\pi
C}}+25^{2}K_{2}\tfrac{1}{C}}{\tfrac{1}{\sqrt{2\pi
C}}e^{-1/12}-25^{2}K_{2}\tfrac{1}{C}}\leq e^{1/6}$
The last inequality follows from $C\geq\tfrac{1}{4L}$ and choosing $L$ small
enough. We then get that,
$\displaystyle\frac{\Pr\left[S_{d}=C-1\right]}{\Pr\left[S_{d}=C-1-\tfrac{1}{4}\lceil{\tfrac{C}{\beta}}\rceil\right]}$
$\displaystyle=e^{-\lambda\tfrac{1}{4}\lceil{\tfrac{C}{\beta}}\rceil}\frac{\Pr\left[\sum_{i=1}^{d}V_{i}=C-1\right]}{\Pr\left[\sum_{i=1}^{d}V_{i}=C-1-\tfrac{1}{4}\lceil{\tfrac{C}{\beta}}\rceil\right]}$
$\displaystyle\leq
e^{\tfrac{\beta}{C}\tfrac{1}{4}\lceil{\tfrac{C}{\beta}}\rceil}e^{1/6}$
$\displaystyle\leq e^{1/2+1/6}$ $\displaystyle<2$
∎
## 6 The Number of Bins Visited During an Insertion
This section is dedicated to proving the part of Theorem 3 concerning
insertions, which we restate below.
###### Theorem 20.
Let $n,m\in\mathbb{N}$ and $0<\varepsilon<1$ with $1/\varepsilon=n^{o(1)}$.
Let $C=(1+\varepsilon)n/m$. Suppose we insert $n$ balls into $m$ bins, each of
capacity $C$, using consistent hashing with bounded loads and virtual bins
having $k$ levels where $k=c/\varepsilon^{2}$ for $c$ a sufficiently large
universal constant. The expected number of bins visited during an insertion of
a ball is $O(1/f)$.
In fact, the proof uses only that the total number of non-full bins is
$\Theta(fm)$ with high probability, not the concrete value of $f$. Therefore
the complicated expression for $f$ will never occur in the proof of the
theorem. All we will occasionally use is the fact that the number of non-full
bins is $\Omega(\varepsilon m)$, which follows from a simple combinatorial
argument.
The section is structured as follows: We start by providing some preliminaries
for the proof of Theorem 20 in Section 6.1. In Section 6.2, we use the results
from Section 5 to provide a strengthening of Lemma 11. Finally, we provide the
proof of Theorem 20 in Section 6.3.
### 6.1 Preliminaries For the Analysis
We start by making the following definition, which will repeatedly be useful
in the analysis to follow.
###### Definition 2.
Consider any distribution of $n$ balls into $m$ bins. We say that a bin is
_close to full_ if it contains more than $(1+\varepsilon/2)n/m$ balls.
Otherwise, we say that it is _far from full_.
Suppose we distribute $n$ balls into $m$ bins each of capacity
$C=(1+\varepsilon)n/m$ using consistent hashing with bounded loads and virtual
bins. By Theorem 4, the number of non-full bins is $\Theta(fm)$ with high
probability when $k=O(1/\varepsilon^{2})$ is sufficiently large. We claim that
it also holds that the number of far from full bins is $\Theta(fm)$ with high
probability. To see this, suppose that after distributing the $n$ balls into
the $m$ bins of capacity $C=(1+\varepsilon)n/m$ each, we reduce the capacity
of each bin to $C_{0}=(1+\varepsilon/2)n/m$. This requires forwarding balls
from the now overflowing bins and this forwarding can only increase the number
of bins containing $(1+\varepsilon/2)n/m$ balls. By Theorem 4, and with
$\varepsilon_{0}=\varepsilon/2$, the number of non-full bins after the
relocating is $\Theta(f_{0}m)$, where
$f_{0}=\begin{cases}\varepsilon_{0}C_{0},&C_{0}\leq\log(1/\varepsilon_{0})\\\
\varepsilon_{0}\sqrt{C_{0}\log\left(\frac{1}{\varepsilon_{0}\sqrt{C_{0}}}\right)},&\log(1/\varepsilon_{0})<C_{0}\leq\frac{1}{2\varepsilon_{0}^{2}}\\\
1,&\frac{1}{2\varepsilon_{0}^{2}}\leq C_{0}.\end{cases}$
But clearly, $f_{0}=\Theta(f)$, so we conclude that the number of far from
full bins before modifying the system is $\Theta(fm)$ with high probability.
Summing up, we have the following corollary to Theorem 4.
###### Corollary 21.
In the setting of Theorem 20, the number of far from full bins is $\Theta(fm)$
with high probability, i.e., with probability $1-n^{-\gamma}$ for every
$\gamma=O(1)$.
Finally, recall Definition 1: The _run_ at a given level $i$ containing some
virtual bin $b$ is the maximal interval $I$ at level $i$ which contains $b$
and satisfies that all bins lying in $I$ get full at level $i$.
### 6.2 High Probability Bound on the Number of Bins Visited in an Insertion
This section is dedicated to proving the following strengthening of Lemma
11.
###### Theorem 22.
Let $n,m\in\mathbb{N}$ and $0<\varepsilon<1$ with $1/\varepsilon=n^{o(1)}$.
Suppose we distribute $n$ balls into $m$ bins, each of capacity
$C=(1+\varepsilon)n/m$, using consistent hashing with bounded loads and
virtual bins and $k=c/\varepsilon^{2}$ levels for a sufficiently large
constant $c$. Let $b$ be a bin at level $i$ which may be chosen dependently on
the hashing of balls and bins to levels $1,\dots,i-1$, and let $I$ be the run
at level $i$ containing $b$. Let $X$ denote the number of bins in $I$. For any
$t\geq 1/f$,
$\Pr[X\geq t]=\exp(-\Omega(tf))+O(n^{-10})$
The same statement holds even if $b$ is given an extra start load of $\lambda
tf\lceil C\varepsilon/2\rceil$ ‘artificial’ balls before the hashing of balls
and bins to level $i$, where $\lambda$ is a sufficiently small constant.
Note that in particular it follows that the number of bins visited at a given
level during an insertion is $O(\log(1/\delta)/f)$ with probability
$1-\delta$ (take $t=\Theta(\log(1/\delta)/f)$ in the bound above).
###### Proof.
Let $R$ denote the number of virtual bins in $I$ (the quantity $X$ of the theorem). By Corollary 21, the number
of far from full bins after inserting balls at level $1,\dots,i-1$ is at least
$c_{0}fm$ with high probability, where $c_{0}>0$ is some universal constant.
Furthermore, by a standard Chernoff bound, the number of balls hashing to
level $i$ is at most $2n/k$ with high probability. Here we used the assumption
that $1/\varepsilon=n^{o(1)}$, so $n\gg k\log n$. Condition on those two
events and consider the following modified process at level $i$ where (1) $b$
and every bin which was close to full after inserting the balls at level
$1,\dots,i-1$ forwards every ball it receives at level $i$, i.e., has its
remaining capacity reduced to zero (2) each far from full bin stores at most
$\lceil C\varepsilon/2\rceil$ balls from level $i$ before it starts forwarding
balls at level $i$, i.e., has its remaining capacity reduced to $\lceil
C\varepsilon/2\rceil$. Let $I^{\prime}$ denote the run containing $b$ with
such modified capacities. Letting $R^{\prime}$ denote the number of virtual
bins lying in $I^{\prime}$, it then holds that $R\leq R^{\prime}$, so it suffices to
provide a high probability upper bound on $R^{\prime}$.
Let $s\in\mathbb{N}$ be given and let $A_{s}$ be the event that $s+1\leq
R^{\prime}\leq 2s+1$. Define $J_{1}^{-}$ and $J_{1}^{+}$ to be respectively
the intervals at level $i$ ending and starting at $b$ and having length
$s/(4m)$. Similarly, let $J_{2}^{-}$ and $J_{2}^{+}$ be respectively the
intervals at level $i$ ending and starting at $b$ and having length $4s/m$. We
observe that if $A_{s}$ occurs then one of the following events must hold.
* $B_{1}$:
$J_{2}^{-}$ or $J_{2}^{+}$ contains at most $3s$ virtual bins.
* $B_{2}$:
$J_{1}^{-}$ or $J_{1}^{+}$ contains at least $s/2$ virtual bins.
* $B_{3}$:
$J_{1}^{-}$ or $J_{1}^{+}$ contains at most $c_{0}fs/8$ virtual bins which
were far from full from levels $1,\dots,i-1$.
* $B_{4}$:
$J_{2}^{-}\cup J_{2}^{+}$ contains at least $\lceil C\varepsilon/2\rceil\cdot
c_{0}fs/8$ balls.
Indeed, suppose that $A_{s}$ occurs and that none of $B_{1},B_{2},B_{3}$
occur. We show that then $B_{4}$ must occur. To see this, observe that if
$B_{1}$ does not occur, then since $I^{\prime}$ consists of at most $2s+1$
bins, $I^{\prime}\subseteq J_{2}^{-}\cup J_{2}^{+}$. Since $B_{2}$ does not
occur, $I^{\prime}$ must further fully contain $J_{1}^{-}$ or $J_{1}^{+}$.
Since $B_{3}$ does not occur, $I^{\prime}$ must then contain at least
$c_{0}fs/8$ virtual bins which were far from full from levels $1,\dots,i-1$.
Finally any ball allocated to a bin of $I^{\prime}$ must also hash to
$I^{\prime}$. Since the at least $c_{0}fs/8$ far from full bins from levels
$1,\dots,i-1$ which lie in $I^{\prime}$ each get full at level $i$ and have a
total capacity of $\lceil C\varepsilon/2\rceil\cdot c_{0}fs/8$, it follows
that at least $\lceil C\varepsilon/2\rceil\cdot c_{0}fs/8$ balls must hash to
$I^{\prime}\subseteq J_{2}^{-}\cup J_{2}^{+}$. This is exactly the event
$B_{4}$.
As in the proof of Lemma 11, we can use standard Chernoff bounds to conclude
that $\Pr[B_{1}]=\exp(-\Omega(s))$, $\Pr[B_{2}]=\exp(-\Omega(s))$ and
$\Pr[B_{3}]=\exp(-\Omega(fs))$. For $B_{4}$, we observe that the expected
number of balls, $\mu$, hashing to $J_{2}^{-}\cup J_{2}^{+}$ is upper bounded
by $2n/k\cdot 8s/m=O(Cs/k)$. As $f=\Omega(\varepsilon)$, we may assume that
$k\geq c^{\prime}/(\varepsilon f)$ for any constant $c^{\prime}$. Thus,
choosing $c^{\prime}$ sufficiently large, it follows that $\mu\leq
Cs\varepsilon fc_{0}/32$. Using another Chernoff bound, it follows that
$\Pr[B_{4}]=\exp(-\Omega(fs))$. In conclusion, if $s\geq 1/f$, it holds that
$\Pr[A_{s}]=\exp(-\Omega(fs))$ and the desired result follows as in the proof
of Lemma 11.
Finally, it is easy to modify the constants in the above argument so that it
carries through even when $b$ is given an extra start load of $\lambda
tf\lceil C\varepsilon/2\rceil$ balls for a sufficiently small constant
$\lambda$, and this gives the final statement of the theorem. ∎
### 6.3 The Proof of Theorem 20
In this section, we provide the proof of Theorem 20. In order to do so, we
first require a technical lemma which for a given virtual bin, $b$, bounds the
number of balls that are either placed in $b$ or forwarded from $b$ at level
$i$. The technique used to prove this lemma will be used for the final proof
of Theorem 20, but in a more sophisticated way. As such, the lemma below
serves as a nice warm up to the proof of Theorem 20. We start out by choosing
$s^{*}=O(\log n/f)$ sufficiently large, such that Theorem 22 yields that for a
bin $b$ at level $i$, the run containing $b$ at level $i$ (see Definition 1)
has length at most $s^{*}$ with probability $1-1/n^{10}$.
###### Lemma 23.
Let $\lambda=O(1)$ be any constant. Let $b$ be a virtual bin at level $i$ that
may depend on the distribution of balls into bins at level $1,\dots,i-1$. Let
$n_{i}$ denote the number of balls hashing to level $i$ and suppose that
$n/(2k)\leq n_{i}\leq 2n/k$ where $k=O(1/\varepsilon^{2})$ is sufficiently
large (depending on $\lambda$). Let $Z$ denote the number of the $n_{i}$ balls
hashing to level $i$ that either are placed in $b$ or are forwarded from $b$.
Define $\alpha=\lceil C\varepsilon/2\rceil$. For any $\ell\geq 1/5$ satisfying
that $\alpha\ell$ is an integer (the constant $5$ is arbitrary), it holds
that
$\Pr[Z\geq\alpha\ell]\leq e^{-\lambda\ell}+1/n^{10}.$
###### Proof.
We define $A_{\ell}$ to be the event that $Z\geq\alpha\ell$. When upper
bounding the probability of $A_{\ell}$ we may assume that every bin which was
close to full at level $i-1$ forwards all balls landing in it at level $i$. We
may further assume that any bin which was far from full at level $i-1$ stores
exactly $\alpha=\lceil C\varepsilon/2\rceil$ balls and then starts forwarding
balls. Let $Z^{\prime}$ denote the number of balls landing in $b$ or being
forwarded from $b$ at level $i$ in this modified process. Then clearly,
$Z^{\prime}\geq Z$ so $\Pr[Z\geq\alpha\ell]\leq\Pr[Z^{\prime}\geq\alpha\ell]$.
Figure 1: $s$ bins that are far from full and $\alpha s+\alpha\ell$ balls. The
bins are represented as boxes and the balls as disks.
Next note that if $Z^{\prime}\geq\alpha\ell$, then there must be an integer
$s\geq 0$ and an interval of the $i$’th level ending in $b$ which contains
exactly $s$ virtual bins which are far from full and exactly $\alpha
s+\alpha\ell$ balls. See Figure 1. Indeed, of the $\alpha\ell$ balls landing
in $b$ or being forwarded from $b$, consider the one hashing furthest behind
$b$ at level $i$, call it $x$. Let $s$ be the number of far from full bins
hashing between $x$ and $b$ at level $i$. Aside from the $\alpha\ell$ balls
landing in $b$ or being forwarded from $b$, enough balls must hash between $x$
and $b$ to put $\alpha$ balls in each of the far from full bins between $x$
and $b$, and thus the interval between $x$ and $b$ contains exactly $s$ far
from full bins and $\alpha s+\alpha\ell$ balls. We denote the event that there
exists such an interval by $A_{\ell,s}$, noting that we may then upper bound
$\Pr[A_{\ell}]\leq\sum_{s=0}^{s^{*}}\Pr[A_{\ell,s}]+1/n^{10}$. Here we used
that the run containing $b$ has length at most $s^{*}$ with probability at
least $1-1/n^{10}$. We proceed to upper bound $\Pr[A_{\ell,s}]$ for each
$0\leq s\leq s^{*}$.
Figure 2: We generate the number of balls hashing directly between $b_{i}$ and
$b_{i+1}$ sequentially. In each step the number of such balls is dominated by
a geometric distribution with parameter $q$.
So fix $s\geq 0$. We generate the sequence of the $s+1$ far from full bins
$b=b_{0},b_{1},\dots,b_{s+1}$ leading up to $b$ and the balls hashing between
them in a backwards order. Starting at $b_{0}$ we go backwards along the
cyclic order. At some point we reach a bin, $b_{1}$, and we let $X_{0}$ be the
number of balls met along the way between $b_{0}$ and $b_{1}$. We
continue this way, going backwards until we have met $s+1$ bins
$b_{1},\dots,b_{s+1}$ and for each $1\leq i\leq s$ we let $X_{i}$ be the
number of balls met in the cyclic order between $b_{i}$ and $b_{i+1}$. See
Figure 2 for an illustration of the process. Let $f$ denote the fraction of
bins which were far from full from level $1,\dots,i-1$. As we saw after
Definition 2, $f\geq\varepsilon/3$. Now when going backwards from $b_{i}$
until we get to $b_{i+1}$, the probability of meeting a ball in each step is
upper bounded by $\frac{n_{i}}{n_{i}+mf-s}\leq\frac{n_{i}}{n_{i}+mf-s^{*}}:=q$
regardless of the values of $X_{0},\dots,X_{i-1}$. Letting
$X_{0}^{\prime},\dots,X_{s}^{\prime}$ be independent geometric variables with
parameter $1-q$, $X=\sum_{i=0}^{s}X_{i}$, and
$X^{\prime}=\sum_{i=0}^{s}X_{i}^{\prime}$, it follows that for any $t>0$,
$\Pr[X\geq t]\leq\Pr[X^{\prime}\geq t]$.
If $A_{\ell,s}$ holds, then $X\geq s\alpha+\ell\alpha$, so we may upper bound
$\Pr[A_{\ell,s}]\leq\Pr[X^{\prime}\geq s\alpha+\ell\alpha].$
The expected value of $X_{i}^{\prime}$ is
$\mathbb{E}[X_{i}^{\prime}]=\frac{q}{1-q}=\frac{n_{i}}{mf-s^{*}}=O\left(\frac{n}{kmf}\right)=O\left(\frac{\alpha}{kf\varepsilon}\right)\leq\frac{\alpha}{\lambda_{0}}.$
Here $\lambda_{0}=O(1)$ is a sufficiently large constant which we will choose
later. Here we again used the assumption that $1/\varepsilon=m^{o(1)}$ and
moreover that $k=O(1/\varepsilon^{2})$ is sufficiently large. It follows that
$\mathbb{E}[X^{\prime}]\leq\frac{(s+1)\alpha}{\lambda_{0}}$. Note in particular
that we can ensure that
$\mathbb{E}[X^{\prime}]\leq\frac{s\alpha+\ell\alpha}{2}$, so that
$\Pr[X^{\prime}\geq
s\alpha+\ell\alpha]\leq\Pr\left[X^{\prime}\geq\mathbb{E}[X^{\prime}]+\frac{s\alpha+\ell\alpha}{2}\right].$
We apply Theorem 10 to bound this quantity. If we are in the case where we
are to use the second bound of Eq. 10, we obtain that
$\Pr\left[X^{\prime}\geq\mathbb{E}[X^{\prime}]+\frac{s\alpha+\ell\alpha}{2}\right]\leq\left(1-\frac{1}{1+2\alpha/\lambda_{0}}\right)^{\frac{s\alpha+\ell\alpha}{4}}.$
It is easy to check that
$\left(1-\tfrac{1}{1+2\alpha/\lambda_{0}}\right)^{\alpha}$ can be made smaller
than any sufficiently small constant, just by choosing $\lambda_{0}$
sufficiently large. Thus it follows that
$\displaystyle\Pr[X^{\prime}\geq s\alpha+\ell\alpha]\leq
e^{-\lambda_{1}(s+\ell)},$ (28)
where we can make $\lambda_{1}=O(1)$ sufficiently large. However, we may have
to use the first bound of Eq. 10 and we investigate now which bound we obtain
in this case. Relating back to Theorem 10, we define
$\mu_{0}=\mathbb{E}[X_{i}^{\prime}]\leq\alpha/\lambda_{0}$,
$A=\left(1+\tfrac{1}{2\mu_{0}}\right)\log\left(1+\tfrac{1}{2\mu_{0}}\right)$,
$\sigma^{2}=\operatorname*{Var}[X^{\prime}]=(s+1)\operatorname*{Var}[X_{i}^{\prime}]=(s+1)\mu_{0}(1+\mu_{0})$,
and $t=\frac{1}{4\sigma^{2}}\cdot\frac{s\alpha+\ell\alpha}{2}$.
If $\mu_{0}\geq 1$, then $A\leq 1/\mu_{0}$ and
$\sigma^{2}\leq(s+1)2\mu_{0}^{2}$, so that
$A\sigma^{2}\leq
2(s+1)\mu_{0}\leq\frac{2(s+1)\alpha}{\lambda_{0}}<\frac{s\alpha+\ell\alpha}{8}=t\sigma^{2},$
by choosing $\lambda_{0}$ large enough. Thus, in this case we obtain the bound
in Eq. 28. If on the other hand $\mu_{0}<1$, then
$t\geq\frac{s\alpha+\ell\alpha}{16(s+1)\mu_{0}}\geq\frac{\lambda_{0}(s+\ell)}{16(s+1)}\geq\lambda_{2}$
for a sufficiently large constant $\lambda_{2}$. Then also $W_{0}(t)$ can be
made larger than any given constant, so we obtain that the bound of Eq. 28
holds in general.
We now sum over $s$ to obtain that
$\Pr[A_{\ell}]\leq 1/n^{10}+\sum_{s=0}^{s^{*}}\Pr[A_{\ell,s}]\leq
1/n^{10}+\sum_{s=0}^{s^{*}}e^{-\lambda_{1}(s+\ell)}\leq
1/n^{10}+e^{-\lambda\ell},$
where again $\lambda$ can be made sufficiently large. This completes the
proof. ∎
With this lemma in hand we are ready to proceed with the proof of Theorem 20.
To guide the reader, we will start by providing a high level idea of how to
obtain the result as follows. First of all, it will be helpful to recall in
detail how an insertion of a ball is handled using consistent hashing with
bounded loads and virtual bins. When inserting a ball, $x$, we uniformly hash
$x$ to a random point at a random level. Suppose that the hash value of $x$,
$h(x)$, lies in the $i$’th level for some $i$. Starting at $h(x)$ we walk
along level $i$ until we arrive at a virtual bin. If the virtual bin is filled
to its capacity with balls hashing to level $1,\dots,i$, we forward a ball
from that bin at level $i$ (it could be $x$ but it could also be another ball
that hashed to level $i$ of lower priority than $x$). We repeat the step,
continuing to walk along level $i$ until we meet a new virtual bin. The first
time we meet a virtual bin, $b$, which was not filled to its capacity with
balls hashing to level $1,\dots,i$, we insert the forwarded ball and find the
smallest level $j>i$ such that the virtual bin of $b$ at level $j$ received a
ball at level $j$. If no such level exists, the insertion is completed.
Otherwise $b$ has an overflow of one ball at level $j$, and we continue the
insertion walking along level $j$ starting at $b$. Theorem 20 claims that the
expected number of bins visited during this entire process is upper bounded by
$O(1/f)$.
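To make the walk just described concrete, the following is a much-simplified Python sketch of an insertion. It keeps the level structure and the forward walk along the cyclic order, but it ignores ball priorities and the cascading re-insertions at higher levels, so it only illustrates the walk whose length Theorem 20 bounds; all function names and parameter values are our own illustrative choices, not part of the scheme's specification.

```python
import random
from bisect import bisect_left

def build_levels(m, k, rng):
    """Give each of the m bins one virtual bin per level, placed at a
    uniformly random point of the cyclic order [0, 1)."""
    return [sorted((rng.random(), b) for b in range(m)) for _ in range(k)]

def insert_ball(levels, load, capacity, rng):
    """Hash the ball to a random level and a random point, then walk
    forward until a non-full bin is found; return the bins visited."""
    level = levels[rng.randrange(len(levels))]
    start = bisect_left(level, (rng.random(),))
    visited = 0
    for off in range(len(level)):
        _, b = level[(start + off) % len(level)]
        visited += 1
        if load[b] < capacity:
            load[b] += 1
            return visited
    raise RuntimeError("all bins are full")

if __name__ == "__main__":
    rng = random.Random(0)
    m, k, eps = 1000, 100, 0.2
    n = 10 * m                            # so C = (1 + eps) * n / m
    capacity = int((1 + eps) * n / m)
    levels = build_levels(m, k, rng)
    load = [0] * m
    visits = [insert_ball(levels, load, capacity, rng) for _ in range(n)]
    print("average bins visited per insertion:", sum(visits) / n)
```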
The idea in our proof of Theorem 20 is to split the bins visited during the
insertion of $x$ into _epochs_. An epoch starts by visiting $\lceil
1/f\rceil$ virtual bins of the insertion (unless of course the insertion is
completed before that many bins have been seen). The last of these $\lceil
1/f\rceil$ virtual bins lies at some level $i$ and we finish the epoch by
completing the forwarding of balls needed at level $i$. At this point, we are
either done with the insertion or we need to forward a ball from some virtual
bin at some level $j>i$. The next epochs are similar; having finished epoch
$a-1$, in epoch $a$, we visit $\lceil 1/f\rceil$ virtual bins. At this point,
we will be at some level $\ell$ if we are not already done with the insertion.
We then finish the part of the insertion which takes place at level $\ell$.
Importantly, at the beginning of each epoch, we have just arrived at a virtual
bin at a completely fresh level.
The proof shows that during the first $\lceil 1/f\rceil$ steps of an epoch,
the probability of finishing the insertion in each step is $\Omega(f)$. The
intuition for this is that when we reach a bin $b$ at some level $i$, the
probability that $b$ is far from full from other levels than $i$ can be shown
to be $\Omega(f)$. Since the number of levels $k=O(1/\varepsilon^{2})$ is
large, the contribution from level $i$ to $b$ only fills $b$ with probability
at most $1-\Omega(1)$. Thus, the probability of not finishing the insertion during the
first $\lceil 1/f\rceil$ steps of an epoch is $(1-\Omega(f))^{\lceil
1/f\rceil}=e^{-\Omega(1)}=1-\Omega(1)$. Now conditioning on not finishing the
insertion during the first $\lceil 1/f\rceil$ steps of an epoch, we can still
show that the expected number of bins visited during the rest of the epoch is
$O(1/f)$. Letting $\mathcal{E}$ denote the event of finishing the insertion
during the first $\lceil 1/f\rceil$ steps of an epoch and $T$ the total number of
bins visited during the insertion, we have on a very high level that
$\displaystyle\mathbb{E}[T]\leq\Pr[\mathcal{E}]\lceil
1/f\rceil+\Pr[\mathcal{E}^{c}](O(1/f)+\mathbb{E}[T])=O(1/f)+\Pr[\mathcal{E}^{c}]\mathbb{E}[T]=O(1/f)+p\mathbb{E}[T],$
(29)
where $p=1-\Omega(1)$. Solving this equation, we find that
$\mathbb{E}[T]=O(1/f)$. Here it should be noted that the recursive formula
(29) is a bit too simplified. In our analysis, the $\mathbb{E}[T]$ on the left
hand side and on the right hand side of (29) will not exactly be the same. The
point is that after finishing epoch $a$, and being ready to start epoch $a+1$
at a new level $j$, we will know a bit more about the hashing of balls to
level $1,\dots,j-1$ than we did before the beginning of epoch $a$. However,
using Theorem 22, we know that it is only a relatively small fraction of the
system that we have any information about, and so we can argue that the
expectation does not change much.
With this intuition in mind, our next goal is to obtain Theorem 20.
###### Proof of Theorem 20.
As described above, we partition the insertion into _epochs_ where an epoch
consists of the following two steps.
1. 1.
We go through $\lceil 1/f\rceil$ bins of the insertion ending in a bin at some
level $\ell$.
2. 2.
We continue the insertion at level $\ell$ until we arrive at some bin $b$
which does not get full at level $\ell$.
After step $2.$ we may have to continue the insertion on some level $j>\ell$ (if
$b$ gets full at such a level). Note that the insertion will complete during an
epoch if along the way we meet a bin which does not get full on any of the
levels $1,\dots,k$. We will prove the following more technical claim which
implies Theorem 20.
###### Claim 5.
Let $D>0$ be any constant and $0\leq t\leq D\log n$. Condition on the event
that the insertion has been through $t$ epochs so far. Let $\mathcal{E}$
denote the event that we finish the insertion at one of the first $\lceil
1/f\rceil$ bins met during step 1. of epoch $t+1$. Further define $R$ to be
the random variable which counts the number of bins visited during step 2. of
epoch $t+1$ (if the insertion completes before we get to step 2. we put
$R=0$). Then
$\displaystyle\Pr[\mathcal{E}]\geq c,$ (30)
for some universal constant $c>0$ (which does not depend on $D$), and
$\displaystyle\mathbb{E}[R\mid\mathcal{E}^{c}]=O(1/f).$ (31)
Before proving the claim, we argue how the desired result follows. First of
all, choosing $D=2/c$, it follows from (30) that the probability of not
finishing the insertion during the first $D\log n$ epochs is upper bounded by
$(1-c)^{D\log n}\leq\exp(-2\log n)\leq n^{-2}.$
Conditioned on this extremely low probability event, the expected time for the
insertion is crudely and trivially upper bounded by $mk$, but $mkn^{-2}\ll 1$,
so this has no influence on the expected number of bins visited during the
insertion, as we will now formalize. For $1\leq i\leq D\log n$, we let $X_{i}$
denote the number of bins visited during the insertion starting from
epoch $i$. If the insertion finishes before epoch $i$, we let $X_{i}=0$. Let
further $\mathcal{E}_{i}$ denote the event of finishing the insertion
during step 1. of epoch $i$. Finally, let $R_{i}$ denote the number of bins
visited during step 2. of epoch $i$. Then, for any $0\leq i\leq D\log n$, it
holds that
$\mathbb{E}[X_{i}]\leq\Pr[\mathcal{E}_{i}]\cdot\lceil
1/f\rceil+\Pr[\mathcal{E}_{i}^{c}](\mathbb{E}[R_{i}\mid\mathcal{E}_{i}^{c}]+\mathbb{E}[X_{i+1}]).$
By the claim, $\Pr[\mathcal{E}_{i}^{c}]\leq 1-c$ and
$\mathbb{E}[R_{i}\mid\mathcal{E}_{i}^{c}]=O(1/f)$, so we obtain that
$\mathbb{E}[X_{i}]\leq O(1/f)+(1-c)\mathbb{E}[X_{i+1}].$
Unrolling this recursion, we obtain that
$\mathbb{E}[X_{0}]=O(1/f)+(1-c)^{i}\mathbb{E}[X_{i+1}],$
so putting $i=D\log n$, we obtain that
$\mathbb{E}[X_{0}]=O(1/f)+n^{-2}\cdot\mathbb{E}[X_{D\log n+1}]=O(1/f)$. But
$\mathbb{E}[X_{0}]$ is exactly the expected number of bins visited during an
insertion. It thus suffices to prove the claim which is the main technical
challenge of the proof.
###### Proof of Claim 5.
We split the proof into the proofs of equations (30) and (31).
### Proof of Equation (30)
It suffices to show that for each of the $\lceil 1/f\rceil$ bins visited
during step 1. of the epoch, the probability of ending the insertion at that
bin is $\Omega(f)$. More formally, we let $\mathcal{A}_{i}$ denote the event
that the $i$’th of these bins, $1\leq i\leq\lceil 1/f\rceil$ is still full,
i.e., that we do not end the insertion at the $i$’th bin, and show that
$\Pr\left[\mathcal{A}_{i}\right]\leq(1-\Omega(f))^{i}+im^{-1/2+o(1)}$. The
probability of not completing the insertion during step 1. of the epoch is
then upper bounded by $(1-\Omega(f))^{\lceil 1/f\rceil}+\lceil{1/f}\rceil
m^{-1/2+o(1)}\leq(1-\Omega(f))^{\lceil 1/f\rceil}+o(1)\leq e^{-\Omega(1)}:=c$
which is the desired result. Here we used that $1/f\leq
O(1/\varepsilon)=m^{o(1)}$.
We will condition on $\mathcal{A}_{i-1}$, so we start by making the conditioning
more precise by describing exactly how the bins met before the $i$’th bin of
the epoch at the given level received enough balls to make them full. We then
bound the probability of $\mathcal{A}_{i}$ conditioned on this history. So fix
$i$ with $1\leq i\leq\lceil 1/f\rceil$. The conditioning on
$\mathcal{A}_{i-1}$ means that we have already seen $i-1$ full bins during the
epoch. Suppose that the $i$’th bin, call it $b$, is at some level $\ell$. We
then in particular know that the number of bins we have already visited at
level $\ell$ is at most $i-1\leq 1/f$. Let $a\geq 0$ denote the number of bins
already visited on level $\ell$. Going backwards from $b:=b_{a}$, we denote
these bins $b_{a-1},\dots,b_{0}$. Thus $b_{0}$ was the first bin ever visited
at level $\ell$. Note that possibly $b_{a}=b_{0}$. The conditioning
$\mathcal{A}_{i-1}$ in particular implies that after level $\ell$, all bins
$b_{0},\dots,b_{a-1}$ got filled. We now describe how these bins got filled at
level $\ell$ as follows (see also Figure 3 for an illustration of the
process). Starting with $j=0$, if the remaining capacity of $b_{0}$ after
levels $1,\dots,\ell-1$ is $C_{0}$, we go backwards until at some point we
have met a set of bins of total remaining capacity $C^{*}$ and exactly
$C^{*}+C_{0}$ balls for some $C^{*}$. After this sequence, we insert a
question mark ?. This sequence of bins and balls describes how $b_{0}$
received its $C_{0}$ balls, and the ? indicates a yet unknown history. We next
go backwards from $b_{1}$ which has remaining capacity $C_{1}$, say. If we
arrive at $b_{0}$ before having seen $C_{1}$ balls, we simply skip past the
history of how $b_{0}$ got filled and continue the process after the ?. If we
obtain the description of how $b_{1}$ got filled at level $\ell$ before
reaching $b_{0}$, there might still be more balls hashing between $b_{0}$ and
$b_{1}$ (but no bins). In this case we insert a question mark, ?, after the
sequence of balls leading up to $b_{1}$. More generally, for $j=1,\dots,a-1$,
we go backwards from $b_{j}$ generating a sequence of balls. Whenever we reach
a bin, we go back to the nearest ? and start generating balls at that point
until we find a new bin or are done with describing the filling of $b_{j}$;
in the latter case we insert a new ?. The ? before bin $b_{0}$ has a special
status. If we ever reach it, and we still require $C_{j}$ balls to be filled,
we go backwards until we have found a set of bins of total remaining capacity
$C^{*}$ and exactly $C^{*}+C_{j}$ balls for some $C^{*}$. It should be
remarked that there is nothing probabilistic going on here. We have simply
explained a way to find the positions of a set of balls and bins which certify
how bins $b_{0},\dots,b_{a-1}$ got filled at level $\ell$. See Figure 3 for an
example of how this description of how bins $b_{0},\dots,b_{a-1}$ got filled
at level $\ell$ can look.
Figure 3: The filling of bins $b_{0},\dots,b_{3}$ at level $\ell$. The bins
are represented as boxes and the numbers within them describe their remaining
capacity at level $\ell$. The balls are represented as disks and the question
marks ? in circles.
We let $\mathcal{O}$ denote the event that bin $b$ receives more than $\lceil
C\varepsilon/2\rceil$ balls from level $\ell$. We also let $\mathcal{N}$ denote
the event that $b$ receives at least $\frac{n}{m}(1+\lceil
C\varepsilon/2\rceil)$ balls from levels other than $\ell$. We then
get that,
$\displaystyle\Pr\left[\mathcal{A}_{i}\right]\leq\Pr\left[\mathcal{A}_{i-1}\wedge\left(\mathcal{O}\vee\mathcal{N}\right)\right]\leq\Pr\left[\mathcal{A}_{i-1}\right]\Pr\left[{\mathcal{O}}\,\middle|\,{\mathcal{A}_{i-1}}\right]+\Pr\left[\mathcal{A}_{i-1}\wedge\mathcal{O}^{c}\wedge\mathcal{N}\right]\;.$
We will next show that
$\Pr\left[{\mathcal{O}}\,\middle|\,{\mathcal{A}_{i-1}}\right]=p$, where
$p=1-\Omega(1)$, and
$\Pr\left[\mathcal{A}_{i-1}\wedge\mathcal{O}^{c}\wedge\mathcal{N}\right]\leq\Pr\left[\mathcal{A}_{i-1}\right]\Pr\left[{\mathcal{O}^{c}}\,\middle|\,{\mathcal{A}_{i-1}}\right](1-c_{0}f)+m^{-1/2+o(1)}$
where $c_{0}=\Omega(1)$ is a universal constant. This will then imply that,
$\displaystyle\Pr\left[\mathcal{A}_{i}\right]\leq\Pr\left[\mathcal{A}_{i-1}\right](1-(1-p)c_{0}f)+m^{-1/2+o(1)}\leq(1-(1-p)c_{0}f)^{i}+im^{-1/2+o(1)}\;.$
We again split the proof into two parts.
##### Bounding $\Pr\left[{\mathcal{O}}\,\middle|\,{\mathcal{A}_{i-1}}\right]$:
In the following we will omit the conditioning on $\mathcal{A}_{i-1}$ from the
notation to avoid clutter. We have described how bins $b_{0},\dots,b_{a-1}$
got filled at level $\ell$. This included a tail of balls behind each bin as
well as some positions marked with ?. Let $s+1$ be the number of such ?-marks
including the mark behind bin $b_{0}$. (See Figure 3). Then $s\leq a$. Let
$X^{*}$ denote the number of balls being forwarded to $b_{a}$ from the
backmost ? before $b_{0}$ and let $X_{1},\dots,X_{s}$ denote the number of
balls forwarded to $b_{a}$ from the remaining positions marked with a ?. The
number of balls, $n_{\ell}$, hashing to level $\ell$ lies between $n/(2k)$
and $2n/k$ with probability $1-O(n^{-10})$ by a standard Chernoff bound.
Moreover, the total number of bins lying in the history described so far is
$s^{*}=O(\frac{\log n}{f})$ with probability $1-O(n^{-10})$, by Theorem 22,
including those bins landing before $b_{0}$ in the description. Now
conditioning on this history, for each $1\leq j\leq s$,
$\mathbb{E}[X_{j}]\leq\frac{n_{\ell}}{m-s^{*}}=O(C/k).$
It follows that
$\mathbb{E}\left[\sum_{j=1}^{s}X_{j}\right]=O(sC/k)=O(C/(fk))=O(C/(\varepsilon
k)).$
If we choose $k=O(1/\varepsilon^{2})$ sufficiently large, it in particular
follows that $\mathbb{E}\left[\sum_{j=1}^{s}X_{j}\right]\leq\lceil
C\varepsilon/2\rceil/20$. Thus, by Markov’s inequality,
$\displaystyle\Pr\left[\sum_{j=1}^{s}X_{j}\geq\lceil
C\varepsilon/2\rceil/2\right]\leq 1/10.$ (32)
Next, we show that $\Pr[X^{*}\geq\lceil C\varepsilon/2\rceil/2]\leq 1/10$.
From this it will follow that $\Pr[\mathcal{O}]\leq 1/5$, which is what we
need. For bounding this probability, we may use Lemma 23. To get into the
setting of that lemma, we may simply contract the interval of the cyclic order
from the most backwards ? to $b_{a}$ and remove all unresolved ? in between
except for the most backwards one. That the other places marked with ? now
cannot receive any balls only increases the probability that $X^{*}\geq t$ for
any $t$. Now we are exactly in the setting of Lemma 23, which we apply with
$\ell=1/2$ to conclude that if $k=O(1/\varepsilon^{2})$ is sufficiently large,
then
$\Pr[X^{*}\geq\lceil C\varepsilon/2\rceil/2]\leq 1/10.$
The reader may note that as an alternative to the reduction above (contracting
the so far described history of how the bins $b_{0},\dots,b_{a-1}$ received
their balls), we may simply reprove Lemma 23 in this slightly more
complicated setting. The arguments would remain exactly the same.
In conclusion, we have now argued that
$\Pr\left[{\mathcal{O}}\,\middle|\,{\mathcal{A}_{i-1}}\right]\leq 1/5$.
##### Bounding
$\Pr\left[\mathcal{A}_{i-1}\wedge\mathcal{O}^{c}\wedge\mathcal{N}\right]$:
We start by recalling notation used in Section 5.1. Let $Y_{i}$ be the
number of balls which land in bin $b$ or which are forwarded by bin $b$ on
level $i$. We define $Y_{<\ell}=\sum_{i<\ell}Y_{i}$ and
$Y_{>\ell}=\sum_{i>\ell}Y_{i}$. With this notation we get that
$\Pr\left[\mathcal{A}_{i-1}\wedge\mathcal{O}^{c}\wedge\mathcal{N}\right]=\Pr\left[\mathcal{A}_{i-1}\wedge\mathcal{O}^{c}\wedge
Y_{<\ell}+Y_{>\ell}\geq\frac{n}{m}(1+\lceil C\varepsilon/2\rceil)\right]$
We let $\mathcal{L}_{\ell}$ be the sigma-algebra generated by the random
choices on the first $\ell$ levels, and $A_{d}$ the event defined in
Section 5.1.
We recall the simpler system from Section 5.1 which we will compare to. Let
$\mathcal{Y}_{i}$ be the number of balls which land in bin $b$ or which are
forwarded by bin $b$ on level $i$ in the simpler system. We similarly define
$\mathcal{Y}_{<\ell}=\sum_{i<\ell}\mathcal{Y}_{i}$ and
$\mathcal{Y}_{>\ell}=\sum_{i>\ell}\mathcal{Y}_{i}$.
We will prove that,
$\displaystyle\Pr\left[\mathcal{A}_{i-1}\wedge\mathcal{O}^{c}\wedge
Y_{<\ell}+Y_{>\ell}\geq\frac{n}{m}(1+\lceil C\varepsilon/2\rceil)\right]$ (33)
$\displaystyle\qquad\qquad\qquad\leq\Pr\left[\mathcal{A}_{i-1}\wedge\mathcal{O}^{c}\right]\Pr\left[\mathcal{Y}_{<\ell}+\mathcal{Y}_{>\ell}\geq\frac{n}{m}(1+\lceil
C\varepsilon/2\rceil)\right]+m^{-1/2+o(1)}$ (34)
This will imply the result since
$\displaystyle\Pr\left[\mathcal{Y}_{<\ell}+\mathcal{Y}_{>\ell}\geq\frac{n}{m}(1+\lceil
C\varepsilon/2\rceil)\right]\leq\Pr\left[\sum_{i=1}^{k}\mathcal{Y}_{i}\geq\frac{n}{m}(1+\lceil
C\varepsilon/2\rceil)\right]$
Now using Theorem 12 we get that
$\Pr\left[\sum_{i=1}^{k}\mathcal{Y}_{i}\geq\frac{n}{m}(1+\lceil
C\varepsilon/2\rceil)\right]\leq\Pr\left[\sum_{i=1}^{k}Y_{i}\geq\frac{n}{m}(1+\lceil
C\varepsilon/2\rceil)\right]+m^{-1/2+o(1)}$, and the discussion at the start
of Section 6.1 give us that
$\Pr\left[\sum_{i=1}^{k}Y_{i}\geq\frac{n}{m}(1+\lceil
C\varepsilon/2\rceil)\right]\leq 1-c_{0}f$. Thus we just need to prove Eq. 33.
We start by noticing that,
$\displaystyle\Pr\left[\mathcal{A}_{i-1}\wedge\mathcal{O}^{c}\wedge
Y_{<\ell}+Y_{>\ell}\geq\frac{n}{m}(1+\lceil C\varepsilon/2\rceil)\right]$
$\displaystyle\qquad\qquad=\sum_{s=0}^{\frac{n}{m}(1+\lceil
C\varepsilon/2\rceil)-1}\Pr\left[\mathcal{A}_{i-1}\wedge\mathcal{O}^{c}\wedge
Y_{<\ell}=s\wedge
Y_{>\ell}\geq\frac{n}{m}(1+\lceil{C\varepsilon/2}\rceil)-s\right]$
$\displaystyle\qquad\qquad\qquad\qquad+\Pr\left[\mathcal{A}_{i-1}\wedge\mathcal{O}^{c}\wedge
Y_{<\ell}\geq\frac{n}{m}(1+\lceil C\varepsilon/2\rceil)\right]$
We fix $0\leq s\leq\frac{n}{m}(1+\lceil C\varepsilon/2\rceil)-1$ and get that,
$\displaystyle\Pr\left[\mathcal{A}_{i-1}\wedge\mathcal{O}^{c}\wedge
Y_{<\ell}=s\wedge
Y_{>\ell}\geq\frac{n}{m}(1+\lceil{C\varepsilon/2}\rceil)-s\right]$
$\displaystyle\qquad\qquad\qquad=\mathbb{E}\left[{\left[{\mathcal{A}_{i-1}\wedge\mathcal{O}^{c}\wedge
Y_{<\ell}=s}\right]\Pr\left[{Y_{>\ell}\geq\frac{n}{m}(1+\lceil{C\varepsilon/2}\rceil)-s}\,\middle|\,{\mathcal{L}_{\ell}}\right]}\right]$
Now we use Lemma 17 and get that
$\Pr\left[{Y_{>\ell}\geq\frac{n}{m}(1+\lceil{C\varepsilon/2}\rceil)-s}\,\middle|\,{\mathcal{L}_{\ell}}\right]\leq\Pr\left[\mathcal{Y}_{>\ell}\geq\frac{n}{m}(1+\lceil{C\varepsilon/2}\rceil)-s\right]+k\left[{A_{\ell}^{c}}\right]+(1+2k)m^{-1/2+o(1)}$.
Using this we get that,
$\displaystyle\Pr\left[\mathcal{A}_{i-1}\wedge\mathcal{O}^{c}\wedge
Y_{<\ell}+Y_{>\ell}\geq\frac{n}{m}(1+\lceil C\varepsilon/2\rceil)\right]$
$\displaystyle\qquad\qquad\leq\Pr\left[\mathcal{A}_{i-1}\wedge\mathcal{O}^{c}\wedge
Y_{<\ell}+\mathcal{Y}_{>\ell}\geq\frac{n}{m}(1+\lceil
C\varepsilon/2\rceil)\right]$
$\displaystyle\qquad\qquad\qquad+\sum_{s=0}^{\frac{n}{m}(1+\lceil
C\varepsilon/2\rceil)-1}\mathbb{E}\left[{\left[{\mathcal{A}_{i-1}\wedge\mathcal{O}^{c}\wedge
Y_{<\ell}=s}\right]\left(k\left[{A_{\ell}^{c}}\right]+(1+2k)m^{-1/2+o(1)}\right)}\right]$
Now we note that,
$\displaystyle\sum_{s=0}^{\frac{n}{m}(1+\lceil
C\varepsilon/2\rceil)-1}\mathbb{E}\left[{\left[{\mathcal{A}_{i-1}\wedge\mathcal{O}^{c}\wedge
Y_{<\ell}=s}\right]\left(k\left[{A_{\ell}^{c}}\right]+(1+2k)m^{-1/2+o(1)}\right)}\right]$
$\displaystyle\leq k\Pr\left[A_{\ell}^{c}\right]+(1+2k)m^{-1/2+o(1)}$
$\displaystyle\leq km^{-\gamma}+(1+2k)m^{-1/2+o(1)}$ $\displaystyle\leq
m^{-1/2+o(1)}$
The second to last inequality uses Theorem 12 and the last uses that $k=m^{o(1)}$.
We also want to exchange $Y_{<\ell}$ with $\mathcal{Y}_{<\ell}$, and we
will do this in a similar fashion.
$\displaystyle\Pr\left[\mathcal{A}_{i-1}\wedge\mathcal{O}^{c}\wedge
Y_{<\ell}+\mathcal{Y}_{>\ell}\geq\frac{n}{m}(1+\lceil
C\varepsilon/2\rceil)\right]$
$\displaystyle\qquad\qquad\qquad=\sum_{s=0}^{\frac{n}{m}(1+\lceil
C\varepsilon/2\rceil)-1}\Pr\left[\mathcal{Y}_{>\ell}=s\right]\Pr\left[\mathcal{A}_{i-1}\wedge\mathcal{O}^{c}\wedge
Y_{<\ell}\geq\frac{n}{m}(1+\lceil{C\varepsilon/2}\rceil)-s\right]$
$\displaystyle\qquad\qquad\qquad\qquad\qquad\qquad+\Pr\left[\mathcal{A}_{i-1}\wedge\mathcal{O}^{c}\wedge\mathcal{Y}_{>\ell}\geq\frac{n}{m}(1+\lceil
C\varepsilon/2\rceil)\right]$
Again we fix $s$ and get that,
$\displaystyle\Pr\left[\mathcal{A}_{i-1}\wedge\mathcal{O}^{c}\wedge
Y_{<\ell}\geq\frac{n}{m}(1+\lceil{C\varepsilon/2}\rceil)-s\right]$
$\displaystyle=\Pr\left[\mathcal{A}_{i-1}\wedge\mathcal{O}^{c}\right]\Pr\left[{Y_{<\ell}\geq\frac{n}{m}(1+\lceil{C\varepsilon/2}\rceil)-s}\,\middle|\,{\mathcal{A}_{i-1}\wedge\mathcal{O}^{c}}\right]$
$\displaystyle\leq\Pr\left[\mathcal{A}_{i-1}\wedge\mathcal{O}^{c}\right]\Pr\left[{Y_{<\ell}\geq\frac{n}{m}(1+\lceil{C\varepsilon/2}\rceil)-s}\,\middle|\,{\mathcal{A}_{i-1}\wedge\mathcal{O}^{c}\wedge
A_{\ell-1}}\right]$ $\displaystyle\qquad\qquad+\Pr\left[A_{\ell-1}^{c}\right]$
By Theorem 12 we know that $\Pr\left[A_{\ell-1}^{c}\right]\leq m^{-\gamma}$.
Now similarly to $Y_{<\ell}$ we define $Y_{<\ell}^{(j)}$ to be the number of
balls which land in bin $j$ or which are forwarded by bin $j$ on levels before
level $\ell$. We know that $b$ is chosen uniformly from the set
$[m]\setminus\left\\{b_{0},\ldots,b_{a-1}\right\\}$, so if we fix the first
$\ell-1$ levels then the probability that
$Y_{<\ell}\geq\frac{n}{m}(1+\lceil{C\varepsilon/2}\rceil)-s$ is equal to
$\displaystyle\frac{\sum_{j\in[m]\setminus\left\\{b_{0},\ldots,b_{a-1}\right\\}}\left[{Y_{<\ell}^{(j)}\geq\frac{n}{m}(1+\lceil{C\varepsilon/2}\rceil)-s}\right]}{m-a}$
Since we condition on $A_{\ell-1}$, we have that,
$\displaystyle\left|{\frac{\sum_{j\in[m]}\left[{Y_{<\ell}^{(j)}\geq\frac{n}{m}(1+\lceil{C\varepsilon/2}\rceil)-s}\right]}{m}-\Pr\left[\mathcal{Y}_{<\ell}\geq\frac{n}{m}(1+\lceil{C\varepsilon/2}\rceil)-s\right]}\right|\leq
m^{-1/2+o(1)}$
This implies that,
$\displaystyle\left|{\frac{\sum_{j\in[m]\setminus\left\\{b_{0},\ldots,b_{a-1}\right\\}}\left[{Y_{<\ell}^{(j)}\geq\frac{n}{m}(1+\lceil{C\varepsilon/2}\rceil)-s}\right]}{m-a}-\Pr\left[\mathcal{Y}_{<\ell}\geq\frac{n}{m}(1+\lceil{C\varepsilon/2}\rceil)-s\right]}\right|\leq\frac{m}{m-a}m^{-1/2+o(1)}+\frac{a}{m-a}$
Now we use Lemma 11 to get that $a\leq O(\log(m)/\varepsilon)=m^{o(1)}$ with
probability $1-m^{-\gamma}$. Here we use that $1/\varepsilon=m^{o(1)}$.
Combining this we get that,
$\displaystyle\Pr\left[{Y_{<\ell}\geq\frac{n}{m}(1+\lceil{C\varepsilon/2}\rceil)-s}\,\middle|\,{\mathcal{A}_{i-1}\wedge\mathcal{O}^{c}\wedge
A_{\ell-1}}\right]$
$\displaystyle\qquad\qquad\leq\Pr\left[\mathcal{Y}_{<\ell}\geq\frac{n}{m}(1+\lceil{C\varepsilon/2}\rceil)-s\right]+\frac{m}{m-m^{o(1)}}m^{-1/2+o(1)}+\frac{m^{o(1)}}{m-m^{o(1)}}+m^{-\gamma}$
$\displaystyle\qquad\qquad\leq\Pr\left[\mathcal{Y}_{<\ell}\geq\frac{n}{m}(1+\lceil{C\varepsilon/2}\rceil)-s\right]+m^{-1/2+o(1)}$
We then get that,
$\displaystyle\Pr\left[\mathcal{A}_{i-1}\wedge\mathcal{O}^{c}\wedge
Y_{<\ell}\geq\frac{n}{m}(1+\lceil{C\varepsilon/2}\rceil)-s\right]$
$\displaystyle=\Pr\left[\mathcal{A}_{i-1}\wedge\mathcal{O}^{c}\right]\Pr\left[{Y_{<\ell}\geq\frac{n}{m}(1+\lceil{C\varepsilon/2}\rceil)-s}\,\middle|\,{\mathcal{A}_{i-1}\wedge\mathcal{O}^{c}}\right]$
$\displaystyle\leq\Pr\left[\mathcal{A}_{i-1}\wedge\mathcal{O}^{c}\right]\left(\Pr\left[\mathcal{Y}_{<\ell}\geq\frac{n}{m}(1+\lceil{C\varepsilon/2}\rceil)-s\right]+m^{-1/2+o(1)}\right)$
$\displaystyle\qquad\qquad+m^{-\gamma}$
$\displaystyle\leq\Pr\left[\mathcal{A}_{i-1}\wedge\mathcal{O}^{c}\right]\Pr\left[\mathcal{Y}_{<\ell}\geq\frac{n}{m}(1+\lceil{C\varepsilon/2}\rceil)-s\right]+m^{-1/2+o(1)}$
Using this we get that,
$\displaystyle\Pr\left[\mathcal{A}_{i-1}\wedge\mathcal{O}^{c}\wedge
Y_{<\ell}+\mathcal{Y}_{>\ell}\geq\frac{n}{m}(1+\lceil
C\varepsilon/2\rceil)\right]$
$\displaystyle\qquad\qquad\leq\Pr\left[\mathcal{A}_{i-1}\wedge\mathcal{O}^{c}\right]\Pr\left[\mathcal{Y}_{<\ell}+\mathcal{Y}_{>\ell}\geq\frac{n}{m}(1+\lceil
C\varepsilon/2\rceil)\right]$
$\displaystyle\qquad\qquad\qquad+\sum_{s=0}^{\frac{n}{m}(1+\lceil
C\varepsilon/2\rceil)-1}\Pr\left[\mathcal{Y}_{>\ell}=s\right]m^{-1/2+o(1)}$
$\displaystyle\qquad\qquad\leq\Pr\left[\mathcal{A}_{i-1}\wedge\mathcal{O}^{c}\right]\Pr\left[\mathcal{Y}_{<\ell}+\mathcal{Y}_{>\ell}\geq\frac{n}{m}(1+\lceil
C\varepsilon/2\rceil)\right]+m^{-1/2+o(1)}$
This finishes the proof of Eq. 33.
This concludes the proof that equation (30) of the claim holds.
### Proof of Equation (31)
We restate what we have to prove, namely that
$\mathbb{E}[R\mid\mathcal{E}^{c}]=O(1/f),$
where $R$ is the number of bins visited during step 2. of epoch $t+1$ and
$\mathcal{E}^{c}$ is the event that we did not finish the insertion during
step 1. of epoch $t+1$. Let $b_{0},\dots,b_{a}$ denote the bins that we have
visited so far at the level where we currently are, call it $\ell$. All
bins $b_{0},\dots,b_{a}$ got filled from levels $1,\dots,\ell$, and as in the
proof of equation (30) of the claim, we may again describe the history of how
the bins $b_{0},\dots,b_{a}$ got filled to their capacity at level $\ell$. See
Figure 4 for an example of such a history.
Figure 4: An example of how the conditioning on $\mathcal{E}^{c}$ might look.
Except for the ? coming before $b_{0}$, the circled ?’s are parts of the
cyclic order which have not yet been fixed, but which are known to consist
solely of balls. The circled ? appearing before $b_{0}$, which is the yet
unknown history of how many balls $b_{0}$ are to further forward, has a
special role. Indeed, this history does not have to consist solely of balls
but can consist of a run of balls and bins such that all of the bins in the
run get filled at this level.
Let $s\geq 1/f$. We wish to argue that the conditional probability
$\displaystyle\Pr[R\geq
s\mid\mathcal{E}^{c}]=O(\exp(-\Omega(sf)))+O(n^{-10}).$ (35)
Ignoring the unimportant $O(n^{-10})$ term, it will follow that
$\mathbb{E}[R\mid\mathcal{E}^{c}]\leq 1/f+\sum_{i=0}^{\infty}\Pr[2^{i}/f\leq
R\leq
2^{i+1}/f]2^{i+1}/f=1/f+O\left(\frac{1}{f}\sum_{i=0}^{\infty}\exp(-\Omega(2^{i}))2^{i}\right)=O(1/f).$
Including the $O(n^{-10})$ term in the computation could only increase the
bound by an additive $n^{-8}$, say, as we can here use the trivial bound of
$mk$ on the length of a run. Thus, this yields the desired result. For the
bound on $\Pr[s\leq R\leq 2s\mid\mathcal{E}^{c}]$, it clearly suffices to
assume that $s\geq c/f$ where $c=O(1)$ is a sufficiently large constant.
We start by noting that with probability $1-O(n^{-10})$, the number of balls
hashing to level $\ell$ is at most $2n/k$ which we assume to be the case in
what follows. Let $q$ denote the number of places marked with ? between
$b_{0}$ and $b_{a}$ and let $X_{1},\dots,X_{q}$ denote the number of balls
landing at these positions. Then $q\leq a\leq 1/f$. Let $\alpha=\lceil
C\varepsilon/2\rceil$. Let $A$ denote the event that
$X_{1}+\dots+X_{q}\geq\lambda sf\alpha$, where $\lambda=\Omega(1)$ is a
sufficiently small constant to be chosen later. We start by providing an upper
bound on $\Pr[A]$. For this, we let $X=\sum_{i=1}^{q}X_{i}$ and note, like in
the proof of Lemma 23, that for each $i$, $X_{i}$ is dominated by a geometric
variable with parameter $\rho$ where $\rho=\frac{n_{\ell}}{m+n_{\ell}}$. Here
$n_{\ell}=2n/k$ is the upper bound on the number of balls hashing to level
$\ell$. Furthermore, this claim holds even conditioning on the values of
$(X_{j})_{j<i}$. Let $s^{\prime}=s\lambda$. Letting $Y_{1},\dots,Y_{1/f}$ be
independent such geometric variables and $Y=\sum_{i=1}^{1/f}Y_{i}$, we can
thus upper bound
$\Pr[A]\leq\Pr[Y\geq s^{\prime}f\alpha].$
Note that
$\mathbb{E}[Y_{i}]\leq\frac{n_{\ell}}{m}\leq\frac{2C}{k}\leq\frac{4\alpha}{\varepsilon
k}$
for $1\leq i\leq 1/f$, so that $\mathbb{E}[Y]\leq\frac{4\alpha}{\varepsilon
fk}\leq\alpha$, where the last inequality follows by assuming that
$k=O(1/\varepsilon^{2})$ is sufficiently large. We may also assume that
$s^{\prime}f$ is larger than a sufficiently large constant, as described
above, so we can upper bound
$\Pr[A]\leq\Pr[Y\geq\mathbb{E}[Y]+s^{\prime}f\alpha/2].$
By applying the bound Eq. 10 of Theorem 10, similarly to how we did in the
proof of Lemma 23, it follows after some calculations that
$\Pr[A]=\exp(-\Omega(sf)).$
Now condition on $A^{c}$ and let us focus on upper bounding $\Pr[R\geq
s\mid\mathcal{E}^{c}\cap A^{c}]$. For this, we apply Theorem 22. To get into the
setting of that theorem, we contract the part of the history revealed so far
between the back-most ?-mark before $b_{0}$ and up to and including $b_{a}$
into a single _unified_ bin. By the conditioning on $A^{c}$, this unified bin
comes with an extra start load of at most $\lambda sf\alpha$ balls, where we
can choose $\lambda=\Omega(1)$ to be any sufficiently small constant. Thus,
with the conditioning, we are exactly in the setting to apply Theorem 22, and
we may thus bound
$\Pr[R\geq s\mid\mathcal{E}^{c}\cap A^{c}]=\exp(-\Omega(sf)).$
It follows that
$\Pr[R\geq s\mid\mathcal{E}^{c}]\leq\Pr[A]+\Pr[R\geq s\mid\mathcal{E}^{c}\cap
A^{c}]=\exp(-\Omega(sf)),$
which is the desired bound. This completes the proof of Claim 5. ∎ As explained
before the proof of Claim 5, this completes the proof of our theorem. ∎
## 7 Insertions of Bins and Deletions of Balls and Bins
In this section, we prove the statements of Theorems 2 and 3 concerning the
deletions of balls and insertions and deletions of bins. Combined with the
results of Sections 2, 6 and 8, this proves the two theorems in full.
##### Deletions of Balls.
By the history independence, a deletion of a ball is symmetric to an
insertion. The bins visited when deleting a ball $x$ are the same as the bins
visited if $x$ had not been in the system and was inserted. Thus, we can upper
bound the expected number of bins visited when deleting a ball by
$O(1/\varepsilon)$ for Theorem 2 and $O(1/f)$ for Theorem 3. This also upper
bounds the number of balls moved in a deletion.
##### Deletions of Bins.
A deletion of a super bin is the same as reinserting the balls lying in that
super bin. We claimed that the expected cost of deleting a super bin is
$O(C/f)$ in Theorem 3. At first, this may seem completely obvious, since the
cost of inserting a single ball is $O(1/f)$. However, this cost is for
inserting a ball which is selected independently of the random choices of the
hash function. Now, we are looking at the balls placed in a given super bin
$b$, and those are highly dependent on the hash function. However, we do know
that the expected average cost of all balls in the system is $O(1/f)$.
Moreover, all bins are symmetric, so the bin $b$ behaves like a random bin
amongst those in the system. Thanks to our load balancing, the balls are
almost uniformly spread between the bins, so a random ball from a random bin
is almost a uniformly random ball, so a random ball from $b$ has expected cost
$O(1/f)$. There are at most $C$ balls in $b$, so the total expected cost
is $O(C/f)$. A similar argument applies in the case of Theorem 2.
##### Insertions of Bins.
Again, by the history independence, an insertion of a bin is symmetric to its
deletion. The balls that are moved when inserting a bin are thus the same as
if that bin was in the system but was deleted. Thus we can use the result for
deletions of bins to conclude the bound of $O(C/f)$ on the number of balls
moved when inserting a bin. A similar argument applies in the case of Theorem
2.
## 8 Faster Searches Using the Level-Induced Priorities
In this section we make the calculation demonstrating that, by giving the balls
random priorities, we obtain the better bounds on the number of bins visited
during an insertion as claimed in Theorems 2 and 3. This is not a new idea but
is in fact an old trick [AK74, Knu73]. What we need to do is verify that
applying it, with the particular formula for $f$ in Eq. 4, we obtain the
stated search times. In fact, what we require for the analysis is only the
fact that if two balls hash to different levels, the ball hashing to the lower
level has the higher priority of the two. Within a given level, the
priorities can be arbitrary. This is important for the practical version of
our scheme described in Section 1.3.2 where the priorities are not uniformly
random and independent of the hashing of balls, but where the hashing of the
balls in fact determines the priorities, with higher hash values implying
lower priorities. We start by arguing about the expected number of bins
visited during a search as stated in Theorem 2.
##### Number of Bins Visited During a Search: Theorem 2.
We encourage the reader to recall the setting described in the theorem. Define
$X$ to be the number of bins visited during the search for some ball $x$.
Importantly, if $x$ hashes to level $i$, then all virtual bins visited during
the search of $x$ also lie on level $i$. For $i\in[k]$, we let $A_{i}$ denote
the event that $x$ hashes to level $i$, so that $\Pr[A_{i}]=p_{i}$. By a
standard Chernoff bound, the number of balls hashing to the first $i$ levels
is $np_{\leq i}\pm m^{1/2+o(1)}=np_{\leq i}(1\pm m^{-1/2+o(1)})$, with
probability $1-O(n^{-10})$, say. Here we used that $1/\varepsilon=m^{o(1)}$.
Condition on this event and define $n_{\leq i}$ to be the number of balls hashing
to the first $i$ levels. Finally, letting $\varepsilon_{i}$ be such that
$(1+\varepsilon_{i})n_{\leq i}/m=C$, we obtain from the part of Theorem 2
concerning insertions (which was proved in Section 2) that
$\mathbb{E}[X\mid A_{i}]=O\left(1/\varepsilon_{i}\right)$. Moreover,
$\Pr[A_{i}]\leq 2^{-i+1}$ for each $i$. It finally follows from the Chernoff
bound above that $\varepsilon_{i}\geq 1/2^{i}$, and so
$\mathbb{E}[X]=\sum_{i\in[k]}\mathbb{E}[X\mid A_{i}]\Pr[A_{i}]=O(k)=O(\log
1/\varepsilon)$
as desired.
##### Number of Bins Visited During a Search: Theorem 3.
We now perform a similar calculation to the one above, in the more complicated
setting of Theorem 3. Let us for simplicity assume that the number of balls
hashing to each level is exactly $n/k$. It is trivial to later remove this
assumption. We also assume for simplicity that $k\geq 1/\varepsilon^{2}$ is a
power of $2$, $k=2^{a}$ for some $a$. Let
$\ell=\lceil\log(1/\varepsilon)\rceil$ noting that $\ell\leq a$. We partition
$[k]=I_{0}\cup\cdots\cup I_{\ell}$, where
$I_{i}=[2^{a}-2^{a-i-1}]\setminus[2^{a}-2^{a-i}]$ for $0\leq i\leq\ell-1$ and
$I_{\ell}=[2^{a}]\setminus[2^{a}-2^{a-\ell}]$. Let $A_{i}$ be the event that
the given ball to be searched $x$ hashes to some level in $I_{i}$, so that
$\Pr[A_{i}]=2^{-i-1}$ for $0\leq i\leq\ell-1$ and $\Pr[A_{\ell}]=2^{-\ell}$.
For $0\leq i\leq\ell$ we define $n_{\leq i}$ to be the number of balls hashing
to some level in $I_{0}\cup\cdots\cup I_{i}$. Finally, let $\varepsilon_{i}$
be such that $(1+\varepsilon_{i})n_{\leq i}/m=C$ and note that
$\Pr[A_{i}]=\Theta(\varepsilon_{i})$.
We partition $[\ell+1]$ into three sets, $[\ell+1]=J_{1}\cup J_{2}\cup J_{3}$
where
$J_{1}=\\{i\in[\ell+1]:C\leq\log 1/\varepsilon_{i}\\},\quad
J_{2}=\\{i\in[\ell+1]:\log
1/\varepsilon_{i}<C\leq\frac{1}{2\varepsilon_{i}^{2}}\\},\quad\text{and}\quad
J_{3}=\\{i\in[\ell+1]:\frac{1}{2\varepsilon_{i}^{2}}<C\\}.$
It then follows from the part of Theorem 3 dealing with insertions (proved in
Section 6) that
$\mathbb{E}[X]=O\left(\sum_{i\in
J_{1}}\frac{\Pr[A_{i}]}{\varepsilon_{i}C}+\sum_{i\in
J_{2}}\frac{\Pr[A_{i}]}{\varepsilon_{i}\sqrt{C\log\left(\tfrac{1}{\varepsilon_{i}\sqrt{C}}\right)}}+\sum_{i\in
J_{3}}\Pr[A_{i}]\right)=O\left(1+\frac{|J_{1}|}{C}+\sum_{i\in
J_{2}}\frac{1}{\sqrt{C\log\left(\tfrac{1}{\varepsilon_{i}\sqrt{C}}\right)}}\right)$
We have the trivial bound $|J_{1}|\leq\ell+1=O(\log 1/\varepsilon)$. Moreover,
for $i\in J_{2}$, it holds that
$e^{-C}\leq\varepsilon_{i}\leq\sqrt{\frac{1}{2C}},$
and since $\varepsilon_{i}=\Theta(2^{-i})$, it follows that
$\sum_{i\in
J_{2}}\frac{1}{\sqrt{C\log\left(\tfrac{1}{\varepsilon_{i}\sqrt{C}}\right)}}=O\left(\frac{1}{\sqrt{C}}\sum_{i=1}^{O(C)}\frac{1}{\sqrt{i}}\right)=O(1).$
In conclusion,
$\mathbb{E}[X]=O\left(1+\frac{\log 1/\varepsilon}{C}\right),$
and splitting into the cases $C\leq\log 1/\varepsilon$ and $C>\log
1/\varepsilon$, we obtain the desired result.
## 9 The Practical Implementation
In this section we sketch why our results continue to hold when using the
practical implementation described in Section 1.3.2 even when the hashing is
implemented using the practical mixed tabulation scheme from [DKRT15]. Let us
call the implementation from Section 1.3.2 the _practical implementation_.
We first discuss the practical implementation with fully random hashing. For
this, recall the definition of a run (Definition 1). Using an argument
similar to the one used in the proof of Lemma 11, it is easy to show
that in this implementation, for any constant $\gamma=O(1)$, the maximal
number of bins in a run is $O((\log n)/\varepsilon)$ with probability
$1-n^{-\gamma}$. Denote this high probability event $\mathcal{E}$. The number
of balls lying in a run consisting of $\ell$ bins is trivially upper bounded
by $C(\ell+1)$, so if $\mathcal{E}$ occurs, the maximal number of balls
hashing to a fixed run is $O(C(\log n)/\varepsilon)$. It follows that the
number of balls that are forwarded past any given point is $O(C(\log
n)/\varepsilon)$. In particular for any level $i$, the number of balls that
are forwarded from level $i$ to level $i+1$ is $O(C(\log n)/\varepsilon)$ and
the total number of such balls over all levels is $O(kC(\log
n)/\varepsilon)=m^{o(1)}$. One can now modify our inductive proof of Theorem 4
to check that its statement remains valid even with the influence of these
extra balls. Recall that in Theorem 4, $X_{i,j}$ denoted the number of bins
with at most $j$ balls after the hashing of balls to levels $0,\dots,i-1$.
Intuitively, in the inductive step, these $m^{o(1)}$ extra balls can only
affect $m^{o(1)}$ bins which does not affect the high probability bound
stating that $|X_{i,j}-\mu_{i,j}|\leq m^{1/2+o(1)}$. To exclude the bad event
$\mathcal{E}^{c}$, we simply use a union bound and that $\mathcal{E}$ occurs
with very high probability. Once we have a version of Theorem 4 which holds in
the practical implementation, we can repeat the proof of Theorem 20, again
using union bounds for the event that the insertion interacts with the run of
size $m^{o(1)}$ entering the given level from below.
Let us now discuss the implementation with mixed tabulation. A mixed
tabulation hash function $h$ is defined using two simple tabulation
hash functions from [PT12], $h_{1}:\Sigma^{c}\to\Sigma^{d}$ and
$h_{2}:\Sigma^{c+d}\to R$. Here $\Sigma$ is some character alphabet with
$\Sigma^{c}=[u]$ and $c,d=O(1)$ are constants. Then for a key $x$,
$h(x)=h_{2}(x,h_{1}(x))$. An important property of mixed tabulation, proved in
[DKRT15], is the following: Suppose $X$ is a set of keys, $p_{1},\dots,p_{b}$
are output bit positions and $v_{1},\dots,v_{b}$ are desired bit values. Let
$Y$ be the set of keys $x\in X$ for which the $p_{i}$’th output bit
$h(x)_{p_{i}}=v_{i}$ for all $i$. If
$\mathbb{E}[|Y|]\leq|\Sigma|/(1+\Omega(1))$, then the remaining output bits of
the hash values in $Y$ are completely independent with probability
$1-O(|\Sigma|^{1-\lfloor{d/2}\rfloor})$. Another important property is that
mixed tabulation obeys the same concentration bounds as simple tabulation on
the number of balls landing in an interval [PT12].
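As a concrete illustration of the construction $h(x)=h_{2}(x,h_{1}(x))$, here is a small Python sketch of mixed tabulation built from two simple tabulation functions. The alphabet size, the numbers of input and derived characters, and the output range are illustrative choices of ours, not parameters taken from [DKRT15].

```python
import random

rng = random.Random(42)
CHAR_BITS, C_CHARS, D_CHARS, OUT_BITS = 8, 4, 2, 32   # Sigma = [2^8]
SIGMA = 1 << CHAR_BITS

def tables(num_chars, out_bits):
    # One table of random values per character position (simple tabulation).
    return [[rng.getrandbits(out_bits) for _ in range(SIGMA)]
            for _ in range(num_chars)]

T1 = tables(C_CHARS, CHAR_BITS * D_CHARS)   # h1 : Sigma^c -> Sigma^d
T2 = tables(C_CHARS + D_CHARS, OUT_BITS)    # h2 : Sigma^(c+d) -> R

def chars(x, num_chars):
    # Split x into num_chars characters of CHAR_BITS bits each.
    return [(x >> (CHAR_BITS * i)) & (SIGMA - 1) for i in range(num_chars)]

def simple_tab(tabs, cs):
    # XOR together the table entries of the individual characters.
    h = 0
    for table, ch in zip(tabs, cs):
        h ^= table[ch]
    return h

def mixed_tab(x):
    derived = simple_tab(T1, chars(x, C_CHARS))   # h1(x), the derived chars
    return simple_tab(T2, chars(x, C_CHARS) + chars(derived, D_CHARS))

print(hex(mixed_tab(0xDEADBEEF)))
```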
For the implementation with mixed tabulation, we use $k$ independent mixed
tabulation functions, $h_{1},\dots,h_{k}$, to distribute the virtual bins, and
a single mixed tabulation function $h^{*}$ for the balls (independent of
$h_{1},\dots,h_{k}$). We moreover assume that $|\Sigma|=u^{1/c}=n^{\Omega(1)}$
which can be achieved using a standard universe reduction. To obtain our
results using mixed tabulation, the idea is essentially the same as above.
Again, we first need to prove an analogue of Theorem 4, and we would do this
using induction on the level, bounding $|X_{i,j}-\mu_{i,j}|$ with high
probability for each level $i$. To do this, we partition level $i$ into dyadic
intervals where we expect at most $|\Sigma|/2$ balls or bins to hash. Then we
can use the concentration bound from [PT12] (which also holds for mixed
tabulation) to obtain concentration on the number of bins of a given capacity
from the previous levels hashing to each interval. Moreover, we can use the
result of [DKRT15] to conclude that restricted to such an interval the hashing
of balls and bins is fully random. Again, we can prove a version of Lemma 11
with mixed tabulation (by using that mixed tabulation provides concentration
bounds) and conclude that the total number of balls that are forwarded from
one interval to another is $O(C(\log
n)/\varepsilon)=m^{o(1)}=|\Sigma|^{o(1)}$. Essentially, the good distribution
of the $X_{i-1,j}$ ensures that we also obtain a good distribution of the
number of bins with each capacity in each of the intervals of level $i$ (using
that the influence of the $|\Sigma|^{o(1)}$ balls passing between intervals
can only affect $|\Sigma|^{o(1)}$ bins), and this gives a good distribution of
the $X_{i,j}$. For this, it is important to be aware that there are now more
intervals, essentially $n/|\Sigma|$, but since $|\Sigma|=n^{\Omega(1)}$, we
still obtain that the total number of balls that are forwarded from one
interval to another is $n^{1-\Omega(1)}$. The high probability bound we obtain
on $|X_{i,j}-\mu_{i,j}|$ then instead takes the form
$|X_{i,j}-\mu_{i,j}|=n^{1-\Omega(1)}$, but this still suffices for our
purposes. Finally, we may prove a mixed tabulation version of Theorem 20,
again using the fully random hashing within each interval and using union
bounds to bound away the probability that we interact with the
$|\Sigma|^{o(1)}$ balls that are forwarded between intervals. As such, showing
that our results hold using mixed tabulation uses essentially the same ideas
as is needed to show that the implementation in Section 1.3.2 does, but with a
finer partitioning into intervals.
## 10 Modifying the Analysis for Dynamically Changing Capacities
In this last short section, we describe how our analysis can still be carried
through even with the dynamically changing capacities described in Section
1.7. In the preceding sections, we assumed that the capacities of the bins
were all equal to some integer $C$. However, in the setting of Section 1.7 we
are interested in the case where the _total_ capacity is $Cm$, with $m_{1}$
bins of capacity $\lfloor{C}\rfloor$ and $m_{2}$ bins of capacity
$\lceil{C}\rceil$. Thus $C$ is no longer assumed to be integral. This
corresponds to all bins having the same capacity $\lceil{C}\rceil$, but where
we include an extra $-1$’th level, where $m_{1}$ bins each receive a single
artificial ball.
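As a small illustration of this reduction, the Python snippet below splits a non-integral capacity into the two bin classes and exhibits the equivalent view with artificial balls; the assumption that the total capacity $Cm$ is integral, and the concrete numbers, are ours.

```python
# Fractional capacity C realized by m1 bins of floor(C) and m2 bins of
# ceil(C); equivalently, all bins get capacity ceil(C) and m1 bins each
# receive one artificial ball at an extra level -1.
import math

def split_capacities(C, m):
    total = round(C * m)                 # assume total capacity is integral
    m2 = total - m * math.floor(C)       # bins with the extra slot
    m1 = m - m2
    return m1, m2

C, m = 2.25, 8                           # total capacity 18
m1, m2 = split_capacities(C, m)
assert m1 * math.floor(C) + m2 * math.ceil(C) == round(C * m)
print(m1, m2)        # -> 6 2; six artificial balls at level -1
```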
To analyse this new setting one can first observe that the proofs in Section 6
carry through without significant changes. Thus it is mainly in regards to the
bounds on the fraction of non-full bins in Section 5 that there is something
to discuss. Recall that we showed in Section 5.1 that the contribution of
balls from the levels to a given random bin essentially behaves like a sum of
geometric variables. With the terminology introduced in [AAKT21], geometric
variables are strongly monotone, and we could then apply the bound of that
paper to estimate the point probabilities of this sum. Now Bernoulli variables
are also strongly monotone, and so the bound in [AAKT21] can also be applied
when some of the variables in the sum are Bernoulli. Now with the $-1$’th
level described above, the number of balls landing in a random bin at the new
lowest level is Bernoulli. Then the contribution to a random bin is
essentially a sum of geometric variables and a single Bernoulli variable, and
since the bound in [AAKT21] holds for such a sum, we can still use it for
estimating the point probabilities of the number of balls in a bin. The
remaining parts of the proof carry through almost unchanged.
## Acknowledgement
The authors wish to thank Noga Alon and Nick Wormald for helpful discussions.
With Noga Alon, we studied sums of integer variables [AAKT21], including
bounds needed for the analysis of this paper. In unpublished work on the
random graph $d$-process, Nick Wormald and Andrzej Ruciński also used the idea
of analyzing balls in capacitated bins by throwing an appropriately larger
number of balls into uncapacitated bins. They did not present an estimate on
the number of non-full bins, as needed for this paper.
Research supported by grant 16582, Basic Algorithms Research Copenhagen
(BARC), from the VILLUM Foundation.
## References
* [AAKT21] Anders Aamand, Noga Alon, Jakob Bæk Tejs Knudsen, and Mikkel Thorup. On sums of monotone random integer variables, 2021.
* [AK74] Ole Amble and Donald E. Knuth. Ordered hash tables. Comput. J., 17(2):135–142, 1974.
* [Azu67] Kazuoki Azuma. Weighted sums of certain dependent random variables. Tohoku Mathematical Journal, 19(3):357–367, 1967.
* [BG07] G. E. Blelloch and D. Golovin. Strongly history-independent hashing with applications. In Proc. 48th IEEE Symposium on Foundations of Computer Science (FOCS), pages 272–282, 2007.
* [BSS00] André Brinkmann, Kay Salzwedel, and Christian Scheideler. Efficient, distributed data placement strategies for storage area networks. In Proceedings of the Twelfth annual ACM Symposium on Parallel Algorithms and Architectures, SPAA, pages 119–128, 2000.
* [CDKR02] Miguel Castro, Peter Druschel, Anne-Marie Kermarrec, and Antony IT Rowstron. Scribe: A large-scale and decentralized application-level multicast infrastructure. Selected Areas in Communications, IEEE Journal on, 20(8):1489–1499, 2002.
* [DKRT15] Søren Dahlgaard, Mathias Bæk Tejs Knudsen, Eva Rotenberg, and Mikkel Thorup. Hashing for statistics over k-partitions. In IEEE 56th Annual Symposium on Foundations of Computer Science, FOCS, pages 1292–1310, 2015.
* [GF04] David A. Grossman and Ophir Frieder. Information Retrieval - Algorithms and Heuristics, Second Edition, volume 15 of The Kluwer International Series on Information Retrieval. Kluwer, 2004.
* [GH05] George Giakkoupis and Vassos Hadzilacos. A scheme for load balancing in heterogenous distributed hash tables. In Proceedings of the Twenty-Fourth Annual ACM Symposium on Principles of Distributed Computing, PODC, pages 302–311, 2005.
* [GM14] Spencer Greenberg and Mehryar Mohri. Tight lower bound on the probability of a binomial exceeding its expectation. Statistics & Probability Letters, 86:91–98, 2014.
* [KLL+97] David R. Karger, Eric Lehman, Frank Thomson Leighton, Rina Panigrahy, Matthew S. Levine, and Daniel Lewin. Consistent hashing and random trees: Distributed caching protocols for relieving hot spots on the world wide web. In Proceedings of the Twenty-Ninth Annual ACM Symposium on the Theory of Computing, STOC, pages 654–663, 1997.
* [KM05] Krishnaram Kenthapadi and Gurmeet Singh Manku. Decentralized algorithms using both local and random probes for P2P load balancing. In SPAA 2005: Proceedings of the 17th Annual ACM Symposium on Parallelism in Algorithms and Architectures, pages 135–144, 2005.
* [Knu73] Donald E. Knuth. The Art of Computer Programming, Volume III: Sorting and Searching. Addison-Wesley, 1973.
* [KR06] David R. Karger and Matthias Ruhl. Simple efficient load-balancing algorithms for peer-to-peer systems. Theory Comput. Syst., 39(6):787–804, 2006. Announced at SPAA’05.
* [Lar88] Per-Åke Larson. Dynamic hash tables. Commun. ACM, 31(4):446–457, 1988.
* [Man04] Gurmeet Singh Manku. Balanced binary trees for ID management and load balance in distributed hash tables. In Proceedings of the Twenty-Third Annual ACM Symposium on Principles of Distributed Computing, PODC, pages 197–205, 2004.
* [MTZ18] Vahab S. Mirrokni, Mikkel Thorup, and Morteza Zadimoghaddam. Consistent hashing with bounded loads. In Artur Czumaj, editor, Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA, pages 587–604. SIAM, 2018.
* [MZ17] Vahab Mirrokni and Morteza Zadimoghaddam. Consistent hashing with bounded loads. Google Research Blog, April 3, 2017. https://research.googleblog.com/2017/04/consistent-hashing-with-bounded-loads.html.
* [ÖV11] M. Tamer Özsu and Patrick Valduriez. Principles of Distributed Database Systems, Third Edition. Springer, 2011.
* [PT12] Mihai Pǎtraşcu and Mikkel Thorup. The power of simple tabulation-based hashing. Journal of the ACM, 59(3):Article 14, 2012. See also STOC’11.
* [RD01] Antony Rowstron and Peter Druschel. Pastry: Scalable, decentralized object location, and routing for large-scale peer-to-peer systems. In Middleware 2001, pages 329–350. Springer, 2001.
* [RFH+01] Sylvia Ratnasamy, Paul Francis, Mark Handley, Richard Karp, and Scott Shenker. A scalable content-addressable network, volume 31. ACM, 2001.
* [Rod16] Andrew Rodland. Improving load balancing with a new consistent-hashing algorithm. Vimeo Engineering Blog, December 19, 2016. https://medium.com/vimeo-engineering-blog/improving-load-balancing-with-a-new-consistent-hashing-algorithm-9f1bd75709ed.
* [SMK+01] Ion Stoica, Robert Morris, David Karger, M Frans Kaashoek, and Hari Balakrishnan. Chord: A scalable peer-to-peer lookup service for internet applications. ACM SIGCOMM Computer Communication Review, 31(4):149–160, 2001.
* [SML+03] Ion Stoica, Robert Morris, David Liben-Nowell, David R. Karger, M. Frans Kaashoek, Frank Dabek, and Hari Balakrishnan. Chord: a scalable peer-to-peer lookup protocol for internet applications. IEEE/ACM Trans. Netw., 11(1):17–32, 2003.
* [TR98] David Thaler and Chinya V. Ravishankar. Using name-based mappings to increase hit rates. IEEE/ACM Trans. Netw., 6(1):1–14, 1998.
# Efficient Quantum Circuit Cutting by Neglecting Basis Elements
Daniel T. Chen1, Ethan H. Hansen1, Xinpeng Li1, Vinooth Kulkarni1, Vipin Chaudhary1, Bin Ren3, Qiang Guan2, Sanmukh Kuppannagari1, Ji Liu4, Shuai Xu1
This research was supported in part by NSF Award 2216923.
1 Department of Computer and Data Science, Case Western Reserve University, Cleveland, OH 44106. Email: {txc461, ehh50, xxl1337, vxk285, vxc204, sxk1942<EMAIL_ADDRESS>
2 Department of Computer Science, Kent State University, Kent, OH 44240. Email<EMAIL_ADDRESS>
3 College of William & Mary, Williamsburg, VA 23185. Email<EMAIL_ADDRESS>
4 Argonne National Laboratory, IL 60439. Email<EMAIL_ADDRESS>
###### Abstract
Quantum circuit cutting has been proposed to help execute large quantum
circuits using only small and noisy machines. Intuitively, cutting a qubit
wire can be thought of as classically passing information of a quantum state
along each element in a basis set. As the number of cuts increase, the number
of quantum degrees of freedom needed to be passed through scales
exponentially. We propose a simple reduction scheme to lower the classical and
quantum resources required to perform a cut. Particularly, we recognize that
for some cuts, certain basis elements might pass “no information” through the
qubit wire and can effectively be neglected. We empirically demonstrate our
method on circuit simulators as well as IBM quantum hardware, and observe
up to a 33 percent reduction in wall time without loss of accuracy.
## I Introduction
While quantum computers can provide exponential speed-up for certain
algorithms, unveiling potentially diverse and disruptive applications [1, 2, 3],
current quantum hardware lacks the scale and reliability to execute these
quantum circuits. Preskill described such hardware as Noisy Intermediate-Scale
Quantum (NISQ) computers, and much effort has been focused on increasing qubit
fidelity and qubit count [4, 5, 6, 7]. Meanwhile, algorithmic developments
such as variational quantum algorithms [8, 9, 10] have caught substantial
attention for having the potential to realize quantum advantage utilizing only
NISQ machines. However, there are few results demonstrating provable advantage
for using variational quantum algorithms over known classical algorithms [11].
A natural question to ask when given limited qubits is: “Are there ways of
simulating large quantum circuits using small quantum devices?” Inspired by
fragmentation methods in molecular simulations [12, 13, 14, 15], Peng et al.
have demonstrated that it is possible to divide a quantum circuit into
subcircuits, or fragments, that can be independently executed on small quantum
computers and then recombined [16]. This method does not impose assumptions on
circuits but requires a large amount of classical computing
resources—exponential in the number of cuts—in order to reconstruct the
expected output of the respective uncut circuit.
Nonetheless, quantum circuit cutting still holds great promise as an effective
tool for realizing quantum algorithms. Most notably, it has been empirically
shown that reducing the circuit size, even when sufficiently large machines
are available, can improve fidelity [17, 18, 19]. Moreover, tailoring circuit
cutting to specific applications, e.g. combinatorial optimization [20] and
error mitigation [21], can effectively avoid the exponential classical
overhead. Alternatively, stochastic methods through randomized measurement
[22] and sampling [23] have also been developed to reduce the postprocessing
cost. In recent state-of-the-art research, it has been empirically shown that
optimization can be employed to replace traditional tomographic techniques and
significantly reduce the classical resources needed for reconstruction, though
at the cost of additional circuit evaluations [24]. Circuit cutting algorithms
have also undergone improvements and variations aimed at performance.
Maximum likelihood methods have been introduced to circuit cutting to formally
account for finite-shot error [19]. Shadow tomography—an efficient classical
representation of quantum states for predicting expectations of observables
[25]—can also be applied to individual circuit fragments [26].
Figure 1: Circuit diagram corresponding to the 3-qubit example
Our work contributes in the direction of reducing the computational resources
needed. In this paper, we introduce the idea of a golden cutting point: a cut
location on a quantum circuit where a certain measurement basis can be
neglected, e.g., when the quantum state operates in a lower-dimensional
subspace prior to the cut. Such reduction lowers the runtime of reconstructing
measurement statistics from fragments by approximately 33% for a single cut
and avoids excessive circuit executions. We experimentally verify our method
on both the Qiskit Aer simulator [27] and superconducting quantum devices from
IBM [28].
The outline of this paper is as follows. Section II presents the mathematical
formulation behind quantum circuit cutting and the idea of golden cutting
points. We begin with an elaborate three-qubit example in Section II-A and
generalize it to arbitrary circuit bipartitions in Section II-B. Then, we
proceed to conduct numerical experiments in Section III. Finally, we provide
insight into future directions in Section IV.
## II Quantum Circuit Cutting
Quantum circuit cutting seeks a qubit wire (or multiple qubit wires) such
that, upon removal, the circuit decomposes into several independent
fragments. This is accomplished by performing quantum tomography—learning a
quantum state—on the qubit upstream of the cut and recreating the inferred
state in the downstream qubit. One can think of tomography as estimating the
coefficient after expanding a quantum state with respect to some basis set,
which we take to be the Pauli basis
$\displaystyle\bigg{\\{}I=\begin{pmatrix}1&0\\\ 0&1\end{pmatrix},~{}~{}X=\begin{pmatrix}0&1\\\ 1&0\end{pmatrix},~{}~{}Y=\begin{pmatrix}0&-i\\\ i&0\end{pmatrix},~{}~{}Z=\begin{pmatrix}1&0\\\ 0&-1\end{pmatrix}\bigg{\\}}$ (1)
for concreteness. By measuring the quantum state with respect to each element
in the basis, one could piece together the state prior to the cut. Then, by
initializing the downstream qubit incident to the cut into the eigenstate of
each element in the basis, one could learn the behavior of the circuit given
the initial state. Lastly, reweighting the results obtained through multiple
initializations yields the behavior of an uncut circuit.
If a circuit undergoes $K$ cuts, tomography has to be performed on all $K$
qubits at the location of the cuts. The set of length-$K$ Pauli strings is
then an appropriate basis for a $K$-qubit system and the measurement-
preparation scheme described above must be done for all elements in the set.
Thus, the cost of performing circuit cutting scales exponentially with the
number of cuts. In this section, we introduce a special class of circuits that
contains a golden cutting point and that enjoys a reduction in measurement
complexity and classical postprocessing cost. We will demonstrate this via a
three-qubit example (Section II-A) and proceed to formalize the phenomenon for
arbitrary circuit bipartitions (Section II-B).
### II-A A Three-qubit Example
Consider the following three-qubit state
$\displaystyle\rho=U_{23}U_{12}|000\rangle\langle 000|U_{12}^{\dagger}U_{23}^{\dagger}$ (2)
where unitaries $U_{12}$ and $U_{23}$ can be arbitrary quantum gates (Figure
1). Notice that the qubit wire between the two gates can be cut, in the sense
that we can rewrite the above state as
$\displaystyle\rho=\frac{1}{2}\sum_{M_{2}\in\mathcal{B}}\rho_{f_{1}}(M_{2})\otimes\rho_{f_{2}}(M_{2})$ (3)
where the set $\mathcal{B}=\\{I,X,Y,Z\\}$ is the Pauli basis, and
$\displaystyle\rho_{f_{1}}(M_{2})=\textnormal{tr}_{2}(M_{2}U_{12}|00\rangle\langle 00|U_{12}^{\dagger}),$ (4)
$\displaystyle\rho_{f_{2}}(M_{2})=U_{23}(M_{2}\otimes|0\rangle\langle 0|)U_{23}^{\dagger}.$ (5)
Here, $M_{2}$ refers to applying the operator $M$ to the second qubit.
This convention will be held throughout the manuscript where numerical
subscripts denote the qubit(s) an operator is acting on.
However, as currently presented, the fragments are not quantum states as Pauli
matrices are traceless operators. To obtain a physical interpretation,
consider the eigendecomposition $M=rM^{r}+sM^{s}$ for each
$M\in\mathcal{B}$ where $r,s\in\\{+1,-1\\}$ are the eigenvalues and
$M^{r},M^{s}$ are the eigenstates. Then, we can rewrite Equation 3 as
$\displaystyle\rho=\frac{1}{2}\sum_{\begin{subarray}{c}M_{2}\in\mathcal{B}\\\ r,s=\pm 1\end{subarray}}rs\rho_{f_{1}}(M_{2}^{r})\otimes\rho_{f_{2}}(M_{2}^{s}).$ (6)
The eigenstates of $M\in\mathcal{B}$ have unit trace and can be regarded as
quantum states.
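For instance, for $M=Z$ the decomposition reads $Z=(+1)\,|0\rangle\langle 0|+(-1)\,|1\rangle\langle 1|$, so that $Z^{+1}=|0\rangle\langle 0|$ and $Z^{-1}=|1\rangle\langle 1|$; similarly, $X^{\pm 1}=|\pm\rangle\langle\pm|$.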
Intuitively, one can understand circuit cutting as classically passing
information through the qubit wire that is cut. Inside the summation (fix an
$M_{2}$ of choice), the information stored in the second qubit in the
direction $M_{2}$ is measured—taking the partial trace in $\rho_{f_{1}}$—and
passed onto the remaining circuit—initializing into state $M_{2}^{s}$. Since
the circuit fragments $\rho_{f_{1}}$ and $\rho_{f_{2}}$ can be simulated
independently, through repeating the measurement-initialization scheme, we can
effectively cut the original circuit, run fragments in parallel, and combine
the results classically.
We are often interested in finding the expectation of the quantum state with
respect to a quantum observable, $O$. Without loss of generality, assume an
observable can be decomposed as $O=O_{1}\otimes O_{23}$. Then, we can rewrite
the expectation in terms of $\rho_{f_{1}}$ and $\rho_{f_{2}}$:
$\displaystyle\textnormal{tr}(O\rho)=\frac{1}{2}\sum_{M_{2},r,s}rs~{}\textnormal{tr}(O_{1}\rho_{f_{1}}(M_{2}^{r}))\textnormal{tr}(O_{23}\rho_{f_{2}}(M_{2}^{s}))$ (7)
$\displaystyle=\frac{1}{2}\sum_{M_{2},s}s~{}\textnormal{tr}(O_{23}\rho_{f_{2}}(M_{2}^{s}))\sum_{r}r~{}\textnormal{tr}(O_{1}\rho_{f_{1}}(M_{2}^{r})).$ (8)
The above summation involves 16 terms. Experimentally, the summation requires
measuring the second qubit in the fragment upstream for each non-identity
Pauli basis and initializing the first qubit in the downstream fragment to the
respective eigenstates.
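As a sanity check of Equations 3 and 7, the following self-contained numpy sketch reconstructs the expectation of a separable observable from the two fragments and compares it against the uncut state. It works with exact density matrices rather than the shot-based scheme used on hardware; the helper names, the use of `scipy.stats.unitary_group` for random unitaries, and the choice of observable are our own.

```python
# Verify tr(O rho) = (1/2) sum_M tr(O1 rho_f1(M)) tr(O23 rho_f2(M))
# on the three-qubit example with random U12, U23 (Equations 2-5, 7).
import numpy as np
from scipy.stats import unitary_group

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)
PAULIS = [I2, X, Y, Z]

ket0 = np.array([1, 0], dtype=complex)
proj0 = np.outer(ket0, ket0.conj())
U12 = unitary_group.rvs(4, random_state=0)     # acts on qubits 1, 2
U23 = unitary_group.rvs(4, random_state=1)     # acts on qubits 2, 3

# Full state rho = U23 U12 |000><000| U12^dag U23^dag (Equation 2).
psi = np.kron(U12 @ np.kron(ket0, ket0), ket0)
psi = np.kron(I2, U23) @ psi
rho = np.outer(psi, psi.conj())

def rho_f1(M):
    """Equation 4: tr_2( M_2 U12 |00><00| U12^dag ), a 2x2 operator."""
    s = U12 @ np.kron(ket0, ket0)
    full = np.kron(I2, M) @ np.outer(s, s.conj())
    return np.trace(full.reshape(2, 2, 2, 2), axis1=1, axis2=3)

def rho_f2(M):
    """Equation 5: U23 (M otimes |0><0|) U23^dag, a 4x4 operator."""
    return U23 @ np.kron(M, proj0) @ U23.conj().T

O1, O23 = Z, np.kron(X, Z)                     # arbitrary O = O1 (x) O23
lhs = np.trace(np.kron(O1, O23) @ rho)
rhs = 0.5 * sum(np.trace(O1 @ rho_f1(M)) * np.trace(O23 @ rho_f2(M))
                for M in PAULIS)
assert np.isclose(lhs, rhs)   # reconstruction matches the uncut circuit
```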
However, the computation above can potentially be reduced. Suppose that there
exists an $M_{2,*}\in\mathcal{B}$ such that the last summation evaluates to
zero, that is,
$\displaystyle\sum_{r=\pm 1}r~{}\textnormal{tr}\left(O_{1}\rho_{f_{1}}(M_{2,*}^{r})\right)=0.$ (9)
Note that this can happen in one of two ways:
1. (i)
Operator $O_{1}$ is orthogonal to the conditional state
$\rho_{f_{1}}(M_{2,*}^{r})$, i.e.,
$\textnormal{tr}(O_{1}\rho_{f_{1}}(M_{2,*}^{r}))=0$ for both $r=\pm 1$. For
example, $O_{1}=X$ and $U_{12}|00\rangle=(|00\rangle+|11\rangle)/\sqrt{2}$,
the Bell state.
2. (ii)
The conditional state is in a subspace that “conveys no information” about the
observable, specifically, $\textnormal{tr}(O_{1}\rho_{f_{1}}(M_{2,*}^{r}))\neq
0$ but the weighted sum in Equation 9 leads to systematic cancellations. For
example, $O_{1}=|+\rangle\langle+|$ is a projector to the plus-state, and
$U_{12}|00\rangle$ is again the Bell state.
Upon noting the reduction, the number of terms in the summation drops from 16
to 12. Moreover, since any term involving $M_{2,*}$ is neglected, there is no
need to estimate the expectation in the downstream fragment
($\textnormal{tr}\left(O_{23}\rho_{f_{2}}(M_{2,*}^{s})\right)$), which saves
circuit evaluations that use the initial state $M_{2,*}^{s}$. We say that the
cut is a golden cutting point if such reduction occurs.
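A hedged numerical illustration of Equation 9 on the case (ii) example, with $U_{12}|00\rangle$ the Bell state and $O_{1}=|+\rangle\langle+|$, might look as follows; the helper functions and the tolerance are our own choices.

```python
# Check which Pauli bases are negligible (Equation 9) for the Bell-state
# fragment with observable O1 = |+><+|.
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1]).astype(complex)

bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho12 = np.outer(bell, bell.conj())
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
O1 = np.outer(plus, plus.conj())

def conditional_state(P):
    """rho_f1(P) = tr_2( (I (x) P) rho12 ), cf. Equation 4."""
    full = np.kron(I2, P) @ rho12
    return np.trace(full.reshape(2, 2, 2, 2), axis1=1, axis2=3)

def is_golden(M, tol=1e-9):
    """Equation 9: does sum_r r * tr(O1 rho_f1(M^r)) vanish?"""
    eigvals, eigvecs = np.linalg.eigh(M)
    total = 0.0
    for r, v in zip(eigvals, eigvecs.T):
        P = np.outer(v, v.conj())          # eigenprojector M^r
        total += r * np.trace(O1 @ conditional_state(P))
    return abs(total) < tol

for name, M in [("X", X), ("Y", Y), ("Z", Z)]:
    print(name, "golden:", is_golden(M))   # X: False, Y and Z: True
```

For this pair of state and observable, the check flags $Y$ and $Z$ as negligible bases while $X$ is not.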
### II-B Generalization
We formally extend the above case to more general bipartitions, but refrain
from considering cutting schemes that result in more than three fragments
(c.f. [26] for a general expression of expectations with respect to circuit
fragments). Suppose an $N$-qubit quantum circuit induces a state $\rho$. Consider a set
of $K$ cuts, with an injective function $c$ mapping the $i$-th cut onto qubit
$c(i)\in[N]=\\{1,2,\dots,N\\}$. By cutting the $K$ wires, the circuit can be
bipartitioned into fragments $f_{1}$ and $f_{2}$. Now, cutting requires
passing information from $K$ copies of the basis $\mathcal{B}$ through the cutting
points, namely, measuring upstream qubits and preparing downstream qubits with
respect to operators
$\displaystyle\bm{M}=\begin{pmatrix}M_{c(1)}&M_{c(2)}&\dots&M_{c(K)}\end{pmatrix}\in\mathcal{B}^{K}.$ (10)
Furthermore, each operator $M$ admits a spectral decomposition. Letting
$\displaystyle\bm{r}=\begin{pmatrix}r_{c(1)}&r_{c(2)}&\dots&r_{c(K)}\end{pmatrix}\in\\{-1,+1\\}^{K}$ (11)
be a tuple of eigenvalues, we define
$\displaystyle\bm{M}^{\bm{r}}=\begin{pmatrix}M_{c(1)}^{r_{c(1)}}&M_{c(2)}^{r_{c(2)}}&\dots&M_{c(K)}^{r_{c(K)}}\end{pmatrix}$ (12)
to be the $\bm{r}$-th eigenstate of operator $\bm{M}$.
Using the above notation, we can compactly write the uncut state using the
fragment-induced states $\rho_{f_{i}}(\bm{M})$ for $i=1,2$. Following Equation
6 gives the reconstruction formula in this bipartition case:
$\displaystyle\rho=\frac{1}{2^{K}}\sum_{\begin{subarray}{c}\bm{M}\in\mathcal{B}^{K},\\\ \bm{r},\bm{s}\in\\{-1,+1\\}^{K}\end{subarray}}\prod_{i\in[K]}r_{i}s_{i}~{}\rho_{f_{1}}(\bm{M}^{\bm{r}})\otimes\rho_{f_{2}}(\bm{M}^{\bm{s}}).$ (13)
Alternatively, for any desired quantum observable $O$, suppose the operator
can be decomposed to accommodate the two fragments, i.e., $O=O_{f_{1}}\otimes
O_{f_{2}}$ up to appropriate permutation of qubit indices. This is without
loss of generality as expansions using Pauli strings would yield a linear
combination of operators that are qubit-wise separable. Then, we can arrive at
an expression analogous to Equation 7 in terms of the fragments $\rho_{f_{i}}$
and their respective observables $O_{f_{i}}$:
$\displaystyle\textnormal{tr}(O\rho)=\frac{1}{2^{K}}\sum_{\bm{M},\bm{r},\bm{s}}\prod_{i\in[K]}r_{i}s_{i}~{}\textnormal{tr}\left(O_{f_{1}}\rho_{f_{1}}(\bm{M}^{\bm{r}})\right)\textnormal{tr}\left(O_{f_{2}}\rho_{f_{2}}(\bm{M}^{\bm{s}})\right).$ (14)
We now formally define the golden circuit cutting point.
###### Definition 1.
An $N$-qubit circuit amenable to bipartition with $K$ cuts has a _golden
cutting point_ if there exists $k^{*}\in[K]$ such that
$\displaystyle\sum_{r_{c(k^{*})}}r_{c(k^{*})}~{}\textnormal{tr}\left(O_{f_{1}}\rho_{f_{1}}(\bm{M}^{\bm{r}})\right)=0.$ (15)
Note that golden cutting points are not necessarily unique. There can be
multiple cuts with negligible bases and/or multiple negligible bases in one
cut. If there are $K_{g}$ golden cutting points and $K_{r}=K-K_{g}$
regular cutting points, the runtime complexity of reconstructing the
expectation in this bipartite case scales as
$\mathcal{O}(4^{K_{r}}3^{K_{g}})$. Moreover, there is no need to prepare the
downstream fragments in the respective eigenstates, reducing the number of
circuit evaluations from $\mathcal{O}(6^{K})$ to
$\mathcal{O}(6^{K_{r}}4^{K_{g}})$. It is worth noting that the eigenstate
preparation scheme is overcomplete and alternative bases, e.g. the symmetric
informationally complete (SIC) basis, can be used to achieve
$\mathcal{O}(4^{K})$ circuit evaluations without invoking the golden circuit
cutting formalism. However, employing the SIC basis would require a more
involved implementation, namely, solving linear systems, in order to construct
the appropriate tensor during reconstruction.
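The snippet below merely evaluates these counts for a small illustrative case, to make the savings of each golden cut explicit.

```python
# Reconstruction terms and circuit evaluations for K cuts, K_g of them golden.
def costs(K, K_g):
    K_r = K - K_g
    return 4**K_r * 3**K_g, 6**K_r * 4**K_g

for K_g in range(3):
    print(K_g, costs(2, K_g))   # -> (16, 36), (12, 24), (9, 16)
```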
We emphasize that golden cutting points, in their current form, are strictly a
product of circuit design. That is, we can design circuits that admit golden
cutting points at known locations (c.f. Section III). In reality, the
existence of golden cutting points depends on the choice of the observable
$O$, how it can be decomposed into a form amenable to cutting, and the
state of the qubit prior to the cut. It is unlikely that an arbitrary circuit
would exhibit such a property without intricate design, and we defer further
discussion on finding golden cutting points to Section IV.
## III Experiments
Figure 2: Example 5-qubit circuit with golden cutting point. $U_{1}$ and
$U_{2}$ are randomized circuits. Note that other odd circuit widths
were used too, mainly 7-qubit circuits split into subcircuits of 4 qubits each.
We demonstrate the capability of golden cutting points numerically.
Specifically, experiments were designed to verify two aspects of our work:
first, that our method does not sacrifice the correctness of reconstruction
and second, that our method reduces the overall runtime of the algorithm. The
circuit used in our experiment takes the form shown in Figure 2 where the
sizes of the circuits were tailored to the size of the device it was run on.
Most commonly, we split 5- and 7-qubit circuits into fragments of 3 and 4
qubits respectively. In addition to quantum hardware, we also tested our
method on the Aer simulator from Qiskit [27].
The generated circuits featured collections of $RX$ gates whose rotation
angle $\theta$ was chosen uniformly at random from the interval $[0,6.28]$, as
well as random gates generated using the `random_circuit()` function in
Qiskit, denoted by $U_{1}$ and $U_{2}$ in Figure 2. In our experiments, we
want to acquire the bitstring probability distribution generated by repeated
measurements in the computational ($Z$) basis. Alternatively, to match the
formalism of Section II-B, for all bitstrings $\hat{b}\in\\{0,1\\}^{n}$, one
can interpret the above as estimating the expectation of the projector
observable $\Pi^{\hat{b}}=|\hat{b}\rangle\langle\hat{b}|$ where $n=5,7$ is the
number of qubits in a circuit. This observable decomposes straightforwardly
upon cutting:
$\displaystyle\Pi^{\hat{b}}=\Pi^{\hat{b}_{1}}\otimes\Pi^{\hat{b}_{2}}$ (16)
where $\hat{b}_{1}$ and $\hat{b}_{2}$ are bitstrings of length $\lfloor
n/2\rfloor$ and $\lceil n/2\rceil$ respectively such that the concatenation
recovers $\hat{b}$. By repeating many measurements in the computational basis
and obtaining a bitstring distribution, we can then estimate the expectation
of the projector observables. The restrictions imposed on our circuit ansatz
create a golden cutting point. In particular, the contribution of the first
fragment to the total expectation (with respect to the projector operator
above, which has only diagonal components) conditioned on observing each
eigenstate of the Pauli $Y$ operator leads to components of equal magnitude.
Once weighted by the respective eigenvalues, the trace terms systematically
cancel and induce a golden cutting point as defined in Equation 15.
### III-A Verifying Reconstruction Accuracy
Figure 3: Comparison of weighted distances for uncut circuit evaluations and
reconstructed measurements from fragment data. Each bar was averaged over 10
independent trials. Each trial consisted of 10,000 shots per fragment. Error
bars represent 95% confidence intervals.
We first proceed to verify the correctness of our golden cutting point method.
We performed noiseless (in the sense of hardware noise) simulations of the
full uncut circuit to obtain the ground truth distribution and compared our
golden cutting point method to it. In addition, we ran the same circuits, both
uncut and fragmented, on superconducting IBM Quantum devices [28]. To compare
distributions, we now introduce a weighted distance function $d_{w}$:
$\displaystyle d_{w}(p;q)=\sum_{x\in\mathcal{X}}\frac{\left(p(x)-q(x)\right)^{2}}{q(x)}$ (17)
for distributions $p$ and $q$ with support $\mathcal{X}$. In the experiment,
$p$ was our “test” distribution and $q$ was the “ground truth.” This weighted
distance function penalizes large percentage deviations more than other
metrics such as the total variational distance.
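A direct implementation of Equation 17 might read as follows, under our assumption that distributions are stored as dictionaries over bitstrings and that points where the ground truth $q$ vanishes are skipped (the ratio is undefined there).

```python
# Weighted distance d_w(p; q) of Equation 17 over the support of q.
def weighted_distance(p, q):
    return sum((p.get(x, 0.0) - qx) ** 2 / qx
               for x, qx in q.items() if qx > 0)

p = {"00": 0.5, "11": 0.5}
q = {"00": 0.6, "11": 0.4}
print(weighted_distance(p, q))   # -> 0.041666...
```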
Using this weighted distance function, we determined the distance between the
ground truth bitstring distribution from the Aer simulator and the
distribution obtained by running the full circuit on quantum hardware. We then
also determined the distance between the ground truth distribution and the
distribution obtained by running our golden cutting method on quantum
hardware. We repeated the experiment for machines of two different sizes—a
5-qubit device would run a 5-qubit circuit that is split into two 3-qubit
subcircuits and a 7-qubit device would run a 7-qubit circuit split into two
4-qubit subcircuits. Each circuit configuration was repeated for 10 trials
with each trial consisting of 10,000 shots per (sub)circuit. The results of
the experiments are shown in Figure 3.
Because previous work has found that quantum circuit cutting has yielded a
benefit in terms of fidelity, we expected similar results. We were surprised
to find that such a benefit is non-existent on 5-qubit devices and not
detectable within 95% confidence intervals on a 7-qubit device. This is likely
due to the fact that our circuits were not particularly deep—only a few gates
in each. As the circuit depth increases we should expect to see more of a
benefit from circuit cutting. Additionally, an average of only 10 trials was
performed, leaving our uncertainties relatively large. In our future work we
will run more trials, potentially on larger devices, or with greater depths,
to more precisely determine the true distance for this method. Regardless, in
this work, we have verified that (within error) our method performs as well as
full circuit execution on real hardware in terms of outputting the correct
bitstring distribution.
Figure 4: Runtime comparison of circuits with (in gold) and without (in red)
golden circuit cutting optimization. Each configuration was repeated for 1000
trials and each trial involved 1000 shots for each (sub)circuit. Error bars
represent 95% confidence intervals.
### III-B Algorithm runtime
The presented golden cutting point method predicts a reduction in the number
of measurements, thereby lowering the runtime of the procedure. In this
experiment, we recorded the time taken for gathering fragment data and
reconstructing them on a randomly generated circuit. We assumed the golden
cutting point was known a priori (see Section IV for determining the existence
of golden cutting points), and studied the runtime reduction from exploiting
this knowledge. The results are illustrated in Figure 4.
Figure 5: Circuit cutting runtime with and without golden cutting point on
quantum devices from IBM. Each bar represents the average runtime of 50 trials
and each trial consisted of 1,000 shots for each (sub)circuit.
We then repeated the same experiment using real devices available through the
IBM Quantum Experience [28]. We report a speedup of 33 percent using our
method as compared to the standard method [18], as shown in Figure 5.
Specifically, we find that the average time for execution using the standard
(without golden cutting point consideration) reconstruction method was 18.84
seconds whereas the golden cutting point method had a mean of 12.61 seconds.
This reduction in run time is largely attributed to the reduced number of
circuit evaluations. Particularly, we avoided having to execute a third of the
total shots by neglecting one basis element, bringing the total number of
circuit executions down from $4.5\times 10^{5}$ to $3.0\times 10^{5}$. This
demonstrates the applicability of our method: it is simple to implement and
produces a considerable reduction in wall time needed to perform circuit
reconstruction.
## IV Conclusion
In this paper, we introduced the idea of a golden cutting point where elements
of a basis set can be neglected (Section II-B). Neglecting these basis
elements allows for a considerable decrease in the runtime. The reduction can
be attributed to (1) fewer terms being involved when combining measurement
results from fragments, and (2) fewer circuit evaluations being needed in the
fragment downstream of the golden cut. We verified the correctness and
demonstrated the lowered runtime in both simulators and real quantum hardware
(Section III). We observed that golden cutting points do not come at the cost
of accuracy, and a statistically significant 33% decrease in runtime persists
in both a simulated environment and quantum devices.
The introduction of the golden cutting point begs the question of applicable
algorithms that have built-in golden cutting points. Variational circuits are
a potential candidate due to their flexibility in the circuit ansatz. Although
variational algorithms for combinatorial optimization or chemical simulations
have great structural assumptions given by the Hamiltonian, quantum machine
learning circuits typically do not possess these constraints and therefore are
likely more amenable to exploiting the golden cutting point formalism.
Moreover, detecting the existence of a golden cutting point is also an
interesting problem. In this work, we assumed the golden cutting point was known a
priori. However, it is not obvious whether a basis can be neglected without
simulating the circuit. Thus, we suspect there are methods for detecting
golden cutting points “online” during the execution of the circuit cutting
procedure through sequential empirical measurements. Note that this does not
necessarily sacrifice the parallelism of circuit cutting as golden cutting
points only affect the fragments directly incident to the cut. However,
performing such a method would require further statistical analysis of
acceptable error and the amplification of error through tensor contraction.
## Acknowledgements
We acknowledge the use of IBM Quantum services for this work. The views
expressed are those of the authors, and do not reflect the official policy or
position of IBM or the IBM Quantum team. This material is based upon work
supported by the U.S. Department of Energy, Office of Science, National
Quantum Information Science Research Centers. The colors used in figures were
based on Nord Theme [29]. Daniel T. Chen and Ethan H. Hansen contributed
equally to this work.
## References
* [1] L. K. Grover, “A fast quantum mechanical algorithm for database search,” in _Proceedings of the twenty-eighth annual ACM symposium on Theory of computing_ , ser. STOC ’96. Association for Computing Machinery, 1996, pp. 212–219. [Online]. Available: https://doi.org/10.1145/237814.237866
* [2] P. W. Shor, “Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer,” _SIAM Journal on Computing_ , vol. 26, no. 5, pp. 1484–1509, oct 1997. [Online]. Available: https://doi.org/10.1137%2Fs0097539795293172
* [3] A. Robert, P. K. Barkoutsos, S. Woerner, and I. Tavernelli, “Resource-efficient quantum algorithm for protein folding,” _npj Quantum Information_ , vol. 7, no. 1, feb 2021. [Online]. Available: https://doi.org/10.1038%2Fs41534-021-00368-4
* [4] R. LaRose, A. Mari, S. Kaiser, P. J. Karalekas, A. A. Alves, P. Czarnik, M. E. Mandouh, M. H. Gordon, Y. Hindy, A. Robertson, P. Thakre, M. Wahl, D. Samuel, R. Mistri, M. Tremblay, N. Gardner, N. T. Stemen, N. Shammah, and W. J. Zeng, “Mitiq: A software package for error mitigation on noisy quantum computers,” _Quantum_ , vol. 6, p. 774, aug 2022. [Online]. Available: https://doi.org/10.22331%2Fq-2022-08-11-774
* [5] M. H. Abobeih, Y. Wang, J. Randall, S. J. H. Loenen, C. E. Bradley, M. Markham, D. J. Twitchen, B. M. Terhal, and T. H. Taminiau, “Fault-tolerant operation of a logical qubit in a diamond quantum processor,” _Nature_ , vol. 606, no. 7916, pp. 884–889, may 2022. [Online]. Available: https://doi.org/10.1038%2Fs41586-022-04819-6
* [6] A. A. Saki, A. Katabarwa, S. Resch, and G. Umbrarescu, “Hypothesis testing for error mitigation: How to evaluate error mitigation,” 2023. [Online]. Available: https://arxiv.org/abs/2301.02690
* [7] J. Gambetta, “Quantum-centric supercomputing: The next wave of computing,” Dec 2022. [Online]. Available: https://research.ibm.com/blog/next-wave-quantum-centric-supercomputing
* [8] M. Cerezo, A. Arrasmith, R. Babbush, S. C. Benjamin, S. Endo, K. Fujii, J. R. McClean, K. Mitarai, X. Yuan, L. Cincio, and P. J. Coles, “Variational quantum algorithms,” _Nature Reviews Physics_ , vol. 3, no. 9, pp. 625–644, aug 2021. [Online]. Available: https://doi.org/10.1038%2Fs42254-021-00348-9
* [9] E. Farhi, J. Goldstone, and S. Gutmann, “A quantum approximate optimization algorithm,” _arXiv preprint arXiv:1411.4028_ , 2014. [Online]. Available: https://arxiv.org/abs/1411.4028
* [10] D. A. Fedorov, B. Peng, N. Govind, and Y. Alexeev, “VQE method: a short survey and recent developments,” _Materials Theory_ , vol. 6, no. 1, pp. 1–21, 2022. [Online]. Available: https://doi.org/10.1186/s41313-021-00032-6
* [11] B. Barak and K. Marwaha, “Classical Algorithms and Quantum Limitations for Maximum Cut on High-Girth Graphs,” in _13th Innovations in Theoretical Computer Science Conference (ITCS 2022)_ , ser. Leibniz International Proceedings in Informatics (LIPIcs), M. Braverman, Ed., vol. 215. Dagstuhl, Germany: Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2022, pp. 14:1–14:21. [Online]. Available: https://drops.dagstuhl.de/opus/volltexte/2022/15610
* [12] A. Warshel and M. Levitt, “Theoretical studies of enzymic reactions: dielectric, electrostatic and steric stabilization of the carbonium ion in the reaction of lysozyme,” _Journal of molecular biology_ , vol. 103, no. 2, pp. 227–249, 1976. [Online]. Available: https://www.sciencedirect.com/science/article/pii/0022283676903119
* [13] M. S. Gordon, D. G. Fedorov, S. R. Pruitt, and L. V. Slipchenko, “Fragmentation methods: A route to accurate calculations on large systems,” _Chemical reviews_ , vol. 112, no. 1, pp. 632–672, 2012. [Online]. Available: https://doi.org/10.1021/cr200093j
* [14] W. Li, S. Li, and Y. Jiang, “Generalized energy-based fragmentation approach for computing the ground-state energies and properties of large molecules,” _The Journal of Physical Chemistry A_ , vol. 111, no. 11, pp. 2193–2199, 2007\. [Online]. Available: https://doi.org/10.1021/jp067721q
* [15] H. Li, W. Li, S. Li, and J. Ma, “Fragmentation-based QM/MM simulations: Length dependence of chain dynamics and hydrogen bonding of polyethylene oxide and polyethylene in aqueous solutions,” _The Journal of Physical Chemistry B_ , vol. 112, no. 23, pp. 7061–7070, 2008. [Online]. Available: https://doi.org/10.1021/jp800777e
* [16] T. Peng, A. W. Harrow, M. Ozols, and X. Wu, “Simulating large quantum circuits on a small quantum computer,” _Physical Review Letters_ , vol. 125, no. 15, oct 2020. [Online]. Available: https://doi.org/10.1103%2Fphysrevlett.125.150504
* [17] T. Ayral, F.-M. Le Régent, Z. Saleem, Y. Alexeev, and M. Suchara, “Quantum divide and compute: Hardware demonstrations and noisy simulations,” in _2020 IEEE Computer Society Annual Symposium on VLSI (ISVLSI)_. IEEE, 2020, pp. 138–140. [Online]. Available: https://ieeexplore.ieee.org/document/9155024
* [18] T. Ayral, F.-M. L. Régent, Z. Saleem, Y. Alexeev, and M. Suchara, “Quantum divide and compute: exploring the effect of different noise sources,” _SN Computer Science_ , vol. 2, no. 3, pp. 1–14, 2021. [Online]. Available: https://doi.org/10.1007/s42979-021-00508-9
* [19] M. A. Perlin, Z. H. Saleem, M. Suchara, and J. C. Osborn, “Quantum circuit cutting with maximum-likelihood tomography,” _npj Quantum Information_ , vol. 7, no. 1, p. 64, 2021. [Online]. Available: https://doi.org/10.1038/s41534-021-00390-6
* [20] Z. H. Saleem, T. Tomesh, M. A. Perlin, P. Gokhale, and M. Suchara, “Quantum Divide and Conquer for Combinatorial Optimization and Distributed Computing,” _arXiv preprint_ , Jul. 2021. [Online]. Available: https://arxiv.org/abs/2107.07532
* [21] J. Liu, A. Gonzales, and Z. H. Saleem, “Classical simulators as quantum error mitigators via circuit cutting,” _arXiv preprint arXiv:2212.07335_ , 2022\. [Online]. Available: https://arxiv.org/abs/2212.07335
* [22] A. Lowe, M. Medvidović, A. Hayes, L. J. O’Riordan, T. R. Bromley, J. M. Arrazola, and N. Killoran, “Fast quantum circuit cutting with randomized measurements,” 2022. [Online]. Available: https://arxiv.org/abs/2207.14734
* [23] D. Chen, B. Baheri, V. Chaudhary, Q. Guan, N. Xie, and S. Xu, “Approximate quantum circuit reconstruction,” in _2022 IEEE International Conference on Quantum Computing and Engineering (QCE)_. IEEE, 2022, pp. 509–515.
* [24] G. Uchehara, T. M. Aamodt, and O. Di Matteo, “Rotation-inspired circuit cut optimization,” _arXiv preprint arXiv:2211.07358_ , 2022. [Online]. Available: https://arxiv.org/abs/2211.07358
* [25] H.-Y. Huang, R. Kueng, and J. Preskill, “Predicting many properties of a quantum system from very few measurements,” _Nature Physics_ , vol. 16, no. 10, pp. 1050–1057, 2020. [Online]. Available: https://doi.org/10.1038/s41567-020-0932-7
* [26] D. T. Chen, Z. H. Saleem, and M. A. Perlin, “Quantum divide and conquer for classical shadows,” _arXiv preprint arXiv:2212.00761_ , 2022. [Online]. Available: https://arxiv.org/abs/2212.00761
* [27] Qiskit contributors, “Qiskit: An open-source framework for quantum computing,” 2021.
* [28] “IBM Quantum,” 2023. [Online]. Available: https://quantum-computing.ibm.com/
* [29] S. Greb, “Nord theme.” [Online]. Available: https://www.nordtheme.com/
# Fuzzy semigroups via semigroups
Anjeza Krakulli
Universiteti Aleksandër Moisiu
Fakulteti i Teknologjisë dhe Informacionit
Departamenti i Matematikës, Durrës
<EMAIL_ADDRESS>
Elton Pasku
Universiteti i Tiranës
Fakulteti i Shkencave Natyrore
Departamenti i Matematikës, Tiranë
<EMAIL_ADDRESS>
###### Abstract
The theory of fuzzy semigroups is a branch of mathematics that arose in the
early 1990s as an effort to characterize properties of semigroups by the properties
of their fuzzy subsystems which include, fuzzy subsemigroups and their alike,
fuzzy one (resp. two) sided ideals, fuzzy quasi-ideals, fuzzy bi-ideals etc.
To be more precise, a fuzzy subsemigroup of a given semigroup $(S,\cdot)$ is
just a $\wedge$-prehomomorphism $f$ of $(S,\cdot)$ to $([0,1],\wedge)$.
Variations of this, which correspond to the other before mentioned fuzzy
subsystems, can be obtained by imposing certain properties to $f$. It turns
out from the work of Kuroki, Mordeson, Malik and that of many of their
descendants, that fuzzy subsystems play a similar role to the structure theory
of semigroups that play their non fuzzy analogues. The aim of the present
paper is to show that this similarity is not coincidental. As a first step to
this, we prove that there is a 1-1 correspondence between fuzzy subsemigroups
of $S$ and subsemigroups of a certain type in $S\times I$. Restricted to fuzzy
one sided ideals, this correspondence identifies the above fuzzy subsystems to
their analogues of $S\times I$. Using these identifications, we prove that the
characterization of the regularity of semigroups in terms of fuzzy one sided
ideals and fuzzy quasi-ideals can be obtained as an implication of the
corresponding non fuzzy analogue. In a further step, we give new
characterizations of semilattices of left simple semigroups in terms of left
simple fuzzy subsemigroups, and of completely regular semigroups in terms of
completely simple fuzzy subsemigroups. Both, left simple fuzzy subsemigroups,
and completely simple fuzzy subsemigroups are defined here for the first time,
and the corresponding characterizations generalize well known
characterizations of the corresponding semigroups.
Key words and phrases: Fuzzy subsemigroup, fuzzy one sided ideal, fuzzy quasi-
ideal, regular semigroup.
## 1 Introduction
Given a semigroup $(S,\cdot)$, a fuzzy subsemigroup of $S$ is a function
$f:S\rightarrow[0,1]$ such that for all $a,b\in S$, $f(ab)\geq f(a)\wedge
f(b)$. Thus, a fuzzy subsemigroup is just a prehomomorphism
$f:(S,\cdot)\rightarrow([0,1],\wedge)$ of semigroups. A fuzzy subsemigroup as
above will be called a fuzzy left ideal (resp. right ideal) of $S$ if for all
$a,b\in S$, $f(ab)\geq f(b)$ (resp. $f(ab)\geq f(a)$). The composite of two
fuzzy subsets $f,g:S\rightarrow[0,1]$ is given by
$(f\circ g)(a)=\left\\{\begin{array}[]{ccc}{\vee}_{a=bc}(f(b)\wedge g(c))&if&\exists x,y\in S,a=xy\\\ 0&if&\forall x,y\in S,a\neq xy.\end{array}\right.$
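To illustrate the definition computationally, the following Python sketch evaluates $f\circ g$ on a small finite semigroup; the two-element semigroup and the membership values are our own illustrative choices.

```python
# Composite of fuzzy subsets on a finite semigroup:
# (f o g)(a) = sup over factorizations a = bc of min(f(b), g(c)).
def compose(f, g, elements, mul):
    def fg(a):
        vals = [min(f(b), g(c)) for b in elements for c in elements
                if mul(b, c) == a]
        return max(vals) if vals else 0.0   # 0 if a has no factorization
    return fg

S = [0, 1]                                  # multiplication mod 2
mul = lambda b, c: (b * c) % 2
f = lambda a: 0.7 if a == 1 else 0.3
g = lambda a: 0.5
fg = compose(f, g, S, mul)
print([fg(a) for a in S])                   # -> [0.5, 0.5]
```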
The operation $\circ$ is associative. The definition of $\circ$ is needed in
defining fuzzy quasi-ideals which are maps $q:S\rightarrow[0,1]$ such that
$q\circ S\cap S\circ q\subseteq q$. In more details, the inclusion here means
that for every $a\in S$, $(q\circ S)(a)\wedge(S\circ q)(a)\leq q(a)$, and $S$
is the fuzzy subsemigroup which maps every $a\in S$ to 1. More generally, for
every subsemigroup $A$ of $S$, the characteristic map $\chi_{A}$, defined by
$\chi_{A}(a)=\left\\{\begin{array}[]{ccc}1&if&a\in A\\\ 0&if&a\notin A\end{array}\right.$
is proved to be a fuzzy subsemigroup of $S$. The above notions and other ones
that are needed to understand the rest of the paper can be found mostly in the
monograph [7]. Necessary concepts from semigroup theory are found in [3] and
[12]. From [12] we present below Theorem 9.3, whose fuzzy counterpart
is proved partially in [7]. We prove its fuzzy counterpart, Theorem 4.1,
including the part already proved in [7], not directly as in [7],
but as an implication of Theorem 9.3, after having established a 1-1
correspondence between fuzzy one sided ideals of a semigroup $S$ on the one
hand, and one sided ideals of $S\times[0,1]$ on the other. Theorem 9.3 of
[12] reads as follows.
###### Theorem 1.1
The following are equivalent.
* (i)
$S$ is a regular semigroup.
* (ii)
For every right ideal $R$ and every left ideal $L$ of $S$, we have $R\cap
L=RL$.
* (iii)
For every right ideal $R$ and every left ideal $L$ of $S$, (a) $R^{2}=R$, (b)
$L^{2}=L$, (c) $RL$ is a quasi-ideal of $S$.
* (iv)
The set of all quasi-ideals of $S$ forms a regular semigroup.
* (v)
For every quasi ideal $Q$ of $S$, $QSQ=Q$.
The last section of this paper is devoted to the semilattice decomposition of
left regular semigroups whose left ideals are also right ideals. It is proved
in [11] that such semigroups have a nice decomposition into smaller and easier
to understand subsemigroups. More specifically we have from [11] the following
theorem.
###### Theorem 1.2
For a semigroup $(S,\cdot)$ the following conditions are equivalent:
* (1)
$S$ is a semilattice of left simple semigroups.
* (2)
$L_{1}\cap L_{2}=L_{1}L_{2}$ for every left ideals $L_{1}$ and $L_{2}$ of $S$.
* (3)
The set of all left ideals of $S$ is a semilattice under the multiplication of
subsets.
* (4)
$S$ is left regular and every left ideal of $S$ is two-sided.
There is also a version of this theorem in [7] (Theorem 4.1.3) where conditions
(2), (3) and (4) are fuzzified by replacing the term left ideal (resp. ideal)
by fuzzy left ideal (resp. fuzzy ideal). We prove in our Theorem 5.1 that
condition (1) can also be fuzzified. For this we have defined what a fuzzy
semilattice of left simple fuzzy subsemigroups is. In the same spirit as the
above we give a fuzzy version of Theorem 4.1.3 of [3] by characterizing
completely regular semigroups in terms of what are defined here to be fuzzy
semilattices of completely simple fuzzy subsemigroups.
## 2 A relationship between fuzzy subsemigroups and semigroups
Assume we are given a fuzzy subsemigroup $f:S\rightarrow[0,1]$ of a semigroup
$(S,\cdot)$. Besides the semigroup $(S,\cdot)$ we consider the semilattice
$([0,1],\wedge)$. We sometimes write $I$ instead of $[0,1]$ and let
$I^{\ast}=I\setminus\\{0\\}$. We can obviously regard $f$ as a prehomomorphism
from $(S,\cdot)$ to $([0,1],\wedge)$. With this data at hand, we define a
subsemigroup of the direct product semigroup $(S,\cdot)\times([0,1],\wedge)$
by letting
$\mathfrak{S}(S,f)=\\{(b,y)\in S\times[0,1]:f(b)\geq y\\}.$
This is indeed a subsemigroup of $S\times[0,1]$ since for every
$(a,x),(b,y)\in\mathfrak{S}(S,f)$ we have that $f(ab)\geq f(a)\wedge f(b)\geq
x\wedge y$, and then,
$(a,x)(b,y)=(ab,x\wedge y)\in\mathfrak{S}(S,f).$
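This closure can also be checked mechanically on a finite example; the Python sketch below discretizes $[0,1]$ to a small grid (the interval itself being infinite) and verifies that $\mathfrak{S}(S,f)$ is closed under $(a,x)(b,y)=(ab,x\wedge y)$. The semigroup and the values of $f$ are our own toy choices.

```python
# Finite sanity check that S(S, f) = {(b, y) : f(b) >= y} is a subsemigroup.
S = [0, 1]                          # semigroup: multiplication mod 2
mul = lambda a, b: (a * b) % 2
f = {0: 0.5, 1: 0.8}                # satisfies f(ab) >= min(f(a), f(b))
grid = [i / 10 for i in range(11)]  # discretization of [0, 1]

frak_S = {(b, y) for b in S for y in grid if f[b] >= y}
closed = all((mul(a, b), min(x, y)) in frak_S
             for (a, x) in frak_S for (b, y) in frak_S)
print(closed)                       # -> True
```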
The semigroup $\mathfrak{S}(S,f)$ satisfies the following three conditions
* (i)
$\pi_{1}(\mathfrak{S}(S,f))=S$ where $\pi_{1}$ is the projection in the first
coordinate,
* (ii)
for every $b\in S$,
$(b,\text{sup}\\{y\in[0,1]:(b,y)\in\mathfrak{S}(S,f)\\})\in\mathfrak{S}(S,f)$,
since $\text{sup}\\{y\in[0,1]:(b,y)\in\mathfrak{S}(S,f)\\}=f(b)$, and
* (iii)
$(b,y)\in\mathfrak{S}(S,f)$ for every $y\leq f(b)$.
Conversely, every subsemigroup $\Sigma$ of the direct product $S\times[0,1]$
which satisfies the following three conditions
* (s-i)
$\pi_{1}(\Sigma)=S$ where $\pi_{1}$ is the projection in the first coordinate,
* (s-ii)
for every $b\in S$, $(b,\text{sup}\\{y\in[0,1]:(b,y)\in\Sigma\\})\in\Sigma$,
and
* (s-iii)
$(b,y)\in\Sigma$ for every $y\leq\text{sup}\\{z\in[0,1]:(b,z)\in\Sigma\\}$,
gives rise to a fuzzy subsemigroup $f:S\rightarrow[0,1]$ such that
$\mathfrak{S}(S,f)=\Sigma$. For this we define
$f:S\rightarrow[0,1]\text{ by }b\mapsto\text{sup}\\{y\in[0,1]:(b,y)\in\Sigma\\}.$
From condition (s-ii) we know that $(b,f(b))\in\Sigma$. The function $f$ is a
fuzzy subsemigroup for if $a,b\in S$, then since $(a,f(a)),(b,f(b))\in\Sigma$,
it follows that
$(ab,f(a)\wedge f(b))=(a,f(a))(b,f(b))\in\Sigma,$
and then $f(ab)\geq f(a)\wedge f(b)$. Finally,
$\displaystyle(b,y)\in\mathfrak{S}(S,f)\Leftrightarrow f(b)\geq y\Leftrightarrow(b,y)\in\Sigma,$ where the second equivalence follows from condition (s-iii).
The set of subsemigroups $\Sigma$ of $S\times I$ as above is denoted here by
$F(S\times I)$. Also we write $\mathfrak{F}(S)$ for the set of all fuzzy
subsemigroups of $S$. Define now
$\Psi:\mathfrak{F}(S)\rightarrow F(S\times I)\text{ by }f\mapsto\mathfrak{S}(S,f).$
###### Proposition 2.1
The map $\Psi$ is bijective.
###### Proof.
That $\Psi$ is well defined and onto was proved above. To prove that $\Psi$ is
injective, we let $f,g\in\mathfrak{F}(S)$ such that
$\mathfrak{S}(S,f)=\mathfrak{S}(S,g)$. Then, for every $a\in S$,
$(a,g(a))\in\mathfrak{S}(S,g)$, hence $(a,g(a))\in\mathfrak{S}(S,f)$, and then
$g(a)\leq f(a)$. In a symmetric way one can show that $f(a)\leq g(a)$,
therefore $g(a)=f(a)$. ∎
We conclude this section with a further remark on the relationships between
subsemigroups of $S\times I$ and fuzzy subsemigroups of $S$. Once we
associated every fuzzy subsemigroup of $S$ with a subsemigroup of $S\times I$,
we would like to establish a reverse correspondence. It is obvious that not
every subsemigroup of $S\times I$ is in the image of $\Psi$. For instance, if
$t\in I^{\ast}$ is fixed, then every subsemigroup $\Sigma$ of $S\times I$ with
$\pi_{2}(\Sigma)=t$ cannot be in the image of $\Psi$. But still we can define
a fuzzy subsemigroup $\sigma$ in terms of $\Sigma$ and find a relationship
between $\mathfrak{S}(S,\sigma)$ and $\Sigma$. For this we define
$\sigma:S\rightarrow I$ such that
$\sigma(a)=\left\\{\begin{array}[]{ccc}\alpha&if&a\in\pi_{1}(\Sigma)\\\ 0&if&a\notin\pi_{1}(\Sigma),\end{array}\right.$
where $\alpha=\text{sup}\\{t\in I:(a,t)\in\Sigma\\}$. We prove that $\sigma$
is a fuzzy subsemigroup of $S$. Indeed, let $a,b\in S$. If
$ab\notin\pi_{1}(\Sigma)$, then it is obvious that at least one of $a$ or $b$
cannot be in $\pi_{1}(\Sigma)$, consequently
$\sigma(ab)=0=\sigma(a)\wedge\sigma(b)$. If $ab\in\pi_{1}(\Sigma)$ we let
$\gamma=\text{sup}\,C$ where $C=\\{t\in I:(ab,t)\in\Sigma\\}$. If now
$a,b\in\pi_{1}(\Sigma)$, we write $A=\\{t^{\prime}\in
I:(a,t^{\prime})\in\Sigma\\}$, $B=\\{t^{\prime\prime}\in
I:(b,t^{\prime\prime})\in\Sigma\\}$, and let $\alpha=\text{sup}A$ and
$\beta=\text{sup}B$. We prove that $\alpha\wedge\beta\leq\gamma$ which in turn
is equivalent to $\sigma(ab)\geq\sigma(a)\wedge\sigma(b)$. Observe first that
$t^{\prime}\wedge t^{\prime\prime}\in C$ for every $t^{\prime}\in A$ and
$t^{\prime\prime}\in B$ since $(ab,t^{\prime}\wedge
t^{\prime\prime})=(a,t^{\prime})(b,t^{\prime\prime})\in\Sigma$. Let us assume
that $\alpha<\beta$ or equivalently that $\alpha\wedge\beta=\alpha$. In this
case, there is $t_{\ast}^{\prime\prime}\in B$ such that
$t_{\ast}^{\prime\prime}>\alpha$, and then
$\displaystyle\alpha=\text{sup}A=\text{sup}\\{t^{\prime}\wedge t_{\ast}^{\prime\prime}:t^{\prime}\in A\\}\leq\text{sup}\\{t^{\prime}\wedge t^{\prime\prime}:t^{\prime}\in A,t^{\prime\prime}\in B\\}\leq\text{sup}C=\gamma,$
where the second inequality follows from our observation.
A similar proof is available when $\alpha>\beta$. Let us now prove the claim
when $\alpha=\beta$. There are increasing sequences
$\\{t^{\prime}_{n}\\}\subseteq A$ and $\\{t^{\prime\prime}_{n}\\}\subseteq B$
such that $t^{\prime}_{n}\rightarrow\alpha$ and
$t^{\prime\prime}_{n}\rightarrow\alpha$. It follows that $t^{\prime}_{n}\wedge
t^{\prime\prime}_{n}\rightarrow\alpha$, and then
$\displaystyle\alpha=\text{sup}A=\text{sup}\\{t^{\prime}_{n}:n\in\mathbb{N}\\}=\text{sup}\\{t^{\prime}_{n}\wedge t^{\prime\prime}_{n}:n\in\mathbb{N}\\}\leq\text{sup}C=\gamma,$
where the inequality again follows from the observation.
In the three remaining cases when at least one of $a$ or $b$ is not in
$\pi_{1}(\Sigma)$ it is straightforward that $\alpha\wedge\beta\leq\gamma$.
Now we prove that $\Sigma\subseteq\mathfrak{S}(S,\sigma)$. Indeed, let
$(a,t)\in\Sigma$, then from the definition of $\sigma$, $\sigma(a)\geq t$,
hence $(a,t)\in\mathfrak{S}(S,\sigma)$. Letting $\text{Sub}(S\times I)$ be the
set of subsemigroups of the product semigroup $S\times I$, we define
$\tilde{\Psi}:\text{Sub}(S\times I)\rightarrow\mathfrak{F}(S)\text{ by }\Sigma\mapsto\sigma,$
with $\sigma$ described as above. This map is obviously surjective, and in
general non-injective since if $\sigma$ is such that
$\Sigma\neq\mathfrak{S}(S,\sigma)$, then
$\tilde{\Psi}(\Sigma)=\sigma=\tilde{\Psi}(\mathfrak{S}(S,\sigma))$. The right
hand side equality above shows that $\tilde{\Psi}$ is a left inverse of
$\Psi$.
## 3 Similar relationships for other fuzzy subsystems
Given a fuzzy left ideal $f:S\rightarrow[0,1]$ of a semigroup $(S,\cdot)$, we
define a left ideal in the direct product semigroup $S\times[0,1]$ by
$\mathcal{L}(S,f)=\\{(b,y)\in S\times[0,1]:f(b)\geq y\\}.$
This is indeed a left ideal of $S\times[0,1]$ for if $(a,x)\in S\times[0,1]$
and $(b,y)\in\mathcal{L}(S,f)$, then
$(a,x)(b,y)=(ab,x\wedge y)\in\mathcal{L}(S,f)$
since
$f(ab)\geq f(b)\geq x\wedge f(b)\geq x\wedge y.$
We also note that $\mathcal{L}(S,f)$ satisfies the following three conditions:
* (i)
$\pi_{1}(\mathcal{L}(S,f))=S$ where $\pi_{1}$ is the projection in the first
coordinate,
* (ii)
for every $b\in S$,
$(b,\text{sup}\\{y\in[0,1]:(b,y)\in\mathcal{L}(S,f)\\})\in\mathcal{L}(S,f)$,
since $\text{sup}\\{y\in[0,1]:(b,y)\in\mathcal{L}(S,f)\\}=f(b)$, and
* (iii)
$(b,y)\in\mathcal{L}(S,f)$ for every $y\leq f(b)$.
Conversely, every left ideal $L$ of the direct product semigroup
$S\times[0,1]$ which satisfies the following three conditions
* (l-i)
$\pi_{1}(L)=S$ where $\pi_{1}$ is the projection in the first coordinate,
* (l-ii)
for every $b\in S$, $(b,\text{sup}\\{y\in[0,1]:(b,y)\in L\\})\in L$, and
* (l-iii)
$(b,y)\in L$ for every $y\leq\text{sup}\\{z\in[0,1]:(b,z)\in L\\}$,
gives rise to a fuzzy left ideal of $S$. For this we define
$f:S\rightarrow[0,1]:b\mapsto\text{sup}\\{y\in[0,1]:(b,y)\in L\\}.$
In particular, by (l-ii), we have that $(b,f(b))\in L$. Now we show that for all $a,b\in S$, $f(ab)\geq f(b)$. Since $L$ is a left ideal, for $(a,1)\in S\times[0,1]$ and $(b,f(b))\in L$ we have
$(a,1)(b,f(b))=(ab,f(b))\in L,$
hence from the definition of $f$, we have $f(ab)\geq f(b)$. Also we note that
the left ideal $\mathcal{L}(S,f)$ arising from the fuzzy left ideal $f$
defined in terms of $L$ is exactly the left ideal $L$ we started with, that is
$\mathcal{L}(S,f)=L$. Indeed,
$\displaystyle(b,y)\in\mathcal{L}(S,f)$ $\displaystyle\Leftrightarrow f(b)\geq y$ $\displaystyle\Leftrightarrow(b,y)\in L$ (from condition (l-iii)).
A result similar to Proposition 2.1 holds true for fuzzy left ideals. We let
$L(S\times I)$ be the set of left ideals of $S\times I$ which satisfy
properties (l-i), (l-ii) and (l-iii) above. Also we write $\mathfrak{L}(S)$
for the set of all fuzzy left ideals of $S$.
###### Proposition 3.1
The restriction $\Psi|_{\mathfrak{L}(S)}$ of $\Psi$ on $\mathfrak{L}(S)$ is a
bijection between $\mathfrak{L}(S)$ and $L(S\times I)$.
###### Proof.
It is straightforward that for every fuzzy left ideal $f$,
$\mathcal{L}(S,f)=\Psi(f)$, hence
$\Psi|_{\mathfrak{L}(S)}:\mathfrak{L}(S)\rightarrow L(S\times I)$ is well
defined. On the other hand, every left ideal $L$ of $L(S\times I)$ is in the
image of $\Psi$, hence the restriction map is onto. Finally,
$\Psi|_{\mathfrak{L}(S)}$ is 1-1 as a restriction of a 1-1 map. ∎
We note in passing that similar results hold true for fuzzy right ideals too. For instance, one can prove that if we let $\mathfrak{R}(S)$ be the set of fuzzy right ideals of $S$, and $R(S\times I)$ the set of right ideals of $S\times I$ which satisfy conditions similar to (l-i)–(l-iii) above, then we have the following.
###### Proposition 3.2
The restriction $\Psi|_{\mathfrak{R}(S)}$ of $\Psi$ on $\mathfrak{R}(S)$ is a
bijection between $\mathfrak{R}(S)$ and $R(S\times I)$.
We conclude this section by proving that analogous results to the above hold
true for fuzzy quasi-ideals.
###### Lemma 3.1
Whenever $q$ is a fuzzy quasi-ideal and $a\in S$ has a family of factorizations $a=b_{i}c_{i}$ with $i\in J$, then either $q(a)\geq q(b_{i})$ for all $i\in J$, or $q(a)\geq q(c_{i})$ for all $i\in J$.
###### Proof.
From the definition of a fuzzy quasi-ideal $q$ we have
$q(a)\geq\big(\underset{i\in J}{\vee}q(b_{i})\big)\wedge\big(\underset{j\in J}{\vee}q(c_{j})\big).$
If, for instance,
$\text{min}\left(\underset{i\in J}{\vee}q(b_{i}),\underset{j\in J}{\vee}q(c_{j})\right)=\underset{i\in J}{\vee}q(b_{i}),$
then $q(a)\geq\underset{i\in J}{\vee}q(b_{i})$, hence $q(a)\geq q(b_{i})$ for all $i\in J$. Similarly, when
$\text{min}\left(\underset{i\in J}{\vee}q(b_{i}),\underset{j\in J}{\vee}q(c_{j})\right)=\underset{j\in J}{\vee}q(c_{j}),$
then $q(a)\geq q(c_{j})$ for all $j\in J$. ∎
For every fixed fuzzy quasi-ideal $q$ of $S$, we let
$\mathfrak{Q}(S,q)=\\{(a,x)\in S\times I:q(a)\geq x\\},$
and prove that $\mathfrak{Q}(S,q)$ is a quasi-ideal of $S\times I$. Indeed, if
$(a,x),(b,y)\in\mathfrak{Q}(S,q)$ and $(s,t),(s^{\prime},t^{\prime})\in
S\times I$ are such that
$(a,x)(s,t)=(s^{\prime},t^{\prime})(b,y)\in\mathfrak{Q}(S,q)\cdot(S\times
I)\cap(S\times I)\cdot\mathfrak{Q}(S,q),$
then
$\displaystyle(as,x\wedge t)=(s^{\prime}b,t^{\prime}\wedge y),$
hence $as=s^{\prime}b$ and $q(as)=q(s^{\prime}b)$. From Lemma 3.1 we have that
either $q(as)\geq q(a)$, or $q(s^{\prime}b)\geq q(b)$. In the first case we
have that
$q(as)\geq q(a)\geq x\geq x\wedge t,$
which proves that $(a,x)(s,t)=(as,x\wedge t)\in\mathfrak{Q}(S,q)$. If the
second case above occurs, then
$q(s^{\prime}b)\geq q(b)\geq y\geq y\wedge t^{\prime},$
consequently, $(s^{\prime},t^{\prime})(b,y)=(s^{\prime}b,t^{\prime}\wedge y)\in\mathfrak{Q}(S,q)$. We have thus proved that $\mathfrak{Q}(S,q)$ is a
quasi-ideal of $S\times I$. The quasi-ideal $\mathfrak{Q}(S,q)$ satisfies the
following properties
* (i)
$\pi_{1}(\mathfrak{Q}(S,q))=S$ where $\pi_{1}$ is the projection in the first
coordinate,
* (ii)
for every $b\in S$,
$(b,\text{sup}\\{y\in[0,1]:(b,y)\in\mathfrak{Q}(S,q)\\})\in\mathfrak{Q}(S,q)$,
* (iii)
$(b,y)\in\mathfrak{Q}(S,q)$ for every
$y\leq\text{sup}\\{z\in[0,1]:(b,z)\in\mathfrak{Q}(S,q)\\}$, and
* (iv)
if $a\in S$ has a family of decompositions $a=b_{i}c_{i}$ with $i\in J$, then
for all $i\in J$,
$\text{sup}\\{x\in[0,1]:(a,x)\in\mathfrak{Q}(S,q)\\}\geq\text{sup}\\{y\in[0,1]:(b_{i},y)\in\mathfrak{Q}(S,q)\\},$
or for all $i\in J$,
$\text{sup}\\{x\in[0,1]:(a,x)\in\mathfrak{Q}(S,q)\\}\geq\text{sup}\\{z\in[0,1]:(c_{i},z)\in\mathfrak{Q}(S,q)\\}.$
Conversely, every quasi-ideal $Q$ of the direct product semigroup
$S\times[0,1]$ which satisfies the following four conditions
* (q-i)
$\pi_{1}(Q)=S$ where $\pi_{1}$ is the projection in the first coordinate,
* (q-ii)
for every $b\in S$, $(b,\text{sup}\\{y\in[0,1]:(b,y)\in Q\\})\in Q$,
* (q-iii)
$(b,y)\in Q$ for every $y\leq\text{sup}\\{z\in[0,1]:(b,z)\in Q\\}$, and
* (q-iv)
If $a\in S$ has a family of decompositions $a=b_{i}c_{i}$ with $i\in J$, then
for all $i\in J$,
$\text{sup}\\{x\in[0,1]:(a,x)\in Q\\}\geq\text{sup}\\{y\in[0,1]:(b_{i},y)\in
Q\\},$
or for all $i\in J$,
$\text{sup}\\{x\in[0,1]:(a,x)\in Q\\}\geq\text{sup}\\{z\in[0,1]:(c_{i},z)\in
Q\\},$
gives rise to a fuzzy quasi-ideal of $S$. Indeed, let $q:S\rightarrow[0,1]$ be
such that
$q(b)=\text{sup}\\{y\in[0,1]:(b,y)\in Q\\}.$
In particular we have that $(b,q(b))\in Q$. Now we show that $q\circ S\cap
S\circ q\subseteq q$. Let $a\in S$ be any fixed element. Denote by $J$ the set
of indexes for which there are $b_{i},c_{i}\in S$ with $i\in J$ such that
$a=b_{i}c_{i}$. To prove the above inclusion we need to prove that
$\underset{i\in J}{\vee}q(b_{i})\wedge\underset{j\in J}{\vee}q(c_{j})\leq
q(a),$
which due to the condition (q-iii) follows directly if we prove that
$(a,\underset{(i,j)\in J\times J}{\vee}q(b_{i})\wedge q(c_{j}))=(a,\underset{i\in J}{\vee}q(b_{i})\wedge\underset{j\in J}{\vee}q(c_{j}))\in Q.$
From condition (q-iv) we may assume that for instance $q(a)\geq q(b_{i})$ for
all $i\in J$, hence for each $j\in J$, we have from condition (q-iii) that
$(a,q(b_{i})\wedge q(c_{j}))\in Q$. Conditions (q-ii) and (q-iii) now imply that $(a,\underset{(i,j)\in J\times J}{\vee}(q(b_{i})\wedge q(c_{j})))\in Q$,
and we are done. A similar proof is available if we assume that for all $j\in
J$, $q(a)\geq q(c_{j})$. Now we show that $\mathfrak{Q}(S,q)=Q$ for the fuzzy
quasi-ideal $q$ defined in terms of the given $Q$. Indeed,
$\displaystyle(b,y)\in\mathfrak{Q}(S,q)$ $\displaystyle\Leftrightarrow
q(b)\geq y$ $\displaystyle\Leftrightarrow(b,y)\in Q$ (from (q-iii)).
In a similar fashion with the case of fuzzy left and right ideals, we denote
here by $Q(S\times I)$ the set of all quasi-ideals of $S\times I$ that satisfy conditions (q-i)–(q-iv), and let $\mathfrak{Q}(S)$ be the set of all fuzzy quasi-ideals of $S$. With these notations we have the following.
###### Proposition 3.3
The restriction $\Psi|_{\mathfrak{Q}(S)}$ of $\Psi$ on $\mathfrak{Q}(S)$ is a
bijection between $\mathfrak{Q}(S)$ and $Q(S\times I)$.
###### Proof.
The map $\Psi$ sends $q\mapsto\mathfrak{Q}(S,q)\in Q(S\times I)$ and this map
is onto from the above. The injectivity of $\Psi|_{\mathfrak{Q}(S)}$ is dealt with in the same way as that of fuzzy one-sided ideals. ∎
## 4 Regular Semigroups
Our next Theorem 4.1 is the fuzzy analogue of Theorem 9.3 of [12] and is partially covered by Theorems 3.1.2–3.1.8 of [7], apart from characterization (iv), which is a new fuzzification of its counterpart in Theorem 9.3 of [12]. This theorem characterizes the regularity of a semigroup in terms of fuzzy one-sided ideals and fuzzy quasi-ideals, but in contrast with [7], where the proofs are analogous to those of Theorem 9.3 of [12], our proofs employ the identifications of Proposition 3.1 and Proposition 3.2 to obtain the result as an implication of the corresponding result of Theorem 9.3 of [12]. The following lemma will be useful in the proof of Theorem 4.1.
###### Lemma 4.1
Let $B,C$ be subsemigroups of a semigroup $S$, and let $\chi_{B}$ and
$\chi_{C}$ be the corresponding fuzzy subsemigroups. Then,
$\chi_{B}\circ\chi_{C}=\chi_{BC}$.
###### Proof.
For every $a\in S$ we have that,
$\displaystyle(\chi_{B}\circ\chi_{C})(a)=1$
$\displaystyle\Leftrightarrow\underset{a=st}{\vee}\chi_{B}(s)\wedge\chi_{C}(t)=1$
$\displaystyle\Leftrightarrow\text{ there is a decomposition }a=st,\text{ with
}\chi_{B}(s)=1\text{ and }\chi_{C}(t)=1$ $\displaystyle\Leftrightarrow\text{
there is a decomposition }a=st,\text{ with }s\in B\text{ and }t\in C$
$\displaystyle\Leftrightarrow a\in BC$
$\displaystyle\Leftrightarrow\chi_{BC}(a)=1.$
Further, if $(\chi_{B}\circ\chi_{C})(a)=0$, then either $a$ does not have a
nontrivial decomposition $a=st$, in which case $a\notin BC$ and then
$\chi_{BC}(a)=0$, or $a$ decomposes as $a=st$ but for every such decomposition
we should have that $\chi_{B}(s)\wedge\chi_{C}(t)=0$. This means that either
$s\notin B$ or $t\notin C$, consequently $a\notin BC$ and $\chi_{BC}(a)=0$.
Conversely, if $\chi_{BC}(a)=0$, then either $a$ is indecomposable, or, even if it is, it can never be written as $a=st$ with both $s\in B$ and $t\in C$; consequently $(\chi_{B}\circ\chi_{C})(a)=0$. Recollecting, we have that $\chi_{B}\circ\chi_{C}=\chi_{BC}$. ∎
###### Theorem 4.1
The following are equivalent.
* (i)
$S$ is a regular semigroup.
* (ii)
For every fuzzy right ideal $f$ and every fuzzy left ideal $g$, we have $f\cap
g=f\circ g$.
* (iii)
For every fuzzy right ideal $f$ and every fuzzy left ideal $g$ of $S$, (a)
$f\circ f=f$, (b) $g\circ g=g$, (c) $f\circ g$ is a fuzzy quasi-ideal of $S$.
* (iv)
The set $\mathcal{Q}(S)$ of all fuzzy quasi-ideals of $S$ forms a regular semigroup, where the multiplication is $\circ$ and for every $q\in\mathcal{Q}(S)$, $q\circ S\circ q=q$.
###### Proof.
$(i)\Rightarrow(ii)$: The regularity of $S$ implies that of $S\times I$. Indeed, for every $(a,x)\in S\times I$, let $a^{\prime}\in S$ be such that $aa^{\prime}a=a$. Then $(a,x)(a^{\prime},x)(a,x)=(a,x)$, proving that $(a,x)$ has an inverse. Since $S\times I$ is regular, Theorem 9.3 of [12] gives that for every right ideal $R$ and every left ideal $L$ of $S\times I$ we have $R\cap L=RL$. We apply this by taking $R=\mathfrak{R}(S,f)=\Psi(f)$, the right ideal of $S\times I$ associated with an arbitrary fuzzy right ideal $f$ of $S$, and $L=\mathfrak{L}(S,g)=\Psi(g)$, the left ideal of $S\times I$ associated with an arbitrary fuzzy left ideal $g$. In this particular case we have
$\mathfrak{R}(S,f)\cap\mathfrak{L}(S,g)=\mathfrak{R}(S,f)\mathfrak{L}(S,g)$.
Further, for every $a\in S$, we assume that $(f\cap g)(a)=f(a)$ which means
that $f(a)=\text{min}(f(a),g(a))$. Under this assumption we will prove that
$(f\cap g)(a)=f(a)=\vee_{a=bc}(f(b)\wedge g(c))=(f\circ g)(a).$
A similar proof can be provided in case $g(a)=\text{min}(f(a),g(a))$. Since
$f(a)\leq g(a)$, then $(a,f(a))\in\mathfrak{R}(S,f)\cap\mathfrak{L}(S,g)$ and
$(a,f(a))\in\mathfrak{R}(S,f)\mathfrak{L}(S,g)$. Hence, there are
$(b,x)\in\mathfrak{R}(S,f)$ and $(c,y)\in\mathfrak{L}(S,g)$ such that
$(a,f(a))=(b,x)(c,y)$. It follows that $a=bc$, and that
$f(a)=x\wedge y\leq f(b)\wedge g(c).$
But $f$ is a fuzzy right ideal, so $f(b)\leq f(bc)=f(a)$, hence
$f(b)\leq f(a)\leq f(b)\wedge g(c)\leq f(b),$
which implies that $f(a)=f(b)=f(b)\wedge g(c)$. Now we write $(f\circ g)(a)$
as
$(f\circ g)(a)=(f(b)\wedge
g(c))\vee(\vee_{a=b^{\prime}c^{\prime},b^{\prime}\neq b}(f(b^{\prime})\wedge
g(c^{\prime}))),$
and obtain from the above that
$(f\circ g)(a)=f(a)\vee(\vee_{a=b^{\prime}c^{\prime},b^{\prime}\neq
b}(f(b^{\prime})\wedge g(c^{\prime})))\geq f(a).$
But on the other hand we have that
$\displaystyle(f\circ
g)(a)=\vee_{a=b^{\prime\prime}c^{\prime\prime}}(f(b^{\prime\prime})\wedge
g(c^{\prime\prime}))$
$\displaystyle\leq\vee_{a=b^{\prime\prime}c^{\prime\prime}}f(b^{\prime\prime})$
$\displaystyle\leq f(b^{\prime\prime}c^{\prime\prime})$ (since $f$ is a fuzzy
right ideal) $\displaystyle=f(a).$
Therefore we finally have that $(f\circ g)(a)=f(a)=(f\cap g)(a)$.
$(ii)\Rightarrow(i)$: Let $R$ and $L$ be arbitrary right and left ideals of
$S$ and $\chi_{R}$ and $\chi_{L}$ be their respective fuzzy right and left
ideals. From the assumption we have that
$\chi_{R}\cap\chi_{L}=\chi_{R}\circ\chi_{L}$. On the one hand, it is obvious
that $\chi_{R}\cap\chi_{L}=\chi_{R\cap L}$, and on the other hand we have from
Lemma 4.1 that $\chi_{R}\circ\chi_{L}=\chi_{RL}$. Combining both equalities we derive that $R\cap L=RL$. Now Theorem 9.3 of [12] implies (i).
$(i)\Rightarrow(iii)$: Let $f$ be a fuzzy right ideal of $S$ and let $\mathfrak{R}(S,f)=\Psi(f)$ be the corresponding right ideal of $S\times I$. Since $S$ is regular, so is $S\times I$, and then (iii) of Theorem 9.3 of [12] implies that $\mathfrak{R}(S,f)\mathfrak{R}(S,f)=\mathfrak{R}(S,f)$. We deduce from this that $f\circ f=f$. For every $a\in S$, $(a,f(a))\in\mathfrak{R}(S,f)$, hence there are $(b,y),(c,z)\in\mathfrak{R}(S,f)$ such that
$(a,f(a))=(b,y)(c,z)=(bc,y\wedge z).$
In particular we have that $a$ has a decomposition $a=bc$ and that
$f(a)=y\wedge z\leq f(b)\wedge f(c).$
It follows from this that
$(f\circ f)(a)=(f(b)\wedge
f(c))\vee\left(\underset{\underset{(b^{\prime},c^{\prime})\neq(b,c)}{a=b^{\prime}c^{\prime}}}{\vee}f(b^{\prime})\wedge
f(c^{\prime})\right)\geq f(a).$
On the other hand we have that
$\displaystyle(f\circ f)(a)$
$\displaystyle=\underset{a=b^{\prime}c^{\prime}}{\vee}f(b^{\prime})\wedge
f(c^{\prime})$
$\displaystyle\leq\underset{a=b^{\prime}c^{\prime}}{\vee}f(a)\wedge
f(c^{\prime})$ (since $f$ is a fuzzy right ideal) $\displaystyle\leq f(a).$
Combining both inequalities we obtain that $(f\circ f)(a)=f(a)$.
In a similar fashion one can prove that for every fuzzy left ideal $g$ we have
that $g\circ g=g$. Finally, that $f\circ g$ is a fuzzy quasi-ideal of $S$ for
every fuzzy right ideal $f$ and every fuzzy left ideal $g$, follows from Lemma
2.6.5 [7] and from the equality $f\circ g=f\cap g$ which is a consequence of
the equivalence $(i)\Leftrightarrow(ii)$.
$(iii)\Rightarrow(i)$: Under the assumptions that for every fuzzy right ideal
$f$ and every fuzzy left ideal $g$, we have that $f\circ f=f$, $g\circ g=g$
and $f\circ g$ is a fuzzy quasi-ideal, we prove that for every right ideal $R$
of $S$ and every left ideal $L$ of $S$ we have that $RR=R$, $LL=L$ and that
$RL$ is a quasi-ideal of $S$. These three conditions imply, by Theorem 9.3 of [12], that $S$ is regular. Consider now the fuzzy right ideal $\chi_{R}$ and
the fuzzy left ideal $\chi_{L}$ for which we have that
$\chi_{R}\circ\chi_{R}=\chi_{R}$, $\chi_{L}\circ\chi_{L}=\chi_{L}$ and that
$\chi_{R}\circ\chi_{L}$ is a fuzzy quasi-ideal of $S$. Lemma 4.1 and the first
two assumptions imply immediately that $RR=R$ and $LL=L$. Again lemma 4.1
implies that $\chi_{R}\circ\chi_{L}=\chi_{RL}$, which from our assumption is a
fuzzy quasi-ideal of $S$. Then Lemma 2.6.4 of [7] implies that $RL$ is a quasi-ideal of $S$.
$(i)\Rightarrow(iv)$: Let $q_{1},q_{2}\in\mathcal{Q}(S)$ be arbitrary and
$\mathfrak{Q}(q_{1},S)$, $\mathfrak{Q}(q_{2},S)$ be their corresponding quasi-ideals of $S\times I$. We know from Theorem 9.3 of [12] that $\mathfrak{Q}(q_{1},S)\mathfrak{Q}(q_{2},S)$ is a quasi-ideal of $S\times I$ since $S\times I$ is regular. Let $a\in S$ be arbitrary; we want to show that
$((q_{1}\circ q_{2})\circ S\wedge S\circ(q_{1}\circ q_{2}))(a)\leq(q_{1}\circ
q_{2})(a).$
Assume now that $a$ decomposes as $a=b_{i}c_{i}d_{i}$ where $i\in J$, so the above inequality now reads
$\underset{i\in J}{\vee}(q_{1}(b_{i})\wedge q_{2}(c_{i}))\wedge\underset{i\in
J}{\vee}(q_{1}(c_{i})\wedge q_{2}(d_{i}))\leq(q_{1}\circ q_{2})(a),$
which is equivalent to
$\underset{(i,j)\in J\times J}{\vee}(q_{1}(b_{i})\wedge q_{2}(c_{i})\wedge
q_{1}(c_{j})\wedge q_{2}(d_{j}))\leq(q_{1}\circ q_{2})(a).$
This follows if we prove that for every $(i,j)\in J\times J$ we have
$q_{1}(b_{i})\wedge q_{2}(c_{i})\wedge q_{1}(c_{j})\wedge
q_{2}(d_{j})\leq(q_{1}\circ q_{2})(a).$ (1)
For this consider the element $(a,q_{1}(b_{i})\wedge q_{2}(c_{i})\wedge
q_{1}(c_{j})\wedge q_{2}(d_{j}))\in S\times I$ for which we have
$\displaystyle\mathfrak{Q}(q_{1},S)\mathfrak{Q}(q_{2},S)(S\times I)$
$\displaystyle\ni(b_{i},q_{1}(b_{i}))(c_{i},q_{2}(c_{i}))(d_{i},q_{1}(c_{j})\wedge
q_{2}(d_{j}))$ $\displaystyle=(a,q_{1}(b_{i})\wedge q_{2}(c_{i})\wedge
q_{1}(c_{j})\wedge q_{2}(d_{j}))$ $\displaystyle=(b_{j},q_{1}(b_{i})\wedge
q_{2}(c_{i}))(c_{j},q_{1}(c_{j}))(d_{j},q_{2}(d_{j}))$
$\displaystyle\in(S\times I)\mathfrak{Q}(q_{1},S)\mathfrak{Q}(q_{2},S).$
Since $\mathfrak{Q}(q_{1},S)\mathfrak{Q}(q_{2},S)(S\times I)\cap(S\times
I)\mathfrak{Q}(q_{1},S)\mathfrak{Q}(q_{2},S)\subseteq\mathfrak{Q}(q_{1},S)\mathfrak{Q}(q_{2},S)$,
then
$(a,q_{1}(b_{i})\wedge q_{2}(c_{i})\wedge q_{1}(c_{j})\wedge
q_{2}(d_{j}))\in\mathfrak{Q}(q_{1},S)\mathfrak{Q}(q_{2},S),$
therefore there are $(s,x)\in\mathfrak{Q}(q_{1},S)$ and
$(t,y)\in\mathfrak{Q}(q_{2},S)$ such that
$(a,q_{1}(b_{i})\wedge q_{2}(c_{i})\wedge q_{1}(c_{j})\wedge
q_{2}(d_{j}))=(s,x)(t,y)=(st,x\wedge y).$ (2)
It follows that
$\displaystyle(q_{1}\circ q_{2})(a)$ $\displaystyle=(q_{1}\circ q_{2})(st)$
(since $a=st$) $\displaystyle\geq q_{1}(s)\wedge q_{2}(t)$ (from the
definition of $\circ$) $\displaystyle\geq x\wedge y$ (since $q_{1}(s)\geq x$
and $q_{2}(t)\geq y$) $\displaystyle=q_{1}(b_{i})\wedge q_{2}(c_{i})\wedge
q_{1}(c_{j})\wedge q_{2}(d_{j})$ (from (2)).
This proves (1) and we are done with the first part of the proof. It remains
to prove that $(\mathcal{Q}(S),\circ)$ is regular. For this we utilize again Theorem 9.3 of [12], where it is proved that every quasi-ideal $Q$ of a regular semigroup $T$ satisfies $QTQ=Q$. Let $q\in\mathcal{Q}(S)$ and $\mathfrak{Q}(q,S)$ its
corresponding quasi-ideal of $S\times I$ which satisfies
$\mathfrak{Q}(q,S)(S\times I)\mathfrak{Q}(q,S)=\mathfrak{Q}(q,S).$
We use this and the obvious fact that $S\in\mathcal{Q}(S)$ to show that
$q\circ S\circ q=q$ which would prove that $q$ is regular. First we show that
for every $a\in S$, $(q\circ S\circ q)(a)\leq q(a)$. Indeed, since
$(q\circ S\circ q)(a)=\underset{a=bcd}{\vee}q(b)\wedge q(d),$
to prove the inequality, it is enough to show that $q(a)\geq q(b)$ or $q(a)\geq q(d)$ for every decomposition $a=bcd$. But one or the other is ensured by Lemma 3.1, and then the result follows. Conversely,
$(a,q(a))\in\mathfrak{Q}(q,S)$, therefore there are
$(b,x),(d,y)\in\mathfrak{Q}(q,S)$ and $(c,z)\in S\times I$ such that
$(a,q(a))=(b,x)(c,z)(d,y)=(bcd,x\wedge z\wedge y).$
From this it follows that
$\displaystyle q(a)$ $\displaystyle=x\wedge z\wedge y$ $\displaystyle\leq
q(b)\wedge q(d)$ (since $x\leq q(b)$ and $y\leq q(d)$)
$\displaystyle\leq(q\circ S\circ q)(a)$ (since $a=bcd$),
hence $q(a)\leq(q\circ S\circ q)(a)$, which concludes the proof.
$(iv)\Rightarrow(i)$: If we prove that for every quasi-ideal $Q$ of $S$ we
have that $QSQ=Q$, then Theorem 9.3 of [12] implies that $S$ is regular.
Consider $\chi_{Q}$ which from Lemma 2.6.4 of [7] is a fuzzy quasi-ideal of
$S$ for which we have that $\chi_{Q}=\chi_{Q}\circ\chi_{S}\circ\chi_{Q}$. From
Lemma 4.1 we can write the above as $\chi_{Q}=\chi_{QSQ}$ which implies that
$Q=QSQ$ and we are done. ∎
## 5 Semilattice decompositions
Recall from [11] that a semigroup $S$ is called left regular if for every
$a\in S$, there is $x\in S$ such that $a=xa^{2}$. Like regular semigroups, left regular ones have the property that every element has a nontrivial decomposition as a product of two elements. Most importantly, as shown in
[11], the left regular semigroups in which every left ideal is two-sided decompose as semilattices of left simple semigroups. We will generalize this
by introducing here fuzzy semilattices of left simple fuzzy subsemigroups.
Before we give our next definition, we recall that for every fuzzy
subsemigroup $f$ of a given semigroup $S$ and every $t\in I$, there is the so-called level subset (or $t$-cut) $f_{t}=\\{a\in S:f(a)\geq t\\}$
associated with $t$. It is obvious that $f_{t}$ is a subsemigroup of $S$.
###### Definition 5.1
Let $(Y,\cdot)$ be a semilattice and $f_{\alpha}$ for $\alpha\in Y$ be a
family of fuzzy subsemigroups of a semigroup $S$. We say that this family
forms a fuzzy semilattice of fuzzy subsemigroups of $S$ if the following three
conditions are satisfied.
* (i)
$\forall a\in S$, and all $\alpha,\beta\in Y$ with $\alpha\neq\beta$, we have
that $f_{\alpha}(a)\wedge f_{\beta}(a)=0$.
* (ii)
For all $\alpha,\beta\in Y$, $f_{\alpha}\circ f_{\beta}\subseteq
f_{\alpha\cdot\beta}$.
* (iii)
$\forall(\alpha,t)\in Y\times I^{\ast}$, $(f_{\alpha})_{t}\neq\emptyset$, and
$\forall(a,t)\in S\times I^{\ast}$, $\exists\alpha\in Y$ such that
$a\in(f_{\alpha})_{t}$.
In such a case we say that $S$ is a fuzzy semilattice of fuzzy subsemigroups
of $S$. We observe the following.
###### Proposition 5.1
If a semigroup $S$ has a semilattice decomposition, then $S$ has a fuzzy
semilattice decomposition.
###### Proof.
Assume that $S$ is a semilattice of semigroups $S_{\alpha}$ with $\alpha\in Y$
where $Y$ is a semilattice. Let $\chi_{S_{\alpha}}$ be the corresponding fuzzy
subsemigroups of $S$. We show that the family $\chi_{S_{\alpha}}$ with $\alpha\in Y$ forms a fuzzy semilattice of fuzzy subsemigroups of $S$. Let $\alpha\neq\beta$ be elements of $Y$ and $a\in S$ be arbitrary. If
$a\notin S_{\alpha}\cup S_{\beta}$, then
$\chi_{S_{\alpha}}(a)\wedge\chi_{S_{\beta}}(a)=0$. If $a\in S_{\alpha}$, then
$\chi_{S_{\alpha}}(a)\wedge\chi_{S_{\beta}}(a)=0$, and similarly, if $a\in
S_{\beta}$, then $\chi_{S_{\alpha}}(a)\wedge\chi_{S_{\beta}}(a)=0$. Secondly,
for all $\alpha,\beta\in Y$, we have that
$\displaystyle\chi_{S_{\alpha}}\circ\chi_{S_{\beta}}$
$\displaystyle=\chi_{S_{\alpha}S_{\beta}}$ (from Lemma 4.1)
$\displaystyle\subseteq\chi_{S_{\alpha\cdot\beta}}$ (since
$S_{\alpha}S_{\beta}\subseteq S_{\alpha\cdot\beta}$).
Thirdly, $\forall(\alpha,t)\in Y\times I^{\ast}$,
$(\chi_{S_{\alpha}})_{t}\neq\emptyset$ since for $t\neq 0$,
$(\chi_{S_{\alpha}})_{t}=S_{\alpha}$. Also $\forall(a,t)\in S\times I^{\ast}$,
$\exists\alpha\in Y$ such that $a\in(\chi_{S_{\alpha}})_{t}$. Indeed, we can choose $\alpha\in Y$ such that $a\in S_{\alpha}$, and then $a\in S_{\alpha}=(\chi_{S_{\alpha}})_{t}$. ∎
###### Proposition 5.2
If $S$ is a fuzzy semilattice of fuzzy subsemigroups of $S$, then $S\times
I^{\ast}$ has a semilattice decomposition.
###### Proof.
Assume that $S$ is a fuzzy semilattice of fuzzy subsemigroups $f_{\alpha}$
with $\alpha\in Y$. We define for each $\alpha\in Y$ the set
$\mathfrak{S}^{\ast}(f_{\alpha},S)=\\{(a,t)\in S\times
I^{\ast}:f_{\alpha}(a)\geq t\\}.$
This set is nonempty from the assumption (iii) on $f_{\alpha}$, and clearly
forms a subsemigroup of $S\times I^{\ast}$. For every fixed pair
$(\alpha,t)\in Y\times I^{\ast}$ we consider the nonempty set
$(f_{\alpha})_{t}=\\{a\in S:f_{\alpha}(a)\geq t\\}=\\{a\in
S:(a,t)\in\mathfrak{S}^{\ast}(f_{\alpha},S)\\},$
and let
$f_{(\alpha,t)}=(f_{\alpha})_{t}\times\\{t\\}.$
It is straightforward that $f_{(\alpha,t)}$ is a subsemigroup of $S\times
I^{\ast}$. We will prove that the family of all $f_{(\alpha,t)}$ forms a
semilattice decomposition of subsemigroups for $S\times I^{\ast}$, where the
index semilattice is $(Y,\cdot)\times(I^{\ast},\wedge)$. We observe that for
each $\alpha\in Y$,
$\mathfrak{S}^{\ast}(f_{\alpha},S)=\underset{t\in
I^{\ast}}{\cup}f_{(\alpha,t)}.$ (3)
With this observation in mind we prove that the family of all $f_{(\alpha,t)}$
with $(\alpha,t)\in Y\times I^{\ast}$, are pairwise disjoint. Indeed, for
every fixed $\alpha\in Y$, two different
$f_{(\alpha,t)},f_{(\alpha,t^{\prime})}$ do not intersect since $t\neq
t^{\prime}$. To prove that for all $\alpha\neq\beta$ and $t,t^{\prime}\in
I^{\ast}$, $f_{(\alpha,t)}\cap f_{(\beta,t^{\prime})}=\emptyset$ it is enough
to prove that
$\mathfrak{S}^{\ast}(f_{\alpha},S)\cap\mathfrak{S}^{\ast}(f_{\beta},S)=\emptyset$.
To this end, suppose
$(a,t)\in\mathfrak{S}^{\ast}(f_{\alpha},S)\cap\mathfrak{S}^{\ast}(f_{\beta},S)$;
then
$f_{\alpha}(a)\geq t\text{ and }f_{\beta}(a)\geq t,$
which is impossible since $t>0$ and one of $f_{\alpha}(a)$ or $f_{\beta}(a)$ has to be zero by Definition 5.1, (i). Hence $\mathfrak{S}^{\ast}(f_{\alpha},S)\cap\mathfrak{S}^{\ast}(f_{\beta},S)=\emptyset$.
Let us prove that for every $(\alpha,t),(\beta,t^{\prime})\in Y\times
I^{\ast}$ we have that $f_{(\alpha,t)}f_{(\beta,t^{\prime})}\subseteq
f_{(\alpha\cdot\beta,t\wedge t^{\prime})}$. Indeed, for every $(a,t)\in
f_{(\alpha,t)}$ and $(b,t^{\prime})\in f_{(\beta,t^{\prime})}$ we have that
$\displaystyle f_{\alpha\cdot\beta}(ab)$ $\displaystyle\geq(f_{\alpha}\circ
f_{\beta})(ab)$ (from definition 5.1, (ii))
$\displaystyle=\underset{a^{\prime}b^{\prime}=ab}{\vee}f_{\alpha}(a^{\prime})\wedge
f_{\beta}(b^{\prime})$ $\displaystyle\geq f_{\alpha}(a)\wedge f_{\beta}(b)$
$\displaystyle\geq t\wedge t^{\prime},$
which implies that
$(a,t)(b,t^{\prime})=(ab,t\wedge t^{\prime})\in f_{(\alpha\cdot\beta,t\wedge
t^{\prime})}.$
Finally, that $\underset{(\alpha,t)\in Y\times
I^{\ast}}{\cup}f_{(\alpha,t)}=S\times I^{\ast}$ follows directly from
definition 5.1, (iii). ∎
###### Definition 5.2
A fuzzy subsemigroup $f$ of a semigroup $S$ is called a left simple fuzzy
subsemigroup of $S$ if for every $t\in I^{\ast}$, the set $f_{t}=\\{a\in
S:f(a)\geq t\\}$ is nonempty and forms a left simple subsemigroup of $S$.
This definition is consistent with the definition of a left simple
subsemigroup $A$ of $S$. Indeed, in that case the property of the fuzzy
subsemigroup $\chi_{A}$ that $(\chi_{A})_{t}=\\{a\in S:\chi_{A}(a)\geq t\\}$
is left simple for all $t\in I^{\ast}$ means exactly that $A$ is a left simple
subsemigroup of $S$ since for $t\in I^{\ast}$, $(\chi_{A})_{t}=A$.
###### Theorem 5.1
For a semigroup $(S,\cdot)$ the following conditions are equivalent:
* (1’)
$S$ is a fuzzy semilattice of left simple fuzzy subsemigroups of $S$.
* (1)
$S$ is a semilattice of left simple semigroups.
###### Proof.
We begin by proving that $(1^{\prime})\Rightarrow(1)$. Assumption (1’) and
Proposition 5.2 imply that there is a semilattice decomposition of $S\times
I^{\ast}$ with components $f_{(\alpha,t)}=(f_{\alpha})_{t}\times\\{t\\}$ where
$(\alpha,t)\in Y\times I^{\ast}$ and $Y$ is a semilattice. Further we show
that each $f_{(\alpha,t)}$ is a left simple subsemigroup of $S\times
I^{\ast}$. Indeed, since $f_{(\alpha,t)}=(f_{\alpha})_{t}\times\\{t\\}$ and
$(f_{\alpha})_{t}=\\{a\in
S:(a,t)\in\mathfrak{S}^{\ast}(f_{\alpha},S)\\}=\\{a\in S:f_{\alpha}(a)\geq
t\\},$
we have from our assumption and definition 5.2 that $f_{(\alpha,t)}$ is left
simple. Saito’s theorem now implies that $S\times I^{\ast}$ is left regular and that every left ideal there is two-sided. The first implies that $S$ is also left regular, and the second implies that every left ideal of $S$ is two-sided. To see the second claim, we let $L$ be a left ideal of $S$, and $t\in
I^{\ast}$, then clearly $L\times(0,t]$ is a left ideal of $S\times I^{\ast}$
and hence a right ideal. Let now $s\in S$ and $a\in L$ be arbitrary. Then for
every $(a,t^{\prime})\in L\times(0,t]$ and $(s,u)\in S\times I^{\ast}$ we have
that $(as,t^{\prime}\wedge u)=(a,t^{\prime})(s,u)\in L\times(0,t]$,
consequently $as\in L$ which proves that $L$ is a right ideal of $S$.
Let us now prove the converse implication $(1)\Rightarrow(1^{\prime})$. Assume
that $S$ is a semilattice of left simple semigroups $S_{\alpha}$ with
$\alpha\in Y$ where $Y$ is a semilattice. From Proposition 5.1, we have that
the family $\chi_{S_{\alpha}}$ with $\alpha\in Y$ forms a fuzzy semilattice of
fuzzy subsemigroups of $S$. In addition, each $\chi_{S_{\alpha}}$ is a left
simple fuzzy subsemigroup. This is obvious from the comment after definition
5.2. ∎
Theorem 4.1.3 of [3] states that every completely regular semigroup is a semilattice of completely simple semigroups. Our next aim is to generalize this by replacing the semilattice of completely simple semigroups with a fuzzy semilattice of completely simple fuzzy subsemigroups. We first make the following definition.
###### Definition 5.3
A fuzzy subsemigroup $f$ of a semigroup $S$ is called a completely simple
fuzzy subsemigroup of $S$ if for each $t\in I^{\ast}$, the set $f_{t}=\\{a\in S:f(a)\geq t\\}$ is nonempty and forms a completely simple subsemigroup of $S$.
As in the case of left simple fuzzy subsemigroups, the above definition generalizes completely simple subsemigroups of $S$, for if $A$ is one such, then $\chi_{A}$ has the property that for each $t\in I^{\ast}$,
$(\chi_{A})_{t}=\\{a\in S:\chi_{A}(a)\geq t\\}=A$ is a completely simple
subsemigroup.
###### Theorem 5.2
Every completely regular semigroup $S$ is a fuzzy semilattice of completely
simple fuzzy subsemigroups.
###### Proof.
If we prove that $S$ being a fuzzy semilattice of completely simple fuzzy
subsemigroups is equivalent to $S$ being a semilattice of completely simple
subsemigroups, then our claim follows from Theorem 4.1.3 of [3]. Assume that
there is a semilattice $Y$ and a family $f_{\alpha}$ with $\alpha\in Y$, of
completely simple fuzzy subsemigroups of $S$ which form a fuzzy semilattice
decomposition for $S$. We first show that $S\times I^{\ast}$ has a semilattice
decomposition of completely simple subsemigroups. Indeed, from Proposition 5.2
we have that the family $f_{(\alpha,t)}=(f_{\alpha})_{t}\times\\{t\\}$ with
$(\alpha,t)\in Y\times I^{\ast}$ is a semilattice decomposition of $S\times
I^{\ast}$. Our assumption together with definition 5.3 imply that each
$f_{(\alpha,t)}$ is a completely simple subsemigroup of $S\times I^{\ast}$.
Secondly, Proposition 4.1.2 of [3] implies that each $f_{(\alpha,t)}$ is
completely regular. Thirdly, let $a\in S$ be arbitrary. For every $t\in I^{\ast}$, from our assumption on the family $f_{\alpha}$, there is $\alpha\in Y$ such that $(a,t)\in f_{(\alpha,t)}$. Since $f_{(\alpha,t)}$ is completely
regular, it follows that $(f_{\alpha})_{t}$ is completely regular as well,
therefore there is a subgroup of $(f_{\alpha})_{t}$ containing $a$. This
proves that $S$ is completely regular. Theorem 4.1.3 of [3] implies that $S$
is a semilattice of completely simple subsemigroups. For the converse, assume
that $S$ is a semilattice $Y$ of completely simple subsemigroups $S_{\alpha}$
with $\alpha\in Y$. Consider the family of fuzzy subsemigroups
$\chi_{S_{\alpha}}$ with $\alpha\in Y$. From Proposition 5.1 we have that this
family forms a fuzzy semilattice of fuzzy subsemigroups of $S$. The comment
after definition 5.3 implies that each $\chi_{S_{\alpha}}$ is fuzzy completely
simple. ∎
## References
* [1] J. Ahsan, M.F. Khan, and M. Shabir, Characterization of monoids by the properties of their fuzzy subsystems, Fuzzy Sets and Systems, 56(1993), 199-208.
* [2] J. Ahsan, R.M. Latif, and M. Shabir, Fuzzy quasi-ideals in semigroups, J. Fuzzy Math., 9(2001), 259-270.
* [3] J.M. Howie, Fundamentals of Semigroup Theory, Clarendon Press, Oxford, 1995.
* [4] N. Kuroki, On fuzzy semigroups, Inform. Sci., 53(1991), 203-236.
* [5] N. Kuroki, Regular and intra-regular semigroups, Tokyo Gakugei J. Math. Edu., 3(1991), 51-54.
* [6] N. Kuroki, On fuzzy quasi-ideals in semigroups, Advances in Fuzzy Theory and Technology, Vol. 1 (Ed. Paul P. Wang), Duke University, North Carolina, (1993), 63-76.
* [7] J.N. Mordeson, D.S. Malik, and N. Kuroki, Fuzzy Semigroups, Springer, 2003.
* [8] J.N. Mordeson, K.R. Bhutani, and A. Rosenfeld, Fuzzy Group Theory, Springer, 2005.
* [9] E. Pasku, On a connection between fuzzy subgroups and F-inverse covers of inverse monoids, Novi Sad J. Math., First Online…
* [10] A. Rosenfeld, Fuzzy groups, J. Math. Anal. Appl., 35(1971), 512-517.
* [11] T. Saito, On semigroups which are semilattices of left simple semigroups, Math. Japon., 18(1973), 95-97.
* [12] O. Steinfeld, Quasi-Ideals in Rings and Semigroups, Akademiai Kiado, Budapest, 1978.
* [13] L.A. Zadeh, Fuzzy sets, Information and Control, 8(1965), 338-353.
# Human-Robot Interaction via a Joint-Initiative Supervised Autonomy (JISA) Framework
Abbas Sidaoui, Naseem Daher, and Daniel Asmar
Abbas Sidaoui, Naseem Daher, and Daniel Asmar are with the Vision and Robotics Lab, American University of Beirut, Beirut, P.O. Box 11-0236, Lebanon. E-mails: {ams108, nd38, <EMAIL_ADDRESS>
###### Abstract
In this paper, we propose and validate a Joint-Initiative Supervised Autonomy
(JISA) framework for Human-Robot Interaction (HRI), in which a robot maintains
a measure of its self-confidence (SC) while performing a task, and only
prompts the human supervisor for help when its SC drops. At the same time,
during task execution, a human supervisor can intervene in the task being
performed, based on his/her Situation Awareness (SA). To evaluate the
applicability and utility of JISA, it is implemented on two different HRI
tasks: grid-based collaborative simultaneous localization and mapping (SLAM)
and automated jigsaw puzzle reconstruction. Augmented Reality (AR) (for SLAM)
and two-dimensional graphical user interfaces (GUI) (for puzzle
reconstruction) are custom-designed to enhance human SA and allow intuitive
interaction between the human and the agent. The superiority of the JISA
framework is demonstrated in experiments. In SLAM, the superior maps produced
by JISA preclude the need for post-processing of any SLAM stock maps;
furthermore, JISA reduces the required mapping time by approximately 50
percent versus traditional approaches. In automated puzzle reconstruction, the JISA framework outperforms both fully autonomous solutions and those resulting from on-demand human intervention prompted by the agent.
###### Index Terms:
Human-Robot Interaction, Joint Initiative, Supervised Autonomy, Mixed
Initiative, SLAM, Puzzle Reconstruction, Levels of Robot Autonomy.
## I Introduction
When automation was first introduced, its use was limited to well-structured
and controlled environments in which each automated agent performed a very
specific task [1]. Nowadays, with the rapid advancements in automation and the
decreasing cost of hardware, robots are being used to automate more tasks in
unstructured and dynamic environments. However, as the tasks become more
generic and the environments become more unstructured, full autonomy is
challenged by the limited perception, cognition, and execution capabilities of
robots [2]. Furthermore, when robots operate in full autonomy, the noise in
the real world increases the possibility of making mistakes [3], which could
be due to certain decisions or situations that the robot is uncertain about,
or due to limitations in the software/hardware abilities. In some cases, the
robot is able to flag that there might be a problem (e.g., high uncertainty in
object detection or inability to reach a manipulation goal); while in other
cases, the robot may not even be aware that errors exist (e.g., LiDAR not
detecting a transparent obstacle).
In recent years, human-robot interaction has been gaining more attention from
robotics researchers [4]. First, robots are being deployed to automate more
tasks in close proximity with humans [5, 6]; second, the challenges facing
autonomous robots may not perplex humans, owing to their superior cognition,
reasoning, and problem-solving skills. This has led researchers in the HRI community to include humans in the decision-making loop to assist and provide
help, as long as autonomous systems are not fully capable of handling all
types of contingencies [7, 8, 9]. Including a human in the loop can mitigate
certain challenges and limitations facing autonomy, by complementing the
strength, endurance, productivity, and precision that a robot offers on one
hand, with the cognition, flexibility, and problem-solving abilities of humans
on the other. In fact, studies have shown that combining human and robot skills can increase the capabilities, reliability, and robustness of robots
[10]. Thus, creating successful approaches to human-robot interaction and
human-autonomy teaming is considered crucial for developing effective
autonomous systems [11, 12], and deploying robots outside of controlled and
structured environments [13, 14].
One approach to human-autonomy teaming is having a human who helps the
autonomous agents when necessary. In fact, seeking and providing help to
overcome challenges in performing tasks has been studied by human-psychology
researchers for many decades. When uncertain about a situation, people tend to
gather missing information, assess different alternatives, and avoid mistakes
by seeking help from more knowledgeable persons [15]; for example, sometimes
junior developers consult senior colleagues to help in detecting hidden bugs
in a malfunctioning code, or students ask teachers to check an assignment
answer that they are not certain about.
Help is also considered indispensable to solve problems and achieve goals that
are beyond a person’s capabilities [15]. When aware of their own limitations,
people seek help to overcome their disadvantaged situations; for instance, a
shorter employee may ask a taller colleague to hand them an object that is
placed at a high shelf. In some cases, people might not be aware of the
situation nor their limitations; here, another person might have to intervene
and provide help to avoid problems or unwanted outcomes. For example, a
supervisor at work may step in to correct a mistake that an intern is not
aware of, or when a passer-by helps a blind person avoid an obstacle on the
sidewalk. Moreover, exchanging help has been shown to enhance the performance
and effectiveness of human teams [16, 17, 18], especially in teams with
members having different backgrounds, experiences, and skill sets.
Figure 1: The proposed JISA framework is motivated by needs from three areas:
autonomous robots, human psychology, and human-robot interaction. The grey
block in the middle shows an illustration of the proposed JISA system. The
autonomous agent(s) perform the tasks autonomously and send the human
supervisor their internal states, tasks results, and SC-based assistance
requests through a user interface. The supervisor can assist the robot or
intervene either directly or through the user interface.
With the above in mind, Fig. 1 summarizes the motivation behind our proposed
JISA framework from various perspectives. First, the human psychology
literature indicates that although humans think of themselves as being
autonomous [19], they always seek and receive help to achieve their goals and
overcome uncertain situations and limited capabilities. Second, robotics
literature states that autonomous robots are still facing challenges due to
limitations in hardware, software, and cognitive abilities. These challenges
lead to mistakes and problems; some of which can be detected and flagged by
the robot, while others may go undetected by the robot. Third, research in HRI
and human-autonomy interaction illustrates that having a human to interact
with and assist autonomous agents is a must.
From the three different lines of research, we conclude that there is a need
for a new HRI framework that takes advantage of the complementary skills of
humans and robots on one hand, and adopt the general idea of help-seeking and
helping behaviors from human-human interaction on the other hand. While state-
of-the-art frameworks focus mainly on how to allocate and exchange tasks
between humans and robots, there is no generalized framework, up to our
knowledge, that considers including a human in the loop to support autonomous
systems where a human cannot completely takeover the tasks execution. For this
reason, we are proposing JISA as a framework for HRI that allows a human to
supervise autonomous tasks and provide help to avoid autonomy mistakes,
without a complete takeover of these tasks. Human assistance can be requested
by the robot in tasks where it can reason about its SC. In tasks where the
robot is not aware of its limitations, or does not have the means to measure
its SC, human assistance is based on the human’s SA. Moreover, the type,
timing, and extent of interaction between the human supervisor and the robot
are governed within JISA through a JISA-based user interface.
The main contributions of this paper are:
1. 1.
Proposing a novel Joint-Initiative Supervised Autonomy (JISA) framework for
human-robot interaction, which leverages the robot’s SC and the human’s SA in
order to improve the performance of automated tasks and mitigate the
limitations of fully autonomous systems without having a human completely take over any task execution.
2. 2.
Demonstrating the utility and applicability of JISA by:
a) Re-framing the works in [20, 21, 22] in the context of the proposed JISA
framework, with the aim of showing how existing works can be generalized under
the JISA framework.
b) Applying the JISA framework in a new application that involves automated
jigsaw puzzle solving.
These two applications serve as proof-of-concept (POC) to the proposed JISA
framework.
3. 3.
Experimentally validating JISA and providing preliminary empirical evidence that it outperforms full autonomy in both POC applications. We note that
the experiments in section V were conducted uniquely for this manuscript,
while experiments in section IV were introduced in [21, 22]. No additional
experiments were conducted in section IV, since our goal is to show how a
previous approach could be generalized and re-formulated in a more systematic
manner under the JISA framework.
## II Related Work
Researchers in the HRI community have been advocating for bringing humans back into the automation loop through different approaches and methods. This section presents some of the most common approaches.
One of the commonly used methods for HRI is Mixed Initiative Human-Robot
Interaction (MI-HRI). The term ‘Mixed-initiative’ (MI) was first introduced in
1970 in the context of computer-assisted instruction systems [23]. This
definition was followed by several others in the domain of Human-Computer
Interaction (HCI) [24, 25, 26], where MI was proposed as a method to enable
both agents and humans to contribute in planning and problem-solving according
to their knowledge and skills. Motivated by the previous definitions, Jiang
and Arkin [27] presented the most common definition for MI-HRI, which states
that ‘an initiative (task) is mixed only when each member of a human-robot
team is authorized to intervene and seize control of it’ [27]. According to
this definition, the term ‘initiative’ in MI-HRI systems refers to a task of
the mission, ranging from low-level control to high-level decisions, that
could be performed and seized by either the robot or the human [27]. This
means that both agents (robot and human) may have tasks to perform within a
mission, and both can completely take over tasks from each other. Unlike MI, in
JISA the term ‘initiative’ is used to refer to ‘initiating the human
assistance,’ and it is considered joint since this assistance is initiated
either by the autonomous agent’s request or by the human supervisor. Another
difference between JISA and MI-HRI is that all tasks within a JISA system are
performed by the robot; albeit, human assistance is requested/provided to
overcome certain challenges and limitations.
Another method that enables the contribution of both humans and agents in an
automated task is ‘adjustable autonomy’, which allows the level of autonomy within an autonomous system to be changed by the system itself, the operator, or an external agent [28]. This adjustment in autonomy enables a user to prevent
issues by either partial assistance or complete takeover, depending on the
need. However, when the autonomy is adjusted, tasks are re-allocated between
the human and the agents, and certain conditions have to be met in order to
re-adjust the system back to the full-autonomy state. In our proposed
framework, the tasks are not re-allocated between the human and the agent; rather, the human acts as a supervisor who assists the autonomous agent and intervenes when needed. Since no additional tasks are re-allocated to the human
in JISA, he/she can supervise multiple agents at once with, theoretically, less mental and physical strain compared to approaches that re-assign tasks to the human.
It is important to mention that we are not proposing JISA to replace these
previous approaches, but rather to expand the spectrum of HRI to autonomous systems where it is not feasible for a human to ‘completely take over’ the mission tasks. Examples of such tasks are real-time localization of a mobile
robot, motion planning of multiple degrees-of-freedom robotic manipulators,
path planning of racing autonomous ground/aerial vehicles, among others. In
fact, every HRI approach has use-cases where it performs better in. Adjustable
autonomy and MI-HRI focus mainly on how to allocate functions between the
robot and the human to enhance the performance of a collaborative mission.
However, JISA is proposed to enhance the performance, decrease uncertainty,
and extend capabilities of robots that are initially fully autonomous by
including a human in the loop to assist the robot without taking over any task
execution.
Setting the Level of Robot Autonomy (LORA) was also considered by researchers
as a method for HRI. Beer et al. [29] proposed a taxonomy for LORA that
considers the perspective of HRI and the roles of both the human and the
robot. This taxonomy consists of 10 levels of autonomy that are defined based
on the allocation of functions between robot and human in each of the sense,
plan, and act primitives. Among the different levels of autonomy included in
the LORA taxonomy, there are two levels of relevance to our work: (1) shared
control with robot initiative where human input is requested by the robot, and
(2) shared control with human initiative where human input is based on the human's own judgment. Although these two levels allow for human assistance, it is
to the sense and plan primitives. In addition, the continuum does not contain
a level which allows interaction based on both human judgment and requests
from the robot. To mitigate the limitations of these two LORAs, our JISA
framework proposes a new level labelled as ‘Joint-Initiative Supervised
Autonomy (JISA) Level,’ stated as follows:
The robot performs all three primitives (sense, plan, act) autonomously, but
can ask for human assistance based on a pre-defined internal state. In
addition, the human can intervene to influence the output of any primitive
when s/he finds it necessary.
Moreover, and unlike LORA, JISA provides a clear framework for implementation.
In addition to the general approaches discussed above, several application-
oriented methods were presented in which human help is exploited to overcome
autonomy challenges and limitations. Certain researchers focused on the idea
of having the human decide when to assist the robot or change its autonomy
level. For instance, systems that allow the user to switch autonomy level in a
navigation task were proposed in [30, 31]. Although these methods allow the
human to switch between autonomy levels, the definition of the levels and the
tasks allocated to the human in each level are different and mission-specific.
In [32], a human operator helps the robot in enhancing its semantic knowledge
by tagging objects and rooms via voice commands and a laser pointer. In such
human-initiated assistance approaches, helping the robot and/or changing the autonomy level is based solely on human judgment; thus, the human has to closely monitor the performance in order to avoid failures or degrading performance, which results in additional mental/physical workload.
Other researchers investigated approaches where interaction and human help is
based on the robot’s request only. In [10], a framework that allows the robot
to request human help based on the value of information (VOI) theory was
proposed. In this framework, the human is requested to help the robot by
providing perceptual input to overcome autonomy challenges. In [33], time and
cost measures, instead of VOI, were used to decide on the need for interaction
with the human. In [34] and [35], human assistance is requested whenever the
robot faces difficulties or unplanned situations while performing navigation
tasks. Instead of requesting human help in decision-making, [36] proposed a
mechanism that allows the robot to switch between different autonomy levels
based on the ‘robot’s self-confidence’ and the modeled ‘human trust in the
robot.’ In this approach, the robot decreases its autonomy level, and thus
increases its dependency on the human operator, whenever its confidence in
accurately performing the task drops. In these robot-initiated assistance
approaches, the human interaction is requested either directly through an
interface or by the agent changing its autonomy level; thus, any failure to
detect the need for human assistance would lead to performance deterioration
and potentially catastrophic consequences.
Other methods considered humans assisting robots based on both human judgment and robot requests. In [37], an interactive human-robot semantic sensing
framework was proposed. Through this framework, the robot can pose questions
to the operator, who acts as a human sensor, in order to increase its semantic
knowledge or assure its observations in a target search mission. Moreover, the
human operator can intervene at any moment to provide useful information to
the robot. Here, human assistance is limited to the sensing primitive. In the
context of adjustable autonomy, [38] proposed a remotely operated robot that
changes its navigation autonomy level when a degradation in performance is
detected. In addition, operators can choose to change autonomy levels based on
their own judgment. In [39], a human can influence the robot autonomy via
direct control or task assignment, albeit the proposed controller allows the
robot to take actions in case of human error. Some researchers refer to MI as
the ability of the robot to assist in or takeover the tasks that are initially
performed by the human [40], or combine human inputs with automation
recommendations to enhance performance [41]. Finally, in [42] the concept of
MI is used to decide what task to be performed by which agent in a human-robot
team. In contrast to such systems where the human is a teammate who performs
tasks, the JISA framework proposes the human as a supervisor who intervenes to
assist the autonomous agent as needed: either on-demand or based on SA.
## III Proposed Methodology
In our proposed JISA framework, we consider HRI systems consisting of a human
supervisor and autonomous agent(s). We define JISA as: “A Joint-Initiative
Supervised Autonomy framework that allows a robot to sense, plan, and act
autonomously, but can ask for human assistance based on its internal self-
confidence. In addition, the human supervisor can intervene and influence the
robot sensing, planning, and acting at any given moment based on his/her SA.”
According to this definition, the robot operates autonomously but the human
supervisor is allowed to help in reducing the robot’s uncertainty and
increasing its capabilities through SC-based and SA-based assistance. The
application of JISA in HRI systems relies on the following assumptions:
1. 1.
The human is aware of the application being performed and its automation
limitations.
2. 2.
The autonomous agent has a defined self-confidence-based mechanism that it can use to prompt the user for help. Through this mechanism, the robot requests
the human supervisor to approve, decline, or edit the decisions it is not
certain about. Moreover, this mechanism is used to inform the user if the
robot is unable to perform a task.
3. 3.
When the autonomous agent seeks help, the human is both ready to assist and
capable of providing the needed assistance.
4. 4.
The JISA system is equipped with an intuitive communication channel (JISA-
based interface) that allows the agent to ask for help and the human to assist
and intervene in the tasks.
5. 5.
The human has adequate expertise and SA, supported by the JISA-based
interface, to intervene in the supervised task.
Fig. 2 presents the guideline for implementing JISA, including the following
steps:
Figure 2: Guideline to apply the proposed JISA framework in autonomous
applications with four main modules.
1\. Define tasks to be supervised by the human: the first step necessary to implement JISA is to define the tasks in which human supervision is needed, based on the challenges and limitations of autonomy in the given
system/application. These tasks depend on the robot’s hardware/software,
mission type, and the environment in which the robot is operating.
To understand the challenges and limitations of autonomy in the given
system/application, the JISA system designer can run the application in full
autonomy mode, and compare the obtained performance and results with those
that are expected. Another way is to find common challenges and limitations
for similar systems/applications from the literature. After that, the JISA
system designer has to define the tasks affected by these challenges and
limitations, in which a human could help. For example, some challenges that face an autonomous mobile robot in a SLAM application stem from obstacles that the robot cannot detect; these challenges affect the mapping task. Other challenges, such as objects that are out of the manipulator’s reach, affect the picking task in a pick-and-place application.
The output of this step is a list of all the tasks that should be supervised
by the human. It is important to mention here that not all tasks in the
autonomous application need to be supervised, as some tasks result in better performance when performed fully autonomously.
2\. Define the robot’s self-confidence attributes: this step is concerned with
the SC mechanism used by the robot to reason about when to ask the human for
help and what happens if the human does not respond. Depending on the type of
task, available internal-states of the robot, and full-autonomy experiments,
the robot SC state could be toggled between ”confident” and ”not-confident”
either through (1) detecting certain events (SC-events), and/or through (2)
monitoring certain metric/s (SC-metrics) within the supervised task and
comparing them to a confidence threshold that is defined experimentally.
Examples of events that could be used to set robot SC to “not-confident” are
the loss of sensor data, inability to reach the goal, failure in generating
motion plans; while examples of monitored metrics are performance, battery
level, dispersion of particles in a particle filter, among others.
Depending on the task and how critical it is to get human assistance once requested, a ‘no-response’ strategy should be defined. Examples of such strategies are: wait, request again, wait for a defined time before considering
the decision approved/declined, put request on hold while proceeding with
another task, among others.
For each task to be supervised, the JISA system designer has to determine whether or not it is possible to define events/metrics to be used to set the SC state. Thus, the outputs of this step include (1) SC-based tasks: tasks for which human assistance is requested through the SC mechanism, (2) SC metrics and/or SC events, (3) a ‘no-response’ strategy for each SC-based task, and (4) supervised tasks for which human assistance cannot be requested through an SC mechanism.
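To make the SC mechanism concrete, the minimal Python sketch below illustrates one possible implementation. The class and method names, the threshold semantics (lower metric values meaning lower confidence), and the `ui` handle with `send_request`/`poll_response` methods are hypothetical illustrations introduced here, not part of the framework specification.

```python
# Minimal sketch of an SC mechanism; all names and thresholds are
# hypothetical illustrations, not part of the JISA specification.
import time
from enum import Enum


class SCState(Enum):
    CONFIDENT = "confident"
    NOT_CONFIDENT = "not-confident"


class SCMonitor:
    """Toggles the robot's self-confidence from SC-metrics and SC-events."""

    def __init__(self, metric_threshold, request_timeout_s=10.0):
        self.metric_threshold = metric_threshold    # defined experimentally
        self.request_timeout_s = request_timeout_s  # no-response wait time
        self.state = SCState.CONFIDENT

    def update(self, sc_metric, sc_event_flagged):
        # An SC-event (e.g., loss of sensor data) or an SC-metric below
        # its confidence threshold (e.g., particle-filter dispersion,
        # assuming lower values mean less confidence) drops the state.
        if sc_event_flagged or sc_metric < self.metric_threshold:
            self.state = SCState.NOT_CONFIDENT
        else:
            self.state = SCState.CONFIDENT
        return self.state

    def request_assistance(self, ui, decision):
        """Ask the supervisor to approve, decline, or edit a decision."""
        ui.send_request(decision)
        deadline = time.monotonic() + self.request_timeout_s
        while time.monotonic() < deadline:
            response = ui.poll_response()
            if response is not None:
                return response
            time.sleep(0.1)
        # One possible 'no-response' strategy: consider the decision
        # approved and proceed (alternatives: wait, re-request, or put
        # the request on hold while proceeding with another task).
        return "approved"
```

Here the ‘no-response’ strategy is hard-coded as ‘consider approved’; in an actual system it would be chosen per task, as discussed above.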
3\. Define the human situation awareness attributes: this step is crucial for
supervised tasks in which robot SC attributes could not be defined. In such
tasks, the assistance has to depend solely on the human SA. Given that the human is, on one hand, aware of the system’s limitations and, on the other, endowed with superior situational and contextual awareness, s/he can determine when it is best to intervene in a task to attain better results where SC
attributes could not be defined. However, the JISA system designer should define the needed tools to enhance human SA and help the supervisor better assess the situation. Examples of SA tools include showing certain results of the
performed task, and communicating the status of the task planning/execution,
to name a few. The outputs of this step are (1) SA-based tasks: tasks in which assistance depends on human SA, and (2) tools that would enhance human SA.
4\. Choose the corresponding JISA-based User interface: after defining the
tasks to be supervised and the SC/SA attributes, a major challenge lies in
choosing the proper JISA-based UI that facilitates interaction between the
human and the agent. The interface, along with the information to be
communicated highly depend on the tasked mission, the hardware/software that
are used, and the tasks to be supervised. Thus, the JISA-based UI must be
well-designed in order to (1) help both agents communicate assistance
requests/responses and human interventions efficiently, (2) maximize the human
awareness through applying the SA tools, and (3) govern the type of
intervention/assistance so that it does not negatively influence the entire
process. JISA-based UIs include, but are not limited to graphical user
interfaces, AR interfaces, projected lights, vocals, gestures, etc. The
desired output from this step is choosing/designing a JISA-based UI.
In addition to the discussed guideline, Fig. 3 presents a block diagram to
help apply JISA in any automated system. This diagram consists of two main
blocks: the Autonomous System block and the JISA Coordinator (JISAC) block.
The autonomous system block contains modules that correspond to the tasks
performed by the autonomous agent; as mentioned in the guideline, some of
these tasks are supervised, while others are fully autonomous.
Figure 3: Generic diagram of the proposed JISA framework with its main blocks
and modules.
The JISAC block contains three main modules: SC-Interaction handler, SA-
Interaction handler, and the JISA-based UI. Interaction based on robot SC is
handled through the SC-Interaction handler. This module monitors the SC-
events/metrics defined through the ‘robot SC attributes’ and sends assistance
requests to the human supervisor through the JISA-based UI. When the human
responds to the request, SC-Interaction handler performs the needed operations
and sends back the results to the corresponding supervised task. If no
response from the human is received, the SC-Interaction handler applies the
no-response strategy along with the needed operations.
Since an operator monitors the system through the SA tools provided in the
JISA-based UI, s/he can intervene at any time based on SA. The SA-Interaction
handler receives the human input/intervention, applies the needed operations,
and sends the results to the corresponding supervised task. It is important to
mention that both the SC- and SA-Interaction handlers can be connected to
several supervised tasks, and thus can handle different types of interactions
and operations. Moreover, the JISAC block could contain other modules that
handle operations not directly related to the SC- and SA-Interaction handlers.
To better describe how JISA should be implemented, in the following two
sections we apply the suggested guideline attributes and block diagram to two
autonomous applications: collaborative SLAM and automated puzzle solving.
## IV JISA in Collaborative SLAM
This section constitutes the first of two case studies in which JISA is
applied. Although the core of this case study is presented in [20, 21, 22],
the works are re-framed (as POC) in the context of JISA. This is done to
demonstrate the applicability of JISA in different robotic applications, and
to serve as an example of how to apply the framework in a SLAM application.
Thus, this section is considered an evolution and generalization of the
approach presented in [20, 21, 22], where we intend to show how the JISA
guideline is followed and the framework is applied, rather than presenting
‘JISA in Collaborative SLAM’ as a standalone contribution and generating new
results.
The idea of collaborative SLAM is not new. An online pose-graph 3D SLAM for
multiple mobile robots was proposed in [43]. This method utilizes a master-
agent approach to perform pose-graph optimization based on odometry and scan
matching factors received from the robots. Since a centralized approach is
used for map merging, SLAM estimates in this approach may be affected by major
errors in the case of connectivity loss. Some researchers proposed localizing
an agent in the map built by another agent [44, 45]. The problem with such
approaches is that one agent fails to localize when navigating in an area that
has not yet been mapped by the second agent.
Developing HRI systems where the efficiency and perception of humans aid the
robot in SLAM has also gained some interest. For instance, [46] showed that
augmenting the map produced by a robot with human-labelled semantic
information increases the total map accuracy. However, this method required
post-processing of these maps. The authors of [47] proposed correcting the
scan alignments of 3D scanners through virtual forces applied by a human via a
GUI. Since this method lacks localization capability, it cannot produce large
accurate maps.
Sprute et al. [48] proposed a system where a robot performs SLAM to map an
area; the user can then preview this map augmented on the environment through
a tablet and manually define virtual navigation-forbidden areas. Although
these systems proved superior to fully autonomous methods by being more
efficient in increasing map accuracy, they totally depend on human awareness
and do not consider asking the human for help in case of challenging
situations.
In our implementation of JISA in collaborative SLAM, the effects of delays and
short communication losses are minimized since each agent runs SLAM on its
own. Moreover, the proposed system utilizes one agent to correct mapping
errors of another agent under the supervision of a human operator, who can
also intervene based on the agents’ requests or his judgment. The AR-HMD is
also used to (1) visualize the map created by a robot performing SLAM aligned
on the physical environment, (2) evaluate the map correctness, and (3) edit
the map in real-time through intuitive gestures.
The proposed application of JISA in collaborative SLAM allows three
co-located heterogeneous agents (a robot, an AR head-mounted device (AR-HMD),
and a human) to contribute to the SLAM process. Applying JISA here allows for
real-time
pose and map correction. Whenever an agent’s self-confidence drops, it asks
the human supervisor to approve/correct its pose estimation. In addition, the
human can view the map being built superposed on the real environment through
an AR interface that was specifically developed for this purpose, and apply
edits to this map in real-time based on his/her SA. Moreover, when the robot
is unable to reach a certain navigation goal, it prompts the human to assist
in mapping the area corresponding to this goal; the human here is free to
choose between mapping the area through editing the map him/herself or through
activating the AR-HMD collaborative mapping feature. In the latter case, the
AR-HMD contributes to the map building process by adding obstacles that are
not detected by the robot and/or map areas that the robot cannot traverse.
Fig. 4 presents a sample demonstration of the proposed system, where in Fig.
4a we show an operator wearing a HoloLens with the nearby robot performing
SLAM. Fig. 4b shows how the occupancy grid, produced by the robot, is rendered
in the user’s view through a HoloLens, where the table, which is higher than
the robot’s LiDAR plane, is not represented in the map. Fig. 4c demonstrates
the addition of occupied cells representing the table (in white), and Fig. 4d
shows how the boundaries of the table are added by the user. Fig. 4e shows how
the HoloLens detects the table and visualizes its respective 3D mesh. Finally,
Fig. 4f shows the corresponding projection of the table merged in the
occupancy map.
Figure 4: Sample demonstration of the proposed JISA in a collaborative SLAM
system. (a) Robot alongside a human operator wearing an AR-HMD, (b) the
augmented map with white lines representing the occupied cells, (c) user is
adding occupied cells, (d) the desk boundary added by the user, (e) the 3D
mesh created by the AR-HMD, and (f) projection of the table in the robot map
[22].
### IV-A System Overview
This section describes how the proposed JISA framework is applied to
collaborative grid-based SLAM, following the methodology presented in Section
(III). Table I summarizes the inputs from the SLAM problem, which are used to
satisfy the JISA framework.
TABLE I: JISA attributes in collaborative SLAM applications.

| SC-based tasks | SA-based tasks | SC Attributes | SA Attributes | JISA-based UI |
|---|---|---|---|---|
| Robot pose estimation; AR-HMD pose estimation; ability to reach navigation goal | Map building by robot; map building by AR-HMD; global map merging | Confidence in pose; confidence in reaching navigation goals; SC-metric: $N_{eff}$; SC-event: HoloLens tracking feature | Map quality; SA-tool: augment the map on the physical world | AR interface; 2D GUI |

The first two columns list the tasks to be supervised; the last three list the corresponding attributes.
1. Tasks to be supervised by the human: the main challenges facing SLAM are
inaccurate map representation and inaccurate localization [21]. In SLAM,
mapping and localization are performed simultaneously; thus, increased
accuracy in either of the two tasks enhances the accuracy of the other, and
vice versa. In this system, the human should be able to supervise and
intervene in the following tasks: (1) map building by the robot, (2) map
building by the AR-HMD, (3) robot pose estimation, (4) AR-HMD pose estimation,
and (5) global map merging.
2. Robot SC attributes: as discussed in [21], the robot’s SC in the pose
estimation is calculated through the effective sample size ($N_{eff}$) metric,
which reflects the dispersion of the particles’ importance weights. This
metric serves as an estimate of how well the particles represent the true
pose of the robot; thus, $N_{eff}$ is employed as the SC-metric to reason
about when the robot should prompt the human for assistance in localizing
itself. In this system, the robot stops navigating and prompts the human to
approve or correct the estimated pose whenever $N_{eff}$ drops below a
threshold that was obtained experimentally. As for the HoloLens, the human
supervisor is requested to help in the pose estimation based on a built-in
feature that flags the HoloLens localization error event. Since any error in
localization affects the whole map accuracy, the system does not resume its
operation unless a human response is received. Moreover, when the robot is
given a navigation goal to which it is not able to find a valid path, it
prompts the human supervisor to help in mapping the area corresponding to this
goal. For this task, the robot proceeds to its next goal even if the human
does not respond to the request immediately.
3. Human SA attributes: since SC attributes could not be defined for the map
building task (for both the robot and the HoloLens), the applied JISA system
benefits from the human perception and SA in assessing the quality of the maps
built by both agents. To help the human supervisor compare the maps built by
the agents with the real physical environment and apply edits, the proposed SA
tool is augmenting the map on the physical world. The human is allowed to edit
the robot’s map by toggling the status of cells between ‘occupied’ and ‘free’.
Moreover, the human can choose to activate the collaboration feature to assist
the robot in mapping whenever s/he finds it necessary.
4. JISA-based user interface: to maximize the human’s SA and provide an
intuitive way of interaction, an augmented reality interface (ARI) running on
the HoloLens is developed. Through this interface, the human can see the
augmented map superposed over the real environment and can interact with it
through hand gestures. Moreover, and upon request, the human can influence the
robot’s pose estimation through a GUI.
We are applying our proposed JISA framework on top of OpenSLAM Gmapping [49]:
a SLAM technique that applies a Rao-Blackwellized particle filter (RBPF) to
build grid maps from laser range measurements and odometry data [50]. An RBPF
consists of $N$ particles in a set $R_{t}$, where each particle
${R_{t}^{(i)}}=(\zeta_{t}^{(i)},\eta_{t}^{(i)},w_{t}^{(i)})$ is represented by
a proposed map $\eta_{t}^{(i)}$, a pose $\zeta_{t}^{(i)}$ inside this map, and
an importance weight $w_{t}^{(i)}$. OpenSLAM gmapping can be summarized by
five main steps [50]: Pose Propagation, Scan Matching, Importance Weighting,
Adaptive Resampling, and Map Update.
Fig. 5 presents the flowchart of the JISA framework in collaborative SLAM.
Descriptions of the modules in the OpenSLAM gmapping block can be found in
[21], while the modules in the JISAC block are summarized below:
Figure 5: The proposed JISA-based SLAM system flowchart: blocks in black are
adopted from OpenSLAM and blocks in red represent the proposed JISAC.
#### IV-A1 SC-Interaction handler
This module represents the mechanism applied to check the self-confidence of
the robot and HoloLens and prompts the human supervisor for help through the
JISA-based UI. When the HoloLens SC-event is triggered, this module prompts
the human supervisor to re-localize the HoloLens. As for the robot, the SC is
obtained based on the $N_{eff}$ metric given in equation (1). If
$N_{eff}>r_{th}$, where $r_{th}$ is a confidence threshold obtained through
empirical testing, no human assistance is requested; thus, the next steps are
skipped and $R_{t}$ is passed as-is to the OpenSLAM gmapping block.
$N_{eff}=\frac{1}{\sum_{i=1}^{N}(\tilde{w}^{(i)})^{2}}.$ (1)
If $N_{eff}<r_{th}$, the human is requested to approve or correct the pose
estimation through the GUI. When the new human-corrected pose is acquired, an
augmented pose $\acute{a}_{t}^{i}\sim\mathcal{N}(\mu,\Sigma)$ is sampled for
each particle. This Gaussian approximation is used because the human
correction itself may include errors. After that, we perform scan matching
followed by importance weighting, and finally we update all the particles’
poses in $R_{t}$.
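As an illustration of this flow, the sketch below (a simplification assuming normalized importance weights and a hypothetical `request_pose_from_human` helper) shows how the $N_{eff}$ test and the Gaussian pose augmentation could be wired together.

```python
import numpy as np


def sc_pose_check(weights, r_th, request_pose_from_human, sigma):
    """N_eff-based self-confidence gate for the robot pose, cf. equation (1).

    `weights` are the normalized particle importance weights; `sigma` models
    errors in the human correction itself.
    """
    n_eff = 1.0 / np.sum(weights ** 2)
    if n_eff > r_th:
        return None  # confident: pass R_t unchanged to OpenSLAM gmapping

    # Not confident: stop navigating and ask the human through the GUI.
    mu = request_pose_from_human()  # human-approved/corrected (x, y, yaw)

    # Draw one augmented pose per particle around the human correction;
    # scan matching and importance weighting follow.
    return np.random.multivariate_normal(mu, sigma, size=len(weights))
```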
#### IV-A2 SA-Interaction handler
The SA-Interaction handler is responsible for fusing the human edits and the
AR-HMD map with the robot’s map to produce a global map that is used by all
agents. This module contains the following two functions (a minimal sketch of
both is given after the list):
1.
AR Map Merger: this function produces a ‘merged map’ to be viewed by the human
operator. The merged map is constructed by comparing the human-edited cells
and the updated cells from the AR-HMD Map Builder module to the corresponding
cells of the robot’s map. The function always prioritizes human edits over the
robot and the AR-HMD maps, and it prioritizes Occupied status over Free
whenever the cell status is not consistent between the robot map and the AR-
HMD map.
2.
Global Map Merger: this function fuses the merged map with the most up-to-date
robot map to produce a global map that is used by all agents. This ensures
that whenever the robot has gained confidence about the state of a cell, its
knowledge is propagated to the other agents. To avoid collision between the
robot and undetected obstacles, cells that are set to be Occupied by the AR
Map Merger, but found to be free by the robot, are set as Occupied in this
module.
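A minimal numpy sketch of these two priority rules follows; the cell encoding (-1 unknown, 0 free, 1 occupied) and the function names are assumptions made for illustration.

```python
import numpy as np

UNKNOWN, FREE, OCCUPIED = -1, 0, 1


def ar_map_merge(robot_map, ar_hmd_map, human_edits):
    """AR Map Merger: human edits win; Occupied wins on robot/AR-HMD conflict."""
    merged = robot_map.copy()
    known_ar = ar_hmd_map != UNKNOWN
    ar_only = known_ar & (robot_map == UNKNOWN)
    merged[ar_only] = ar_hmd_map[ar_only]
    conflict = known_ar & (robot_map != UNKNOWN) & (ar_hmd_map != robot_map)
    merged[conflict] = OCCUPIED                 # prioritize Occupied over Free
    edited = human_edits != UNKNOWN
    merged[edited] = human_edits[edited]        # human edits always win
    return merged


def global_map_merge(merged_map, latest_robot_map):
    """Global Map Merger: propagate fresh robot knowledge, keep AR obstacles."""
    global_map = latest_robot_map.copy()
    unknown = latest_robot_map == UNKNOWN
    global_map[unknown] = merged_map[unknown]
    # Cells set Occupied by the AR Map Merger stay Occupied even if the robot
    # sees them as free (e.g., obstacles above or below the LiDAR plane).
    global_map[(merged_map == OCCUPIED) & (latest_robot_map == FREE)] = OCCUPIED
    return global_map
```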
#### IV-A3 AR-HMD Map Builder
This module produces an AR-HMD occupancy grid map in two steps: (1) ray-
casting to detect the nearest obstacles to the AR-HMD within a defined window,
and (2) map updating to build and update the AR-HMD occupancy grid map based
on the ray-casting output, as discussed in [22]. The produced map shares the
same size, resolution, and reference frame as the robot’s map. The AR-HMD
device is assumed to be able to create a 3D mesh of the environment, and the
relative transformation between the robot’s map frame and the AR-HMD frame is
assumed to be known.
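The sketch below illustrates one possible shape of these two steps in Python; the grid encoding, the window size, and the coarse line stepping are assumptions for illustration rather than the implementation used in [22].

```python
import numpy as np


def update_ar_grid(grid, origin, resolution, hmd_xy, hits, max_range=3.0):
    """Mark ray-cast hits as occupied and the traversed cells as free.

    `grid` shares size/resolution/frame with the robot map; `hits` are obstacle
    points (map frame) returned by ray-casting against the AR-HMD's 3D mesh.
    """
    def to_cell(p):
        return tuple(np.floor((np.asarray(p) - origin) / resolution).astype(int))

    r0, c0 = to_cell(hmd_xy)
    for hit in hits:
        if np.linalg.norm(np.asarray(hit) - np.asarray(hmd_xy)) > max_range:
            continue                       # only update within the window
        r1, c1 = to_cell(hit)
        if not (0 <= r1 < grid.shape[0] and 0 <= c1 < grid.shape[1]):
            continue
        # Walk the ray and free the traversed cells (coarse line stepping).
        steps = max(abs(r1 - r0), abs(c1 - c0), 1)
        for t in np.linspace(0.0, 1.0, steps, endpoint=False):
            rr = int(round(r0 + t * (r1 - r0)))
            cc = int(round(c0 + t * (c1 - c0)))
            if 0 <= rr < grid.shape[0] and 0 <= cc < grid.shape[1]:
                grid[rr, cc] = 0           # free
        grid[r1, c1] = 1                   # occupied at the obstacle hit
    return grid
```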
### IV-B Experiments and Results
The proposed JISA framework was implemented using the Robot Operating System
(ROS) Indigo distro and Unity3D 2018.4f1. The interaction menu developed in
the JISA-based UI is shown in Fig. 6. This menu allows the user to import/send
augmented maps, initialize/re-initialize a map pose, manually adjust the pose
of augmented map in the physical world, activate/deactivate the AR-HMD map
builder, and choose to edit the map through: toggling the status of individual
cells, drawing lines, and deleting regions.
Figure 6: Graphical user interface on the HoloLens, showing the menu items
that are displayed to the left of the user [22].
Testing for this system was conducted in Irani Oxy Engineering Complex (IOEC)
building at the American University of Beirut (AUB). The testing areas were
carefully staged in a way to introduce challenges that affect the performance
of SLAM (i.e., low semantic features, glass walls, and objects higher or lower
than the LiDAR’s detection plane). A Clearpath Husky A200 Explorer robot
and a HoloLens were utilized during the experimentation process. The first
batch of experiments was performed to evaluate the effect of correcting the
robot’s pose on the overall accuracy of the resulting SLAM maps. In this
batch, the operator had to tele-operate the robot, following a defined
trajectory, for a distance of about $170m$. Table II presents the parameters
used in the different sets of experiments, where each set of tests was run
three times, resulting in 54 tests. Furthermore, using the proposed JISA
framework, three experiments were conducted using the parameters that resulted
in the worst map representation for each set.
TABLE II: Experimentation sets and their parameters [21].

| Set | A | B | C |
|---|---|---|---|
| Lighting | Day, Night | Day, Night | Day, Night |
| Laser range | 5.5-6.1 m | 7.5-8.1 m | 9.5-10.1 m |
| Particles | 5, 50, 100 | 5, 50, 100 | 5, 50, 100 |
Figure 7: Maps obtained using Set A (low laser range) [21].
Experiments show that the accuracy of maps increases with the increase in the
laser range and the number of particles. However, this improvement comes at a
higher computational cost. The best maps obtained through full autonomy using
parameters in Sets A and C are shown in Fig. 7a and Fig. 8a, while the worst
maps are shown in Fig. 7b and Fig. 8b respectively. Fig. 7c and Fig. 8c show
the results of the maps using the proposed JISA framework, which are overlaid
by the blueprints of the area in Fig. 7d and Fig. 8d. The conducted
experiments demonstrate how implementing JISA to request human assistance in
pose estimation can result in higher quality maps, especially in areas that
are considered challenging.
Figure 8: Maps obtained using Set C (high laser range) [21].
Tests using the JISA framework took $35\%-50\%$ less time than those performed
with OpenSLAM gmapping, since there was no need to revisit areas and check for
loop closures. Throughout the JISA tests, the operator needed to change the
robot’s pose in only $9.5\%$ of the instances in which the SC mechanism
requested assistance.
We define the success rate as the ratio of useful maps, i.e., maps that
represent the environment correctly with no areas overlaying on top of each
other, to the total number of maps obtained throughout the runs. Experiments
using OpenSLAM gmapping resulted in a $57.4\%$ success rate. Through the JISA
framework, all produced maps had no overlaying areas, resulting in a $100\%$
success rate. Moreover, maps produced through JISA did not require any
post-processing since all map edits were done during the runs.
Another experiment was done to assess the implementation of JISA framework in
collaborative SLAM. Fig. 9 shows a blueprint of the arena. Region I is
separated from Region II by an exit that is too narrow for the robot to pass
through.
Figure 9: Blueprint of the testing arena and photos of the experimental setup.
Region I contains obstacles that could not be detected by the LiDAR. Region II
is a corridor with poorly textured walls, and Region III has two glass walls
facing each other.
The robot was tele-operated to map Region I. The resultant map is shown in
Fig. 10a, and it is overlaid on the blueprint in Fig. 10b. The robot failed to
correctly map some objects such as those numbered from 1 to 5 in Fig. 10. The
operator had to activate the AR-HMD Map Builder module and walk around the un-
mapped objects to allow the HoloLens to improve the map. Fig. 10c shows the
updates performed by the HoloLens, in real-time, in blue color, and Fig. 10d
shows the merged map overlaid on the blueprint.
Since the robot cannot get out of Region I, the operator walked around regions
II and III while activating the AR-HMD Map Builder. Fig. 11a and Fig. 11b
show the resulting HoloLens map and the merged map, respectively. The HoloLens
failed to perform tracking in a feature-deprived location (see red circle in
Fig. 11b), and the glass walls were not detected by the HoloLens. To account
for these errors, the human operator performed manual edits while walking
around. The final corrected global map is shown in Fig. 11c.
Figure 10: (a) The map obtained by the robot where it was not able to detect
the objects numbered 1 to 5, which is overlaid on the blueprint in (b). (c)
The automatic real-time updates performed by the HoloLens on the augmented map
(blue color) merged with the robot’s map, which is overlaid on the blueprint
in (d) [22]. Figure 11: (a) Map produced by the AR-HMD Map Builder module in
Region II. (b) the merged map of both Region II and Region III; the red circle
shows a mapping error that occurred because the HoloLens failed to perform
tracking in this location. (c) the final global map after performing human
edits to correct all errors [22].
The entire experiment took 13 minutes and 50 seconds. To assess this
performance, another experiment was conducted in which the robot was tele-
operated to map all three regions, and the traditional method of tape-
measuring and then editing offline using GIMP software was used. This
traditional mapping and post-processing method required around 46 minutes.
Thus, the proposed JISA framework eliminated post-processing of the map and
needed approximately 70% less time than the traditional method.
Through the above experiments, the efficiency of the proposed JISA framework
is demonstrated in collaborative SLAM systems by producing more accurate maps
in significantly less time.
## V JISA in Automated Puzzle Reconstruction
The Jigsaw puzzle is a problem that requires reconstructing an image from non-
overlapping pieces. This ‘game’ has been around since the 18th century;
however, the first computational approach to solve a puzzle was introduced in
1964 [51]. In addition to being an interesting and challenging game,
researchers have applied puzzle solving techniques in different applications
such as DNA/RNA modeling [52], speech descrambling [53], reassembling
archaeological artifacts [54], and reconstructing shredded documents, paints,
and photos [55, 56, 57].
Jigsaw puzzles are generally solved depending on functions of colors,
textures, shapes, and possible inter-piece correlations. In recent years,
research into solving jigsaw puzzles has focused on image puzzles with square
non-overlapping pieces; thus, solving these puzzles has to rely on the image
information only. The difficulty of such puzzles varies depending on the type
of pieces that they consist of. These types can be categorized into pieces
with (1) unknown orientations, (2) unknown locations, and (3) unknown
locations and orientations, which is the hardest to solve.
The first algorithm to solve jigsaw puzzles was proposed by Freeman and
Gardner [58] in 1964 where the focus was on the shape information in puzzle
pieces. Using both edge shape and color information to influence adjoining
puzzle pieces was first introduced in [59], where the color similarity along
matching edges of two pieces was compared to decide on the reconstruction.
Other approaches that were proposed for such puzzles relied on isthmus global
critical points [60], dividing puzzle pieces into groups of most likely
‘interconnected’ pieces [61], applying shape classification algorithms [62],
among others.
Taking the challenge a step further, several researchers proposed algorithms
to solve jigsaw puzzles of square pieces with unknown locations and
orientations; thus, the reconstruction of pieces depends on image information
alone. This type of puzzle was proven to be an NP-complete problem [63],
meaning that formulating it as an optimization problem with a global energy
function is very hard to achieve [64]. The authors of [63] proposed solving
square puzzles through a graphical model and a probabilistic approach that
uses Markov Random Fields and belief propagation. However, this method
required having anchor pieces given as prior knowledge to the system.
Gallagher [65] also proposed a graphical model that solves puzzles through a
tree-based reassembly algorithm by
introducing the Mahalanobis Gradient Compatibility (MGC) metric. This metric
measures the local gradient near the puzzle piece edge. Zanoci and Andress
[51] extended the work in [65] and formulated the puzzle problem as a search
for a minimum spanning tree (MST) in a graph. Moreover, they used a Disjoint Set
Forest (DSF) data structure to guarantee fast reconstruction. One of the
common limitations of these approaches is that the algorithm cannot change the
position of a piece that has been misplaced early on in the reconstruction
process. Such mistakes lead to very poor overall performance of the algorithm
in many cases.
In our proposed work, we adopt the methods applied in [65] and [51] by
formulating the problem as a search for an MST, applying DSF, and using the
MGC metric for compatibility. The proposed JISA framework addresses the
challenges presented
in solving image puzzles with square pieces of unknown locations and
orientations, using the greedy approach. Through a custom-developed GUI, the
human supervisor monitors the puzzle reconstruction results in real-time,
intervenes based on his/her SA, and receives assistance requests from the
agent when its self-confidence drops. In addition, the GUI communicates some
internal states of the agent such as the reconstruction completion percentage,
pieces that are being merged, and the location of the merged pieces in the
reconstruction results. We assume that, based on the SA, imagination, and
cognitive power of a human, s/he can look at puzzle pieces or clusters and
detect errors, intervene, and assess matching options, thanks to the human
ability to see ‘the big picture.’
TABLE III: JISA attributes in Automated Puzzle Reconstruction.

| SC-based tasks | SA-based tasks | SC Attributes | SA Attributes | JISA-based UI |
|---|---|---|---|---|
| Matching of poorly textured pieces | Global puzzle reconstruction; trim frame location | Confidence in local piece merging; SC-metric: piece entropy | Global puzzle consistency; SA-tools: show reconstruction results, show the pieces responsible for a match and their locations in the reconstructed cluster, show the proposed trim frame | 2D GUI |

The first two columns list the tasks to be supervised; the last three list the corresponding attributes.
### V-A System Overview
This sub-section describes how the proposed JISA framework is applied to
automated puzzle reconstruction, following the framework presented in Section
(III). Table III lists the selections we made for puzzle solving based on the
JISA framework.
1. Tasks to be supervised by the human: based on the surveyed literature and
numerical experimentation with the methods proposed in [65] and [51], three
main challenges facing greedy autonomous puzzle solvers were identified:
* •
In cases where the pieces have low texture and highly similar colors at the
edges, the pairwise compatibility score will be low and the system will match
the two pieces, even though they are not adjacent in the original image. This
case is referred to as a ‘false-positive’ match, which leads to local minima
and negatively affects the puzzle reconstruction accuracy. Fig. 12a shows an
example of such cases: clusters C1 and C2 are merged together based on a
suggested match of the pieces with red borders. This false-positive merge
accumulated and decreased the accuracy of the final result.
* •
Although compatibility scores and priority selection were proven to be
reliable [65], they do not guarantee correct matching between pieces, even if
the pieces are rich in texture [66] (see examples shown in Fig. 12b). Moreover,
if two pieces are matched based on low pairwise compatibility but are later
shown to be globally misplaced, greedy methods do not have the ability to move
or rotate the misplaced piece; thus, the system gets stuck in local minima and
the error accumulates and diminishes the accuracy of reconstruction.
* •
The adopted method reconstructs the pieces without dimension/orientation
constraints. After all of the pieces are reconstructed in one cluster, the
system performs a trim and fill, as discussed later. This trimming may have a
wrong location/orientation depending on the accuracy of the reconstructed
image; thus, the final result after re-filling the puzzle might still not
match the original image.
Based on the above challenges, a human supervisor should be able to (1)
evaluate the matching option when two pieces with low image features (not
enough texture or color variation in the piece) are to be matched, (2) delete
pieces that s/he finds misplaced, (3) approve the trim frame or re-locate/re-
orient it after reconstruction is completed.
2. Robot SC attributes: from experimentation, it was realized that as the
texture complexity in a puzzle piece decreases, the possibility of having a
false-positive match increases. This texture complexity could be measured
through the entropy of an image. In short, when the complexity of the texture
in the image increases, the entropy value increases, and vice versa. Thus, the
entropy of a puzzle piece is used as SC-metric. In our proposed framework
implementation, when this SC-metric drops below a certain threshold that is
defined experimentally, the agent asks the human supervisor to approve or
decline the match decision. In case no response is received within a pre-
defined time limit, the agent proceeds with the match. This no-response
strategy is selected because even if merging the clusters resulted in
misplaced pieces, the human can detect the error and fix it through SA-based
intervention at a later stage.
Figure 12: Examples of some challenges that face autonomous greedy puzzle
solvers. In (a), clusters C1 and C2 are wrongly merged based on low pairwise
compatibility score; this yields a final result with low accuracy. (b) shows
two examples of pieces with high textures that are misplaced by the autonomous
agent.
3. Human SA attributes: since SC attributes could not be defined to capture
globally misplaced pieces, the human supervisor should leverage his/her SA and
judgment abilities to assess the global consistency in the reconstructed
puzzle. The SA tools proposed to enhance human SA here are summarized in
showing the user the reconstruction results, highlighting the pieces
responsible for merging in addition to their location in the resulted cluster,
and showing the user the proposed trim frame when reconstruction is done.
These tools allow the supervisor to detect misplaced pieces and relay this
information back to the agent. Moreover, the user is allowed to re-locate or
re-orient the trim frame if needed.
4. JISA-based user interface: a user-friendly GUI, shown in Fig. 14(a), is
developed to apply the SA tools discussed above. The GUI displays the progress
of the reconstruction process to the user, communicates the system’s requests
for assistance, and furnishes certain internal states that help the user
intervene in the process.
In this work, we are applying JISA to the automated puzzle solver presented in
[65] and [51]. This allows a human to supervise and intervene in the
reconstruction of an image from a set of non-overlapping pieces. All pieces
are square in shape and are placed in random locations and orientations, and
only the dimensions of the final image are known to the system. Fig. 13
presents the block diagram of the JISA puzzle solver. The modules in black are
adopted from [65] and [51], while the added JISA modules are shown in red.
Figure 13: Block diagram of the JISA-based methodology that is followed for
automated jigsaw puzzle reconstruction.
#### V-A1 Pairwise Compatibility
This module calculates the pairwise compatibility function for each possible
pair of pieces in the puzzle set. The MGC metric [51] is adopted to determine
the similarity of the gradient distributions on the common boundary of the
pair of pieces to be matched. Assuming that the compatibility measure
$D_{LR}(x_{i}\ ,\ x_{j})$ of two puzzle pieces $x_{i}$ and $x_{j}$, where
$x_{j}$ is on the right side of $x_{i}$, is to be calculated, the color
gradients in each color channel (red, green, blue) are calculated near the
right edge of $x_{i}$ as follows:
$G_{iL}=x_{i}(p,P,c)-x_{i}(p,P-1,c),$ (2)
where $G_{iL}$ is the gradient array with 3 columns, $c$ represents the three
color channels, and $P$ is the number of rows, since the dimension of the
puzzle piece (in pixels) is $P\times P$. Then, the mean distribution
$\mu_{iL}(c)$ and the covariance $S_{iL}$ of $G_{iL}$ are calculated on the
same side of the same piece $x_{i}$ as:
$\mu_{iL}(c)=\frac{1}{P}\sum_{p=1}^{P}G_{iL}(p,c).$ (3)
The compatibility measure $D_{LR}(x_{i}\ ,\ x_{j})$, which calculates the
gradient from the right edge of $x_{i}$ to the left edge of $x_{j}$, is
calculated as:
$D_{LR}(x_{i}\ ,\
x_{j})=\sum_{p=1}^{P}(G_{ijLR}(p)-\mu_{iL})S_{iL}^{-1}(G_{ijLR}(p)-\mu_{iL})^{T},$
(4)
where $G_{ijLR}=x_{j}(p,1,c)-x_{i}(p,P,c)$. After that, the symmetric
compatibility score $C_{LR}(x_{i},x_{j})$ is computed as:
$C_{LR}(x_{i},x_{j})=D_{LR}(x_{i}\ ,\ x_{j})+D_{RL}(x_{j}\ ,\ x_{i}),$ (5)
where $D_{RL}(x_{j}\ ,\ x_{i})$ is calculated in a similar way to (4). These
steps are applied to each possible pair of pieces, for all possible
orientations of each piece
$({0^{\circ}},{90^{\circ}},{180^{\circ}},{270^{\circ}})$. The final step
entails dividing each compatibility score $C_{LR}(x_{i},x_{j})$ by the
second-smallest score corresponding to the same edge, resulting in the final
compatibility score $C^{\prime}_{LR}(x_{i},x_{j})$. This step ensures that
significant matches for each edge have a score that is much smaller than 1,
thus re-prioritizing the compatibility scores so that piece edges that are
more likely to be a ‘correct match’ are chosen by the reconstruction
algorithm in the very early steps.
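As a compact rendering of equations (2)-(5), the numpy sketch below computes the MGC-based scores for the left-right configuration (pieces as $P\times P\times 3$ arrays); it is a simplification for illustration and omits the rotation bookkeeping.

```python
import numpy as np


def d_lr(xi, xj):
    """MGC measure D_LR(x_i, x_j), Eq. (4): x_j placed to the right of x_i."""
    G_iL = xi[:, -1, :].astype(float) - xi[:, -2, :].astype(float)   # Eq. (2)
    mu_iL = G_iL.mean(axis=0)                                        # Eq. (3)
    S_inv = np.linalg.pinv(np.cov(G_iL, rowvar=False))               # covariance
    G_ijLR = xj[:, 0, :].astype(float) - xi[:, -1, :].astype(float)  # boundary gradient
    diff = G_ijLR - mu_iL
    return float(np.einsum('pi,ij,pj->', diff, S_inv, diff))


def c_lr(xi, xj):
    """Symmetric compatibility C_LR(x_i, x_j), Eq. (5)."""
    # D_RL(x_j, x_i) equals D_LR computed on the horizontally mirrored pieces.
    return d_lr(xi, xj) + d_lr(xj[:, ::-1, :], xi[:, ::-1, :])
```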
#### V-A2 Tree-based Reconstruction (TBR)
This module is responsible for reconstructing the puzzle pieces through
finding a minimum spanning tree (MST) for a graph representation of the puzzle
$G=(V,E)$ [65, 51]. For that, pieces are treated as vertices, and
compatibility scores ($C_{LR}(x_{i},x_{j})$) are treated as weights of edges
($e$) in the graph. Each edge has a corresponding configuration between the
two vertices (the orientation of each piece). To find the MST that represents
a valid configuration of the puzzle, the method presented in [65, 51] is
adopted where a modified Disjoint Set Forest (DSF) data structure is applied.
The TBR initializes with forests (clusters) equal in number to the puzzle
pieces, each forest having an individual vertex $V$ corresponding to a puzzle
piece. Each forest records the local coordinates and the orientation of
each member puzzle piece (vertex). A flowchart that shows the implementation
logic of TBR is shown in Fig. 14(b).
At the beginning of every iteration, the edge representing the lowest
compatibility score ($e_{min}$) is selected in order to join the corresponding
vertices in one forest. If the two vertices are already in the same forest
(i.e., pieces belong to same cluster), or matching the pieces leads to
collision in their corresponding clusters, the edge is discarded and appended
to the unused edges list ($E_{unused}$). Otherwise, the two pieces, with their
corresponding clusters, are sent to the SC interaction handler module to be
checked for merging. At the end of every iteration, and based on the results
from the SC interaction handler module, TBR either discards the edge or merges
the corresponding vertices into a single forest, where $e_{min}$ is moved to
the set of edges in this forest; here, the local coordinates and orientations
of each piece are updated. Moreover, this module is responsible for applying
edits requested by the human through the SA interaction handler as discussed
later. The aforementioned process repeats until all pieces are assembled in
one cluster, which means that all vertices now belong to the same forest.
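The sketch below conveys the essence of this loop with Python's heap and a simple union-find; the cluster-geometry bookkeeping (local coordinates, collision checks, merges) is abstracted into a hypothetical `clusters` object, and `sc_check` stands for the SC interaction handler's verdict.

```python
import heapq


def tree_based_reconstruction(n_pieces, edges, sc_check, clusters):
    """Greedy MST-style assembly over a disjoint-set forest.

    `edges` is a list of (score, i, j, config) tuples, lower score = better.
    """
    parent = list(range(n_pieces))

    def find(v):                        # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    heapq.heapify(edges)
    unused, n_forests = [], n_pieces    # E_unused and remaining forest count
    while n_forests > 1 and edges:
        score, i, j, config = heapq.heappop(edges)   # e_min
        ri, rj = find(i), find(j)
        if ri == rj or clusters.would_collide(i, j, config):
            unused.append((score, i, j, config))     # move e_min to E_unused
            continue
        if not sc_check(i, j, clusters):             # human declined the merge
            continue                                  # e_min is deleted from E
        parent[rj] = ri                               # merge the two forests
        clusters.merge(i, j, config)                  # update coords/orientations
        n_forests -= 1
    return clusters
```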
Figure 14: (a) The graphical user interface (GUI) developed for applying the
JISA framework to jigsaw puzzle reconstruction. (b) A flowchart showing the
implementation logic of TBR, the blocks with red borders correspond to the SC
and SA interaction handlers. A similar logic is used for implementing the
Trimming and Filling modules.
#### V-A3 Self-confidence interaction handler
This module represents the mechanism applied to check the self-confidence of
the agent in merging two pieces with their corresponding clusters. When the
two pieces (corresponding to $e_{min}$) along with their orientations are
received, this module checks if either piece has an entropy below the pre-
defined threshold, referred to as the confidence threshold. Our SC-metric is
based on the entropy obtained from Gray Level Co-Occurrence Matrix ($GLCM$),
which extracts second order statistical texture features from images. $GLCM$
is a square matrix where the number of rows and columns equals the number of
gray levels in the image. Each element $GLCM(i,j)$ of the matrix is
represented as a joint probability distribution function
$P(i,j|d_{pixel},\theta)$, which is the probability of appearance of two
pixels with values $v_{i}$ and $v_{j}$, separated by distance $d_{pixel}$ at
an orientation angle $\theta$. After the GLCM is obtained, the entropy is
calculated as follows:
$Entropy=-\sum_{i=0}^{M-1}\sum_{j=0}^{M-1}P(i,j|d_{pixel},\theta)\log_{2}P(i,j|d_{pixel},\theta),$
(6)
where $P(i,j|d_{pixel},\theta)=GLCM(i,j)$.
Since each piece of the puzzle can be placed in four different orientations,
we calculate the entropy corresponding to the orientation of each piece where
$\theta=0^{\circ},90^{\circ},180^{\circ}$, or $270^{\circ}$; $d_{pixel}=1$ in
all calculations.
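A self-contained numpy sketch of this SC-metric is given below; it builds the GLCM by brute force for clarity (library routines such as scikit-image's could be used instead), and the function name is ours.

```python
import numpy as np


def glcm_entropy(piece, theta_deg=0, d_pixel=1, levels=256):
    """Entropy of the GLCM of a grayscale puzzle piece, cf. equation (6).

    `piece` is a 2D integer array; `theta_deg` is one of 0, 90, 180, 270.
    """
    offsets = {0: (0, d_pixel), 90: (-d_pixel, 0),
               180: (0, -d_pixel), 270: (d_pixel, 0)}
    dr, dc = offsets[theta_deg]

    glcm = np.zeros((levels, levels), dtype=np.float64)
    rows, cols = piece.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                glcm[piece[r, c], piece[r2, c2]] += 1

    p = glcm / glcm.sum()      # joint probability P(i, j | d_pixel, theta)
    nz = p[p > 0]              # avoid log2(0) terms
    return float(-(nz * np.log2(nz)).sum())
```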
If both pieces pass this SC test, the SC interaction handler commands TBR to
proceed with merging the clusters. Otherwise, this module performs a temporary
merge, shows the merging result (cluster image) to the user through the JISA-
based UI, and requests human assistance in approving the merge. After
requesting assistance, the SC interaction handler awaits human response for a
defined period of time. If the human approves the temporary merge, TBR
proceeds with merging. If the human declines the merge, merging is undone and
the corresponding $e_{min}$ is deleted from $E$ in TBR. In case no response is
received within this time limit, TBR proceeds with merging.
#### V-A4 Situation awareness interaction handler
In addition to approving or declining the merge upon the agent’s request, the
user can select pieces that s/he deems as misplaced in the resultant image
based on his/her SA and judgment. When the user selects these pieces, SA
interaction handler commands TBR to delete the corresponding vertices from the
shown cluster image. Here, TBR deletes the vertices from the displayed forest
and forms new forests, using corresponding edges from $E_{unused}$, where each
forest contains one vertex corresponding to one deleted piece. The vertices of
the newly formed forests are connected to the other vertices in the graph
through edges corresponding to the compatibility scores as discussed before.
Allowing the user to inform the system about misplaced pieces is crucial since
not all errors occur due to low entropy, as some pieces might have complex
texture (high entropy) and still be misplaced by the system.
#### V-A5 Trimming and Filling
Trimming is applied to ensure that the final reconstructed image has the same
dimensions as the original one. In this step, a frame with size equal to the
original image is moved over the assembled puzzle to determine the location of
the portion with the highest number of pieces; this frame is tested for both
portrait and landscape orientations. When the frame with the highest number of
pieces
is defined, the Trimming module sends the candidate frame to SA interaction
handler, which shows the user this frame superposed over the reconstructed
image. The user here can approve the frame location/orientation or edit it.
After the frame is set, all pieces outside this frame are trimmed from the
image and sent to the Filling module. Here, the trimmed pieces are used to
fill the gaps in the reconstructed image. Filling starts from gaps with the
highest number of neighbors and continues until all gaps are filled. Each gap
is filled by the piece whose edges have the best compatibility scores with the
adjacent edges corresponding to the gap’s neighbor pieces.
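One way to realize the frame search is a sliding-window count over the assembled cluster, as in the numpy sketch below; the 0/1 occupancy encoding of piece slots is an assumption for illustration.

```python
import numpy as np


def best_trim_frame(occupied, frame_rows, frame_cols):
    """Return ((row, col), (h, w)) of the frame covering the most pieces.

    `occupied` is a 2D 0/1 array marking which grid slots of the assembled
    cluster hold a piece; both portrait and landscape frames are tested.
    """
    best = (-1, None, None)
    for h, w in ((frame_rows, frame_cols), (frame_cols, frame_rows)):
        if occupied.shape[0] < h or occupied.shape[1] < w:
            continue
        # 2D prefix sums give the piece count inside every h x w window.
        S = np.zeros((occupied.shape[0] + 1, occupied.shape[1] + 1), dtype=int)
        S[1:, 1:] = np.cumsum(np.cumsum(occupied, axis=0), axis=1)
        counts = S[h:, w:] - S[:-h, w:] - S[h:, :-w] + S[:-h, :-w]
        r, c = np.unravel_index(np.argmax(counts), counts.shape)
        if counts[r, c] > best[0]:
            best = (counts[r, c], (r, c), (h, w))
    return best[1], best[2]
```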
### V-B Implementation and Experiments
The JISA framework is implemented in Python 3, and the GUI is developed in
PyQt. The GUI, shown in Fig. 14(a), displays the results of the reconstruction
in real-time (a resultant cluster image is shown in the ‘output’ area of the
GUI), the percentage of completion of the reconstruction process, and the
logs/errors/warnings. Moreover, to help the user decide whether to decline
the merge or delete misplaced pieces, the GUI shows the user the two pieces
that are responsible for the merge, and the location of these two pieces in
the resultant cluster image.
Through this GUI, the user can: choose the image to be reconstructed, start
reconstruction process, and perform operations on the shown cluster image.
These operations are: rotate, zoom in/out, approve/decline, and delete pieces
from the cluster. When the final cluster is obtained, the GUI shows the
proposed trimming frame to the user for approval or re-location.
To validate the proposed JISA framework in reconstructing jigsaw puzzles,
experiments were conducted on the MIT dataset published in [63]. This dataset
is formed of 20 images and is used as a benchmark in most of the available
references in the related literature. To assess the results, two types of
accuracy measures are calculated. The direct comparison metric compares the
exact position and orientation of each piece with the original image and
calculates the accuracy as a percentage. A drawback of this metric is that if
the reconstructed image has its pieces slightly shifted in a certain
direction, it will report a very low score, or even zero. The neighbor
comparison metric compares the number of times that two pieces, which are
adjacent (with the same orientation) in the original image, are adjacent in
the final solution. This metric shows more robustness against slightly shifted
pieces.
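For concreteness, both metrics can be computed from the ground-truth and solved grids, as in the hedged sketch below; we assume each grid is an $R\times C\times 2$ array of (piece id, orientation) entries, which is our own encoding rather than the benchmark's.

```python
import numpy as np


def direct_accuracy(truth, solved):
    """Fraction of pieces at the exact position with the exact orientation."""
    return float((truth == solved).all(axis=-1).mean())


def neighbor_accuracy(truth, solved):
    """Fraction of true adjacencies (with orientations) kept in the solution."""
    def pairs(grid):
        right = {(tuple(grid[r, c]), tuple(grid[r, c + 1]))
                 for r in range(grid.shape[0]) for c in range(grid.shape[1] - 1)}
        down = {(tuple(grid[r, c]), tuple(grid[r + 1, c]))
                for r in range(grid.shape[0] - 1) for c in range(grid.shape[1])}
        return right | down

    true_pairs = pairs(truth)
    return len(true_pairs & pairs(solved)) / len(true_pairs)
```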
In each experiment, the user selects an image from the set through the GUI,
then starts the reconstruction process. The human supervisor is instructed to
decline the merge option if s/he is not totally confident that this match is
correct. Moreover, the user is instructed to delete misplaced pieces whenever
detected. Here, it is important to mention that the user is assumed to have a
detailed knowledge about the GUI and how the algorithm works. Three runs were
performed on each image, resulting in a total of 60 tests. Table IV shows the
results obtained by the JISA system, in addition to results from four
different previous works. All results are based on square puzzle pieces with
unknown locations and orientations. The number of pieces per puzzle is 432 and
piece size in pixels is $28\times 28$. The obtained results show that the JISA
framework outperforms the other approaches in both direct and neighbor
comparison metrics.
To further show the effect of having JISA, four experiments are conducted in
which the supervisor is only allowed to approve or decline the merge result
when the system’s self-confidence drops. This means that the supervisor can
only intervene upon the system’s request, so the system cannot benefit from
the supervisor’s better SA and judgment to correct errors that it cannot
detect.
Visual results are shown in Fig. 15. The first column (left) shows results of
reconstruction from [51], the second column (middle) shows results of the user
only approving/declining upon the system’s request, while the third column
(right) shows results of the proposed JISA system. As expected, these results
demonstrate that having a JISA framework, where the supervisor can intervene
based on his/her SA and the system’s request, results in superior performance
as compared to a fully autonomous system or a system that only asks for human
assistance based on its self-confidence.
TABLE IV: Comparison of reconstruction performance between the JISA framework and four state-of-the-art approaches.

| Approach | Year | Direct Metric | Neighbor Metric |
|---|---|---|---|
| JISA framework | 2021 | $96.5\%$ | $97.2\%$ |
| MST with DSF [51] | 2016 | $88.5\%$ | $92.9\%$ |
| Linear Programming [67] | 2015 | $95.3\%$ | $95.6\%$ |
| Loop Constraints [68] | 2014 | $94.7\%$ | $94.9\%$ |
| Constrained MST [65] | 2012 | $82.2\%$ | $90.4\%$ |
Figure 15: Visual results of the conducted experiments. The left column shows
results using the approach in [51], the middle column shows the results
obtained through running the proposed system with the user allowed only to
approve/decline merge options, and the right column shows the results obtained
by the proposed JISA puzzle solver.
## VI Discussion and Conclusion
The aim of this paper is to propose and validate a joint-initiative supervised
autonomy framework for human-robot interaction. Unlike other frameworks that
focus on the allocation of functions between humans and robots, JISA is
proposed to extend the spectrum of HRI to autonomous systems where it is not
feasible for a human to ‘completely takeover’ the automated mission tasks. In
the proposed JISA framework, the autonomous agent (robot) performs tasks in an
autonomous fashion, but can ask a human supervisor for assistance based on its
self-confidence. In addition, the human can intervene and influence the
autonomous sensing, planning, and acting based on his/her SA. This framework
aims to overcome autonomy challenges and enhance performance by combining
the cognition, flexibility, and problem-solving skills of humans with the
strength, endurance, productivity, and precision of robots. As a proof of
concept, the proposed JISA framework is applied in two different systems:
collaborative grid-based SLAM, and automated jigsaw puzzle reconstruction. In
both systems, the following are defined: (1) the challenges and limitations
affecting full autonomy performance, (2) the confidence measure that triggers
the robot’s request for human assistance, (3) the type and level of
intervention that the human can perform. In addition, an augmented reality
interface (for SLAM) and 2D graphical user interface (for puzzle
reconstruction) are custom-designed to enhance the human situation awareness
and communicate information and requests efficiently.
Through experimental validation, it was shown that applying the JISA framework
in fully autonomous systems can help overcome several autonomy limitations and
enhance the overall system performance. In fact, JISA outperformed full
autonomy in both implemented systems. In collaborative SLAM, post-processing
of grid maps was eliminated, and the JISA system produced more accurate maps
in fewer trials and with less run-time per trial. In automated puzzle
reconstruction, results showed that the JISA system outperforms both fully
autonomous systems and systems where the human merely intervenes
(accept/reject) upon the agent’s requests only.
Since this is a first step towards a joint-initiative supervised autonomy, we
are aware of some limitations that need to be addressed in any future
evolution of JISA. First, the cost of each interaction over its benefit is not
evaluated in this work. This is a crucial component that could be included in
the SC/SA attributes to reason about when it is best to request/get assistance
through measuring: (1) the cost of interrupting the human supervisor, which is
most important when the human is supervising multiple robots; and (2) the cost
of waiting for human response, which is important in missions where the robot
needs to make rapid decisions and actions. Another limitation is that JISA
assumes the human to be available and ready to provide assistance whenever
needed, which is not always the case. This assumption could be relaxed by
including a module to estimate the human availability and evaluate his/her
capability to provide the needed help. Third, the number of help requests
within a mission could be reduced if the robot has a
learning-from-interaction capability. Agent learning would lead to extending
the types of
human assistance to include providing demonstrations, information, and
preemptive advice. Finally, to relax the assumption that the human always has
superiority over the robot decisions, a human performance evaluation module
could be included to reason about whether to accept the human assistance,
negotiate it, or refuse it.
In future work, we will be studying more SC metrics/events (other than
$N_{eff}$ and Entropy) in both POC applications presented. This will help in
evaluating the effectiveness and optimality of the proposed attributes. In
addition, we aim to perform more tests on both applications through a group of
human subjects. The goal of these tests is to avoid any biased results and
study certain factors that might influence the performance of human
supervisors such as awareness, workload, skills, trust, and experience.
Moreover, and to better validate the general applicability and efficiency of
the proposed JISA framework, we aim to apply it in a new task of physical
robot 3D assembly.
## VII Acknowledgments
This work was supported by the University Research Board (URB) at the American
University of Beirut.
## References
* [1] M. Desai, M. Medvedev, M. Vázquez, S. McSheehy, S. Gadea-Omelchenko, C. Bruggeman, A. Steinfeld, and H. Yanco, “Effects of changing reliability on trust of robot systems,” in _2012 7th ACM/IEEE International Conference on Human-Robot Interaction (HRI)_. IEEE, 2012, pp. 73–80.
* [2] S. Rosenthal, J. Biswas, and M. M. Veloso, “An effective personal mobile robot agent through symbiotic human-robot interaction.” in _AAMAS_ , vol. 10, 2010, pp. 915–922.
* [3] R. Parasuraman, T. B. Sheridan, and C. D. Wickens, “A model for types and levels of human interaction with automation,” _IEEE Transactions on systems, man, and cybernetics-Part A: Systems and Humans_ , vol. 30, no. 3, pp. 286–297, 2000.
* [4] M. Lippi and A. Marino, “Human multi-robot physical interaction: a distributed framework,” _Journal of Intelligent & Robotic Systems_, vol. 101, no. 2, pp. 1–20, 2021.
* [5] P. Glogowski, A. Böhmer, H. Alfred, and K. Bernd, “Robot speed adaption in multiple trajectory planning and integration in a simulation tool for human-robot interaction,” _Journal of Intelligent & Robotic Systems_, vol. 102, no. 1, 2021.
* [6] S. Li, H. Wang, and S. Zhang, “Human-robot collaborative manipulation with the suppression of human-caused disturbance,” _Journal of Intelligent & Robotic Systems_, vol. 102, no. 4, pp. 1–11, 2021.
* [7] J. M. Lockhart, M. H. Strub, J. K. Hawley, and L. A. Tapia, “Automation and supervisory control: A perspective on human performance, training, and performance aiding,” in _Proceedings of the Human Factors and Ergonomics Society annual meeting_ , vol. 37, no. 18. SAGE Publications Sage CA: Los Angeles, CA, 1993, pp. 1211–1215.
* [8] A. B. Moniz and B.-J. Krings, “Robots working with humans or humans working with robots? searching for social dimensions in new human-robot interaction in industry,” _Societies_ , vol. 6, no. 3, p. 23, 2016.
* [9] X. Liu, S. S. Ge, F. Zhao, and X. Mei, “A dynamic behavior control framework for physical human-robot interaction,” _Journal of Intelligent & Robotic Systems_, vol. 101, no. 1, pp. 1–18, 2021.
* [10] T. Kaupp, A. Makarenko, and H. Durrant-Whyte, “Human–robot communication for collaborative decision making—a probabilistic approach,” _Robotics and Autonomous Systems_ , vol. 58, no. 5, pp. 444–456, 2010.
* [11] M. R. Endsley, “From here to autonomy: lessons learned from human–automation research,” _Human factors_ , vol. 59, no. 1, pp. 5–27, 2017.
* [12] P. K. Pook and D. H. Ballard, “Deictic human/robot interaction,” _Robotics and Autonomous Systems_ , vol. 18, no. 1-2, pp. 259–269, 1996.
* [13] J. Fritsch, M. Kleinehagenbrock, S. Lang, T. Plötz, G. A. Fink, and G. Sagerer, “Multi-modal anchoring for human–robot interaction,” _Robotics and Autonomous Systems_ , vol. 43, no. 2-3, pp. 133–147, 2003.
* [14] K. Severinson-Eklundh, A. Green, and H. Hüttenrauch, “Social and collaborative aspects of interaction with a service robot,” _Robotics and Autonomous systems_ , vol. 42, no. 3-4, pp. 223–234, 2003.
* [15] J. van der Rijt, P. Van den Bossche, M. W. van de Wiel, S. De Maeyer, W. H. Gijselaers, and M. S. Segers, “Asking for help: A relational perspective on help seeking in the workplace,” _Vocations and learning_ , vol. 6, no. 2, pp. 259–279, 2013.
* [16] D. E. Cantor and Y. Jin, “Theoretical and empirical evidence of behavioral and production line factors that influence helping behavior,” _Journal of Operations Management_ , vol. 65, no. 4, pp. 312–332, 2019.
* [17] M. L. Frazier and C. Tupper, “Supervisor prosocial motivation, employee thriving, and helping behavior: A trickle-down model of psychological safety,” _Group & Organization Management_, vol. 43, no. 4, pp. 561–593, 2018.
* [18] G. S. Van der Vegt and E. Van de Vliert, “Effects of perceived skill dissimilarity and task interdependence on helping in work teams,” _Journal of management_ , vol. 31, no. 1, pp. 73–89, 2005.
* [19] V. Srinivasan and L. Takayama, “Help me please: Robot politeness strategies for soliciting help from humans,” in _Proceedings of the 2016 CHI conference on human factors in computing systems_ , 2016, pp. 4945–4955.
* [20] A. Sidaoui, I. H. Elhajj, and D. Asmar, “Human-in-the-loop augmented mapping,” in _2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. IEEE, 2018, pp. 3190–3195.
* [21] A. Sidaoui, M. K. Zein, I. H. Elhajj, and D. Asmar, “A-slam: Human in-the-loop augmented slam,” in _2019 International Conference on Robotics and Automation (ICRA)_. IEEE, 2019, pp. 5245–5251.
* [22] A. Sidaoui, I. H. Elhajj, and D. Asmar, “Collaborative human augmented slam,” in _2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. IEEE, 2019, pp. 2131–2138.
* [23] J. R. Carbonell, “Ai in cai: An artificial-intelligence approach to computer-assisted instruction,” _IEEE transactions on man-machine systems_ , vol. 11, no. 4, pp. 190–202, 1970.
* [24] J. Allen, C. I. Guinn, and E. Horvtz, “Mixed-initiative interaction,” _IEEE Intelligent Systems and their Applications_ , vol. 14, no. 5, pp. 14–23, 1999.
* [25] E. Horvitz, “Principles of mixed-initiative user interfaces,” in _Proceedings of the SIGCHI conference on Human Factors in Computing Systems_ , 1999, pp. 159–166.
* [26] R. Cohen, C. Allaby, C. Cumbaa, M. Fitzgerald, K. Ho, B. Hui, C. Latulipe, F. Lu, N. Moussa, D. Pooley _et al._ , “What is initiative?” _User Modeling and User-Adapted Interaction_ , vol. 8, no. 3-4, pp. 171–214, 1998.
* [27] S. Jiang and R. C. Arkin, “Mixed-initiative human-robot interaction: definition, taxonomy, and survey,” in _2015 IEEE International Conference on Systems, Man, and Cybernetics_. IEEE, 2015, pp. 954–961.
* [28] S. Zieba, P. Polet, F. Vanderhaegen, and S. Debernard, “Principles of adjustable autonomy: a framework for resilient human–machine cooperation,” _Cognition, Technology & Work_, vol. 12, no. 3, pp. 193–203, 2010.
* [29] J. M. Beer, A. D. Fisk, and W. A. Rogers, “Toward a framework for levels of robot autonomy in human-robot interaction,” _Journal of human-robot interaction_ , vol. 3, no. 2, p. 74, 2014.
* [30] F. Tang and E. Ito, “Human-assisted navigation through sliding autonomy,” in _2017 2nd International Conference on Robotics and Automation Engineering (ICRAE)_. IEEE, 2017, pp. 26–30.
* [31] M. A. Goodrich, D. R. Olsen, J. W. Crandall, and T. J. Palmer, “Experiments in adjustable autonomy,” in _Proceedings of IJCAI Workshop on autonomy, delegation and control: interacting with intelligent agents_. Seattle, WA, 2001, pp. 1624–1629.
* [32] G. Gemignani, R. Capobianco, E. Bastianelli, D. D. Bloisi, L. Iocchi, and D. Nardi, “Living with robots: Interactive environmental knowledge acquisition,” _Robotics and Autonomous Systems_ , vol. 78, pp. 1–16, 2016.
* [33] M. Y. Cheng and R. Cohen, “A hybrid transfer of control model for adjustable autonomy multiagent systems,” in _Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems_ , 2005, pp. 1149–1150.
* [34] S. Rosenthal, M. Veloso, and A. K. Dey, “Is someone in this office available to help me?” _Journal of Intelligent & Robotic Systems_, vol. 66, no. 1, pp. 205–221, 2012.
* [35] T. Fong, C. Thorpe, and C. Baur, “Robot, asker of questions,” _Robotics and Autonomous systems_ , vol. 42, no. 3-4, pp. 235–243, 2003.
* [36] T. M. Roehr and Y. Shi, “Using a self-confidence measure for a system-initiated switch between autonomy modes,” in _Proceedings of the 10th international symposium on artificial intelligence, robotics and automation in space, Sapporo, Japan_ , 2010, pp. 507–514.
* [37] L. Burks, N. Ahmed, I. Loefgren, L. Barbier, J. Muesing, J. McGinley, and S. Vunnam, “Collaborative human-autonomy semantic sensing through structured pomdp planning,” _Robotics and Autonomous Systems_ , p. 103753, 2021.
* [38] M. Chiou, N. Hawes, and R. Stolkin, “Mixed-initiative variable autonomy for remotely operated mobile robots,” _arXiv preprint arXiv:1911.04848_ , 2019\.
* [39] M. Guo, S. Andersson, and D. V. Dimarogonas, “Human-in-the-loop mixed-initiative control under temporal tasks,” in _2018 IEEE International Conference on Robotics and Automation (ICRA)_. IEEE, 2018, pp. 6395–6400.
* [40] E. A. M. Ghalamzan, F. Abi-Farraj, P. R. Giordano, and R. Stolkin, “Human-in-the-loop optimisation: mixed initiative grasping for optimally facilitating post-grasp manipulative actions,” in _2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. IEEE, 2017, pp. 3386–3393.
* [41] R. Chipalkatty, G. Droge, and M. B. Egerstedt, “Less is more: Mixed-initiative model-predictive control with human inputs,” _IEEE Transactions on Robotics_ , vol. 29, no. 3, pp. 695–703, 2013.
* [42] R. R. Murphy, J. Casper, M. Micire, J. Hyams _et al._ , “Mixed-initiative control of multiple heterogeneous robots for urban search and rescue,” _proceedings of the IEEE Transactions on Robotics and Automation_ , 2000.
* [43] R. Dubé, A. Gawel, H. Sommer, J. Nieto, R. Siegwart, and C. Cadena, “An online multi-robot slam system for 3d lidars,” in _2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. IEEE, 2017, pp. 1004–1011.
* [44] H. Surmann, N. Berninger, and R. Worst, “3d mapping for multi hybrid robot cooperation,” in _2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. IEEE, 2017, pp. 626–633.
* [45] P. Fankhauser, M. Bloesch, P. Krüsi, R. Diethelm, M. Wermelinger, T. Schneider, M. Dymczyk, M. Hutter, and R. Siegwart, “Collaborative navigation for flying and walking robots,” in _2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. IEEE, 2016, pp. 2859–2866.
* [46] E. A. Topp and H. I. Christensen, “Tracking for following and passing persons,” in _2005 IEEE/RSJ International Conference on Intelligent Robots and Systems_. IEEE, 2005, pp. 2321–2327.
* [47] P. Vieira and R. Ventura, “Interactive mapping in 3d using rgb-d data,” in _2012 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR)_. IEEE, 2012, pp. 1–6.
* [48] D. Sprute, K. Tönnies, and M. König, “Virtual borders: Accurate definition of a mobile robot’s workspace using augmented reality,” in _2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_. IEEE, 2018, pp. 8574–8581.
* [49] “Gmapping ros package.” [Online]. Available: http://wiki.ros.org/gmapping
* [50] G. Grisetti, C. Stachniss, and W. Burgard, “Improved techniques for grid mapping with rao-blackwellized particle filters,” _IEEE transactions on Robotics_ , vol. 23, no. 1, pp. 34–46, 2007.
* [51] C. Zanoci and J. Andress, “Making puzzles less puzzling: An automatic jigsaw puzzle solver,” _Stanford.edu_ , 2016.
* [52] W. Marande and G. Burger, “Mitochondrial dna as a genomic jigsaw puzzle,” _Science_ , vol. 318, no. 5849, pp. 415–415, 2007.
* [53] Y.-X. Zhao, M.-C. Su, Z.-L. Chou, and J. Lee, “A puzzle solver and its application in speech descrambling,” in _WSEAS Int. Conf. Computer Engineering and Applications_ , 2007, pp. 171–176.
* [54] K. Hori, M. Imai, and T. Ogasawara, “Joint detection for potsherds of broken earthenware,” in _Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149)_ , vol. 2. IEEE, 1999, pp. 440–445.
* [55] H. Liu, S. Cao, and S. Yan, “Automated assembly of shredded pieces from multiple photos,” _IEEE transactions on multimedia_ , vol. 13, no. 5, pp. 1154–1162, 2011.
* [56] E. Justino, L. S. Oliveira, and C. Freitas, “Reconstructing shredded documents through feature matching,” _Forensic science international_ , vol. 160, no. 2-3, pp. 140–147, 2006.
* [57] L. Zhu, Z. Zhou, and D. Hu, “Globally consistent reconstruction of ripped-up documents,” _IEEE Transactions on pattern analysis and machine intelligence_ , vol. 30, no. 1, pp. 1–13, 2007.
* [58] H. Freeman and L. Garder, “Apictorial jigsaw puzzles: The computer solution of a problem in pattern recognition,” _IEEE Transactions on Electronic Computers_ , no. 2, pp. 118–127, 1964.
* [59] D. A. Kosiba, P. M. Devaux, S. Balasubramanian, T. L. Gandhi, and K. Kasturi, “An automatic jigsaw puzzle solver,” in _Proceedings of 12th International Conference on Pattern Recognition_ , vol. 1. IEEE, 1994, pp. 616–618.
* [60] R. W. Webster, P. S. LaFollette, and R. L. Stafford, “Isthmus critical points for solving jigsaw puzzles in computer vision,” _IEEE transactions on systems, man, and cybernetics_ , vol. 21, no. 5, pp. 1271–1278, 1991.
* [61] T. R. Nielsen, P. Drewsen, and K. Hansen, “Solving jigsaw puzzles using image features,” _Pattern Recognition Letters_ , vol. 29, no. 14, pp. 1924–1933, 2008.
* [62] F.-H. Yao and G.-F. Shao, “A shape and image merging technique to solve jigsaw puzzles,” _Pattern Recognition Letters_ , vol. 24, no. 12, pp. 1819–1835, 2003.
* [63] T. S. Cho, S. Avidan, and W. T. Freeman, “A probabilistic image jigsaw puzzle solver,” in _2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition_. IEEE, 2010, pp. 183–190.
* [64] E. D. Demaine and M. L. Demaine, “Jigsaw puzzles, edge matching, and polyomino packing: Connections and complexity,” _Graphs and Combinatorics_ , vol. 23, no. 1, pp. 195–208, 2007.
* [65] A. C. Gallagher, “Jigsaw puzzles with pieces of unknown orientation,” in _2012 IEEE Conference on Computer Vision and Pattern Recognition_. IEEE, 2012, pp. 382–389.
* [66] X. Zheng, X. Lu, and Y. Yuan, “Image jigsaw puzzles with a self-correcting solver,” in _2013 International Conference on Virtual Reality and Visualization_. IEEE, 2013, pp. 112–118.
* [67] R. Yu, C. Russell, and L. Agapito, “Solving jigsaw puzzles with linear programming,” _arXiv preprint arXiv:1511.04472_ , 2015.
* [68] K. Son, J. Hays, and D. B. Cooper, “Solving square jigsaw puzzles with loop constraints,” in _European Conference on Computer Vision_. Springer, 2014, pp. 32–46.
|
# On Linear Time Invariant Systems Analysis via A Single Trajectory: A Linear
Programming Approach
Hassan Abdelraouf1, Fahad Albalawi2 and Eric Feron3
*This work was supported by King Abdullah University of Science and Technology (KAUST).
1Hassan Abdelraouf is a Ph.D. student in Mechanical Engineering, KAUST, Thuwal, KSA<EMAIL_ADDRESS>2Fahad Albalawi is a research scientist in Electrical and Computer Engineering, KAUST, Thuwal, KSA<EMAIL_ADDRESS>3Eric Feron is with the Faculty of Electrical, Computer, and Mechanical Engineering at KAUST, Thuwal, KSA<EMAIL_ADDRESS>
###### Abstract
In this note, a novel methodology is proposed that extracts a number of
analysis results for linear time-invariant (LTI) systems given only a single
trajectory of the considered system. The strength of the proposed technique is
that it provides an automatic and formal way to obtain valuable information
about the controlled system from a single trajectory over a finite period of
time (i.e., the system dynamics is assumed to be unknown). First, we
characterize the stability region of LTI systems given only a single-trajectory
dataset by constructing the associated Lyapunov function of the system. The
Lyapunov function is found by formulating and solving a linear programming (LP)
problem. Then, we extend the same methodology to a variety of essential
analysis results for LTI systems, such as deriving bounds on the output energy,
deriving bounds on the output peak, and deriving the $\mathbf{L}_{2}$ and RMS
gains. To illustrate the efficacy of the proposed data-driven paradigm, a
comparison between the learned LTI system metrics and the true ones is
provided.
## I INTRODUCTION
Data-driven control schemes have gained significant attention in the control
community over the last decade. In fact, such control paradigms have become an
attractive alternative to conventional control algorithms [1, 2]. This paradigm
shift from explicit control techniques to learning-based ones has produced a
massive amount of theoretical and practical research, with the majority of
these control techniques drawing on supervised machine learning and
reinforcement learning [3, 4, 5, 6, 7]. Despite the fact that theoretical
certificates for control performance may be derived in various ways, the
direct relationship between the generated or collected data and the
learning-based control performance is not well established [8]. Although data
quality metrics such as entropy have been utilized substantially to guide
exploration and control strategies [9, 10], they still do not yield direct
conclusions about the influence of data on provable control performance.
As a remedy, [11] utilized Gaussian process priors to generate a Lyapunov-based
measure that can assess the value of data points with respect to a number of
control tasks. The ultimate goal of collecting datasets and assessing their
quality for dynamical systems is to derive stability and performance
certificates that can be used later for different control tasks. As customary,
the stability of an equilibrium point of a given dynamical system can be
studied through a Lyapunov function. Lyapunov functions can be seen as
stability certificates for ordinary differential equations (ODEs) [12, 13]. The
problem of finding a Lyapunov function is generally complex, and it has been
the subject of many research papers in the control community [14]. Analytical
techniques are standard tools for constructing Lyapunov functions. While these
techniques are mathematically sound, they require significant expertise and
manual effort [15]. For LTI systems, which are the focus of this work,
semi-definite programming suffices to construct Lyapunov functions, since
Lyapunov functions for LTI systems are inherently quadratic. However, the
underlying linear dynamics, i.e., the matrices $(A,B,C,D)$, are typically
assumed to be known in order to derive the Lyapunov function via semi-definite
programming. Such an assumption may not hold when first-principles models
cannot be derived because of the complexity of the actual system.
Alternatively, data-driven algorithms can provide an approximate mathematical
model from real system measurements.
Motivated by the aforementioned observations, we introduce a formal and
automatic methodology for LTI systems analysis where the model of the
controlled system is not available and only a single-trajectory dataset is
provided. Specifically, we formulate the problems of finding the Lyapunov
function and the observability gramian of LTI systems as linear programming
(LP) problems, given a dataset that represents a single system trajectory over
a finite period of time. Many problems in control systems analysis, such as
calculating a system's RMS gain and output peak, are formulated as
semi-definite programs (SDPs) and linear matrix inequalities (LMIs) [16], where
full knowledge of the system dynamics is assumed. Using our approach, these
problems can be solved accurately and efficiently without knowing the system
dynamics, given only a single trajectory over a finite period of time. The
majority of data-driven Lyapunov function methods in the literature rely on
sum-of-squares (SOS) techniques [17, 18] or machine learning techniques such as
neural networks [19, 20]. Nevertheless, these methods are model-based and
computationally expensive, requiring large datasets for efficient learning. Our
approach requires only a small number of data points along the trajectory to
learn the Lyapunov function for LTI systems as well as other LTI system
analysis metrics. Finally, we test the proposed frameworks by comparing them
with the true Lyapunov function as well as the true LTI system metrics, and the
superior performance of our algorithms is demonstrated.
## II Preliminaries
### II-A Class of LTI systems
We consider a class of LTI systems that can be written in the following state
description:
$\displaystyle\begin{split}\dot{x}&=Ax+Bu\\\ z&=Cx+Du\end{split}$ (1)
where $x\in\mathbb{R}^{n}$ is the state vector, $u\in\mathbb{R}^{m}$ is the
control input vector, and $z\in\mathbb{R}^{p}$ is the output vector.
### II-B Notations
The 2-norm of a vector $x\in\mathbb{R}^{n}$ is defined as
$\left(\sum_{i=1}^{n}\left|x_{i}\right|^{2}\right)^{1/2}$. We denote the
vector of unique elements of a symmetric matrix $P\in\mathbb{R}^{n\times n}$
by $\text{vec}(P)$, defined as follows:
$\text{vec}(P)=\begin{bmatrix}[P_{11}\quad\dots\quad P_{nn}]^{T}\\\
[P_{12}\quad\dots\quad P_{1n}]^{T}\\\ [P_{23}\quad\dots\quad P_{2n}]^{T}\\\
\vdots\\\ P_{(n-1)n}\end{bmatrix}$ (2)
where $\text{vec}(P)\in\mathbb{R}^{n(n+1)/2}$. For instance, when
$P\in\mathbb{R}^{2\times 2}$ is symmetric, $\text{vec}(P)=[P_{11}\quad
P_{22}\quad P_{12}]^{T}$. In addition, for two vectors
$x,y\in\mathbb{R}^{n}$, we define the $\oplus$ operator as follows:
$x\oplus y=\begin{bmatrix}[x_{1}y_{1}\quad\dots\quad x_{n}y_{n}]^{T}\\\
[x_{1}y_{2}+x_{2}y_{1}\quad\dots\quad x_{1}y_{n}+x_{n}y_{1}]^{T}\\\
[x_{2}y_{3}+x_{3}y_{2}\quad\dots\quad x_{2}y_{n}+x_{n}y_{2}]^{T}\\\ \vdots\\\
x_{n-1}y_{n}+x_{n}y_{n-1}\end{bmatrix}$ (3)
If $z=x\oplus y$, then $z\in\mathbb{R}^{n(n+1)/2}$. For example, if
$x,y\in\mathbb{R}^{2}$, then $x\oplus x=[x_{1}^{2}\quad x_{2}^{2}\quad
2x_{1}x_{2}]^{T}$ and $x\oplus y=[x_{1}y_{1}\quad x_{2}y_{2}\quad
x_{1}y_{2}+x_{2}y_{1}]^{T}$. Using the definitions of the operators $\oplus$
and $\text{vec}(\cdot)$, we can represent the term $x^{T}Py$ as $(x\oplus
y)^{T}\text{vec}(P)$.
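The two operators are straightforward to implement. The following sketch (in
Python with numpy, an assumption on tooling since the paper's examples use
MATLAB CVX) builds $\text{vec}(P)$ and $x\oplus y$ exactly as in (2) and (3)
and verifies the identity $x^{T}Py=(x\oplus y)^{T}\text{vec}(P)$.

```python
import numpy as np

def vec(P):
    # Diagonal entries first, then the off-diagonal entries row by row,
    # matching Eq. (2).
    n = P.shape[0]
    out = list(np.diag(P))
    for i in range(n):
        for j in range(i + 1, n):
            out.append(P[i, j])
    return np.array(out)

def oplus(x, y):
    # The (+) operator of Eq. (3): x.T @ P @ y == oplus(x, y) @ vec(P)
    # for any symmetric P.
    n = len(x)
    out = list(np.asarray(x) * np.asarray(y))
    for i in range(n):
        for j in range(i + 1, n):
            out.append(x[i] * y[j] + x[j] * y[i])
    return np.array(out)

# Quick check of the identity x^T P y = (x (+) y)^T vec(P):
P = np.array([[2.0, 0.5], [0.5, 1.0]])
x, y = np.array([1.0, 2.0]), np.array([3.0, -1.0])
assert np.isclose(x @ P @ y, oplus(x, y) @ vec(P))
```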
Finally, the $\mathbf{L}_{2}$ norm of a signal $\zeta$ is defined as
$\left\lVert\zeta\right\rVert_{2}^{2}=\int_{0}^{\infty}\zeta^{T}\zeta\,\mathrm{d}t$
and its root-mean-square value $\mathbf{RMS}(\zeta)$ is defined as follows:
$\mathbf{RMS}(\zeta)\triangleq\left(\limsup_{T\rightarrow\infty}\frac{1}{T}\int_{0}^{T}\zeta^{T}\zeta\,\mathrm{d}t\right)^{1/2}$ (4)
### II-C Outline
This paper is organized as follows. Section III presents how our approach
learns a Lyapunov function and solves the Lyapunov equation for LTI systems
using data along a single trajectory. Sections IV, V, and VI show how the
proposed approach is used to learn bounds on the output energy, bounds on the
output peak, and the $\mathbf{L}_{2}$ gain of LTI systems, respectively.
## III Data-driven Construction of Lyapunov Functions for LTI Systems
Stability analysis of dynamical systems aims to show that a set of initial
states will stay in the neighborhood of an equilibrium point or converge to
it. Based on the construction of Lyapunov functions, the stability of
equilibrium points, which can form positively invariant sets (i.e., regions of
attraction), can be certified. Lyapunov showed that a linear time-invariant
system is stable (i.e., all trajectories converge to zero asymptotically) if
and only if there exists a quadratic function $V(x)=x^{T}Px$ that is positive,
$V(x)>0$, and whose derivative along the system's trajectories is negative,
$\dot{V}(x)<0$. These two conditions can be formulated as the following linear
matrix inequalities: $P>0$ and $PA+A^{T}P<0$ [16]. The Lyapunov inequality in
$P$ can be solved explicitly by picking $Q=Q^{T}>0$ and then solving the linear
equation $PA+A^{T}P=-Q$ for the matrix $P$ [16]. This method assumes knowledge
of the system dynamics in Eq. (1). For real systems, it may be very difficult
to obtain the system dynamics and very expensive to estimate its unknown
parameters. Hence, we propose a new method to learn a Lyapunov function for LTI
systems given only a single trajectory. To the best of our knowledge, this
approach has not previously been introduced in the control community as a tool
for LTI system analysis. For stable LTI systems, a quadratic Lyapunov function
$x^{T}Px$ is known to exist. Our approach therefore focuses on deriving a
matrix $P$ that makes the Lyapunov function positive and its derivative
negative at all given data points along the given system trajectory.
First, a trajectory of the LTI system is generated from the initial time
$t_{0}=0$ to $T+\text{dt}$ with a time step dt. The states along this
trajectory are $\begin{bmatrix}x(0)&x(1)&\dots&x(N)&x(N+1)\end{bmatrix}$,
where $N=T/\text{dt}$ and $x(i)$ represents the state vector at time
$i\,\text{dt}$. Then, the state derivatives with respect to time, $\dot{x}(i)$,
are calculated numerically using forward finite differences from
$i=0$ to $i=N$. Hence, only the data points $i=0,\dots,N$ for the states $x$
and their derivatives $\dot{x}$ are used to obtain the unknown
parameters of the matrix $P$. The objective is to find the unknown parameters
of $P\in\mathbb{R}^{n\times n}$ that satisfy the Lyapunov conditions at every
point on the given trajectory:
$\begin{gathered}V(x(i))=x(i)^{T}Px(i)>0\\\
\frac{d}{\text{dt}}V(x(i))=2x(i)^{T}P\dot{x}(i)<0\end{gathered}$ (5)
for all $i=0,\dots,N$. Since $P$ is a symmetric matrix, the number of unknown
parameters in $P$ is $n(n+1)/2$. Let $p\in\mathbb{R}^{n(n+1)/2}$ be the vector
of unknown parameters, i.e., $p=\text{vec}(P)$ using (2). Therefore, the
Lyapunov-based conditions of (5) can be formulated as a set of linear
inequalities in $p$: $L_{1}p>\epsilon$ and $L_{2}p<-\epsilon$, where
$L_{1}=\begin{bmatrix}(x(0)\oplus x(0))^{T}\\\ (x(1)\oplus x(1))^{T}\\\
\vdots\\\ (x(N)\oplus x(N))^{T}\end{bmatrix}$ (6)
$L_{2}=2\begin{bmatrix}(x(0)\oplus\dot{x}(0))^{T}\\\
(x(1)\oplus\dot{x}(1))^{T}\\\ \vdots\\\
(x(N)\oplus\dot{x}(N))^{T}\end{bmatrix}$ (7)
such that $L_{1},L_{2}\in\mathbb{R}^{(N+1)\times n(n+1)/2}$. The parameter
$\epsilon$ is a small positive number. These linear inequalities can be solved
by any linear programming solver, such as CVX [21], to get $p$ (i.e., the
vector of unknown parameters of the matrix $P$ that defines the desired
Lyapunov function $V(x)$).
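As a concrete illustration, the sketch below reproduces this construction for
the 2-D numerical example that follows. It uses Python with numpy, scipy, and
cvxpy rather than MATLAB CVX (an assumption on tooling). Since Lyapunov
functions are not unique and this is a pure feasibility problem, the recovered
$P$ need not match Eq. (11) entry for entry, but it should satisfy $P>0$ and
$PA+A^{T}P<0$.

```python
import numpy as np
import cvxpy as cp
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, -3.0]])   # used only to generate the trajectory
dt, N = 0.01, 100
x = np.zeros((N + 2, 2))
x[0] = [2.0, 2.0]
Phi = expm(A * dt)                          # exact one-step propagator
for i in range(N + 1):
    x[i + 1] = Phi @ x[i]
xdot = (x[1:] - x[:-1]) / dt                # forward finite differences, i = 0..N

def oplus(a, b):                            # (a (+) b) @ vec(P) = a.T @ P @ b, n = 2
    return np.array([a[0] * b[0], a[1] * b[1], a[0] * b[1] + a[1] * b[0]])

L1 = np.array([oplus(xi, xi) for xi in x[:N + 1]])                     # Eq. (9)
L2 = 2 * np.array([oplus(xi, xd) for xi, xd in zip(x[:N + 1], xdot)])  # Eq. (10)

eps = 1e-3                                  # margin standing in for strict inequality
p = cp.Variable(3)                          # p = vec(P) = [P11, P22, P12]
cp.Problem(cp.Minimize(0), [L1 @ p >= eps, L2 @ p <= -eps]).solve()

P = np.array([[p.value[0], p.value[2]], [p.value[2], p.value[1]]])
print(P, np.linalg.eigvals(P), np.linalg.eigvals(P @ A + A.T @ P))
```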
Numerical example: Consider the following unforced LTI system $\dot{x}=Ax$,
where
$A=\begin{bmatrix}0&1\\\ -1&-3\end{bmatrix}$ (8)
Our approach utilizes only one generated trajectory and then learns the
quadratic Lyapunov function $x^{T}Px$ from data along this trajectory to prove
the system's stability. The trajectory starts at $x(0)=[2,2]^{T}$ and runs to
the final time $T=1~\text{sec}$ with time step $dt=0.01~\text{sec}$, so
$N=100$. First, the finite difference method is used to obtain $\dot{x}$ along
the trajectory. Then, the two sets of linear inequalities that enforce the
Lyapunov conditions at every point on the trajectory, $L_{1}p>\epsilon$ and
$L_{2}p<-\epsilon$, are solved. Since the state vector $x$ lies in
$\mathbb{R}^{2}$, $L_{1}$ and $L_{2}$ can be structured as follows:
$L_{1}=\begin{bmatrix}x_{1}(0)^{2}&x_{2}(0)^{2}&2x_{1}(0)x_{2}(0)\\\
\vdots&\vdots&\vdots\\\
x_{1}(N)^{2}&x_{2}(N)^{2}&2x_{1}(N)x_{2}(N)\end{bmatrix}$ (9)
$L_{2}=2\begin{bmatrix}x_{1}(0)\dot{x_{1}}(0)&x_{2}(0)\dot{x_{2}}(0)&x_{1}(0)\dot{x_{2}}(0)+x_{2}(0)\dot{x_{1}}(0)\\\
\vdots&\vdots&\vdots\\\
x_{1}(N)\dot{x_{1}}(N)&x_{2}(N)\dot{x_{2}}(N)&x_{1}(N)\dot{x_{2}}(N)+x_{2}(N)\dot{x_{1}}(N)\end{bmatrix}$
(10)
These linear inequalities are solved with the Matlab CVX solver [21] to obtain
the unknown matrix $P$:
$P=\begin{bmatrix}26.4840&5.3151\\\ 5.3151&17.0361\end{bmatrix}$ (11)
Since $P$ is positive definite and $PA+A^{T}P$ is negative definite, the
learned quadratic function $V(x)=x^{T}Px$ is a valid Lyapunov function for the
LTI system $\dot{x}=Ax$. This function was derived from data along a single
trajectory of the considered LTI system. Fig. 1 shows one level set of the
learned Lyapunov function ($x^{T}Px\leq 1000$), which constitutes a forward
invariant set $\Omega_{\rho}$ with $\rho=1000$. Several points on the boundary
of the Lyapunov level set are chosen as initial states for system trajectories.
All the trajectories remain inside the set, which demonstrates that the learned
function is a true Lyapunov function.
Figure 1: Data-driven Lyapunov function with $V(x)\leq 1000$
Lyapunov functions for stable LTI systems are not necessarily unique. As a
result, the user can modify the Lyapunov conditions to improve the numerical
stability of the linear program, e.g., to $L_{1}p>c_{1}l$ and
$L_{2}p<-c_{2}l$, where $c_{1},c_{2}>0$ and $l\in\mathbb{R}^{(N+1)\times 1}$
is:
$l=\begin{bmatrix}\left\lVert x(0)\right\rVert_{2}^{2}\\\ \vdots\\\
\left\lVert x(N)\right\rVert_{2}^{2}\end{bmatrix}$ (12)
To improve the robustness of the learned Lyapunov function $V(x)$, the number
of data points should be increased and probing noise should be added to the
given trajectory.
### III-A Exact Solution of the Lyapunov Equation from Data
The proposed approach can be employed to solve the Lyapunov equation
$PA+A^{T}P=-Q$ exactly by knowing only the states $x$ and their derivatives
with respect to time evaluated at least at $n(n+1)/2$ points on any given
trajectory, where $n$ is the state-space dimension. The solution of the
Lyapunov equation is based on formulating the problem as an LP without the
need to know the system dynamics (i.e., the matrix $A$ for LTI systems). The
matrix $Q\in\mathbb{R}^{n\times n}$ on the right-hand side of the Lyapunov
equation is a user-defined symmetric positive definite matrix. Since
$\frac{d}{\text{dt}}V(x)=2x^{T}P\dot{x}=x^{T}(PA+A^{T}P)x=-x^{T}Qx$ along
trajectories, the set of linear constraints becomes $L_{1}p>\epsilon$ together
with the equality $L_{2}p=-l_{Q}$ (written as the pair $L_{2}p\leq -l_{Q}$ and
$L_{2}p\geq -l_{Q}$). The matrices $L_{1}$ and $L_{2}$ are defined in (6) and
(7), while $l_{Q}$ is defined as follows:
$l_{Q}=\begin{bmatrix}x(0)^{T}Qx(0)\\\ x(1)^{T}Qx(1)\\\ \vdots\\\
x(N)^{T}Qx(N)\end{bmatrix}$ (13)
Numerical example: From the same state trajectory used in the previous
example, we now pick only three data points to solve the Lyapunov equation for
the system of Eq. (8). The three data points are $(x,\dot{x})$ at the time
instants $0$, $0.5$, and $1$ sec. With $Q=I$, the constraints in $p$ (i.e.,
$L_{1}p>\epsilon$ and $L_{2}p=-l_{Q}$) involve the following data:
$\displaystyle L_{1}$ $\displaystyle=\begin{bmatrix}4&4&8\\\
4.9627&0.4626&-3.0304\\\ 2.6757&2.1311&-4.7759\end{bmatrix}$ (14)
$\displaystyle L_{2}$ $\displaystyle=\begin{bmatrix}8&-32&-24\\\
-3.0304&0.2547&0.0910\\\ -4.7759&-8.0107&13.2384\end{bmatrix}$ (15)
$\displaystyle l_{Q}$ $\displaystyle=\begin{bmatrix}8\\\ 5.4253\\\
4.8068\end{bmatrix}$ (16)
The solution of these linear constraints is:
$P=\begin{bmatrix}1.8333&0.5000\\\ 0.5000&0.3333\end{bmatrix}$ (17)
which is exactly the solution of $PA+A^{T}P=-I$, yet derived using only three
data points along a given system trajectory. This result illustrates the
efficacy and accuracy of the proposed approach. The key idea is that $P$ has
$n(n+1)/2$ unknown parameters, so we need only $n(n+1)/2$ equations to solve
for $P$. These equations enforce the second Lyapunov condition at $n(n+1)/2$
points on the given trajectory.
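A minimal sketch of this exact recovery is given below (Python with
numpy/scipy assumed, which is an assumption on tooling). With $Q=I$, the three
equality constraints $L_{2}p=-l_{Q}$ at $t=0,0.5,1$ sec form a square linear
system that pins down $p$ without any linear programming; the result is checked
against scipy's Lyapunov solver.

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-1.0, -3.0]])   # used only to generate the data
Q = np.eye(2)
x0 = np.array([2.0, 2.0])

def oplus(a, b):
    return np.array([a[0] * b[0], a[1] * b[1], a[0] * b[1] + a[1] * b[0]])

rows, rhs = [], []
for t in [0.0, 0.5, 1.0]:
    xt = expm(A * t) @ x0
    xdt = A @ xt                  # in practice: finite differences along the data
    rows.append(2 * oplus(xt, xdt))        # row of L2, cf. Eq. (15)
    rhs.append(-(xt @ Q @ xt))             # row of -l_Q, cf. Eq. (16)

p = np.linalg.solve(np.array(rows), np.array(rhs))    # L2 p = -l_Q
P = np.array([[p[0], p[2]], [p[2], p[1]]])
print(P)                                   # ~ [[1.8333, 0.5], [0.5, 0.3333]]
print(solve_continuous_lyapunov(A.T, -Q))  # ground truth: P A + A^T P = -Q
```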
## IV Data-driven Evaluation of Bounds on the Output Energy
The maximum output energy of an LTI system for a given initial state is:
$\max\left\\{\int_{0}^{\infty}z^{T}z\,dt\mid\dot{x}=Ax,\quad z=Cx\right\\}$ (18)
where the initial state $x(0)$ is given. Suppose that there exists a positive
quadratic function $V(\zeta)=\zeta^{T}P\zeta$ such that:
$P>0\text{ and }\frac{d}{\text{dt}}V(x)\leq-z^{T}z,\quad\text{ for every
}x\text{ and }z$ (19)
By integrating both sides of the second inequality in (19) from $0$ to $T$, we
obtain
$V(x(T))-V(x(0))\leq-\int_{0}^{T}z^{T}z\,dt$ (20)
Given that $V(x(T))\geq 0$, we conclude that $V(x(0))=x(0)^{T}Px(0)$ is an
upper bound on the maximum output energy for the initial condition $x(0)$
[16]. Assume $(A,C)$ are given. Then, the second inequality can be formulated
as the LMI $PA+A^{T}P+C^{T}C\leq 0$. Therefore, we obtain the best upper bound
on the output energy by solving the following SDP in the variable $P$:
$\displaystyle\underset{P}{\text{minimize}}$ $\displaystyle x(0)^{T}Px(0)$
(21) subject to $\displaystyle P>0$ $\displaystyle PA+A^{T}P+C^{T}C\leq 0$
This SDP can be solved analytically or using MATLAB CVX. The solution is
exactly equal to the output energy $x(0)^{T}W_{\mathrm{o}}x(0)$, where
$W_{\mathrm{o}}$ is the observability gramian of the system:
$W_{\mathrm{o}}\triangleq\int_{0}^{\infty}e^{A^{T}t}C^{T}Ce^{At}\,dt$
(22)
Assuming $(A,C)$ are unknown, the problem of finding the observability gramian,
or the maximum output energy given the initial state $x(0)$ of an LTI system,
can be solved given only a single trajectory. The proposed method is based on
finding a quadratic function that satisfies the conditions in (19) at every
point along the given trajectory. Therefore, the conditions in (19) can be
represented numerically as follows:
$\begin{gathered}x(i)^{T}Px(i)>0\\\
2x(i)^{T}P\dot{x}(i)\leq-z(i)^{T}z(i)\end{gathered}$ (23)
for $i=0,\dots,N$. Hence, instead of formulating the problem as the LMI problem
(21) given $(A,C)$, we can reformulate it as the following LP, provided that a
single trajectory over a finite period of time is available:
$\displaystyle\underset{p}{\text{minimize}}$ $\displaystyle(x(0)\oplus
x(0))^{T}p$ (24) subject to $\displaystyle L_{1}p>0$ $\displaystyle
L_{2}p\leq-l_{z}$
where $p=\text{vec}(P)$, $L_{1}$ and $L_{2}$ are defined in (6) and (7), and
$l_{z}$ is defined as:
$l_{z}=\begin{bmatrix}z(0)^{T}z(0)\\\ z(1)^{T}z(1)\\\ \vdots\\\
z(N)^{T}z(N)\end{bmatrix}$ (25)
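The sketch below sets up the LP in (24) for the numerical example that follows
(Python with numpy, scipy, and cvxpy assumed; $A$ and $C$ appear only to
generate the trajectory data). The finite differences introduce a small
discretization error, so the recovered $P$ approximates the observability
gramian.

```python
import numpy as np
import cvxpy as cp
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-4.0, -2.0]])
C = np.array([[0.0, 1.0]])
dt, N = 0.1, 50
x = np.zeros((N + 2, 2))
x[0] = [2.0, 2.0]
Phi = expm(A * dt)
for i in range(N + 1):
    x[i + 1] = Phi @ x[i]
xdot = (x[1:] - x[:-1]) / dt                # numerical state derivatives
z = x[:N + 1] @ C.T                         # measured outputs

def oplus(a, b):
    return np.array([a[0] * b[0], a[1] * b[1], a[0] * b[1] + a[1] * b[0]])

L1 = np.array([oplus(xi, xi) for xi in x[:N + 1]])
L2 = 2 * np.array([oplus(xi, xd) for xi, xd in zip(x[:N + 1], xdot)])
l_z = (z ** 2).sum(axis=1)

p = cp.Variable(3)                          # p = vec(P)
cp.Problem(cp.Minimize(oplus(x[0], x[0]) @ p),
           [L1 @ p >= 1e-9, L2 @ p <= -l_z]).solve()
print(np.array([[p.value[0], p.value[2]], [p.value[2], p.value[1]]]))
```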
Numerical example: Consider the following LTI system $\dot{x}=Ax$, $z=Cx$ with
$A=\begin{bmatrix}0&1\\\ -4&-2\end{bmatrix}\quad\quad
C=\begin{bmatrix}0&1\end{bmatrix}$ (26)
A single trajectory starting from the initial state
$x(0)=\begin{bmatrix}2&2\end{bmatrix}^{T}$ over a finite time period
$T=5~\text{sec}$ with a time step $dt=0.1$ is provided, so $N=50$. The output
$z$ and state vector $x$ are measured at each time step, and the state
derivatives $\dot{x}$ are calculated numerically. Solving the LP problem (24)
yields the following matrix $P$:
$\begin{bmatrix}0.5000&0.1250\\\ 0.1250&0.0625\end{bmatrix}$ (27)
which is exactly the solution obtained by solving the SDP (21), where full
knowledge of the system dynamics is assumed. Therefore, from a single
trajectory of the LTI system, the observability gramian can be derived, and the
maximum output energy for any given initial condition can then be determined as
$x(0)^{T}Px(0)$.
## V Data-driven Evaluation of Bounds on the Output Peak
The problem of deriving a bound on the output peak $\left\lVert
z(t)\right\rVert$ can be formulated as an SDP [16] when the system dynamics is
known and the initial state $x(0)$ is given. Let
$\mathcal{E}=\left\\{\xi\mid\xi^{T}P\xi\leq 1\right\\}$ be an invariant
ellipsoid that contains $x(0)$ for the LTI system $\dot{x}=Ax$, $z=Cx$. Then,
$z(t)^{T}z(t)\leq\max_{\xi\in\mathcal{E}}\xi^{T}C^{T}C\xi$ (28)
As shown in [16], the right-hand side of (28) equals the minimum $\delta$
subject to:
$\begin{bmatrix}P&C^{T}\\\ C&\delta I\end{bmatrix}\geq 0$ (29)
Therefore, given the initial state $x(0)$, the minimum bound on the output
peak is the square root of the optimal value of the following SDP, where $P$
and $\delta$ are the decision variables.
$\displaystyle\underset{P,\delta}{\text{minimize}}$ $\displaystyle\delta$ (30)
subject to $\displaystyle P>0,\quad PA+A^{T}P\leq 0$ $\displaystyle
x(0)^{T}Px(0)\leq 1,\quad(\ref{bound outpeak cons})$
To obtain an optimal solution of the SDP (30), the system dynamics has to be
known a priori. However, this problem can be solved without knowing the system
matrices $(A,C)$ when only a single trajectory is available. Using our approach
from Sections III and IV, the problem can be formulated as an LP given the
initial state and $(x,\dot{x},z)$ at every point along the given system
trajectory. First, by the Schur complement, (29) can be rewritten as follows:
$P-C^{T}C/\delta\geq 0$ (31)
This condition should be satisfied along the given trajectory:
$x(i)^{T}Px(i)-x(i)^{T}C^{T}Cx(i)/\delta\geq 0$. As a result, the constraints
of (30) can be represented as follows:
$\begin{gathered}x(i)^{T}Px(i)>0\\\ 2x(i)^{T}P\dot{x}(i)\leq 0\\\
x(0)^{T}Px(0)\leq 1\\\ x(i)^{T}Px(i)-z(i)^{T}z(i)/\delta\geq 0\end{gathered}$
(32)
The fourth constraint is nonlinear, so we let $\lambda=1/\delta$ and, instead
of minimizing $\delta$, maximize $\lambda$. Therefore, instead of formulating
the problem as the SDP (30), it can be formulated as the following LP given
data points along the system trajectory:
$\displaystyle\underset{p,\lambda}{\text{maximize}}$ $\displaystyle\lambda$
(33) subject to $\displaystyle L_{1}p>0,\quad L_{2}p\leq 0$
$\displaystyle(x(0)\oplus x(0))^{T}p\leq 1$ $\displaystyle L_{1}p-\lambda
l_{z}\geq 0$
where $p=\text{vec}(P)$, and $L_{1}$, $L_{2}$, $l_{z}$ are defined in (6), (7),
and (25), respectively. Given the initial state $x(0)$ and a single
trajectory, the LP problem (33) can be solved to obtain the upper bound on the
output peak, $\sqrt{1/\lambda}$, and the invariant ellipsoid $x^{T}Px\leq 1$,
within which the output of any trajectory will not exceed that upper bound.
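A corresponding sketch for the LP in (33) is shown below (again Python with
numpy, scipy, and cvxpy as an assumption on tooling), using the trajectory of
the numerical example that follows.

```python
import numpy as np
import cvxpy as cp
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-4.0, -2.0]])
C = np.array([[0.0, 1.0]])
dt, N = 0.1, 50
x = np.zeros((N + 2, 2))
x[0] = [3.0, 3.0]
Phi = expm(A * dt)
for i in range(N + 1):
    x[i + 1] = Phi @ x[i]
xdot = (x[1:] - x[:-1]) / dt
z = x[:N + 1] @ C.T

def oplus(a, b):
    return np.array([a[0] * b[0], a[1] * b[1], a[0] * b[1] + a[1] * b[0]])

L1 = np.array([oplus(xi, xi) for xi in x[:N + 1]])
L2 = 2 * np.array([oplus(xi, xd) for xi, xd in zip(x[:N + 1], xdot)])
l_z = (z ** 2).sum(axis=1)

p, lam = cp.Variable(3), cp.Variable(nonneg=True)     # lam = 1/delta
cp.Problem(cp.Maximize(lam),
           [L1 @ p >= 1e-9, L2 @ p <= 0,
            oplus(x[0], x[0]) @ p <= 1,
            L1 @ p - lam * l_z >= 0]).solve()
print("output-peak bound:", np.sqrt(1.0 / lam.value))  # ~3.29 in the example
```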
Numerical example: Consider the same LTI system $\dot{x}=Ax$, $z=Cx$ with $A$
and $C$ defined in (26). We are given the initial state $x(0)=[3,3]^{T}$ and a
trajectory of the LTI system from $t=0~\text{sec}$ to $T=5~\text{sec}$ with a
time step $dt=0.1~\text{sec}$. The LP problem (33) is then solved to obtain the
upper bound on the output peak and its corresponding invariant ellipsoid. The
upper bound is found to be $\sqrt{1/\lambda}=3.2901$ and the invariant
ellipsoid is $x^{T}Px\leq 1$ with
$P=\begin{bmatrix}0.092453&0.001486\\\ 0.001486&0.015684\\\ \end{bmatrix}$
(34)
Fig. 2 shows the invariant ellipsoid $x^{T}Px\leq 1$ that contains the initial
state $x(0)$. As can be seen from the figure, any trajectory that starts on the
boundary of or within the ellipsoid remains inside it.
Figure 2: Data-driven invariant ellipsoid
Fig. 3 shows the upper bound on the output peak and a set of randomly selected
trajectories starting at the boundary of the invariant ellipsoid. The figure
shows that the output $\left\lVert z(t)\right\rVert$ of all trajectories
inside the invariant ellipsoid remains below the obtained upper bound.
Figure 3: Data-driven upper bound for output peak
The data-driven results of the proposed approach are almost identical to the
solution of the SDP (30) when the system dynamics is known. The upper bound on
the output peak derived from the SDP (30) is $3.2915$ and the invariant
ellipsoid is $x^{T}Px\leq 1$ with:
$P_{LMI}=\begin{bmatrix}0.092426&0.001406\\\ 0.001406&0.015873\end{bmatrix}$
(35)
## VI Data-driven Evaluation of the $\mathbf{L}_{2}$ and RMS Gains
For the LTI system (1), the $\mathbf{L}_{2}$ gain is defined as:
$\sup_{\left\lVert u\right\rVert_{2}\neq 0}\frac{\left\lVert
z\right\rVert_{2}}{\left\lVert u\right\rVert_{2}}$ (36)
where the supremum is over all nonzero trajectories of the LTI system starting
from $x(0)=0$. As shown in [16], if there exists a quadratic function
$V(\zeta)=\zeta^{T}P\zeta$ with $P>0$ such that, for all $t\geq 0$,
$\frac{d}{\text{dt}}V(x)+z^{T}z-\gamma^{2}u^{T}u\leq 0\quad\forall x\text{ and
}u\text{ satisfying (\ref{LTI system}) }$ (37)
with $\gamma\geq 0$, then the $\mathbf{L}_{2}$ gain of the LTI system is at
most $\gamma$. To show this, integrating both sides of (37) from $0$ to $T$
with $x(0)=0$ yields
$V(x(T))+\int_{0}^{T}\left(z^{T}z-\gamma^{2}u^{T}u\right)dt\leq 0$ (38)
Since $V(x(T))\geq 0$, we conclude that
$\int_{0}^{T}z^{T}z\,dt\leq\gamma^{2}\int_{0}^{T}u^{T}u\,dt$.
* •
Taking the limit $T\to\infty$ shows that the system's $\mathbf{L}_{2}$ gain is
at most $\gamma$.
* •
Dividing by $T$ and taking the limit $T\to\infty$ gives
$\frac{\mathbf{RMS}(z)}{\mathbf{RMS}(u)}\leq\gamma$; therefore, the system's
$\mathbf{RMS}$ gain is at most $\gamma$.
Given the system dynamics ($A,B,C$), equation (37) can be written as follows:
$x^{T}(PA+A^{T}P+C^{T}C)x+2x^{T}PBu-\gamma^{2}u^{T}u\leq 0$ (39)
that can be formulated as the following LMI:
$\begin{bmatrix}PA+A^{T}P+C^{T}C&PB\\\ B^{T}P&-\gamma^{2}I\end{bmatrix}\leq 0$
(40)
Hence, the smallest upper bound on the LTI system's $\mathbf{L}_{2}$ or
$\mathbf{RMS}$ gain can be obtained by minimizing $\gamma$ over the variables
$P$ and $\gamma$ while satisfying (40) and $P>0$. This method requires full
knowledge of the system dynamics $(A,B,C)$. Our approach can also be used to
solve this problem given only a single trajectory of the LTI system starting
from $x(0)=0$. Instead of solving the SDP based on (40), the problem can be
reformulated as an LP by minimizing $\beta=\gamma^{2}$ while the conditions
$P>0$ and (37) are satisfied at every data point $i$ along the given
trajectory, in the following form:
$\begin{gathered}x(i)^{T}Px(i)>0\\\ 2x(i)^{T}P\dot{x}(i)+z(i)^{T}z(i)-\beta
u(i)^{T}u(i)\leq 0\end{gathered}$ (41)
where $\gamma^{2}$ is replaced by $\beta$ in both the objective and the second
condition to preserve linearity. Therefore, the linear program can be written
as:
$\displaystyle\underset{p,\beta}{\text{minimize}}$ $\displaystyle\beta$ (42)
subject to $\displaystyle L_{1}p>0$ $\displaystyle L_{2}p+l_{z}-\beta
l_{u}\leq 0$
where $p=\text{vec}(P)$, and $L_{1}$, $L_{2}$, and $l_{z}$ are defined by (6),
(7), and (25), respectively, while $l_{u}$ is:
$l_{u}=\begin{bmatrix}u(0)^{T}u(0)\\\ u(1)^{T}u(1)\\\ \vdots\\\
u(N)^{T}u(N)\end{bmatrix}$ (43)
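The following sketch implements the LP (42) for the step-response experiment
described in the numerical example below (Python with numpy and cvxpy
assumed). Note that $x(0)=0$ makes the first row of $L_{1}$ identically zero,
so that row is dropped from the positivity constraint; under forward-Euler
simulation the finite-difference derivatives satisfy $\dot{x}=Ax+Bu$ exactly.

```python
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0], [-1.0, -2.0]])
B = np.array([[1.0], [2.0]])
C = np.array([[4.0, 1.0]])
dt, T = 0.01, 16.0
N = int(T / dt)
x = np.zeros((N + 2, 2))                    # x(0) = 0
u = np.ones((N + 1, 1))                     # unit step input
for i in range(N + 1):
    x[i + 1] = x[i] + dt * (A @ x[i] + B @ u[i])    # forward Euler
xdot = (x[1:] - x[:-1]) / dt
z = x[:N + 1] @ C.T

def oplus(a, b):
    return np.array([a[0] * b[0], a[1] * b[1], a[0] * b[1] + a[1] * b[0]])

L1 = np.array([oplus(xi, xi) for xi in x[1:N + 1]])   # skip x(0) = 0
L2 = 2 * np.array([oplus(xi, xd) for xi, xd in zip(x[:N + 1], xdot)])
l_z = (z ** 2).sum(axis=1)
l_u = (u ** 2).sum(axis=1)

p, beta = cp.Variable(3), cp.Variable(nonneg=True)    # beta = gamma^2
cp.Problem(cp.Minimize(beta),
           [L1 @ p >= 1e-9, L2 @ p + l_z - beta * l_u <= 0]).solve()
print("learned L2 gain:", np.sqrt(beta.value))        # approaches 15 as T grows
```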
Numerical example: Given the following LTI system:
$\displaystyle\dot{x}$ $\displaystyle=\begin{bmatrix}0&1\\\
-1&-2\end{bmatrix}x+\begin{bmatrix}1\\\ 2\end{bmatrix}u$ (44) $\displaystyle
z$ $\displaystyle=\begin{bmatrix}4&1\end{bmatrix}x$
Assuming that $(A,B,C)$ are known, the system's $\mathbf{L}_{2}$ gain can be
obtained by solving the following SDP:
$\displaystyle\underset{P,\gamma}{\text{minimize}}$ $\displaystyle\gamma$ (45)
subject to $\displaystyle P>0,\quad(\ref{rbl LMI})$
The $\mathbf{L}_{2}$ gain of the system is $\mathbf{15}$. Using our approach,
$\gamma$ can be obtained by exciting the system with a unit step input from
$0$ to a final time $T$, then measuring the input-output $(u,z)$ and state
$(x,\dot{x})$ data of a single trajectory. The linear program (42) can then be
solved to get $\gamma=\sqrt{\beta}$. The time step used is $0.01$ sec.
Table I shows that the accuracy of the result improves as the length of the
trajectory increases.
TABLE I: Learned $\mathbf{L}_{2}$ gain
Final time $T$ [seconds] | Learned $\mathbf{L}_{2}$ gain ($\gamma$) [SDP solution = 15]
---|---
2 | 7.36147
4 | 12.17092
6 | 14.30461
8 | 14.86621
10 | 14.97684
12 | 14.99619
14 | 14.9994
16 | 14.99990
Note that the $\mathbf{L}_{2}$ gain of an LTI system is the
$\mathbf{H}_{\infty}$ norm of its transfer function $C(sI-A)^{-1}B$.
Therefore, using our approach, the $\mathbf{H}_{\infty}$ norm of an LTI system
can be learned accurately using only a single trajectory and without knowledge
of its dynamics matrices.
## VII Conclusion and future work
In this work, a number of data-driven techniques were introduced that evaluate
various metrics of LTI systems, such as the upper bounds on the output energy
and output peak as well as the $\mathbf{L}_{2}$ and $\mathbf{RMS}$ gains. In
addition, exact and approximate constructions of the Lyapunov function of LTI
systems were proposed. To demonstrate the proposed methodologies, a number of
numerical examples were given and thoroughly discussed. As for future work, we
are extending the proposed data-driven construction of the Lyapunov function to
non-quadratic Lyapunov functions for hybrid linear time-invariant systems and
specific classes of nonlinear systems. Finally, we are extending the proposed
data-driven methodologies to linear systems subject to uncertainties and noisy
measurements.
## References
* [1] M. P. Deisenroth, D. Fox, and C. E. Rasmussen, “Gaussian processes for data-efficient learning in robotics and control,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 37, pp. 408–423, 2013.
* [2] K. Chua, R. Calandra, R. McAllister, and S. Levine, “Deep reinforcement learning in a handful of trials using probabilistic dynamics models,” arXiv preprint arXiv:1805.12114, 2018.
* [3] A. Aswani, H. Gonzalez, S. S. Sastry, and C. Tomlin, “Provably safe and robust learning-based model predictive control,” Automatica, vol. 49, pp. 1216–1226, 2013.
* [4] F. Berkenkamp, A. P. Schoellig, and A. Krause, “Safe controller optimization for quadrotors with gaussian processes,” in 2016 IEEE International Conference on Robotics and Automation (ICRA), pp. 491–496, 2016.
* [5] T. Beckers, D. Kulić, and S. Hirche, “Stable gaussian process based tracking control of euler-lagrange systems,” Automatica, vol. 103, pp. 390–397, 2019.
* [6] A. Lederer, A. Capone, and S. Hirche, “Parameter optimization for learning-based control of control-affine systems,” in Learning for Dynamics and Control, pp. 465–475, 2020.
* [7] J. F. Fisac, A. K. Akametalu, M. N. Zeilinger, S. Kaynama, J. Gillula, and C. J. Tomlin, “A general safety framework for learning-based control in uncertain robotic systems,” IEEE Transactions on Automatic Control, vol. 64, pp. 2737–2752, 2018.
* [8] A. Lederer, A. Capone, T. Beckers, J. Umlauft, and S. Hirche, “The impact of data on the stability of learning-based control,” in Learning for Dynamics and Control, pp. 623–635, 2021.
* [9] F. Pukelsheim, Optimal design of experiments. SIAM, 2006.
* [10] P. Hennig and C. J. Schuler, “Entropy search for information-efficient global optimization,” Journal of Machine Learning Research, vol. 13, 2012.
* [11] A. Lederer, A. Capone, J. Umlauft, and S. Hirche, “How training data impacts performance in learning-based control,” IEEE Control Systems Letters, vol. 5, pp. 905–910, 2020.
* [12] K. S. Narendra and J. Balakrishnan, “A common Lyapunov function for stable LTI systems with commuting A-matrices,” IEEE Transactions on Automatic Control, vol. 39, pp. 2469–2471, 1994.
* [13] O. Mason and R. Shorten, “On linear copositive Lyapunov functions and the stability of switched positive linear systems,” IEEE Transactions on Automatic Control, vol. 52, pp. 1346–1349, 2007.
* [14] H. Ravanbakhsh and S. Sankaranarayanan, “Learning Lyapunov (potential) functions from counterexamples and demonstrations,” arXiv preprint arXiv:1705.09619, 2017.
* [15] A. Abate, D. Ahmed, M. Giacobbe, and A. Peruffo, “Formal synthesis of Lyapunov neural networks,” IEEE Control Systems Letters, vol. 5, pp. 773–778, 2020.
* [16] S. Boyd, L. El Ghaoui, E. Feron, and V. Balakrishnan, Linear matrix inequalities in system and control theory. SIAM, 1994.
* [17] A. Papachristodoulou and S. Prajna, “On the construction of Lyapunov functions using the sum of squares decomposition,” in Proceedings of the 41st IEEE Conference on Decision and Control, 2002., vol. 3, pp. 3482–3487, IEEE, 2002.
* [18] A. Papachristodoulou and S. Prajna, “A tutorial on sum of squares techniques for systems analysis,” in Proceedings of the 2005, American Control Conference, 2005., pp. 2686–2700, IEEE, 2005.
* [19] Y.-C. Chang, N. Roohi, and S. Gao, “Neural Lyapunov control,” arXiv preprint arXiv:2005.00611, 2020.
* [20] S. Chen, M. Fazlyab, M. Morari, G. J. Pappas, and V. M. Preciado, “Learning Lyapunov functions for hybrid systems,” in Proceedings of the 24th International Conference on Hybrid Systems: Computation and Control, pp. 1–11, 2021.
* [21] M. Grant and S. Boyd, “CVX: Matlab software for disciplined convex programming, version 2.1,” 2014.
# iFlipper: Label Flipping for Individual Fairness
Hantian Zhang∗ (Georgia Institute of Technology)<EMAIL_ADDRESS>, Ki Hyun Tae∗ (KAIST)<EMAIL_ADDRESS>, Jaeyoung Park (KAIST)<EMAIL_ADDRESS>, Xu Chu (Georgia Institute of Technology)<EMAIL_ADDRESS>, and Steven Euijong Whang (KAIST)<EMAIL_ADDRESS>
###### Abstract.
As machine learning becomes prevalent, mitigating any unfairness present in
the training data becomes critical. Among the various notions of fairness,
this paper focuses on the well-known individual fairness where similar
individuals must be treated similarly. While individual fairness can be
improved when training a model (in-processing), we contend that fixing the
data before model training (pre-processing) is a more fundamental solution. In
particular, we show that label flipping is an effective pre-processing
technique for improving individual fairness. Our system iFlipper solves the
optimization problem of minimally flipping labels given a limit on the number
of individual fairness violations, where a violation occurs when two similar
examples in the training data have different labels. We first prove that the
problem is NP-hard. We then propose an approximate linear programming
algorithm and provide theoretical guarantees on how close its result is to the
optimal solution in terms of the number of label flips. We also propose
techniques for making the solution to the linear programming more optimal
without exceeding the violations limit. Experiments on real datasets show that
iFlipper significantly outperforms other pre-processing baselines in terms of
individual fairness and accuracy on unseen test sets. In addition, iFlipper
can be combined with in-processing techniques for even better results.
∗Equal contribution and co-first authors
PVLDB Artifact Availability:
The source code, data, and/or other artifacts have been made available at
https://github.com/khtae8250/iFlipper.
## 1\. Introduction
Machine learning (ML) impacts our everyday lives, with applications that
include recommendation systems (chaney2018algorithmic), job application
screening (dastin2018amazon), and face recognition (buolamwini2018gender).
Unfortunately, ML algorithms are also known to reflect or even reinforce bias
in the training data and thus make unfair decisions (barocas2016big;
whang2021responsible). This issue draws concern from both the public and the
research community, so algorithms have been proposed to mitigate bias in the
data and improve the fairness of ML models.
There are several prominent notions of fairness, and we focus on individual
fairness (dwork2012fairness), which states that similar individuals must be
treated similarly. Suppose that two applicants are applying to the same
school. If the two applicants have similar application materials, then it
makes sense for them to obtain the same or similar outcomes. Likewise, if two
individuals are applying for a loan and have similar financial profiles, it is
fair for them to be accepted or rejected together as well. In addition to
individual fairness, the other prominent fairness notions include group
fairness and causal fairness. Group fairness (zafar2017fairness;
agarwal2018reductions; zhang2021omnifair) focuses on the parity between two
different sensitive groups (e.g., male versus female), and causal fairness
(kusner2017counterfactual) looks at fairness from a causal perspective (e.g.,
does gender affect an outcome?). These are orthogonal notions of fairness
and do not capture the individual fairness we focus on.
How can one improve a model’s individual fairness? There are largely three
possible approaches: fixing the data before model training (pre-processing),
changing the model training procedure itself (in-processing), or updating the
model predictions after training (post-processing). Among them, most of the
literature focuses on in-processing (DBLP:conf/iclr/YurochkinBS20;
yurochkin2021sensei; vargo2021individually) and more recently pre-processing
techniques (pmlr-v28-zemel13; ifair; pfr2019). We contend that pre-processing
is important because biased data is the root cause of unfairness, so fixing
the data is the more fundamental solution rather than having to cope with the
bias during or after model training. The downside of pre-processing is that
one cannot access the model and has to address the bias only using the
training data. Due to this challenge, only a few pre-processing techniques for
individual fairness have been proposed, which we will compare with in the
experiments.
We propose label flipping as a way to mitigate data bias for individual
fairness and assume a binary classification setting where labels have 0 or 1
values. Given a training set of examples, the idea is to change the labels of
some of the examples such that similar examples have the same labels as much
as possible. Which examples are considered similar is application-dependent
and non-trivial; the similarity can be learned from input data (ilvento2020metric;
fairmetric; pmlr-v119-mukherjee20a) or obtained from annotators (e.g.,
humans). This topic is important for individual fairness, but it is not the
focus of this work, where we assume the criterion is given as input. For example, one
may compute the Euclidean distance between two examples and consider them
similar if the distance is within a certain threshold. As the training set
becomes more individually fair, the trained model becomes fairer as well (see
Section 4.2). Our label flipping approach is inspired by the robust training
literature of learning from noisy labels (DBLP:journals/corr/abs-2007-08199)
where the labeling itself may be imperfect. The standard approach for handling
such labels is to ignore or fix them (DBLP:conf/icml/SongK019). In our
setting, we consider any label that reduces individual fairness as biased and
would like to fix it.
We can use a graph representation to illustrate label flipping as shown in
Figure 1. Each node represents a training example, and its color indicates the
original label (black indicates label 1, and white indicates 0). Two similar
nodes (defined in Section 2.1) are connected with an edge, and a violation
occurs when an edge is connecting two nodes with different labels. In Figure
1, there are four nodes where only the nodes 1, 2, and 3 are similar to each
other. Each edge also has an associated weight, which reflects the similarity
of the two nodes. For simplicity, let us assume that all weights are 1. We can
see that there are two “violations” of fairness in this dataset: (1,2) and
(1,3) because there are edges between them, and they have different colors.
After flipping the label of node 1 from 1 to 0, we have no violations.
Figure 1. Label flipping example for individual fairness using a graph
representation. Two similar nodes have an edge, and color indicates the label.
By flipping the label of node 1, all pairs of similar nodes have the same
label, which is considered individually fair.
Our key contribution is in formulating and solving a constrained label
flipping problem for the purpose of individual fairness. Just like in a robust
training setup, we assume labelers can make labeling mistakes in a biased
fashion. Here the effective solution is to debias the labels by flipping them.
One consequence of label flipping is an accuracy-fairness trade-off where the
model’s accuracy may diminish. As an extreme example, if we simply flip all
the labels to be 0, then a trained model that only predicts 0 is certainly
fair, but inaccurate to say the least. Even if we carefully flip the labels,
we still observe a trade-off as we detail in Section 4.2. We thus formulate
the optimization problem where the objective is to minimize the number of
label flips while limiting the total error, which is the total degree of
the individual fairness violations (see Definition 2). The optimization can be
formally stated as a mixed-integer quadratic programming (MIQP)
problem, and we prove that it is NP-hard. We then transform the problem into an
approximate linear programming (LP) problem for efficient computation.
Interestingly, we show that our LP algorithm has theoretical guarantees on how
close its result is to the optimal solution in terms of the number of flips
performed. We then further optimize the solution given by the LP algorithm to
reduce the number of flips while ensuring the total error does not exceed the
given limit.
We call our approach iFlipper and empirically show how its label flipping
indeed results in individually-fair models and significantly outperforms other
pre-processing baselines on real datasets. In particular, the state-of-the-art
baselines (ifair; pfr2019) use representation learning to place similar
examples closer in a feature space. We show that iFlipper has better accuracy-
fairness trade-off curves and is also significantly more efficient. We also
compare iFlipper with baselines (e.g., greedy approach and k-means clustering)
for solving the optimization problem and show that iFlipper is superior.
Finally, we demonstrate how iFlipper can be integrated with in-processing
techniques (DBLP:conf/iclr/YurochkinBS20) for even better results. We release
our code as a community resource (github).
The rest of the paper is organized as follows.
* $\bullet$
We formulate the label flipping optimization as an MIQP problem and prove that
it is NP-hard (Section 2).
* $\bullet$
We propose iFlipper, which solves this problem by converting it into an
approximate LP algorithm that has theoretical guarantees and present an
additional optimization to further improve the solution given by the LP solver
(Section 3).
* $\bullet$
We evaluate iFlipper on real datasets and show how it outperforms other pre-
processing baselines and can be integrated with in-processing techniques for
better results (Section 4).
## 2\. Problem Definition
### 2.1. Preliminaries
We focus on a _binary classification_ setting, and assume a training dataset
$\mathsf{D}$ = $\\{(x_{i},y_{i})\\}_{i=1}^{n}$ where $x_{i}$ is an example,
and $y_{i}$ is its label having a value of 0 or 1. A binary classifier $h$ can
be trained on $\mathsf{D}$, and its prediction on a test example $x$ is
$h(x)$.
Individual Fairness (dwork2012fairness) states that similar individuals must
be treated similarly. The criterion for determining whether two examples are
similar depends on the application, and we assume it is given as input, though
research on how to automatically discover such similarity measures exists
(ilvento2020metric; fairmetric; pmlr-v119-mukherjee20a). For example, for each
$x$, we may consider all the examples within a certain distance to be similar.
Alternatively, we may consider the top-$k$ nearest examples to be similar. In
our work, we assume as input a given similarity measure, which can be applied
to the training set to produce a similarity matrix
$W\in\mathbb{R}^{n\times n}$, where $W_{ij}>0$ if and only if $x_{i}$ and
$x_{j}$ are deemed similar according to the measure. If $W_{ij}>0$, we set it
to be the similarity of $x_{i}$ and $x_{j}$, although any other positive value
can be used as well. In order to satisfy individual fairness, we introduce the
notion of individual fairness violation as follows.
###### Definition 1.
Individual Fairness Violation. Given a similarity matrix $W$ on a training
set, an individual fairness violation occurs when $W_{ij}>0$, but $y_{i}\neq
y_{j}$. The magnitude of the violation is defined to be $W_{ij}$.
###### Definition 2.
Total Error. Given a similarity matrix $W$, the total error is the sum of all
$W_{ij}$ values where $y_{i}\neq y_{j}$.
Our goal is to reduce the total error to be within a maximum allowed amount.
###### Definition 3.
m-Individually Fair Dataset. Given a similarity matrix $W$, a dataset is
considered m-individually fair if the total error is at most $m$.
A smaller $m$ naturally translates to an m-individually fair dataset that is
likely to result in a more individually fair model on the unseen test set as
we demonstrate in Section 4.2.
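For concreteness, a minimal sketch of these definitions is given below (Python
with numpy assumed; the double sum over ordered pairs follows the MIQP
formulation of Section 2.2, so each symmetric pair is counted once per
direction).

```python
import numpy as np

def total_error(W, y):
    # Definition 2: sum of W_ij over all (i, j) with y_i != y_j,
    # following the double-sum convention of the MIQP in Section 2.2.
    diff = (y[:, None] != y[None, :]).astype(float)
    return float((W * diff).sum())

def is_m_individually_fair(W, y, m):
    # Definition 3: the dataset is m-individually fair iff total error <= m.
    return total_error(W, y) <= m

# Tiny check mirroring Figure 1 (nodes 1-3 mutually similar, unit weights):
W = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 0], [0, 0, 0, 0]], float)
y = np.array([1, 0, 0, 0])
print(total_error(W, y))   # 4.0: two violations, each counted in both directions
```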
##### Incomplete Similarity Matrix
If we only have partial knowledge of the similarities, iFlipper can be
naturally adapted to utilize or fix the partial data. The straightforward
solution is to treat the partial similarity as if it is complete information
where the pairs with unknown similarities are not included in our optimization
formulation. Another solution is to “extrapolate” the partial similarity
matrix to reconstruct the full similarity matrix. For example, one can learn
similarity functions using the partial data. In this case, iFlipper’s
performance will largely depend on how accurate the learned similarity
function is.
Evaluating the Fairness of a Trained Model $h$. To evaluate the final
individual fairness of a trained model $h$, we compute the widely-used
consistency score (pfr2019) of model predictions on an unseen test set as
defined below. Here $W_{ij}$ is the similarity matrix on the unseen test set.
$\text{\em{Consistency Score}}=1-\frac{\sum_{i}\sum_{j}|h(x_{i})-h(x_{j})|\times W_{ij}}{\sum_{i}\sum_{j}W_{ij}}$
Intuitively, if the model is trained on an individually-fair dataset, the
predictions on the test set between similar individuals tend to be the same,
so the consistency score on the test set increases. In the extreme case, a
consistency score of 1 indicates that all similar pairs of test examples get
the same predictions from the model.
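A direct implementation of this score is a one-liner (sketch in Python with
numpy assumed; $W$ is the test-set similarity matrix and preds holds the
model's 0/1 predictions):

```python
import numpy as np

def consistency_score(W, preds):
    # 1 - (weighted prediction disagreement over similar pairs) / (total weight)
    diff = np.abs(preds[:, None] - preds[None, :])
    return 1.0 - float((diff * W).sum() / W.sum())
```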
Locality Sensitive Hashing. We construct a similarity matrix $W$ using locality
sensitive hashing (LSH) (Andoni2015falconn) instead of materializing it
entirely and thus avoid performing $O(n^{2})$ comparisons. We exploit the fact
that $W$ is sparse, because most examples are dissimilar. This approach is
similar to blocking techniques in entity resolution
(DBLP:books/daglib/0030287).
### 2.2. Label Flipping Optimization Problem
We define the label flipping optimization problem for individual fairness.
Given a training dataset $\mathsf{D}$ and a limit $m$ of total error allowed,
our goal is to flip the minimum number of labels in $\mathsf{D}$ such that it
becomes _m-individually fair_. This statement can be formalized as a mixed-
integer quadratic programming (MIQP) problem:
(1)
$\begin{split}\text{(MIQP)}\quad\min\quad&\sum_{i=1}^{n}{(y_{i}-y_{i}^{\prime})^{2}}\\\
\text{s.t.}\quad&\sum_{i=1}^{n}\sum_{j=1}^{n}W_{ij}(y_{i}-y_{j})^{2}\leq m\\\
&y_{i}\in\text{\\{0,1\\}},\forall i\end{split}$
where $y_{i}$ indicates an output label, and $y_{i}^{\prime}$ is its original
value. Intuitively, we count the number of flips ($(y_{i}-y_{i}^{\prime})^{2}$
= 1) while ensuring that the total error ($(y_{i}-y_{j})^{2}$ = 1 and
$W_{ij}>0$) is within the limit $m$. We call a solution feasible if it
satisfies the error constraint in Equation 1, but may or may not be optimal.
MIQP is an NP-hard problem in general, and we prove that our specific instance
of the MIQP problem is also NP-hard.
###### Theorem 4.
The MIQP problem in Equation 1 is NP-hard.
The full proof for Theorem 4 can be found in our technical report
(iflippertr). The key idea is to reduce the well-known NP-hard at most
$k$-cardinality $s$-$t$ cut problem (DBLP:journals/dam/BruglieriME04) to our
MIQP problem.
### 2.3. Baseline Algorithms
Our key contribution is to propose algorithms for the label flipping problem
that not only scales to large datasets, but also provides feasible and high-
quality solutions. We present three naïve algorithms that are efficient, but
may fail to produce feasible solutions. In comparison, our method in Section 3
always produces feasible solutions with theoretical guarantees.
Greedy Algorithm. The greedy algorithm repeatedly flips labels of nodes that
reduce the total error the most. The algorithm terminates if the total error
is $m$ or smaller, or if we cannot reduce the error anymore. For example,
suppose we start from the graph in Figure 1 where there are initially two
violations (recall we assume that $W_{ij}=1$ for all edges for simplicity) and
set $m$ = 1. We need to determine which label leads to the largest reduction
in error when flipped. We can see that flipping node 1 will reduce the total
error by 2 while flipping the other nodes does not change the total error.
Hence, the greedy algorithm flips node 1 to reduce the total error to 0 and
then terminates. While the greedy approach seems to work for Figure 1, in
general it does not always find an optimal result and may fail to produce a
feasible solution even if one exists. Consider the example in Figure 2 where
there is one violation between nodes 2 and 3. Again, we assume that $W_{ij}=1$
for all edges for simplicity. If we set $m$ = 0, the feasible solution is to
flip nodes 1 and 2 or flip nodes 3 and 4 together. Unfortunately, the greedy
algorithm immediately terminates because it only flips one node at a time, and
no single flip can reduce the error.
Figure 2. A graph where the greedy algorithm fails to find a feasible solution
with zero violations.
The computational complexity of the greedy algorithm is $O(n^{2})$ because we
flip at most $O(n)$ labels, and each time a label is flipped, we update the
total error of each neighbor of the node.
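As a concrete illustration, here is a minimal Python sketch of this greedy baseline on a dense similarity matrix with a zero diagonal; it is our reconstruction from the description above, not the authors' code.

```python
import numpy as np

def greedy_flip(y, W, m):
    """Greedy baseline: flip the label whose flip reduces the total error
    the most, until the error is <= m or no single flip helps.
    Assumes W is symmetric with a zero diagonal."""
    y = y.astype(float).copy()
    total_error = lambda v: np.sum(W * np.abs(v[:, None] - v[None, :])) / 2.0
    while total_error(y) > m:
        # error reduction from flipping node i, restricted to its neighbors
        gains = np.array([np.sum(W[i] * (np.abs(y[i] - y) -
                                         np.abs((1 - y[i]) - y)))
                          for i in range(len(y))])
        best = int(np.argmax(gains))
        if gains[best] <= 0:
            break  # stuck: no single flip reduces the error (e.g., Figure 2)
        y[best] = 1 - y[best]
    return y
```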
Gradient Based Algorithm. The second naïve approach is to use a Lagrange
multiplier to move the error constraint into the objective function and solve
the problem via gradient descent. This approach is common in machine learning,
and the following equation shows the problem setup:
(2) $\begin{split}\text{(Lagrangian)}\quad\min\quad&\sum_{i=1}^{n}{(y_{i}-y_{i}^{\prime})^{2}}+\lambda\sum_{i=1}^{n}\sum_{j=1}^{n}W_{ij}(y_{i}-y_{j})^{2}\\\ \text{s.t.}\quad&y_{i}\in\text{[0,1]},\forall i\end{split}$
where $\lambda$ is a hyperparameter that controls the trade-off between
fairness and accuracy. A higher $\lambda$ favors better fairness. Although
this gradient based algorithm is efficient, it shares the same weakness as the
greedy algorithm: it may get stuck in a local minimum and thus fail to find a
feasible solution.
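A minimal sketch of this baseline via projected gradient descent on the relaxed labels, using the graph Laplacian to express the penalty term; the step size and iteration count are arbitrary illustrative choices.

```python
import numpy as np

def lagrangian_descent(y_orig, W, lam=1.0, lr=0.05, steps=500):
    """Minimize sum_i (y_i - y'_i)^2 + lam * sum_ij W_ij (y_i - y_j)^2
    over y in [0,1]^n by projected gradient descent (W symmetric, dense)."""
    y = y_orig.astype(float).copy()
    L = np.diag(W.sum(axis=1)) - W        # sum_ij W_ij (y_i - y_j)^2 = 2 y^T L y
    for _ in range(steps):
        grad = 2.0 * (y - y_orig) + 4.0 * lam * (L @ y)
        y = np.clip(y - lr * grad, 0.0, 1.0)  # project back onto the box [0,1]
    return y
```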
Clustering Based Algorithm. The third approach is to cluster the examples and
assign the same label to all examples in each cluster. We use k-means as a
representative clustering algorithm. If $k=1$,
then all examples will have the same label, and we can simply choose the
majority label of the examples. If $k$ is set correctly, and the clustering is
perfect, then only the similar examples will be clustered together. However,
this approach is unsuitable for our problem, which is to flip the minimum
number of labels to have at most $m$ total error. Reducing the total error to
0 is not the primary goal as there may be a large degradation in accuracy. To
cluster for our purposes, one would have to adjust $k$ to find the clusters
with just the right amount of total error, but this fine-tuning is difficult
as we show in Section 4.4.
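For completeness, a sketch of this clustering baseline with scikit-learn's k-means; the majority-label rule and the tie-breaking toward label 1 are our illustrative choices.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_relabel(X, y, k):
    """Cluster the examples and give every member of a cluster the
    cluster's majority label (ties broken toward label 1)."""
    clusters = KMeans(n_clusters=k, n_init=10).fit_predict(X)
    y_new = y.copy()
    for c in range(k):
        members = clusters == c
        y_new[members] = int(y[members].mean() >= 0.5)
    return y_new
```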
## 3\. iFlipper
We explain how iFlipper converts the MIQP problem into an approximate linear
program (LP) problem and produces a feasible solution with theoretical
guarantees. The conversion is done in two steps: from MIQP to an equivalent
integer linear program (ILP) using linear constraints (Section 3.1) and from
the ILP problem to an approximate LP problem (Section 3.2). We present
iFlipper’s algorithm to solve the approximate LP problem and show why its
result always leads to a feasible solution (Section 3.3). We then provide
theoretical guarantees on how far the result is from the optimal solution of
the ILP problem, and propose a reverse-greedy approach to further optimize the
solution (Section 3.4). Finally, we present iFlipper’s overall workflow with a
complexity analysis (Section 3.5).
### 3.1. From MIQP to Equivalent ILP
We convert the MIQP problem to an equivalent ILP problem. We first replace the
squared terms in Equation 1 to absolute terms:
(3) $\begin{split}\quad\min\quad&\sum_{i=1}^{n}{|y_{i}-y_{i}^{\prime}|}\\\
\text{s.t.}\quad&\sum_{i=1}^{n}\sum_{j=1}^{n}W_{ij}|y_{i}-y_{j}|\leq m\\\
&y_{i}\in\text{\\{0,1\\}},\forall i\end{split}$
The resulting formulation is equivalent to the original MIQP because $y_{i}$
and $y^{\prime}_{i}$ have binary values.
To convert this problem into an equivalent ILP problem, we replace each
absolute term with an XOR expression, which can be expressed as four linear
constraints. For two binary variables $x$ and $y$, one can easily see that
$|x-y|=x\text{ XOR }y$. Also, each expression $z=x\text{ XOR }y$ is known to
be equivalent to the following four linear constraints: $z\leq x+y$, $z\geq
y-x$, $z\geq x-y$, and $z\leq 2-x-y$. For example, if $x=1$ and $y=1$, $z$ is
bounded by $z\leq 2-x-y$ and is thus 0. For other combinations of $x$ and $y$,
$z$ will be bounded to its correct XOR value as well. We thus introduce the
auxiliary variables $z_{i}$ and $z_{ij}$ to represent $y_{i}\text{ XOR
}y^{\prime}_{i}$ and $y_{i}\text{ XOR }y_{j}$, respectively, and obtain the
following integer linear programming (ILP) problem:
(4) $\begin{split}\text{(ILP)}\quad\min\quad&\sum_{i=1}^{n}{z_{i}}\\\
\text{s.t.}\quad&\sum_{i=1}^{n}\sum_{j=1}^{n}W_{ij}z_{ij}\leq m\\\
&y_{i},z_{i}\in\text{\\{0,1\\}},\forall i,\quad
z_{ij}\in\text{\\{0,1\\}},\forall i,j\\\ &z_{i}-y_{i}\leq y^{\prime}_{i},\quad
z_{i}-y_{i}\geq-y^{\prime}_{i}\\\ &z_{i}+y_{i}\geq y^{\prime}_{i},\quad
z_{i}+y_{i}\leq 2-y^{\prime}_{i}\\\ &z_{ij}-y_{i}-y_{j}\leq 0,\quad
z_{ij}-y_{i}+y_{j}\geq 0\\\ &z_{ij}+y_{i}-y_{j}\geq 0,\quad
z_{ij}+y_{i}+y_{j}\leq 2\\\ \end{split}$
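A quick brute-force check of the XOR linearization used above, enumerating all binary inputs, confirms that the four constraints admit exactly the XOR value:

```python
from itertools import product

# For binary x, y the constraints z <= x+y, z >= y-x, z >= x-y, z <= 2-x-y
# should leave z = x XOR y as the only feasible binary value.
for x, y in product((0, 1), repeat=2):
    feasible = [z for z in (0, 1)
                if z <= x + y and z >= y - x and z >= x - y and z <= 2 - x - y]
    assert feasible == [x ^ y], (x, y, feasible)
print("XOR linearization verified for all binary inputs")
```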
Since the ILP problem is equivalent to the MIQP problem, we know it is NP-hard
as well. In the next section, we convert the ILP problem to an approximate
linear program (LP), which can be solved efficiently.
### 3.2. From ILP to Approximate LP
We now relax the ILP problem to an approximate LP problem. At this point, one
may ask why we do not use existing solvers like CPLEX (cplex2009v12), MOSEK
(mosek), and Gurobi (gurobi), which also use LP relaxations. All these
approaches repeatedly solve LP problems using branch-and-bound methods and
have time complexities that are exponential in the number of variables in the
worst case. In Section 4.4, we demonstrate how slow the ILP solvers are. Our key
contribution is finding a near-exact solution to the original ILP problem by
solving the LP problem _only once_.
We first replace the integer constraints in Equation 4 with range constraints
to obtain the following LP problem:
(5) $\begin{split}\text{(LP)}\quad\min\quad&\sum_{i=1}^{n}{z_{i}}\\\
\text{s.t.}\quad&\sum_{i=1}^{n}\sum_{j=1}^{n}W_{ij}z_{ij}\leq m\\\
&y_{i},z_{i}\in\text{[0,1]},\forall i,\quad z_{ij}\in\text{[0,1]},\forall
i,j\\\ &z_{i}-y_{i}\leq y^{\prime}_{i},\quad z_{i}-y_{i}\geq-y^{\prime}_{i}\\\
&z_{i}+y_{i}\geq y^{\prime}_{i},\quad z_{i}+y_{i}\leq 2-y^{\prime}_{i}\\\
&z_{ij}-y_{i}-y_{j}\leq 0,\quad z_{ij}-y_{i}+y_{j}\geq 0\\\
&z_{ij}+y_{i}-y_{j}\geq 0,\quad z_{ij}+y_{i}+y_{j}\leq 2\\\ \end{split}$
Now the violation between two points becomes the product of the weight of the
edge $W_{ij}$ and the absolute difference between the two labels
$z_{ij}=|y_{i}-y_{j}|$. Although this problem can be solved more efficiently
than the ILP problem, its result cannot be used as is because of the
continuous values. Hence, we next convert the result into a binary solution
that is close to the optimal solution of the ILP problem. A naïve method is to
round the continuous values to their nearest integers, but this does not
guarantee a feasible solution.
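To make the relaxation concrete, the following sketch builds Equation 5 for a tiny hypothetical graph (unit weights, a path of four nodes) and solves it with SciPy's linprog; the graph, initial labels, and limit $m$ are made-up illustrative inputs, not data from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Variable layout: [y_1..y_n, z_1..z_n, z_e for each edge e].
y_orig = np.array([1, 0, 0, 1], dtype=float)   # hypothetical initial labels y'
edges = [(0, 1), (1, 2), (2, 3)]               # hypothetical similar pairs, W = 1
m = 1.0                                        # total error limit
n, n_e = len(y_orig), len(edges)
nv = 2 * n + n_e

c = np.zeros(nv)
c[n:2 * n] = 1.0                               # objective: sum_i z_i

A, b = [], []
r = np.zeros(nv); r[2 * n:] = 1.0              # error budget (all W_e = 1 here)
A.append(r); b.append(m)

for i in range(n):                             # |y_i - y'_i| linearization
    for sy, sz, rhs in [(-1, 1, y_orig[i]), (1, -1, y_orig[i]),
                        (-1, -1, -y_orig[i]), (1, 1, 2 - y_orig[i])]:
        r = np.zeros(nv); r[i] = sy; r[n + i] = sz
        A.append(r); b.append(rhs)

for e, (i, j) in enumerate(edges):             # |y_i - y_j| linearization
    for si, sj, se, rhs in [(-1, -1, 1, 0), (1, -1, -1, 0),
                            (-1, 1, -1, 0), (1, 1, 1, 2)]:
        r = np.zeros(nv); r[i] = si; r[j] = sj; r[2 * n + e] = se
        A.append(r); b.append(rhs)

res = linprog(c, A_ub=np.vstack(A), b_ub=np.array(b), bounds=[(0, 1)] * nv)
print("relaxed labels y:", np.round(res.x[:n], 3))
print("objective (flip mass):", round(res.fun, 3))
```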
###### Example 0.
Suppose we start from the graph in Figure 3a. Just like previous examples, we
again assume that $W_{ij}$ is 1 for simplicity. The total error in Figure 3a
is 4, and we set $m=2$. Solving the LP problem in Equation 5 can produce the
solution in Figure 3b where the labels of nodes 1 and 4 are flipped from 1 to
0.5 (gray color). One can intuitively see that the labels are minimally
flipped while the total error is exactly 2. However, rounding the labels
changes the two 0.5’s back to 1’s as shown in Figure 3c, resulting in an
infeasible solution, because the total error becomes 4 again.
Figure 3. Performing simple roundings on the optimal solution’s values of the
LP problem may result in an infeasible solution for the ILP problem.
### 3.3. Constructing a Feasible Solution
We now explain how to construct a feasible integer solution for the ILP
problem (Equation 4) from an optimal solution of the LP problem (Equation 5).
We first prove a surprising result that any optimal solution $y^{*}$ can be
converted to another optimal solution $\check{y}$ where all of its variables
have one of three values: 0, 1, or some $\alpha\in(0,1)$; that is, a single
value strictly between 0 and 1. Next, we utilize this
property and propose an adaptive rounding algorithm that converts $\check{y}$
into a feasible solution whose values are only 0’s and 1’s.
Optimal Solution Conversion. We prove the following lemma for converting an
optimal solution to our desired form.
###### Lemma 2.
Given an optimal solution $y^{*}$ of Equation 5, we can always convert $y^{*}$
to a new solution $\check{y}$ where $\check{y}$ is also optimal and only has
0, 1, or a unique $\alpha\in(0,1)$ as its values.
###### Proof.
Suppose there are at least two distinct values $\alpha$ and $\beta$ that are
not 0 or 1 in $y^{*}$. We can combine all the $y^{*}$ nodes whose value is
$\alpha$ to one node while maintaining the same total error because an
$\alpha$-$\alpha$ edge has no violation (see Figure 4 for an illustration). We
call this collection an $\alpha$-cluster. Likewise, we can generate a
$\beta$-cluster.
Figure 4. Continuing from Figure 3, recall that the optimal LP solution has
nodes 1 and 4 with 0.5 labels. We can combine nodes 1 and 4 to construct a
0.5-cluster connected to nodes 2 and 3 with four edges as shown on the right.
The total error is still 2.
Without loss of generality, let us assume that $0<\alpha<\beta<1$, and
the sum of the pairwise node similarities between the two clusters is $E$.
Suppose an $\alpha$-cluster and a $\beta$-cluster have $A_{0}$ and $B_{0}$
nodes whose initial labels are 0, respectively, and $A_{1}$ and $B_{1}$ nodes
whose initial values are 1, respectively. Let $N_{\alpha}=A_{0}-A_{1}$ and
$N_{\beta}=B_{0}-B_{1}$. Now suppose there are $U$ nodes connected to the
$\alpha$-cluster by an edge $W_{a_{i}\alpha}$ and $V$ nodes connected to the
$\beta$-cluster by an edge $W_{b_{i}\beta}$, whose values satisfy
$\begin{split}&0\leq a_{1}\leq...\leq a_{k}<\alpha<a_{k+1}\leq...\leq
a_{U}\leq 1\text{ and }\\\ &0\leq b_{1}\leq...\leq
b_{l}<\beta<b_{l+1}\leq...\leq b_{V}\leq 1.\end{split}$
Note that there is no connected node with a value of $\alpha$ or $\beta$ by
construction. Let
$S_{\alpha}=\sum_{i=1}^{k}W_{a_{i}\alpha}-\sum_{i=k+1}^{U}W_{a_{i}\alpha}$ and
$S_{\beta}=\sum_{i=1}^{l}W_{b_{i}\beta}-\sum_{i=l+1}^{V}W_{b_{i}\beta}$. Let
us also add the following nodes for convenience: $a_{0}=0$, $a_{U+1}=1$,
$b_{0}=0$, and $b_{V+1}=1$. We can then reduce at least one unique non-0/1
value in $y^{*}$, by either changing $\alpha$ to $a_{k}$ or $a_{k+1}$, or
changing $\beta$ to $b_{l}$ or $b_{l+1}$, or changing both $\alpha$ and
$\beta$ to the same value, while keeping the solution optimal using the
following Lemmas 0.1 and 0.2. The key idea is that at least one of the
conversions allows the solution to have the same number of label flippings and
the same or even smaller amount of total error, keeping the solution optimal.
The conversion process only depends on $N_{\alpha}$, $S_{\alpha}$,
$N_{\beta}$, $S_{\beta}$, and $E$. The complete proof and detailed conditions
for each case are given in our technical report (iflippertr).
###### Lemma 0.1.
For an $\alpha$-cluster with $N_{\alpha}=0$ in the optimal solution, we can
always convert $\alpha$ to either $a_{k}$ or $a_{k+1}$ while maintaining an
optimal solution. As a result, we can reduce exactly one non-0/1 value in the
optimal solution.
###### Lemma 0.2.
For an $\alpha$-cluster with $N_{\alpha}\neq 0$ and a $\beta$-cluster with
$N_{\beta}\neq 0$ in the optimal solution, we can always convert
$(\alpha,\beta)$ to one of
$(a_{k},\beta+\frac{N_{\alpha}}{N_{\beta}}(a_{k}-\alpha))$,
$(a_{k+1},\beta-\frac{N_{\alpha}}{N_{\beta}}(a_{k+1}-\alpha))$,
$(\alpha+\frac{N_{\beta}}{N_{\alpha}}(\beta-b_{l}),b_{l})$,
$(\alpha-\frac{N_{\beta}}{N_{\alpha}}(b_{l+1}-\beta),b_{l+1})$, or
$(\frac{\alpha N_{\alpha}+\beta N_{\beta}}{N_{\alpha}+N_{\beta}},\frac{\alpha
N_{\alpha}+\beta N_{\beta}}{N_{\alpha}+N_{\beta}})$, while maintaining an
optimal solution. As a result, we can reduce at least one non-0/1 value in the
optimal solution.
We can repeat this adjustment until the solution $\check{y}$ has at most one
unique value $\alpha$ that is neither 0 nor 1. ∎
Based on Lemma 2, Algorithm 1 shows how to convert the optimal LP solution
$y^{*}$ to another optimal solution $\check{y}$ whose values are in {0,
$\alpha$, 1}. We first obtain all the unique non-0/1 values in $y^{*}$ and for
each value $v$ construct a $v$-cluster. We then pick a cluster (say the
$\alpha$-cluster) with $N_{\alpha}=0$ and change $\alpha$ using the
TransformWithOneCluster function, which implements the converting process of
Lemma 0.1 (see our technical report (iflippertr) for details) until there are
no more such clusters (Lines 4–7). Among the rest of the clusters, we choose
two (say the $\alpha$-cluster and $\beta$-cluster) and transform them using
the TransformWithTwoClusters function, which implements the combining process
of Lemma 0.2 described in our technical report (iflippertr) (Lines 8–11). We
repeat these steps until there is at most one non-0/1 value in $\check{y}$.
The details of the TransformWithOneCluster and TransformWithTwoClusters
functions are shown in Algorithm 2.
Input: Optimal solution $y^{*}$ for the LP problem, similarity matrix $W$
Output: Transformed optimal solution $\check{y}$ where each value is one of {0, $\alpha$, 1}
1 $\check{y}$ = $y^{*}$;
// There are $T$ unique non-0/1 values in $y^{*}$
2 $T$ = GetNumClusters($y^{*}$);
3 while $T>1$ do
4     $ZeroClusterList$ = GetZeroClusters($\check{y}$);
5     for $\alpha$ in $ZeroClusterList$ do
          // Details are in Algorithm 2
6         TransformWithOneCluster($\check{y}$, $\alpha$, $W$);
7         $T$ = $T$ - 1;
8     if $T>1$ then
9         $\alpha$, $\beta$ = GetNonZeroTwoClusters($\check{y}$);
          // Details are in Algorithm 2
10        TransformWithTwoClusters($\check{y}$, $\alpha$, $\beta$, $W$);
11        $T$ = GetNumClusters($\check{y}$);
// There is at most one unique non-0/1 value in $\check{y}$
12 return $\check{y}$;
Algorithm 1 iFlipper’s LP solution conversion algorithm.
Function TransformWithOneCluster($y$, $\alpha$, $W$):
    // Replace all $\alpha$ with $\alpha_{new}$ to reduce one unique non-0/1 value
    $a_{k}$, $a_{k+1}$, $S_{\alpha}$ = GetOneClusterInfo($y$, $\alpha$, $W$);
    if $S_{\alpha}\leq 0$ then
        $\alpha\leftarrow{}a_{k+1}$;
    else
        $\alpha\leftarrow{}a_{k}$;
    // Now $\alpha\in\\{a_{k},a_{k+1}\\}$

Function TransformWithTwoClusters($y$, $\alpha$, $\beta$, $W$):
    // Replace all ($\alpha$, $\beta$) with ($\alpha_{new}$, $\beta_{new}$) using one of the five cases in Lemma 0.2 to reduce at least one unique non-0/1 value
    $a_{k}$, $a_{k+1}$, $N_{\alpha}$, $S_{\alpha}$, $b_{l}$, $b_{l+1}$, $N_{\beta}$, $S_{\beta}$, $E$ = GetTwoClustersInfo($y$, $\alpha$, $\beta$, $W$);
    $X$, $Y$ = $\frac{N_{\alpha}}{N_{\beta}}$, $\frac{(S_{\alpha}-E)N_{\beta}-(S_{\beta}+E)N_{\alpha}}{N_{\beta}}$;
    // Details are in our technical report (iflippertr); below, $\epsilon_{\alpha}$ denotes the amount by which $\alpha$ was just changed
    if $X<0$, $Y\leq 0$ then
        if $X+1\leq 0$ then
            $\alpha\leftarrow{}\alpha+min(a_{k+1}-\alpha,-\frac{N_{\beta}}{N_{\alpha}}(b_{l+1}-\beta))$;
        else
            $\alpha\leftarrow{}\alpha+min(a_{k+1}-\alpha,-\frac{N_{\beta}}{N_{\alpha}}(b_{l+1}-\beta),\frac{N_{\beta}(\beta-\alpha)}{N_{\alpha}+N_{\beta}})$;
        $\beta\leftarrow{}\beta-\frac{N_{\alpha}}{N_{\beta}}\epsilon_{\alpha}$;
    else if $X<0$, $Y>0$ then
        if $X+1\geq 0$ then
            $\alpha\leftarrow{}\alpha-min(\alpha-a_{k},-\frac{N_{\beta}}{N_{\alpha}}(\beta-b_{l}))$;
        else
            $\alpha\leftarrow{}\alpha-min(\alpha-a_{k},-\frac{N_{\beta}}{N_{\alpha}}(\beta-b_{l}),-\frac{N_{\beta}(\beta-\alpha)}{N_{\alpha}+N_{\beta}})$;
        $\beta\leftarrow{}\beta+\frac{N_{\alpha}}{N_{\beta}}\epsilon_{\alpha}$;
    else if $X>0$, $Y\leq 0$ then
        $\alpha\leftarrow{}\alpha+min(a_{k+1}-\alpha,\frac{N_{\beta}}{N_{\alpha}}(\beta-b_{l}),\frac{N_{\beta}(\beta-\alpha)}{N_{\alpha}+N_{\beta}})$;
        $\beta\leftarrow{}\beta-\frac{N_{\alpha}}{N_{\beta}}\epsilon_{\alpha}$;
    else if $X>0$, $Y>0$ then
        $\alpha\leftarrow{}\alpha-min(\alpha-a_{k},\frac{N_{\beta}}{N_{\alpha}}(b_{l+1}-\beta))$;
        $\beta\leftarrow{}\beta+\frac{N_{\alpha}}{N_{\beta}}\epsilon_{\alpha}$;
    // Now $(\alpha,\beta)$ is one of the cases in Lemma 0.2
Algorithm 2 Transformation functions in Algorithm 1.
###### Example 0.
To illustrate Algorithm 1, consider Figure 3a again where $m=2$. Say we obtain
an optimal solution from an LP solver as shown in Figure 5a where nodes 1 and
4 have the labels $\alpha=0.1$ and $\beta=0.9$, respectively, while the other
nodes have 0 labels. This solution uses one flip and has a total error of 2.
We first construct an $\alpha$-cluster and a $\beta$-cluster where each
cluster only contains a single node, and there is no edge between them as
shown in Figure 5b. We then convert ($\alpha$, $\beta$) using the
TransformWithTwoClusters. As a result, we set ($\alpha$, $\beta$) to (0.5,
0.5) and reduce one non-0/1 unique value in the solution. The solution now has
only one non-0/1 value (i.e., 0.5) and still has a total error of 2 while
using one flip.
Figure 5. Cluster construction example for Algorithm 1.
Adaptive Rounding into a Binary Solution. We now convert $\check{y}$ into a
feasible integer solution with only 0’s and 1’s. If we use simple rounding to
change $\alpha$ to 0 or 1 as in the naïve method in Section 3.2, the resulting
integer solution may not be feasible like the example in Figure 3c.
Fortunately, the fact that we only have three possible values allows us to
propose an adaptive rounding algorithm (Algorithm 3) that guarantees
feasibility. We first denote $M_{ab}$ as the sum of similarity values of
($a$-$b$) label pairs with edges. For example, $M_{0\alpha}$ is the sum of
similarities of similar node pairs whose labels are 0 and $\alpha$,
respectively. Then the key idea is to round $\alpha$ to 1 if the (1-$\alpha$)
label pairs have a higher similarity sum than that of the (0-$\alpha$) label
pairs (i.e., $M_{0\alpha}\leq M_{1\alpha}$), and round $\alpha$ to 0
otherwise. For example, consider Figure 3b again. Here $\alpha=0.5$,
$M_{0\alpha}=4$, and $M_{1\alpha}=0$. We have $M_{0\alpha}>M_{1\alpha}$, so
rounding $\alpha$ to 0 will result in a feasible solution with zero
violations.
Input: Transformed optimal solution $\check{y}$ for the LP problem whose values are in {0, $\alpha$, 1}, similarity matrix $W$
Output: Feasible binary integer solution $\overline{y}$
// Get the total similarity values of ($0$-$\alpha$) and ($1$-$\alpha$) label pairs
$M_{0\alpha},M_{1\alpha}$ = GetSimilaritySumOfPairs($\check{y}$, $W$);
if $M_{0\alpha}\leq M_{1\alpha}$ then
    $\overline{\alpha}\leftarrow{}1$;
else
    $\overline{\alpha}\leftarrow{}0$;
// Replace $\alpha$ with $\overline{\alpha}\in\\{0,1\\}$
$\overline{y}$ = ApplyRounding($\check{y},\overline{\alpha}$);
return $\overline{y}$;
Algorithm 3 iFlipper’s adaptive rounding algorithm.
We prove the correctness of Algorithm 3 in Lemma 4.
###### Lemma 4.
Given an optimal solution $\check{y}$ of Equation 5 where each value is one of
{0, $\alpha$, 1}, applying adaptive rounding (Algorithm 3) always results in a
feasible solution.
###### Proof.
The total error in $\check{y}$ can be expressed as a function of $\alpha$:
(6) $\begin{split}\text{Total
Error}&=\sum_{i=1}^{n}\sum_{j=1}^{n}W_{ij}\check{z}_{ij}=\sum_{i=1}^{n}\sum_{j=1}^{n}W_{ij}|\check{y}_{i}-\check{y}_{j}|\\\
&=f(\alpha)=M_{01}+\alpha M_{0\alpha}+(1-\alpha)M_{1\alpha}\leq m\\\
\end{split}$
The final inequality $\leq m$ holds because $\check{y}$, being an optimal
(hence feasible) solution, satisfies the error constraint in Equation 5.
We now want to round $\alpha$ to either 0 or 1 while satisfying the error
constraint. If $M_{0\alpha}\leq M_{1\alpha}$, we have
$f(1)=M_{01}+M_{0\alpha}\leq M_{01}+\alpha
M_{0\alpha}+(1-\alpha)M_{1\alpha}\leq m$ $(\because 0<\alpha<1)$, so setting
$\alpha=1$ guarantees a feasible solution. On the other hand, if
$M_{1\alpha}<M_{0\alpha}$, we have $f(0)=M_{01}+M_{1\alpha}<M_{01}+\alpha
M_{0\alpha}+(1-\alpha)M_{1\alpha}\leq m$ $(\because 0<\alpha<1)$, so setting
$\alpha=0$ ensures feasibility. ∎
Lemma 4 shows that the rounding choice of $\alpha$ depends on how
$M_{0\alpha}$ and $M_{1\alpha}$ compare instead of $\alpha$’s value itself.
This result explains why simple rounding as in the naïve method does not
necessarily lead to a feasible solution.
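A minimal Python sketch of this rounding rule, given the transformed solution, an edge list, and the unique non-0/1 value $\alpha$; the data layout and helper are our illustrative choices, not the authors' implementation.

```python
import numpy as np

def adaptive_round(y_check, edges, weights, alpha):
    """Algorithm 3 sketch: round alpha to 1 if M_0a <= M_1a, else to 0,
    which Lemma 4 shows always yields a feasible binary solution."""
    m0a = m1a = 0.0
    for (i, j), w in zip(edges, weights):
        pair = {y_check[i], y_check[j]}
        if pair == {0.0, alpha}:
            m0a += w          # similarity mass of (0-alpha) pairs
        elif pair == {1.0, alpha}:
            m1a += w          # similarity mass of (1-alpha) pairs
    rounded = 1.0 if m0a <= m1a else 0.0
    return np.where(np.isclose(y_check, alpha), rounded, y_check)
```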
In the next section, we analyze how accurate the rounded solution is compared
to the optimal solution of the ILP problem and investigate whether it can be
further improved.
### 3.4. Optimality and Improvement
We analyze the optimality of Algorithm 3 and propose a technique called
reverse greedy for further optimizing it without exceeding the total error
limit.
Optimality Analysis. We prove the theoretical bounds of Algorithm 3 in Lemma
5:
###### Lemma 5.
For a given optimal solution $\check{y}$ of Equation 5, let us denote $N_{ab}$
as the number of nodes in $\check{y}$ where the label $a$ was flipped to $b$.
Then the objective value of the output $\overline{y}$ from Algorithm 3 is at
most $C$ more than the optimal objective value of the original ILP problem
where the value of $C$ depends on $\alpha$:
(7)
$\begin{split}&\alpha=1\rightarrow{}C=(1-\alpha)(N_{0\alpha}-N_{1\alpha})\\\
&\alpha=0\rightarrow{}C=\alpha(N_{1\alpha}-N_{0\alpha})\\\ \end{split}$
###### Proof.
Since the optimal solution of the ILP problem is always one of the feasible
solutions of the LP problem, the optimal objective values ($OPT$) of the two
optimization problems should satisfy:
(8) $\begin{split}OPT_{LP}\leq OPT_{ILP}\\\ \end{split}$
The objective value of $\check{y}$ can then be expressed as follows:
(9)
$\begin{split}OPT_{LP}(\check{y})&=\sum_{i=1}^{n}\check{z}_{i}=(N_{01}+N_{10})+\alpha
N_{0\alpha}+(1-\alpha)N_{1\alpha}\\\ \end{split}$
We first consider the case where $\overline{\alpha}=1$. Here the objective value of
$\overline{y}$ is $OPT_{LP}(\overline{y})=(N_{01}+N_{10})+N_{0\alpha}$. We can
then derive the bound on $OPT_{LP}(\overline{y})$ from Equation 8 and Equation
9 as follows:
(10) $\begin{split}&OPT_{LP}(\check{y})-OPT_{LP}(\overline{y})\leq
OPT_{ILP}-OPT_{LP}(\overline{y})\\\ &(1-\alpha)(N_{1\alpha}-N_{0\alpha})\leq
OPT_{ILP}-OPT_{LP}(\overline{y})\\\ &OPT_{LP}(\overline{y})\leq
OPT_{ILP}+(1-\alpha)(N_{0\alpha}-N_{1\alpha})\\\ \end{split}$
We note that $(1-\alpha)(N_{0\alpha}-N_{1\alpha})$ is always non-negative
because $\check{y}$ is an optimal solution, i.e., $OPT_{LP}(\check{y})\leq
OPT_{LP}(\overline{y})$.
Similarly, if $\overline{\alpha}=0$, we can obtain the objective value bound of
$\alpha(N_{1\alpha}-N_{0\alpha})$. ∎
Reverse Greedy Algorithm. Lemma 5 shows that the bound may sometimes be loose
because it depends on $\alpha$ and the number of nodes whose labels are
flipped to $\alpha$. There is a possibility that Algorithm 3 flips nodes
unnecessarily, which results in a total error smaller than $m$ (see Section
4.5 for empirical results). Hence, we propose an algorithm called reverse
greedy (Algorithm 4), which unflips flipped labels as long as the total error
does not exceed $m$. As the name suggests, we run the greedy algorithm in
Section 2.3 in the reverse direction where, in each iteration, we unflip nodes
that increase the error the least to recover the original labels as much as
possible. Thus, reverse greedy can only improve the optimality of Algorithm 3.
Input: Feasible binary integer solution $\overline{y}$, total error limit $m$
Output: Improved binary integer solution $\tilde{y}$
$\tilde{y}$ = $\overline{y}$;
$flipped\\_labels$ = GetFlippedLabel($\tilde{y}$);
$total\\_error$ = GetTotalError($\tilde{y}$);
while $total\\_error\leq m$ do
    // Unflip flipped labels of the nodes so that the total error is increased the least
    $\tilde{y}$ = FlipLabelLeastViolation($\tilde{y}$, $flipped\\_labels$);
    $flipped\\_labels$ = GetFlippedLabel($\tilde{y}$);
    $total\\_error$ = GetTotalError($\tilde{y}$);
return $\tilde{y}$;
Algorithm 4 iFlipper’s reverse greedy algorithm.
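The following Python sketch mirrors Algorithm 4, with one defensive difference: each candidate unflip is checked against the limit $m$ before it is committed, so the returned solution is guaranteed to stay feasible. It is our reconstruction, not the authors' code.

```python
import numpy as np

def reverse_greedy(y, y_orig, W, m):
    """Unflip flipped labels, always choosing the unflip that increases
    the total error the least, while keeping the error <= m."""
    y = y.copy()
    total_error = lambda v: np.sum(W * np.abs(v[:, None] - v[None, :])) / 2.0
    while True:
        best, best_err = None, None
        for i in np.flatnonzero(y != y_orig):    # candidate unflips
            cand = y.copy(); cand[i] = y_orig[i]
            err = total_error(cand)
            if err <= m and (best_err is None or err < best_err):
                best, best_err = i, err
        if best is None:
            return y                             # no feasible unflip remains
        y[best] = y_orig[best]
```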
###### Example 0.
Continuing from our example using Figure 3 and $m=2$, suppose we run Algorithm
3 on Figure 3b and round the 0.5 labels to 0 to obtain the feasible solution
in Figure 6a. A total of two flips are performed compared to the initial graph
Figure 3a. However, there exists an optimal solution like Figure 6b where only
one flip is necessary. Running Algorithm 4 will unflip nodes 1 or 4, and
unflipping node 1 produces this solution.
Figure 6. The reverse greedy algorithm can further optimize a rounded solution
from Algorithm 3.
The time complexity of reverse greedy is the same as the greedy algorithm,
i.e., $O(n^{2})$, but in practice requires fewer iterations than greedy
because the total error is relatively close to $m$.
### 3.5. Putting Everything Together
We now describe the overall workflow of iFlipper. For a given ILP problem, we
first convert it into an approximate LP problem. We then solve the approximate
LP problem and convert its optimal solution to another optimal solution that
only has values in {0, $\alpha$, 1} using Algorithm 1. We then apply adaptive
rounding using Algorithm 3. Finally, we possibly improve the rounded solution
by unflipping labels using Algorithm 4.
Complexity Analysis. iFlipper consists of four parts: solving the approximate
LP problem, the converting algorithm, the adaptive rounding algorithm, and the
reverse-greedy algorithm. The interior-points algorithm for solving LP has a
time complexity of $O(n^{2+\varepsilon})$, where $\varepsilon$ is smaller than
1. The converting algorithm has a time complexity of $O(n^{2})$: in the worst
case, we may need to combine clusters $n$ times, and for each combining
process, we need to check all neighbors of a particular cluster, which is up
to $n$ points. For the adaptive rounding algorithm, the time complexity is
also $O(n^{2})$, because we need to count the number of $(1,\alpha)$ edges and
$(0,\alpha)$ edges, which is $O(n^{2})$ in the worst case. The reverse greedy
algorithm, as analyzed in Section 3.4, is $O(n^{2})$. Overall, the complexity
is bounded by the LP solver, which is $O(n^{2+\varepsilon})$. This result may
look worse than the greedy algorithm, but iFlipper runs much faster than
greedy in practice. The reason is that we use efficient LP solvers, and the
empirical running times of the adaptive rounding and reverse greedy algorithms
are much faster than their theoretical worst case complexities. We show the
empirical runtimes in Section 4.
## 4\. Experiments
In this section, we evaluate iFlipper on real datasets and address the
following key questions:
* •
Is there an accuracy-fairness trade-off for iFlipper?
* •
How does iFlipper compare with various baselines in terms of model accuracy,
individual fairness, and efficiency?
* •
How useful is each component of iFlipper?
* •
Does the LP solution conversion run correctly?
* •
Can iFlipper be integrated with in-processing techniques?
We implement iFlipper in Python and use Scikit-learn (scikit-learn) and
PyTorch (NEURIPS2019_9015) libraries for model training. For performing
optimization, we use two software packages: CPLEX (cplex2009v12) and MOSEK
(mosek). We run all experiments on Intel Xeon Gold 5115 CPUs.
### 4.1. Setting
##### Datasets
We experiment on the following three popular datasets in the fairness
literature. We randomly split each dataset into training, test, and validation
sets. We then use the same feature pre-processing as in IBM’s AI Fairness 360
toolkit (DBLP:journals/corr/abs-1810-01943) where examples with missing values
are discarded. See Table 1 for more details.
* •
COMPAS (machinebias): Contains 6,167 people examples and is used to predict
criminal recidivism rates. The features include gender, race, and criminal
history.
* •
AdultCensus (DBLP:conf/kdd/Kohavi96): Contains 45,222 people examples and is
used to predict whether an individual’s income exceeds $50K per year. The
features include gender, age, and occupation.
* •
Credit (Dua:2019): Contains 1,000 examples and is used to predict an
individual’s credit risk. The features contain race, credit amount, and credit
history.
* •
Synthetic (pmlr-v54-zafar17a): We generate 200,000 samples with two attributes
($x_{1}$, $x_{2}$) and a binary label $y$ following a process introduced in
(pmlr-v54-zafar17a). Tuples with a positive label ($x_{1}$, $x_{2}$, $y=1$)
are drawn from a Gaussian distribution and, tuples with a negative label
($x_{1}$, $x_{2}$, $y=0$) are drawn from another Gaussian distribution.
We also considered two additional fairness datasets: Bank (bankdataset) and
LSAC (lsacdataset). However, these datasets exhibit fewer individual fairness
issues as shown in our technical report (iflippertr). Hence, we do not include
them in our main experiments because the fairness improvements are not as
significant.
##### Similarity Matrices
We consider two similarity matrices based on the squared Euclidean distance
(i.e., $d(x_{i},x_{j})=\|x_{i}-x_{j}\|^{2}$). When measuring distances, we follow
previous approaches (ifair; pmlr-v28-zemel13) and exclude sensitive attributes
like gender and race. The rationale is that individuals who have similar
information other than the sensitive attribute values must be treated
similarly. We quantify the similarity as $W_{ij}=e^{-\theta d(x_{i},x_{j})}$
where $\theta>0$ is a scale parameter. Our design follows a related work on
individual fairness that also uses similarity matrices (petersen2021post). See
Table 1 for the configurations.
* •
kNN-based: Considers $(x_{i},x_{j})$ as a similar pair if $x_{i}$ is one of
$x_{j}$’s nearest neighbors or vice versa:
$W_{ij}=\begin{cases}e^{-\theta d(x_{i},x_{j})}&\text{if $x_{i}\in
NN_{k}(x_{j})$ or $x_{j}\in NN_{k}(x_{i})$}\\\ 0&\text{otherwise}\end{cases}$
where $NN_{k}(x_{j})$ denotes the set of $k$ nearest examples of $x_{j}$ in a
Euclidean space.
* •
threshold-based: Considers $(x_{i},x_{j})$ as a similar pair if their distance
is smaller than a threshold $T$:
$W_{ij}=\begin{cases}e^{-\theta d(x_{i},x_{j})}&\text{if $d(x_{i},x_{j})\leq
T$}\\\ 0&\text{otherwise}\end{cases}$
We use the FALCONN LSH library (Andoni2015falconn) to efficiently construct
both similarity matrices. For all datasets, we set the LSH hyperparameters
(number of hash functions and number of hash tables) to achieve a success
probability of 0.98 on a validation set where 98% of similar pairs of examples
are in the same buckets.
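A sketch of the kNN-based construction using exact nearest neighbors from scikit-learn; the paper instead uses FALCONN's LSH to avoid the exact search, so treat this as a small-data illustration.

```python
import numpy as np
from scipy.sparse import lil_matrix
from sklearn.neighbors import NearestNeighbors

def knn_similarity(X, k=20, theta=0.05):
    """W_ij = exp(-theta * ||x_i - x_j||^2) if x_i is among x_j's k nearest
    neighbors or vice versa (the "or" rule => symmetrize), else 0."""
    n = X.shape[0]
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)  # +1 to skip self
    dist, idx = nn.kneighbors(X)
    W = lil_matrix((n, n))
    for i in range(n):
        for d, j in zip(dist[i, 1:], idx[i, 1:]):
            w = np.exp(-theta * d ** 2)              # d(.) is the squared L2
            W[i, j] = W[j, i] = w
    return W.tocsr()
```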
Dataset | # Train | # Test | # Valid | Sen. Attr | $k$ | $T$ | $\theta$
---|---|---|---|---|---|---|---
COMPAS | 3,700 | 1,850 | 617 | gender | 20 | 3 | 0.05
AdultCensus | 27,133 | 13,566 | 4,523 | gender | 20 | 3 | 0.1
Credit | 700 | 201 | 199 | age | 20 | 7 | 0.05
Synthetic | 100,000 | 60,000 | 40,000 | N/A (no sensitive attribute to exclude when measuring distances) | 20 | 3 | 0.05
Table 1. Settings for the four datasets.
##### Measures
We evaluate a model’s accuracy by computing the proportion of correctly
predicted data points. We evaluate individual fairness using the consistency score
(pfr2019) defined in Section 2.1, which measures the consistency of
predictions on the test set between similar individuals. For both scores,
higher values are better. We report mean values over five trials for all
measures.
##### Hyperparameter Tuning
iFlipper provides a total error limit hyperparameter $m$, which impacts the
trade-off between the model accuracy and fairness where a lower $m$ results in
better fairness, but worse accuracy (see Section 4.3.1). Given a desired
consistency score, we can use binary search to adjust $m$ by comparing the
consistency score of the trained model with the desired score.
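A sketch of this search; train_and_score is a hypothetical callback that repairs the training data with limit $m$, trains a model, and returns its validation consistency score.

```python
def tune_error_limit(m_low, m_high, target, train_and_score,
                     tol=0.005, max_iter=10):
    """Binary-search the total error limit m toward a desired validation
    consistency score (monotonicity assumed: smaller m => fewer
    violations in the repaired data => higher consistency)."""
    m = (m_low + m_high) / 2.0
    for _ in range(max_iter):
        m = (m_low + m_high) / 2.0
        score = train_and_score(m)
        if abs(score - target) <= tol:
            break
        if score < target:
            m_high = m   # tighten the limit to raise consistency
        else:
            m_low = m    # consistency is high enough; allow more error
    return m
```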
##### Methods Compared
We compare iFlipper with the following existing pre-processing algorithms for
individual fairness:
* •
LFR (pmlr-v28-zemel13): A fair representation learning algorithm that
optimizes between accuracy, group fairness, and individual fairness in terms
of data reconstruction loss.
* •
iFair (ifair): A fair representation learning algorithm that optimizes
accuracy and individual fairness. Compared to LFR, iFair does not optimize
group fairness, which enables the classifier to have better individual
fairness. If two data points are close to each other in the input space, iFair
aims to map them close to each other in the feature space as well.
* •
PFR (pfr2019): A fair representation learning algorithm that tries to learn an
embedding of the fairness graph. The fairness graph used in PFR is similar to
the one used in iFlipper. Similar to iFair, PFR optimizes individual fairness
while preserving the original data by mapping similar individuals to nearby
points in the learned representation. Among the three baselines, only PFR is
able to support a general similarity matrix. PFR uses an efficient trace
optimization algorithm, which can learn representations much faster than
iFair.
For all baselines, there are multiple hyperparameters that can balance the
competing objectives. We start with the same sets of hyperparameters as
described in their papers, and tune them to provide the best results.
##### Optimization Solutions Compared.
We compare iFlipper with the following optimization baselines described in
Sections 2.3 and 3.1.
* •
Greedy: Flips labels that reduce the total error the most.
* •
Gradient: Solves an unconstrained optimization problem.
* •
KMeans: Applies k-means clustering and, for each cluster, makes its examples
take the majority label.
* •
ILP Solver: Solves the ILP problem exactly using CPLEX (cplex2009v12), which
is a state-of-the-art solver.
For Gradient and KMeans, we tune $\lambda$ and $k$, respectively, to satisfy
the total error limit. We use CPLEX to solve ILP problems as it is faster than
MOSEK. When solving LP problems in iFlipper, however, CPLEX turns out to be
much slower than MOSEK due to its long construction time, so we use MOSEK
instead.
An interesting behavior of MOSEK is that empirically all of its optimal
solutions only contain values in $\\{0,\alpha,1\\}$ without having to run our
Algorithm 1. Note that this behavior is limited to the label flipping problems
we are solving and does not necessarily occur for other LP problems. We
suspect that MOSEK’s algorithm effectively contains the functionality of
Algorithm 1. We also make the same observations when using CPLEX for our LP
problems. We find these results encouraging because it means that Algorithm 1
is not a significant runtime overhead and can even be tightly integrated with
the LP solving code. A detailed analysis of MOSEK’s code is an interesting
future work. For any other solver that does not have this behavior, one can
always run Algorithm 1 on its solutions. Hence to make sure our Algorithm 1 is
correct, we separately evaluate it in Section 4.6.
##### Model Setup
We use logistic regression (LR), random forest (RF), and neural network (NN)
models for our experiments. We tune the model hyperparameters (e.g., learning
rate and maximum depth of the tree) such that the model has the highest
validation accuracy. Our goal is to pre-process the training data to make
these models train fairly.
### 4.2. Accuracy and Fairness Trade-off
Figure 7 shows the trade-off between fairness and accuracy for the COMPAS
dataset when running iFlipper with different allowed amounts of total error.
The x-axis is accuracy, and the y-axis is the consistency score. There are two
curves, which correspond to two different similarity matrices: kNN-based and
threshold-based. Each data point on the curve has two other results: (total
error after repairing, number of flips). As we flip more labels, there is less
total error in the repaired dataset, and the resulting model has a higher
consistency score on the test set, which means it has better individual
fairness. However, the accuracy drops as we flip more labels. The trade-off
curves of the other datasets are similar and can be found in our technical
report (iflippertr).
Figure 7. Accuracy-fairness trade-off curves for iFlipper. For each data
point, we also show the total error after repairing and the number of flips in
parentheses.
(a) COMPAS-kNN
(b) AdultCensus-kNN
(c) Credit-kNN
(d) COMPAS-threshold
(e) AdultCensus-threshold
(f) Credit-threshold
Figure 8. Accuracy-fairness trade-offs of logistic regression on the three
datasets using the two similarity matrices. In addition to the four methods
LFR, iFair, PFR, and iFlipper, we add the result of model training without any
pre-processing and call it “Original.” As a result, only iFlipper shows a
clear accuracy and fairness trade-off.
### 4.3. Performance and Runtime Results
We now compare iFlipper with other baselines using the three datasets and two
similarity matrices.
#### 4.3.1. Accuracy and Fairness Comparison
We first compare the accuracy and fairness results of iFlipper with the other
methods. Figure 8 shows the trade-off results with logistic regression where
the x-axis is the accuracy, and the y-axis is the consistency score on a test
set. Original is where we train a model on the original data with no data pre-
processing. For LFR, iFair, and PFR, we employ 20, 45, and 15 different
hyperparameter sets, respectively, as described in their papers. For iFlipper,
we use 10–11 different $m$ values to control accuracy and fairness. For a
clear visualization, we omit some points in the bottom left region of the
graph for methods other than iFlipper. As a result, iFlipper significantly
outperforms the baselines in terms of both test accuracy and consistency score
for all cases, which demonstrates that our label flipping approach is
effective in improving individual fairness while maintaining high accuracy. We
also observe that iFlipper performs better than the three baselines on both
similarity matrices. This result is important because the similarity matrix
may vary depending on the application. The results for a random forest and
neural network are in our technical report (iflippertr), and the key trends
are similar to Figure 8 where iFlipper shows a better trade-off than the
baselines. We note that the advantage of iFlipper compared to the baselines is
relatively less clear on the Credit dataset mainly because the data is small,
making the variance of the results higher.
Dataset | Adult-kNN | | | Adult-threshold | | |
---|---|---|---|---|---|---
Model (C. Score) | LR (0.94) | RF (0.90) | NN (0.90) | LR (0.95) | RF (0.83) | NN (0.95)
LFR | .815 | .827 | .806 | .821 | .827 | .806
iFair | .797 | .820 | .805 | .798 | .820 | .806
PFR | .826 | .808 | .844 | .826 | .808 | .829
iFlipper | .844 | .851 | .848 | .845 | .850 | .845
Table 2. Accuracy comparison of methods with similar individual fairness on
different models. In order to align the individual fairness, for each model we
make the methods have similar consistency scores on the validation data (C.
Score) by tuning their hyperparameters.
Table 2 shows a more detailed comparison with the baselines using the
AdultCensus dataset. For each ML model, we fix the consistency score (denoted
as “C. Score”) to be some value as shown below each model name in Table 2 in
parentheses. Then for each method, we report the test accuracy when we tune
its hyperparameter such that the trained model has the highest validation
accuracy while achieving the fixed consistency score on the validation set. As
a result, iFlipper consistently shows the highest test accuracy compared to
the other baselines for all models.
In comparison to the other baselines, iFlipper’s accuracy-fairness trade-off
is also much cleaner as shown in Figure 8 where the other baselines have noisy
trade-offs and even overlapping results for different hyperparameter sets. The
benefit of having a clean trade-off is that iFlipper can easily balance
between accuracy and fairness to obtain a desired fairness by varying the
total error limit $m$. In Figure 8, we also observe that iFlipper is flexible
and can obtain a wide range of test consistency scores, all the way up to
nearly 1.0.
Datasets | Sim. Matrix | LFR | iFair | PFR | iFlipper
---|---|---|---|---|---
COMPAS | kNN | 786 | 29,930 | 0.41 | 5.70
COMPAS | threshold | 786 | 29,930 | 0.35 | 8.69
AdultCensus | kNN | 700 | 21,321$\dagger$ | 15.62 | 140
AdultCensus | threshold | 700 | 21,321$\dagger$ | 14.98 | 685
Credit | kNN | 22.78 | 394 | 0.01 | 1.74
Credit | threshold | 22.78 | 394 | 0.01 | 0.68
Table 3. Avg. runtime (sec) of LFR, iFair, PFR, and iFlipper on the COMPAS,
AdultCensus, and Credit datasets. For each method, we show the average runtime
for all experiments in Figure 8. The symbol $\dagger$ indicates that we reduce
the size of the training data for better efficiency.
Figure 9. Runtime results of iFlipper on synthetic datasets.
(a) Total error
(b) Number of flips
(c) Runtime (sec)
Figure 10. A detailed comparison of iFlipper against the three naïve
optimization solutions (Greedy, Gradient, and KMeans) and ILP solver on the
AdultCensus dataset where we use the kNN-based similarity matrix. Here the
initial amount of total error is 65,742.5. We show the results for three
different total error limits ($m$).
#### 4.3.2. Runtime Comparison
We evaluate the efficiency of iFlipper and the three baselines in Table 3. For
each method, we show the average runtime for all experiments in Figure 8. We
note that the runtimes of LFR and iFair do not depend on the similarity
matrix. For the iFair experiments on the AdultCensus dataset, we reduce the
training data size using uniform sampling because fitting iFair on the
entire data takes more than 24 hours. We confirm that this relaxation does not
hurt the original performance reported in the iFair paper (ifair). As a
result, iFlipper is much faster than LFR and iFair for all cases. Although PFR
seems to be the fastest, it performs much worse than iFlipper in terms of
accuracy and fairness as shown in Figure 8.
Another observation is that as the dataset size increases, iFlipper’s runtime
increases following the time complexity analysis in Section 3.5. For a
detailed evaluation, we conduct the experiment using random subsets of the
synthetic dataset. For each experiment, we set the total error limit $m$ to
20% of the initial amount of total error. Figure 9 shows runtime results of
iFlipper on datasets of different sizes. As a result, we observe a quadratic
increase in running time as the size of the training data increases, which is
consistent with our theoretical analysis.
### 4.4. Optimization Solution Comparison
We now compare iFlipper with other optimization solutions: Greedy, Gradient,
KMeans, and an ILP solver. Figure 10 makes a comparison in terms of optimality
and efficiency on the AdultCensus dataset where we use the kNN-based
similarity matrix. We set the total error limit $m$ to three different values
for a detailed comparison. Note that all three subfigures within Figure 10 use
the same legends.
We first compare the resulting total error of the methods in Figure 10(a).
Obviously, the optimal solution from the ILP solver is the closest to the
total error limit. We observe that iFlipper never exceeds the target. On the
other hand, both Greedy and Gradient are unable to find a feasible solution in
some cases. For example, Greedy and Gradient cannot reduce the total error to
less than 4,576.4 and 34,053.8, respectively. This result is expected because
these methods may fall into local minima as we explained in Section 2.3. Also,
KMeans does find feasible solutions, but flips too many labels unnecessarily.
For example, for $m$ = 100 and $m$ = 1,000, KMeans has zero violations using
more flips than iFlipper, which defeats the purpose of the optimization
altogether.
We also compare the number of flips in Figure 10(b). As a result, iFlipper
produces solutions that are close to the optimal ones. When $m$ = 100 and
1,000, Greedy shows the fewest number of label flips because it fails to find
a feasible solution. Gradient has the largest number of flips. We suspect this
poor performance is due to the fact that Gradient (1) solves an unconstrained
optimization problem that makes it hard to satisfy the limit on total error,
and (2) provides continuous values, which introduces rounding errors. KMeans
also performs worse than iFlipper because its clustering cannot be fine-tuned
to have close to $m$ total error.
Finally, we compare the average runtimes (i.e., wall clock times in seconds)
of each method in Figure 10(c). As a result, iFlipper is much faster than the
ILP solver as it solves an approximate LP problem only once. In addition,
Greedy and KMeans are slower than iFlipper because they use many iterations to
reduce the total error and adjust the number of clusters, respectively.
Gradient is the most efficient, but the result is not meaningful because its
total error and number of flips are even worse than Greedy’s.
We also perform the above experiments on the COMPAS dataset, and the results
are similar (see our technical report (iflippertr)).
### 4.5. Ablation Study
We perform an ablation study in Figure 11 to demonstrate the effectiveness of
each component in iFlipper using the AdultCensus dataset and kNN-based
similarity matrix. The results for the COMPAS dataset are similar and shown in
our technical report (iflippertr). As we explained in Section 4.1, MOSEK
always produces an optimal LP solution that only has values in {0, $\alpha$,
1}, so we do not have an ablation study without the LP solution conversion.
Instead, we separately evaluate the conversion algorithm in Section 4.6.
Again, the fact that MOSEK effectively contains the conversion functionality
indicates that the conversion overhead is not significant and that a tight
integration with the LP solving is possible. We thus compare iFlipper (LP
solver with both adaptive rounding and reverse greedy algorithms) with the
following variants: (1) LP solver with simple rounding (LP-SR); and (2) LP
solver with adaptive rounding (LP-AR). We also consider an optimal solution
from the ILP solver to compare the optimality of each solution.
Figure 11(a) shows that the adaptive rounding algorithm always provides a
feasible solution, while simple rounding of a fractional optimal solution
can result in an infeasible solution as we discussed in Section 3.2. However, the
rounding algorithm does flip more labels than the optimal solution as shown in
Figure 11(b), resulting in an unintended fairness improvement and accuracy
drop. In this case, the reverse greedy algorithm in iFlipper is used to reduce
the optimality gap with the optimal solution by recovering as many original
labels as possible without exceeding the total error limit.
(a) Total error
(b) Number of flips
Figure 11. Ablation study for iFlipper on the AdultCensus dataset and kNN-
based similarity matrix.
Table 4 shows the average runtime of each component (LP solver, adaptive
rounding, and reverse greedy) in iFlipper in Figure 11. As a result, the
runtime for solving the LP problem is dominant, which demonstrates that
iFlipper is able to provide a near-exact solution with minimal time overhead.
Method | Avg. Runtime (sec)
---|---
LP Solver (effectively includes Alg. 1) | 153.61
\+ Adaptive Rounding (Alg. 3) | 0.48
\+ Reverse Greedy (Alg. 4) | 13.26
Table 4. Avg. runtimes of iFlipper’s components in Figure 11.
### 4.6. Algorithm 1 Evaluation
We now evaluate the LP solution conversion (Algorithm 1) separately using the
three datasets and kNN-based similarity matrices. For all datasets, we
construct an initial LP solution by assigning random values to the variables
and then perform the conversion. As a result, we successfully convert random solutions to new
solutions whose values are in {0, $\alpha$, 1} where the exact value of
$\alpha$ for each dataset is shown in Table 5. In addition, each converted
solution always has the same number of label flips and a smaller total error.
This result is consistent with Lemma 2, which says the conversion maintains
the optimality of the original solution. Finally, the conversion is much
faster than the overall runtime of iFlipper (see Table 3) even in this worst-
case scenario where the labels are randomly initialized.
Dataset | $\boldsymbol{\alpha}$ | Tot. Err. (Before) | # Flips (Before) | Tot. Err. (After) | # Flips (After) | Time(s)
---|---|---|---|---|---|---
COMPAS | 0.471 | 12645.1 | 1,838 | 1038.2 | 1,838 | 2.47
AdultCensus | 0.494 | 93519.7 | 13,514 | 3164.4 | 13,514 | 93.63
Credit | 0.448 | 2447.0 | 361 | 128.7 | 361 | 0.25
Table 5. iFlipper’s conversion algorithm (Algorithm 1) on LP problem solutions
that have random labels.
### 4.7. Compatibility with In-processing Method
In this section, we demonstrate how iFlipper can be combined with an in-
processing algorithm to further improve individual fairness. We evaluate
iFlipper with SenSR (DBLP:conf/iclr/YurochkinBS20), which is an in-processing
fair training algorithm that is robust to sensitive perturbations to the
inputs, on the AdultCensus dataset. For iFlipper, we use the same distance
function in SenSR, which computes the distance using the features that are not
correlated to sensitive attributes and use the threshold $T$ = 2 to construct
the similarity matrix. For a fair comparison, we report both our consistency
score and the GR-Con. (gender and race consistency) / S-Con. (spouse
consistency) metrics used by SenSR to evaluate individual fairness. Here
Con. measures the consistency between $h(x_{i})$ and $h(x_{j})$ when $x_{i}$
and $x_{j}$ are the same except for the sensitive attributes. To evaluate Con., we
use the same method in SenSR where the test data examples are duplicated, but
assigned different sensitive attribute values. As a result, Table 6 shows that
using both methods gives the best individual fairness for both metrics while
having little accuracy drop. Thus, iFlipper complements in-processing
algorithms by removing the bias inherent in the data during the pre-processing
step.
Dataset | Method | Test Acc. | Con. Score | GR/S-Con.
---|---|---|---|---
Adult- Census | Original | 0.855 | 0.917 | 0.919 / 0.867
iFlipper | 0.853 | 0.955 | 0.931 / 0.907
SenSR | 0.836 | 0.953 | 0.990 / 0.945
Both | 0.829 | 0.960 | 0.992 / 0.984
Table 6. Using iFlipper, SenSR, or both on the AdultCensus dataset.
## 5\. Related Work
Various fairness measures have been proposed to capture legal and social
issues (narayanan2018translation). The prominent measures include individual
fairness (dwork2012fairness), group fairness (zafar2017fairness;
agarwal2018reductions; zhang2021omnifair), and causal fairness
(kusner2017counterfactual). Individual fairness captures the notion that
similar individuals should be treated similarly. Group fairness measures like
equal opportunity (DBLP:conf/nips/HardtPNS16), equalized odds
(DBLP:conf/nips/HardtPNS16), and demographic parity
(DBLP:conf/kdd/FeldmanFMSV15) ensure that two sensitive groups have similar
statistics. Causal fairness identifies causal relationships among attributes.
Although they optimize a different objective function, causality-based
methods are often used to improve group or individual fairness as well. These
measures largely complement each other, although there is a known conflict
between group fairness and individual fairness (binns2020apparent;
friedler2021possibility): the two cannot be satisfied at the same time
because of their different assumptions. Our primary focus is on individual
fairness.
Recently, many unfairness mitigation techniques for individual fairness
(DBLP:journals/corr/abs-1810-01943) have been proposed where they can be
categorized into pre-processing (pmlr-v28-zemel13; ifair; pfr2019), in-
processing (DBLP:conf/iclr/YurochkinBS20; yurochkin2021sensei;
vargo2021individually), and post-processing (petersen2021post) techniques
depending on whether the fix occurs before, during, or after model training,
respectively. Among the categories, we focus on pre-processing because fixing
the data can solve the root cause of unfairness.
The recent pre-processing works LFR (pmlr-v28-zemel13), iFair (ifair), and PFR
(pfr2019) all propose fair representation learning algorithms that optimize
accuracy and individual fairness. Using an adjacency matrix that represents a
fairness graph, the goal is to optimize for a combined objective function with
reconstruction loss and individual fairness loss based on the fairness graph.
In comparison, iFlipper takes the alternative approach of flipping labels to
fix bias, which results in better accuracy-fairness trade-offs and efficiency.
It is also important to understand the in-processing and post-processing
techniques. SenSR (DBLP:conf/iclr/YurochkinBS20) enforces the model to be
invariant under certain sensitive perturbations to the inputs using
adversarial learning. Several extensions have been proposed as well. SenSeI
(yurochkin2021sensei) enforces invariance on certain sensitive sets and uses a
regularization-based approach for jointly optimizing accuracy and fairness,
and BuDRO (vargo2021individually) extends the fairness loss used in SenSR to
use gradient boosting. For post-processing, GLIF (petersen2021post) formulates
a graph smoothening problem and uses Laplacian regularization to enforce
individual fairness. In comparison, iFlipper can complement all these
techniques as we demonstrated with SenSR.
Pre-processing techniques for other fairness measures have been proposed as
well. For group fairness, Kamiran et al. (kamiran2012data) and Calmon et al.
(calmon2017optimized) change features of the training data to remove
dependencies between the sensitive attribute and label and thus achieve
statistical parity. It is worth noting that Kamiran et al. (kamiran2012data)
also proposed label flipping (called massaging) to reduce bias in the data for
group fairness, but not for individual fairness, which is our focus. There is
also a possibility of extending iFlipper to support group fairness. However,
we do not believe iFlipper can directly support group fairness with the
current optimization objective, so an interesting direction is to develop new
label flipping techniques that can support group fairness notions beyond
statistical parity by formulating the right optimization problems.
For causal fairness, Capuchin (salimi2019interventional) makes a connection
between multivalued dependencies (MVDs) and causal fairness, and proposes a
data repair algorithm for MVDs using tuple inserts and deletes that also
ensures causal fairness.
## 6\. Conclusion and Future Work
We proposed iFlipper, which flips labels in the training data to improve the
individual fairness of trained binary classification models. iFlipper uses
pairwise similarity information and minimizes the number of flipped labels to
limit the total error to be within a threshold. We proved that this MIQP
optimization problem is NP-hard. We then converted the problem to an
approximate LP problem, which can be solved efficiently. Our key finding is
that the proposed LP algorithm has theoretical guarantees on how close its
result is to the optimal solution in terms of the number of label flips. In
addition, we further optimized the algorithm without exceeding the total error
limit. We demonstrated on real datasets that iFlipper outperforms pre-
processing unfairness mitigation baselines in terms of fairness and accuracy,
and can complement existing in-processing techniques for better results.
In the future, we would like to support multi-class classification and
regression tasks. For multi-class classification, we can group the labels to
favorable and unfavorable labels and make the problem a binary classification
one. Alternatively, we can solve a one-vs-all problem for each class (binary
classification) and use a softmax to calculate the final label. For
regression, the action becomes changing the label instead of flipping it.
Since the current optimization problem can be extended to support continuous
labels, we plan to change the optimization setup to support regression
problems.
## Acknowledgments
Ki Hyun Tae, Jaeyoung Park, and Steven Euijong Whang were supported by a
Google Research Award and by the National Research Foundation of Korea (NRF)
grant funded by the Korea government (MSIT) (No. NRF-2018R1A5A1059921 and
NRF-2022R1A2C2004382).
## Appendix A Proof for Theorem 4
We continue from Section 2.2 and provide a full proof for Theorem 4.
Theorem 4. The MIQP problem in Equation 1 is NP-hard.
###### Proof.
We prove that the MIQP problem in Equation 1 is NP-hard by showing its sub-
problem where $W_{ij}$ is binary ($W_{ij}\in\\{0,1\\}^{n\times n}$) is NP-
hard. We prove the sub-problem is NP-hard by reducing it from the well-known
at most $k$-cardinality $s$-$t$ cut problem. Given an undirected graph
$G=(V,E)$, the at most $k$-cardinality minimum $s$-$t$ cut problem is to find
the minimum sum of edge weights in the cut edge set $C$ to divide the vertex
set $V$ into two sets $V_{1}$ and $V_{2}$, where $s\in V_{1}$ and $t\in
V_{2}$, and the cut edge set $C=\\{(v_{1},v_{2})\in E:v_{1}\in V_{1},v_{2}\in
V_{2}\\}$ has at most $k$ edges. This problem is known to be NP-hard even if
all the edge weights are 1 [DBLP:journals/dam/BruglieriME04, kim2021solving,
kratsch2020multi]. Furthermore, deciding whether an at most $k$-cardinality
$s$-$t$ cut exists is also known to be NP-hard
[DBLP:journals/dam/BruglieriME04].
We present the reduction in three steps. First, given an instance
of the at most $k$-cardinality $s$-$t$ cut problem on $G=(V,E)$ with all edge
weights equal to 1, we show how to create $k+1$ label flipping problems in
Equation 1 accordingly. Second, we show how a solution to any of the $k+1$
MIQP problems represents an $s$-$t$ cut to the original graph cut problem.
Third, we establish that if there exists an at most $k$-cardinality $s$-$t$
cut, it must be one of the $k+1$ $s$-$t$ cuts obtained in the previous step.
Step 1: We show how to create MIQP problems for a given at most
$k$-cardinality $s$-$t$ cut problem on $G=(V,E)$. We create a variable $y_{i}$
for each $v_{i}\in V-\\{s,t\\}$. We make $W_{ij}=1$ for each edge $(i,j)\in E$
in the graph, and $W_{ij}=0$ for other pairs of $(i,j)$. If a node $v_{i}$ is
only connected to $s$ ($t$), we make $y_{i}^{\prime}=1(0)$ and add
$(y_{i}-y_{i}^{\prime})^{2}$ to the objective function. That is, we set the
initial label of nodes directly connected to $s$ ($t$) as 1 (0). If a node
$v_{i}$ is directly connected to both $s$ and $t$, we add two terms to the
objective function for this node, one with $y_{i}^{\prime}=0$ and the other
one with $y_{i}^{\prime}=1$. If a node $v_{i}$ is not directly connected to
$s$ or $t$, we do not add any terms to the objective. Now that we have defined
all $y_{i}$, their initial assignments $y_{i}^{\prime}$, and the weight
$W_{ij}$, we vary $m$ from 0 to $k$ to create $k+1$ instances of MIQP problems
with different allowed amounts of total error as the constraints.
The intuition for the above process of creating $k+1$ MIQP problems is that an
$s$-$t$ cut is a binary partition of a graph and can be viewed as a binary-
valued labeling process. The nodes that are connected to the source $s$ have
label 1 and the nodes that are connected to sink $t$ have label 0. A cut
consists of two types of edges: $e_{f}$ and $e_{v}$. A flip edge
$e_{f}:(v_{1},v_{2}):v_{1}\in\\{s,t\\},v_{2}\in V-\\{s,t\\}$ is an edge
between $s$ ($t$) and other nodes. If an $e_{f}$ edge exists in a cut, that
means a node which was connected to $s$ ($t$) is no longer in the same
partition as $s$ ($t$), which can be represented as a label flip. See the
$(s,1)$ edge in Figure 12 as an example of a flip edge. A violation edge
$e_{v}:(v_{1},v_{2}):v_{1}\in V-\\{s,t\\},v_{2}\in V-\\{s,t\\}$ is an edge
between two nodes that are not $s$ or $t$. If an $e_{v}$ edge exists in a cut,
that means two nodes that are supposed to be similar and have the same label
ended up in two different partitions, which can be represented as a violation.
See the $(1,2)$ edge in Figure 12 as an example of a violation edge. The
cardinality of the cut equals the sum of the counts of the two types of
edges, which in turn equals the sum of the number of flips and the total error
in the MIQP solution. Our mapping from graph cutting to label flipping is inspired by a
similar process used in image segmentation [kolmogorov2004energy] that aims at
using graph cutting to split an image into foreground and background.
Step 2: We show how a solution to any of the $k+1$ MIQP problems can be mapped
to an $s$-$t$ cut. After solving a corresponding MIQP, we put all nodes with
$y_{i}=1$ in $V_{1}$ and $y_{i}=0$ in $V_{2}$ as the resulting $s$-$t$ cut.
The cardinality of the cut equals the sum of the number of label flips and
the total error in the label flipping problem.
Figure 12. An example for the $s$-$t$ cut. The cut represented by the blue
dashed line consists of one $e_{f}$ edge, and the cut represented by the red
dotted line consists of two $e_{v}$ edges.
In Figure 12, after removing $s$ and $t$, we get the same graph representation
as Figure 1. By solving the label flipping problem with $m$ equal to 0 or 1,
the resulting label assignment is $y_{4}=1$ and $y_{1}=y_{2}=y_{3}=0$. The
corresponding cut is shown as the blue dashed line in Figure 12. The cut is
$V_{1}=\\{s,4\\},V_{2}=\\{1,2,3,t\\},\text{ and }C=\\{(s,1)\\}$. This label
assignment has one flip (node 1 flips to label 0) and zero violations. This
flip is represented as an $e_{f}$ edge (s,1) in the cut. If we solve the
problem with $m=2$, the result is $y_{1}=y_{4}=1$ and $y_{2}=y_{3}=0$. The
corresponding cut is shown as the red dotted line in Figure 12. The cut is
$V_{1}=\\{s,1,4\\},V_{2}=\\{2,3,t\\},\text{ and }C=\\{(1,2),(1,3)\\}$. This
label assignment has zero flips and a total error of 2, represented by two
$e_{v}$ edges in the cut: (1,2) and (1,3).
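To make this correspondence concrete before completing the reduction, the
short Python sketch below brute-forces the toy example. The graph is our
reconstruction of Figure 12 (nodes 1 and 4 attached to $s$ with initial label
1, nodes 2 and 3 attached to $t$ with initial label 0, and similarity edges
$(1,2)$ and $(1,3)$), so the exact instance is an assumption made for
illustration.

```python
from itertools import product

# Assumed reconstruction of the toy graph in Figure 12: s connects to
# nodes 1 and 4 (initial label 1), t connects to nodes 2 and 3 (initial
# label 0), and the similarity (violation) edges are (1,2) and (1,3).
nodes = [1, 2, 3, 4]
init = {1: 1, 2: 0, 3: 0, 4: 1}   # y_i' induced by the s/t edges
W = {(1, 2): 1, (1, 3): 1}        # W_ij = 1 for each similarity edge

def flips_plus_error(y):
    flips = sum((y[i] - init[i]) ** 2 for i in nodes)
    error = sum(w * (y[i] - y[j]) ** 2 for (i, j), w in W.items())
    return flips + error

# A labeling y induces the cut V1 = {s} + {i : y_i = 1},
# V2 = {t} + {i : y_i = 0}; its crossing edges are exactly the flip
# edges plus the violation edges, so minimising flips + error over all
# binary labelings recovers the minimum s-t cut cardinality.
best = min(flips_plus_error(dict(zip(nodes, bits)))
           for bits in product([0, 1], repeat=len(nodes)))
print(best)  # 1, matching the blue dashed cut {(s, 1)} in Figure 12
```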
Step 3: Since the cardinality of the cut equals the sum of the number of
label flips and the total error in the label flipping problem, finding an at
most $k$-cardinality $s$-$t$ cut can be solved by finding a label assignment
where the sum of flips and the total error is at most $k$. To find such
a label assignment, we can repeatedly solve the MIQP problem $k+1$ times, for
all $m$ values in {0, $\ldots$, $k$}, and check if the sum of label flips and
the total error is less than or equal to $k$. The at most $k$-cardinality
minimum $s$-$t$ cut, if it exists, must equal the result of the MIQP problem
for at least one of the possible $m$ values. Therefore, if the label flipping
MIQP problem were not NP-hard, then the at most $k$-cardinality minimum
$s$-$t$ cut problem would also not be NP-hard, which contradicts the known
results.
∎
## Appendix B Proof for Lemma 0.1
We continue from Section 3.3 and provide a full proof for Lemma 0.1. Here we
restate Lemma 0.1 with the problem setup:
Lemma 0.1. Suppose an $\alpha$-cluster has $A_{0}$ points whose initial labels
are 0 and $A_{1}$ points whose initial labels are 1. Let
$N_{\alpha}=A_{0}-A_{1}$. Now suppose there are $U$ nodes connected to the
$\alpha$-cluster (shown in Figure 13) by an edge $W_{a_{i}\alpha}$ satisfying
$\begin{split}&0\leq a_{1}\leq...\leq a_{k}<\alpha<a_{k+1}\leq...\leq
a_{U}\leq 1.\end{split}$
Note that there is no connected node with a value of $\alpha$ by construction.
Let
$S_{\alpha}=\sum_{i=1}^{k}W_{a_{i}\alpha}-\sum_{i=k+1}^{U}W_{a_{i}\alpha}$.
Let us also add the following nodes for convenience: $a_{0}=0$ and
$a_{U+1}=1$. For an $\alpha$-cluster with $N_{\alpha}=0$ in the optimal
solution, we can always convert $\alpha$ to either $a_{k}$ or $a_{k+1}$ while
maintaining an optimal solution. As a result, we can reduce exactly one
non-0/1 value in the optimal solution.
Figure 13. Problem setup for Lemma 0.1.
###### Proof.
We compute the initial number of flips and total error in Figure 13 as
follows:
(11) $\begin{split}&\\#\text{ Flips}=\alpha A_{0}+(1-\alpha)A_{1}=A_{1}+\alpha
N_{\alpha}=A_{1}(\because N_{\alpha}=0)\\\ &\text{Total
Error}=\sum_{i=1}^{U}W_{a_{i}\alpha}|a_{i}-\alpha|=S_{\alpha}\alpha+C\\\
&\qquad\qquad\quad(C=-\sum_{i=1}^{k}W_{a_{i}\alpha}a_{i}+\sum_{i=k+1}^{U}W_{a_{i}\alpha}a_{i})\\\
\end{split}$
We first observe that the number of flips is independent of the $\alpha$
value. Hence, even if we change $\alpha$ to an arbitrary value, the solution
still has the same objective value.
Now consider a small positive value $\epsilon_{\alpha}$. Suppose we change
$\alpha$ by $\epsilon_{\alpha}$ such that
(12) $0\leq a_{1}\leq...\leq a_{k}\leq\alpha+\epsilon_{\alpha}\leq
a_{k+1}\leq...\leq a_{U}\leq 1\\\ $
Similarly, we also change $\alpha$ by $-\epsilon_{\alpha}$ such that
(13) $0\leq a_{1}\leq...\leq a_{k}\leq\alpha-\epsilon_{\alpha}\leq
a_{k+1}\leq...\leq a_{U}\leq 1\\\ $
Note that such $\epsilon_{\alpha}$ always exists because
$a_{k}<\alpha<a_{k+1}$. If we change $\alpha$ to $\alpha+\epsilon_{\alpha}$
while satisfying Equation 12, the total error becomes
$S_{\alpha}(\alpha+\epsilon_{\alpha})+C$. Similarly, if we change $\alpha$ to
$\alpha-\epsilon_{\alpha}$ while satisfying Equation 13, the total error
becomes $S_{\alpha}(\alpha-\epsilon_{\alpha})+C$. Hence, the change in the
total error for each case can be computed as follows:
(14) $\begin{split}&\alpha+\epsilon_{\alpha}\rightarrow{}\Delta(\text{Total
Error})=S_{\alpha}\epsilon_{\alpha}\\\
&\alpha-\epsilon_{\alpha}\rightarrow{}\Delta(\text{Total
Error})=-S_{\alpha}\epsilon_{\alpha}\\\ \end{split}$
From Equation 14, we observe that, depending on the sign of $S_{\alpha}$, one
of the transformations always maintains or even reduces the total error, i.e.,
the solution remains feasible. Specifically, if $S_{\alpha}\leq 0$, we change
$\alpha$ to $a_{k+1}$ by setting $\epsilon_{\alpha}=a_{k+1}-\alpha$, otherwise
we change $\alpha$ to $a_{k}$ by setting $\epsilon_{\alpha}=\alpha-a_{k}$. As
a result, we can remove one non-0/1 value $\alpha$ while maintaining an
optimal solution.
∎
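The argument above only uses the fact that the total error is linear in
$\alpha$ on $(a_{k},a_{k+1})$ with slope $S_{\alpha}$. The following sketch
checks this numerically on a randomly generated instance; the instance itself
is an assumption made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def total_error(alpha, a, w):
    # sum_i W_{a_i alpha} * |a_i - alpha|, as in Equation 11
    return float(np.sum(w * np.abs(a - alpha)))

# Random instance: six neighbour values a_i with weights W_{a_i alpha},
# and a cluster value alpha strictly between a_k and a_{k+1} (k = 3).
a = np.sort(rng.random(6))
w = rng.random(6)
alpha = 0.5 * (a[2] + a[3])
S_alpha = w[:3].sum() - w[3:].sum()   # S_alpha from the lemma

# Move alpha to a_{k+1} if S_alpha <= 0, otherwise to a_k; the total
# error never increases, and the flip count is unchanged since N_alpha = 0.
target = a[3] if S_alpha <= 0 else a[2]
assert total_error(target, a, w) <= total_error(alpha, a, w) + 1e-12
```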
## Appendix C Proof for Lemma 0.2
We continue from Section 3.3 and provide a full proof for Lemma 0.2. Here we
restate Lemma 0.2 with the problem setup:
Lemma 0.2. Let us assume that $0<\alpha<\beta<1$, and that the sum of the
pairwise node similarities between the two clusters is $E$. Suppose an
$\alpha$-cluster and a $\beta$-cluster have $A_{0}$ and $B_{0}$ points whose
initial labels are 0, respectively, and $A_{1}$ and $B_{1}$ points whose
initial labels are 1, respectively. Let
$N_{\alpha}=A_{0}-A_{1}$ and $N_{\beta}=B_{0}-B_{1}$. Now suppose there are
$U$ nodes connected to the $\alpha$-cluster by an edge $W_{a_{i}\alpha}$ and
$V$ nodes connected to the $\beta$-cluster by an edge $W_{b_{i}\beta}$
satisfying
$\begin{split}&0\leq a_{1}\leq...\leq a_{k}<\alpha<a_{k+1}\leq...\leq
a_{U}\leq 1\text{ and }\\\ &0\leq b_{1}\leq...\leq
b_{l}<\beta<b_{l+1}\leq...\leq b_{V}\leq 1.\end{split}$
Note that there is no connected node with a value of $\alpha$ or $\beta$ by
construction. Let
$S_{\alpha}=\sum_{i=1}^{k}W_{a_{i}\alpha}-\sum_{i=k+1}^{U}W_{a_{i}\alpha}$ and
$S_{\beta}=\sum_{i=1}^{l}W_{b_{i}\beta}-\sum_{i=l+1}^{V}W_{b_{i}\beta}$. Let
us also add the following nodes for convenience: $a_{0}=0$, $a_{U+1}=1$,
$b_{0}=0$, and $b_{V+1}=1$. For an $\alpha$-cluster with $N_{\alpha}\neq 0$
and a $\beta$-cluster with $N_{\beta}\neq 0$ in the optimal solution, we can
always convert $(\alpha,\beta)$ to one of
$(a_{k},\beta+\frac{N_{\alpha}}{N_{\beta}}(\alpha-a_{k}))$,
$(a_{k+1},\beta-\frac{N_{\alpha}}{N_{\beta}}(a_{k+1}-\alpha))$,
$(\alpha+\frac{N_{\beta}}{N_{\alpha}}(\beta-b_{l}),b_{l})$,
$(\alpha-\frac{N_{\beta}}{N_{\alpha}}(b_{l+1}-\beta),b_{l+1})$, or
$(\frac{\alpha N_{\alpha}+\beta N_{\beta}}{N_{\alpha}+N_{\beta}},\frac{\alpha
N_{\alpha}+\beta N_{\beta}}{N_{\alpha}+N_{\beta}})$, while maintaining an
optimal solution. As a result, we can reduce at least one non-0/1 value in the
optimal solution.
Figure 14. Problem setup for Lemma 0.2.
###### Proof.
We compute the initial number of flips and total error in Figure 14 as
follows:
(15) $\begin{split}&\\#\text{ Flips}=\alpha A_{0}+(1-\alpha)A_{1}+\beta
B_{0}+(1-\beta)B_{1}=A_{1}+B_{1}+\alpha N_{\alpha}+\beta N_{\beta}\\\
&\text{Total
Error}=\sum_{i=1}^{U}W_{a_{i}\alpha}|a_{i}-\alpha|+\sum_{i=1}^{V}W_{b_{i}\beta}|b_{i}-\beta|+E(\beta-\alpha)\\\
&\qquad\qquad\qquad\;=(S_{\alpha}-E)\alpha+(S_{\beta}+E)\beta+C\\\
&\quad(C=-\sum_{i=1}^{k}W_{a_{i}\alpha}a_{i}+\sum_{i=k+1}^{U}W_{a_{i}\alpha}a_{i}-\sum_{i=1}^{l}W_{b_{i}\beta}b_{i}+\sum_{i=l+1}^{V}W_{b_{i}\beta}b_{i})\\\
\end{split}$
Now consider two small positive values $\epsilon_{\alpha}$ and
$\epsilon_{\beta}$. Both $N_{\alpha}$ and $N_{\beta}$ are non-zero, so we have
only two cases: $\frac{N_{\alpha}}{N_{\beta}}<0$ and
$\frac{N_{\alpha}}{N_{\beta}}>0$.
Case 1: $\frac{N_{\alpha}}{N_{\beta}}<0$: Suppose we change $\alpha$ by
$\epsilon_{\alpha}$ and $\beta$ by $\epsilon_{\beta}$, respectively, such that
(16) $\begin{gathered}0\leq a_{1}\leq...\leq
a_{k}\leq\alpha+\epsilon_{\alpha}\leq a_{k+1}\leq...\leq a_{U}\leq 1\\\ 0\leq
b_{1}\leq...\leq b_{l}\leq\beta+\epsilon_{\beta}\leq b_{l+1}\leq...\leq
b_{V}\leq 1\\\ \alpha+\epsilon_{\alpha}\leq\beta+\epsilon_{\beta}\\\
\end{gathered}$
We then compute the number of flips for
$(\alpha+\epsilon_{\alpha},\beta+\epsilon_{\beta})$ as
$A_{1}+B_{1}+(\alpha+\epsilon_{\alpha})N_{\alpha}+(\beta+\epsilon_{\beta})N_{\beta}$.
In order to have the same number of flips as the initial value in Equation 15,
($\epsilon_{\alpha}$, $\epsilon_{\beta}$) should satisfy
$\epsilon_{\alpha}N_{\alpha}+\epsilon_{\beta}N_{\beta}=0$.
Similarly, we also change $\alpha$ by $-\epsilon_{\alpha}$ and $\beta$ by
$-\epsilon_{\beta}$, respectively, such that
(17) $\begin{gathered}0\leq a_{1}\leq...\leq
a_{k}\leq\alpha-\epsilon_{\alpha}\leq a_{k+1}\leq...\leq a_{U}\leq 1\\\ 0\leq
b_{1}\leq...\leq b_{l}\leq\beta-\epsilon_{\beta}\leq b_{l+1}\leq...\leq
b_{V}\leq 1\\\ \alpha-\epsilon_{\alpha}\leq\beta-\epsilon_{\beta}\\\
\end{gathered}$
In this case, $(\alpha-\epsilon_{\alpha},\beta-\epsilon_{\beta})$ also
maintains the same number of label flips if
$\epsilon_{\alpha}N_{\alpha}+\epsilon_{\beta}N_{\beta}=0$.
From now on, we consider ($\epsilon_{\alpha}$, $\epsilon_{\beta}$) that also
satisfies $\epsilon_{\alpha}N_{\alpha}+\epsilon_{\beta}N_{\beta}=0$, i.e.,
$\epsilon_{\beta}=-\frac{N_{\alpha}}{N_{\beta}}\epsilon_{\alpha}$. Note that
such $\epsilon_{\alpha}$ and $\epsilon_{\beta}$ always exist because
$a_{k}<\alpha<a_{k+1}$, $b_{l}<\beta<b_{l+1}$, and
$\frac{N_{\alpha}}{N_{\beta}}<0$. Therefore, both
$(\alpha+\epsilon_{\alpha},\beta+\epsilon_{\beta})$ and
$(\alpha-\epsilon_{\alpha},\beta-\epsilon_{\beta})$ have the same number of
label flips as the initial assignment.
If we change $(\alpha,\beta)$ to
$(\alpha+\epsilon_{\alpha},\beta+\epsilon_{\beta})$ while satisfying Equation
16 and $\epsilon_{\alpha}N_{\alpha}+\epsilon_{\beta}N_{\beta}=0$, the total
error becomes
$(S_{\alpha}-E)(\alpha+\epsilon_{\alpha})+(S_{\beta}+E)(\beta+\epsilon_{\beta})+C$.
Similarly, if we change $(\alpha,\beta)$ to
$(\alpha-\epsilon_{\alpha},\beta-\epsilon_{\beta})$ while satisfying Equation
17 and $\epsilon_{\alpha}N_{\alpha}+\epsilon_{\beta}N_{\beta}=0$, the total
error becomes
$(S_{\alpha}-E)(\alpha-\epsilon_{\alpha})+(S_{\beta}+E)(\beta-\epsilon_{\beta})+C$.
Hence, the change in the total error for each case can be computed as follows:
(18)
$\begin{split}(\alpha+\epsilon_{\alpha},\beta+\epsilon_{\beta})\rightarrow{}&\Delta(\text{Total
Error})=(S_{\alpha}-E)\epsilon_{\alpha}+(S_{\beta}+E)\epsilon_{\beta}\\\
&=\frac{(S_{\alpha}-E)N_{\beta}-(S_{\beta}+E)N_{\alpha}}{N_{\beta}}\epsilon_{\alpha}\\\
(\alpha-\epsilon_{\alpha},\beta-\epsilon_{\beta})\rightarrow{}&\Delta(\text{Total
Error})=-(S_{\alpha}-E)\epsilon_{\alpha}-(S_{\beta}+E)\epsilon_{\beta}\\\
&=-\frac{(S_{\alpha}-E)N_{\beta}-(S_{\beta}+E)N_{\alpha}}{N_{\beta}}\epsilon_{\alpha}\\\
\end{split}$
From Equation 18, we observe that, depending on the sign of
$\frac{(S_{\alpha}-E)N_{\beta}-(S_{\beta}+E)N_{\alpha}}{N_{\beta}}$, one of
the transformations always maintains or even reduces the total error, i.e.,
the solution remains feasible.
* •
If $\frac{(S_{\alpha}-E)N_{\beta}-(S_{\beta}+E)N_{\alpha}}{N_{\beta}}\leq 0$,
we change $(\alpha,\beta)$ to
$(\alpha+\epsilon_{\alpha},\beta+\epsilon_{\beta})$ so that the solution is
still optimal. Recall $(\epsilon_{\alpha},\epsilon_{\beta})$ satisfies the
three inequalities and one condition: $\alpha+\epsilon_{\alpha}\leq a_{k+1}$,
$\beta+\epsilon_{\beta}\leq b_{l+1}$,
$\alpha+\epsilon_{\alpha}\leq\beta+\epsilon_{\beta}$, and
$\epsilon_{\alpha}N_{\alpha}+\epsilon_{\beta}N_{\beta}=0$. Among the possible
$(\epsilon_{\alpha},\epsilon_{\beta})$, we choose the upper bound of
$\epsilon_{\alpha}$ and the corresponding $\epsilon_{\beta}$
($\epsilon_{\beta}=-\frac{N_{\alpha}}{N_{\beta}}\epsilon_{\alpha}$). To get an
upper bound of $\epsilon_{\alpha}$, we find the equality conditions for each
inequality and take the smallest value among them. If
$1+\frac{N_{\alpha}}{N_{\beta}}\leq 0$, the last inequality
($\alpha+\epsilon_{\alpha}\leq\beta+\epsilon_{\beta}$) always holds because
$\epsilon_{\alpha}\leq\epsilon_{\beta}$. Hence, we consider only the first two
inequalities and set $\epsilon_{\alpha}$ to
$min(a_{k+1}-\alpha,-\frac{N_{\beta}}{N_{\alpha}}(b_{l+1}-\beta))$. On the
other hand, if $1+\frac{N_{\alpha}}{N_{\beta}}>0$, we set $\epsilon_{\alpha}$
to
$min(a_{k+1}-\alpha,-\frac{N_{\beta}}{N_{\alpha}}(b_{l+1}-\beta),\frac{N_{\beta}(\beta-\alpha)}{N_{\alpha}+N_{\beta}})$
from the three inequalities. As a result, we can convert $(\alpha,\beta)$ to
$(a_{k+1},\beta-\frac{N_{\alpha}}{N_{\beta}}(a_{k+1}-\alpha))$,
$(\alpha-\frac{N_{\beta}}{N_{\alpha}}(b_{l+1}-\beta),b_{l+1})$, or
$(\frac{\alpha N_{\alpha}+\beta N_{\beta}}{N_{\alpha}+N_{\beta}},\frac{\alpha
N_{\alpha}+\beta N_{\beta}}{N_{\alpha}+N_{\beta}})$, which is one of the cases
in Lemma 0.2.
* •
If $\frac{(S_{\alpha}-E)N_{\beta}-(S_{\beta}+E)N_{\alpha}}{N_{\beta}}>0$, we
change $(\alpha,\beta)$ to $(\alpha-\epsilon_{\alpha},\beta-\epsilon_{\beta})$
so that the solution is still optimal. Recall
$(\epsilon_{\alpha},\epsilon_{\beta})$ satisfies the three inequalities and
one condition: $a_{k}\leq\alpha-\epsilon_{\alpha}$,
$b_{l}\leq\beta-\epsilon_{\beta}$,
$\alpha-\epsilon_{\alpha}\leq\beta-\epsilon_{\beta}$, and
$\epsilon_{\alpha}N_{\alpha}+\epsilon_{\beta}N_{\beta}=0$. Among the possible
$(\epsilon_{\alpha},\epsilon_{\beta})$, we choose the upper bound of
$\epsilon_{\alpha}$ and the corresponding $\epsilon_{\beta}$
($\epsilon_{\beta}=-\frac{N_{\alpha}}{N_{\beta}}\epsilon_{\alpha}$). To get an
upper bound of $\epsilon_{\alpha}$, we find the equality conditions for each
inequality and take the smallest value among them. If
$1+\frac{N_{\alpha}}{N_{\beta}}\geq 0$, the last inequality
($\alpha-\epsilon_{\alpha}\leq\beta-\epsilon_{\beta}$) always holds because
$\epsilon_{\alpha}\geq\epsilon_{\beta}$. Hence, we consider only the first two
inequalities and set $\epsilon_{\alpha}$ to
$min(\alpha-a_{k},-\frac{N_{\beta}}{N_{\alpha}}(\beta-b_{l}))$. On the other hand, if
$1+\frac{N_{\alpha}}{N_{\beta}}<0$, we set $\epsilon_{\alpha}$ to $min(\alpha-
a_{k},-\frac{N_{\beta}}{N_{\alpha}}(\beta-
b_{l}),-\frac{N_{\beta}(\beta-\alpha)}{N_{\alpha}+N_{\beta}})$ from the three
inequalities. As a result, we can convert $(\alpha,\beta)$ to
$(a_{k},\beta+\frac{N_{\alpha}}{N_{\beta}}(\alpha-a_{k}))$,
$(\alpha+\frac{N_{\beta}}{N_{\alpha}}(\beta-b_{l}),b_{l})$, or $(\frac{\alpha
N_{\alpha}+\beta N_{\beta}}{N_{\alpha}+N_{\beta}},\frac{\alpha
N_{\alpha}+\beta N_{\beta}}{N_{\alpha}+N_{\beta}})$, which is one of the cases
in Lemma 0.2.
Case 2: $\frac{N_{\alpha}}{N_{\beta}}>0$: The proof is similar to the proof
for Case 1 except that we consider
$(\alpha+\epsilon_{\alpha},\beta-\epsilon_{\beta})$ and
$(\alpha-\epsilon_{\alpha},\beta+\epsilon_{\beta})$ instead of
$(\alpha+\epsilon_{\alpha},\beta+\epsilon_{\beta})$ and
$(\alpha-\epsilon_{\alpha},\beta-\epsilon_{\beta})$. We now write the full
proof for Case 2 for completeness.
Suppose we change $\alpha$ by $\epsilon_{\alpha}$ and $\beta$ by
$-\epsilon_{\beta}$, respectively, such that
(19) $\begin{gathered}0\leq a_{1}\leq...\leq
a_{k}\leq\alpha+\epsilon_{\alpha}\leq a_{k+1}\leq...\leq a_{U}\leq 1\\\ 0\leq
b_{1}\leq...\leq b_{l}\leq\beta-\epsilon_{\beta}\leq b_{l+1}\leq...\leq
b_{V}\leq 1\\\ \alpha+\epsilon_{\alpha}\leq\beta-\epsilon_{\beta}\\\
\end{gathered}$
We then compute the number of flips for
$(\alpha+\epsilon_{\alpha},\beta-\epsilon_{\beta})$ as
$A_{1}+B_{1}+(\alpha+\epsilon_{\alpha})N_{\alpha}+(\beta-\epsilon_{\beta})N_{\beta}$.
In order to have the same number of flips as the initial value in Equation 15,
($\epsilon_{\alpha}$, $\epsilon_{\beta}$) should satisfy
$\epsilon_{\alpha}N_{\alpha}-\epsilon_{\beta}N_{\beta}=0$.
Similarly, we also change $\alpha$ by $-\epsilon_{\alpha}$ and $\beta$ by
$\epsilon_{\beta}$, respectively, such that
(20) $\begin{gathered}0\leq a_{1}\leq...\leq
a_{k}\leq\alpha-\epsilon_{\alpha}\leq a_{k+1}\leq...\leq a_{U}\leq 1\\\ 0\leq
b_{1}\leq...\leq b_{l}\leq\beta+\epsilon_{\beta}\leq b_{l+1}\leq...\leq
b_{V}\leq 1\\\ \alpha-\epsilon_{\alpha}\leq\beta+\epsilon_{\beta}\\\
\end{gathered}$
In this case, $(\alpha-\epsilon_{\alpha},\beta+\epsilon_{\beta})$ also
maintains the same number of label flips if
$\epsilon_{\alpha}N_{\alpha}-\epsilon_{\beta}N_{\beta}=0$.
From now on, we consider ($\epsilon_{\alpha}$, $\epsilon_{\beta}$) that also
satisfies $\epsilon_{\alpha}N_{\alpha}-\epsilon_{\beta}N_{\beta}=0$, i.e.,
$\epsilon_{\beta}=\frac{N_{\alpha}}{N_{\beta}}\epsilon_{\alpha}$. Note that
such $\epsilon_{\alpha}$ and $\epsilon_{\beta}$ always exist because
$a_{k}<\alpha<a_{k+1}$, $b_{l}<\beta<b_{l+1}$, and
$\frac{N_{\alpha}}{N_{\beta}}>0$. Therefore, both
$(\alpha+\epsilon_{\alpha},\beta-\epsilon_{\beta})$ and
$(\alpha-\epsilon_{\alpha},\beta+\epsilon_{\beta})$ have the same number of
label flips as the initial assignment.
If we change $(\alpha,\beta)$ to
$(\alpha+\epsilon_{\alpha},\beta-\epsilon_{\beta})$ while satisfying Equation
19 and $\epsilon_{\alpha}N_{\alpha}-\epsilon_{\beta}N_{\beta}=0$, the total
error becomes
$(S_{\alpha}-E)(\alpha+\epsilon_{\alpha})+(S_{\beta}+E)(\beta-\epsilon_{\beta})+C$.
Similarly, if we change $(\alpha,\beta)$ to
$(\alpha-\epsilon_{\alpha},\beta+\epsilon_{\beta})$ while satisfying Equation
20 and $\epsilon_{\alpha}N_{\alpha}-\epsilon_{\beta}N_{\beta}=0$, the total
error becomes
$(S_{\alpha}-E)(\alpha-\epsilon_{\alpha})+(S_{\beta}+E)(\beta+\epsilon_{\beta})+C$.
Hence, the change in the total error for each case can be computed as follows:
(21)
$\begin{split}(\alpha+\epsilon_{\alpha},\beta-\epsilon_{\beta})\rightarrow{}&\Delta(\text{Total
Error})=(S_{\alpha}-E)\epsilon_{\alpha}-(S_{\beta}+E)\epsilon_{\beta}\\\
&=\frac{(S_{\alpha}-E)N_{\beta}-(S_{\beta}+E)N_{\alpha}}{N_{\beta}}\epsilon_{\alpha}\\\
(\alpha-\epsilon_{\alpha},\beta+\epsilon_{\beta})\rightarrow{}&\Delta(\text{Total
Error})=-(S_{\alpha}-E)\epsilon_{\alpha}+(S_{\beta}+E)\epsilon_{\beta}\\\
&=-\frac{(S_{\alpha}-E)N_{\beta}-(S_{\beta}+E)N_{\alpha}}{N_{\beta}}\epsilon_{\alpha}\\\
\end{split}$
From Equation 21, we observe that, depending on the sign of
$\frac{(S_{\alpha}-E)N_{\beta}-(S_{\beta}+E)N_{\alpha}}{N_{\beta}}$, one of
the transformations always maintains or reduces the total error, i.e., the
solution remains feasible.
* •
If $\frac{(S_{\alpha}-E)N_{\beta}-(S_{\beta}+E)N_{\alpha}}{N_{\beta}}\leq 0$,
we can change $(\alpha,\beta)$ to
$(\alpha+\epsilon_{\alpha},\beta-\epsilon_{\beta})$ so that the solution is
still optimal. Recall $(\epsilon_{\alpha},\epsilon_{\beta})$ satisfies the
three inequalities and one condition: $\alpha+\epsilon_{\alpha}\leq a_{k+1}$,
$b_{l}\leq\beta-\epsilon_{\beta}$,
$\alpha+\epsilon_{\alpha}\leq\beta-\epsilon_{\beta}$, and
$\epsilon_{\alpha}N_{\alpha}-\epsilon_{\beta}N_{\beta}=0$. Among the possible
$(\epsilon_{\alpha},\epsilon_{\beta})$, we choose the upper bound of
$\epsilon_{\alpha}$ and the corresponding $\epsilon_{\beta}$
($\epsilon_{\beta}=\frac{N_{\alpha}}{N_{\beta}}\epsilon_{\alpha}$). To get an
upper bound of $\epsilon_{\alpha}$, we find the equality conditions for each
inequality and take the smallest value among them. Specifically, we set
$\epsilon_{\alpha}$ to $min(a_{k+1}-\alpha,\frac{N_{\beta}}{N_{\alpha}}(\beta-
b_{l}),\frac{N_{\beta}(\beta-\alpha)}{N_{\alpha}+N_{\beta}})$ and
$\epsilon_{\beta}$ accordingly. As a result, we can convert $(\alpha,\beta)$ to
$(a_{k+1},\beta-\frac{N_{\alpha}}{N_{\beta}}(a_{k+1}-\alpha))$,
$(\alpha+\frac{N_{\beta}}{N_{\alpha}}(\beta-b_{l}),b_{l})$, or $(\frac{\alpha
N_{\alpha}+\beta N_{\beta}}{N_{\alpha}+N_{\beta}},\frac{\alpha
N_{\alpha}+\beta N_{\beta}}{N_{\alpha}+N_{\beta}})$, which is one of the cases
in Lemma 0.2.
* •
If $\frac{(S_{\alpha}-E)N_{\beta}-(S_{\beta}+E)N_{\alpha}}{N_{\beta}}>0$, we
can change $(\alpha,\beta)$ to
$(\alpha-\epsilon_{\alpha},\beta+\epsilon_{\beta})$ so that the solution is
still optimal. Recall $(\epsilon_{\alpha},\epsilon_{\beta})$ satisfies the
three inequalities and one condition: $a_{k}\leq\alpha-\epsilon_{\alpha}$,
$\beta+\epsilon_{\beta}\leq b_{l+1}$,
$\alpha-\epsilon_{\alpha}\leq\beta+\epsilon_{\beta}$, and
$\epsilon_{\alpha}N_{\alpha}-\epsilon_{\beta}N_{\beta}=0$. Among the possible
$(\epsilon_{\alpha},\epsilon_{\beta})$, we choose the upper bound of
$\epsilon_{\alpha}$ and the corresponding $\epsilon_{\beta}$
($\epsilon_{\beta}=\frac{N_{\alpha}}{N_{\beta}}\epsilon_{\alpha}$). To get an
upper bound of $\epsilon_{\alpha}$, we find the equality conditions for each
inequality and take the smallest value among them. In this case, the last
inequality ($\alpha-\epsilon_{\alpha}\leq\beta+\epsilon_{\beta}$) always holds.
Hence, we consider only the first two conditions and set $\epsilon_{\alpha}$
to $min(\alpha-a_{k},\frac{N_{\beta}}{N_{\alpha}}(b_{l+1}-\beta))$. As a
result, we can convert $(\alpha,\beta)$ to either
$(a_{k},\beta+\frac{N_{\alpha}}{N_{\beta}}(\alpha-a_{k}))$ or
$(\alpha-\frac{N_{\beta}}{N_{\alpha}}(b_{l+1}-\beta),b_{l+1})$, which is one
of the cases in Lemma 0.2.
We summarize the main results for each case below and conclude that
$(\alpha,\beta)$ can be transformed into one of the five cases in Lemma 0.2
while maintaining an optimal solution. As a result, we remove at least one of
$\alpha$ and $\beta$. If both converted values already exist in the solution,
we can even reduce two non-0/1 values.
* •
If ($\frac{N_{\alpha}}{N_{\beta}}<0$,
$\frac{(S_{\alpha}-E)N_{\beta}-(S_{\beta}+E)N_{\alpha}}{N_{\beta}}\leq 0$,
$1+\frac{N_{\alpha}}{N_{\beta}}\leq 0$), we convert $(\alpha,\beta)$ to
$(\alpha+\epsilon_{\alpha},\beta+\epsilon_{\beta})$ where
$\epsilon_{\alpha}=min(a_{k+1}-\alpha,-\frac{N_{\beta}}{N_{\alpha}}(b_{l+1}-\beta))$
and $\epsilon_{\beta}=-\frac{N_{\alpha}}{N_{\beta}}\epsilon_{\alpha}$.
* •
If ($\frac{N_{\alpha}}{N_{\beta}}<0$,
$\frac{(S_{\alpha}-E)N_{\beta}-(S_{\beta}+E)N_{\alpha}}{N_{\beta}}\leq 0$,
$1+\frac{N_{\alpha}}{N_{\beta}}>0$), we convert $(\alpha,\beta)$ to
$(\alpha+\epsilon_{\alpha},\beta+\epsilon_{\beta})$ where
$\epsilon_{\alpha}=min(a_{k+1}-\alpha,-\frac{N_{\beta}}{N_{\alpha}}(b_{l+1}-\beta),\frac{N_{\beta}(\beta-\alpha)}{N_{\alpha}+N_{\beta}})$
and $\epsilon_{\beta}=-\frac{N_{\alpha}}{N_{\beta}}\epsilon_{\alpha}$.
* •
If ($\frac{N_{\alpha}}{N_{\beta}}<0$,
$\frac{(S_{\alpha}-E)N_{\beta}-(S_{\beta}+E)N_{\alpha}}{N_{\beta}}>0$,
$1+\frac{N_{\alpha}}{N_{\beta}}\geq 0$), we convert $(\alpha,\beta)$ to
$(\alpha-\epsilon_{\alpha},\beta-\epsilon_{\beta})$ where
$\epsilon_{\alpha}=min(\alpha-a_{k},-\frac{N_{\beta}}{N_{\alpha}}(\beta-
b_{l}))$ and $\epsilon_{\beta}=-\frac{N_{\alpha}}{N_{\beta}}\epsilon_{\alpha}$.
* •
If ($\frac{N_{\alpha}}{N_{\beta}}<0$,
$\frac{(S_{\alpha}-E)N_{\beta}-(S_{\beta}+E)N_{\alpha}}{N_{\beta}}>0$,
$1+\frac{N_{\alpha}}{N_{\beta}}<0$), we convert $(\alpha,\beta)$ to
$(\alpha-\epsilon_{\alpha},\beta-\epsilon_{\beta})$ where
$\epsilon_{\alpha}=min(\alpha-a_{k},-\frac{N_{\beta}}{N_{\alpha}}(\beta-
b_{l}),-\frac{N_{\beta}(\beta-\alpha)}{N_{\alpha}+N_{\beta}})$ and
$\epsilon_{\beta}=-\frac{N_{\alpha}}{N_{\beta}}\epsilon_{\alpha}$.
* •
If ($\frac{N_{\alpha}}{N_{\beta}}>0$,
$\frac{(S_{\alpha}-E)N_{\beta}-(S_{\beta}+E)N_{\alpha}}{N_{\beta}}\leq 0$), we
convert $(\alpha,\beta)$ to
$(\alpha+\epsilon_{\alpha},\beta-\epsilon_{\beta})$ where
$\epsilon_{\alpha}=min(a_{k+1}-\alpha,\frac{N_{\beta}}{N_{\alpha}}(\beta-
b_{l}),\frac{N_{\beta}(\beta-\alpha)}{N_{\alpha}+N_{\beta}})$ and
$\epsilon_{\beta}=\frac{N_{\alpha}}{N_{\beta}}\epsilon_{\alpha}$.
* •
If ($\frac{N_{\alpha}}{N_{\beta}}>0$,
$\frac{(S_{\alpha}-E)N_{\beta}-(S_{\beta}+E)N_{\alpha}}{N_{\beta}}>0$), we
convert $(\alpha,\beta)$ to
$(\alpha-\epsilon_{\alpha},\beta+\epsilon_{\beta})$ where
$\epsilon_{\alpha}=min(\alpha-
a_{k},\frac{N_{\beta}}{N_{\alpha}}(b_{l+1}-\beta))$ and
$\epsilon_{\beta}=\frac{N_{\alpha}}{N_{\beta}}\epsilon_{\alpha}$.
∎
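As a numeric sanity check of the proof, the sketch below verifies, for
hypothetical values of $N_{\alpha}$, $N_{\beta}$, $S_{\alpha}$, $S_{\beta}$,
and $E$ (a Case 1 instance), that the error change along the flip-preserving
direction matches the slope in Equation 18, so one of the two opposite moves
never increases the total error.

```python
# Hypothetical Case 1 instance: N_alpha and N_beta of opposite signs.
N_a, N_b = 2.0, -3.0
S_a, S_b, E = 0.4, -0.7, 0.5

eps_a = 1e-3                        # small enough to stay in-between
eps_b = -(N_a / N_b) * eps_a        # flip-preserving direction

# Error change for the two opposite moves (Equation 18).
d_plus = (S_a - E) * eps_a + (S_b + E) * eps_b
d_minus = -d_plus

slope = ((S_a - E) * N_b - (S_b + E) * N_a) / N_b
assert abs(d_plus - slope * eps_a) < 1e-12
assert min(d_plus, d_minus) <= 0    # one move is always non-increasing
```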
## Appendix D Trade-off for other datasets
We continue from Section 4.2 and show the trade-off results on the AdultCensus
and Credit datasets in Figure 15. The results are similar to the COMPAS
dataset in Figure 7 where there is a clear trade-off between accuracy and
fairness.
(a) AdultCensus
(b) Credit
Figure 15. Trade-off curves on the AdultCensus and Credit datasets.
## Appendix E Bank and LSAC Datasets Results
We continue from Section 4.1 and show experimental results on the Bank and
LSAC datasets when training a logistic regression model. Table 7 shows
consistency scores with respect to the threshold-based similarity matrix. Both
datasets have consistency scores of almost 1.0, which means that
they are already inherently fair in terms of individual fairness. The reason
is that these datasets have much smaller total errors compared to other
fairness datasets.
Dataset | Test Accuracy | Consistency Score
---|---|---
Bank | 0.956 | 0.997
LSAC | 0.825 | 0.986
Table 7. Accuracy and fairness results using logistic regression on the Bank
and LSAC datasets.
## Appendix F Optimization Solutions for COMPAS
We continue from Section 4.4 and perform the same experiments on the COMPAS
dataset where we use the kNN-based similarity matrix. In Figure 16, the key
trends are still similar to Figure 10: iFlipper (1) always satisfies the
total error limit, whereas Greedy and Gradient result in infeasible solutions
in some cases, and KMeans returns feasible solutions but flips too many labels
(Figure 16(a)); (2) provides the solution closest to the optimal in terms of
the number of label flips (Figure 16(b)); and (3) is much faster than the
other optimization solutions (Figure 16(c)). Compared to the AdultCensus
results in Figure 10, iFlipper is even more efficient here because the COMPAS
dataset is relatively small and has a smaller total error.
(a) Total error
(b) Number of flips
(c) Runtime (sec)
Figure 16. A detailed comparison of iFlipper against three naïve solutions
(Greedy, Gradient, and KMeans) and ILP solver on the COMPAS dataset where we
use the kNN-based similarity matrix. Here the initial amount of total error is
16,454.0. We show the results for three different total error limits ($m$).
All three subfigures use the same legends.
## Appendix G Ablation Study for COMPAS dataset
We continue from Section 4.5 and provide the ablation study for the COMPAS
dataset where we use the kNN-based similarity matrix in Figure 17. The
observations are similar to those of Figure 11 where both adaptive rounding
and reverse greedy algorithms are necessary for iFlipper to provide a near-
exact solution. In addition, Table 8 shows the average runtime of each
component in iFlipper in Figure 17 and the results are similar to Table 4
where the proposed algorithms are efficient in practice.
(a) Total error
(b) Number of flips
Figure 17. Ablation study for iFlipper on the COMPAS dataset and the kNN-based similarity matrix.
Method | Avg. Runtime (sec)
---|---
LP Solver (effectively includes Alg. 1) | 5.87
\+ Adaptive Rounding (Alg. 3) | 0.09
\+ Reverse Greedy (Alg. 4) | 2.19
Table 8. Avg. runtimes of iFlipper’s components in Figure 17.
## Appendix H Comparison with other ML models
In Section 4.3, we compared iFlipper with the baselines using logistic
regression. In this section, we perform the same experiments using random
forest and neural network models. Figure 18 and Figure 19 are the trade-off
results using the random forest and neural network, respectively. The key
trends are still similar to Figure 8 where iFlipper consistently outperforms
the baselines in terms of accuracy and fairness. The results clearly
demonstrate how iFlipper’s pre-processing algorithm benefits various ML
models.
(a) COMPAS-kNN
(b) AdultCensus-kNN
(c) Credit-kNN
(d) COMPAS-threshold
(e) AdultCensus-threshold
(f) Credit-threshold
Figure 18. Accuracy-fairness trade-offs of random forest on the three datasets
using the two similarity matrices. In addition to the four methods LFR, iFair,
PFR, and iFlipper, we add the result of model training without any pre-
processing and call it “Original.” As a result, only iFlipper shows a clear
accuracy and fairness trade-off.
(a) COMPAS-kNN
(b) AdultCensus-kNN
(c) Credit-kNN
(d) COMPAS-threshold
(e) AdultCensus-threshold
(f) Credit-threshold
Figure 19. Accuracy-fairness trade-offs of neural network on the three
datasets using the two similarity matrices. In addition to the four methods
LFR, iFair, PFR, and iFlipper, we add the result of model training without any
pre-processing and call it “Original.” As a result, only iFlipper shows a
clear accuracy and fairness trade-off.
# Evolutionary Data Measures: Understanding the Difficulty of Text
Classification Tasks
Edward Collins
Wluper Ltd.
London, United Kingdom
<EMAIL_ADDRESS>
Nikolai Rozanov
Wluper Ltd.
London, United Kingdom
<EMAIL_ADDRESS>
Bingbing Zhang
Wluper Ltd.
London, United Kingdom
<EMAIL_ADDRESS>
###### Abstract
Classification tasks are usually analysed and improved through new model
architectures or hyperparameter optimisation but the underlying properties of
datasets are discovered on an ad-hoc basis as errors occur. However,
understanding the properties of the data is crucial in perfecting models. In
this paper we analyse exactly which characteristics of a dataset best
determine how difficult that dataset is for the task of text classification.
We then propose an intuitive measure of difficulty for text classification
datasets which is simple and fast to calculate. We show that this measure
generalises to unseen data by comparing it to state-of-the-art datasets and
results. This measure can be used to analyse the precise source of errors in a
dataset and allows fast estimation of how difficult a dataset is to learn. We
searched for this measure by training 12 classical and neural network based
models on 78 real-world datasets, then using a genetic algorithm to discover
the best measure of difficulty. Our difficulty-calculating code
(https://github.com/Wluper/edm) and datasets (http://data.wluper.com) are
publicly available.
## 1 Introduction
If a machine learning (ML) model is trained on a dataset then the same machine
learning model on the same dataset but with more granular labels will
frequently have lower performance scores than the original model (see results
in Zhang et al. (2015); Socher et al. (2013a); Yogatama et al. (2017); Joulin
et al. (2016); Xiao and Cho (2016); Conneau et al. (2017)). Adding more
granularity to labels makes the dataset harder to learn - it increases the
dataset’s difficulty. It is obvious that some datasets are more difficult for
learning models than others, but is it possible to quantify this “difficulty”?
In order to do so, it would be necessary to understand exactly what
characteristics of a dataset are good indicators of how well models will
perform on it so that these could be combined into a single measure of
difficulty.
Such a difficulty measure would be useful as an analysis tool and as a
performance estimator. As an analysis tool, it would highlight precisely what
is causing difficulty in a dataset, reducing the time practitioners need to spend
analysing their data. As a performance estimator, when practitioners approach
new datasets they would be able to use this measure to predict how well models
are likely to perform on the dataset.
The complexity of datasets for ML has been previously examined Ho and Basu
(2002); Mansilla and Ho (2004); Bernadó-Mansilla and Ho (2005); Macià et al.
(2008), but these works focused on analysing feature space data in
$\mathbb{R}^{n}$. These methods do not easily apply to natural language, because
they would require the language be embedded into feature space in some way,
for example with a word embedding model which introduces a dependency on the
model used. We extend previous notions of difficulty to English language text
classification, an important component of natural language processing (NLP)
applicable to tasks such as sentiment analysis, news categorisation and
automatic summarisation Socher et al. (2013a); Antonellis et al. (2006);
Collins et al. (2017). All of our recommended calculations depend only on
counting the words in a dataset.
### 1.1 Related Work
One source of difficulty in a dataset is mislabelled items of data (noise).
Brodley and Friedl (1999) showed that filtering noise could produce large
gains in model performance, potentially yielding larger improvements than
hyperparameter optimisation Smith et al. (2014). We ignored noise in this work
because it can be reduced with proper data cleaning and is not a part of the
true signal of the dataset. We identified four other areas of potential
difficulty which we attempt to measure:
##### Class Interference.
Text classification tasks to predict the 1 - 5 star rating of a review are
more difficult than predicting whether a review is positive or negative Zhang
et al. (2015); Socher et al. (2013a); Yogatama et al. (2017); Joulin et al.
(2016); Xiao and Cho (2016); Conneau et al. (2017), as reviews given four
stars share many features with those given five stars. Gupta et al. (2014)
describe how as the number of classes in a dataset increases, so does the
potential for “confusability” where it becomes difficult to tell classes
apart, therefore making a dataset more difficult. Previous work has mostly
focused on this confusability - or class interference - as a source of
difficulty in machine learning tasks Bernadó-Mansilla and Ho (2005); Ho and
Basu (2000, 2002); Elizondo et al. (2009); Mansilla and Ho (2004), a common
technique being to compute a minimum spanning tree on the data and count the
number of edges which link different classes.
##### Class Diversity.
Class diversity provides information about the composition of a dataset by
measuring the relative abundances of different classes Shannon (2001).
Intuitively, it gives a measure of how well a model could do on a dataset
without examining any data items and always predicting the most abundant
class. Datasets with a single overwhelming class are easy to achieve high
accuracies on by always predicting the most abundant class. A measure of
diversity is one feature used by Bingel and Søgaard (2017) to identify
datasets which would benefit from multi-task learning.
##### Class Balance.
Unbalanced classes are a known problem in machine learning Chawla et al.
(2004, 2002), particularly if classes are not easily separable Japkowicz
(2000). Underrepresented classes are more difficult to learn because models
are not exposed to them as often.
##### Data Complexity.
Humans find some pieces of text more difficult to comprehend than others. How
difficult a piece of text is to read can be calculated automatically using
measures such as those proposed by Mc Laughlin (1969); Senter and Smith
(1967); Kincaid et al. (1975). If a piece of text is more difficult for a
human to read and understand, the same may be true for an ML model.
## 2 Method
We used 78 text classification datasets and trained 12 different ML algorithms
on each of the datasets, for a total of 936 trained models. The highest macro
F1 score Powers (2011) achieved on the test set by each model was recorded.
Macro F1 score is used because it remains valid under imbalanced classes.
We then calculated 48 different statistics which attempt to measure our four
hypothesised areas of difficulty for each dataset. We then needed to discover
which statistic or combination thereof correlated with model F1 scores.
We wanted the discovered difficulty measure to be useful as an analysis tool,
so we enforced a restriction that the difficulty measure should be composed
only by summation, without weighting the constituent statistics. This meant
that each difficulty measure could be used as an analysis tool by examining
its components and comparing them to the mean across all datasets.
Each difficulty measure was represented as a binary vector of length 48 - one
bit for each statistic - each bit being 1 if that statistic was used in the
difficulty measure. We therefore had $2^{48}$ possible different difficulty
measures that may have correlated with model score and needed to search this
space efficiently.
Genetic algorithms are biologically inspired search algorithms and are good at
searching large spaces efficiently Whitley (1994). They maintain a population
of candidate difficulty measures and combine them based on their “fitness” -
how well they correlate with model scores - so that each “parent” can pass on
pieces of information about the search space Jiao and Wang (2000). Using a
genetic algorithm, we efficiently discovered which of the possible
combinations of statistics correlated with model performance.
### 2.1 Datasets
Dataset Name | Num. Class. | Train Size | Valid Size | Test Size
---|---|---|---|---
AG’s News Zhang et al. (2015) | 4 | 108000 | 12000 | 7600
Airline Twitter Sentiment FigureEight (2018) | 3 | 12444 | - | 2196
ATIS Price (1990) | 26 | 9956 | - | 893
Corporate Messaging FigureEight (2018) | 4 | 2650 | - | 468
ClassicLit | 4 | 40489 | 5784 | 11569
DBPedia wiki.dbpedia.org (2015) | 14 | 50400 | 5600 | 7000
Deflategate FigureEight (2018) | 5 | 8250 | 1178 | 2358
Disaster Tweets FigureEight (2018) | 2 | 7597 | 1085 | 2172
Economic News Relevance FigureEight (2018) | 2 | 5593 | 799 | 1599
Grammar and Product Reviews Datafiniti (2018) | 5 | 49730 | 7105 | 14209
Hate Speech Davidson et al. (2017) | 3 | 17348 | 2478 | 4957
Large Movie Review Corpus Maas et al. (2011) | 2 | 35000 | 5000 | 10000
London Restaurant Reviews (TripAdvisor, https://www.kaggle.com/PromptCloudHQ/londonbased-restaurants-reviews-on-tripadvisor) | 5 | 12056 | 1722 | 3445
New Year’s Tweets FigureEight (2018) | 10 | 3507 | 501 | 1003
New Year’s Tweets FigureEight (2018) | 115 | 3507 | 501 | 1003
Paper Sent. Classification archive.ics.uci.edu (2018) | 5 | 2181 | 311 | 625
Political Social Media FigureEight (2018) | 9 | 3500 | 500 | 1000
Question Classification Li and Roth (2002) | 6 | 4906 | 546 | 500
Review Sentiments Kotzias et al. (2015) | 2 | 2100 | 300 | 600
Self Driving Car Sentiment FigureEight (2018) | 6 | 6082 | - | 1074
SMS Spam Collection Almeida and Hidalgo (2011) | 2 | 3901 | 558 | 1115
SNIPS Intent Classification Coucke et al. (2018) | 7 | 13784 | - | 700
Stanford Sentiment Treebank Socher et al. (2013a) | 3 | 236076 | 1100 | 2210
Stanford Sentiment Treebank Socher et al. (2013a) | 2 | 117220 | 872 | 1821
Text Emotion FigureEight (2018) | 13 | 34000 | - | 6000
Yelp Reviews Yelp.com (2018) | 5 | 29250 | 3250 | 2500
YouTube Spam Alberto et al. (2015) | 2 | 1363 | 194 | 391
Table 1: The 27 different publicly available datasets we gathered with
references.
We gathered 27 real-world text classification datasets from public sources,
summarised in Table 1; full descriptions are in Appendix A.
We created 51 more datasets by taking two or more of the original 27 datasets
and combining all of the data points from each into one dataset. The label for
each data item was the name of the dataset which the text originally came
from. We combined similar datasets in this way, for example two different
datasets of tweets, so that the classes would not be trivially distinguishable
- there is no dataset to classify text as either a tweet or Shakespeare for
example as this would be too easy for models. The full list of combined
datasets is in Appendix A.2.
Our datasets focus on short text classification by limiting each data item to
100 words. We demonstrate that the difficulty measure we discover with this
setup generalises to longer text classification in Section 3.1. All datasets
were lowercase with no punctuation. For datasets with no validation set, 15%
of the training set was randomly sampled as a validation set at runtime.
### 2.2 Dataset Statistics
We calculated 12 distinct statistics with different n-gram sizes to produce 48
statistics of each dataset. These statistics are designed to increase in value
as difficulty increases. The 12 statistics are described here and a listing of
the full 48 is in Appendix B in Table 5. We used n-gram sizes from unigrams up
to 5-grams and recorded the average of each statistic over all n-gram sizes.
All probability distributions were count-based - the probability of a
particular n-gram / class / character was the count of occurrences of that
particular entity divided by the total count of all entities.
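For concreteness, these count-based distributions can be computed with a
helper along the following lines (the function names are ours, not those of
the released code):

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def count_distribution(items):
    """Count-based probability: count of each item / total count."""
    counts = Counter(items)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

# e.g. the unigram distribution of a tiny two-document corpus
docs = [["the", "cat", "sat"], ["the", "dog", "sat"]]
p = count_distribution(g for doc in docs for g in ngrams(doc, 1))
# {('the',): 1/3, ('sat',): 1/3, ('cat',): 1/6, ('dog',): 1/6}
```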
#### 2.2.1 Class Diversity
We recorded the Shannon Diversity Index and its normalised variant the Shannon
Equitability Shannon (2001) using the count-based probability distribution of
classes described above.
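A minimal sketch of both statistics, assuming the standard definitions
$H=-\sum_{i}p_{i}\ln p_{i}$ and equitability $E_{H}=H/\ln k$ for $k$ classes:

```python
import math

def shannon_diversity(probs):
    """Shannon index H = -sum_i p_i * ln(p_i)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def shannon_equitability(probs):
    """H normalised by its maximum ln(k); 1 means perfectly even."""
    k = sum(1 for p in probs if p > 0)
    return shannon_diversity(probs) / math.log(k) if k > 1 else 0.0

# class distribution of a 3-class dataset with counts 50 / 30 / 20
counts = [50, 30, 20]
probs = [c / sum(counts) for c in counts]
print(shannon_diversity(probs), shannon_equitability(probs))
```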
#### 2.2.2 Class Balance
We propose a simple measure of class imbalance:
$\displaystyle
Imbal=\sum_{c=1}^{C}\left|\frac{1}{C}-\frac{n_{c}}{T_{DATA}}\right|$ (1)
$C$ is the total number of classes, $n_{c}$ is the count of items in class $c$
and $T_{DATA}$ is the total number of data points. This statistic is 0 if
there are an equal number of data points in every class and the upper bound is
$2\left(1-\frac{1}{C}\right)$ and is achieved when one class has all the data
points - a proof is given in Appendix B.2.
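A direct implementation of Equation 1 (the helper name is ours):

```python
def class_imbalance(class_counts):
    """Equation 1: sum over classes of |1/C - n_c / T_DATA|."""
    C = len(class_counts)
    T = sum(class_counts)
    return sum(abs(1 / C - n / T) for n in class_counts)

print(class_imbalance([25, 25, 25, 25]))  # 0.0: perfectly balanced
print(class_imbalance([100, 0, 0, 0]))    # 1.5 = 2 * (1 - 1/4), the bound
```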
#### 2.2.3 Class Interference
Per-class probability distributions were calculated by splitting the dataset
into subsets based on the class of each data point and then computing count-
based probability distributions as described above for each subset.
##### Hellinger Similarity
One minus both the average and minimum Hellinger Distance Le Cam and Yang
(2012) between each pair of classes. Hellinger Distance is 0 if two
probability distributions are identical so we subtract this from 1 to give a
higher score when two classes are similar giving the Hellinger Similarity. One
minus the minimum Hellinger Distance is the maximum Hellinger Similarity
between classes.
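A sketch of this statistic for two count-based distributions, using the
standard definition of the Hellinger Distance (the paper does not spell out
its exact implementation):

```python
import math

def hellinger_distance(p, q):
    """Hellinger Distance between two discrete distributions (dicts)."""
    keys = set(p) | set(q)
    s = sum((math.sqrt(p.get(k, 0.0)) - math.sqrt(q.get(k, 0.0))) ** 2
            for k in keys)
    return math.sqrt(s / 2.0)   # 0 if identical, 1 if disjoint supports

def hellinger_similarity(p, q):
    return 1.0 - hellinger_distance(p, q)

p = {"good": 0.5, "bad": 0.5}
print(hellinger_similarity(p, dict(p)))        # 1.0: maximal interference
print(hellinger_similarity(p, {"plot": 1.0}))  # 0.0: disjoint supports
```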
##### Top N-Gram Interference
Average Jaccard similarity Jaccard (1912) between the set of the top 10 most
frequent n-grams from each class. N-grams entirely composed of stopwords were
ignored.
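A possible implementation of this statistic; the stopword set below is an
illustrative stand-in for whatever list the authors used:

```python
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "and", "of", "to"}  # illustrative only

def top_ngrams(ngram_list, n_top=10):
    """Top-N most frequent n-grams, ignoring all-stopword n-grams."""
    counts = Counter(g for g in ngram_list
                     if not all(tok in STOPWORDS for tok in g))
    return {g for g, _ in counts.most_common(n_top)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def top_ngram_interference(ngrams_by_class, n_top=10):
    """Average pairwise Jaccard similarity of per-class top n-gram sets."""
    tops = [top_ngrams(gs, n_top) for gs in ngrams_by_class]
    pairs = [(i, j) for i in range(len(tops))
             for j in range(i + 1, len(tops))]
    return sum(jaccard(tops[i], tops[j]) for i, j in pairs) / len(pairs)
```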
##### Mutual Information
Average mutual information Cover and Thomas (2012) score between the set of
the top 10 most frequent n-grams from each class. N-grams entirely composed of
stopwords were ignored.
#### 2.2.4 Data Complexity
##### Distinct n-grams : Total n-grams
Count of distinct n-grams in a dataset divided by the total number of n-grams.
A score of 1 indicates that each n-gram occurs only once in the dataset.
##### Inverse Flesch Reading Ease
The Flesch Reading Ease (FRE) formula grades text from 100 to 0, 100
indicating most readable and 0 indicating difficult to read Kincaid et al.
(1975). We take the reciprocal of this measure.
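A rough sketch of this statistic; the paper does not specify its sentence or
syllable counting, so the helpers below are naive stand-ins:

```python
import re

def naive_syllables(word):
    # crude estimate: one syllable per run of consecutive vowels
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def inverse_flesch_reading_ease(text):
    """Reciprocal of FRE = 206.835 - 1.015 (words / sentences)
    - 84.6 (syllables / words); higher values indicate harder text."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    syllables = sum(naive_syllables(w) for w in words)
    fre = (206.835 - 1.015 * (len(words) / sentences)
           - 84.6 * (syllables / len(words)))
    return 1.0 / fre   # note: FRE can approach 0 for extreme text

print(inverse_flesch_reading_ease("the cat sat on the mat."))
```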
##### N-Gram and Character Diversity
Using the Shannon Index and Equitability described by Shannon (2001) we
calculate the diversity and equitability of n-grams and characters.
Probability distributions are count-based as described at the start of this
section.
### 2.3 Models
Word Embedding Based | tf-idf Based | Character Based
---|---|---
LSTM-RNN | Adaboost | 3 layer CNN
GRU-RNN | Gaussian Naive Bayes (GNB) | -
Bidirectional LSTM-RNN | 5-Nearest Neighbors | -
Bidirectional GRU-RNN | (Multinomial) Logistic Regression | -
Multilayer Perceptron (MLP) | Random Forest | -
- | Support Vector Machine | -
Table 2: Models summary organised by which input type they use.
To ensure that any discovered measures did not depend on which model was used
(i.e. that they were model agnostic), we trained 12 models on every dataset.
The models are summarised in Table 2. Hyperparameters were not optimised and
were identical across all datasets. Specific implementation details of the
models are described in Appendix C. Models were evaluated using the macro
F1-Score. These models used three different representations of text to learn
from to ensure that the discovered difficulty measure did not depend on the
representation. These are:
##### Word Embeddings
Our neural network models excluding the Convolutional Neural Network (CNN)
used 128-dimensional FastText Bojanowski et al. (2016) embeddings trained on
the One Billion Word corpus Chelba et al. (2013) which provided an open
vocabulary across the datasets.
##### Term Frequency Inverse Document Frequency (tf-idf)
Our classical machine learning models represented each data item as a tf-idf
vector Ramos et al. (2003). This vector has one entry for each word in the
vocab and if a word occurs in a data item, then that position in the vector is
the word’s tf-idf score.
##### Characters
Our CNN, inspired by Zhang et al. (2015), sees only the characters of each
data item. Each character is assigned an ID and the list of IDs is fed into
the network.
### 2.4 Genetic Algorithm
The genetic algorithm maintains a population of candidate difficulty measures,
each being a binary vector of length 48 (see start of Method section). At each
time step, it will evaluate each member of the population using a fitness
function. It will then select pairs of parents based on their fitness, and
perform crossover and mutation on each pair to produce a new child difficulty
measure, which is added to the next population. This process is iterated until
the fitness in the population no longer improves.
##### Population
The genetic algorithm is non-randomly initialised with the 48 statistics
described in Section 2.2 - each one is a difficulty measure composed of a
single statistic. 400 pairs of parents are sampled with replacement from each
population, so populations after this first time step will consist of 200
candidate measures. The probability of a measure being selected as a parent is
proportional to its fitness.
##### Fitness Function
The fitness function of each difficulty measure is based on the Pearson
correlation Benesty et al. (2009). Firstly, the Pearson correlation between
the difficulty measure and the model test set score is calculated for each
individual model. The harmonic mean of the correlations of each model is then
taken, yielding the fitness of that difficulty measure. Harmonic mean is used
because it is dominated by its lowest constituents, so if it is high then
correlation must be high for every model.
##### Crossover and Mutation
To produce a new difficulty measure from two parents, the constituent
statistics of each parent are randomly intermingled, allowing each parent to
pass on information about the search space. This is done in the following way:
for each of the 48 statistics, one of the two parents is randomly selected and
if the parent uses that statistic, the child also does. This produces a child
which has features of both parents. To introduce more stochasticity to the
process and ensure that the algorithm does not get trapped in a local optimum
of fitness, the child is mutated. Mutation is performed by randomly adding or
taking away each of the 48 statistics with probability 0.01. After this
process, the child difficulty measure is added to the new population.
##### Training
The process of calculating fitness, selecting parents and creating child
difficulty measures is iterated until there has been no improvement in fitness
for 15 generations. Due to the stochasticity in the process, we run the whole
evolution 50 times. We run 11 different variants of this evolution, leaving
out different statistics of the dataset each time to test which are most
important in finding a good difficulty measure, in total running 550
evolutions. Training time is fast, averaging 79 seconds per evolution with a
standard deviation of 25 seconds, determined over 50 runs of the algorithm on
a single CPU.
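A condensed sketch of the evolution loop described above; the array shapes,
helper names, and population bookkeeping are our own simplifications (and
`rng` is assumed to be a `numpy.random.Generator`), not the authors' released
implementation:

```python
import numpy as np
from scipy.stats import pearsonr, hmean

N_STATS = 48

def fitness(measure, stats, scores):
    """Harmonic mean over models of the negated Pearson correlation
    between a candidate measure and model F1 scores. `measure` is a
    boolean vector (48,), `stats` is (n_datasets, 48), and `scores`
    is (n_datasets, n_models)."""
    if not measure.any():
        return 1e-9
    difficulty = stats[:, measure].sum(axis=1)   # unweighted summation
    corrs = [max(1e-9, -pearsonr(difficulty, scores[:, m])[0])
             for m in range(scores.shape[1])]
    return hmean(corrs)

def evolve(stats, scores, rng, mut_p=0.01, patience=15):
    # non-random initialisation: one single-statistic measure per bit
    pop = [np.eye(N_STATS, dtype=bool)[i] for i in range(N_STATS)]
    best, stale = 0.0, 0
    while stale < patience:
        fits = np.array([fitness(ind, stats, scores) for ind in pop])
        best, stale = (fits.max(), 0) if fits.max() > best else (best, stale + 1)
        probs = fits / fits.sum()                # fitness-proportional
        children = []
        for _ in range(200):                     # next population
            i, j = rng.choice(len(pop), size=2, p=probs)
            mask = rng.random(N_STATS) < 0.5     # uniform crossover
            child = np.where(mask, pop[i], pop[j])
            child ^= rng.random(N_STATS) < mut_p  # bit-flip mutation
            children.append(child)
        pop = children
    finals = [fitness(ind, stats, scores) for ind in pop]
    return pop[int(np.argmax(finals))]
```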
## 3 Results and Discussion
The four hypothesized areas of difficulty - Class Diversity, Balance and
Interference and Data Complexity - combined give a model agnostic measure of
difficulty. All runs of the genetic algorithm produced different combinations
of statistics which had strong negative correlation with model scores on the
78 datasets. The mean correlation was $-0.8795$ and the standard deviation was
$0.0046$. Of the measures found through evolution we present two of particular
interest:
1.
D1: Distinct Unigrams : Total Unigrams + Class Imbalance + Class Diversity +
Top 5-Gram Interference + Maximum Unigram Hellinger Similarity + Unigram
Mutual Info. This measure achieves the highest correlation of all measures at
$-0.8845$.
2.
D2: Distinct Unigrams : Total Unigrams + Class Imbalance + Class Diversity +
Maximum Unigram Hellinger Similarity + Unigram Mutual Info. This measure is
the shortest measure which achieves a higher correlation than the mean, at
$-0.8814$. This measure is plotted against model F1 scores in Figure 1.
Figure 1: Model F1 scores against difficulty measure D2 for each of the three
input types.
We perform detailed analysis on difficulty measure D2 because it relies only
on the words of the dataset and requires just five statistics. This simplicity
makes it interpretable and fast to calculate. All difficulty measures which
achieved a correlation better than $-0.88$ are listed in Appendix D, where
Figure 3 also visualises how often each metric was selected.
### 3.1 Does it Generalise?
Model | AG | Sogou | Yelp P. | Yelp F. | DBP | Yah A. | Amz. P. | Amz. F. | Corr.
---|---|---|---|---|---|---|---|---|---
D2 | 3.29 | 3.77 | 3.59 | 4.42 | 3.50 | 4.51 | 3.29 | 4.32 | -
char-CNN Zhang et al. (2015) | 87.2 | 95.1 | 94.7 | 62 | 98.3 | 71.2 | 95.1 | 59.6 | -0.86
Bag of Words Zhang et al. (2015) | 88.8 | 92.9 | 92.2 | 57.9 | 96.6 | 68.9 | 90.4 | 54.6 | -0.87
Discrim. LSTM Yogatama et al. (2017) | 92.1 | 94.9 | 92.6 | 59.6 | 98.7 | 73.7 | - | - | -0.87
Genertv. LSTM Yogatama et al. (2017) | 90.6 | 90.3 | 88.2 | 52.7 | 95.4 | 69.3 | - | - | -0.88
Kneser-Ney Bayes Yogatama et al. (2017) | 89.3 | 94.6 | 81.8 | 41.7 | 95.4 | 69.3 | - | - | -0.79
FastText Lin. Class. Joulin et al. (2016) | 91.5 | 93.9 | 93.8 | 60.4 | 98.1 | 72 | 91.2 | 55.8 | -0.86
Char CRNN Xiao and Cho (2016) | 91.4 | 95.2 | 94.5 | 61.8 | 98.6 | 71.7 | 94.1 | 59.2 | -0.88
VDCNN Conneau et al. (2017) | 91.3 | 96.8 | 95.7 | 64.7 | 98.7 | 73.4 | 95.7 | 63 | -0.88
Harmonic Mean | | | | | | | | | -0.86
Table 3: Difficulty measure D2 compared to recent results from papers on
large-scale text classification. The correlation column reports the
correlation between difficulty measure D2 and the model scores for that row.
A difficulty measure is useful as an analysis and performance estimation tool
if it is model agnostic and provides an accurate difficulty estimate on unseen
datasets.
When running the evolution, the F1 scores of our character-level CNN were not
observed by the genetic algorithm. If the discovered difficulty measure still
correlated with the CNN’s scores despite never having seen them during
evolution, it is more likely to be model agnostic. The CNN has a different
model architecture to the other models and has a different input type which
encodes no prior knowledge (as word embeddings do) or contextual information
about the dataset (as tf-idf does). D1 has a correlation of $-0.9010$ with the
CNN and D2 has a correlation of $-0.8974$, which suggests that neither of our
presented measures depends on which model is used.
One of the limitations of our method was that our models never saw text that
was longer than 100 words and were never trained on any very large datasets
(i.e. $>$1 million data points). We also performed no hyperparameter
optimisation and did not use state-of-the-art models. To test whether our
measure generalises to large datasets with text longer than 100 words, we
compared it to some recent state-of-the-art results in text classification
using the eight datasets described by Zhang et al. (2015). These results are
presented in Table 3 and highlight several important findings.
##### The Difficulty Measure Generalises to Very Large Datasets and Long Data
Items.
The smallest of the eight datasets described by Zhang et al. (2015) has
120,000 data points and the largest has 3.6 million. As D2 still has a strong
negative correlation with model score on these datasets, it seems to
generalise to large datasets. Furthermore, these large datasets do not have an
upper limit of data item length (the mean data item length in Yahoo Answers is
520 words), yet D2 still has strong negative correlation with model score,
showing that it does not depend on data item length.
##### The Difficulty Measure is Model and Input Type Agnostic.
The state-of-the-art models presented in Table 3 have undergone hyperparameter
optimisation and use different input types including per-word learned
embeddings Yogatama et al. (2017), n-grams, characters and n-gram embeddings
Joulin et al. (2016). As D2 still has a strong negative correlation with these
models’ scores, we can conclude that it has accurately measured the difficulty
of a dataset in a way that is useful regardless of which model is used.
##### The Difficulty Measure Lacks Precision.
The average score achieved on the Yahoo Answers dataset is $69.9\%$ and its
difficulty is $4.51$. The average score achieved on Yelp Full is $56.8\%$,
$13.1$ percentage points lower, yet its difficulty is $4.42$. In ML terms, a
difference of $13\%$ is significant, yet our difficulty measure assigns the
higher difficulty to the easier dataset. However, Yahoo Answers, Yelp Full and Amazon
Full, the only three of Zhang et al. (2015)’s datasets for which the state-of-
the-art is less than $90\%$, all have difficulty scores $>4$, whereas the five
datasets with scores $>90\%$ all have difficulty scores between 3 and 4. This
indicates that the difficulty measure in its current incarnation may be more
effective at assigning a class of difficulty to datasets, rather than a
regression-like value.
### 3.2 Difficulty Measure as an Analysis Tool
Statistic | Mean | Sigma
---|---|---
Distinct Words : Total Words | 0.0666 | 0.0528
Class Imbalance | 0.503 | 0.365
Class Diversity | 0.905 | 0.759
Max. Unigram Hellinger Similarity | 0.554 | 0.165
Top Unigram Mutual Info | 1.23 | 0.430
Table 4: Means and standard deviations of the constituent statistics of
difficulty measure D2 across the 78 datasets from this paper and the eight
datasets from Zhang et al. (2015).
As our difficulty measure has no dependence on learned weightings or complex
combinations of statistics - only addition - it can be used to analyse the
sources of difficulty in a dataset directly. To demonstrate, consider the
following dataset:
##### Stanford Sentiment Treebank Binary Classification (SST_2) Socher et al. (2013b)
SST is a dataset of movie reviews for which the task is to classify the
sentiment of each review. The current state-of-the-art accuracy is $91.8\%$
Radford et al. (2017).
Figure 2: Constituents of difficulty measure D2 for SST, compared to the mean
across all datasets.
Figure 2 shows the values of the constituent statistics of difficulty measure
D2 for SST and the mean values across all datasets. The mean (right bar) also
includes an error bar showing the standard deviation of statistic values. The
exact values of the means and standard deviations for each statistic in
measure D2 are shown in Table 4.
Figure 2 shows that for SST_2 the Mutual Information is more than one standard
deviation higher than the mean. A high mutual information score indicates that
reviews have both positive and negative features. For example, consider this
review: “de niro and mcdormand give solid performances but their screen time
is sabotaged by the story s inability to create interest”, which is labelled
“positive”. There is a positive feature referring to the actors’ performances
and a negative one referring to the plot. A solution to this would be to treat
the classification as a multi-label problem where each item can have more than
one class, although this would require that the data be relabelled by hand. An
alternate solution would be to split reviews like this into two separate ones:
one with the positive component and one with the negative.
Furthermore, Figure 2 shows that the Max. Hellinger Similarity is higher than
average for SST_2, indicating that the two classes use similar words.
Sarcastic reviews use positive words to convey a negative sentiment Maynard
and Greenwood (2014) and could contribute to this higher value, as could
mislabelled items of data. Both of these things portray one class with
features of the other - sarcasm by using positive words with a negative tone
and noise because positive examples are labelled as negative and vice versa.
This kind of difficulty can be most effectively reduced by filtering noise
Smith et al. (2014).
To show that our analysis with this difficulty measure was accurately
observing the difficulty in SST, we randomly sampled and analysed 100
misclassified data points from SST’s test set out of 150 total misclassified.
Of these 100, 48 were reviews with both strong positive and negative features
and would be difficult for a human to classify, 22 were sarcastic and 8 were
mislabelled. The remaining 22 could be easily classified by a human and are
misclassified due to errors in the model rather than the data items themselves
being difficult to interpret. These findings show that our difficulty measure
correctly determined the source of difficulty in SST because 78% of the errors
are implied by our difficulty measure and the remaining 22% are due to errors
in the model itself, not difficulty in the dataset.
### 3.3 The Important Areas of Difficulty
We hypothesised that the difficulty of a dataset would be determined by four
areas (not including noise): Class Diversity, Class Balance, Class Interference
and Text Complexity. We performed multiple runs of the genetic algorithm,
leaving statistics out each time to test which were most important in finding
a good difficulty measure. This resulted in the following findings:
##### No Single Characteristic Describes Difficulty.
When the Class Diversity statistic was left out of evolution, the highest
achieved correlation was $-0.806$, $9\%$ lower than D1 and D2. However, on its
own Class Diversity had a correlation of $-0.644$ with model performance.
Clearly, Class Diversity is necessary but not sufficient to estimate dataset
difficulty. Furthermore, when all measures of Class Diversity and Balance were
excluded, the highest achieved correlation was $-0.733$ and when all measures
of Class Interference were excluded the best correlation was $-0.727$. These
three expected areas of difficulty - Class Diversity, Balance and Interference
- must all be measured to get an accurate estimate of difficulty because
excluding any of them significantly damages the correlation that can be found.
Correlations for each individual statistic are in Table 6, in Appendix D.
##### Data Complexity Has Little Affect on Difficulty
Excluding all measures of Data Complexity from evolution yielded an average
correlation of $-0.869$, only $1\%$ lower than the average when all statistics
were included. Furthermore, the only measure of Data Complexity present in D1
and D2 is Distinct Words : Total Words which has a mean value of $0.067$ and
therefore contributes very little to the difficulty measure. This shows that
while Data Complexity is necessary to achieve top correlation, its
significance is minimal in comparison to the other areas of difficulty.
### 3.4 Error Analysis
#### 3.4.1 Overpowering Class Diversity
When a dataset has a large number of balanced classes, then Class Diversity
dominates the measure. This means that the difficulty measure is not a useful
performance estimator for such datasets.
To illustrate this, we created several fake datasets with 1000, 100, 50 and 25
classes. Each dataset had 1000 copies of the same randomly generated string in
each class. It was easy for models to overfit and achieve a 100% F1 score on
these fake datasets.
For the 1000-class fake data, Class Diversity is $6.91$, which by our
difficulty measure would indicate that the dataset is extremely difficult.
However, all models easily achieve a 100% F1 score. By testing on these fake
datasets, we found that the limit for the number of classes before Class
Diversity dominates the difficulty measure and renders it inaccurate is
approximately 25. Any datasets with more than 25 classes with an approximately
equal number of items per class will be predicted as difficult regardless of
whether they actually are because of this diversity measure.
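To make this failure mode concrete, the following is a minimal sketch of the Shannon Class Diversity computation from Appendix B.1; for 1000 perfectly balanced classes it reproduces the $6.91$ figure above, since $\ln 1000\approx 6.91$:

```python
import math
from collections import Counter

def shannon_class_diversity(labels):
    """H = -sum(p_i * ln p_i) over the class proportions (Appendix B.1)."""
    counts = Counter(labels)
    total = sum(counts.values())
    return -sum((n / total) * math.log(n / total) for n in counts.values())

# 1000 perfectly balanced classes: H = ln(1000) ~= 6.91, which swamps the
# other constituents of the measure even though the task is trivial.
balanced = [c for c in range(1000) for _ in range(10)]
print(round(shannon_class_diversity(balanced), 2))  # 6.91
```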
Datasets with more than 25 unbalanced classes are still measured accurately.
For example, the ATIS dataset Price (1990) has 26 classes but because some of
them have only 1 or 2 data items, it is not dominated by Class Diversity. Even
when the difficulty measure is dominated by Class Diversity, examining the
components of the difficulty measure independently would still be useful as an
analysis tool.
#### 3.4.2 Exclusion of Useful Statistics
One of our datasets of New Year’s Resolution Tweets has 115 classes but only
3507 data points FigureEight (2018). An ML practitioner knows from the number
of classes and data points alone that this is likely to be a difficult dataset
for an ML model.
Our genetic algorithm, based on an unweighted, linear sum, cannot take
statistics like data size into account currently because they do not have a
convenient range of values; the number of data points in a dataset can vary
from several hundred to several million. However, the information is still
useful to practitioners in diagnosing the difficulty of a dataset.
Given that the difficulty measure lacks precision and may be better suited to
classification than regression (as discussed in Section 3.1), that it cannot
take account of statistics without a convenient range of values, and that it
must remain interpretable, we suggest that future work could look at combining
statistics with a white-box, non-linear algorithm like a decision tree. As
opposed to summation, such a combination could take account of statistics with
different value ranges and perform either classification or regression while
remaining interpretable, as sketched below.
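A minimal sketch of such a combination, assuming a hypothetical feature matrix of per-dataset statistics (including raw dataset size, which the unweighted sum cannot use) and observed model scores:

```python
# A sketch of the suggested white-box combination. `stats` and `scores`
# are hypothetical; columns mix bounded statistics with raw dataset size.
from sklearn.tree import DecisionTreeRegressor, export_text

feature_names = ["class_diversity", "class_imbalance", "hellinger_sim", "num_points"]
stats = [
    [0.9, 0.50, 0.55, 3507],
    [2.1, 0.10, 0.40, 560000],
    [0.7, 0.80, 0.62, 12444],
    [1.6, 0.05, 0.35, 120000],
]
scores = [0.62, 0.95, 0.71, 0.90]

tree = DecisionTreeRegressor(max_depth=3).fit(stats, scores)
# Unlike a weighted sum, the learned thresholds stay human-readable:
print(export_text(tree, feature_names=feature_names))
```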
### 3.5 How to Reduce the Difficulty Measure
Here we present some general guidelines on how the four areas of difficulty
can be reduced.
Class Diversity can only be sensibly reduced by lowering the number of
classes, for example by grouping classes under superclasses. In academic
settings where this is not possible, hierarchical learning allows grouping of
classes but will produce granular labels at the lowest level Kowsari et al.
(2017). Ensuring a large quantity of data in each class will also help models
to better learn the features of each class.
Class Interference is influenced by the amount of noise in the data and
linguistic phenomena like sarcasm. It can also be affected by the way the data
is labelled, for example as shown in Section 3.2 where SST has data points
with both positive and negative features but only a single label. Filtering
noise, restructuring or relabelling ambiguous data points and detecting
phenomena like sarcasm will help to reduce class interference. Easily confused
classes can also be grouped under one superclass if practitioners are willing
to sacrifice granularity to gain performance.
Class Imbalance can be addressed with data augmentation such as thesaurus
based methods Zhang et al. (2015) or word embedding perturbation Zhang and
Yang (2018). Under- and over-sampling can also be utilised Chawla et al.
(2002) or more data gathered. Another option is transfer learning where
knowledge from high data domains can be transferred to those with little data
Jaech et al. (2016).
Data Complexity can be managed with large amounts of data. This need not
necessarily be labelled - unsupervised pre-training can help models understand
the form of complex data before attempting to use it Halevy et al. (2009).
Curriculum learning may also have a similar effect to pre-training Bengio et
al. (2009).
### 3.6 Other Applications of the Measure
##### Model Selection
Once the difficulty of a dataset has been calculated, a practitioner can use
this to decide whether they need a complex or simple model to learn the data.
##### Performance Checking and Prediction
Practitioners will be able to compare the results their models get to the
scores of other models on datasets of an equivalent difficulty. If their
models achieve lower results than what is expected according to the difficulty
measure, then this could indicate a problem with the model.
## 4 Conclusion
When their models do not achieve good results, ML practitioners could
potentially calculate thousands of statistics to see what aspects of their
datasets are stopping their models from learning. Given this, how do
practitioners tell which statistics are the most useful to calculate? Which
ones will tell them the most? What changes could they make which will produce
the biggest increase in model performance?
In this work, we have presented two measures of text classification dataset
difficulty which can be used as analysis tools and performance estimators. We
have shown that these measures generalise to unseen datasets. Our recommended
measure can be calculated simply by counting the words and labels of a dataset
and is formed by adding five different, unweighted statistics together. As the
difficulty measure is an unweighted sum, its components can be examined
individually to analyse the sources of difficulty in a dataset.
There are two main benefits to this difficulty measure. Firstly, it will
reduce the time that practitioners need to spend analysing their data in order
to improve model scores. As we have demonstrated which statistics are most
indicative of dataset difficulty, practitioners need only calculate these to
discover the sources of difficulty in their data. Secondly, the difficulty
measure can be used as a performance estimator. When practitioners approach
new tasks they need only calculate these simple statistics in order to
estimate how well models are likely to perform.
Furthermore, this work has shown that for text classification the areas of
Class Diversity, Balance and Interference are essential to measure in order to
understand difficulty. Data Complexity is also important, but to a lesser
extent.
Future work should firstly experiment with non-linear but interpretable
methods of combining statistics into a difficulty measure such as decision
trees. Furthermore, it should apply this difficulty measure to other NLP tasks
that may require deeper linguistic knowledge than text classification, such as
named entity recognition and parsing. Such tasks may require more advanced
features than simple word counts as were used in this work.
## References
* Alberto et al. (2015) Túlio C Alberto, Johannes V Lochter, and Tiago A Almeida. 2015. Tubespam: Comment spam filtering on youtube. In _Machine Learning and Applications (ICMLA), 2015 IEEE 14th International Conference on_ , pages 138–143. IEEE. [Online; accessed 22-Feb-2018].
* Almeida and Hidalgo (2011) Tiago A. Almeida and José María Gómez Hidalgo. 2011. Sms spam collection v. 1. [Online; accessed 25-Feb-2018].
* Antonellis et al. (2006) Ioannis Antonellis, Christos Bouras, and Vassilis Poulopoulos. 2006. Personalized news categorization through scalable text classification. In _Asia-Pacific Web Conference_ , pages 391–401. Springer.
* archive.ics.uci.edu (2018) archive.ics.uci.edu. 2018. Sentence classification data set. https://archive.ics.uci.edu/ml/datasets/Sentence+Classification. [Online; accessed 20-Feb-2018].
* Benesty et al. (2009) Jacob Benesty, Jingdong Chen, Yiteng Huang, and Israel Cohen. 2009. Pearson correlation coefficient. In _Noise reduction in speech processing_ , pages 1–4. Springer.
* Bengio et al. (2009) Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In _Proceedings of the 26th annual international conference on machine learning_ , pages 41–48. ACM.
* Bernadó-Mansilla and Ho (2005) Ester Bernadó-Mansilla and Tin Kam Ho. 2005. Domain of competence of xcs classifier system in complexity measurement space. _IEEE Transactions on Evolutionary Computation_ , 9(1):82–104.
* Bingel and Søgaard (2017) Joachim Bingel and Anders Søgaard. 2017. Identifying beneficial task relations for multi-task learning in deep neural networks. _arXiv preprint arXiv:1702.08303_.
* Bojanowski et al. (2016) Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. _arXiv preprint arXiv:1607.04606_.
* Brodley and Friedl (1999) Carla E Brodley and Mark A Friedl. 1999. Identifying mislabeled training data. _Journal of artificial intelligence research_ , 11:131–167.
* Chawla et al. (2002) Nitesh V Chawla, Kevin W Bowyer, Lawrence O Hall, and W Philip Kegelmeyer. 2002\. Smote: synthetic minority over-sampling technique. _Journal of artificial intelligence research_ , 16:321–357.
* Chawla et al. (2004) Nitesh V Chawla, Nathalie Japkowicz, and Aleksander Kotcz. 2004. Special issue on learning from imbalanced data sets. _ACM Sigkdd Explorations Newsletter_ , 6(1):1–6.
* Chelba et al. (2013) Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robinson. 2013. One billion word benchmark for measuring progress in statistical language modeling. _arXiv preprint arXiv:1312.3005_.
* Cho et al. (2014) Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. _arXiv preprint arXiv:1406.1078_.
* Collins et al. (2017) Ed Collins, Isabelle Augenstein, and Sebastian Riedel. 2017. A supervised approach to extractive summarisation of scientific papers. In _Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017)_ , pages 195–205.
* Conneau et al. (2017) Alexis Conneau, Holger Schwenk, Loïc Barrault, and Yann Lecun. 2017. Very deep convolutional networks for text classification. In _Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers_ , volume 1, pages 1107–1116.
* Coucke et al. (2018) Alice Coucke, Alaa Saade, Adrien Ball, Théodore Bluche, Alexandre Caulier, David Leroy, Clément Doumouro, Thibault Gisselbrecht, Francesco Caltagirone, Thibaut Lavril, et al. 2018. Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces. _arXiv preprint arXiv:1805.10190_.
* Cover and Thomas (2012) Thomas M Cover and Joy A Thomas. 2012. _Elements of information theory_. John Wiley & Sons.
* Datafiniti (2018) Datafiniti. 2018. Grammar and online product reviews. [Online; accessed 26-Feb-2018].
* Davidson et al. (2017) Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. _arXiv preprint arXiv:1703.04009_.
* Elizondo et al. (2009) D. A. Elizondo, R. Birkenhead, M. Gamez, N. Garcia, and E. Alfaro. 2009. Estimation of classification complexity. In _2009 International Joint Conference on Neural Networks_ , pages 764–770.
* FigureEight (2018) FigureEight. 2018. Data for everyone. https://www.figure-eight.com/data-for-everyone/. [Online; accessed 25-Feb-2018].
* Gupta et al. (2014) Maya R Gupta, Samy Bengio, and Jason Weston. 2014. Training highly multiclass classifiers. _The Journal of Machine Learning Research_ , 15(1):1461–1492.
* Halevy et al. (2009) Alon Halevy, Peter Norvig, and Fernando Pereira. 2009. The unreasonable effectiveness of data. _IEEE Intelligent Systems_ , 24(2):8–12.
* Hastie et al. (2009) Trevor Hastie, Saharon Rosset, Ji Zhu, and Hui Zou. 2009. Multi-class adaboost. _Statistics and its Interface_ , 2(3):349–360.
* Ho and Basu (2000) Tin Kam Ho and M. Basu. 2000. Measuring the complexity of classification problems. In _Proceedings 15th International Conference on Pattern Recognition. ICPR-2000_ , volume 2, pages 43–47 vol.2.
* Ho and Basu (2002) Tin Kam Ho and M. Basu. 2002. Complexity measures of supervised classification problems. _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , 24(3):289–300.
* Hochreiter and Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. _Neural computation_ , 9(8):1735–1780.
* Hong and Cho (2008) Jin-Hyuk Hong and Sung-Bae Cho. 2008. A probabilistic multi-class strategy of one-vs.-rest support vector machines for cancer classification. _Neurocomputing_ , 71(16-18):3275–3281.
* Jaccard (1912) Paul Jaccard. 1912. The distribution of the flora in the alpine zone. _New phytologist_ , 11(2):37–50.
* Jaech et al. (2016) Aaron Jaech, Larry Heck, and Mari Ostendorf. 2016. Domain adaptation of recurrent neural networks for natural language understanding. _arXiv preprint arXiv:1604.00117_.
* Japkowicz (2000) Nathalie Japkowicz. 2000. The class imbalance problem: Significance and strategies. In _Proc. of the Int’l Conf. on Artificial Intelligence_.
* Jiao and Wang (2000) Licheng Jiao and Lei Wang. 2000. A novel genetic algorithm based on immunity. _IEEE Transactions on Systems, Man, and Cybernetics-part A: systems and humans_ , 30(5):552–561.
* Joulin et al. (2016) Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for efficient text classification. _arXiv preprint arXiv:1607.01759_.
* Kågebäck et al. (2014) Mikael Kågebäck, Olof Mogren, Nina Tahmasebi, and Devdatt Dubhashi. 2014. Extractive summarization using continuous vector space models. In _Proceedings of the 2nd Workshop on Continuous Vector Space Models and their Compositionality (CVSC)_ , pages 31–39.
* Kincaid et al. (1975) J Peter Kincaid, Robert P Fishburne Jr, Richard L Rogers, and Brad S Chissom. 1975. Derivation of new readability formulas (automated readability index, fog count and flesch reading ease formula) for navy enlisted personnel. Technical report, Naval Technical Training Command Millington TN Research Branch.
* Kingma and Ba (2014) Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_.
* Kotzias et al. (2015) Dimitrios Kotzias, Misha Denil, Nando De Freitas, and Padhraic Smyth. 2015. From group to individual labels using deep features. In _Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ , pages 597–606. ACM.
* Kowsari et al. (2017) Kamran Kowsari, Donald E Brown, Mojtaba Heidarysafa, Kiana Jafari Meimandi, Matthew S Gerber, and Laura E Barnes. 2017. Hdltex: Hierarchical deep learning for text classification. In _Machine Learning and Applications (ICMLA), 2017 16th IEEE International Conference on_ , pages 364–371. IEEE.
* Le Cam and Yang (2012) Lucien Le Cam and Grace Lo Yang. 2012. _Asymptotics in statistics: some basic concepts_. Springer Science & Business Media.
* Li and Roth (2002) Xin Li and Dan Roth. 2002. Learning question classifiers. In _Proceedings of the 19th international conference on Computational linguistics-Volume 1_ , pages 1–7. Association for Computational Linguistics.
* Maas et al. (2011) Andrew L Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In _Proceedings of the 49th annual meeting of the association for computational linguistics: Human language technologies-volume 1_ , pages 142–150. Association for Computational Linguistics.
* Macià et al. (2008) N. Macià, A. Orriols-Puig, and E. Bernadó-Mansilla. 2008. Genetic-based synthetic data sets for the analysis of classifiers behavior. In _2008 Eighth International Conference on Hybrid Intelligent Systems_ , pages 507–512.
* Mansilla and Ho (2004) E. B. Mansilla and Tin Kam Ho. 2004. On classifier domains of competence. In _Proceedings of the 17th International Conference on Pattern Recognition, 2004. ICPR 2004._ , volume 1, pages 136–139 Vol.1.
* Maynard and Greenwood (2014) Diana Maynard and Mark A Greenwood. 2014. Who cares about sarcastic tweets? investigating the impact of sarcasm on sentiment analysis. In _Lrec_ , pages 4238–4243.
* Mc Laughlin (1969) G Harry Mc Laughlin. 1969. Smog grading-a new readability formula. _Journal of reading_ , 12(8):639–646.
* Powers (2011) David Martin Powers. 2011. Evaluation: from precision, recall and f-measure to roc, informedness, markedness and correlation.
* Price (1990) Patti J Price. 1990. Evaluation of spoken language systems: The atis domain. In _Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27, 1990_.
* Radford et al. (2017) Alec Radford, Rafal Jozefowicz, and Ilya Sutskever. 2017. Learning to generate reviews and discovering sentiment. _arXiv preprint arXiv:1704.01444_.
* Ramos et al. (2003) Juan Ramos et al. 2003. Using tf-idf to determine word relevance in document queries. In _Proceedings of the first instructional conference on machine learning_ , volume 242, pages 133–142.
* Rennie et al. (2003) Jason D Rennie, Lawrence Shih, Jaime Teevan, and David R Karger. 2003. Tackling the poor assumptions of naive bayes text classifiers. In _Proceedings of the 20th international conference on machine learning (ICML-03)_ , pages 616–623.
* Schuster and Paliwal (1997) Mike Schuster and Kuldip K Paliwal. 1997. Bidirectional recurrent neural networks. _IEEE Transactions on Signal Processing_ , 45(11):2673–2681.
* Senter and Smith (1967) RJ Senter and Edgar A Smith. 1967. Automated readability index. Technical report, CINCINNATI UNIV OH.
* Shannon (2001) Claude Elwood Shannon. 2001. A mathematical theory of communication. _ACM SIGMOBILE Mobile Computing and Communications Review_ , 5(1):3–55.
* Smith et al. (2014) Michael R Smith, Tony Martinez, and Christophe Giraud-Carrier. 2014. The potential benefits of filtering versus hyper-parameter optimization. _arXiv preprint arXiv:1403.3342_.
* Socher et al. (2013a) Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013a. Recursive deep models for semantic compositionality over a sentiment treebank. In _Proceedings of the 2013 conference on empirical methods in natural language processing_ , pages 1631–1642.
* Socher et al. (2013b) Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013b. Recursive deep models for semantic compositionality over a sentiment treebank. In _Proceedings of the 2013 conference on empirical methods in natural language processing_ , pages 1631–1642.
* Whitley (1994) Darrell Whitley. 1994. A genetic algorithm tutorial. _Statistics and computing_ , 4(2):65–85.
* wiki.dbpedia.org (2015) wiki.dbpedia.org. 2015. Data set 2.0. http://wiki.dbpedia.org/data-set-20. [Online; accessed 21-Feb-2018].
* Xiao and Cho (2016) Yijun Xiao and Kyunghyun Cho. 2016. Efficient character-level document classification by combining convolution and recurrent layers. _arXiv preprint arXiv:1602.00367_.
* Yelp.com (2018) Yelp.com. 2018. Yelp dataset challenge. http://www.yelp.com/dataset_challenge. [Online; accessed 23-Feb-2018].
* Yogatama et al. (2017) Dani Yogatama, Chris Dyer, Wang Ling, and Phil Blunsom. 2017. Generative and discriminative text classification with recurrent neural networks. _arXiv preprint arXiv:1703.01898_.
* Zhang and Yang (2018) Dongxu Zhang and Zhichao Yang. 2018. Word embedding perturbation for sentence classification. _arXiv preprint arXiv:1804.08166_.
* Zhang et al. (2015) Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In _Advances in neural information processing systems_ , pages 649–657.
## Appendix A Dataset Descriptions
### A.1 Main Datasets
All 89 datasets used in this paper are available from http://data.wluper.com.
##### AG’s News Topic Classification Dataset - version 3 (AG)
AG’s News Topic Classification Dataset
(https://www.di.unipi.it/~gulli/AG_corpus_of_news_articles.html) is
constructed and used as a benchmark in the paper Zhang et al. (2015). It is a
collection of more than 1 million news articles gathered from more than 2000
news sources and is used for research purposes. It has four classes:
“Business”, “Sci/Tech”, “Sports” and “World”; each class contains 30000
training samples and 1900 testing samples. In total, the training set has
108000 sentences, the validation set has 12000 sentences and the test set has
7600 sentences.
##### Airline Twitter Sentiment (AT_Sentiment)
The airline Twitter sentiment dataset is crowd-sourced by FigureEight (2018).
It has three sentiment classes to classify the service of the major U.S.
airlines: positive, negative, and neutral. The training set has 12444
sentences and the test set has 2196 sentences.
##### ATIS dataset with Intents (ATIS_Int)
The airline travel information system dataset Price (1990) is a widely used
benchmark dataset for the task of slot filling. The data comes from air travel
reservation making and follows the Begin/Inside/Outside (BIO) format for the
semantic label of each word; the class O is assigned to words without
semantic labels. In this paper, we only employed the dataset with intents for
the text classification task. In total, the ATIS_Int data has 26 classes; the
training set contains 9956 sentences and the test set contains 893 sentences.
##### Corporate Messaging (CM)
The Corporate Messaging dataset is crowd-sourced by FigureEight (2018). It has
4 classes describing what corporations talk about on social media:
“Information”, “Action”, “Dialogue” and “Exclude”. The training set has 2650
sentences and the test set has 468 sentences.
##### Classic Literature Sentence Classification (ClassicLit)
We created this dataset for this work. It consists of The Complete Works of
William Shakespeare, War and Peace by Leo Tolstoy, Wuthering Heights by Emily
Brontë and the War of the Worlds by H.G. Wells. Each data item is a sentence
from one of these books. All sentences longer than 100 words are discarded.
The label of each sentence is which author wrote that sentence. All books were
downloaded from the Project Gutenberg website (http://www.gutenberg.org/).
There are 40489 training sentences, 5784 validation sentences and 11569
testing sentences.
##### Classification of Political Social Media (PSM)
The classification of political social media dataset is crowd-sourced by
FigureEight (2018). Social media messages from US Senators and other
American politicians are classified into 9 classes ranging from “attack” to
“support”. The training set has 3500 sentences, the validation set has 500
sentences and the test set has 1000 sentences.
##### DBPedia Ontology Classification Dataset - version 2 (DB PEDIA)
The DBpedia dataset is licensed under the terms of the GNU Free Documentation
License wiki.dbpedia.org (2015); the DBPedia ontology classification dataset
is constructed and used as a benchmark in the paper Zhang et al. (2015). It
has 14 classes, with 560000 training samples and 70000 testing samples in
total. Due to the large size of this dataset and the need to increase training
speed given the large number of models we had to train, we randomly sampled
10% of the dataset based on the class distribution as our training, validation
and test datasets, splitting 10% of the sampled training set off as a
validation set (size 5600). We later compared our measure to the full,
unaltered dataset used by Zhang et al. (2015) to show generalisation.
##### Economic News Article Tone and Relevance (ENR)
The Economic News Article Tone and Relevance dataset is crowd-sourced by
FigureEight (2018). It contains labels for whether an article is about the US
economy and, if so, what tone (1–9) the article has. In this work, we
performed a binary classification task by only taking two classes: Yes or No.
The training set has 5593 sentences, the validation set has 799 sentences and
the test set has 1599 sentences.
##### Experimental Data for Question Classification (QC)
The Question Classification dataset comes from Li and Roth (2002). It
classifies questions into six classes: “NUM”, “LOC”, “HUM”, “DESC”, “ENTY” and
“ABBR”. The training set has 4096 sentences, the validation set has 546
sentences and the test set has 500 sentences.
##### Grammar and Online Product Reviews (GPR)
The Grammar and Online Product Reviews dataset comes from Datafiniti (2018).
It is a list of product reviews with 5 classes (ratings from 1 to 5). The
training set has 49730 sentences, the validation set has 7105 sentences and
the test set has 14209 sentences.
##### Hate Speech (HS)
The hate speech data comes from Davidson et al. (2017) and has three classes:
“offensiveLanguage”, “hateSpeech”, and “neither”. The training set has 17348
sentences, the validation set has 2478 sentences and the test set has 1115
sentences.
##### Large Movie Review Corpus (LMRC)
The large movie review corpus was created by Maas et al. (2011) and contains
50000 reviews from IMDB with an equal number of positive and negative reviews.
No more than 30 reviews per movie are allowed, to avoid correlated ratings. It
contains two classes, with no neutral class: a negative review has a score
$\leq$ 4 out of 10 and a positive review has a score $\geq$ 7 out of 10.
##### London-based Restaurants’ Reviews on TripAdvisor (LRR)
The London-based restaurants’ reviews on TripAdvisor dataset
(https://www.kaggle.com/PromptCloudHQ/londonbased-restaurants-reviews-on-tripadvisor)
is a subset of a larger dataset (more than 1.8 million restaurants) created by
extracting data from Tripadvisor.co.uk. It has five classes (ratings from 1 to
5). The training set has 12056 sentences, the validation set has 1722
sentences and the test set has 3445 sentences.
##### New England Patriots Deflategate sentiment (DFG)
The New England Patriots Deflategate sentiment dataset is crowd-sourced by
FigureEight (2018) and is gathered from Twitter chatter around deflated
footballs. It has five sentiment classes: negative, slightly negative,
neutral, slightly positive and positive. The training set has 8250 sentences,
the validation set has 1178 sentences and the test set has 2358 sentences.
##### New Years Resolutions (NYR)
The 2015 New Year’s resolutions dataset is crowd-sourced by FigureEight (2018)
and contains demographic and geographical data of users along with resolution
categorisations. In this project, we conducted two text classification tasks
based on different numbers of classes: 10 classes and 115 classes. The
training set has 3507 sentences, the validation set has 501 sentences and the
test set has 1003 sentences.
##### Paper Sentence Classification (PSC)
The paper sentence classification dataset comes from archive.ics.uci.edu
(2018). It contains sentences from the abstracts and introductions of 30
articles ranging over biology, machine learning and psychology. There are 5
classes in total; the training set has 2181 sentences, the validation set has
311 sentences and the test set has 625 sentences.
##### Relevance of terms to disaster relief topics (DT)
The relevance of terms to disaster relief topics dataset is crowd-sourced by
FigureEight (2018). It contains pairs of terms and topics relevant to disaster
relief with relevance determinations: relevant and not relevant. The training
set has 7597 sentences, the validation set has 1085 sentences and the test set
has 2172 sentences.
##### Review Sentiments (RS)
The review sentiments dataset was generated by Kotzias et al. (2015) using an
approach that derives instance-level labels from group-level labels, evaluated
on three large review datasets: IMDB, Yelp, and Amazon. The dataset contains
two classes; the training set has 2100 sentences, the validation set has 300
sentences and the test set has 600 sentences.
##### Self Driving Cars Twitter Sentiment (SDC)
The self-driving cars Twitter sentiment analysis dataset is crowd-sourced by
FigureEight (2018). It has 6 sentiment classes to classify sentiment towards
self-driving cars: very positive, slightly positive, neutral, slightly
negative, very negative and not relevant. The training set has 6082 sentences
and the test set has 1074 sentences.
##### SMS Spam Collection (SMSS)
The SMS Spam Collection is a collection of labelled messages for mobile phone
spam research Almeida and Hidalgo (2011). It contains two classes: spam and
ham. The training set has 3901 sentences, the validation set has 558 sentences
and the test set has 4957 sentences.
##### SNIPS Natural Language Understanding Benchmark (SNIPS)
The SNIPS data is open-sourced by Coucke et al. (2018). It has 7 intents:
“AddToPlaylist”, “BookRestaurant”, “GetWeather”, “PlayMusic”, “RateBook”,
“SearchCreativeWork” and “SearchScreeningEvent”. The training set contains
13784 sentences and the test set contains 700 sentences.
##### Stanford Sentiment Treebank (SST)
Stanford Sentiment Treebank was introduced by Socher et al. (2013a). It is the
first corpus with fully labelled parse trees, normally used to capture
linguistic features and predict compositional semantic effects. It contains 5
sentiment classes: very negative, negative, neutral, positive and very
positive. In this project, we conducted two text classification tasks based on
different numbers of classes in SST:
1. a three-class (negative, neutral, positive) classification task, where the training data has 236076 sentences, the validation set has 1100 sentences and the test set has 2210 sentences;
2. a binary classification task (negative, positive), where the training data has 117220 sentences, the validation set has 872 sentences and the test set has 1821 sentences.
We used each labelled phrase rather than each sentence as an item of training
data, which is why the training sets are so large.
##### Text Emotion Classification (TE)
The Text Emotion Classification dataset is crowd-sourced by FigureEight
(2018). It contains 13 classes for emotional content such as happiness or
sadness. The training set has 34000 sentences and the test set has 6000
sentences.
##### Yelp Review Full Star Dataset (YELP)
The Yelp reviews dataset consists of reviews from Yelp, extracted from the
Yelp Dataset Challenge 2015 data Yelp.com (2018), and is constructed and used
as a benchmark in the paper Zhang et al. (2015). In total, there are 650,000
training samples and 50,000 testing samples with 5 classes; we split 10% of
the training set off as a validation set. Due to the large size of this
dataset and the need to increase training speed given the large number of
models we had to train, we sampled 5% of the dataset based on the class
distribution as our training, validation and test datasets. We later compare
our measure to the full, unaltered dataset as used by Zhang et al. (2015) to
show generalisation.
##### YouTube Spam Classification (YTS)
The YouTube Spam Classification dataset comes from Alberto et al. (2015) and
is a public set of comments collected for spam research. The dataset contains
2 classes; the training set has 1363 sentences, the validation set has 194
sentences and the test set has 391 sentences.
### A.2 Combined Datasets
We combined the above datasets to make 51 new datasets. The combined datasets
were:
1. AT and ATIS Int
2. AT and ClassicLit
3. AT and CM and DFG and DT and NYR 10 and SDC and TE
4. ATIS Int and ClassicLit
5. ATIS Int and CM
6. ClassicLit and CM
7. ClassicLit and DFG
8. ClassicLit and LMRC
9. ClassicLit and RS
10. ClassicLit and SST
11. ClassicLit and TE
12. CM and AT
13. CM and DFG
14. DFG and AT
15. DFG and ATIS Int
16. DT and AT
17. DT and ATIS Int
18. LMRC and AT
19. LMRC and ATIS Int
20. LMRC and RS and SST
21. LMRC and RS and YTS
22. NYR 10 and AT
23. NYR 10 and ATIS Int
24. NYR 115 and AT
25. NYR 115 and ATIS Int
26. PSC and AT
27. PSC and ATIS Int
28. RS and AT
29. RS and ATIS Int
30. RS and LMRC
31. RS and SST
32. SDC and AT
33. SDC and ATIS Int
34. SNIPS and AT
35. SNIPS and ATIS Int
36. SNIPS and ATIS Int and ClassicLit
37. SNIPS and ATIS Int and PSC
38. SNIPS and ATIS Int and SST
39. SST 2 and AT
40. SST 2 and ATIS Int
41. SST and AT
42. SST and ATIS Int
43. SST and ClassicLit and LMRC
44. SST and LMRC
45. SST and SST 2
46. TE and AT
47. TE and ATIS Int
48. TE and NYR 10
49. YTS and AT
50. YTS and ATIS Int
51. YTS and TE and PSC and RS
Class Diversity | Class Balance | Class Interference | Data Complexity
---|---|---|---
Shannon Class Diversity | Class Imbalance | Maximum Unigram Hellinger Similarity | Distinct Unigrams (Vocab) : Total Unigrams
Shannon Class Equitability | | Maximum Bigram Hellinger Similarity | Distinct Bigrams : Total Bigrams
| | Maximum Trigram Hellinger Similarity | Distinct Trigrams : Total Trigrams
| | Maximum 4-gram Hellinger Similarity | Distinct 4-grams : Total 4-grams
| | Maximum 5-gram Hellinger Similarity | Distinct 5-grams : Total 5-grams
| | Mean Maximum Hellinger Similarity | Mean Distinct n-grams : Total n-grams
| | Average Unigram Hellinger Similarity | Unigram Shannon Diversity
| | Average Bigram Hellinger Similarity | Bigram Shannon Diversity
| | Average Trigram Hellinger Similarity | Trigram Shannon Diversity
| | Average 4-gram Hellinger Similarity | 4-gram Shannon Diversity
| | Average 5-gram Hellinger Similarity | 5-gram Shannon Diversity
| | Mean Average Hellinger Similarity | Mean n-gram Shannon Diversity
| | Top Unigram Interference | Unigram Shannon Equitability
| | Top Bigram Interference | Bigram Shannon Equitability
| | Top Trigram Interference | Trigram Shannon Equitability
| | Top 4-gram Interference | 4-gram Shannon Equitability
| | Top 5-gram Interference | 5-gram Shannon Equitability
| | Mean Top n-gram Interference | Mean n-gram Shannon Equitability
| | Top Unigram Mutual Information | Shannon Character Diversity
| | Top Bigram Mutual Information | Shannon Character Equitability
| | Top Trigram Mutual Information | Inverse Flesch Reading Ease
| | Top 4-gram Mutual Information |
| | Top 5-gram Mutual Information |
| | Mean Top n-gram Mutual Information |
Table 5: The 48 different statistics we calculated for each dataset which we
suspected may correlate with model score across datasets.
## Appendix B Dataset Statistic Details
### B.1 Shannon Diversity Index
The Shannon Diversity Index is given by:
$\displaystyle H$ $\displaystyle=-\sum_{i=1}^{R}p_{i}\ln p_{i}$ (2)
$\displaystyle E_{H}$ $\displaystyle=\frac{H}{\ln R}$ (3)
$R$ is the “richness” and corresponds in our case to the number of classes /
different n-grams / different characters. $p_{i}$ is the probability of that
class / n-gram / character occurring, given by the count-based probability
distributions described in the method section.
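A direct transcription of equations (2) and (3), assuming raw counts as input:

```python
import math

def shannon_diversity(counts):
    """Equation (2): H = -sum(p_i * ln p_i) over class/n-gram/character counts."""
    total = sum(counts)
    return -sum((n / total) * math.log(n / total) for n in counts if n > 0)

def shannon_equitability(counts):
    """Equation (3): E_H = H / ln(R), with richness R = number of distinct items."""
    return shannon_diversity(counts) / math.log(len(counts))

print(shannon_equitability([25, 25, 25, 25]))  # perfectly even distribution -> 1.0
```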
### B.2 Class Imbalance
As part of this paper we presented the class imbalance statistic, which is
zero if a dataset has precisely the same number of data points in each class.
To the best of our knowledge, we are the first to propose it. Therefore, we
provide a brief mathematical treatment of this metric; in particular, we
demonstrate its upper bound. It is defined by:
$\displaystyle Imbal=\sum_{c=1}^{C}\left|\frac{1}{C}-\frac{n_{c}}{T_{DATA}}\right|$ (4)
Where $n_{c}$ is the count of points in class $c$, $T_{DATA}$ is the total
count of points and $C$ is the total number of classes. A trivial upper bound
can be derived using the triangle inequality and observing that
$\sum_{c=1}^{C}\frac{n_{c}}{T_{DATA}}=1$ and
$0\leq\frac{n_{c}}{T_{DATA}}\leq 1$:
$\displaystyle(4)$ $\displaystyle\leq\sum_{c=1}^{C}\left(\left|\frac{1}{C}\right|+\left|\frac{n_{c}}{T_{DATA}}\right|\right)$ (5)
$\displaystyle=C\left|\frac{1}{C}\right|+\sum_{c=1}^{C}\left|\frac{n_{c}}{T_{DATA}}\right|$ (6)
$\displaystyle=C\frac{1}{C}+\sum_{c=1}^{C}\frac{n_{c}}{T_{DATA}}=1+1=2$ (7)
A tight upper bound, however, is given by $2(1-\frac{1}{C})$, attained when
one class has all the data points while all the other classes have zero data
points. A brief derivation goes as follows:
We let $\frac{n_{c}}{T_{DATA}}=p_{c}\in[0,1]$, with $\sum_{c}p_{c}=1$; then
$\displaystyle(4)=\sum_{c=1}^{C}\left|\frac{1}{C}-p_{c}\right|$ (8)
We will find the upper bound by maximising (8). Now we split the $p$’s into
two sets: set $I$ consists of all the $p_{i}\geq\frac{1}{C}$ and set $J$
consists of the $p_{j}<\frac{1}{C}$. Then:
$\displaystyle(8)$ $\displaystyle=\sum_{i\in I}\left|\frac{1}{C}-p_{i}\right|+\sum_{j\in J}\left|\frac{1}{C}-p_{j}\right|$ (9)
$\displaystyle=\sum_{i\in I}\left(p_{i}-\frac{1}{C}\right)+\sum_{j\in J}\left(\frac{1}{C}-p_{j}\right)$ (10)
$\displaystyle=\sum_{i}p_{i}+\sum_{j}\frac{1}{C}-\sum_{j}p_{j}-\sum_{i}\frac{1}{C}$ (11)
As we are free to choose the $p$’s, it is clear from (11) that all the $p$’s
in set $J$ should be 0, as they enter the sum with a negative sign. This
leaves us with:
$\displaystyle(11)\leq\sum_{i}p_{i}+\sum_{j}\frac{1}{C}-\sum_{i}\frac{1}{C}$ (12)
Where $\sum_{i}p_{i}=1$, since $\sum_{i}p_{i}+\sum_{j}p_{j}=1$ and every
$p_{j}=0$; hence:
$\displaystyle(12)=1+\sum_{j}\frac{1}{C}-\sum_{i}\frac{1}{C}$ (13)
Now the only quantity left to maximise is the split between the sets $I$ and
$J$; the Imbalance is maximised by the split $|I|=1$ and $|J|=C-1$, and
therefore:
$\displaystyle(13)\leq 1+(C-1)\frac{1}{C}-\frac{1}{C}=2(1-\frac{1}{C})\qed$ (14)
This shows that the Imbalance metric aligns with our intuitive definition of
imbalance.
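Equation (4) and the tight bound can be checked directly; a minimal sketch:

```python
def class_imbalance(counts):
    """Equation (4): Imbal = sum_c |1/C - n_c / T_DATA|."""
    C, T = len(counts), sum(counts)
    return sum(abs(1 / C - n / T) for n in counts)

print(class_imbalance([100, 100, 100, 100]))  # 0.0 for perfectly balanced classes
print(class_imbalance([400, 0, 0, 0]))        # 1.5 = 2 * (1 - 1/4), the tight bound
```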
### B.3 Hellinger Similarity
The Hellinger Similarity is given by:
$\displaystyle HS(P,Q)=1-\frac{1}{\sqrt{2}}\sqrt{\sum_{i=1}^{k}(\sqrt{p_{i}}-\sqrt{q_{i}})^{2}}$ (15)
Where $p_{i}$ is the probability of word $i$ occurring and $k$ is the number
of words in either $P$ or $Q$, whichever is larger.
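A minimal sketch of equation (15), assuming the distributions are given as word-to-probability dictionaries and treating missing words as zero probability:

```python
import math

def hellinger_similarity(p, q):
    """Equation (15): HS(P, Q) = 1 - (1/sqrt(2)) * ||sqrt(P) - sqrt(Q)||_2."""
    vocab = set(p) | set(q)
    dist = math.sqrt(sum((math.sqrt(p.get(w, 0.0)) - math.sqrt(q.get(w, 0.0))) ** 2
                         for w in vocab))
    return 1.0 - dist / math.sqrt(2.0)

same = {"good": 0.5, "film": 0.5}
disjoint = {"bad": 0.5, "plot": 0.5}
print(hellinger_similarity(same, same))      # 1.0: identical distributions
print(hellinger_similarity(same, disjoint))  # ~0.0: no shared probability mass
```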
### B.4 Jaccard Distance
The Jaccard distance is given by:
$\displaystyle J(P,Q)=1-\frac{\left|P\cap Q\right|}{\left|P\cup Q\right|}$ (16)
Where $P$ is the set of words in one class and $Q$ is the set of words in
another, so that identical vocabularies give a distance of 0.
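A one-function sketch over the vocabularies of two classes:

```python
def jaccard_distance(p_words, q_words):
    """Equation (16): 1 - (intersection over union) of two classes' word sets."""
    p_words, q_words = set(p_words), set(q_words)
    return 1.0 - len(p_words & q_words) / len(p_words | q_words)

print(jaccard_distance({"a", "b", "c"}, {"b", "c", "d"}))  # 1 - 2/4 = 0.5
```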
### B.5 Mutual Information
The mutual information statistic between two classes $X$ and $Y$ is:
$\displaystyle MI=\sum_{x\in X}\sum_{y\in Y}p(x,y)\log\left(\frac{p(x,y)}{p(x)p(y)}\right)$ (17)
$p(x)$ represents the probability of a word occurring in class $X$. $p(x,y)$
is the sum of the count of the word in class $X$ and the count of the word in
class $Y$ divided by the sum of the total count of words in both classes.
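A literal transcription of this definition, restricted to words occurring in both classes (terms with a zero per-class probability are taken to contribute nothing, which is our assumption):

```python
import math
from collections import Counter

def top_word_mutual_information(words_x, words_y):
    """Equation (17) with the count-based probabilities described above."""
    cx, cy = Counter(words_x), Counter(words_y)
    tx, ty = sum(cx.values()), sum(cy.values())
    mi = 0.0
    for w in set(cx) & set(cy):           # words occurring in both classes
        p_xy = (cx[w] + cy[w]) / (tx + ty)  # joint, as defined in the text
        p_x, p_y = cx[w] / tx, cy[w] / ty   # per-class word probabilities
        mi += p_xy * math.log(p_xy / (p_x * p_y))
    return mi
```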
## Appendix C Model Descriptions
All of our models were implemented in Python with TensorFlow
(https://www.tensorflow.org/) for neural networks and scikit-learn
(http://scikit-learn.org/) for classical models.
### C.1 Word Embedding Based Models
Five of our models use word embeddings. We use a FastText Bojanowski et al.
(2016) model trained on the One Billion Word Corpus Chelba et al. (2013),
which gives us an open vocabulary. Embedding size is 128. All models use a learning
rate of 0.001 and the Adam optimiser Kingma and Ba (2014). The loss function
is weighted cross entropy where each class is weighted according to the
proportion of its occurrence in the data. This allows the models to handle
unbalanced data.
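The exact weighting scheme is not spelled out above; a minimal sketch, assuming inverse-frequency weights (a common choice for weighted cross entropy on unbalanced data):

```python
# Inverse-frequency class weights; the exact scheme here is an assumption,
# chosen because it upweights rare classes as the text describes.
from collections import Counter

def class_weights(labels):
    counts = Counter(labels)
    total = len(labels)
    return {c: total / (len(counts) * n) for c, n in counts.items()}

print(class_weights(["spam"] * 90 + ["ham"] * 10))  # the rare class gets weight 5.0
```

Such a dictionary can be passed, for example, as the `class_weight` argument of `Model.fit` in `tf.keras`.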
##### LSTM- and GRU-based RNNs
Two simple recurrent neural networks (RNNs), one with LSTM cells Hochreiter
and Schmidhuber (1997) and one with GRU cells Cho et al. (2014), are directly
passed the word embeddings. The cells have 64 hidden units and no
regularisation. The final output is projected and passed through a softmax to
give class probabilities.
##### Bidirectional LSTM and GRU RNNs
These networks use the bidirectional architecture proposed by Schuster and
Paliwal (1997) and are directly passed the word embeddings. The cells have 64
units each. The final forward and backward states are concatenated, giving a
128-dimensional vector which is projected and passed through a softmax to give
class probabilities.
##### Multi-Layer Perceptron (MLP)
A simple single hidden layer neural network with 64 hidden units and ReLU non-
linearity. The word embeddings for each word of the sentence are summed before
being passed through the network, following the approach of Kågebäck et al.
(2014).
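A sketch of the same idea, with the embedding sum expressed as a reduction over the word axis:

```python
import tensorflow as tf

def build_mlp_classifier(num_classes, embed_dim=128):
    # A variable number of word embeddings is summed into one sentence vector.
    inputs = tf.keras.Input(shape=(None, embed_dim))
    summed = tf.keras.layers.Lambda(lambda e: tf.reduce_sum(e, axis=1))(inputs)
    hidden = tf.keras.layers.Dense(64, activation="relu")(summed)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(hidden)
    return tf.keras.Model(inputs, outputs)
```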
### C.2 TF-IDF Based Models
Six of our models use TF-IDF based vectors Ramos et al. (2003) as input. Loss
is weighted to handle class imbalance, but the function used varies between
models.
##### Adaboost
An ensemble algorithm which trains many weak learning models on subsets of the
data using the SAMME algorithm Hastie et al. (2009). We use a learning rate of
1 and 100 different weak learners. The weak learners are decision trees which
measure split quality with the Gini impurity and have a maximum depth of 5.
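In scikit-learn terms this configuration looks roughly as follows (the keyword is `base_estimator` in older versions):

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

ada = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(criterion="gini", max_depth=5),
    n_estimators=100,     # 100 weak learners
    learning_rate=1.0,
    algorithm="SAMME",    # the boosting algorithm named in the text
)
```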
##### Gaussian Naive Bayes
A simple and fast algorithm based on Bayes’ theorem with the naive
independence assumption among features. It can be competitive with more
advanced models like Support Vector Machines with proper data preprocessing
Rennie et al. (2003). We do not preprocess the data in any way and simply pass
the TF-IDF vector.
##### k-Nearest Neighbors
A non-parametric model which classifies new data by the majority class among
the k nearest neighbors of each new item. We set $k=5$. Euclidean distance is
used as the distance metric between items.
##### Logistic Regression
A regression model which assigns probabilities to each class. For multiple
classes this becomes multinomial logistic or softmax regression. We use L2
regularization with this model with regularization strength set to 1.
##### Random Forest
An ensemble method that fits many decision trees on subsamples of the dataset
and aggregates their predictions over all trees. Each of our forests contains
100 trees with a maximum depth of 100. Trees require a minimum of 25 samples
to create a leaf node.
##### Support Vector Machine (SVM)
A maximum-margin method which maximises the distance between the decision
boundary and the support vectors of each class. It is implemented with a
linear kernel, l2 regularisation with a strength of 1 and squared hinge loss.
Multi-class classification is handled by training multiple binary classifiers
in a “one vs rest” scheme, following Hong and Cho (2008). Due to the
inefficiency of SVMs on large datasets, all datasets are subsampled randomly
to $10000$ data items before fitting this model.
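The classical models share one pattern: a TF-IDF vectoriser feeding a scikit-learn classifier. A minimal sketch with the hyperparameters stated above (the `class_weight` choice and the placeholder data are our assumptions):

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC

models = {
    "logreg": LogisticRegression(penalty="l2", C=1.0, class_weight="balanced"),
    "knn": KNeighborsClassifier(n_neighbors=5, metric="euclidean"),
    "svm": LinearSVC(C=1.0, loss="squared_hinge", class_weight="balanced"),  # one-vs-rest
}

texts = ["great value", "terrible service", "would buy again",
         "never again", "excellent", "awful"]   # placeholder data
labels = [1, 0, 1, 0, 1, 0]

for name, clf in models.items():
    pipeline = make_pipeline(TfidfVectorizer(), clf).fit(texts, labels)
    print(name, pipeline.predict(["pretty great"]))
```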
### C.3 Character Based Models
We use a single character-based model, a convolutional neural network
(CNN) inspired by Zhang et al. (2015). Similar to our word embedding based
models, it uses a learning rate of 0.001 and the Adam optimiser. It receives one-
hot representations of the characters and embeds these into a 128-dimensional
dense representation. It then has three convolutional layers of kernel sizes
$5$, $3$ and $3$ with channel sizes of $256$, $128$ and $64$ respectively. All
stride sizes are 1 and valid padding is used. The convolutional layers are
followed by dropout with probability set to 0.5 during training and 1 during
testing. The dropout is followed by two hidden layers with ReLU non-linearity
with $1000$ then $256$ hidden units respectively. Finally this output is
projected and passed through a softmax layer to give class probabilities.
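A `tf.keras` sketch of this architecture; the character vocabulary size, maximum sequence length, ReLU activations on the convolutions, and the flatten before the dense layers are our assumptions:

```python
import tensorflow as tf

def build_char_cnn(num_classes, num_chars=70, seq_len=512):
    inputs = tf.keras.Input(shape=(seq_len,), dtype="int32")  # character ids
    # Equivalent to one-hot characters times a learned 128-dim projection.
    x = tf.keras.layers.Embedding(num_chars, 128)(inputs)
    for channels, kernel in [(256, 5), (128, 3), (64, 3)]:
        x = tf.keras.layers.Conv1D(channels, kernel, strides=1,
                                   padding="valid", activation="relu")(x)
    x = tf.keras.layers.Dropout(0.5)(x)   # active during training only
    x = tf.keras.layers.Flatten()(x)
    x = tf.keras.layers.Dense(1000, activation="relu")(x)
    x = tf.keras.layers.Dense(256, activation="relu")(x)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(0.001),
                  loss="categorical_crossentropy")
    return model
```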
## Appendix D Further Evolution Results
### D.1 Other Discovered Difficulty Measures
This appendix contains a listing of other difficulty measures discovered by
the evolutionary process. The selection frequencies of different components
are visualised in Figure 3.
Figure 3: Frequency of different statistics’ selection during evolution for
all difficulty measures found with correlation greater than 0.88.
* Distinct Words (vocab) : Total Words + Top 5-gram Interference + Class
Imbalance + Shannon Class Diversity + Maximum Unigram Hellinger Similarity +
Top Unigram Mutual Information : 0.884524539285
* Distinct Words (vocab) : Total Words + Top 5-gram Interference + Class
Imbalance + Shannon Class Diversity + Unigram Shannon Equitability + Maximum
Unigram Hellinger Similarity + Maximum 4-gram Hellinger Similarity + Top
Unigram Mutual Information + Shannon Character Equitability : 0.884444632266
* Distinct Words (vocab) : Total Words + Top 5-gram Interference + Class
Imbalance + Shannon Class Diversity + Average 4-gram Hellinger Similarity +
Maximum Unigram Hellinger Similarity + Top Unigram Mutual Information +
Shannon Character Equitability : 0.884417837328
* Distinct Words (vocab) : Total Words + Top 5-gram Interference + Class
Imbalance + Shannon Class Diversity + Unigram Shannon Equitability + Trigram
Shannon Equitability + Maximum Unigram Hellinger Similarity + Top Unigram
Mutual Information : 0.884389247487
* Distinct Words (vocab) : Total Words + Top 5-gram Interference + Class
Imbalance + Shannon Class Diversity + Unigram Shannon Equitability + Trigram
Shannon Equitability + Maximum Unigram Hellinger Similarity + Top Unigram
Mutual Information + Inverse Flesch Reading Ease : 0.883771907471
* Distinct Words (vocab) : Total Words + Top Unigram Interference + Top Bigram
Interference + Top 5-gram Interference + Class Imbalance + Shannon Class
Diversity + Unigram Shannon Equitability + 4-Gram Shannon Equitability +
Maximum Unigram Hellinger Similarity + Shannon Character Equitability +
Inverse Flesch Reading Ease : 0.883726094418
* Distinct Words (vocab) : Total Words + Top 5-gram Interference + Class
Imbalance + Shannon Class Diversity + Average 5-gram Hellinger Similarity +
Maximum Unigram Hellinger Similarity + Top Unigram Mutual Information :
0.883722932273
* Distinct Words (vocab) : Total Words + Top 5-gram Interference + Class
Imbalance + Shannon Class Diversity + Mean Shannon n-Gram Equitability +
Maximum Unigram Hellinger Similarity + Top Unigram Mutual Information +
Shannon Character Diversity : 0.883590459193
* Distinct Words (vocab) : Total Words + Top Unigram Interference + Top Bigram
Interference + Top 5-gram Interference + Class Imbalance + Shannon Class
Diversity + Unigram Shannon Equitability + Trigram Shannon Equitability +
5-Gram Shannon Equitability + Maximum Unigram Hellinger Similarity :
0.883561148159
* Distinct Words (vocab) : Total Words + Top 5-gram Interference + Class
Imbalance + Shannon Class Diversity + 2-Gram Shannon Equitability + 5-Gram
Shannon Equitability + Maximum Unigram Hellinger Similarity + Top Unigram
Mutual Information : 0.883335585611
* Distinct Words (vocab) : Total Words + Top Unigram Interference + Top Bigram
Interference + Top 5-gram Interference + Class Imbalance + Shannon Class
Diversity + 2-Gram Shannon Equitability + 4-Gram Shannon Equitability +
Maximum Unigram Hellinger Similarity + Shannon Character Equitability +
Inverse Flesch Reading Ease : 0.883257036313
* Distinct Words (vocab) : Total Words + Top 5-gram Interference + Class
Imbalance + Shannon Class Diversity + Average Trigram Hellinger Similarity +
Maximum Unigram Hellinger Similarity + Top Unigram Mutual Information +
Shannon Character Equitability + Inverse Flesch Reading Ease : 0.883217937163
* Distinct Words (vocab) : Total Words + Top 5-gram Interference + Class
Imbalance + Shannon Class Diversity + Maximum Unigram Hellinger Similarity +
Top Unigram Mutual Information + Shannon Character Diversity + Inverse Flesch
Reading Ease : 0.883092632656
* Distinct Words (vocab) : Total Words + Class Imbalance + Shannon Class
Diversity + Unigram Shannon Equitability + Maximum Unigram Hellinger
Similarity + Maximum Trigram Hellinger Similarity + Top Unigram Mutual
Information : 0.882946516641
* Distinct Words (vocab) : Total Words + Top Unigram Interference + Top Bigram
Interference + Top 5-gram Interference + Class Imbalance + Shannon Class
Diversity + Unigram Shannon Equitability + Trigram Shannon Equitability + Mean
Shannon n-Gram Equitability + Maximum Unigram Hellinger Similarity + Shannon
Character Equitability + Inverse Flesch Reading Ease : 0.882914430188
* Distinct Words (vocab) : Total Words + Top Unigram Interference + Top Bigram
Interference + Top 5-gram Interference + Class Imbalance + Shannon Class
Diversity + Mean Shannon n-Gram Equitability + Maximum Unigram Hellinger
Similarity : 0.882863026072
* Distinct Bigrams : Total Bigrams + Top Unigram Interference + Top Bigram
Interference + Top 5-gram Interference + Class Imbalance + Shannon Class
Diversity + Unigram Shannon Equitability + Maximum Unigram Hellinger
Similarity : 0.882825047536
* Distinct Words (vocab) : Total Words + Top Unigram Interference + Top Bigram
Interference + Top 5-gram Interference + Class Imbalance + Shannon Class
Diversity + Trigram Shannon Equitability + Maximum Unigram Hellinger
Similarity : 0.882628270942
* Distinct Bigrams : Total Bigrams + Top Unigram Interference + Top Bigram
Interference + Top 5-gram Interference + Class Imbalance + Shannon Class
Diversity + Unigram Shannon Equitability + Maximum Unigram Hellinger
Similarity + Shannon Character Equitability : 0.882562918832
* Distinct Words (vocab) : Total Words + Class Imbalance + Shannon Class
Diversity + Mean Shannon n-Gram Equitability + Average Trigram Hellinger
Similarity + Maximum Unigram Hellinger Similarity + Top Unigram Mutual
Information + Shannon Character Equitability : 0.882376082243
* Distinct Words (vocab) : Total Words + Top Unigram Interference + Top Bigram
Interference + Class Imbalance + Shannon Class Diversity + Unigram Shannon
Equitability + Maximum Unigram Hellinger Similarity + Shannon Character
Equitability : 0.882297072242
* Distinct Words (vocab) : Total Words + Top 5-gram Interference + Class
Imbalance + Shannon Class Diversity + 2-Gram Shannon Equitability + Average
Trigram Hellinger Similarity + Maximum Unigram Hellinger Similarity + Top
Unigram Mutual Information + Inverse Flesch Reading Ease : 0.882245884638
* Distinct Words (vocab) : Total Words + Top Unigram Interference + Top Bigram
Interference + Top 5-gram Interference + Class Imbalance + Shannon Class
Diversity + Unigram Shannon Equitability + 2-Gram Shannon Equitability +
Average Unigram Hellinger Similarity : 0.882237171884
* Distinct Words (vocab) : Total Words + Class Imbalance + Shannon Class
Diversity + Average 5-gram Hellinger Similarity + Maximum Unigram Hellinger
Similarity + Top Unigram Mutual Information + Shannon Character Equitability :
0.882046043522
* •
Distinct Words (vocab) : Total Words + Top 5-gram Interference + Class
Imbalance + Shannon Class Diversity + Unigram Shannon Equitability + Trigram
Shannon Equitability + Maximum Unigram Hellinger Similarity + Top Unigram
Mutual Information + Shannon Character Diversity + Inverse Flesch Reading Ease
: 0.881936634248
* •
Distinct Words (vocab) : Total Words + Top Unigram Interference + Top Bigram
Interference + Class Imbalance + Shannon Class Diversity + Unigram Shannon
Equitability + Average Unigram Hellinger Similarity : 0.881760155361
* •
Distinct Words (vocab) : Total Words + Top 5-gram Interference + Class
Imbalance + Shannon Class Diversity + Unigram Shannon Equitability + 5-Gram
Shannon Equitability + Maximum Bigram Hellinger Similarity + Maximum Trigram
Hellinger Similarity + Top Unigram Mutual Information : 0.881643119225
* •
Distinct Words (vocab) : Total Words + Top 5-gram Interference + Class
Imbalance + Shannon Class Diversity + Maximum Unigram Hellinger Similarity +
Maximum Bigram Hellinger Similarity + Top Unigram Mutual Information :
0.881581392494
* •
Distinct Words (vocab) : Total Words + Top 5-gram Interference + Class
Imbalance + Shannon Class Diversity + Trigram Shannon Equitability + Maximum
Unigram Hellinger Similarity + Maximum 5-gram Hellinger Similarity + Top
Unigram Mutual Information : 0.881517499763
* •
Distinct Words (vocab) : Total Words + Top Unigram Interference + Top Bigram
Interference + Class Imbalance + Shannon Class Diversity + Unigram Shannon
Equitability + 4-Gram Shannon Equitability + Mean Shannon n-Gram Equitability
+ Maximum Unigram Hellinger Similarity : 0.881490427511
* •
Distinct Words (vocab) : Total Words + Class Imbalance + Shannon Class
Diversity + Trigram Shannon Equitability + Average Trigram Hellinger
Similarity + Maximum Unigram Hellinger Similarity + Top Unigram Mutual
Information : 0.88147122781
* •
Distinct Words (vocab) : Total Words + Top 5-gram Interference + Class
Imbalance + Shannon Class Diversity + 4-Gram Shannon Equitability + Average
5-gram Hellinger Similarity + Maximum Unigram Hellinger Similarity + Top
Unigram Mutual Information + Shannon Character Diversity + Inverse Flesch
Reading Ease : 0.881440344906
* •
Distinct Bigrams : Total Bigrams + Top Unigram Interference + Top Bigram
Interference + Top 5-gram Interference + Class Imbalance + Shannon Class
Diversity + Maximum Unigram Hellinger Similarity : 0.881440337829
* •
Distinct Words (vocab) : Total Words + Top Unigram Interference + Top Bigram
Interference + Class Imbalance + Shannon Class Diversity + Mean Shannon n-Gram
Equitability + Maximum Unigram Hellinger Similarity : 0.881426394865
* •
Distinct Words (vocab) : Total Words + Top Unigram Interference + Top Bigram
Interference + Class Imbalance + Shannon Class Diversity + Trigram Shannon
Equitability + 4-Gram Shannon Equitability + Maximum Unigram Hellinger
Similarity + Shannon Character Equitability : 0.881404209076
* •
Distinct Words (vocab) : Total Words + Class Imbalance + Shannon Class
Diversity + Maximum Unigram Hellinger Similarity + Top Unigram Mutual
Information : 0.881365728443
* •
Distinct Words (vocab) : Total Words + Top Unigram Interference + Top Bigram
Interference + Class Imbalance + Shannon Class Diversity + 5-Gram Shannon
Equitability + Mean Shannon n-Gram Equitability + Maximum Unigram Hellinger
Similarity : 0.881342679515
* •
Distinct Words (vocab) : Total Words + Class Imbalance + Shannon Class
Diversity + Maximum Unigram Hellinger Similarity + Mean Maximum Hellinger
Similarity + Top Unigram Mutual Information : 0.881340929845
* •
Distinct Words (vocab) : Total Words + Class Imbalance + Shannon Class
Diversity + Unigram Shannon Equitability + Maximum Unigram Hellinger
Similarity + Top Unigram Mutual Information + Shannon Character Diversity :
0.88117529932
* •
Distinct Words (vocab) : Total Words + Top Unigram Interference + Top Bigram
Interference + Class Imbalance + Shannon Class Diversity + Trigram Shannon
Equitability + Maximum Unigram Hellinger Similarity + Shannon Character
Diversity : 0.881163020765
* •
Distinct Words (vocab) : Total Words + Top Unigram Interference + Top Trigram
Interference + Top 5-gram Interference + Class Imbalance + Shannon Class
Diversity + Unigram Shannon Equitability + Maximum Unigram Hellinger
Similarity : 0.881044616332
* •
Distinct Words (vocab) : Total Words + Top Unigram Interference + Top Bigram
Interference + Class Imbalance + Shannon Class Diversity + 4-Gram Shannon
Equitability + Mean Shannon n-Gram Equitability + Average Unigram Hellinger
Similarity : 0.88091437587
* •
Distinct Words (vocab) : Total Words + Top Unigram Interference + Top Bigram
Interference + Mean Top n-gram Interference + Class Imbalance + Shannon Class
Diversity + Unigram Shannon Equitability + Maximum Unigram Hellinger
Similarity + Shannon Character Diversity + Shannon Character Equitability :
0.880910807679
* •
Distinct Words (vocab) : Total Words + Top Unigram Interference + Top Bigram
Interference + Class Imbalance + Shannon Class Diversity + Trigram Shannon
Equitability + Average Unigram Hellinger Similarity : 0.880885142235
* •
Distinct Words (vocab) : Total Words + Top Unigram Interference + Top Bigram
Interference + Top 5-gram Interference + Class Imbalance + Shannon Class
Diversity + 5-Gram Shannon Equitability + Average Unigram Hellinger Similarity
: 0.880837764104
* •
Distinct Words (vocab) : Total Words + Class Imbalance + Shannon Class
Diversity + Maximum Unigram Hellinger Similarity + Maximum Bigram Hellinger
Similarity + Top Unigram Mutual Information : 0.880709196549
* •
Distinct Words (vocab) : Total Words + Top Unigram Interference + Top 5-gram
Interference + Mean Top n-gram Interference + Class Imbalance + Shannon Class
Diversity + Unigram Shannon Equitability + Maximum Unigram Hellinger
Similarity + Shannon Character Diversity : 0.880654042756
* •
Distinct Words (vocab) : Total Words + Top Unigram Interference + Top 5-gram
Interference + Mean Top n-gram Interference + Class Imbalance + Shannon Class
Diversity + Unigram Shannon Equitability + 5-Gram Shannon Equitability + Mean
Shannon n-Gram Equitability + Maximum Unigram Hellinger Similarity + Inverse
Flesch Reading Ease : 0.88058845366
* •
Distinct Bigrams : Total Bigrams + Top Unigram Interference + Top Bigram
Interference + Class Imbalance + Shannon Class Diversity + Unigram Shannon
Equitability + Maximum Unigram Hellinger Similarity + Maximum Bigram Hellinger
Similarity : 0.88057312013
* •
Distinct Words (vocab) : Total Words + Top Unigram Interference + Top Bigram
Interference + Class Imbalance + Shannon Class Diversity + 2-Gram Shannon
Equitability + 5-Gram Shannon Equitability + Maximum Unigram Hellinger
Similarity + Shannon Character Diversity + Shannon Character Equitability :
0.880527649949
* •
Distinct Words (vocab) : Total Words + Top Unigram Interference + Top Bigram
Interference + Class Imbalance + Shannon Class Diversity + 2-Gram Shannon
Equitability + Trigram Shannon Equitability + Maximum Unigram Hellinger
Similarity + Inverse Flesch Reading Ease : 0.880466229085
* •
Distinct Words (vocab) : Total Words + Top Unigram Interference + Top Bigram
Interference + Class Imbalance + Shannon Class Diversity + Unigram Shannon
Equitability + Average Unigram Hellinger Similarity + Inverse Flesch Reading
Ease : 0.880427741747
* •
Distinct Words (vocab) : Total Words + Top Unigram Interference + Top Bigram
Interference + Class Imbalance + Shannon Class Diversity + 5-Gram Shannon
Equitability + Maximum Unigram Hellinger Similarity : 0.880408773828
* •
Distinct Words (vocab) : Total Words + Top Unigram Interference + Top Bigram
Interference + Top 5-gram Interference + Class Imbalance + Shannon Class
Diversity + Unigram Shannon Equitability + Average 4-gram Hellinger Similarity
+ Maximum Unigram Hellinger Similarity : 0.880334639215
* •
Distinct Words (vocab) : Total Words + Class Imbalance + Shannon Class
Diversity + Mean Shannon n-Gram Equitability + Average Trigram Hellinger
Similarity + Maximum Unigram Hellinger Similarity + Top Unigram Mutual
Information + Inverse Flesch Reading Ease : 0.880326776791
* •
Distinct Words (vocab) : Total Words + Class Imbalance + Shannon Class
Diversity + Unigram Shannon Equitability + Trigram Shannon Equitability +
Maximum Unigram Hellinger Similarity + Mean Maximum Hellinger Similarity + Top
Unigram Mutual Information : 0.880295177778
* •
Distinct Words (vocab) : Total Words + Top Unigram Interference + Top Bigram
Interference + Class Imbalance + Shannon Class Diversity + Maximum Unigram
Hellinger Similarity : 0.880252374975
* •
Distinct Words (vocab) : Total Words + Class Imbalance + Shannon Class
Diversity + Trigram Shannon Equitability + Average Trigram Hellinger
Similarity + Maximum Unigram Hellinger Similarity + Top Unigram Mutual
Information + Shannon Character Diversity : 0.880239646699
* •
Distinct Words (vocab) : Total Words + Top Unigram Interference + Top Bigram
Interference + Class Imbalance + Shannon Class Diversity + 2-Gram Shannon
Equitability + Average Unigram Hellinger Similarity + Inverse Flesch Reading
Ease : 0.880200393627
* •
Distinct Words (vocab) : Total Words + Class Imbalance + Shannon Class
Diversity + 4-Gram Shannon Equitability + Average Trigram Hellinger Similarity
+ Maximum Unigram Hellinger Similarity + Top Unigram Mutual Information +
Shannon Character Equitability + Inverse Flesch Reading Ease : 0.880083849581
Statistic | Correlation
---|---
Maximum Unigram Hellinger Similarity | 0.720896895887
Top Unigram Interference | 0.64706340007
Maximum Bigram Hellinger Similarity | 0.619410655023
Mean Maximum Hellinger Similarity | 0.599742584859
Mean Top N-Gram Interference | 0.592624636419
Average Unigram Hellinger Similarity | 0.574120851308
Top Bigram Interference | 0.574018328147
Top Trigram Interference | 0.556869160804
Shannon Class Diversity | 0.495247387609
Maximum Trigram Hellinger Similarity | 0.470443549996
Top 5-Gram Interference | 0.469209823975
Average Bigram Hellinger Similarity | 0.457163222902
Mean Average Hellinger Similarity | 0.454790305987
Top 4-Gram Interference | 0.418374832964
Maximum 4-gram Hellinger Similarity | 0.332573671726
Average Trigram Hellinger Similarity | 0.328687842958
Top Unigram Mutual Information | 0.293673742958
Maximum 5-gram Hellinger Similarity | 0.261369081098
Average 4-gram Hellinger Similarity | 0.24319918737
Average 5-gram Hellinger Similarity | 0.20741866152
Top 5-gram Mutual Information | 0.18246852683
Class Imbalance | 0.164274169881
Mean n-Gram Shannon Equitability | 0.14924393263
4-Gram Shannon Equitability | 0.142930086195
Trigram Shannon Equitability | 0.130883685416
Unigram Shannon Equitability | 0.129571167512
5-Gram Shannon Equitability | 0.118068879785
Bigram Shannon Equitability | 0.116996612078
Unigram Shannon Diversity | 0.0587541973146
Distinct Words (Vocab) : Total Words | 0.0578981403589
Bigram Shannon Diversity | 0.0516963177593
Mean n-Gram Shannon Diversity | 0.0440696293705
Shannon Character Diversity | 0.0431234569786
Mean Top n-gram Mutual Information | 0.0413350594379
Shannon Character Equitability | 0.0402159715373
Trigram Shannon Diversity | 0.0396008851652
Shannon Class Equitability | 0.0360726401633
Top Trigram Mutual Information | 0.0337710575222
Top 4-gram Mutual Information | 0.0279796567333
4-Gram Shannon Diversity | 0.0259739834385
Top Bigram Mutual Information | 0.0257123532616
Distinct Bigrams : Total Bigrams | 0.0252155036575
Inverse Flesch Reading Ease | 0.0250329647438
5-Gram Shannon Diversity | 0.0189276868112
Mean Distinct n-grams : Total n-grams | 0.0141636605848
Distinct 5-grams : Total 5-grams | 0.00664063690957
Distinct Trigrams : Total Trigrams | 0.00465734012651
Distinct 4-grams : Total 4-grams | 0.0015168555015
Table 6: The 48 different statistics we calculated and their correlations
with model score across datasets
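Several of the strongest predictors in Table 6 are simple distributional quantities. Below is a minimal sketch of the core computations (our own helper names, assuming the usual definitions: Hellinger similarity as one minus the Hellinger distance between unigram distributions, and Shannon equitability as entropy normalized by its maximum; the exact normalizations used to produce the table may differ).

```python
from collections import Counter
import math

def unigram_dist(tokens):
    """Normalized unigram distribution of a token list."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def hellinger_similarity(p, q):
    """1 - H(p, q), where H is the Hellinger distance between distributions."""
    support = set(p) | set(q)
    h = math.sqrt(sum((math.sqrt(p.get(w, 0.0)) - math.sqrt(q.get(w, 0.0))) ** 2
                      for w in support)) / math.sqrt(2.0)
    return 1.0 - h

def shannon_equitability(dist):
    """Shannon entropy divided by its maximum log(k); 0 for a single type."""
    h = -sum(pi * math.log(pi) for pi in dist.values() if pi > 0.0)
    k = len(dist)
    return h / math.log(k) if k > 1 else 0.0
```

Under these definitions, "Maximum Unigram Hellinger Similarity" would be the maximum of `hellinger_similarity` over all pairs of per-class (or per-split) unigram distributions.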
# Enhanced Optimal Quantum Communication
by Generalized Phase Shift Keying Coherent Signal
Min Namkung Jeong San Kim<EMAIL_ADDRESS>Department of Applied
Mathematics and Institute of Natural Sciences, Kyung Hee University, Yongin
17104, Republic of Korea
###### Abstract
It is well known that the maximal success probability of the binary quantum
communication can be improved by using a sub-Poissonian non-standard coherent
state as an information carrier. In the present article, we consider the
quantum communication with $N$-ary phase shift keying ($N$-PSK) signal for an
arbitrary positive integer $N>1$. By using non-standard coherent state, we
analytically provide the maximal success probability of the quantum
communication with $N$-PSK. Unlike the binary case, we show that even super-
Poissonianity of non-standard coherent state can improve the maximal success
probability of $N$-PSK quantum communication.
††preprint: APS/123-QED
## I Introduction
In optical communication, a sender encodes a message in an optical signal and
sends it to a receiver who detects the signal to decode the message
g.cariolaro . Thus, the success probability of the optical communication is
determined by the physical and statistical properties of the optical signal
together with the structure of the receiver’s measurement device. In classical
optical communication, the receiver can use an on-off detector to decode a
sender’s message encoded in on-off keying signal c.w.helstrom ; k.tsujino ,
and a homodyne detector for binary phase shift keying signal j.g.proakis .
However, the maximal success probability for decoding encoded messages by
using conventional measurements such as the on-off and the homodyne detectors
cannot exceed the standard quantum limit.
One of the goals in quantum communication is to design a novel measurement so
that the maximal success probability to decode messages can surpass standard
quantum limit i.a.burenkov . According to quantum theory, an optical signal
is described as a density operator on a Hilbert space and a measurement is
described as a positive-operator-valued measure (POVM); therefore, quantum
communication is described as a quantum state discrimination protocol
s.m.barnett ; j.a.bergou .
Minimum error discrimination j.bae ; d.ha is one representative state
discrimination strategy used in various quantum communication protocols. When
one bit message is encoded by binary coherent states, minimum error
discrimination between the binary coherent states can be implemented via the
Dolinar receiver s.j.dolinar . However, when several bits are encoded and
sequentially sent, the photon number detector used for the discrimination may
not react efficiently to the received states i.a.burenkov . For this
reason, $N$-ary coherent states such as $N$-amplitude shift keying ($N$-ASK)
signal c.w.helstrom and $N$-phase shift keying ($N$-PSK) signal j.g.proakis
have been considered to send $\log_{2}N$ bit messages.
According to a recent work e.m.f.curado , the maximal success probability (or
Helstrom bound) of discriminating a message encoded in 2-PSK signal composed
of non-standard coherent states (NS-CS) with a novel quantum measurement can
be improved by the sub-Poissonianity of the NS-CS. Moreover, the experimental
method for implementing the quantum measurement reaching for the Helstrom
bound has recently been proposed m.namkung . Since the negative Mandel
parameter to quantify the sub-Poissonianity is considered as a resource in a
non-classical light s.dey , this result implies that the sub-Poissonianity can
be a resource for improving the performance of the quantum communication.
In the present article, we consider the quantum communication with $N$-PSK
signal for an arbitrary positive integer $N>1$. By using non-
standard coherent state, we analytically provide the maximal success
probability of the quantum communication with $N$-PSK. Unlike the binary case,
we show that even super-Poissonianity of non-standard coherent state can
improve the maximal success probability of $N$-PSK quantum communication: The
Helstrom bound can be improved by considering the sub-Poissonian NS-CS for
$N=3$, meanwhile the super-Poissonian NS-CS can improve the Helstrom bound for
$N=4$ and $N=8$.
For $N>2$, $N$-PSK signal enables us to transmit a $\log_{2}N$-bit message per
signal pulse, which is a better information exchange rate than binary PSK.
Moreover it is also known that $N$-PSK signal can provide an improved
information exchange rate between the sender and receiver even though the
receiver’s measurement is slow i.a.burenkov . However, the maximal success
probability of discriminating a message encoded in $N$-PSK signal generally
decreases as $N$ grows. Thus, our results about the possible enhancement of
the maximal success probability in $N$-PSK quantum communication by NS-CS are
important and even necessary for designing efficient quantum communication
schemes.
The present article is organized as follows. In Section 2, we briefly review
the problem of minimum error discrimination among $N$ symmetric pure states.
In Section 3, we provide the analytical Helstrom bound of $N$-PSK signal
composed of NS-CS. In Section 4, we investigate the Helstrom bound of $N$-PSK
signal composed of optical spin coherent states (OS-CS), Perelomov coherent
states (P-CS), Barut-Girardello coherent states (BG-CS) and modified Susskind-
Glogower coherent states (mSG-CS), and discuss the relation between the sub-
Poissonianity of the non-classical light and the performance of the $N$-PSK
quantum communication. Finally, in Section 5, we propose the conclusion of the
present article.
## II Preliminaries: Minimum Error Discrimination among Symmetric Pure States
In quantum communication, Alice (sender) prepares her message
$x\in\\{1,\cdots,N\\}$ with a prior probability
$q_{x}\in\\{q_{1},\cdots,q_{N}\\}$, encodes the message in a quantum state
$\rho_{x}\in\\{\rho_{1},\cdots,\rho_{N}\\}$, and sends the quantum state to
Bob (receiver). Bob performs a quantum measurement described as a POVM
$\\{M_{1},\cdots,M_{N}\\}$ to discriminate the encoded message. In the POVM,
$M_{x}$ is the element corresponding to the outcome $x$.
For a given ensemble $\mathcal{E}=\\{q_{x},\rho_{x}\\}_{x=1}^{N}$ of Alice and
a POVM $\mathcal{M}=\\{M_{x}\\}_{x=1}^{N}$ of Bob, the success probability of
the quantum communication between Alice and Bob is described by the success
probability of the state discrimination,
$P_{s}(\mathcal{E},\mathcal{M})=\sum_{x=1}^{N}q_{x}\mathrm{tr}\left\\{\rho_{x}M_{x}\right\\}.$
(1)
One way to optimize the efficiency of quantum communication is to consider a
POVM that maximizes the success probability in Eq. (1). In this case, the
maximization of the success probability in Eq. (1) is equivalent to the
minimization of the error probability
$P_{e}(\mathcal{E},\mathcal{M})=1-P_{s}(\mathcal{E},\mathcal{M})=\sum_{x=1}^{N}\sum_{y\not=x}q_{x}\mathrm{tr}\left\\{\rho_{x}M_{y}\right\\}.$
(2)
Minimum error discrimination is to minimize the error probability in Eq. (2)
over all possible POVMs $\mathcal{M}$ of Bob.
For a given ensemble $\mathcal{E}$, it is known that the following inequality
is a necessary and sufficient condition for POVM $\mathcal{M}$ minimizing the
error probability c.w.helstrom ; s.m.barnett2 ,
$\displaystyle\sum_{z=1}^{N}q_{z}\rho_{z}M_{z}-q_{x}\rho_{x}\geq 0,\ \ \forall
x\in\\{1,\cdots,N\\}.$ (3)
Moreover, it is known that the following equality is a useful necessary
condition to characterize the structure of the POVM,
$\displaystyle M_{x}(q_{x}\rho_{x}-q_{y}\rho_{y})M_{y}=0,\ \ \forall
x,y\in\\{1,\cdots,N\\}.$ (4)
If every quantum state $\rho_{x}$ is pure, that is,
$\rho_{x}=|\psi_{x}\rangle\langle\psi_{x}|$, the optimal POVM is given by a
rank-1 projective measurement c.w.helstrom . In other words,
$M_{x}=|\pi_{x}\rangle\langle\pi_{x}|$ for every $x\in\\{1,\cdots,N\\}$.
Now, we focus on the minimum error discrimination among a specific class of
pure states, called symmetric pure states.
###### Definition 1
a.chefles For a positive integer $N$, the distinct pure states
$|\psi_{1}\rangle,\cdots,|\psi_{N}\rangle$ are called symmetric, if there
exists a unitary operator $V$ such that
$|\psi_{x}\rangle=V^{x-1}|\psi_{1}\rangle$ (5)
for $x=1,2,\cdots,N$ and
$V^{N}=\mathbb{I},$ (6)
where $\mathbb{I}$ is an identity operator on a subspace spanned by
$\\{|\psi_{1}\rangle,\cdots,|\psi_{N}\rangle\\}$.
The Gram matrix composed of the symmetric pure states in Definition 1 is
$G=\left(\langle\psi_{x}|\psi_{y}\rangle\right)_{x,y=1}^{N}.$ (7)
From a straightforward calculation, the eigenvalues of the Gram matrix in Eq.
(7) are of the form
$\lambda_{p}=\sum_{k=1}^{N}\langle\psi_{j}|\psi_{k}\rangle e^{-\frac{2\pi
i(p-1)(j-k)}{N}},\ \ p=1,2,\cdots,N,$ (8)
for any choice of $j\in\\{1,2,\cdots,N\\}$. We note that the set
$\\{\lambda_{p}\\}_{p=1}^{N}$ is invariant under the choice of $j$ due to the
symmetry of the pure states $\\{|\psi_{1}\rangle,\cdots,|\psi_{N}\rangle\\}$.
The following proposition provides the maximal success probability of the
minimum error discrimination among the symmetric pure states in Definition 1.
###### Proposition 1
c.w.helstrom Let $\mathcal{E}_{sym}$ be an equiprobable ensemble of symmetric
pure states $|\psi_{1}\rangle,\cdots,|\psi_{N}\rangle$. Then, the maximal
success probability is given as
$P_{hel}(\mathcal{E}_{sym})=\frac{1}{N^{2}}\left(\sum_{p=1}^{N}\sqrt{\lambda_{p}}\right)^{2},$
(9)
where $\lambda_{p}$ are the eigenvalues of the Gram matrix composed of
$\\{|\psi_{1}\rangle,\cdots,|\psi_{N}\rangle\\}$ in Eq. (7).
Eq. (9) is also called the Helstrom bound, and $1-P_{hel}(\mathcal{E}_{sym})$
is called the minimum error probability.
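As a numerical companion to Proposition 1, the Helstrom bound follows directly from the eigenvalues of the Gram matrix in Eq. (7); below is a minimal sketch (the helper name is ours).

```python
import numpy as np

def helstrom_bound_symmetric(gram):
    """Eq. (9): P_hel = (1/N^2) * (sum_p sqrt(lambda_p))^2 for an equiprobable
    ensemble of N symmetric pure states with Gram matrix `gram`."""
    lam = np.linalg.eigvalsh(gram)   # Gram matrices are Hermitian and PSD
    lam = np.clip(lam, 0.0, None)    # guard against round-off negatives
    n = gram.shape[0]
    return float(np.sqrt(lam).sum() ** 2 / n ** 2)
```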
## III Optimal Communication with
Phase Shift Keying (PSK) Signal
In quantum optical communication, phase shift keying (PSK) signal is expressed
as equiprobable symmetric pure states g.cariolaro . In this section, we derive
the maximal success probability of the quantum communication with PSK signal
composed of generalized coherent states. First, we provide the definition of
the generalized coherent state, which encapsulates the standard coherent state
(S-CS) and the non-standard coherent state (NS-CS) as special cases.
###### Definition 2
e.m.f.curado If a pure state takes the form
$|\alpha,\vec{h}\rangle=\sum_{n=0}^{\infty}\alpha^{n}h_{n}(|\alpha|^{2})|n\rangle,\
\ \alpha\in\mathbb{C},$ (10)
where $\\{|n\rangle|n\in\mathbb{Z}^{+}\cup\\{0\\}\\}$ is the Fock basis and
$\vec{h}$ is a tuple of real-valued functions
$h_{n}:[0,R^{2}]\rightarrow\mathbb{R}$ satisfying
$\displaystyle\sum_{n=0}^{\infty}u^{n}\left\\{h_{n}(u)\right\\}^{2}=1,$ (11)
$\displaystyle\sum_{n=0}^{\infty}n\,u^{n}\left\\{h_{n}(u)\right\\}^{2}\textrm{ is a strictly increasing function of }u,$ (12)
$\displaystyle\int_{0}^{R^{2}}\mathrm{d}u\,w(u)\,u^{n}\left\\{h_{n}(u)\right\\}^{2}=1\textrm{ for a real-valued function }w:[0,R^{2}]\rightarrow\mathbb{R}^{+},$ (13)
then the pure state is called generalized coherent state. If every real-valued
function $h_{n}$ in Eq. (10) takes the form
$h_{n}(u)=\frac{1}{\sqrt{n!}}e^{-\frac{1}{2}u},\ \ \forall
n\in\mathbb{Z}^{+}\cup\\{0\\},$ (14)
then Eq. (10) is called standard coherent state (S-CS) r.j.glauber .
Otherwise, Eq. (10) is called non-standard coherent state (NS-CS).
Several examples of NS-CS have been introduced such as optical spin coherent
state (OS-CS) a.m.perelomov , Perelomov coherent state (P-CS) a.m.perelomov ,
Barut-Girardello coherent state (BG-CS) a.o.barut and modified Susskind-
Glogower coherent state (mSG-CS) j.-p.gazeau .
###### Example 1
For a given non-negative integer $\widetilde{n}$, if $h_{n}$ takes the form
$\displaystyle
h_{n}(u)=\sqrt{\frac{\widetilde{n}!}{n!(\widetilde{n}-n)!}}(1+u)^{-\frac{\widetilde{n}}{2}},$
(15)
for $0\leq n\leq\widetilde{n}$ and $h_{n}(u)=0$ for $n>\widetilde{n}$, then
the generalized coherent state in Eq. (10) is called optical spin coherent
state (OS-CS).
###### Example 2
For every non-negative integer $n$ and a real number $\varsigma$ with
$\varsigma\geq 1/2$, if $h_{n}$ takes the form
$\displaystyle
h_{n}(u)=\frac{1}{\mathcal{N}(u)}\sqrt{\frac{\Gamma(2\varsigma)}{n!\Gamma(2\varsigma+n)}},$
(16)
then the generalized coherent state in Eq.(10) is called Barut-Girardello
coherent state (BG-CS). Here, $\Gamma$ is the Gamma function of the first kind
and $\mathcal{N}(u)$ is a normalization factor
$\mathcal{N}(u)=\Gamma(2\varsigma)u^{1/2-u}I_{2\varsigma-1}(2\sqrt{u}),$ (17)
where $I_{\nu}$ is the modified Bessel function of the first kind.
###### Example 3
For every non-negative integer $n$, if $h_{n}$ takes the form
$\displaystyle
h_{n}(u)=\sqrt{\frac{n+1}{\mathcal{\bar{N}}(u)}}\frac{1}{u^{\frac{n+1}{2}}}J_{n+1}(2\sqrt{u}),$
(18)
then the generalized coherent state in Eq. (10) is called modified Susskind-
Glogower coherent state (mSG-CS). Here, $J_{n}$ is the Bessel function of the
first kind and $\mathcal{\bar{N}}(u)$ is a normalization factor
$\displaystyle\mathcal{\bar{N}}(u)=\frac{1}{u}\Big{[}2u\left\\{J_{0}(2\sqrt{u})\right\\}^{2}-\sqrt{u}J_{0}(2\sqrt{u})J_{1}(2\sqrt{u})+2u\left\\{J_{1}(2\sqrt{u})\right\\}^{2}\Big{]}.$ (19)
###### Example 4
For every non-negative integer $n$ and an integer or half-integer $\varsigma$
with $\varsigma\geq 1/2$, if $h_{n}$ takes the form
$\displaystyle
h_{n}(u)=\sqrt{\frac{(2\varsigma-1+n)!}{n!(2\varsigma-1)!}}(1-u)^{\varsigma},$
(20)
then the generalized coherent state in Eq. (10) is called Perelomov coherent
state (P-CS).
We mainly focus on which of the NS-CS provided in the examples can give an
advantage to $N$-ary PSK quantum communication. For this reason, we define the
$N$-ary generalized PSK ($N$-GPSK) signal as follows.
###### Definition 3
If an equiprobable ensemble $\mathcal{E}_{gcs}$ consists of generalized
coherent states,
$\left\\{|\alpha_{x},\vec{h}\rangle|x\in\\{1,2,\cdots,N\\}\right\\},$ (21)
where $N\in\mathbb{Z}^{+}$ and $\alpha_{x}\in\mathbb{C}$ such that
$\alpha_{x}=\alpha e^{\frac{2\pi i}{N}x},$ (22)
with a non-negative real number $\alpha$, then the ensemble $\mathcal{E}_{gcs}$ is
called $N$-ary generalized PSK ($N$-GPSK) signal.
Moreover, $N$-GPSK signal is called $N$-ary standard PSK ($N$-SPSK) signal
g.cariolaro if every coherent state in Eq. (21) is S-CS, and $N$-GPSK signal
is called $N$-ary non-standard PSK ($N$-NSPSK) signal if every coherent state
in Eq. (21) is NS-CS. The following theorem shows that the generalized
coherent states in Definition 3 are symmetric.
###### Theorem 1
For given distinct generalized coherent states
$|\alpha_{1},\vec{h}\rangle,\cdots,|\alpha_{N},\vec{h}\rangle$, there exists a
unitary operator $U$ such that
$\displaystyle|\alpha_{x},\vec{h}\rangle=U^{x-1}|\alpha_{1},\vec{h}\rangle,\ \
\forall x\in\\{1,2,\cdots,N\\},$ (23)
and
$U^{N}=\mathbb{I},$ (24)
where $\mathbb{I}$ is an identity operator on a subspace spanned by
$\\{|\alpha_{1},\vec{h}\rangle,\cdots,|\alpha_{N},\vec{h}\rangle\\}$.
Proof. Consider a unitary operator
$\displaystyle U=e^{\frac{2\pi i}{N}a^{\dagger}a},$ (25)
where $a$ and $a^{\dagger}$ are the annihilation and creation operators
satisfying
$\displaystyle a|n\rangle=\sqrt{n}|n-1\rangle,\ \ \forall n\in\mathbb{Z}^{+},$
(26) $\displaystyle a^{\dagger}|n\rangle=\sqrt{n+1}|n+1\rangle,\ \ \forall
n\in\mathbb{Z}^{+}\cup\\{0\\},$ (27)
respectively. It is straightforward to show that the unitary operator $U$ in
Eq. (25) satisfies Eq. (24).
We also note that
$\displaystyle U|n\rangle=e^{\frac{2\pi i}{N}n}|n\rangle,$ (28)
for any non-negative integer $n$, therefore we have that
$\displaystyle U|\alpha_{x},\vec{h}\rangle$ $\displaystyle=$
$\displaystyle\sum_{n=0}^{\infty}\alpha_{x}^{n}h_{n}(|\alpha_{x}|^{2})e^{\frac{2\pi
i}{N}a^{\dagger}a}|n\rangle$ (29) $\displaystyle=$
$\displaystyle\sum_{n=0}^{\infty}\alpha_{x}^{n}h_{n}(|\alpha_{x}|^{2})e^{\frac{2\pi
i}{N}n}|n\rangle$ $\displaystyle=$
$\displaystyle\sum_{n=0}^{\infty}(\alpha_{x}e^{\frac{2\pi
i}{N}})^{n}h_{n}(|\alpha_{x}|^{2})|n\rangle,$
for every $x\in\\{1,2,\cdots,N-1\\}$. Moreover, Eq. (22) leads us to
$\displaystyle\alpha_{x}e^{\frac{2\pi i}{N}}=\alpha_{x+1},$ (30)
for $x\in\\{1,2,\cdots,N-1\\}$ and
$\displaystyle|\alpha_{x}|=\alpha,$ (31)
for $x\in\\{1,2,\cdots,N\\}$. From Eqs. (29), (30) and (31), we have
$\displaystyle
U|\alpha_{x},\vec{h}\rangle=\sum_{n=0}^{\infty}(\alpha_{x+1})^{n}h_{n}(|\alpha_{x+1}|^{2})|n\rangle=|\alpha_{x+1},\vec{h}\rangle.$
(32)
Eq. (23) can be shown by an inductive use of Eq. (32), which completes the
proof.
Theorem 1 means that the Helstrom bound of quantum communication with $N$-GPSK
signal is given by Eq. (9) in Proposition 1, which is encapsulated in the
following theorem.
###### Theorem 2
The Helstrom bound of $N$-GPSK signal is given by
$\displaystyle
P_{hel}(\mathcal{E}_{gcs})=\frac{1}{N^{2}}\left(\sum_{p=1}^{N}\sqrt{\lambda_{p}^{(G)}}\right)^{2},$
(33)
where $\lambda_{p}^{(G)}$ takes the form of
$\displaystyle\lambda_{p}^{(G)}=\sum_{k=0}^{N-1}\left[\sum_{n=0}^{\infty}\alpha^{2n}\cos\left\\{\frac{2\pi}{N}k(n+p-1)\right\\}\left\\{h_{n}(\alpha^{2})\right\\}^{2}\right],$
(34)
for every $p\in\\{1,2,\cdots,N\\}$.
Proof. For every $j,k\in\\{1,2,\cdots,N\\}$, the inner product
$\langle\alpha_{j},\vec{h}|\alpha_{k},\vec{h}\rangle$ is
$\displaystyle\langle\alpha_{j},\vec{h}|\alpha_{k},\vec{h}\rangle=\sum_{n=0}^{\infty}\left\\{\alpha^{2}e^{i\frac{2\pi}{N}(k-j)}\right\\}^{n}\left\\{h_{n}(\alpha^{2})\right\\}^{2}.$
(35)
From Eq. (35) together with Eq. (8), $\lambda_{p}^{(G)}$ is also obtained by
$\displaystyle\lambda_{p}^{(G)}$
$\displaystyle=\sum_{k=1}^{N}\left[\sum_{n=0}^{\infty}\left\\{\alpha^{2}e^{i\frac{2\pi}{N}(k-j)}\right\\}^{n}\left\\{h_{n}(\alpha^{2})\right\\}^{2}\right]e^{-\frac{2\pi
i(p-1)(j-k)}{N}}$
$\displaystyle=\sum_{k=1}^{N}\left[\sum_{n=0}^{\infty}\alpha^{2n}e^{i\frac{2\pi}{N}(k-j)(n+p-1)}\left\\{h_{n}(\alpha^{2})\right\\}^{2}\right].$
(36)
As mentioned before, the set $\\{\lambda_{p}^{(G)}\\}_{p=1}^{N}$ is invariant
under the choice of $j\in\\{1,2,\cdots,N\\}$. By choosing $j=1$ and shifting
the summation index $k\to k-1$, $\lambda_{p}^{(G)}$ can be rewritten as
$\displaystyle\lambda_{p}^{(G)}=\sum_{k=0}^{N-1}\left[\sum_{n=0}^{\infty}\alpha^{2n}e^{i\frac{2\pi}{N}k(n+p-1)}\left\\{h_{n}(\alpha^{2})\right\\}^{2}\right].$
(37)
Since the Gram matrix is Hermitian, $\lambda_{p}^{(G)}$ is a real number.
Thus, by using the relation
$\lambda_{p}^{(G)}=\frac{\lambda_{p}^{(G)}+\lambda_{p}^{(G)*}}{2},$ (38)
together with Eq. (37), we have Eq. (34). Due to Theorem 1, every generalized
coherent state in $N$-GPSK signal is symmetric. Thus, Proposition 1 and Eq.
(34) lead us to the Helstrom bound in Eq. (33).
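The bound in Eqs. (33)-(34) is straightforward to evaluate numerically. Below is a minimal sketch that truncates the photon-number series at a finite `n_max`; `h_sq` is specialized to the S-CS choice of Eq. (14), and substituting another $\left\\{h_{n}(u)\right\\}^{2}$ from Examples 1-4 yields the corresponding NS-CS results (the helper names and the truncation level are ours).

```python
import numpy as np
from scipy.special import gammaln

def scs_h_sq(u, n):
    """{h_n(u)}^2 = e^{-u} / n! for the standard coherent state, Eq. (14)."""
    return np.exp(-u - gammaln(n + 1.0))

def helstrom_gpsk(alpha, N, h_sq=scs_h_sq, n_max=200):
    """Eqs. (33)-(34) with the photon-number series truncated at n_max."""
    n = np.arange(n_max)
    weights = alpha ** (2 * n) * h_sq(alpha ** 2, n)  # alpha^{2n} {h_n(alpha^2)}^2
    lam = []
    for p in range(1, N + 1):
        k = np.arange(N)[:, None]
        cosines = np.cos(2.0 * np.pi / N * k * (n + p - 1))
        lam.append(max(float((cosines * weights).sum()), 0.0))
    return sum(np.sqrt(l) for l in lam) ** 2 / N ** 2
```

Plotting $1-P_{hel}$ from such a routine against the mean photon number reproduces curves of the type shown in the next section.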
## IV Sub-Poissonianity of NS-CS and the Helstrom bound
For $N=3,4$ and 8, we provide illustrative results of the Helstrom bound of
$N$-NSPSK signal of Eq. (33) in case of OS-CS, P-CS, BG-CS and mSG-CS. We also
compare these results with the case of $N$-SPSK signal.
### IV.1 Optical Spin Coherent States (OS-CS)
Figure 1: The minimum error probabilities of $N$-SPSK signal and $N$-NSPSK
signal composed of OS-CS, where (a), (b) and (c) show the case of $N=$3, 4
and 8, respectively. In these figures, purple, red, blue and green lines show
the case of $N$-NSPSK signal with $\widetilde{n}=3$, $\widetilde{n}=5$,
$\widetilde{n}=7$ and $\widetilde{n}=11$, respectively. Black lines in the
figures show the case of $N$-SPSK signal.
Figure 2: The minimum error probabilities of $N$-SPSK signal and $N$-NSPSK
signal composed of BG-CS, where (a), (b) and (c) show the case of $N=$3, 4
and 8, respectively. In these figures, blue and red lines show the case of
$N$-NSPSK signal with $\varsigma=0.5$ and $\varsigma=1.5$, respectively. Black
lines show the case of $N$-SPSK.
The minimum error probabilities of $N$-SPSK signal and $N$-NSPSK signal
composed of OS-CS are illustrated in Fig. 1, where Fig. 1(a), (b) and (c) show
the case of $N=$3, 4 and 8, respectively. In these figures, purple, red, blue
and green lines show the case of $N$-NSPSK signal with $\widetilde{n}=3$,
$\widetilde{n}=5$, $\widetilde{n}=7$ and $\widetilde{n}=11$, respectively.
Black lines in the figures show the case of $N$-SPSK signal.
In Fig. 1(a), the minimum error probability of 3-NSPSK signal composed of
OS-CS is smaller than that of 3-SPSK signal when the mean photon number is large
($\langle n\rangle>0.45$, $\langle n\rangle>0.42$, $\langle n\rangle>0.38$ and
$\langle n\rangle>0.37$ in case of $\widetilde{n}=3$, $\widetilde{n}=5$,
$\widetilde{n}=7$ and $\widetilde{n}=11$, respectively). In other words, 3-PSK
quantum communication can be enhanced by a non-standard coherent state using
OS-CS. However, in Fig. 1(b), each minimum error probability of 4-NSPSK signal
is larger than that of 4-SPSK signal for an arbitrary mean photon number. The
same behavior appears in Fig. 1(c), where 8-NSPSK signal is considered.
These results imply that 4-PSK and 8-PSK quantum communication cannot be
enhanced by OS-CS.
### IV.2 Barut-Girardello Coherent States (BG-CS)
The minimum error probabilities of $N$-SPSK signal and $N$-NSPSK signal
composed of BG-CS are illustrated in Fig. 2, where Fig. 2(a), (b) and (c)
show the case of $N=$3, 4 and 8, respectively. In these figures, blue and red
lines show the case of $N$-NSPSK signal with $\varsigma=0.5$ and
$\varsigma=1.5$, respectively. Black lines show the case of $N$-SPSK.
In Fig. 2(a), each minimum error probability of 3-NSPSK signal with
$\varsigma=1.5$ is smaller than that of 3-SPSK signal when the mean photon
number is larger than 0.48. Meanwhile, each minimum error probability of
3-NSPSK signal with $\varsigma=0.5$ is larger than that of 3-SPSK signal for an
arbitrary mean photon number. Thus, enhancing 3-PSK quantum communication by
non-standard coherent state using BG-CS depends on the parameter $\varsigma$.
However, in Fig. 2(b), each minimum error probability of $4$-NSPSK signal is
larger than that of $4$-SPSK signal for an arbitrary mean photon number. The
same behavior appears in Fig. 2(c), where 8-NSPSK signal is considered.
These results imply that 4-PSK and 8-PSK quantum communication cannot be
enhanced by BG-CS.
### IV.3 Modified Susskind-Glogower Coherent States (mSG-CS)
Figure 3: The minimum error probabilities of $N$-SPSK signal and $N$-NSPSK
signal composed of mSG-CS, where (a), (b) and (c) show the case of $N=$3, 4
and 8, respectively. In these figures, red lines show the case of $N$-NSPSK
signal and black lines show the case of $N$-SPSK.
Figure 4: The minimum error probabilities of $N$-SPSK signal and $N$-NSPSK
signal composed of P-CS, where (a), (b) and (c) show the case of $N=$3, 4 and
8, respectively. In these figures, blue and red lines show the case of P-CS
with $\varsigma=0.5$ and $\varsigma=1.5$, respectively. Black lines show the
case of S-CS.
The minimum error probabilities of $N$-SPSK signal and $N$-NSPSK signal
composed of mSG-CS are illustrated in Fig. 3, where Fig. 3(a), (b) and (c)
show the case of $N=$3, 4 and 8, respectively. In these figures, red lines
show the case of $N$-NSPSK signal and black lines show the case of $N$-SPSK.
In Fig. 3, each minimum error probability of $N$-NSPSK signal is larger than
that of $N$-SPSK signal for any $N=$3, 4 and 8 and any mean photon number.
This result implies that 3-PSK, 4-PSK, and 8-PSK quantum communication cannot
be enhanced by mSG-CS. We also compare this result with the previous work
about on-off keying signal e.m.f.curado ; it is known that the minimum error
probability of on-off keying signal composed of mSG-CS has a singular point
where the logarithm of the minimum error probability diverges to $-\infty$.
This implies that the minimum error probability can reach zero. Unlike
the result in the previous work e.m.f.curado , the minimum error probabilities
of 3, 4 and 8-NSPSK signal in Fig. 3 do not have such singular points.
### IV.4 Perelomov Coherent States (P-CS)
The minimum error probabilities of $N$-SPSK signal and $N$-NSPSK signal
composed of P-CS are illustrated in Fig. 4, where Fig. 4(a), (b) and (c) show
the case of $N=$3, 4 and 8, respectively. In these figures, blue and red lines
show the case of P-CS with $\varsigma=0.5$ and $\varsigma=1.5$, respectively.
Black lines show the case of S-CS.
In Fig. 4(a), each minimum error probability of 3-NSPSK signal composed of P-CS
with $\varsigma=$0.5 and 1.5 is larger than that of 3-SPSK signal for an
arbitrary mean photon number. In other words, $3$-PSK quantum communication
cannot be enhanced by non-standard coherent state using P-CS. However, in Fig.
4(b), each minimum error probability of 4-NSPSK signal composed of P-CS is
smaller than that of 4-SPSK signal when the mean photon number is small ($\langle
n\rangle<0.585$ and $\langle n\rangle<0.786$ in case of $\varsigma=0.5$ and
$\varsigma=1.5$, respectively). This implies that 4-PSK quantum communication
can be enhanced by P-CS. In Fig. 4(c), each minimum error probability of
8-NSPSK signal composed of P-CS is smaller than that of 8-SPSK signal for an
arbitrary mean photon number. This result is rather surprising since even
super-Poissonianity in P-CS can enhance the 4-PSK and 8-PSK quantum
communication unlike the binary case m.namkung . We discuss the details in the
next section.
### IV.5 Mandel Parameter and $N$-NSPSK Quantum Communication
It is known that sub-Poissonianity of non-classical light is one of the
important statistical properties for improving Helstrom bound of binary
quantum optical communication e.m.f.curado . For this reason, we consider the
following Mandel parameter,
$Q_{M}^{(NS)}=\frac{(\Delta n)^{2}}{\langle n\rangle}-1,$ (39)
where $\langle n\rangle$ is the mean photon number and $\Delta n$ is the
standard deviation of the photon number. It is known that if
$Q_{M}^{(NS)}>0$ ($<0$), then the generalized coherent state is
super-Poissonian (sub-Poissonian) l.mandel ; r.short . If $Q_{M}^{(NS)}=0$
(for example, S-CS), then the
generalized coherent state is Poissonian. Here, we consider the relation
between the performance of the $N$-PSK quantum communication and the Mandel
parameter.
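Since the photon-number distribution of a generalized coherent state is $p_{n}=\alpha^{2n}\left\\{h_{n}(\alpha^{2})\right\\}^{2}$, Eq. (39) can be checked numerically for each family above. The following minimal sketch (our own helper; it truncates and renormalizes the distribution, and assumes $\alpha>0$) uses the same `h_sq` convention as the Helstrom-bound sketch in Section III.

```python
import numpy as np

def mandel_q(alpha, h_sq, n_max=200):
    """Eq. (39): Q = (Delta n)^2 / <n> - 1, with p_n = alpha^{2n} {h_n(alpha^2)}^2."""
    n = np.arange(n_max)
    p = alpha ** (2 * n) * h_sq(alpha ** 2, n)
    p = p / p.sum()                    # renormalize the truncated distribution
    mean = (n * p).sum()
    var = ((n - mean) ** 2 * p).sum()
    return float(var / mean - 1.0)
```

For the S-CS choice of Eq. (14) this returns $Q\approx 0$, i.e. Poissonian statistics, consistent with the discussion below.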
1. 1.
In case of OS-CS, the Mandel parameter is analytically derived as e.m.f.curado
$\displaystyle Q_{M}^{(OS)}=-\frac{\langle n\rangle}{\widetilde{n}},$ (40)
which means that OS-CS is always sub-Poissonian. According to Fig. 1, we note
that sub-Poissonianity of OS-CS does not always guarantee the enhancement of
the $N$-PSK quantum communication.
2. 2.
In case of BG-CS, the Mandel parameter is analytically derived in terms of the
modified Bessel function of the first kind as e.m.f.curado
$\displaystyle
Q_{M}^{(BG)}=\alpha\left[\frac{I_{2\varsigma+1}(2\alpha)}{I_{2\varsigma}(2\alpha)}-\frac{I_{2\varsigma}(2\alpha)}{I_{2\varsigma-1}(2\alpha)}\right].$
(41)
Since the inequality $\left\\{I_{\nu+1}(x)\right\\}^{2}\geq
I_{\nu}(x)I_{\nu+2}(x)$ holds for every $x\geq 0$, $Q_{M}^{(BG)}$ is
non-positive. Therefore, BG-CS is always sub-Poissonian or Poissonian.
Moreover, $Q_{M}^{(BG)}$ is known to be strictly negative for non-zero mean
photon number e.m.f.curado . Nevertheless, Fig. 2 shows that sub-Poissonianity
of BG-CS does not always guarantee the enhancement of the $N$-PSK quantum
communication.
3. 3.
Since the analytic form of the Mandel parameter of mSG-CS is too complex
e.m.f.curado , we do not present it here.
According to the result of e.m.f.curado , the Mandel parameter of mSG-CS is
negative when the mean photon number is not too large. Nevertheless, Fig. 3
shows that the sub-Poissonianity of the mSG-CS cannot provide any advantage on
the $N$-PSK quantum communication.
4. 4.
In case of P-CS, the Mandel parameter is analytically derived as e.m.f.curado
$\displaystyle Q_{M}^{(P)}=\frac{\langle n\rangle}{2\varsigma},$ (42)
which means that P-CS is always super-Poissonian. However, Fig. 4 shows that
P-CS can enhance the $N$-PSK quantum communication for $N=4$ and $N=8$. It is
surprising since the super-Poissonianity of NS-CS can even enhance $N$-PSK
quantum communication, unlike the binary case.
## V Conclusion
In the present article, we have considered the quantum communication with
$N$-ary phase shift keying ($N$-PSK) signal for an arbitrary positive integer
$N>1$. By using NS-CS, we have analytically provided the
Helstrom bound of the quantum communication with $N$-PSK. Unlike the binary
case e.m.f.curado ; m.namkung , we have shown that even super-Poissonianity of
NS-CS can improve the Helstrom bound of $N$-PSK quantum communication: The
Helstrom bound can be improved by considering the sub-Poissonian NS-CS for
$N=3$, meanwhile the super-Poissonian NS-CS can improve the Helstrom bound for
$N=4$ and $N=8$.
Using $N$-PSK signal with $N>2$, we can achieve a better transmission rate per
signal pulse than that of binary PSK even if the receiver’s measurement is
slow i.a.burenkov . On the other hand, the maximal success probability of
discriminating a message encoded in $N$-PSK signal generally decreases as $N$
grows. Thus, our results about the possible enhancement of the maximal success
probability in $N$-PSK quantum communication by NS-CS are important and even
necessary for designing efficient quantum communication schemes.
In the present article, we have only considered PSK signal with equal prior
probabilities, which is composed of symmetric pure states. However, it is
interesting and even important to consider a non-equiprobable or asymmetric
ensemble of NS-CS for several reasons: First, it is practically difficult to
implement the PSK signal having perfect symmetry or equal prior probabilities.
Moreover, in discriminating three non-equiprobable and asymmetric pure states,
there is a possibility that the sub-Poissonianity of non-classical light can enhance
the Helstrom bound. We note that it is also interesting to consider
unambiguous discrimination i.d.ivanovic ; d.dieks ; a.peres ; g.jaeger ;
s.pang ; j.a.bergou2 of NS-CS since this strategy can provide us with better
confidence than the minimum error discrimination.
## VI Acknowledgement
This work was supported by Quantum Computing Technology Development Program
(NRF2020M3E4A1080088) through the National Research Foundation of Korea (NRF)
grant funded by the Korea government (Ministry of Science and ICT).
## References
* (1) G. Cariolaro, Quantum Communications (Springer, 2015).
* (2) C. W. Helstrom, Quantum Detection and Estimation Theory (Academic Press, 1976).
* (3) K. Tsujino, D. Fukuda, G. Fujii, S. Inoue, M. Fujiwara, N. Takeoka, and M. Sasaki, “Sub-shot-noise-limit discrimination of on-off keyed coherent states via a quantum receiver with a superconducting transition edge sensor”, Opt. Express 18, 8107 (2010).
* (4) J. G. Proakis and M. Salehi, Digital Communications, 5th ed. (McGraw-Hill, 2008).
* (5) I. A. Burenkov, M. V. Jabir, and S. V. Polyakov, “Practical quantum-enhanced receivers for classical communication”, AVS Quantum Sci. 3, 025301 (2021).
* (6) S. M. Barnett and S. Croke, “Quantum state discrimination”, Adv. Opt. Photon. 1, 238 (2009).
* (7) J. A. Bergou, “Quantum state discrimination and selected applications”, J. Phys: Conf. Ser. 84, 012001 (2007).
* (8) J. Bae and L. C. Kwek, “Quantum state discrimination and its applications”, J. Phys. A: Math. Theor. 48, 083001 (2015).
* (9) D. Ha and Y. Kwon, “Complete analysis for three-qubit mixed-state discrimination”, Phys. Rev. A 87, 062302 (2013).
* (10) S. J. Dolinar, “An optimum receiver for the binary coherent state quantum channel”, Q. Prog. Rep. 108, 219 (1973).
* (11) E. M. F. Curado, S. Faci, J.-P. Gazeau, and D. Noguera, “Lowering Helstrom Bound with non-standard coherent states”, J. Opt. Soc. Am. B 38, 3556 (2021).
* (12) M. Namkung and J. S. Kim, “Indirect Measurement for Optimal Quantum Communication Enhanced by Binary Non-standard Coherent States”, arXiv:2112.02312.
* (13) S. Dey, “An introductory review on resource theories of generalized nonclassical light”, J. Phys: Conf. Ser. 2038, 012008 (2021).
* (14) S. M. Barnett and S. Croke, “On the conditions for discrimination between quantum states with minimum error”, J. Phys. A: Math. Theor. 42, 062001 (2009).
* (15) A. Chefles and S. M. Barnett, “Optimum unambiguous discrimination between linearly independent symmetric states”, Phys. Lett. A 250, 223 (1998).
* (16) R. J. Glauber, “Coherent and Incoherent States of the Radiation Field”, Phys. Rev. 131, 2766 (1963).
* (17) A. M. Perelomov, Generalized Coherent States and Their Applications (Springer, 1986).
* (18) A. O. Barut and L. Girardello, “New ‘coherent’ states associated with non-compact groups”, Commun. Math. Phys. 21, 41 (1971).
* (19) J.-P. Gazeau, V. Hussin, J. Moran, and K. Zelaya, “Generalized Susskind-Glogower coherent states”, J. Math. Phys. 62, 072104 (2021).
* (20) L. Mandel, “Fluctuations of Photon Beams: The Distribution of the Photo-Electrons”, Proc. Phys. Soc. 74, 233 (1959).
* (21) R. Short and L. Mandel, “Observation of Sub-Poissonian Photon Statistics”, Phys. Rev. Lett. 51, 384 (1983).
* (22) I. D. Ivanovic, “How to differentiate between non-orthogonal states”, Phys. Lett. A 123, 257 (1987).
* (23) D. Dieks, “Overlap and distinguishability of quantum states”, Phys. Lett. A 126, 303 (1988).
* (24) A. Peres, “How to differentiate between non-orthogonal states”, Phys. Lett. A 128, 19 (1988).
* (25) G. Jaeger, “Optimal distinction between two non-orthogonal quantum states”, Phys. Lett. A 197. 83 (1995).
* (26) S. Pang and S. Wu, “Optimum unambiguous discrimination of linearly independent pure states”, Phys. Rev. A 80, 052320 (2009).
* (27) J. A. Bergou, U. Futschik, and E. Feldman, “Optimal Unambiguous Discrimination of Pure Quantum States”, Phys. Rev. Lett. 108, 250502 (2012).
# IACT event analysis with the MAGIC telescopes using deep convolutional
neural networks with CTLearn
T. Miener1, R. López-Coto2, J. L. Contreras1, J. G. Green3, D. Green4 for the
MAGIC Collaboration, E. Mariotti2, D. Nieto1, L. Romanato2, and S. Yadav5
###### Abstract
The Major Atmospheric Gamma Imaging Cherenkov (MAGIC) telescope system
consists of two imaging atmospheric Cherenkov telescopes (IACTs) and is
located on the Canary island of La Palma. IACTs are excellent tools to inspect
the very-high-energy (few tens of GeV and above) gamma-ray sky by capturing
images of the air showers, originated by the absorption of gamma rays and
cosmic rays by the atmosphere, through the detection of Cherenkov photons
emitted in the shower. One of the main factors determining the sensitivity of
IACTs to gamma-ray sources, in general, is how well reconstructed the
properties (type, energy, and incoming direction) of the primary particle
triggering the air shower are. We present how deep convolutional neural
networks (CNNs) are being explored as a promising method for IACT full-event
reconstruction. The performance of the method is evaluated on observational
data using the standard MAGIC Analysis and Reconstruction Software, MARS, and
CTLearn, a package for IACT event reconstruction through deep learning.
1Instituto de Física de Partículas y del Cosmos and Departamento de EMFTEL,
Universidad Complutense de Madrid, Spain<EMAIL_ADDRESS>
2Dipartimento di Fisica e Astronomia dell’Università and Sezione INFN,
Padova, Italy
3INAF - National Institute for Astrophysics, Roma, Italy
4Max-Planck-Institut für Physik, München, Germany
5Birla Institute of Technology and Science, Pilani, India
## 1 Introduction
In this contribution, we show how deep learning (DL) algorithms like CNNs are
incorporated into the analysis workflow of the MAGIC telescopes to perform
full-event reconstruction. We also explore the robustness of CNN-based
methods, when applying them to real observational data and compare the
sensitivity to the standard analysis of MARS (Zanin et al. 2013; Aleksić et
al. 2016). The DL workflow consists of three main building bricks (see Fig.
1). The Monte Carlo (MC) simulations and observational data are processed by
the MARS software. A complementary macro translates crucial information into
uproot-readable branches (Pivarski et al. 2021;
https://github.com/scikit-hep/uproot4). Then, the DL1-Data-Handler (DL1DH; Kim
et al. 2020; https://github.com/cta-observatory/dl1-data-handler) assembles
several data levels from the standard approach and unifies them in a common
HDF5 data format designed for DL purposes. The training of the CNN-based
models and their prediction, the actual full-event reconstruction, are
performed with CTLearn (Nieto et al. 2019a; Brill et al. 2019;
https://github.com/ctlearn-project/ctlearn), a backend for IACT analysis using
TensorFlow.
Figure 1.: Diagram depicting the main analysis steps of the MAGIC DL analysis
with CTLearn.
## 2 DL analysis with the MAGIC telescopes
### Model selection
For this work, CTLearn’s Thin-ResNet (TRN) was selected based on previous
studies (Grespan et al. 2021; Miener et al. 2021). Stereoscopic information
is exploited by concatenating the images (integrated pixel charges and signal
arrival times) of MAGIC1 and MAGIC2 channel-wise before feeding the network.
We explored two different analysis schemes, training the same TRN model with
raw images, which contain, besides the Cherenkov light of the shower, also the
fluctuations of the Night Sky Background (NSB), and with cleaned images, where
pixels dominated by noise rather than Cherenkov light emission are set to
zero. The cleaning masks are obtained with the standard MARS cleaning. Since
the pixel layout of the MAGIC cameras is a hexagonal lattice, we mapped it to
a Cartesian lattice using bilinear interpolation to directly apply CNNs
(Nieto et al. 2019b); a toy version of this mapping is sketched below.
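The following is a sketch only, not the CTLearn implementation: it uses barycentric-linear interpolation via SciPy as a stand-in for the bilinear scheme of Nieto et al. (2019b), and the helper name is ours.

```python
import numpy as np
from scipy.interpolate import griddata

def hex_to_cartesian(pix_x, pix_y, charges, grid_size=64):
    """Resample charges at hexagonal pixel positions onto a square image."""
    xs = np.linspace(pix_x.min(), pix_x.max(), grid_size)
    ys = np.linspace(pix_y.min(), pix_y.max(), grid_size)
    gx, gy = np.meshgrid(xs, ys)
    return griddata((pix_x, pix_y), charges, (gx, gy),
                    method="linear", fill_value=0.0)
```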
### Validation on simulations
To evaluate the performance, common metrics such as ROC curves and energy and
angular resolution curves are used, applying the same quality cuts (see Fig.
2). The reconstruction performance is obtained using MC gamma simulations coming
uniformly from a $0.4^{\circ}$ offset of the telescope pointing (ringwobble).
To demonstrate the robustness of CNNs trained with cleaned images, we tested
all methods for the background rejection against MC proton simulations and
observational off data, where we do not expect any gamma-ray signal.
Figure 2.: The performance parameters are obtained using the MC gamma
simulations (ringwobble). _Left)_ ROC curves tested against MC proton
simulations and observational off data. _Center)_ Angular resolution vs.
reconstructed energy. _Right)_ Energy resolution vs. simulated energy.
### Results on observational data
In this work, 2.93 h of observations of the standard gamma-ray candle Crab
Nebula, taken on four different nights in 2018 under good weather conditions
at low zenith (zd < $35^{\circ}$), are considered. The data have been analyzed
with the latest MARS software using the standard settings, focusing on the
medium-energy (ME) and low-energy (LE) ranges. For CTLearn, we strictly
adopted the quality cuts from the MARS analysis. The ME analysis (> $250$ GeV)
applies the cuts: valid stereo reconstruction, $\theta^{2}$ < 0.009
$\text{deg}^{2}$, hadronness < 0.16 and both Hillas intensity sizes > 300 phe,
while the LE analysis (> $100$ GeV) applies the cuts: valid stereo
reconstruction, $\theta^{2}$ < 0.02 $\text{deg}^{2}$, hadronness < 0.28 and
both Hillas intensity sizes > 60 phe. To fairly compare the results obtained
with CNN-based models with the standard approach (random forest (RF) for the
background rejection, Look-Up tables (LUTs) for the energy estimation and RF
for bidimensional direction reconstruction), the hadronness cut is adjusted in
the CTLearn analysis to equalize the background (bkg) rates for the
corresponding standard MARS analyses (ME or LE). A source detection is
determined using a $\theta^{2}$ plot (see Fig. 3 for the CTLearn ME analysis
with cleaned images) and the significance (Sig. in Tab. 1) calculation (Eq. 17
in (Li & Ma 1983)). The main properties of all analyses are summarized in Tab.
1. The sensitivity (Sen. in Tab. 1) is computed as the strength of the source
that gives $\text{excess}/\sqrt{\text{background}}=5$ after 50 h; a sketch of
the significance calculation is given below.
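For reference, a minimal sketch of the Eq. 17 significance of Li & Ma (1983) quoted in Tab. 1 (our own helper, not the MARS implementation; here `alpha` is the on/off exposure ratio and `n_off` the unscaled off count, both bookkeeping assumptions on our part).

```python
import numpy as np

def li_ma_significance(n_on, n_off, alpha):
    """Eq. 17 of Li & Ma (1983); tables usually quote alpha * n_off as the
    background estimate in the on region."""
    term_on = n_on * np.log((1.0 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * np.log((1.0 + alpha) * n_off / (n_on + n_off))
    return np.sqrt(2.0 * (term_on + term_off))
```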
Analysis | $N_{on}$ | $N_{off}$ | $N_{ex}$ | $\gamma$ rate [/min] | bkg rate [/min] | Sen. [% Crab] | Sig. (Li&Ma)
---|---|---|---|---|---|---|---
MARS – ME | $819$ | $21.0\pm 2.6$ | $798.0\pm 28.7$ | $4.54\pm 0.16$ | $0.119\pm 0.015$ | $0.70\pm 0.05$ | $43.0\sigma$
CTLearn – ME (raw) | $629$ | $23.3\pm 3.1$ | $605.7\pm 25.3$ | $3.45\pm 0.14$ | $0.133\pm 0.018$ | $0.97\pm 0.08$ | $36.5\sigma$
CTLearn – ME (cleaned) | $844$ | $22.0\pm 2.7$ | $822.0\pm 29.2$ | $4.68\pm 0.17$ | $0.125\pm 0.015$ | $0.69\pm 0.05$ | $43.6\sigma$
MARS – LE | $3579$ | $679.0\pm 15.0$ | $2900.0\pm 61.7$ | $16.49\pm 0.35$ | $3.861\pm 0.086$ | $1.09\pm 0.03$ | $61.1\sigma$
CTLearn – LE (raw) | $2730$ | $673.7\pm 20.0$ | $2056.3\pm 56.0$ | $11.70\pm 0.32$ | $3.832\pm 0.114$ | $1.53\pm 0.05$ | $47.5\sigma$
CTLearn – LE (cleaned) | $3536$ | $680.7\pm 15.1$ | $2855.3\pm 61.3$ | $16.24\pm 0.35$ | $3.872\pm 0.086$ | $1.11\pm 0.03$ | $60.4\sigma$
Table 1.: Summary of all performed analyses of the same Crab Nebula sample.
Figure 3.: $\theta^{2}$ plot for the CTLearn ME analysis with cleaned images.
## 3 Conclusions and outlook
This contribution shows for the first time that CNN-based full-event
reconstruction works for the MAGIC telescopes and that CTLearn analyses are
capable of detecting the Crab Nebula with a clear signal. We demonstrate that
CNNs trained with cleaned images rather than raw images show a stronger
robustness when applied to observational data, and their performance already
matches the detection sensitivity of the conventional analysis on real data.
The performance of CNNs trained with raw images can be optimized by
pixel-wise tuning of the NSB noise of the MCs (Vuillaume et al. 2021) to match
the NSB level of each observation. The selected TRN model is relatively
shallow and further performance enhancements are foreseen by increasing the
model depth/complexity. Future work is planned in which the full performance
of CNNs under various observation conditions will be evaluated.
### Acknowledgments
We would like to thank the Instituto de Astrofísica de Canarias for the
excellent working conditions at the Observatorio del Roque de los Muchachos in
La Palma. The financial support of the German BMBF, MPG and HGF; the Italian
INFN and INAF; the Swiss National Fund SNF; the ERDF under the Spanish
Ministerio de Ciencia e Innovación (MICINN) (FPA2017-87859-P, FPA2017-
85668-P, FPA2017-82729-C6-5-R, FPA2017-90566-REDC, PID2019-104114RB-C31,
PID2019-104114RB-C32, PID2019- 105510GB-C31C42, PID2019- 107847RB-C44,
PID2019-107988GB-C22); the Indian Department of Atomic Energy; the Japanese
ICRR, the University of Tokyo, JSPS, and MEXT; the Bulgarian Ministry of
Education and Science, National RI Roadmap Project DO1-268/16.12.2019 and the
Academy of Finland grant nr. 317637 and 320045 are gratefully acknowledged.
This work was also supported by the Spanish Centro de Excelencia “Severo
Ochoa” SEV-2016- 0588, SEV-2017-0709 and CEX2019-000920-S, and "María de
Maeztu” CEX2019-000918-M, the Unidad de Excelencia “María de Maeztu”
MDM-2015-0509-18-2 and the "la Caixa" Foundation (fellowship
LCF/BQ/PI18/11630012), by the Croatian Science Foundation (HrZZ) Project
IP-2016-06-9782 and the University of Rijeka Project 13.12.1.3.02, by the DFG
Collaborative Research Centers SFB823/C4 and SFB876/C3, the Polish National
Research Centre grant UMO-2016/22/M/ST9/00382 and by the Brazilian MCTIC, CNPq
and FAPERJ. TM acknowledges support from PID2019-104114RB-C32. JLC and DN
acknowledges partial support from The European Science Cluster of Astronomy &
Particle Physics ESFRI Research Infrastructures funded by the European Union’s
Horizon 2020 research and innovation program under Grant Agreement no. 824064.
SY acknowledges financial support from Google LLC through the Google Summer of
Code 2020 program. We acknowledge the support of NVIDIA Corporation with the
donation of a Titan X Pascal GPU used for part of this research.
This paper has gone through internal review by the MAGIC Collaboration.
## References
* Aleksić et al. (2016) Aleksić, J., et al. 2016, Astroparticle Physics, 72, 76
* Brill et al. (2019) Brill, A., et al. 2019, CTLearn: Deep learning for imaging atmospheric Cherenkov telescopes event reconstruction. URL https://doi.org/10.5281/zenodo.3345947
* Grespan et al. (2021) Grespan, P., et al. 2021, in Proceedings of 37th International Cosmic Ray Conference — PoS(ICRC2021), vol. 395, 771
* Kim et al. (2020) Kim, B., et al. 2020, DL1-Data-Handler: DL1 HDF5 writer, reader, and processor for IACT data. URL https://doi.org/10.5281/zenodo.3979698
* Li & Ma (1983) Li, T. P., & Ma, Y. Q. 1983, ApJ, 272, 317
* Miener et al. (2021) Miener, T., et al. 2021, in Proceedings of 37th International Cosmic Ray Conference — PoS(ICRC2021), vol. 395, 730
* Nieto et al. (2019a) Nieto, D., et al. 2019a, in Proceedings of 36th International Cosmic Ray Conference — PoS(ICRC2019), vol. 358, 752
* Nieto et al. (2019b) — 2019b, in Proceedings of 36th International Cosmic Ray Conference — PoS(ICRC2019), vol. 358, 753
* Pivarski et al. (2021) Pivarski, J., et al. 2021, scikit-hep/uproot4: 4.1.4. URL https://doi.org/10.5281/zenodo.5567737
* Vuillaume et al. (2021) Vuillaume, T., et al. 2021, in Proceedings of 37th International Cosmic Ray Conference — PoS(ICRC2021), vol. 395, 703
  * Zanin et al. (2013) Zanin, R., et al. 2013, in Proceedings of 33rd International Cosmic Ray Conference — PoS(ICRC2013), 773
# CLT for real $\beta$-ensembles at high temperature
This project has received funding from the European Research Council (ERC)
under the European Union Horizon 2020 research and innovation program (grant
agreement No. 884584).
Charlie Dworaczek Guera (Université de Lyon, ENSL, CNRS, France;
email<EMAIL_ADDRESS>) and Ronan Memin (Université de Lyon, ENSL, CNRS,
France; email<EMAIL_ADDRESS>)
###### Abstract
We establish a central limit theorem for the fluctuations of the empirical
measure in the $\beta$-ensemble of dimension $N$ at a temperature proportional
to $N$ and with confining smooth potential. The space of test functions for
which the CLT holds includes $\mathcal{C}^{1}$ functions vanishing at
infinity. It is obtained by inverting an operator which is a
perturbation of a Sturm-Liouville operator. The method that we use is based on
a change of variables introduced in [BFG15] and in [Shc14].
###### Contents
  1. Introduction and main result
  2. Regularity of the equilibrium measure and Hilbert transform
  3. Concentration inequality, proof of Theorem 1.5
  4. Localization of the edge of a configuration
  5. Laplace transform for smooth test functions, proof of Theorem 1.3
  6. Inversion of $\mathcal{L}$
  7. Regularity of the inverse of $\mathcal{L}$ and completion of the proof of Theorem 1.3
  A. Appendix: proof of Theorem 6.2
## 1 Introduction and main result
The $\beta$-ensemble of dimension $N\geqslant 1$ with parameter $\beta>0$ and
potential $V$ is the probability measure on ${\mathbb{R}}^{N}$ given by
$d\mathbf{P}^{\beta,V}_{N}(x_{1},\ldots,x_{N})=\frac{1}{Z_{N}(V,\beta)}\prod_{i<j}|x_{i}-x_{j}|^{\beta}e^{-\sum_{i=1}^{N}V(x_{i})}dx_{1}\ldots
dx_{N}\,.$ (1)
The potential $V$ has to be chosen so that the partition function
$Z_{N}(V,\beta)=\int_{{\mathbb{R}}^{N}}\prod_{i<j}|x_{i}-x_{j}|^{\beta}e^{-\sum_{i=1}^{N}V(x_{i})}dx_{1}\ldots
dx_{N}$
is finite. This is the case for example if for some
$\beta^{\prime}>\max(1,\beta)$,
$\liminf_{|x|\to\infty}\frac{V(x)}{N\beta^{\prime}\ln|x|}>1\,,$ (2)
see [AGZ10, equation (2.6.2)]. The parameter $\beta$, which is allowed to
depend on $N$, is the so-called inverse temperature.
Under the special choice of $V_{G}(x)=\dfrac{x^{2}}{2}$, the measure (1) can
be seen as the joint law of the (unordered) eigenvalues of certain matrix
models:
* •
For $\beta=1$ (resp. $\beta=2$), it is the law of the eigenvalues of the
Gaussian Orthogonal Ensemble (resp. Gaussian Unitary Ensemble), see
[AGZ10, Theorem 2.5.2].
* •
For general $\beta>0$, potentially depending on $N$, it is the law of the
spectrum of certain tridiagonal random matrices, as shown by Dumitriu and
Edelman in [DE02]; a numerical sketch based on this representation follows below.
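As an illustration of this representation, here is a minimal numerical sketch (our own, not taken from [DE02]): a Python helper, with a name of our choosing, that samples the Gaussian $\beta$-ensemble (1) with $V_{G}(x)=x^{2}/2$ and $\beta=2P/N$ from the tridiagonal model, assuming the standard Dumitriu–Edelman normalization.

```python
import numpy as np

def sample_beta_ensemble_tridiag(N, P, rng):
    """Sample eigenvalues of the Dumitriu-Edelman tridiagonal model, whose
    spectrum is distributed as (1) with V(x) = x^2/2 and beta = 2P/N."""
    beta = 2.0 * P / N
    diag = rng.normal(0.0, 1.0, size=N)      # N(0,2)/sqrt(2) diagonal entries
    dof = beta * np.arange(N - 1, 0, -1)     # chi degrees of freedom beta*(N-k)
    off = np.sqrt(rng.chisquare(dof)) / np.sqrt(2.0)
    T = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(T)

rng = np.random.default_rng(0)
eigs = sample_beta_ensemble_tridiag(N=2000, P=1.0, rng=rng)
# A histogram of `eigs` approximates the crossover density rho_P^{V_G} of [ABG12].
```

For large $N$ one may instead pass the two bands to `scipy.linalg.eigh_tridiagonal` to exploit the banded structure of $T$.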
We consider here the high temperature regime where $\beta$ scales as $1/N$,
and write $\beta=\frac{2P}{N}$ for some $P>0$. The corresponding measure is
therefore
$d\mathbb{P}^{V,P}_{N}(x_{1},\ldots,x_{N})=\frac{1}{Z_{N}^{V,P}}\prod_{i<j}|x_{i}-x_{j}|^{\frac{2P}{N}}e^{-\sum_{i=1}^{N}V(x_{i})}dx_{1}\ldots
dx_{N}\,,$ (3)
with partition function
$Z_{N}^{V,P}=\int_{{\mathbb{R}}^{N}}\prod_{i<j}|x_{i}-x_{j}|^{\frac{2P}{N}}e^{-\sum_{i=1}^{N}V(x_{i})}dx_{1}\ldots
dx_{N}\,.$ (4)
It was shown in [GZ19] that under $\mathbb{P}^{V,P}_{N}$, the sequence of
empirical measures
$\hat{\mu}_{N}=\frac{1}{N}\sum_{i=1}^{N}\delta_{x_{i}}$
satisfies a large deviation principle at speed $N$ with strictly convex, good
rate function. As a consequence, $\hat{\mu}_{N}$ converges almost surely in
distribution towards a deterministic measure $\mu^{V}_{P}$ as $N$ goes to
infinity, meaning that almost surely, for every bounded continuous
$f:{\mathbb{R}}\to{\mathbb{R}}$,
$\int_{\mathbb{R}}fd\hat{\mu}_{N}\underset{N\rightarrow\infty}{\longrightarrow}\int_{\mathbb{R}}fd\mu^{V}_{P}\,.$
The limiting measure $\mu^{V}_{P}$ can be seen to have a density
$\rho^{V}_{P}$ which satisfies for almost every $x\in{\mathbb{R}}$
$V(x)-2P\int_{\mathbb{R}}\ln|x-y|\rho^{V}_{P}(y)dy+\ln\rho^{V}_{P}(x)=\lambda^{V}_{P}\,,$
(5)
where $\lambda^{V}_{P}$ is constant (see [GM22, Lemma 3.2] for example).
The $\beta$-ensemble in the regime $\beta
N\underset{N\rightarrow\infty}{\longrightarrow}2P>0$ has drawn a lot of
attention from the random matrix and statistical physics communities lately.
This regime was first considered by [CL97] with the study of Dyson Brownian
motion with vanishing repulsive coefficient scaled like $\dfrac{1}{N}$. Gases
of vortices were also studied with temperature proportional to $N$ in [BG99].
The limiting density was then described in the case of the quadratic potential
in [ABG12], as a crossover between the Wigner semicircle law (fixed $\beta>0$
case) and the Gaussian density (case $\beta=0$). The fluctuations of the
eigenvalues in the bulk and at the edge of a configuration were studied for
example in [BGP15],[NT18],[NT20],[Pak18], [Lam21]. These fluctuations were
shown to be described by Poisson statistics in this regime. Recently, Spohn
uncovered in [Spo20] a link between the study of the Classical Toda chain and
the $\beta$-ensemble in the high temperature regime, showing that the limiting
density of states of the classical Toda chain, distributed according to the
generalized Gibbs ensemble with polynomial potential, can be computed by means
of the limiting empirical measure of the $\beta$-ensemble at high temperature.
In [Maz22], the author established this relation using the matrix
representation of the $\beta$-ensemble and a moment method, and in [GM22] the
authors proved a large deviation principle for the empirical measure of the
Toda chain, establishing the previous result for potentials with polynomial
growth. See also [Spo22],[GM23],[MM22] for a similar link between the
Ablowitz-Ladik lattice and the circular $\beta$-ensemble at high temperature.
This relation can be further pushed to compute the limiting currents of the
Toda chain through the central limit theorem for the empirical measure in the
$\beta$ ensemble. The computation of these currents is a crucial step to the
derivation of a hydrodynamic equation for the Toda chain, and to the analysis
of the correlations of the locally conserved quantities at equilibrium through
linearized hydrodynamics, see [Spo21].
The Central Limit Theorem for the fluctuations of the linear statistics of
$\beta$-ensembles was first established by [Joh98] for $\beta=2$ polynomial
potential, then generalized and further developed in the regime where $\beta$
is fixed in [Shc13], [BG13a], [BG13b],[BLS18],[LLW19]. Also an optimal local
law was found in this regime in [BMP22]. The CLT was obtained in the high-
temperature regime $\beta N\to 2P>0$ by Nakano and Trinh in [NT18, Theorem
4.9] for quadratic $V$, relying on the tridiagonal representation for the
$\beta$-ensemble with quadratic potential in [DE02]. In [HL21], the authors
prove the CLT in the case of the circular $\beta$-ensemble at high temperature
with general potential, using a normal approximation method involving the
spectral analysis of an operator associated to the limiting covariance
structure. Their method allowed them to derive a Berry-Esseen bound, i.e. a
speed of convergence of the fluctuations towards a Gaussian variable.
In this paper, we adapt part of the arguments of [HL21] to our setup. More
precisely, we show that for a class of regular, convex potentials $V$
satisfying a growth condition of the type
$\lim_{|x|\to\infty}\frac{V^{\prime\prime}(x)}{V^{\prime}(x)^{2}}=0\,,$
denoting $\nu_{N}=\hat{\mu}_{N}-\mu^{V}_{P}$ and considering test functions
$f$ belonging to the range of a certain integro-differential operator, the
scaled fluctuations of $\hat{\mu}_{N}$, defined by
$\sqrt{N}\nu_{N}(f):=\sqrt{N}\left(\int_{\mathbb{R}}fd\hat{\mu}_{N}-\int_{\mathbb{R}}fd\mu^{V}_{P}\right)\,,$
converge in law towards a centered Gaussian law with variance depending on $f$.
When considering the fixed temperature regime, i.e. $\beta$ fixed, one has to
renormalize the $x_{i}$’s by $\sqrt{N}$. It is shown in [AGZ10, Theorem 2.6.1]
that the measure
$\frac{1}{N}\sum_{i=1}^{N}\delta_{x_{i}/\sqrt{N}}$
satisfies a large deviation principle, and the limiting measure is
characterized in [AGZ10, Lemma 2.6.2] by an equation similar to (5). In fact,
the term $\ln\rho^{V}_{P}$ in the left-hand side of (5) is the only difference
in the equation characterizing the limiting measure in the fixed $\beta$ case.
We point out the very similar characterization of the equilibrium measure
corresponding to the minimization problem arising in [BGK16]. There again, the
limiting measure is compactly supported. The term $\ln\rho^{V}_{P}$ is of
prime importance because its presence implies that the support of
$\rho_{P}^{V}$ is the whole real line. It leads to technicalities to deal with
the behavior at infinity of most of the associated objects, namely dealing
with weighted Lebesgue spaces $L^{2}(\mu_{P}^{V})$ and the corresponding
Sobolev spaces $H^{k}(\mu_{P}^{V})$.
Our strategy is based on a change of variables in the partition function
$Z^{V,P}_{N}$ (4), used for the $\beta$-ensemble at fixed temperature
introduced in [BFG15] and [Shc14], and used in [Gui19] and in [BGK16] to
derive the loop equations and in [BLS18] to derive a CLT in the
$\beta$-ensemble with $\beta$ fixed. The outline of the argument goes as
follows: Take $\phi:{\mathbb{R}}\to{\mathbb{R}}$ smooth, vanishing fast enough
at infinity, and do the change of variables in $Z_{N}^{V,P}$,
$x_{i}=y_{i}+\frac{t}{\sqrt{N}}\phi(y_{i})$, $1\leqslant i\leqslant N$, to get
$Z_{N}^{V,P}=\int_{{\mathbb{R}}^{N}}\prod_{i<j}\left|y_{i}-y_{j}+\frac{t}{\sqrt{N}}(\phi(y_{i})-\phi(y_{j}))\right|^{2P/N}e^{-\sum_{i=1}^{N}V\left(y_{i}+\frac{t}{\sqrt{N}}\phi(y_{i})\right)}\prod_{i=1}^{N}\left(1+\frac{t}{\sqrt{N}}\phi^{\prime}(y_{i})\right)d^{N}\mathbf{y}\,.$
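Term by term, the expansion behind the next display reads as follows (a sketch; we only keep the orders in $t/\sqrt{N}$ that contribute in the limit):

```latex
\ln\Big|y_i-y_j+\tfrac{t}{\sqrt{N}}\big(\phi(y_i)-\phi(y_j)\big)\Big|
 =\ln|y_i-y_j|+\frac{t}{\sqrt{N}}\,\frac{\phi(y_i)-\phi(y_j)}{y_i-y_j}
  -\frac{t^{2}}{2N}\Big(\frac{\phi(y_i)-\phi(y_j)}{y_i-y_j}\Big)^{2}+\dots
% potential term:
V\Big(y_i+\tfrac{t}{\sqrt{N}}\phi(y_i)\Big)
 =V(y_i)+\frac{t}{\sqrt{N}}\,V'(y_i)\phi(y_i)+\frac{t^{2}}{2N}\,V''(y_i)\phi(y_i)^{2}+\dots
% Jacobian term:
\ln\Big(1+\tfrac{t}{\sqrt{N}}\phi'(y_i)\Big)
 =\frac{t}{\sqrt{N}}\,\phi'(y_i)-\frac{t^{2}}{2N}\,\phi'(y_i)^{2}+\dots
```

The order-$t/\sqrt{N}$ terms assemble into the exponent displayed below, while the order-$t^{2}/N$ terms, summed over $i$ (and over $i<j$ with the $2P/N$ weight), produce the factor $e^{-\frac{t^{2}}{2}\sigma^{2}_{N}(\phi)}$; compare the limiting variance $q_{P}(\phi)$ in (40).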
Combining these expansions in the integral, one gets
$\displaystyle
Z_{N}^{V,P}=\int_{{\mathbb{R}}^{N}}\prod_{i<j}|y_{i}-y_{j}|^{\frac{2P}{N}}e^{-\sum_{i=1}^{N}V(y_{i})}e^{\frac{t}{\sqrt{N}}\left[\frac{2P}{N}\sum_{i<j}\frac{\phi(y_{i})-\phi(y_{j})}{y_{i}-y_{j}}+\sum_{i=1}^{N}\left(\phi^{\prime}(y_{i})-V^{\prime}(y_{i})\phi(y_{i})\right)\right]}e^{-\frac{t^{2}}{2}\sigma^{2}_{N}(\phi)}d^{N}\mathbf{y}\,,$
where the term $\sigma_{N}^{2}(\phi)$ converges towards a limiting variance
$\sigma^{2}(\phi)$ depending on $\phi$, $P$ and $V$. After dividing both parts
of the equation by $Z_{N}^{V,P}$, and because of equation (5) characterizing
$\mu^{V}_{P}$, one can deduce from the last equation the convergence of the
Laplace transform
$\mathbb{E}\left[e^{t\sqrt{N}\left(\nu_{N}(\Xi\phi)+\text{error
term}\right)}\right]\underset{N\rightarrow\infty}{\longrightarrow}\exp\Big{(}\frac{t^{2}}{2}\sigma^{2}(\phi)\Big{)}\,,$
(6)
where $\Xi$ is a linear operator acting on test functions and defined by
$(\Xi\phi)(x)=2P\int_{\mathbb{R}}\frac{\phi(x)-\phi(y)}{x-y}d\mu^{V}_{P}(y)+\phi^{\prime}(x)-V^{\prime}(x)\phi(x)\,.$
(7)
Once the error term is taken care of, (6) shows the central limit theorem for
test functions of the form $\Xi\phi$. Following [HL21], the operator
$\mathcal{L}$ given by
$\mathcal{L}\phi=\Xi\phi^{\prime}$ (8)
can be analyzed using Hilbert space techniques. In particular, the operator
$\mathcal{L}$, seen as an unbounded operator of the Hilbert space
$\mathsf{H}=\left\\{u\in L^{2}(\mu^{V}_{P})\ \Big{|}\ u^{\prime}\in
L^{2}(\mu^{V}_{P}),\int_{\mathbb{R}}u\rho^{V}_{P}dx=0\right\\},\qquad\Braket{u,v}_{\mathsf{H}}=\Braket{u^{\prime},v^{\prime}}_{L^{2}(\rho^{V}_{P})}\,,$
can be decomposed as
$-\mathcal{L}=\mathcal{A}+2P\mathcal{W}\,,$
where $\mathcal{A}$ is a positive Sturm-Liouville operator and $\mathcal{W}$
is positive and self-adjoint. Such a decomposition allows us to show that
$-\mathcal{L}$ is invertible, see Theorem 6.7.
We now state the assumptions we make on the potential $V$. Recall that a
probability measure $\mu$ supported on ${\mathbb{R}}$ satisfies the Poincaré
inequality if there exists $C>0$ such that for all
$f\in\mathcal{C}^{1}({\mathbb{R}})$ with compact support:
$\mathrm{Var}_{\mu}(f):=\int_{\mathbb{R}}\left(f-\int_{\mathbb{R}}fd\mu\right)^{2}d\mu\leqslant
C\int f^{\prime 2}d\mu\,.$ (9)
###### Assumptions 1.1.
The potential $V$ satisfies:
* i)
$V\in\mathcal{C}^{3}({\mathbb{R}})$,
$V(x)\underset{|x|\rightarrow+\infty}{\longrightarrow}+\infty$,
$|V^{\prime}(x)|\underset{|x|\rightarrow+\infty}{\longrightarrow}+\infty$ and
is such that $\mu_{P}^{V}$ satisfies the Poincaré inequality (9).
* ii)
For all polynomials $Q\in{\mathbb{R}}[X]$ and all $\alpha>0$,
$Q\big{(}V^{\prime}(x)\big{)}e^{-V(x)}=\underset{|x|\to\infty}{o}(x^{-\alpha})$.
* iii)
Furthermore, for any sequence $x_{N}$ such that $|x_{N}|$ goes to infinity,
and for all real $a<b$, we have, as $N$ goes to infinity,
$\frac{1}{V^{\prime}(x_{N})^{2}}\sup_{a\leqslant x\leqslant
b}|V^{\prime\prime}(x_{N}+x)|\underset{N\rightarrow\infty}{\longrightarrow}0\,.$
* iv)
The function $\dfrac{1}{V^{\prime 2}}$ is integrable at infinity.
$\dfrac{V^{\prime\prime}(x)}{V^{\prime}(x)}=\underset{|x|\rightarrow\infty}{O}(1)$
and $\dfrac{V^{(3)}(x)}{V^{\prime}(x)}=\underset{|x|\rightarrow\infty}{O}(1)$.
If $V=V_{\mathrm{conv}}+\phi$ with
$V_{\mathrm{conv}},\,\phi\in\mathcal{C}^{3}({\mathbb{R}})$ such that
$\phi^{(k)}$ is bounded for $k=0,\dots,3$, $V_{\mathrm{conv}}$ is convex with
$|V_{\mathrm{conv}}^{\prime}|\to+\infty$ at infinity and satisfies hypotheses
ii), iii) and iv), and there exists $\varepsilon>0$ such that
$V_{\mathrm{conv}}-2Pf_{\varepsilon}$ is convex (see Lemma 2.5), then $V$
satisfies Assumptions 1.1.
Because i) implies that $V$ goes to infinity faster than linearly, we will see
that it ensures exponential decay at infinity of $\rho^{V}_{P}$. Recalling the
sufficient condition (2) for $\mathbb{P}^{V,P}_{N}$ to be well defined,
this first assumption implies that there exists $\alpha>0$ such that
$\liminf_{|x|\to\infty}\frac{V(x)}{|x|}>\alpha$. This guarantees in particular
that the $\beta$-ensemble (3) is well-defined for all $N\geqslant 1$ and
$P\geqslant 0$. We will use the fact that $\mu_{P}^{V}$ satisfies the Poincaré
inequality to ensure that $\mathsf{H}$ endowed with
$\braket{\cdot,\cdot}_{\mathsf{H}}$ is a Hilbert space.
The second assumption ensures that any power of $V^{\prime}$ and
$V^{\prime\prime}$ is in $L^{2}(\mu_{P}^{V})$, and that $\rho_{P}^{V}$, which
behaves like $e^{-V}$ up to a sub-exponential factor, belongs to the Sobolev
space $H^{2}({\mathbb{R}})\subset\mathcal{C}^{1}({\mathbb{R}})$. Indeed, for
$k\leqslant 2$, using iv), $(\rho_{P}^{V})^{(k)}$ behaves at infinity like
$(V^{\prime})^{k}\rho_{P}^{V}$, as shown in Lemma 2.2, which is in
$L^{2}({\mathbb{R}})$ by assumption ii).
Assumption iii) will be used to localize the minimum/maximum point of a
typical configuration $(x_{1},\ldots,x_{N})$ following the law
$\mathbb{P}^{V,P}_{N}$: this will be done in Corollary 4.2, which comes as a
consequence of [Lam21, Theorem 3.4]. More precisely, Corollary 4.2 establishes
that for sequences $(\alpha_{N}^{+})_{N},(\alpha_{N}^{-})_{N}$ going to
infinity, the random variables
$\alpha_{N}^{+}\left(\max_{1\leqslant j\leqslant
N}x_{j}-E_{N}^{+}\right)\qquad\text{and
}\qquad\alpha_{N}^{-}\left(\min_{1\leqslant j\leqslant
N}x_{j}-E_{N}^{-}\right)$
converge in distribution. For large $N$, the scalars $E_{N}^{+}$ and
$E_{N}^{-}$ can thus be seen as the edges of a typical configuration.
Furthermore,
$V(E_{N}^{\pm})\sim\ln N\,.$ (10)
We refer to Section 4 for detailed statements. The final step in the proof of
Theorem 1.3 consists in lifting the result of Proposition 5.1 from compactly
supported functions to more general functions.
We use Assumption iv) to control integral remainders in the proof of Theorem
7.1, ensuring that $\mathcal{L}^{-1}$ is regular enough, i.e., that for
sufficiently smooth functions $f$,
$\Big{(}\mathcal{L}^{-1}f\Big{)}^{\prime}\in H^{2}({\mathbb{R}})$.
We will need another technical assumption to ensure that Taylor remainders
arising in the proof of Theorem 5.2 are negligible.
###### Assumption 1.2.
With the notations of Theorem 4.1, we have
$\sup_{d(x,I_{N})\leqslant 1}\left|V^{(3)}(x)\right|=o(N^{1/2})\,,$
where $I_{N}=\left[E_{N}^{-}-2;E_{N}^{+}+2\right]$.
Again, if $V=V_{\mathrm{conv}}+\phi$ with
$V_{\mathrm{conv}},\,\phi\in\mathcal{C}^{3}({\mathbb{R}})$ such that
$\phi^{(k)}$ is bounded for $k=0,\dots,3$, $V_{\mathrm{conv}}$ is convex with
$|V_{\mathrm{conv}}^{\prime}|\to+\infty$ at infinity and satisfies hypotheses
ii), iii), iv) and Assumption 1.2, and there exists $\varepsilon>0$ such that
$V_{\mathrm{conv}}-2Pf_{\varepsilon}$ is convex (see Lemma 2.5), then $V$
satisfies Assumptions 1.1 and Assumption 1.2.
This last hypothesis is satisfied whenever $V_{\mathrm{conv}}$ is sufficiently
convex on a compact set centered at $0$; it is designed so that
$V_{\mathrm{conv}}$ compensates the small lack of convexity, in the bulk, of a
function behaving like $-\ln|x|$ at infinity (note that assuming
$V^{\prime\prime}\geqslant\alpha$ for some $\alpha>0$ is sufficient). This is
the reason why we introduce the functions $f_{\varepsilon}$, which have the
required growth at infinity and a second derivative as small as desired. The
main point that needs to be checked is that the measure $\mu^{V}_{P}$
satisfies the Poincaré inequality; this will be done in Proposition 2.6.
Typical examples of potentials $V_{\mathrm{conv}}$ one can consider are convex
polynomials or $\cosh(\alpha x)$. On the other hand, a potential like
$e^{x^{2}}$ has a derivative growing too fast at infinity and therefore does
not satisfy assumptions iii) and iv).
We are now able to state the main result, i.e., the central limit theorem for
functions belonging to the image of the operator $\mathcal{L}$ introduced in
(8).
###### Theorem 1.3.
Assume that $V$ satisfies Assumptions 1.1 and Assumption 1.2. Then for $\phi$
verifying the following conditions:
* •
$\phi\in\mathcal{C}^{1}({\mathbb{R}})$
* •
there exists $\varepsilon>0$ such that
$\phi(x)=\underset{|x|\rightarrow\infty}{O}(x^{-\frac{1}{2}-\varepsilon})$ and
$\phi^{\prime}(x)=\underset{|x|\rightarrow\infty}{O}(x^{\frac{1}{2}-\varepsilon})$
at infinity
* •
$\displaystyle\int_{\mathbb{R}}\phi(x)d\mu_{P}^{V}(x)=0$
we have the convergence in law
$\sqrt{N}\nu_{N}(\phi)\to\mathcal{N}\Bigg{(}0,(\sigma^{V}_{P})^{2}(\phi)\Bigg{)}$
(11)
where the limiting variance $(\sigma^{V}_{P})^{2}(\phi)$ is given by
$(\sigma^{V}_{P})^{2}(\phi)=\braket{\phi,\mathcal{L}^{-1}\phi}_{\mathsf{H}}=\int_{\mathbb{R}}\Bigg{(}\big{(}\mathcal{L}^{-1}\phi\big{)}^{\prime\prime}(x)^{2}+V^{\prime\prime}(x)\big{(}\mathcal{L}^{-1}\phi\big{)}^{\prime}(x)^{2}\Bigg{)}d\mu^{V}_{P}(x)\\\
+P\iint_{{\mathbb{R}}^{2}}\Bigg{(}\frac{\big{(}\mathcal{L}^{-1}\phi\big{)}^{\prime}(x)-\big{(}\mathcal{L}^{-1}\phi\big{)}^{\prime}(y)}{x-y}\Bigg{)}^{2}d\mu^{V}_{P}(x)d\mu^{V}_{P}(y)\,.$
(12)
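As an informal numerical companion to Theorem 1.3, the following Monte Carlo sketch (ours; it assumes the Gaussian potential $V(x)=x^{2}/2$ and reuses the sampler `sample_beta_ensemble_tridiag` sketched in Section 1) estimates the law of $\sqrt{N}\nu_{N}(\phi)$ for a test function satisfying the hypotheses above; the empirical recentering replaces the exact subtraction of $\int\phi\,d\mu_{P}^{V}$, which is legitimate by Remark 1.4 below.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P, reps = 1000, 1.0, 200
phi = lambda x: x / (1.0 + x**2)   # C^1, O(1/x) at infinity, bounded derivative
stats = np.array([
    np.sqrt(N) * phi(sample_beta_ensemble_tridiag(N, P, rng)).mean()
    for _ in range(reps)
])
stats -= stats.mean()              # empirical recentering (cf. Remark 1.4)
print("empirical std of sqrt(N)*nu_N(phi):", stats.std())
# For large N and reps, a histogram of `stats` should be approximately
# Gaussian with variance close to (sigma_P^V)^2(phi).
```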
###### Remark 1.4.
Since $\nu_{N}(\phi+c)=\nu_{N}(\phi)$ for all constant $c\in{\mathbb{R}}$, the
assumption $\displaystyle\int_{\mathbb{R}}\phi(x)d\mu^{V}_{P}=0$ can be
dropped by replacing $\phi$ by
$\phi-\displaystyle\int_{\mathbb{R}}\phi(x)d\mu^{V}_{P}$ in the expression of
the limiting variance.
As a tool to deal with the error term of equation (6), we establish a
concentration inequality for the empirical measure. This inequality is stated
in terms of the following distance over the set of probability distributions
$\mathcal{P}({\mathbb{R}})$.
For $\mu,\mu^{\prime}\in\mathcal{P}({\mathbb{R}})$ we define the distance
$d(\mu,\mu^{\prime})=\sup_{\begin{subarray}{c}\|f\|_{\text{Lip}}\leqslant 1\\\
\|f\|_{1/2}\leqslant 1\end{subarray}}\left\\{\left|\int fd\mu-\int
fd\mu^{\prime}\right|\right\\}\,,$ (13)
where $\|f\|_{\text{Lip}}$ denotes the Lipschitz constant of $f$, and
$\|f\|_{1/2}^{2}=\displaystyle\int_{\mathbb{R}}|t|\left|\mathcal{F}[f](t)\right|^{2}dt$,
where $\mathcal{F}$ denotes the Fourier transform on $L^{2}({\mathbb{R}})$,
given by $\mathcal{F}[f](t)=\displaystyle\int_{\mathbb{R}}f(x)e^{-\mathrm{i}tx}dx$
for $f\in L^{1}({\mathbb{R}})\cap L^{2}({\mathbb{R}})$.
We then have
###### Theorem 1.5.
There exists $K\in{\mathbb{R}}$ (depending on $P$ and on $V$), such that for
any $N\geqslant 1$ and $r>0$,
$\mathbb{P}^{V,P}_{N}\left(d(\hat{\mu}_{N},\mu^{V}_{P})>r\right)\leqslant
e^{-Nr^{2}\frac{P\pi^{2}}{2}+5P\ln N+K}\,.$ (14)
This result is the analog of [HL21, Theorem 1.4].
The paper is organized as follows. In Section 2 we discuss the regularity of
the equilibrium density $\rho^{V}_{P}$ under Assumption 1.1. In Section 3 we
prove Theorem 1.5. Section 4 is dedicated to the localization of the edge of a
typical configuration, mentioned in the discussion preceding the statement of
Assumption 1.2. We next prove in Section 5 the convergence of the Laplace
transform of $\sqrt{N}\nu_{N}(\mathcal{L}\phi)$ for general functions $\phi$
which establishes Theorem 1.3 for functions of the form $\mathcal{L}\phi$.
Section 6 is dedicated to the diagonalization and inversion of $\mathcal{L}$
given by (8). In Section 7, we show regularity properties of
$\mathcal{L}^{-1}$ to establish Theorem 1.3. We detail in Appendix A elements
of proof for the spectral theory of Schrödinger operators, used in Section 6.
Acknowledgments. The authors wish to thank Alice Guionnet and Karol Kozlowski
for their helpful suggestions. We also thank Arnaud Debussche for pointing out
the link with Schrödinger operators theory and Gautier Lambert for pointing
out [Lam21]. We would also like to thank Jeanne Boursier, Corentin Le Bihan
and Jules Pitcho for their intuition about the regularity of the inverse
operator. We would like to thank Jean-Christophe Mourrat for telling us about
a more general framework for Poincaré inequalities.
## 2 Regularity of the equilibrium measure and Hilbert transform
In this section, we discuss the regularity properties of the equilibrium
density $\rho^{V}_{P}$, namely its decay at infinity and its smoothness, and
give formulas for its two first derivatives.
The Hilbert transform, whose definition we recall, plays a central role in the
analysis of the equilibrium measure. It is first defined on the Schwartz class
through $\forall\phi\in\mathcal{S}({\mathbb{R}}),\,\forall
x\in{\mathbb{R}},\;$
$\mathcal{H}[\phi](x):=\fint_{\mathbb{R}}\dfrac{\phi(t)}{t-x}dt=\lim_{\varepsilon\downarrow
0}\int_{|t-x|>\varepsilon}\dfrac{\phi(t)}{t-x}dt=\int_{0}^{+\infty}\dfrac{\phi(x+t)-\phi(x-t)}{t}dt,$
(15)
where $\displaystyle\fint$ denotes the Cauchy principal value integral, and
then extended to $L^{2}({\mathbb{R}})$ thanks to property ii) of Lemma 2.1:
$\|f\|_{L^{2}(dx)}=\dfrac{1}{\pi}\|\mathcal{H}[f]\|_{L^{2}(dx)}$. The last
expression in (15) is a definition where the integral converges in the
classical sense. We also recall the definition of the logarithmic potential
$U^{f}$ of a density of probability $f:{\mathbb{R}}\to{\mathbb{R}}$, given for
$x\in{\mathbb{R}}$ by
$U^{f}(x)=-\int_{\mathbb{R}}\ln|x-y|f(y)dy\,.$ (16)
Because we assume $f\in L^{1}({\mathbb{R}})$ to be nonnegative, $U^{f}$ takes
values in $[-\infty,+\infty)$. If $f$ integrates the function $\ln$, i.e., if
$\int_{\mathbb{R}}\ln|x|f(x)dx<+\infty$, then $U^{f}$ takes real values.
Additionally, one can check that the logarithmic potential and the Hilbert
transform of $f$ are linked through the distributional identity
$\big{(}U^{f}\big{)}^{\prime}=\mathcal{H}[f]$.
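As a sanity check on these sign conventions, here is a worked example (ours, not taken from the references): for the indicator function of an interval $[a,b]$,

```latex
\mathcal{H}\big[\mathbf{1}_{[a,b]}\big](x)
  =\fint_{a}^{b}\frac{dt}{t-x}
  =\ln\left|\frac{b-x}{a-x}\right|,\qquad x\notin\{a,b\},
```

which is indeed the derivative of $U^{\mathbf{1}_{[a,b]}}(x)=-\int_{a}^{b}\ln|x-y|dy$, in accordance with the identity $\big{(}U^{f}\big{)}^{\prime}=\mathcal{H}[f]$.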
We recall in the next lemma some properties of the Hilbert transform that we
will use in the rest of the paper.
###### Lemma 2.1 (Properties of the Hilbert transform).
* i)
Fourier transform: For all $\phi\in L^{2}({\mathbb{R}})$,
$\mathcal{F}\Big{[}\mathcal{H}[\phi]\Big{]}(\omega)=\mathrm{i}\pi\text{sgn}(\omega)\mathcal{F}[\phi](\omega)$
for all $\omega\in{\mathbb{R}}$.
* ii)
As a consequence, $\dfrac{1}{\pi}\mathcal{H}$ is an isometry of
$L^{2}({\mathbb{R}})$, and $\mathcal{H}$ satisfies on $L^{2}({\mathbb{R}})$
the identity $\mathcal{H}^{2}=-\pi^{2}I$.
* iii)
Derivative: For any $f\in H^{1}({\mathbb{R}})$, $\mathcal{H}[f]$ is also
$H^{1}({\mathbb{R}})$ and $\mathcal{H}[f]^{\prime}=\mathcal{H}[f^{\prime}]$.
* iv)
For all $p>1$, the Hilbert transform can be extended as a bounded operator
$\mathcal{H}:L^{p}({\mathbb{R}})\to L^{p}({\mathbb{R}})$.
* v)
Skew-self adjointness: For any $f,g\in L^{2}({\mathbb{R}})$,
$\Braket{\mathcal{H}[f],g}_{L^{2}({\mathbb{R}})}=-\langle
f,\mathcal{H}[g]\rangle_{L^{2}({\mathbb{R}})}$.
###### Proof.
We refer to [Kin09] for the proofs of these properties. ∎
As a consequence of [GZ19], $\hat{\mu}_{N}$ converges almost surely under
$\mathbb{P}^{V,P}_{N}$ towards the unique minimizer of the energy-functional
$\mathcal{E}_{P}^{V}$, defined for $\mu\in\mathcal{P}({\mathbb{R}})$ by
$\mathcal{E}_{P}^{V}(\mu)=\begin{cases}\displaystyle\int_{\mathbb{R}}\Big{[}V+\ln\Big{(}\dfrac{d\mu}{dx}\Big{)}\Big{]}d\mu-P\iint_{{\mathbb{R}}^{2}}\ln\big{|}x-y\big{|}d\mu(x)d\mu(y)\text{
if }\mu\ll dx\\\ +\infty\text{ otherwise }\end{cases}\,.$ (17)
(Here we write $\mu\ll dx$ to mean that $\mu$ is absolutely continuous with
respect to the Lebesgue measure.)
Consequently, following [GM22, Lemma 3.2], the density $\rho^{V}_{P}$ of
$\mu^{V}_{P}$ satisfies equation (5), which we rewrite here for convenience.
$V(x)-2P\int_{\mathbb{R}}\ln|x-y|\rho^{V}_{P}(y)dy+\ln\rho^{V}_{P}(x)=\lambda^{V}_{P}\,,$
(18)
where $\lambda^{V}_{P}$ is a constant (depending on $V$ and $P$). Using this
equation, we will show in the next lemma that $\rho^{V}_{P}$ decays
exponentially and is twice continuously differentiable via the representation:
$\forall
x\in{\mathbb{R}},\;\rho^{V}_{P}(x)=\exp\Big{(}-V(x)-2PU^{\rho_{P}^{V}}(x)-\lambda_{P}^{V}\Big{)}$
In the Gaussian potential case, i.e., $V_{G}(x)=\dfrac{x^{2}}{2}$, an explicit
formula was found in [ABG12]:
$\rho_{P}^{V_{G}}(x)=\dfrac{\Gamma(P)}{P\sqrt{2\pi}}\dfrac{\exp\Big{(}-\dfrac{x^{2}}{2}\Big{)}}{\displaystyle\int_{0}^{+\infty}t^{P-1}e^{-\frac{t^{2}}{2}+ixt}dt}$
It has been established in [BGP15] that
$\sqrt{P+1}\rho_{P}^{V_{G}}(\sqrt{P+1}x)$ converges to the Gaussian
distribution when $P$ goes to zero and the semi-circle law when $P$ goes to
infinity. So in the Gaussian case, $\mu_{P}$ can be seen as an interpolation
between the Gaussian distribution and the semi-circular one. We now drop the
superscripts of $\rho^{V}_{P}$ and $\mu_{P}^{V}$ and denote them $\rho_{P}$
and $\mu_{P}$ for convenience. In the next lemma, we prove that $\rho_{P}$ has
the same regularity as $V$.
###### Lemma 2.2.
Under Assumption 1.1,
* •
The support of $\mu_{P}$ is ${\mathbb{R}}$ and there exists a constant
$C^{V}_{P}$ such that for all $x\in{\mathbb{R}}$,
$\rho_{P}(x)\leqslant C^{V}_{P}(1+|x|)^{2P}e^{-V(x)}\,.$
* •
The density $\rho_{P}$ is in $\mathcal{C}^{3}({\mathbb{R}})$ and we have
$\rho_{P}^{\prime}=-\Big{(}V^{\prime}+2P\mathcal{H}[\rho_{P}]\Big{)}\rho_{P}$
(19)
and
$\rho_{P}^{\prime\prime}=\Big{(}-2P\mathcal{H}[\rho_{P}]^{\prime}-V^{\prime\prime}+V^{\prime
2}+4P^{2}\mathcal{H}[\rho_{P}]^{2}+4PV^{\prime}\mathcal{H}[\rho_{P}]\Big{)}\rho_{P}\,.$
(20)
###### Proof.
For the first point, [GM22, Lemma 3.2] establishes that the support of
$\mu_{P}$ is the whole real axis, and that under the first condition of
Assumptions 1.1, we have the bound, valid for all $x\in{\mathbb{R}}$
$\rho_{P}(x)\leqslant\frac{K^{V}_{P}}{(1+|x|)^{2}}\,,$ (21)
with $K^{V}_{P}$ a positive constant. Using (18) and the fact that
$\ln|x-y|\leqslant\ln\big{(}1+|x|\big{)}+\ln\big{(}1+|y|\big{)}\,,$
we see that for all $x\in{\mathbb{R}}$,
$\rho_{P}(x)\leqslant C^{V}_{P}\exp\Big{(}-V(x)+2P\ln(1+|x|)\Big{)}\,,$ (22)
with
$C^{V}_{P}=\exp\Big{(}2P\int_{\mathbb{R}}\ln(1+|y|)\rho_{P}(y)dy+\lambda_{P}^{V}\Big{)}$
which is indeed finite by (21).
For the second point, we use that
$\big{(}U^{\rho_{P}}\big{)}^{\prime}=\mathcal{H}[\rho_{P}]$ weakly and
equation (18) to conclude on the distributional identity
$\rho_{P}^{\prime}=\Big{(}-V^{\prime}-2P\mathcal{H}[\rho_{P}]\Big{)}\rho_{P}\,.$
By the second point of Assumption 1.1,
$V^{\prime}(x)e^{-V(x)+2P\ln(1+|x|)}=o(x^{-1})$ as $|x|\rightarrow\infty$,
thus by (22), $V^{\prime}\rho_{P}\in L^{2}({\mathbb{R}})$. Also since
$\rho_{P}$ is $L^{2}({\mathbb{R}})$ and bounded, we deduce, by using that
$\mathcal{H}\big{[}L^{2}({\mathbb{R}})\big{]}=L^{2}({\mathbb{R}})$, that
$\mathcal{H}[\rho_{P}]\rho_{P}\in L^{2}({\mathbb{R}})$. Adding up these terms
we get $\rho_{P}\in H^{1}({\mathbb{R}})$. Because
$\mathcal{H}[\rho_{P}]^{\prime}=\mathcal{H}[\rho_{P}^{\prime}]$ in a weak
sense by Lemma 2.1, $\mathcal{H}[\rho_{P}]\in H^{1}({\mathbb{R}})$. By the
classical fact that $H^{1}({\mathbb{R}})$ is contained in the set of
$1/2$-Hölder functions $\mathcal{C}^{1/2}({\mathbb{R}})$, we have
$\mathcal{H}[\rho_{P}]\in\mathcal{C}^{1/2}({\mathbb{R}})$ and so
$U^{\rho_{P}}\in\mathcal{C}^{1,1/2}({\mathbb{R}})$, the set of functions in
$\mathcal{C}^{1}({\mathbb{R}})$ with derivative of class $1/2$-Hölder.
Using the fact that $V$ is continuously differentiable, the previous equation
for the weak derivative of $\rho_{P}$ then ensures that
$\rho_{P}\in\mathcal{C}^{1}({\mathbb{R}})$ and equation (19) holds in the
strong sense.
Differentiating (in a weak sense) equation (19) we obtain
$\rho_{P}^{\prime\prime}=\Big{(}-2P\mathcal{H}[\rho_{P}]^{\prime}-V^{\prime\prime}+V^{\prime
2}+4P^{2}\mathcal{H}[\rho_{P}]^{2}+4PV^{\prime}\mathcal{H}[\rho_{P}]\Big{)}\rho_{P}\,.$
The first three terms belong to $L^{2}({\mathbb{R}})$ for the same reasons as
before. Since $\rho_{P}\in H^{1}({\mathbb{R}})$, by Lemma 2.1 iii)
$\mathcal{H}[\rho_{P}]\in H^{1}({\mathbb{R}})$ as well; it is then bounded
over ${\mathbb{R}}$, hence the last two terms are in $L^{2}({\mathbb{R}})$
when multiplied by $\rho_{P}$. Finally, we can conclude that $\rho_{P}\in
H^{2}({\mathbb{R}})$ and so that $\mathcal{H}[\rho_{P}]\in
H^{2}({\mathbb{R}})$ with
$\mathcal{H}[\rho_{P}]^{\prime\prime}=\mathcal{H}[\rho_{P}^{\prime\prime}]$
(in a weak sense). As before, we conclude that
$\rho_{P}\in\mathcal{C}^{2}({\mathbb{R}})$ and that equation (20) holds in a
strong sense. By the exact same method, we can show that
$\rho_{P}\in\mathcal{C}^{3}({\mathbb{R}})$. ∎
We next show that the Hilbert transform of $\rho_{P}$ is continuous and decays
at infinity.
###### Lemma 2.3.
Let $u\in L^{2}({\mathbb{R}})$ such that $\int_{\mathbb{R}}u(t)dt$ exists and
$f:t\mapsto tu(t)\in H^{1}({\mathbb{R}})$ then
$\mathcal{H}[u](x)\underset{|x|\rightarrow\infty}{\sim}\displaystyle\dfrac{-\int_{\mathbb{R}}u(t)dt}{x}.$
Moreover if $\displaystyle\int_{\mathbb{R}}u(t)dt=0$,
$\int_{\mathbb{R}}f(t)dt$ exists and $g:t\mapsto t^{2}u(t)\in
H^{1}({\mathbb{R}})$, then
$\mathcal{H}[u](x)\underset{|x|\rightarrow\infty}{\sim}\displaystyle\dfrac{-\int_{\mathbb{R}}tu(t)dt}{x^{2}}.$
As a consequence, we obtain that
$\mathcal{H}[\rho_{P}](x)\underset{|x|\rightarrow\infty}{\sim}-x^{-1}$, and
the logarithmic potential $U^{\rho_{P}}$ is Lipschitz, with bounded derivative
$\mathcal{H}[\rho_{P}]$.
###### Proof.
Let $u\in L^{2}({\mathbb{R}})$, such that $\int_{\mathbb{R}}u(t)dt$ exists and
$f:t\mapsto tu(t)\in H^{1}({\mathbb{R}})$. Then
$x\mathcal{H}[u](x)+\int_{\mathbb{R}}u(t)dt=\int_{\mathbb{R}}\Big{[}\dfrac{xu(x+t)-xu(x-t)}{2t}+\dfrac{u(x+t)}{2}+\dfrac{u(x-t)}{2}\Big{]}dt=\mathcal{H}[f](x).$
Since $f\in H^{1}({\mathbb{R}})$, so is $\mathcal{H}[f]$, proving that it goes
to zero at infinity. Hence
$\mathcal{H}[u](x)\underset{|x|\rightarrow\infty}{\sim}\displaystyle\dfrac{-\int_{\mathbb{R}}u(t)dt}{x}$
Moreover if $\displaystyle\int_{\mathbb{R}}u(t)dt=0$,
$\int_{\mathbb{R}}f(t)dt$ exists and $g:t\mapsto t^{2}u(t)\in
H^{1}({\mathbb{R}})$, then by the same argument:
$x^{2}\mathcal{H}[u](x)=x\mathcal{H}[f](x)=\mathcal{H}[g](x)-\int_{\mathbb{R}}f(t)dt$
where $g(t)=t^{2}u(t)$. We deduce that
$\mathcal{H}[u](x)\underset{|x|\rightarrow\infty}{\sim}\displaystyle\dfrac{-\int_{\mathbb{R}}tu(t)dt}{x^{2}}$
since $\mathcal{H}[g]$ goes to zero at infinity. ∎
###### Lemma 2.4 (Asymptotic of the logarithmic potential).
We have the following asymptotic expansion at infinity:
$U^{\rho_{P}}(x)=-\ln|x|+\underset{|x|\rightarrow\infty}{O}(1)$.
###### Proof.
Since
$\mathcal{H}[\rho_{P}](x)=-x^{-1}+\underset{|x|\rightarrow\infty}{O}(x^{-2})$,
and recalling that $U^{\rho^{V}_{P}}$ (defined by (16)) satisfies
$(U^{\rho^{V}_{P}})^{\prime}=\mathcal{H}[\rho^{V}_{P}]$, we deduce the result
by integrating $t\mapsto\mathcal{H}[\rho^{V}_{P}](t)+1/t$ in a neighborhood of
infinity. ∎
We conclude this section by stating the Poincaré inequality for the measure
$\mu_{P}$ under the assumption that $V$ is a bounded perturbation of a
strictly convex potential $V_{\mathrm{conv}}$.
###### Lemma 2.5.
Let $\varepsilon>0$. There exists a function
$f_{\varepsilon}\in\mathcal{C}^{2}({\mathbb{R}})$ such that
$f_{\varepsilon}(x)-\ln|x|=\underset{|x|\rightarrow\infty}{O}(1)$ and
$\|f_{\varepsilon}^{\prime\prime}\|_{\infty}\leqslant\varepsilon$.
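Lemma 2.5 admits a completely explicit instance (a sketch of one admissible choice; the construction used in the proof may differ):

```latex
f_\varepsilon(x):=\tfrac{1}{2}\ln\big(x^{2}+\varepsilon^{-1}\big),
\qquad
f_\varepsilon(x)-\ln|x|=\tfrac{1}{2}\ln\Big(1+\tfrac{1}{\varepsilon x^{2}}\Big)
  =\underset{|x|\rightarrow\infty}{O}(1),
\qquad
f_\varepsilon''(x)=\frac{\varepsilon^{-1}-x^{2}}{\big(x^{2}+\varepsilon^{-1}\big)^{2}},
```

so that $|f_{\varepsilon}^{\prime\prime}|$ attains its maximum $\varepsilon$ at $x=0$, whence $\|f_{\varepsilon}^{\prime\prime}\|_{\infty}\leqslant\varepsilon$.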
###### Proposition 2.6.
Assume that $V=V_{\mathrm{conv}}+\phi$, where
$V_{\mathrm{conv}}\in\mathcal{C}^{3}({\mathbb{R}})$ with $V_{\mathrm{conv}}$
convex, such that there exists $\varepsilon>0$ such that $\phi$ is bounded and
$V_{\mathrm{conv}}-2Pf_{\varepsilon}$ is convex ($f_{\varepsilon}$ being given
by Lemma 2.5). Then, the measure $\mu_{P}$ satisfies the Poincaré inequality:
there exists a constant $C>0$ such that for all
$f\in\mathcal{C}^{1}_{c}({\mathbb{R}})$,
$\operatorname{Var}_{\mu_{P}}(f)\leqslant
C\int_{\mathbb{R}}|f^{\prime}|^{2}d\mu_{P}\,.$ (23)
###### Proof.
We use the fact that if $\mu_{1},\mu_{2}$ are two absolutely continuous
probability measures supported on ${\mathbb{R}}$ such that
$\dfrac{1}{C}\leqslant\dfrac{d\mu_{1}}{d\mu_{2}}\leqslant C$ for some $C>0$
and $\mu_{1}$ satisfies Poincaré inequality with constant $C_{1}$ then so does
$\mu_{2}$ for some other constant. Indeed, in that case let
$f\in\mathcal{C}_{c}^{1}({\mathbb{R}})$, we have
$\displaystyle\operatorname{Var}_{\mu_{2}}(f)=\inf_{a}\int_{\mathbb{R}}\left(f-a\right)^{2}d\mu_{2}$
$\displaystyle\leqslant C\operatorname{Var}_{\mu_{1}}(f)\leqslant
C^{2}C_{1}\int_{\mathbb{R}}f^{\prime 2}d\mu_{2}\,.$
Here we take $d\mu_{2}(x):=\rho_{P}^{V}(x)dx$ and we want to compare it to a
measure $\mu_{1}$ supported on ${\mathbb{R}}$ defined by
$d\mu_{1}(x)=\dfrac{1}{Z}\exp\big{(}-W(x)\big{)}dx$ for some convex function
$W$. The measure $\mu_{1}$ then clearly verifies the Poincaré inequality. This
fact comes as a direct consequence of [BBCG08, Corollary 1.9], which states
that if a probability measure $\mu$ has a log-concave density on
${\mathbb{R}}$, then it satisfies (23). With the definition
$W:=V_{\mathrm{conv}}-2Pf_{\varepsilon}$, where $\varepsilon>0$ is such that
$V_{\mathrm{conv}}-2Pf_{\varepsilon}$ is convex, the function
$W-V-2PU^{\rho_{P}}$ is bounded on ${\mathbb{R}}$. It is then not hard to see
that $\dfrac{1}{C}\leqslant\dfrac{d\mu_{1}}{d\mu_{P}}\leqslant C$ for some
$C>0$, which allows us to conclude that $\mu_{P}$ satisfies the Poincaré
inequality.
∎
###### Remark 2.7.
We will later apply inequality (23) to more general functions than those of
$\mathcal{C}^{1}_{c}({\mathbb{R}})$, namely functions of the weighted Sobolev
space $H^{1}(\rho_{P})$ defined in Section 6, which can be seen as the
completion of $\mathcal{C}^{\infty}_{c}({\mathbb{R}})$ with respect to the
norm $\|u\|_{L^{2}(\rho_{P})}+\|u^{\prime}\|_{L^{2}(\rho_{P})}$.
## 3 Concentration inequality, proof of Theorem 1.5
We prove in this section the concentration Theorem 1.5. Its proof is a direct
adaptation of Theorem 1.4 of [HL21], which shows the analogous estimate in the
circular setup. It is inspired by [MMS14] and based on a comparison between a
configuration $\mathbf{x}_{N}=(x_{1},\ldots,x_{N})$ sampled with
$\mathbb{P}_{N}^{V,P}$ and a regularized version
$\mathbf{y}_{N}=(y_{1},\ldots,y_{N})$, which we describe here.
###### Definition 3.1.
$y_{1}:=x_{1}$, and for $1\leqslant k\leqslant N-1$,
$y_{k+1}:=y_{k}+\max\\{x_{k+1}-x_{k},N^{-3}\\}.$
Note that the configuration $\mathbf{y}_{N}$ given by the previous definition
satisfies $y_{k+1}-y_{k}\geqslant N^{-3}$, and $\mathbf{y}_{N}$ is close to
$\mathbf{x}_{N}$ in the sense that
$\sum_{k=1}^{N}|x_{k}-y_{k}|\leqslant\dfrac{1}{2N}\,.$ (24)
Indeed, by construction we have
$|x_{k}-y_{k}|=y_{k}-x_{k}\leqslant(k-1)N^{-3}$, and we get the bound by
summing these inequalities.
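For concreteness, the regularization of Definition 3.1 is straightforward to implement; the following minimal Python sketch (the function name is ours) also checks the minimal-spacing property and the closeness bound (24).

```python
import numpy as np

def regularize(x, N):
    """Definition 3.1: y_1 = x_1 and y_{k+1} = y_k + max(x_{k+1} - x_k, N**-3),
    applied to the increasingly sorted configuration x."""
    x = np.sort(x)
    y = np.empty_like(x)
    y[0] = x[0]
    for k in range(len(x) - 1):
        y[k + 1] = y[k] + max(x[k + 1] - x[k], N ** -3)
    return y

N = 100
x = np.random.default_rng(2).normal(size=N)
y = regularize(x, N)
assert np.all(np.diff(y) >= N ** -3 - 1e-12)   # spacing y_{k+1} - y_k >= N^-3
assert np.sum(y - np.sort(x)) <= 0.5 / N       # closeness bound (24)
```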
The key point of the proof of Theorem 1.5 is comparing the empirical measure
$\hat{\mu}_{N}=\frac{1}{N}\sum_{i=1}^{N}\delta_{x_{i}},$ where
$\mathbf{x}_{N}$ follows $\mathbb{P}_{N}^{V,P}$, to the regularized measure
$\widetilde{\mu}_{N}:=\lambda_{N^{-5}}\ast\frac{1}{N}\sum_{i=1}^{N}\delta_{y_{i}},$
(25)
i.e., the convolution of $\lambda_{N^{-5}}$ with the empirical measure, where
$\lambda_{N^{-5}}$ is the uniform measure on $[0,N^{-5}]$. The interest of
introducing the measure $\widetilde{\mu}_{N}$ is that it is close to
$\hat{\mu}_{N}$, while having a finite energy
$\mathcal{E}_{P}^{V}(\widetilde{\mu}_{N})$, given by (17). Finally, notice
that the empirical measure does not change when reordering
$x_{1},\ldots,x_{N}$, and thus we lose no generality for our purposes in
assuming that $x_{1}\leqslant\ldots\leqslant x_{N}$ in Definition 3.1.
We now introduce a distance on $\mathcal{P}({\mathbb{R}})$ which is well-
suited to our context.
###### Definition 3.2.
For $\mu,\mu^{\prime}\in\mathcal{P}({\mathbb{R}})$ we define the distance
(possibly infinite) $D(\mu,\mu^{\prime})$ by
$\displaystyle D(\mu,\mu^{\prime})$
$\displaystyle:=\left(-\int\ln|x-y|d(\mu-\mu^{\prime})(x)d(\mu-\mu^{\prime})(y)\right)^{1/2}$
(26)
$\displaystyle=\left(\int_{0}^{+\infty}\frac{1}{t}\big{|}\mathcal{F}[\mu-\mu^{\prime}](t)\big{|}^{2}dt\right)^{1/2}.$
(27)
where the Fourier transform of a signed measure $\nu$ is defined by
$\mathcal{F}[\nu](t):=\displaystyle\int_{\mathbb{R}}e^{-\mathrm{i}tx}d\nu(x)\,.$
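The equality of (26) and (27) can be seen from the classical integral representation of the logarithm (a sketch; it uses that $\nu:=\mu-\mu^{\prime}$ has total mass zero, so the regularizing term integrates to zero against $\nu\otimes\nu$):

```latex
% using -\ln|u| = \int_0^{+\infty} \frac{\cos(tu)-e^{-t}}{t}\,dt and \nu(\mathbb{R})=0:
-\iint\ln|x-y|\,d\nu(x)d\nu(y)
  =\int_{0}^{+\infty}\frac{1}{t}\iint\cos\big(t(x-y)\big)\,d\nu(x)d\nu(y)\,dt
  =\int_{0}^{+\infty}\frac{1}{t}\,\big|\mathcal{F}[\nu](t)\big|^{2}\,dt\,,
```

since $\iint\cos\big{(}t(x-y)\big{)}d\nu(x)d\nu(y)=\mathcal{F}[\nu](t)\overline{\mathcal{F}[\nu](t)}$.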
Let $f:{\mathbb{R}}\to{\mathbb{R}}$ with finite $1/2$ norm
$\|f\|_{1/2}:=\left(\int_{\mathbb{R}}|t|\left|\mathcal{F}[f](t)\right|^{2}dt\right)^{1/2}$.
By the Plancherel theorem and the Hölder inequality, for any
$\mu,\mu^{\prime}\in\mathcal{P}({\mathbb{R}})$, setting
$\nu=\mu-\mu^{\prime}$,
$\left|\int_{\mathbb{R}}fd\mu-\int_{\mathbb{R}}fd\mu^{\prime}\right|^{2}=\left|\dfrac{1}{2\pi}\int_{\mathbb{R}}|t|^{1/2}\mathcal{F}[f](t)\frac{\overline{\mathcal{F}[\nu](t)}}{|t|^{1/2}}dt\right|^{2}\leqslant\dfrac{1}{2\pi^{2}}\|f\|_{1/2}^{2}D^{2}(\mu,\mu^{\prime}).$
Therefore the metric $d$ defined in (13) is dominated by $D$:
$d(\mu,\mu^{\prime})\leqslant\dfrac{1}{\sqrt{2}\pi}D(\mu,\mu^{\prime}).$ (28)
The following lemma shows how the distance $D$ is related to the energy-
functional $\mathcal{E}^{V}_{P}$ defined in (17), we will write
$\mathcal{E}_{P}$ for simplicity.
###### Lemma 3.3.
We have for any absolutely continuous $\mu\in\mathcal{P}({\mathbb{R}})$ with
finite energy $\mathcal{E}^{V}_{P}(\mu)$,
$\mathcal{E}_{P}(\mu)-\mathcal{E}_{P}(\mu_{P})=PD^{2}(\mu,\mu_{P})+\int\ln\left(\frac{d\mu}{d\mu_{P}}\right)d\mu\,.$
(29)
###### Proof of Lemma 3.3.
Expanding the difference $\mathcal{E}_{P}(\mu)-\mathcal{E}_{P}(\mu_{P})$, we find
$\mathcal{E}_{P}(\mu)-\mathcal{E}_{P}(\mu_{P})=\int
Vd(\mu-\mu_{P})+\int\ln\frac{d\mu}{dx}d\mu-\int\ln\rho_{P}d\mu_{P}-P\iint\ln|x-y|d\mu(x)d\mu(y)\\\
+P\iint\ln|x-y|d\mu_{P}(x)d\mu_{P}(y)\,.$ (30)
Now, if $\nu$ is a signed measure of mass zero, integrating (18) we get
$\int
V(x)d\nu(x)-2P\iint\ln|x-y|d\nu(x)d\mu_{P}(y)+\int\ln(\rho_{P})(x)d\nu(x)=0\,.$
We take $\nu=\mu-\mu_{P}$, and get
$\int
V(x)d(\mu-\mu_{P})(x)=2P\iint\ln|x-y|d\mu(x)d\mu_{P}(y)-2P\iint\ln|x-y|d\mu_{P}(x)d\mu_{P}(y)\\\
-\int\ln(\rho_{P})(x)d\mu(x)+\int\ln(\rho_{P})(x)d\mu_{P}(x)\,.$
Plugging this last identity in (30), we find
$\mathcal{E}_{P}(\mu)-\mathcal{E}_{P}(\mu_{P})=-P\iint\ln|x-y|d\nu(x)d\nu(y)+\int\ln\left(\frac{d\mu}{d\mu_{P}}\right)(x)d\mu(x)$
which establishes the result. ∎
###### Proof of Theorem 1.5.
We first give a lower bound for the partition function $Z_{N}^{V,P}$ (4) of
$\mathbb{P}_{N}^{V,P}$. We rewrite it as
$Z^{V,P}_{N}=\int_{{\mathbb{R}}^{N}}\exp\Bigg{(}\dfrac{2P}{N}\sum_{i<j}\ln|x_{i}-x_{j}|-\sum_{i=1}^{N}\Big{[}V(x_{i})+\ln\rho_{P}(x_{i})\Big{]}\Bigg{)}d\rho_{P}(x_{1})\dots
d\rho_{P}(x_{N})\,,$
and apply Jensen's inequality to obtain:
$\displaystyle\ln Z_{N}^{V,P}$
$\displaystyle\geqslant\int_{{\mathbb{R}}^{N}}\Bigg{(}\dfrac{2P}{N}\sum_{i<j}\ln|x_{i}-x_{j}|-\sum_{i=1}^{N}\Big{[}V(x_{i})+\ln\rho_{P}(x_{i})\Big{]}\Bigg{)}d\rho_{P}(x_{1})\dots
d\rho_{P}(x_{N})$ $\displaystyle\geqslant
P(N-1)\displaystyle\iint\ln|x-y|d\rho_{P}(x)d\rho_{P}(y)-N\int_{\mathbb{R}}\Big{[}V+\ln\rho_{P}\Big{]}d\rho_{P}$
$\displaystyle\geqslant-N\mathcal{E}_{P}^{V}\big{[}\mu_{P}\big{]}-P\iint\ln|x-y|d\rho_{P}(x)d\rho_{P}(y).$
Using this estimate and the fact that for $1\leqslant i,j\leqslant N$ we have
$|x_{i}-x_{j}|\leqslant|y_{i}-y_{j}|$, with
$\mathbf{y}_{N}=(y_{1},\ldots,y_{N})$ of definition 3.1, we deduce the bound
on the density of probability
$\frac{d\mathbb{P}^{V,P}_{N}}{d\mathbf{x}}(x_{1},\ldots,x_{N})\leqslant
e^{N\mathcal{E}_{P}(\mu_{P})+P\iint\ln|x-y|d\mu_{P}(x)d\mu_{P}(y)+\frac{P}{N}\sum_{i\neq
j}\ln|y_{i}-y_{j}|-\sum_{i=1}^{N}V(x_{i})}\,.$ (31)
Recalling (25), we now show the following estimate
$\sum_{i\neq j}\ln|y_{i}-y_{j}|\leqslant
2+N^{2}\iint\ln|x-y|d\widetilde{\mu}_{N}(x)d\widetilde{\mu}_{N}(y)+5N\ln
N+\frac{3}{2}N\,.$ (32)
Let $i\neq j$ and $u,v\in[0,N^{-5}]$. Since for $x\neq 0$ and
$|h|\leqslant\frac{|x|}{2}$, we have
$\big{|}\ln|x+h|-\ln|x|\big{|}\leqslant\frac{2|h|}{|x|}$, we deduce
$\big{|}\ln|y_{i}-y_{j}+u-v|-\ln|y_{i}-y_{j}|\big{|}\leqslant\frac{2|u-v|}{|y_{i}-y_{j}|}\leqslant\frac{2N^{-5}}{N^{-3}}=\dfrac{2}{N^{2}}.$
Thus, summing over $i\neq j$ and integrating with respect to $u$ and $v$, we
get
$\displaystyle\sum_{i\neq j}\ln|y_{i}-y_{j}|$ $\displaystyle\leqslant
2+\sum_{i\neq
j}\iint\ln|y_{i}-y_{j}+u-v|d\lambda_{N^{-5}}(u)d\lambda_{N^{-5}}(v)$
$\displaystyle=2+N^{2}\iint\ln|x-y|d\widetilde{\mu}_{N}(x)d\widetilde{\mu}_{N}(y)-N\iint\ln|u-v|d\lambda_{N^{-5}}(u)d\lambda_{N^{-5}}(v)\,.$
The last integral is equal to $-\frac{3}{2}-5\ln N$, so we deduce (32). We now
combine (31) and (32). Recall (17) and set
$c_{N}=P\left(\displaystyle\iint\ln|x-y|d\mu_{P}(x)d\mu_{P}(y)+3/2+2/N\right)\,.$
Then we get
$\displaystyle\frac{d\mathbb{P}^{V,P}_{N}}{d\mathbf{x}}(x_{1},\ldots,x_{N})$
$\displaystyle\leqslant e^{c_{N}+5P\ln
N}\exp{\left[N\left\\{\mathcal{E}_{P}(\mu_{P})-\mathcal{E}_{P}(\widetilde{\mu}_{N})+\int\left(V+\ln\frac{d\widetilde{\mu}_{N}}{dx}\right)d\widetilde{\mu}_{N}\right\\}-\sum_{i=1}^{N}V(x_{i})\right]}$
$\displaystyle=e^{c_{N}+5P\ln
N}\exp{\left[-NPD^{2}(\widetilde{\mu}_{N},\mu_{P})+N\int\left(V+\ln\rho_{P}\right)d\widetilde{\mu}_{N}-\sum_{i=1}^{N}V(x_{i})\right]}\,$
where we used equation (29) in the last equality. Using again equation (18) we
then see that the density
$\dfrac{d\mathbb{P}^{V,P}_{N}}{d\mathbf{x}}(x_{1},\ldots,x_{N})$ is bounded by
$e^{c_{N}+5P\ln
N}\exp{\left[-NPD^{2}(\widetilde{\mu}_{N},\mu_{P})+2PN\iint\ln|x-y|d(\widetilde{\mu}_{N}-\hat{\mu}_{N})(x)d\mu_{P}(y)\right]}\prod_{i=1}^{N}\rho_{P}(x_{i})\,.$
Recalling (16), we used that
$\displaystyle\iint\ln|x-y|d(\widetilde{\mu}_{N}-\hat{\mu}_{N})(x)d\mu_{P}(y)=-\int
U^{\rho_{P}}d(\widetilde{\mu}_{N}-\hat{\mu}_{N})$. As a consequence of the
bound on the density
$\dfrac{d\mathbb{P}^{V,P}_{N}}{d\mathbf{x}}(x_{1},\ldots,x_{N})$ we
established, we have for all $r>0$
$\mathbb{P}^{V,P}_{N}\left(D^{2}(\widetilde{\mu}_{N},\mu_{P})>r\right)\leqslant
e^{-NPr+c_{N}+5P\ln N}\int_{{\mathbb{R}}^{N}}\exp\left\\{-2PN\int
U^{\rho_{P}}d(\widetilde{\mu}_{N}-\hat{\mu}_{N})\right\\}\prod_{i=1}^{N}\rho_{P}(x_{i})dx_{i}\,.$
(33)
Next, we show that $-N\int U^{\rho_{P}}d(\widetilde{\mu}_{N}-\hat{\mu}_{N})$
is bounded. By Lemma 2.3, $U^{\rho_{P}}$ is differentiable with bounded
derivative $\mathcal{H}[\rho_{P}]$ on ${\mathbb{R}}$. As a consequence,
$\displaystyle\left|N\int
U^{\rho_{P}}d(\widetilde{\mu}_{N}-\hat{\mu}_{N})\right|$
$\displaystyle\leqslant\sum_{i=1}^{N}\int\left|U^{\rho_{P}}(y_{i}+u)-U^{\rho_{P}}(x_{i})\right|d\lambda_{N^{-5}}(u)$
$\displaystyle\leqslant\|\mathcal{H}[\rho_{P}]\|_{\infty}\left(\sum_{i=1}^{N}|y_{i}-x_{i}|+N\int
ud\lambda_{N^{-5}}(u)\right)$
$\displaystyle\leqslant\|\mathcal{H}[\rho_{P}]\|_{\infty}\Big{(}\dfrac{1}{2N}+N^{-4}/2\Big{)},$
where we used (24) in the last inequality. Therefore, we deduce from (33)
$\mathbb{P}^{V,P}_{N}\left(D^{2}(\widetilde{\mu}_{N},\mu_{P})>r\right)\leqslant
e^{-NPr+c_{N}+5P\ln
N+\frac{2P}{N}\|\mathcal{H}[\rho_{P}]\|_{\infty}}=e^{-NPr+5P\ln N+K_{N}}$ (34)
with
$K_{N}:=c_{N}+\dfrac{2P}{N}\|\mathcal{H}\left[\rho_{P}\right]\|_{\infty}$.
Since $(c_{N})_{N}$ is bounded, so is $(K_{N})_{N}$.
Finally, let $f$ be a Lipschitz bounded function with
$\|f\|_{\text{Lip}}\leqslant 1$, then, we have (as we did for $U^{\rho_{P}}$)
$\left|\int fd\hat{\mu}_{N}-\int fd\widetilde{\mu}_{N}\right|\leqslant
N^{-2}\,.$
Thus by (28)
$d(\hat{\mu}_{N},\mu_{P})\leqslant
d(\hat{\mu}_{N},\widetilde{\mu}_{N})+d(\widetilde{\mu}_{N},\mu_{P})\leqslant
N^{-2}+\frac{1}{\sqrt{2}\pi}D(\widetilde{\mu}_{N},\mu_{P})\,,$
and for any $N$ such that $r-N^{-2}\geqslant r/2$ (in particular $r-N^{-2}>0$)
we get
$\displaystyle\mathbb{P}^{V,P}_{N}\left(d(\hat{\mu}_{N},\mu_{P})>r\right)\leqslant\mathbb{P}^{V,P}_{N}\left(\frac{1}{2\pi^{2}}D^{2}(\widetilde{\mu}_{N},\mu_{P})>(r-N^{-2})^{2}\right)$
$\displaystyle\leqslant\mathbb{P}^{V,P}_{N}\left(\frac{1}{2\pi^{2}}D^{2}(\widetilde{\mu}_{N},\mu_{P})>r^{2}/4\right)\,,$
and the last term is bounded by $e^{-Nr^{2}\frac{P\pi^{2}}{2}+5P\ln N+K}$ for
some $K$ large enough, which concludes the proof. ∎
As a consequence of Theorem 1.5, we are able to control the quantities
$\zeta_{N}(\phi):=\iint_{{\mathbb{R}}^{2}}\frac{\phi(x)-\phi(y)}{x-y}d(\hat{\mu}_{N}-\mu_{P})(x)d(\hat{\mu}_{N}-\mu_{P})(y)$
(35)
for a certain class of test functions $\phi$.
###### Corollary 3.4.
There exist $C,K>0$ such that for all
$\phi\in\mathcal{C}^{2}({\mathbb{R}})\cap H^{2}({\mathbb{R}})$ with bounded
second derivative, we have for $\varepsilon>0$ and $N$ large enough,
$\mathbb{P}_{N}^{V,P}\left(\sqrt{N}|\zeta_{N}(\phi)|\leqslant
N^{-1/2+\varepsilon}\right)\geqslant
1-\exp\left\\{-\frac{PN^{\varepsilon}}{2C\|\phi\|_{H^{2}({\mathbb{R}})}}+5P\ln
N+K\right\\}\,.$
In the proof we will also use the notation
$N_{2}(\phi)=\|\phi^{\prime}\|_{L^{2}(dx)}+\|\phi^{\prime\prime}\|_{L^{2}(dx)}$.
###### Proof.
We follow the proof given in [Gui19, Corollary 4.16] and adapt it to our setting.
Let us denote by $\widetilde{\zeta_{N}}(\phi)$ the quantity
$\iint_{{\mathbb{R}}^{2}}\frac{\phi(x)-\phi(y)}{x-y}d(\widetilde{\mu}_{N}-\mu_{P})(x)d(\widetilde{\mu}_{N}-\mu_{P})(y)\,.$
We have the almost sure inequality, by a Taylor estimate
$|\zeta_{N}(\phi)-\widetilde{\zeta_{N}}(\phi)|\leqslant
2N^{-2}\|\phi^{\prime\prime}\|_{\infty}\,.$ (36)
Thus, for any $\delta>0$,
$\displaystyle\mathbb{P}_{N}^{V,P}\left(|\zeta_{N}(\phi)|>\delta\right)$
$\displaystyle\leqslant\mathbb{P}_{N}^{V,P}\left(|\zeta_{N}(\phi)-\widetilde{\zeta_{N}}(\phi)|>\delta/2\right)+\mathbb{P}_{N}^{V,P}\left(|\widetilde{\zeta_{N}}(\phi)|>\delta/2\right)$
$\displaystyle\leqslant\mathbb{P}_{N}^{V,P}\left(2N^{-2}\|\phi^{\prime\prime}\|_{\infty}>\delta/2\right)+\mathbb{P}_{N}^{V,P}\left(|\widetilde{\zeta_{N}}(\phi)|>\delta/2\right)\,,$
where the first term of the right-hand side is either $0$ or $1$. With
$\delta=N^{-1+\varepsilon}$, $\varepsilon>0$, it is zero for $N$ large enough.
For such a choice of $\delta$, and for $N$ large enough,
$\mathbb{P}_{N}^{V,P}\left(|\zeta_{N}(\phi)|>N^{-1+\varepsilon}\right)\leqslant\mathbb{P}_{N}^{V,P}\left(|\widetilde{\zeta_{N}}(\phi)|>\frac{1}{2}N^{-1+\varepsilon}\right)\,.$
We next show that, for some $C>0$ independent of $\phi$, we have
$|\widetilde{\zeta_{N}}(\phi)|\leqslant
CD^{2}(\widetilde{\mu}_{N},\mu_{P})\|\phi\|_{H^{2}({\mathbb{R}})}\,.$ (37)
We begin by showing this inequality for $\psi\in\mathcal{S}({\mathbb{R}})$. By
using the inverse Fourier transform we have
$\displaystyle\widetilde{\zeta}_{N}(\psi)=\dfrac{1}{2\pi}\iint\dfrac{\int
dt\,\mathcal{F}[\psi](t)e^{\mathrm{i}tx}-\int
dt\,\mathcal{F}[\psi](t)e^{\mathrm{i}ty}}{x-y}d\big{(}\widetilde{\mu}_{N}-\mu_{P}\big{)}(x)d\big{(}\widetilde{\mu}_{N}-\mu_{P}\big{)}(y)$
$\displaystyle=\dfrac{1}{2\pi}\int dt\mathrm{i}t\mathcal{F}[\psi](t)\iint
e^{\mathrm{i}ty}\dfrac{e^{\mathrm{i}t(x-y)}-1}{\mathrm{i}t(x-y)}d\big{(}\widetilde{\mu}_{N}-\mu_{P}\big{)}(x)d\big{(}\widetilde{\mu}_{N}-\mu_{P}\big{)}(y)$
$\displaystyle=\dfrac{1}{2\pi}\int dt\mathrm{i}t\mathcal{F}[\psi](t)\iint
e^{\mathrm{i}ty}\int_{0}^{1}d\alpha e^{\mathrm{i}\alpha
t(x-y)}d\big{(}\widetilde{\mu}_{N}-\mu_{P}\big{)}(x)d\big{(}\widetilde{\mu}_{N}-\mu_{P}\big{)}(y)$
$\displaystyle=\dfrac{1}{2\pi}\int
dt\mathrm{i}t\mathcal{F}[\psi](t)\int_{0}^{1}d\alpha\int e^{\mathrm{i}\alpha
tx}d\big{(}\widetilde{\mu}_{N}-\mu_{P}\big{)}(x)\int
e^{\mathrm{i}(1-\alpha)ty}d\big{(}\widetilde{\mu}_{N}-\mu_{P}\big{)}(y)$
We then apply in order the triangular inequality, Cauchy-Schwarz inequality, a
change of variable and the fact that
$\left|\mathcal{F}\left[\widetilde{\mu}_{N}-\mu_{P}\right]\right|^{2}$ is an
even function.
$\displaystyle|\widetilde{\zeta}_{N}(\psi)|$
$\displaystyle\leqslant\dfrac{1}{2\pi}\int_{\mathbb{R}}dt\left|t\mathcal{F}[\psi](t)\right|\int_{0}^{1}d\alpha\left|\mathcal{F}\left[\widetilde{\mu}_{N}-\mu_{P}\right](\alpha
t)\right|.\left|\mathcal{F}\left[\widetilde{\mu}_{N}-\mu_{P}\right]\big{(}(1-\alpha)t\big{)}\right|$
$\displaystyle\leqslant\dfrac{1}{2\pi}\int_{\mathbb{R}}dt\left|t\mathcal{F}[\psi](t)\right|\Big{(}\int_{0}^{1}d\alpha\left|\mathcal{F}\left[\widetilde{\mu}_{N}-\mu_{P}\right](\alpha
t)\right|^{2}\Big{)}^{\frac{1}{2}}\Big{(}\int_{0}^{1}d\alpha\left|\mathcal{F}\left[\widetilde{\mu}_{N}-\mu_{P}\right]\big{(}(1-\alpha)t\big{)}\right|^{2}\Big{)}^{\frac{1}{2}}$
$\displaystyle\leqslant\dfrac{1}{2\pi}\int_{\mathbb{R}}dt\left|t\mathcal{F}[\psi](t)\right|\int_{0}^{1}d\alpha\left|\mathcal{F}\left[\widetilde{\mu}_{N}-\mu_{P}\right](\alpha
t)\right|^{2}$
$\displaystyle\leqslant\dfrac{1}{2\pi}\int_{0}^{+\infty}dt\left|t\mathcal{F}[\psi](t)\right|\int_{0}^{1}\dfrac{td\alpha}{t\alpha}\left|\mathcal{F}\left[\widetilde{\mu}_{N}-\mu_{P}\right](\alpha
t)\right|^{2}+\dfrac{1}{2\pi}\int_{-\infty}^{0}dt\left|t\mathcal{F}[\psi](t)\right|\int_{0}^{1}\dfrac{-td\alpha}{-t\alpha}\left|\mathcal{F}\left[\widetilde{\mu}_{N}-\mu_{P}\right](\alpha
t)\right|^{2}$
$\displaystyle\leqslant\dfrac{1}{2\pi}\int_{\mathbb{R}}dt\left|t\mathcal{F}[\psi](t)\right|D^{2}(\widetilde{\mu}_{N},\mu_{P})$
$\displaystyle\leqslant\dfrac{1}{2\pi}\Big{(}\int_{\mathbb{R}}dt\left|t\mathcal{F}[\psi](t)\right|^{2}(1+t^{2})\Big{)}^{\frac{1}{2}}\Big{(}\int_{\mathbb{R}}\dfrac{dt}{1+t^{2}}\Big{)}^{\frac{1}{2}}D^{2}(\widetilde{\mu}_{N},\mu_{P})$
$\displaystyle\leqslant\dfrac{1}{2\sqrt{\pi}}D^{2}(\widetilde{\mu}_{N},\mu_{P})N_{2}(\psi)$
$\displaystyle\leqslant\dfrac{1}{2\sqrt{\pi}}D^{2}(\widetilde{\mu}_{N},\mu_{P})\|\psi\|_{H^{2}({\mathbb{R}})}$
By density of $\mathcal{S}({\mathbb{R}})$ in $H^{2}({\mathbb{R}})$, and since
$\widetilde{\zeta}_{N}:\Big{(}H^{2}({\mathbb{R}}),\|\cdot\|_{H^{2}({\mathbb{R}})}\Big{)}\rightarrow{\mathbb{R}}$
is continuous, the inequality still holds for $\phi$. Thus, using equation
(34),
$\mathbb{P}_{N}^{V,P}\left(|\widetilde{\zeta_{N}}(\phi)|>\frac{1}{2}N^{-1+\varepsilon}\right)\leqslant\mathbb{P}_{N}^{V,P}\left(D^{2}(\widetilde{\mu}_{N},\mu_{P})>\frac{N^{-1+\varepsilon}}{2C\|\phi\|_{H^{2}({\mathbb{R}})}}\right)\leqslant\exp\left\\{-P\frac{N^{\varepsilon}}{2C\|\phi\|_{H^{2}({\mathbb{R}})}}+5P\ln
N+K\right\\}\,,$
which concludes the proof. ∎
## 4 Localization of the edge of a configuration
In [Lam21, Theorem 1.8 and Theorem 3.4], Lambert was able to control the edge
(i.e the minimum and the maximum) of a typical configuration
$(x_{1},\ldots,x_{N})$ distributed according to $\mathbb{P}^{V,P}_{N}$, by
showing that the random measure
$\Xi_{N}:=\sum_{j=1}^{N}\delta_{\varphi_{N}^{-1}(x_{j})}$
converges in distribution towards a Poisson point process for a function
$\varphi_{N}$ which takes the form
$\varphi_{N}(x):=E_{N}+\alpha_{N}^{-1}x\,.$
Before being more precise on the construction of $(E_{N})_{N}$ and
$(\alpha_{N})_{N}$, we explain, following [Lam21], how one can use this
convergence to localize the edge of a typical configuration
$(x_{1},\ldots,x_{N})$. Let us assume for a moment that $\Xi_{N}$ converges
towards a Poisson point process with intensity $\theta(x)=e^{-x}$, with
$E_{N}\to+\infty$. In particular, the random variable
$\Xi_{N}(t,+\infty)$
converges in distribution towards a Poisson random variable with mean
$\int_{t}^{+\infty}e^{-x}dx$. Combined with the equalities
$\displaystyle\mathbb{P}^{V,P}_{N}\bigg{(}\Xi_{N}(t,+\infty)=0\bigg{)}$
$\displaystyle=\mathbb{P}^{V,P}_{N}\bigg{(}\forall\ 1\leqslant j\leqslant N,\
\varphi_{N}^{-1}(x_{j})=\alpha_{N}(x_{j}-E_{N})\leqslant t\bigg{)}$
$\displaystyle=\mathbb{P}^{V,P}_{N}\bigg{(}\alpha_{N}\left(\max_{1\leqslant
j\leqslant N}x_{j}-E_{N}\right)\leqslant t\bigg{)}\,,$
we deduce that for all $t\in{\mathbb{R}}$
$\mathbb{P}^{V,P}_{N}\left(\alpha_{N}\left(\max_{1\leqslant j\leqslant
N}x_{j}-E_{N}\right)\leqslant
t\right)\underset{N\rightarrow\infty}{\longrightarrow}\exp(-e^{-t})\,.$
Therefore, the random variable
$\alpha_{N}\left(\max_{1\leqslant j\leqslant N}x_{j}-E_{N}\right)$
converges in distribution to the Gumbel law, showing that the maximum of a
configuration is of order $E_{N}$. Furthermore, as will be clear from the
construction of $\alpha_{N}$ and $E_{N}$, $\alpha_{N}$ is positive, and goes
to infinity as $N$ goes to infinity.
Replacing in the previous analysis $\theta(x)=e^{x}$ and $E_{N}\to-\infty$, we
would have deduced in the same fashion that
$\alpha_{N}\left(\min_{1\leqslant j\leqslant N}x_{j}-E_{N}\right)$
converges in law.
With the above notations, we can apply [Lam21, Theorem 3.4] to our context.
###### Theorem 4.1.
Let $v=\pm$. There exist sequences of real numbers
$(E_{N}^{v})_{N},\,(\alpha_{N}^{v})_{N}$ with $|E_{N}^{v}|\to+\infty$ and
$\alpha_{N}^{v}>0$ for $N$ large enough, satisfying
$V^{\prime}(E_{N}^{v})=\alpha_{N}^{v}v$, such that:
* a)
$\dfrac{Ne^{-V(E_{N}^{v})+2P\ln|E_{N}^{v}|+\lambda_{P}^{V}}}{\alpha_{N}^{v}}\underset{N\rightarrow\infty}{\longrightarrow}1$
(recall that $\lambda^{V}_{P}$ is defined through equation (5)),
* b)
$\frac{\ln(\alpha_{N}^{v})}{N}\underset{N\rightarrow\infty}{\longrightarrow}0$
and
$\alpha_{N}^{v}|E_{N}^{v}|\underset{N\rightarrow\infty}{\longrightarrow}+\infty$
,
* c)
For all compact $K\subset{\mathbb{R}}$,
$(\alpha_{N}^{v})^{-2}\sup_{x\in
K}\left|V^{\prime\prime}(\varphi_{N}(x))\right|\underset{N\rightarrow\infty}{\longrightarrow}0\,.$
As a consequence, the random measure $\Xi_{N}$ converges in distribution as
$N\to\infty$ to a Poisson point process with intensity $\theta(x)=e^{-vx}$.
###### Proof.
We prove it in the case $v=+$, the case where $v=-$ being similar. We show
that there exists a sequence $(E_{N}^{+})_{N}$ going to $+\infty$ satisfying
$f(E_{N}^{+})=-\ln N$, where we defined the function $f$ by
$f(x)=-V(x)+2P\ln|x|+\lambda_{P}^{V}-\ln|V^{\prime}(x)|\,.$
Recalling Assumption 1.1 i), $|V^{\prime}|$ goes to infinity at infinity, thus
$\alpha_{N}^{+}=V^{\prime}(E_{N}^{+})\to+\infty$ (in the case $v=-1$ we would
have looked for a sequence $(E_{N}^{-})_{N}$ going to $-\infty$ and
$\alpha_{N}^{-}=-V^{\prime}(E_{N}^{-})$).
As a consequence of Assumption 1.1 ii), one shows that $\ln|V^{\prime}|$ is
negligible with respect to $V$ at infinity. Therefore, because
$\dfrac{\ln|x|}{V(x)}\underset{|x|\rightarrow\infty}{\longrightarrow}0$,
$f(x)=-V(x)+\underset{x\rightarrow+\infty}{o}(V(x))\,.$ (38)
Because $f(x)\underset{x\rightarrow+\infty}{\longrightarrow}-\infty$ there
exists $(E_{N}^{+})_{N}$ going to infinity such that for all $N\geqslant 1$,
$f(E_{N}^{+})=-\ln N$. Setting $x=E_{N}^{+}$ in (38), we obtain that
$-V(E_{N}^{+})\sim f(E_{N}^{+})=-\ln N$. Property c) follows from Assumptions
1.1, point iii), along with the fact that $(\alpha_{N}^{+})^{-1}$ stays bounded.
It remains to show that
$\dfrac{\ln(\alpha_{N}^{+})}{N}=\dfrac{\ln|V^{\prime}(E_{N}^{+})|}{N}\underset{N\rightarrow\infty}{\longrightarrow}0$.
By construction, we have
$\dfrac{\ln|V^{\prime}(E_{N}^{+})|}{N}=\dfrac{\ln\Big{(}Ne^{-V(E_{N}^{+})+2P\ln|E_{N}^{+}|+\lambda_{P}^{V}}\Big{)}}{N}=-\dfrac{V(E_{N}^{+})}{N}+o(1)\,.$
Using that $V(E_{N}^{+})\sim\ln N$, we can conclude that
$\ln|V^{\prime}(E_{N}^{+})|=o(N)$ which concludes the proof. ∎
By the discussion preceding Theorem 4.1, we deduce
###### Corollary 4.2 (Edge of a configuration).
Let $E_{N}^{\pm}$, $\alpha_{N}^{\pm}:=|V^{\prime}(E_{N}^{\pm})|$ be the
sequences of Theorem 4.1 associated with $v=\pm 1$. Then, both random
variables
$\alpha_{N}^{+}\left(\max_{1\leqslant j\leqslant N}x_{j}-E_{N}^{+}\right)$
and
$\alpha_{N}^{-}\left(\min_{1\leqslant j\leqslant N}x_{j}-E_{N}^{-}\right)$
converge in distribution to a Gumbel law, whose distribution function is given
for $t\in{\mathbb{R}}$ by $\mathcal{G}(]-\infty,t])=\exp(-e^{-t})$.
Furthermore, $V(E_{N}^{\pm})\sim\ln N$,
$E_{N}^{\pm}\underset{N\rightarrow\infty}{\longrightarrow}\pm\infty$ and
$\alpha_{N}^{\pm}\underset{N\rightarrow\infty}{\longrightarrow}+\infty$.
###### Remark 4.3.
Note that [Lam21, Theorem 3.4] applies for $V$ of class $\mathcal{C}^{2}$
outside of a compact set, allowing one to take $V(x)=|x|^{a}$ for $a>1$. In
this case, we find $E_{N}^{\pm}\sim\pm(\ln N)^{1/a}$. If $V(x)=\cosh(x)$, we
find $E_{N}^{+}\sim-E_{N}^{-}\sim\operatorname{arccosh}(\ln N)\sim\ln\ln N$.
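To make the first example concrete, take $a=2$, i.e. $V(x)=x^{2}$. The relation
$V(E_{N}^{+})\sim\ln N$ of Corollary 4.2 then gives
$E_{N}^{+}\sim(\ln N)^{1/2}\,,\qquad\alpha_{N}^{+}=V^{\prime}(E_{N}^{+})\sim 2(\ln N)^{1/2}\,,$
so that the largest particle sits at height roughly $(\ln N)^{1/2}$, with
Gumbel fluctuations of order $(\ln N)^{-1/2}$, in line with classical extreme
value theory for i.i.d. Gaussian variables.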
The next lemma will be convenient in the proof of Theorem 5.2 when dealing
with error terms.
###### Lemma 4.4.
With the notations of Corollary 4.2, we have
$\mu_{P}([E_{N}^{-},E_{N}^{+}]^{c})=o(N^{-1/2})\,.$
###### Proof.
Let $0<\delta<1$, to be specified later. We have
$\displaystyle\int_{E_{N}^{+}}^{+\infty}\rho_{P}dx=\int_{E_{N}^{+}}^{+\infty}(\rho_{P})^{\delta}(\rho_{P})^{1-\delta}dx$
$\displaystyle\leqslant\int_{\mathbb{R}}(\rho_{P})^{\delta}dx\sup_{[E_{N}^{+},+\infty[}(\rho_{P})^{1-\delta}\,.$
By the first inequality of Lemma 2.2, the integral is finite. Also from the
same inequality, we have for some constant $C^{\prime}$ and $x$ big enough
$\rho_{P}(x)\leqslant C^{\prime}e^{-\frac{3}{4}V(x)}$. Because $V$ is
increasing in a neighborhood of $+\infty$, we get for $N$ large enough
$\sup_{[E_{N}^{+},+\infty[}(\rho_{P})^{1-\delta}\leqslant (C^{\prime})^{1-\delta}e^{-(1-\delta)\frac{3}{4}V(E_{N}^{+})}\,.$
Taking $\delta>0$ such that $\frac{1}{2}-(1-\delta)\frac{3}{4}=:-\gamma<0$ (for
instance $\delta=1/6$, which gives $\gamma=1/8$) and
using that $V(E_{N}^{+})=\ln N+o(\ln N)$ (established in the proof of Theorem
4.1),
$\displaystyle\sqrt{N}\int_{E_{N}^{+}}^{+\infty}\rho_{P}dx\leqslant
Ke^{-\gamma\ln N+(1-\delta)\frac{3}{4}o(\ln N)}\,,$
and the right-hand side goes to zero as $N$ goes to infinity. We deal with the
integral $\int_{-\infty}^{E_{N}^{-}}\rho_{P}dx$ in the same way.
∎
###### Remark 4.5.
We could improve the proof to show that
$\mu_{P}([E_{N}^{-},E_{N}^{+}]^{c})\sim\dfrac{1}{N}$, but showing that it is
$o(N^{-\frac{1}{2}})$ is sufficient for our purposes and requires less care.
## 5 Laplace transform for smooth test functions, proof of Theorem 1.3
Section 3 allows us to justify, in Proposition 5.1, the heuristics we gave in
equation (6) for $\phi$ having compact support. We then extend this result, in
Theorem 5.2, to a more general class of functions, by approximation by
compactly supported functions, using Corollary 4.2.
###### Proposition 5.1.
For $\phi\in\mathcal{C}^{1}({\mathbb{R}},{\mathbb{R}})$ with compact support,
we have for any real $t$, as $N$ goes to infinity,
$\mathbb{E}^{V,P}_{N}\left[e^{t\sqrt{N}\nu_{N}(\Xi\phi)}\right]\to\exp\left\\{\frac{t^{2}}{2}q_{P}(\phi)\right\\},$
(39)
where $\Xi\phi$ is given by equation (7), and $q_{P}(\phi)$ is given by
$q_{P}(\phi):=\int_{\mathbb{R}}\bigg{(}\phi^{\prime}(x)^{2}+V^{\prime\prime}(x)\phi(x)^{2}\bigg{)}d\mu_{P}(x)+P\iint_{{\mathbb{R}}^{2}}\Big{(}\frac{\phi(x)-\phi(y)}{x-y}\Big{)}^{2}d\mu_{P}(x)d\mu_{P}(y)\,.$
(40)
###### Proof.
Let $\phi\in\mathcal{C}_{c}^{1}({\mathbb{R}},{\mathbb{R}})$, and let
$t\in{\mathbb{R}}$. We perform in equation (4) the change of variables
$x_{i}=y_{i}+\frac{t}{\sqrt{N}}\phi(y_{i})$, $1\leqslant i\leqslant N$, which
is a diffeomorphism for $N$ big enough. We thus have
$Z_{N}^{V,P}=\int\prod_{1\leqslant i<j\leqslant
N}\left|y_{i}-y_{j}+\frac{t}{\sqrt{N}}\big{(}\phi(y_{i})-\phi(y_{j})\big{)}\right|^{2P/N}\,e^{-\sum_{i=1}^{N}V\left(y_{i}+\frac{t}{\sqrt{N}}\phi(y_{i})\right)}\,\prod_{i=1}^{N}\left(1+\frac{t}{\sqrt{N}}\phi^{\prime}(y_{i})\right)d^{N}\mathbf{y},$
(41)
and we expand separately the different terms of this integral. The first term
can be written as:
$\prod_{i<j}\left|y_{i}-y_{j}\right|^{2P/N}\prod_{i<j}\left|1+\frac{t}{\sqrt{N}}\frac{\phi(y_{i})-\phi(y_{j})}{y_{i}-y_{j}}\right|^{2P/N},$
The second product above, setting
$\Delta\phi_{i,j}:=\frac{\phi(y_{i})-\phi(y_{j})}{y_{i}-y_{j}}$ and using
the Taylor–Lagrange theorem, equals
$\exp\bigg{(}\frac{2P}{N}\sum_{i<j}\ln\left|1+\frac{t}{\sqrt{N}}\frac{\phi(y_{i})-\phi(y_{j})}{y_{i}-y_{j}}\right|\bigg{)}=\exp\bigg{(}\frac{2P}{N}\sum_{i<j}\left(\frac{t}{\sqrt{N}}\Delta\phi_{i,j}-\frac{t^{2}}{2N}(\Delta\phi_{i,j})^{2}+R_{N,1}(i,j)\right)\bigg{)},$
where we noticed that $1+\frac{t}{\sqrt{N}}\Delta\phi_{i,j}\geqslant
1-\frac{|t|}{\sqrt{N}}\|\phi^{\prime}\|_{\infty}>0$ if $N$ is big enough, and
where
where
$|R_{N,1}(i,j)|\leqslant\frac{|t|^{3}}{3N^{3/2}}\|\phi^{\prime}\|_{\infty}^{3}.$
Again by the Taylor–Lagrange theorem, the second term in (41) equals
$\exp\bigg{(}-\sum_{i=1}^{N}\left(V(y_{i})+\frac{t}{\sqrt{N}}V^{\prime}(y_{i})\phi(y_{i})+\frac{t^{2}}{2N}V^{\prime\prime}(y_{i})\phi(y_{i})^{2}+R_{N,2}(i)\right)\bigg{)}$
where
$R_{N,2}(i)=\frac{t^{3}}{6N^{3/2}}V^{(3)}\left(y_{i}+\frac{t\theta_{i}}{\sqrt{N}}\phi(y_{i})\right)\phi(y_{i})^{3}$
for some $\theta_{i}\in[0,1]$, thus for $N$ large enough
$|R_{N,2}(i)|\leqslant\frac{|t|^{3}}{6N^{3/2}}\|\phi\|_{\infty}^{3}\sup_{d(x,\text{supp
}\phi)\leqslant 1}|V^{(3)}(x)|.$
The last term reads
$\prod_{i=1}^{N}\left(1+\dfrac{t}{\sqrt{N}}\phi^{\prime}(y_{i})\right)=\exp\bigg{(}\sum_{i=1}^{N}\left(\frac{t}{\sqrt{N}}\phi^{\prime}(y_{i})-\frac{t^{2}}{2N}\phi^{\prime}(y_{i})^{2}+R_{N,3}(i)\right)\bigg{)},$
with
$|R_{N,3}(i)|\leqslant\frac{|t|^{3}}{3N^{3/2}}\|\phi^{\prime}\|_{\infty}^{3}$.
Dividing both sides of equation (41) by $Z^{V,P}_{N}$ we get
$\mathbb{E}_{N}^{V,P}\bigg{[}\exp\left\\{t\sqrt{N}\bigg{(}P\iint_{{\mathbb{R}}^{2}}\frac{\phi(x)-\phi(y)}{x-y}d\hat{\mu}_{N}(x)d\hat{\mu}_{N}(y)+\int_{\mathbb{R}}(\phi^{\prime}-V^{\prime}\phi)d\hat{\mu}_{N}\bigg{)}\right\\}\times\exp\left\\{K_{N}(t,\phi)\right\\}\\\
\times\exp\left\\{\frac{t^{2}}{2}\left(-P\iint_{{\mathbb{R}}^{2}}\left(\frac{\phi(x)-\phi(y)}{x-y}\right)^{2}d\hat{\mu}_{N}(x)d\hat{\mu}_{N}(y)-\int_{\mathbb{R}}(V^{\prime\prime}\phi^{2}+\phi^{\prime
2})d\hat{\mu}_{N}\right)\right\\}\bigg{]}=1,$
with $|K_{N}(t,\phi)|\leqslant\frac{c(t,\phi)}{\sqrt{N}}$, where
$c(t,\phi)\geqslant 0$ is independent of $N$. Indeed, $K_{N}(t,\phi)$ collects
the three error terms above,
$K_{N}(t,\phi)=\frac{2P}{N}\sum_{i<j}R_{N,1}(i,j)-\sum_{i=1}^{N}R_{N,2}(i)+\sum_{i=1}^{N}R_{N,3}(i)\,,$
and summing the previous bounds over the $\binom{N}{2}$ pairs and the $N$
indices yields
$|K_{N}(t,\phi)|\leqslant\frac{|t|^{3}}{\sqrt{N}}\Big{(}\frac{P+1}{3}\|\phi^{\prime}\|_{\infty}^{3}+\frac{1}{6}\|\phi\|_{\infty}^{3}\sup_{d(x,\text{supp
}\phi)\leqslant 1}|V^{(3)}(x)|\Big{)}\,.$
This bound shows that taking the limit $N\to\infty$ we can get rid of
$K_{N}$:
$\lim_{N\to\infty}\mathbb{E}_{N}^{V,P}\bigg{[}\exp\left\\{t\sqrt{N}\bigg{(}P\iint_{{\mathbb{R}}^{2}}\frac{\phi(x)-\phi(y)}{x-y}d\hat{\mu}_{N}(x)d\hat{\mu}_{N}(y)+\int_{\mathbb{R}}(\phi^{\prime}-V^{\prime}\phi)d\hat{\mu}_{N}\bigg{)}\right\\}\\\
\times\exp\left\\{\frac{t^{2}}{2}\left(-P\iint_{{\mathbb{R}}^{2}}\left(\frac{\phi(x)-\phi(y)}{x-y}\right)^{2}d\hat{\mu}_{N}(x)d\hat{\mu}_{N}(y)-\int_{\mathbb{R}}(V^{\prime\prime}\phi^{2}+\phi^{\prime
2})d\hat{\mu}_{N}\right)\right\\}\bigg{]}=1.$
Using Fubini’s theorem (the function $(x,y)\mapsto\frac{\phi(x)-\phi(y)}{x-y}$
being bounded and continuous on ${\mathbb{R}}^{2}$), the first line in the
expectation can be rewritten as $e^{t\sqrt{N}\Lambda_{N}}$ with
$\Lambda_{N}:=2P\iint_{{\mathbb{R}}^{2}}\frac{\phi(x)-\phi(y)}{x-y}d\mu_{P}(x)d(\hat{\mu}_{N}-\mu_{P})(y)+\int_{\mathbb{R}}(\phi^{\prime}-V^{\prime}\phi)d(\hat{\mu}_{N}-\mu_{P})+P\zeta_{N}(\phi)$
(42)
where we used equation (5) and $\zeta_{N}(\phi)$ is given by (35). Let
$F:\mathcal{P}({\mathbb{R}})\to{\mathbb{R}}$ be defined by
$F(\mu)=-P\iint_{{\mathbb{R}}^{2}}\left(\frac{\phi(x)-\phi(y)}{x-y}\right)^{2}d\mu(x)d\mu(y)-\int_{\mathbb{R}}(V^{\prime\prime}\phi^{2}+\phi^{\prime
2})d\mu\,.$ (43)
It is continuous for the topology of weak convergence since all the functions
in the integrals are bounded continuous. So far we have established that
$\lim_{N\to\infty}\mathbb{E}_{N}^{V,P}\left[e^{t\sqrt{N}\Lambda_{N}+\frac{t^{2}}{2}F(\hat{\mu}_{N})}\right]=1,$
with $\Lambda_{N}$ given by (42). We now replace in the latter equation the
term $F(\hat{\mu}_{N})$ by its limiting expression, $F(\mu_{P})$. Fix a metric
that is compatible with the weak convergence of probability measures on
${\mathbb{R}}$. For example,
$d_{\text{Lip}}(\mu,\nu)=\sup\left|\int fd\mu-\int fd\nu\right|\,,$ (44)
where the supremum runs over $f:{\mathbb{R}}\to{\mathbb{R}}$ bounded and
Lipschitz with $\|f\|_{\infty}\leqslant 1$ and Lipschitz constant
$|f|_{\text{Lip}}\leqslant 1$. By the large deviations principle for
$(\hat{\mu}_{N})_{N}$ under the probability (3) established by [GZ19, Theorem
1.1], for all $\delta>0$ the event
$\\{d_{\text{Lip}}(\hat{\mu}_{N},\mu_{P})>\delta\\}$ has (for $N$ big enough)
probability smaller than $e^{-Nc_{\delta}}$ where $c_{\delta}>0$. Hence,
$\lim_{N\to\infty}\mathbb{E}_{N}^{V,P}\left[e^{t\sqrt{N}\Lambda_{N}+\frac{t^{2}}{2}F(\hat{\mu}_{N})}\right]=\lim_{N\to\infty}\mathbb{E}_{N}^{V,P}\left[\mathbf{1}_{\\{d_{\text{Lip}}(\hat{\mu}_{N},\mu_{P})\leqslant\delta\\}}e^{t\sqrt{N}\Lambda_{N}+\frac{t^{2}}{2}F(\hat{\mu}_{N})}\right].$
By continuity of $F$ there is some $\varepsilon(\delta)$ which goes to $0$ as
$\delta\to 0$ such that, for $d_{\text{Lip}}(\nu,\mu_{P})\leqslant\delta$, we
have $|F(\nu)-F(\mu_{P})|\leqslant\varepsilon(\delta)$. Taking the
(decreasing) limit as $\delta$ goes to zero we deduce
$\lim_{N\to\infty}\mathbb{E}_{N}^{V,P}\left[e^{t\sqrt{N}\Lambda_{N}+\frac{t^{2}}{2}F(\hat{\mu}_{N})}\right]=\lim_{\delta\to
0}\lim_{N\to\infty}\mathbb{E}_{N}^{V,P}\left[\mathbf{1}_{\\{d_{\text{Lip}}(\hat{\mu}_{N},\mu_{P})\leqslant\delta\\}}e^{t\sqrt{N}\Lambda_{N}}\right]e^{\frac{t^{2}}{2}F(\mu_{P})}.$
But the same large deviations argument shows that
$\lim_{\delta\to
0}\lim_{N\to\infty}\mathbb{E}_{N}^{V,P}\left[\mathbf{1}_{\\{d_{\text{Lip}}(\hat{\mu}_{N},\mu_{P})\leqslant\delta\\}}e^{t\sqrt{N}\Lambda_{N}}\right]=\lim_{N\to\infty}\mathbb{E}_{N}^{V,P}\left[e^{t\sqrt{N}\Lambda_{N}}\right].$
Thus, we have shown that
$\lim_{N\to\infty}\mathbb{E}_{N}^{V,P}\left[e^{t\sqrt{N}\left(2P\iint_{{\mathbb{R}}^{2}}\frac{\phi(x)-\phi(y)}{x-y}d\mu_{P}(x)d(\hat{\mu}_{N}-\mu_{P})(y)+\int_{\mathbb{R}}(\phi^{\prime}-V^{\prime}\phi)d(\hat{\mu}_{N}-\mu_{P})+P\zeta_{N}(\phi)\right)}\right]=e^{-\frac{t^{2}}{2}F(\mu_{P})}\,,$
(45)
which establishes that
$\sqrt{N}\Lambda_{N}=\sqrt{N}\Big{(}\nu_{N}(\Xi\phi)+P\zeta_{N}(\phi)\Big{)}$
converges in law towards a centered Gaussian random variable with the
announced variance. We finally get rid of the remaining term $\zeta_{N}(\phi)$, using
Corollary 3.4: taking $\varepsilon=1/4$ for example, we see in particular that
$\sqrt{N}\zeta_{N}(\phi)$ converges in probability towards zero. The
conclusion follows from Slutsky’s lemma. ∎
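As a sanity check, formally setting $P=0$ (so that $\mathbb{P}^{V,0}_{N}$ is a
product measure and the $x_{i}$ are i.i.d. with density $\rho_{0}\propto
e^{-V}$), equation (39) reduces to the classical central limit theorem: in
this case $\Xi\phi=\phi^{\prime}-V^{\prime}\phi$ has mean zero under $\mu_{0}$
by integration by parts, and a second integration by parts gives
$q_{0}(\phi)=\int_{\mathbb{R}}\big{(}\phi^{\prime 2}+V^{\prime\prime}\phi^{2}\big{)}d\mu_{0}=\int_{\mathbb{R}}\big{(}\phi^{\prime}-V^{\prime}\phi\big{)}^{2}d\mu_{0}=\operatorname{Var}_{\mu_{0}}(\Xi\phi)\,.$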
We now extend the result of Proposition 5.1 to a more general set of
functions. With the notations of Proposition 5.1, we have
###### Theorem 5.2.
Let $\phi\in H^{2}({\mathbb{R}})\cap\mathcal{C}^{2}({\mathbb{R}})$ such that
$\phi^{\prime\prime}$ is bounded. Additionally, suppose that
$V^{(3)}\phi^{2}$, $V^{\prime\prime}\phi\phi^{\prime}$,
$V^{\prime\prime}\phi^{2}$ and $V^{\prime}\phi$ are bounded. Then, recalling
(40) we have the convergence in distribution as $N$ goes to infinity
$\sqrt{N}\nu_{N}(\Xi\phi)\to\mathcal{N}(0,q_{P}(\phi))\,.$
###### Proof.
For $N\geqslant 1$, let $E_{N}^{-},E_{N}^{+}$ be given by Corollary 4.2. Let
$\chi_{N}:{\mathbb{R}}\to[0,1]$ be $\mathcal{C}^{2}$ with compact support such
that
$\chi_{N}(x)=1\text{ for }x\in[E_{N}^{-}-1,E_{N}^{+}+1]\text{ and
}\chi_{N}(x)=0\text{ for }x\in[E_{N}^{-}-2,E_{N}^{+}+2]^{c}$
and such that, denoting $\phi_{N}=\phi\chi_{N}$,
$\sup_{N}\|\phi_{N}^{(k)}\|_{\infty}+\|\phi_{N}^{(k)}\|_{L^{2}({\mathbb{R}})}<+\infty$
for $k=0,1,2$ (we assumed $\phi\in H^{2}({\mathbb{R}})$, in particular $\phi$
and $\phi^{\prime}$ are bounded, and such a $\chi_{N}$ exists). The point of
cutting $\phi$ outside the set $[E_{N}^{-}-1,E_{N}^{+}+1]$ is that with high
probability, the empirical measure $\hat{\mu}_{N}$ doesn’t see the difference
between $\phi$ and $\phi_{N}$.
The support of $\phi_{N}$ is then contained in $[E_{N}^{-}-2,E_{N}^{+}+2]$,
and we now argue that the proof of Proposition 5.1 can be adapted so that
$\sqrt{N}\nu_{N}(\Xi\phi_{N})\to\mathcal{N}(0,q_{P}(\phi))\,.$ (46)
Similarly as in Proposition 5.1, we perform in $Z_{N}^{V,P}$ the change of
variables $x_{i}=y_{i}+\frac{t}{\sqrt{N}}\phi_{N}(y_{i})$, $1\leqslant
i\leqslant N$, which is the same as before, but with $\phi$ replaced by
$\phi_{N}$. First, with $I_{N}:=[E_{N}^{-}-2,E_{N}^{+}+2]$, the error term
$|K_{N}(t,\phi_{N})|\leqslant
2\frac{|t|^{3}}{3N^{1/2}}\|\phi_{N}^{\prime}\|_{\infty}^{3}+\frac{|t|^{3}}{6N^{1/2}}\|\phi_{N}\|_{\infty}^{3}\sup_{d(x,I_{N})\leqslant
1}|V^{(3)}(x)|$
of the proof of Proposition 5.1 is still going to zero, because of our choice
of $\chi_{N}$ and Assumption 1.2. As previously, we then have
$\lim_{N\to\infty}\mathbb{E}_{N}^{V,P}\left[e^{t\sqrt{N}\Lambda_{N}(\phi_{N})+\frac{t^{2}}{2}F_{N}(\hat{\mu}_{N})}\right]=1$
(47)
with
$\Lambda_{N}(\phi_{N}):=2P\iint_{{\mathbb{R}}^{2}}\frac{\phi_{N}(x)-\phi_{N}(y)}{x-y}d\mu_{P}(x)d(\hat{\mu}_{N}-\mu_{P})(y)+\int_{\mathbb{R}}(\phi_{N}^{\prime}-V^{\prime}\phi_{N})d(\hat{\mu}_{N}-\mu_{P})+P\zeta_{N}(\phi_{N})\,,$
where $\zeta_{N}$ is given by (35), and
$F_{N}(\hat{\mu}_{N})=-P\iint_{{\mathbb{R}}^{2}}\left(\frac{\phi_{N}(x)-\phi_{N}(y)}{x-y}\right)^{2}d\hat{\mu}_{N}(x)d\hat{\mu}_{N}(y)-\int_{\mathbb{R}}(V^{\prime\prime}\phi_{N}^{2}+\phi_{N}^{\prime
2})d\hat{\mu}_{N}\,.$
Taking again the distance $d_{\text{Lip}}$ defined in (44), one can check that
for $\mu$, $\nu$ probability measures over ${\mathbb{R}}$,
$\left|F_{N}(\mu)-F_{N}(\nu)\right|\leqslant C_{N}d_{\text{Lip}}(\mu,\nu)\,,$
where $C_{N}$ is a term depending on the norms
$\|\phi_{N}^{\prime}\|_{\infty},\|\phi_{N}^{\prime\prime}\|_{\infty}$,
$\|V^{\prime\prime}\phi_{N}^{2}\|_{\infty}$ and
$\|(V^{\prime\prime}\phi_{N}^{2})^{\prime}\|_{\infty}$. The choice of
$\chi_{N}$ and the fact that $\phi$ is chosen so that $V^{(3)}\phi^{2}$ and
$V^{\prime\prime}\phi\phi^{\prime}$ are bounded guarantee that
$\|(V^{\prime\prime}\phi_{N}^{2})^{\prime}\|_{\infty}$ is bounded in $N$. The
other norms are easily bounded by hypothesis. Therefore $C_{N}$ can be seen to
be uniformly bounded in $N$, and we find some $C\geqslant 0$ independent of
$N$ such that
$\left|F_{N}(\mu)-F_{N}(\nu)\right|\leqslant Cd_{\text{Lip}}(\mu,\nu)\,.$
As in Proposition 5.1, we use the large deviations principle for
$(\hat{\mu}_{N})$ to deduce
$\lim_{N\to+\infty}\mathbb{E}^{V,P}_{N}\left[e^{t\sqrt{N}\Lambda_{N}(\phi_{N})+\frac{t^{2}}{2}F_{N}(\hat{\mu}_{N})}\right]=\lim_{N\to+\infty}\mathbb{E}^{V,P}_{N}\left[e^{t\sqrt{N}\Lambda_{N}(\phi_{N})}\right]e^{\frac{t^{2}}{2}F_{N}(\mu_{P})}\,.$
By dominated convergence, $F_{N}(\mu_{P})$ converges to $F(\mu_{P})$, the
function $F$ being given by (43). This shows the convergence as $N$ goes to
infinity
$\lim_{N\to+\infty}\mathbb{E}^{V,P}_{N}\left[e^{t\sqrt{N}\Lambda_{N}(\phi_{N})}\right]=e^{-\frac{t^{2}}{2}F(\mu_{P})}\,,$
and $\sqrt{N}\Big{(}\nu_{N}(\Xi\phi_{N})+P\zeta_{N}(\phi_{N})\Big{)}$
converges towards a centered Gaussian variable with variance
$-F(\mu_{P})=q_{P}(\phi)$. Because
$\sup_{N}\|\phi_{N}\|_{H^{2}({\mathbb{R}})}$ is finite, we can apply again
Corollary 3.4 to deduce the convergence in law (46). We now have the
ingredients to conclude, by showing that the characteristic function
$\mathbb{E}^{V,P}_{N}\left[e^{\mathrm{i}t\sqrt{N}\nu_{N}(\Xi\phi)}\right]=\mathbb{E}^{V,P}_{N}\left[e^{\mathrm{i}t\sqrt{N}\int\Xi\phi
d\hat{\mu}_{N}}\right]e^{-\mathrm{i}t\sqrt{N}\int\Xi\phi d\mu_{P}}$
converges to the characteristic function of a Gaussian variable with the
appropriate variance. By Corollary 4.2, the probability under $\mathbb{P}^{V,P}_{N}$ of
the event
$\mathcal{E}_{N}=\bigg{\\{}x_{1},\ldots,x_{N}\in[E_{N}^{-}-1,E_{N}^{+}+1]\bigg{\\}}$
converges to $1$. Along with the convergence (46), we deduce
$\displaystyle
e^{-\frac{t^{2}}{2}q_{P}(\phi)}=\lim_{N}\mathbb{E}_{N}^{V,P}\left[e^{\mathrm{i}t\sqrt{N}\int\Xi\phi_{N}d\hat{\mu}_{N}}\right]e^{-\mathrm{i}t\sqrt{N}\int\Xi\phi_{N}d\mu_{P}}=\lim_{N}\mathbb{E}_{N}^{V,P}\left[\mathbf{1}_{\mathcal{E}_{N}}e^{\mathrm{i}t\sqrt{N}\int\Xi\phi_{N}d\hat{\mu}_{N}}\right]e^{-\mathrm{i}t\sqrt{N}\int\Xi\phi_{N}d\mu_{P}}\,,$
where we used
$\left|\mathbb{E}^{V,P}_{N}\left[\mathbf{1}_{\mathcal{E}_{N}^{c}}e^{\mathrm{i}t\sqrt{N}\int\Xi\phi_{N}d\hat{\mu}_{N}}\right]e^{-\mathrm{i}t\sqrt{N}\int\Xi\phi_{N}d\mu_{P}}\right|\leqslant\mathbb{P}^{V,P}_{N}(\mathcal{E}_{N}^{c})\xrightarrow[N\to+\infty]{}0\,.$
Using that $\phi_{N}=\phi$ on $J_{N}={[E_{N}^{-}-1,E_{N}^{+}+1]}$,
$\displaystyle\int\Xi\phi_{N}d\mu_{P}$
$\displaystyle=2P\iint\frac{\phi_{N}(x)-\phi_{N}(y)}{x-y}d\mu_{P}(x)d\mu_{P}(y)+\int(\phi_{N}^{\prime}-V^{\prime}\phi_{N})d\mu_{P}$
$\displaystyle=2P\iint_{J_{N}^{2}}\frac{\phi(x)-\phi(y)}{x-y}d\mu_{P}(x)d\mu_{P}(y)+2P\iint_{(J_{N}^{2})^{c}}\frac{\phi_{N}(x)-\phi_{N}(y)}{x-y}d\mu_{P}(x)d\mu_{P}(y)$
$\displaystyle+\int_{J_{N}}(\phi^{\prime}-V^{\prime}\phi)d\mu_{P}+\int_{J_{N}^{c}}(\phi\chi_{N}^{\prime}+\phi^{\prime}\chi_{N}-V^{\prime}\phi\chi_{N})d\mu_{P}\,.$
By boundedness of $(\|\phi_{N}^{\prime}\|_{\infty})_{N}$, the second term is
bounded by
$C_{P}\iint_{(J_{N}^{2})^{c}}d\mu_{P}d\mu_{P}\leqslant
2C_{P}\mu_{P}(J_{N}^{c})=o(N^{-1/2})\,,$
where we used the union bound and Lemma 4.4. By the same estimate and the fact
that $\chi_{N}$ can be chosen so that $(\|\chi_{N}^{\prime}\|_{\infty})_{N}$
is bounded, and because $\phi^{\prime}$, $V^{\prime}\phi$ are bounded, the
last term is also $o(N^{-1/2})$. By the previous arguments, we also conclude
that
$2P\iint_{(J_{N}^{2})^{c}}\frac{\phi(x)-\phi(y)}{x-y}d\mu_{P}(x)d\mu_{P}(y)+\int_{J_{N}^{c}}(\phi^{\prime}-V^{\prime}\phi)d\mu_{P}=o(N^{-1/2})\,,$
thus
$\displaystyle\int\Xi\phi_{N}d\mu_{P}=\int\Xi\phi d\mu_{P}+o(N^{-1/2})\,,$
and so far we have
$e^{-\frac{t^{2}}{2}q_{P}(\phi)}=\lim_{N}\mathbb{E}_{N}^{V,P}\left[\mathbf{1}_{\mathcal{E}_{N}}e^{\mathrm{i}t\sqrt{N}\int\Xi\phi_{N}d\hat{\mu}_{N}}\right]e^{-\mathrm{i}t\sqrt{N}\int\Xi\phi
d\mu_{P}}\,.$
Finally, on $\mathcal{E}_{N}$, using $\phi_{N}=\phi$ and that $\hat{\mu}_{N}$
is supported in $J_{N}$,
$\displaystyle\int\Xi\phi_{N}d\hat{\mu}_{N}$
$\displaystyle=2P\iint_{J_{N}^{2}}\frac{\phi(x)-\phi(y)}{x-y}d\mu_{P}(x)d\hat{\mu}_{N}(y)+2P\iint_{(J_{N}^{2})^{c}}\frac{\phi_{N}(x)-\phi_{N}(y)}{x-y}d\mu_{P}(x)d\hat{\mu}_{N}(y)+\int_{J_{N}}(\phi^{\prime}-V^{\prime}\phi)d\hat{\mu}_{N}$
$\displaystyle=2P\iint\frac{\phi(x)-\phi(y)}{x-y}d\mu_{P}(x)d\hat{\mu}_{N}(y)+\int(\phi^{\prime}-V^{\prime}\phi)d\hat{\mu}_{N}+o(N^{-1/2})\,,$
where in the second line we used, by Lemma 4.4 again, that
$\iint_{(J_{N}^{2})^{c}}\frac{\phi_{N}(x)-\phi_{N}(y)}{x-y}d\mu_{P}(x)d\hat{\mu}_{N}(y)=\iint_{J_{N}\times
J_{N}^{c}}\frac{\phi_{N}(x)-\phi_{N}(y)}{x-y}d\mu_{P}(x)d\hat{\mu}_{N}(y)=o(N^{-1/2})\,,$
and the same estimate holds for $\phi_{N}$ replaced by $\phi$. Therefore,
$e^{-\frac{t^{2}}{2}q_{P}(\phi)}=\lim_{N}\mathbb{E}_{N}^{V,P}\left[\mathbf{1}_{\mathcal{E}_{N}}e^{\mathrm{i}t\sqrt{N}\int\Xi\phi
d\hat{\mu}_{N}}\right]e^{-\mathrm{i}t\sqrt{N}\int\Xi\phi d\mu_{P}}\,.$
This establishes that
$\lim_{N}\mathbb{E}^{V,P}_{N}\left[e^{\mathrm{i}t\sqrt{N}\nu_{N}(\Xi\phi)}\right]=e^{-\frac{t^{2}}{2}q_{P}(\phi)}\,,$
which concludes the proof. ∎
###### Remark 5.3.
Taking $\phi$ such that $\phi^{\prime}$ satisfies the conditions of Theorem
5.2, we then have
$\mathbb{E}_{N}^{V,P}\left[e^{t\sqrt{N}\nu_{N}(\mathcal{L}\phi)}\right]\underset{N\rightarrow\infty}{\longrightarrow}\exp\left\\{\frac{t^{2}}{2}q_{P}(\phi^{\prime})\right\\},$
(48)
where the operator $\mathcal{L}$ is defined as
$\mathcal{L}\phi:=\Xi\phi^{\prime}$, i.e.
$\mathcal{L}\phi=2P\int_{\mathbb{R}}\frac{\phi^{\prime}(x)-\phi^{\prime}(y)}{x-y}d\mu_{P}(y)+\phi^{\prime\prime}(x)-V^{\prime}(x)\phi^{\prime}(x)\,.$
(49)
Note that
$q_{P}^{V}(\phi^{\prime})=\big{(}\sigma_{P}^{V}\big{)}^{2}(\mathcal{L}\phi)$
where $\sigma_{P}^{V}$ is defined in (12). By Theorem 7.1, every function
in $\mathcal{L}^{-1}(\mathcal{T})$, where
$\mathcal{T}:=\left\\{f\in\mathcal{C}^{1}({\mathbb{R}}),\exists\varepsilon>0,\,f(x)=\underset{|x|\rightarrow\infty}{O}\left(x^{-\frac{1}{2}-\varepsilon}\right),\;f^{\prime}(x)=\underset{|x|\rightarrow\infty}{O}\left(x^{-\frac{1}{2}-\varepsilon}\right),\int_{\mathbb{R}}f\rho_{P}=0\right\\}$
satisfies (48). This proves Theorem 1.3.
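As an illustration of (49), disregarding the decay assumptions (which a
polynomial does not satisfy), take $\phi(x)=x^{2}/2$, so that
$\phi^{\prime}(x)=x$:
$\mathcal{L}\phi(x)=2P\int_{\mathbb{R}}\frac{x-y}{x-y}\,d\mu_{P}(y)+1-xV^{\prime}(x)=2P+1-xV^{\prime}(x)\,.$
Since $\nu_{N}$ is a difference of probability measures and hence gives zero
mass to constants, this formally relates the fluctuations of
$\nu_{N}(xV^{\prime}(x))$ to a quadratic test function.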
We now prove a more compact formula for the variance, analogous to the one
appearing in [HL21].
###### Lemma 5.4.
The following equality holds for all $\phi\in\mathcal{T}$:
$\braket{\mathcal{L}^{-1}\phi,\phi}_{\mathsf{H}}=\big{(}\sigma_{P}^{V}\big{)}^{2}(\phi):=\int_{\mathbb{R}}\Bigg{(}\big{(}\mathcal{L}^{-1}\phi\big{)}^{\prime\prime}(x)^{2}+V^{\prime\prime}(x)\big{(}\mathcal{L}^{-1}\phi\big{)}^{\prime}(x)^{2}\Bigg{)}d\mu_{P}(x)\\\
+P\iint_{{\mathbb{R}}^{2}}\Bigg{(}\frac{\big{(}\mathcal{L}^{-1}\phi\big{)}^{\prime}(x)-\big{(}\mathcal{L}^{-1}\phi\big{)}^{\prime}(y)}{x-y}\Bigg{)}^{2}d\mu_{P}(x)d\mu_{P}(y)$
(50)
###### Proof.
It suffices to show that
$\big{(}\sigma_{P}^{V}\big{)}^{2}(\mathcal{L}\phi)=\braket{\mathcal{L}\phi,\phi}_{\mathsf{H}}$
for all $\phi$ such that $\phi^{\prime}\in H^{2}({\mathbb{R}})$.
$\braket{\mathcal{L}\phi,\phi}_{\mathsf{H}}=-\int_{\mathbb{R}}\Big{(}\dfrac{(\phi^{\prime}\rho_{P})^{\prime}}{\rho_{P}}\Big{)}^{\prime}\phi^{\prime}\rho_{P}-2P\int_{\mathbb{R}}\mathcal{H}[\phi^{\prime}\rho_{P}]^{\prime}\phi^{\prime}\rho_{P}$
Integrating by parts in the first integral leads to
$-\int_{\mathbb{R}}\Big{(}\dfrac{(\phi^{\prime}\rho_{P})^{\prime}}{\rho_{P}}\Big{)}^{\prime}\phi^{\prime}\rho_{P}=\int_{\mathbb{R}}\Big{(}\dfrac{(\phi^{\prime}\rho_{P})^{\prime}}{\rho_{P}}\Big{)}^{2}\rho_{P}=\int_{\mathbb{R}}\phi^{\prime\prime
2}\rho_{P}+2\phi^{\prime}\phi^{\prime\prime}\rho_{P}^{\prime}+\phi^{\prime
2}\big{(}\dfrac{\rho_{P}^{\prime}}{\rho_{P}}\big{)}^{2}\rho_{P}\\\
=\int_{\mathbb{R}}\phi^{\prime\prime 2}\rho_{P}-\phi^{\prime
2}\dfrac{\rho_{P}^{\prime\prime}}{\rho_{P}}\rho_{P}+\phi^{\prime
2}\big{(}\dfrac{\rho_{P}^{\prime}}{\rho_{P}}\big{)}^{2}\rho_{P}$
Since
$\dfrac{\rho_{P}^{\prime\prime}}{\rho_{P}}=\Big{(}-V^{\prime\prime}-2P\mathcal{H}[\rho_{P}]^{\prime}+V^{\prime
2}+4P^{2}\mathcal{H}[\rho_{P}]^{2}+4PV^{\prime}\mathcal{H}[\rho_{P}]\Big{)}=-V^{\prime\prime}-2P\mathcal{H}[\rho_{P}]^{\prime}+\Big{(}\dfrac{\rho_{P}^{\prime}}{\rho_{P}}\Big{)}^{2}$
we obtain
$\braket{\mathcal{L}\phi,\phi}_{\mathsf{H}}=\int_{\mathbb{R}}\phi^{\prime\prime
2}\rho_{P}+V^{\prime\prime}\phi^{\prime
2}\rho_{P}-2P\int_{\mathbb{R}}\mathcal{H}[\phi^{\prime}\rho_{P}]^{\prime}\phi^{\prime}\rho_{P}+2P\int_{\mathbb{R}}\mathcal{H}[\rho_{P}]^{\prime}\phi^{\prime
2}\rho_{P}$
To conclude, we just have to show that
$\iint_{{\mathbb{R}}^{2}}\Big{(}\dfrac{\phi^{\prime}(x)-\phi^{\prime}(y)}{x-y}\Big{)}^{2}d\mu_{P}(x)d\mu_{P}(y)=2\int_{\mathbb{R}}\Big{(}\mathcal{H}[\rho_{P}]^{\prime}\phi^{\prime
2}-\mathcal{H}[\phi^{\prime}\rho_{P}]^{\prime}\phi^{\prime}\Big{)}\rho_{P}\,.$
First,
$\int_{\mathbb{R}}\mathcal{H}[\phi^{\prime}\rho_{P}]^{\prime}\phi^{\prime}\rho_{P}=\int_{\mathbb{R}}\int_{\mathbb{R}}\dfrac{\phi^{\prime}(x)\phi^{\prime}(y)}{(y-x)^{2}}d\mu_{P}(x)d\mu_{P}(y)$
Second,
$\int_{\mathbb{R}}\mathcal{H}[\rho_{P}]^{\prime}\phi^{\prime
2}\rho_{P}=\dfrac{1}{2}\iint_{{\mathbb{R}}^{2}}\dfrac{\phi^{\prime
2}(x)+\phi^{\prime 2}(y)}{(y-x)^{2}}d\mu_{P}(x)d\mu_{P}(y)$
which allows us to conclude that
$\big{(}\sigma_{P}^{V}\big{)}^{2}(\mathcal{L}\phi)=\braket{\mathcal{L}\phi,\phi}_{\mathsf{H}}$.
∎
## 6 Inversion of $\mathcal{L}$
This section is dedicated to the definition of the operator $\mathcal{L}$
given by (8), together with its domain, and then to its inversion. We rely
heavily on the results of Appendix A: the diagonalization of the operator
$\mathcal{A}$ via the theory of Schrödinger operators.
Let $P>0$ be fixed. We introduce the operators $\mathcal{A}$ and
$\mathcal{W}$, acting on sufficiently smooth functions of $L^{2}(\rho_{P})$,
by
$\mathcal{A}\phi=-\dfrac{\Big{(}\phi^{\prime}\rho_{P}\Big{)}^{\prime}}{\rho_{P}}=-\left(\phi^{\prime\prime}+\frac{\rho_{P}^{\prime}}{\rho_{P}}\phi^{\prime}\right)\quad\text{and}\quad\mathcal{W}\phi=-\mathcal{H}\big{[}\phi^{\prime}\rho_{P}\big{]}\,.$
(51)
One can show that the operator $\mathcal{A}$ is characterized by the
identity:
$\braket{\phi,\psi}_{\mathsf{H}}=\int_{\mathbb{R}}\phi^{\prime}\psi^{\prime}d\mu_{P}=\int_{\mathbb{R}}\phi\mathcal{A}\psi
d\mu_{P}=\braket{\phi,\mathcal{A}\psi}_{L^{2}(\mu_{P})}$
We first show the following decomposition of $\mathcal{L}$.
###### Lemma 6.1.
For $\phi$ twice differentiable we have the following pointwise identity
$-\mathcal{L}\phi=\mathcal{A}\phi+2P\mathcal{W}\phi\,.$ (52)
###### Proof.
We write for $x\in{\mathbb{R}}$
$2P\int_{\mathbb{R}}\frac{\phi^{\prime}(x)-\phi^{\prime}(y)}{x-y}\rho_{P}(y)dy=-2P\phi^{\prime}(x)\mathcal{H}[\rho_{P}](x)+2P\mathcal{H}[\phi^{\prime}\rho_{P}](x)\,.$
(53)
Then,
$\mathcal{L}\phi=\phi^{\prime\prime}-V^{\prime}\phi^{\prime}-2P\phi^{\prime}\mathcal{H}[\rho_{P}]+2P\mathcal{H}\big{[}\phi^{\prime}\rho_{P}\big{]}\,.$
By (19) we have
$-V^{\prime}-2P\mathcal{H}[\rho_{P}]=\frac{\rho_{P}^{\prime}}{\rho_{P}}$,
which concludes the proof. ∎
In order to state the next theorem, whose proof we detail in the Appendix, we
introduce the following Sobolev-type spaces. Let
$H^{1}_{V^{\prime}}({\mathbb{R}}):=\Big{\\{}u\in
H^{1}({\mathbb{R}}),\,uV^{\prime}\in L^{2}({\mathbb{R}})\Big{\\}}\,.$
We now define
$\mathcal{D}(\mathcal{S})=\Big{\\{}u\in
H^{1}_{V^{\prime}}({\mathbb{R}}),-u^{\prime\prime}+(w_{P}+\alpha)u\in
L^{2}({\mathbb{R}})\Big{\\}}$
(where the potential $w_{P}$ and the constant $\alpha\geqslant 0$ are
introduced in Appendix A), and
$\mathcal{D}_{L^{2}({\mathbb{R}})}(\mathcal{A}):=\rho_{P}^{-1/2}\mathcal{D}(\mathcal{S})$
and its homogeneous counterpart
$\mathcal{D}_{L^{2}({\mathbb{R}}),0}(\mathcal{A}):=\Big{\\{}u\in\mathcal{D}_{L^{2}({\mathbb{R}})}(\mathcal{A}),\,\int_{\mathbb{R}}u\rho_{P}dx=0\Big{\\}}\,.$
Finally, we let $L^{2}_{0}(\rho_{P})$ be the subset of $L^{2}(\rho_{P})$ of
zero mean functions with respect to $\rho_{P}$.
We detail the proof of the following theorem, which is based on the theory of
Schrödinger operators, in Appendix A.
###### Theorem 6.2 (Diagonalization of $\mathcal{A}$ in
$L^{2}_{0}(\rho_{P})$).
There exists a sequence $0<\lambda_{1}<\lambda_{2}<\ldots$ going to infinity,
and a complete orthonormal set $(\phi_{n})_{n\geqslant 1}$ of
$L^{2}_{0}(\rho_{P})$ of associated eigenfunctions for $\mathcal{A}$, meaning
that
* •
$\operatorname{span}\\{\phi_{n},\,n\geqslant 1\\}$ is dense in
$L^{2}_{0}(\rho_{P})$,
* •
For all $i,j$, $\Braket{\phi_{i},\phi_{j}}_{L^{2}(\rho_{P})}=\delta_{i,j}$,
* •
For all $n\geqslant 1$, $\mathcal{A}\phi_{n}=\lambda_{n}\phi_{n}$.
Furthermore, each $\phi_{n}$ is in
$\mathcal{D}_{L^{2}({\mathbb{R}}),0}(\mathcal{A})$,
$\mathcal{A}:\mathcal{D}_{L^{2}({\mathbb{R}}),0}(\mathcal{A})\to
L^{2}_{0}(\rho_{P})$ is bijective, and we have, for $u\in
L^{2}_{0}(\rho_{P})$, the representation
$\mathcal{A}^{-1}u=\sum_{n\geqslant
1}\lambda_{n}^{-1}\Braket{u,\phi_{n}}_{L^{2}(\rho_{P})}\phi_{n}\,.$
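A guiding example, formally corresponding to $P=0$ with $V(x)=x^{2}/2$ (so
that $\rho_{0}\propto e^{-V}$ is the standard Gaussian density), is the
Ornstein–Uhlenbeck operator: (51) then reads
$\mathcal{A}\phi=-\phi^{\prime\prime}+x\phi^{\prime}$, whose eigenfunctions in
$L^{2}_{0}(\rho_{0})$ are the (probabilists’) Hermite polynomials,
$\mathcal{A}H_{n}=nH_{n}\,,\qquad\phi_{n}=\frac{H_{n}}{\sqrt{n!}}\,,\qquad\lambda_{n}=n\,,\quad n\geqslant 1\,.$
Although the present setting has $P>0$, this is the picture that Theorem 6.2
generalizes.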
We see the operators $\mathcal{A}$ and $\mathcal{W}$ as unbounded operators on
the space
$\mathsf{H}=\Big{\\{}u\in
H^{1}(\rho_{P})\,|\,\int_{\mathbb{R}}u\rho_{P}dx=0\Big{\\}}$
endowed with the inner product
$\Braket{u,v}_{\mathsf{H}}=\Braket{u^{\prime},v^{\prime}}_{L^{2}(\rho_{P})}$.
This defines an inner product on $\mathsf{H}$ and makes it a complete space:
it can be seen that $H^{1}(\rho_{P})$ is the completion of
$\mathcal{C}_{c}^{\infty}({\mathbb{R}})$ with respect to the inner product
$\Braket{u,v}_{L^{2}(\rho_{P})}+\Braket{u^{\prime},v^{\prime}}_{L^{2}(\rho_{P})}$.
The space $\mathsf{H}$ is then the kernel of the bounded (with respect to
$\|\cdot\|_{\mathsf{H}}$) linear form
$\langle\widetilde{1},\cdot\rangle_{L^{2}(\rho_{P})}$ on $H^{1}(\rho_{P})$,
and both inner products are equivalent on $\mathsf{H}$ because of the Poincaré
inequality, Proposition 2.6. The use of $\mathsf{H}$ is motivated by the fact
that both $\mathcal{A}$ and $\mathcal{W}$ are self-adjoint and positive on this
space, as we show in Lemma 6.4.
In the next proposition, we deduce from Theorem 6.2 the diagonalization of
$\mathcal{A}$ in $\mathsf{H}$.
###### Proposition 6.3 (Diagonalization of $\mathcal{A}$ in $\mathsf{H}$).
With the same eigenvalues $0<\lambda_{1}<\lambda_{2}<\ldots$ as in Theorem
6.2, there exists a complete orthonormal set $(\psi_{n})_{n\geqslant 1}$ of
$\mathsf{H}$ formed by eigenfunctions of $\mathcal{A}$.
###### Proof.
With $(\phi_{n})_{n\geqslant 1}$ of Theorem 6.2,
$\displaystyle\delta_{i,j}=\langle\phi_{i},\phi_{j}\rangle_{L^{2}(\rho_{P})}$
$\displaystyle=\frac{1}{\lambda_{j}}\langle\phi_{i},\mathcal{A}\phi_{j}\rangle_{L^{2}(\rho_{P})}$
$\displaystyle=\frac{1}{\lambda_{j}}\langle\phi_{i}^{\prime},\phi_{j}^{\prime}\rangle_{L^{2}(\rho_{P})}$
$\displaystyle=\frac{1}{\lambda_{j}}\langle\phi_{i},\phi_{j}\rangle_{\mathsf{H}}.$
With $\psi_{n}=\frac{1}{\sqrt{\lambda_{n}}}\phi_{n}$, $(\psi_{n})_{n\geqslant
1}$ is then orthonormal with respect to the inner product of $\mathsf{H}$. To
show that $\operatorname{span}\\{\psi_{n},n\geqslant 1\\}$ is dense in
$\mathsf{H}$, let $u\in\mathsf{H}$ be such that for all $j\geqslant 1$,
$\langle u,\phi_{j}\rangle_{\mathsf{H}}=0$. In the last series of equalities,
replace $\phi_{i}$ by $u$: we see that $u$ is orthogonal to each $\phi_{j}$ in
$L^{2}(\rho_{P})$, thus $u$ is a constant as shown in the proof of Lemma A.10,
and because $u\in\mathsf{H}$ it has zero mean against $\rho_{P}$. This shows
that $u=0$. ∎
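In particular, every $u\in\mathsf{H}$ expands as $u=\sum_{n\geqslant
1}\braket{u,\psi_{n}}_{\mathsf{H}}\psi_{n}$, and on this basis
$\mathcal{A}^{-1}$ acts diagonally:
$\mathcal{A}^{-1}u=\sum_{n\geqslant 1}\lambda_{n}^{-1}\braket{u,\psi_{n}}_{\mathsf{H}}\psi_{n}\,,$
the $\mathsf{H}$-counterpart of the representation of Theorem 6.2.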
We set for what follows
$\mathcal{D}(\mathcal{A}):=\left\\{u\in\mathcal{D}_{L^{2}({\mathbb{R}}),0}(\mathcal{A})\,|\,\mathcal{A}u\in\mathsf{H}\right\\}$
and
$\mathcal{D}(\mathcal{W}):=\left\\{u\in\mathsf{H}\,|\,\mathcal{W}u\in\mathsf{H}\right\\}$.
###### Lemma 6.4.
The following properties hold:
* •
The operator $\mathcal{W}:\mathcal{D}(\mathcal{W})\to\mathsf{H}$ is positive:
for all $\phi\in\mathcal{D}(\mathcal{W})$,
$\langle\mathcal{W}\phi,\phi\rangle_{\mathsf{H}}=\dfrac{1}{2}\|\phi^{\prime}\rho_{P}\|_{1/2}^{2}\geqslant
0\,,$
with equality only for $\phi=0$, where the $1/2$-norm of $u$ is given by
$\|u\|_{1/2}^{2}=\int_{\mathbb{R}}|x|\left|\mathcal{F}[u](x)\right|^{2}dx\,.$
* •
Both $\mathcal{A}$ and $\mathcal{W}$ are self-adjoint for the inner product of
$\mathsf{H}$.
###### Proof.
To prove the first point, let $\phi\in\mathcal{D}(\mathcal{W})$. Then,
$2\pi\Braket{\mathcal{W}\phi,\phi}_{\mathsf{H}}=-2\pi\Braket{\mathcal{H}[\phi^{\prime}\rho_{P}]^{\prime},\phi^{\prime}\rho_{P}}_{L^{2}(dx)}=-\Braket{ix\mathcal{F}\big{[}\mathcal{H}[\phi^{\prime}\rho_{P}]\big{]},\mathcal{F}[\phi^{\prime}\rho_{P}]}_{L^{2}(dx)}\\\
=\pi\Braket{|x|\,\mathcal{F}[\phi^{\prime}\rho_{P}],\mathcal{F}[\phi^{\prime}\rho_{P}]}_{L^{2}(dx)}=\pi\|\phi^{\prime}\rho_{P}\|_{1/2}^{2}\geqslant
0\,,$
and because $\phi$ is in $\mathsf{H}$, this last quantity is zero if and only
if $\phi$ vanishes.
For the second point, let $u,v\in\mathcal{D}(\mathcal{W})$. Using Plancherel’s
isometry and i) of Lemma 2.1,
$\displaystyle\Braket{\mathcal{W}u,v}_{\mathsf{H}}=\Braket{(\mathcal{W}u)^{\prime},v^{\prime}\rho_{P}}_{L^{2}(dx)}=\frac{1}{2}\Braket{|x|\,\mathcal{F}[u^{\prime}\rho_{P}],\mathcal{F}[v^{\prime}\rho_{P}]}_{L^{2}(dx)}\,,$
and this last expression is symmetric in $(u,v)$. The proof of the self-
adjointness of $\mathcal{A}$ follows from integration by parts. ∎
###### Definition 6.5 (Quadratic form associated to $-\mathcal{L}$).
We define for all $u,v\in\mathsf{H}\cap\mathcal{C}_{c}^{\infty}({\mathbb{R}})$
the quadratic form associated to $-\mathcal{L}$ by
$q_{-\mathcal{L}}(u,v)=\braket{\mathcal{A}u,\mathcal{A}v}_{L^{2}(\rho_{P})}+2P\braket{\mathcal{F}[u^{\prime}\rho_{P}],\mathcal{F}[v^{\prime}\rho_{P}]}_{L^{2}(|x|dx)}$
Note that for all
$u,v\in\mathsf{H}\cap\mathcal{C}_{c}^{\infty}({\mathbb{R}})$,
$q_{-\mathcal{L}}(u,v)=\braket{-\mathcal{L}u,v}_{\mathsf{H}}$ and that
whenever $u\in\mathcal{D}(\mathcal{A})\cap\mathcal{D}(\mathcal{W})$,
$q_{-\mathcal{L}}(u,u)=\braket{\mathcal{A}u,u}_{\mathsf{H}}+2P\braket{\mathcal{W}u,u}_{\mathsf{H}}\geqslant\lambda_{1}(\mathcal{A})\|u\|_{\mathsf{H}}^{2}$
(54)
by Proposition 6.3 and Lemma 6.4 (the lower bound follows by expanding $u$ on
the orthonormal basis $(\psi_{n})_{n}$ of Proposition 6.3). We then extend
$q_{-\mathcal{L}}$ to its form domain $Q(\mathcal{L})$, which is equal to
$\Big{\\{}u\in\mathsf{H},\mathcal{A}u\in
L^{2}(\rho_{P}),\,\mathcal{F}[u^{\prime}\rho_{P}]\in
L^{2}(|x|dx)\Big{\\}}=\mathcal{D}_{L^{2}({\mathbb{R}}),0}(\mathcal{A})$. The
equality comes from the fact that
$\mathcal{A}^{-1}\Big{(}L^{2}_{0}(\rho_{P})\Big{)}=\mathcal{D}_{L^{2}({\mathbb{R}}),0}(\mathcal{A})$,
that $\mathcal{D}_{L^{2}({\mathbb{R}}),0}(\mathcal{A})\subset\mathsf{H}$, and
that $\mathcal{F}[u^{\prime}\rho_{P}]\in L^{2}(x^{2}dx)$ whenever
$u\in\mathcal{D}_{L^{2}({\mathbb{R}}),0}(\mathcal{A})$; indeed,
$u^{\prime}\rho_{P}\in H^{1}({\mathbb{R}})$ because
$(u^{\prime}\rho_{P})^{\prime}=-\rho_{P}\mathcal{A}u\in L^{2}({\mathbb{R}})$.
We now define $\mathcal{D}(\mathcal{L})$, the domain of definition of
$-\mathcal{L}$, by:
$\mathcal{D}(\mathcal{L}):=\Big{\\{}u\in Q(\mathcal{L}),v\mapsto
q_{-\mathcal{L}}(u,v)\text{ can be extended to a continuous linear form on
}\mathsf{H}\Big{\\}}$
###### Proposition 6.6.
$\mathcal{D}(\mathcal{L})=\mathcal{D}(\mathcal{A})\cap\mathcal{D}(\mathcal{W})$.
###### Proof.
Let $u\in\mathcal{D}(\mathcal{L})$. By Riesz’s theorem, there exists
$f_{u}\in\mathsf{H}$ such that
$q_{-\mathcal{L}}(u,v)=\braket{f_{u},v}_{\mathsf{H}}$ for all
$v\in\mathsf{H}$; we set $-\mathcal{L}u:=f_{u}$, which is called the Friedrichs
extension of $-\mathcal{L}$. Then for all
$v\in\mathsf{H}\cap\mathcal{C}_{c}^{\infty}({\mathbb{R}})$, integrating by
parts we get:
$\braket{-\mathcal{L}u,v}_{\mathsf{H}}=q_{-\mathcal{L}}(u,v)=\braket{u,\mathcal{A}v}_{\mathsf{H}}+2P\braket{u,\mathcal{W}v}_{\mathsf{H}},$
hence we deduce the distributional identity
$-\mathcal{L}u=\mathcal{A}u+2P\mathcal{W}u$. Since
$u\in\mathcal{D}_{L^{2}({\mathbb{R}}),0}(\mathcal{A})$, we have $\mathcal{W}u\in
H^{1}(\rho_{P})$, which implies that $\mathcal{A}u\in\mathsf{H}$ and then that
$\mathcal{W}u\in\mathsf{H}$. The converse is trivially true. ∎
We are now ready to state the main theorem of this section, namely the
inversion of $\mathcal{L}$ on $\mathcal{D}(\mathcal{L})$.
###### Theorem 6.7 (Inversion of $\mathcal{L}$).
$-\mathcal{L}:\mathcal{D}(\mathcal{L})\longrightarrow\mathsf{H}$ is bijective.
Furthermore, $(-\mathcal{L})^{-1}$ is continuous from
$(\mathsf{H},\|.\|_{\mathsf{H}})$ to
$(\mathcal{D}(\mathcal{L}),q_{-\mathcal{L}})$.
###### Proof.
Let $f\in\mathsf{H}$. Since $\braket{f,.}_{\mathsf{H}}$ is a linear form on
$Q(\mathcal{L})=\mathcal{D}_{L^{2}({\mathbb{R}}),0}(\mathcal{A})$ which is, by
(54), continuous with respect to $q_{-\mathcal{L}}$, one can apply Riesz’s
theorem: there exists a unique
$u_{f}\in\mathcal{D}_{L^{2}({\mathbb{R}}),0}(\mathcal{A})$ such that for all
$v\in\mathsf{H}$, $\braket{f,v}_{\mathsf{H}}=q_{-\mathcal{L}}(u_{f},v)$.
Since $u_{f}$ is clearly in $\mathcal{D}(\mathcal{L})$ by definition of the
Friedrichs extension of $-\mathcal{L}$, we have $-\mathcal{L}u_{f}=f$. ∎
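Combining Theorem 6.7 with the decomposition of Lemma 6.1, for every
$f\in\mathsf{H}$ there is a unique $u\in\mathcal{D}(\mathcal{L})$ solving
$\mathcal{A}u+2P\mathcal{W}u=f\,;$
this inverse is the one used in the variance formula of Lemma 5.4 and in the
regularity analysis of Section 7.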
###### Remark 6.8.
We can diagonalize $\mathcal{L}$ by the same argument we used in Appendix A to
diagonalize $\mathcal{A}$ in $L^{2}_{0}(\rho_{P})$.
We now state a result that could allow one to extract more regularity for
$\mathcal{L}^{-1}$ with the help of an explicit formula based on Fredholm
determinant theory for Hilbert–Schmidt operators; the reader can refer to
[GGK12].
###### Definition 6.9 (Fredholm determinant).
Let $\mathcal{U}$ be a self-adjoint Hilbert–Schmidt operator; we denote by
$\det(I+\mathcal{U})$ its Fredholm determinant.
###### Theorem 6.10 (Determinant formula for $\mathcal{L}^{-1}$).
For all $u\in\mathsf{H}$ such that
$x\mapsto\dfrac{1}{\rho_{P}(x)}\displaystyle\int_{x}^{+\infty}u(t)\rho_{P}(t)dt$
is integrable at $+\infty$, we have:
$\mathcal{L}^{-1}u=\mathcal{A}^{-1}u-\rho_{P}^{-1/2}\mathcal{R}\big{[}\rho_{P}^{1/2}\mathcal{A}^{-1}u\big{]}$
(55)
where $\mathcal{R}$ is the kernel operator defined for all $v\in
L^{2}({\mathbb{R}})$ by:
$\mathcal{R}[v](x)=\int_{\mathbb{R}}R(x,y)v(y)dy$
where
$R(x,y)=\displaystyle\dfrac{1}{\det(I+\mathcal{K})}\sum_{n\geqslant
0}\dfrac{1}{n!}\int_{{\mathbb{R}}^{n}}\det_{n+1}\begin{bmatrix}K(x,y)&K(x,\lambda_{b})\\\
K(\lambda_{a},y)&K(\lambda_{a},\lambda_{b})\end{bmatrix}_{a,b=1\dots
n}d\lambda_{1}\dots d\lambda_{n}$
where $\mathcal{K}$ is the kernel operator defined for all $w\in
L^{2}({\mathbb{R}})$ by:
$\mathcal{K}[w](x)=\int_{\mathbb{R}}K(x,y)w(y)dy$ (56)
with
$K(x,y)=-2P\rho_{P}(x)^{1/2}\rho_{P}(y)^{1/2}\ln\Big{|}1-\dfrac{y}{x}\Big{|}.$ (57)
###### Proof.
Let $f\in\mathsf{H}$; there exists a unique $u\in\mathcal{D}(\mathcal{A})$
such that $\mathcal{A}u=f$. Since
$(u^{\prime}\rho_{P})^{\prime}=-\rho_{P}\mathcal{A}u\in L^{2}({\mathbb{R}})$,
we have $u^{\prime}\rho_{P}\in H^{1}({\mathbb{R}})$, so
$u^{\prime}(x)\rho_{P}(x)\underset{|x|\rightarrow+\infty}{\longrightarrow}0$.
By definition, $-\dfrac{(u^{\prime}\rho_{P})^{\prime}}{\rho_{P}}=f$, hence
$(\mathcal{A}^{-1}f)^{\prime}(x)\rho_{P}(x)=u^{\prime}(x)\rho_{P}(x)=\int_{x}^{+\infty}f(t)\rho_{P}(t)dt.$
(58)
Using the fact that $\int_{\mathbb{R}}u(x)\rho_{P}(x)dx=0$, integrating again
we get:
$u(x)=-\int_{x}^{+\infty}\dfrac{ds}{\rho_{P}(s)}\int_{s}^{+\infty}f(t)\rho_{P}(t)dt+C$
where
$C=\displaystyle\int_{\mathbb{R}}\rho_{P}(x)dx\int_{x}^{+\infty}\dfrac{ds}{\rho_{P}(s)}\int_{s}^{+\infty}f(t)\rho_{P}(t)dt$.
Now let $g\in\mathsf{H}$; there exists a unique
$v\in\mathcal{D}(\mathcal{L})$ such that
$-\mathcal{L}v=\mathcal{A}v+2P\mathcal{W}v=g$, and then
$v+2P\mathcal{W}\mathcal{A}^{-1}v=\mathcal{A}^{-1}g$. Using (58), we get:
$\mathcal{W}\mathcal{A}^{-1}v(x)=\fint_{\mathbb{R}}\dfrac{ds}{s-x}\int_{s}^{+\infty}dtv(t)\rho_{P}(t)$
By the Sokhotski–Plemelj formula, we have:
$\fint_{\mathbb{R}}\dfrac{ds}{s-x}\int_{s}^{+\infty}dtv(t)\rho_{P}(t)=\underset{M\rightarrow+\infty}{\lim}\lim_{\varepsilon\rightarrow{0}}\int_{-M}^{M}\dfrac{ds}{2}\Big{\\{}\dfrac{1}{x-s+\mathrm{i}\varepsilon}+\dfrac{1}{x-s-\mathrm{i}\varepsilon}\Big{\\}}\int_{s}^{+\infty}dtv(t)\rho_{P}(t)$
We then integrate by parts:
$\fint_{\mathbb{R}}\dfrac{ds}{s-x}\int_{s}^{+\infty}dtv(t)\rho_{P}(t)=\underset{M\rightarrow+\infty}{\lim}\lim_{\varepsilon\rightarrow{0}}\Big{[}-\dfrac{\ln\big{(}(x-s)^{2}+\varepsilon^{2}\big{)}}{2}\int_{s}^{+\infty}dtv(t)\rho_{P}(t)\Big{]}_{-M}^{M}\\\
-\int_{\mathbb{R}}ds\ln|x-s|v(s)\rho_{P}(s)$
To conclude that
$\mathcal{W}\mathcal{A}^{-1}v(x)=-\int_{\mathbb{R}}ds\ln|x-s|v(s)\rho_{P}(s)$,
we just need to show that
$\ln|x|\int_{x}^{+\infty}dt\,v(t)\rho_{P}(t)\underset{|x|\rightarrow\infty}{\longrightarrow}0\,,$
which can be seen by the Cauchy–Schwarz inequality:
$\Big{|}\ln|x|\int_{x}^{+\infty}dt\,v(t)\rho_{P}(t)\Big{|}\leqslant\big{|}\ln|x|\big{|}\,\|v\|_{L^{2}(\rho_{P})}\,\rho_{P}(x)^{1/4}\Big{(}\int_{\mathbb{R}}\rho_{P}(t)^{1/2}dt\Big{)}^{1/2}.$
In this inequality, we used that $\rho_{P}$ is decreasing in a neighborhood of
$+\infty$; hence
$\ln|x|\int_{x}^{+\infty}dt\,v(t)\rho_{P}(t)\underset{x\rightarrow+\infty}{\longrightarrow}0\,,$
and the exact same argument allows us to conclude when $x$ goes to $-\infty$.
Using the fact that $\int_{\mathbb{R}}v(t)\rho_{P}(t)dt=0$, we obtain the
following equality:
$v-2P\int_{\mathbb{R}}ds\ln|x-s|v(s)\rho_{P}(s)=\mathcal{A}^{-1}g:=h.$
Now setting $\tilde{v}(t)=\rho_{P}^{1/2}(t)v(t)$ and
$\tilde{h}(t)=\rho_{P}^{1/2}(t)h(t)$, we obtain
$\tilde{v}+\mathcal{K}[\tilde{v}]=\tilde{h}$, where $\mathcal{K}$ is defined in
(56). Since its kernel (defined in (57)) belongs to $L^{2}({\mathbb{R}}^{2})$,
$\mathcal{K}$ is Hilbert-Schmidt. Hence by Fredholm determinant theory:
$\tilde{v}=\tilde{h}-\mathcal{R}[\tilde{h}]$
that is,
$\mathcal{L}^{-1}g=\mathcal{A}^{-1}g-\rho_{P}^{-1/2}\mathcal{R}\big{[}\rho_{P}^{1/2}\mathcal{A}^{-1}g\big{]}$
as expected. ∎
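In other words, $\mathcal{R}$ is the resolvent of $\mathcal{K}$, that is
$(I+\mathcal{K})^{-1}=I-\mathcal{R}$, which is precisely how it enters the
last step of the proof.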
## 7 Regularity of the inverse of $\mathcal{L}$ and completion of the proof
of Theorem 1.3
Since we have proven the central limit theorem for functions of the type
$\mathcal{L}\phi$ with $\phi$ regular enough and satisfying vanishing
asymptotic conditions at infinity, we now exhibit a class of functions $f$ for
which $\mathcal{L}^{-1}f$ is regular enough to satisfy the conditions of
Theorem 5.2. We define $\mathcal{T}$ as the subset of $\mathsf{H}$ given by
$\mathcal{T}:=\left\\{f\in\mathcal{C}^{1}({\mathbb{R}}),\exists\varepsilon>0,\,f(x)=\underset{|x|\rightarrow\infty}{O}\left(x^{-\frac{1}{2}-\varepsilon}\right),\;f^{\prime}(x)=\underset{|x|\rightarrow\infty}{O}\left(x^{-\frac{1}{2}-\varepsilon}\right),\int_{\mathbb{R}}f\rho_{P}=0\right\\}$
###### Theorem 7.1.
For all $f\in\mathcal{T}$, there exists a unique
$u\in\mathcal{C}^{3}({\mathbb{R}})$ such that $u^{\prime}\in
H^{2}({\mathbb{R}})$ with $u^{(3)}$ bounded, which verifies:
* •
$u^{\prime}(x)=\underset{|x|\rightarrow\infty}{O}\left(\dfrac{1}{x^{\frac{1}{2}+\varepsilon}V^{\prime}(x)}\right)$
* •
$u^{\prime\prime}(x)=\underset{|x|\rightarrow\infty}{O}\left(\dfrac{1}{x^{\frac{1}{2}+\varepsilon}V^{\prime}(x)}\right)$
* •
$u^{(3)}(x)=\underset{|x|\rightarrow\infty}{O}\left(\dfrac{1}{x^{\frac{1}{2}+\varepsilon}}\right)$
such that $f=\mathcal{L}u$.
###### Proof.
Let $f\in\mathcal{T}\subset\mathsf{H}$. Then, since $-\mathcal{L}$ is bijective
from $\mathcal{D}(\mathcal{L})$ to $\mathsf{H}$, there exists a unique
$u\in\mathcal{D}(\mathcal{L})$ such that $-\mathcal{L}u=f$, i.e.:
$-u^{\prime\prime}-\dfrac{\rho_{P}^{\prime}}{\rho_{P}}u^{\prime}-2P\mathcal{H}[u^{\prime}\rho_{P}]=f$
(59)
Hence we have
$-(u^{\prime}\rho_{P})^{\prime}=\rho_{P}\Big{(}f+2P\mathcal{H}[u^{\prime}\rho_{P}]\Big{)}.$
(60)
Since
$u\in\mathcal{D}(\mathcal{L})\subset\\{u\in\mathcal{D}_{L^{2}({\mathbb{R}}),0}(\mathcal{A}),\mathcal{A}u\in\mathsf{H}\\}$,
the function $u^{\prime}\rho_{P}$ and its distributional derivatives
$(u^{\prime}\rho_{P})^{\prime}=-\rho_{P}\mathcal{A}u$ and
$(u^{\prime}\rho_{P})^{\prime\prime}=-\dfrac{\rho_{P}^{\prime}}{\rho_{P}}\rho_{P}^{1/2}\Big{(}\rho_{P}^{1/2}\mathcal{A}u\Big{)}-\rho_{P}\Big{(}\mathcal{A}u\Big{)}^{\prime}$
are in $L^{2}({\mathbb{R}})$. In particular $u^{\prime}\rho_{P}$ goes to zero
at infinity, and $\mathcal{H}[u^{\prime}\rho_{P}]\in
H^{2}({\mathbb{R}})\subset\mathcal{C}^{1}({\mathbb{R}})$. So we can integrate
(60) on $[x,+\infty[$, since by Lemma 2.3 the right-hand side behaves like a
$\underset{|x|\rightarrow\infty}{O}\left(\dfrac{\rho_{P}(x)}{|x|^{\frac{1}{2}+\varepsilon}}\right)$,
to get the following expression
$u^{\prime}(x)\rho_{P}(x)=\int_{x}^{+\infty}\dfrac{\rho_{P}(t)}{\rho_{P}^{\prime}(t)}\big{(}f+2P\mathcal{H}[u^{\prime}\rho_{P}]\big{)}(t)\,\rho_{P}^{\prime}(t)dt$
(61)
From this expression, we can see that
$u^{\prime}\in\mathcal{C}^{2}({\mathbb{R}})$ so we just have to check the
integrability condition at infinity and the fact that $u^{(3)}$ is bounded.
Integrating by parts, which is permitted by the previous argument, we
obtain:
$u^{\prime}(x)=-\dfrac{\rho_{P}(x)}{\rho_{P}^{\prime}(x)}\Big{(}f(x)+2P\mathcal{H}[u^{\prime}\rho_{P}](x)\Big{)}-\dfrac{1}{\rho_{P}(x)}\int_{x}^{+\infty}\Bigg{(}\dfrac{\rho_{P}(t)}{\rho_{P}^{\prime}(t)}(f+2P\mathcal{H}[u^{\prime}\rho_{P}])\Bigg{)}^{\prime}\rho_{P}(t)dt$
(62)
and we define
$R_{1}(x):=\displaystyle\dfrac{1}{\rho_{P}(x)}\int_{x}^{+\infty}\Bigg{(}\dfrac{\rho_{P}(t)}{\rho_{P}^{\prime}(t)}(f+2P\mathcal{H}[u^{\prime}\rho_{P}])\Bigg{)}^{\prime}\rho_{P}(t)dt\,.$
We will show below that $R_{1}$ is a remainder of order
$\underset{x\rightarrow+\infty}{O}\left(\dfrac{1}{x^{\frac{1}{2}+\varepsilon}V^{\prime}(x)^{2}}\right)$
at infinity. In this case, we will have
$u^{\prime}(x)=\underset{x\rightarrow+\infty}{O}\left(\dfrac{1}{x^{\frac{1}{2}+\varepsilon}V^{\prime}(x)}\right)$,
which will be useful in what follows. Plugging (62) back into (59), we find:
$u^{\prime\prime}=-(f+2P\mathcal{H}[u^{\prime}\rho_{P}])-\dfrac{\rho^{\prime}_{P}}{\rho_{P}}\Big{(}-\dfrac{\rho_{P}}{\rho_{P}^{\prime}}\Big{(}f+2P\mathcal{H}[u^{\prime}\rho_{P}]\Big{)}-R_{1}\Big{)}=\dfrac{\rho_{P}^{\prime}}{\rho_{P}}R_{1}$
(63)
Hence
$u^{\prime\prime}(x)=\dfrac{\rho_{P}^{\prime}}{\rho_{P}^{2}}(x)\int_{x}^{+\infty}\rho_{P}(t)dt\Bigg{\\{}\underbrace{\Big{(}\dfrac{\rho_{P}}{\rho_{P}^{\prime}}\Big{)}^{\prime}(t)}_{=\underset{t\rightarrow+\infty}{O}\Big{(}\frac{V^{\prime\prime}(t)}{V^{\prime}(t)^{2}}\Big{)}}\underbrace{\big{[}f+2P\mathcal{H}[u^{\prime}\rho_{P}]\big{]}(t)}_{=\underset{t\rightarrow+\infty}{O}\Big{(}t^{-\frac{1}{2}-\varepsilon}\Big{)}}+\underbrace{\dfrac{\rho_{P}}{\rho_{P}^{\prime}}(t)}_{=\underset{t\rightarrow+\infty}{O}\Big{(}\frac{1}{V^{\prime}(t)}\Big{)}}\underbrace{\big{[}f^{\prime}-2P\mathcal{H}[\rho_{P}\mathcal{A}u]\big{]}(t)}_{=\underset{t\rightarrow+\infty}{O}\Big{(}t^{-\frac{1}{2}-\varepsilon}\Big{)}}\Bigg{\\}}.$
The fact that
$\mathcal{H}[\rho_{P}\mathcal{A}u](t)=\underset{t\rightarrow+\infty}{O}(t^{-2})$
comes again from Lemma 2.3. Finally, we have
$u^{(3)}(x)=\Big{(}\dfrac{\rho_{P}^{\prime}}{\rho_{P}^{2}}\Big{)}^{\prime}(x)\rho_{P}(x)R_{1}(x)-\Big{(}\dfrac{\rho_{P}^{\prime}}{\rho_{P}^{2}}\Big{)}(x)\Bigg{(}\dfrac{\rho_{P}}{\rho_{P}^{\prime}}(f+2P\mathcal{H}[u^{\prime}\rho_{P}])\Bigg{)}^{\prime}(x)\rho_{P}(x)\\\
=\underbrace{\Big{(}\dfrac{\rho_{P}^{\prime\prime}}{\rho_{P}}-2\dfrac{\rho_{P}^{\prime
2}}{\rho_{P}^{2}}\Big{)}(x)}_{=\underset{x\rightarrow+\infty}{O}\Big{(}V^{\prime}(x)^{2}\Big{)}}R_{1}(x)-\underbrace{\Big{(}\dfrac{\rho_{P}^{\prime}}{\rho_{P}}\Big{)}(x)\Bigg{(}\dfrac{\rho_{P}}{\rho_{P}^{\prime}}(f+2P\mathcal{H}[u^{\prime}\rho_{P}])\Bigg{)}^{\prime}(x)}_{=\underset{x\rightarrow+\infty}{O}\Big{(}\dfrac{V^{\prime\prime}(x)}{x^{\frac{1}{2}+\varepsilon}V^{\prime}(x)}+x^{-\frac{1}{2}-\varepsilon}\Big{)}}$
The second term is
$\underset{x\rightarrow+\infty}{O}\Big{(}x^{-\frac{1}{2}-\varepsilon}\Big{)}$
by the assumption that
$\dfrac{V^{\prime\prime}}{V^{\prime}}(x)=\underset{|x|\rightarrow\infty}{O}(1)$.
Hence, we just have to check that
$R_{1}(x)=\underset{x\rightarrow+\infty}{O}\Big{(}\dfrac{1}{x^{\frac{1}{2}+\varepsilon}V^{\prime}(x)^{2}}\Big{)}$
to establish that $u^{\prime}$, $u^{\prime\prime}$, $u^{(3)}$ are in
$L^{2}({\mathbb{R}})$. By a comparison argument, we control $R_{1}$ by
controlling
$I_{1}(x):=\int_{x}^{+\infty}\dfrac{\rho_{P}(t)}{t^{\frac{1}{2}+\varepsilon}V^{\prime}(t)}dt$
By integration by parts:
$I_{1}(x):=-\underbrace{\dfrac{\rho_{P}(x)}{x^{\frac{1}{2}+\varepsilon}V^{\prime}(x)}\dfrac{\rho_{P}}{\rho_{P}^{\prime}}(x)}_{=\underset{x\rightarrow+\infty}{O}\Big{(}\dfrac{\rho_{P}(x)}{x^{\frac{1}{2}+\varepsilon}V^{\prime}(x)^{2}}\Big{)}}-\int_{x}^{+\infty}\rho_{P}(t)\Big{(}\dfrac{1}{t^{\frac{1}{2}+\varepsilon}V^{\prime}}\dfrac{\rho_{P}}{\rho_{P}^{\prime}}\Big{)}^{\prime}(t)dt\\\
=\underset{x\rightarrow+\infty}{O}\Big{(}\frac{\rho_{P}(x)}{x^{\frac{1}{2}+\varepsilon}V^{\prime}(x)^{2}}\Big{)}-\int_{x}^{+\infty}\rho_{P}(t)dt\Bigg{\\{}\dfrac{-\frac{1}{2}-\varepsilon}{t^{\frac{3}{2}+\varepsilon}V^{\prime}(t)}\dfrac{\rho_{P}}{\rho_{P}^{\prime}}(t)-\dfrac{V^{\prime\prime}(t)}{t^{\frac{1}{2}+\varepsilon}V^{\prime}(t)^{2}}\dfrac{\rho_{P}}{\rho_{P}^{\prime}}+\dfrac{1}{t^{\frac{1}{2}+\varepsilon}V^{\prime}(t)}\Big{(}\dfrac{\rho_{P}}{\rho_{P}^{\prime}}\Big{)}^{\prime}(t)\Bigg{\\}}$
(64)
By the same argument as before, the last integral is of the form
$\displaystyle\int_{x}^{+\infty}\underset{t\rightarrow+\infty}{O}\Big{(}\dfrac{\rho_{P}(t)}{t^{\frac{1}{2}+\varepsilon}V^{\prime}(t)^{2}}\Big{)}dt$
so if
$I_{2}(x):=\displaystyle\int_{x}^{+\infty}\dfrac{\rho_{P}(t)}{t^{\frac{1}{2}+\varepsilon}V^{\prime}(t)^{2}}dt=\underset{x\rightarrow+\infty}{O}\Big{(}\dfrac{\rho_{P}(x)}{x^{\frac{1}{2}+\varepsilon}V^{\prime}(x)^{2}}\Big{)}$
then so is $I_{1}$. By integration by parts, we obtain:
$I_{2}(x)=\rho_{P}(x)\dfrac{1}{x^{\frac{1}{2}+\varepsilon}V^{\prime}(x)^{2}}\dfrac{\rho_{P}}{\rho_{P}^{\prime}}(x)-\int_{x}^{+\infty}\rho_{P}(t)dt\Bigg{\\{}\Big{(}\dfrac{\rho_{P}}{\rho_{P}^{\prime}}\Big{)}^{\prime}(t)\dfrac{1}{t^{\frac{1}{2}+\varepsilon}V^{\prime}(t)^{2}}-\dfrac{\rho_{P}}{\rho_{P}^{\prime}}(t)\Big{(}\dfrac{1}{t^{\frac{3}{2}+\varepsilon}V^{\prime}(t)^{2}}+\dfrac{2V^{\prime\prime}(t)}{t^{\frac{1}{2}+\varepsilon}V^{\prime}(t)^{3}}\Big{)}\Bigg{\\}}\\\
=\underset{x\rightarrow+\infty}{O}\Big{(}\dfrac{\rho_{P}(x)}{x^{\frac{1}{2}+\varepsilon}V^{\prime}(x)^{2}}\Big{)}+\int_{x}^{+\infty}\underset{t\rightarrow+\infty}{O}\Big{(}\dfrac{\rho_{P}(t)}{t^{\frac{1}{2}+\varepsilon}V^{\prime}(t)^{3}}\Big{)}dt$
The last integral is a
$\underset{x\rightarrow+\infty}{O}\Big{(}\dfrac{\rho_{P}(x)}{x^{\frac{1}{2}+\varepsilon}V^{\prime}(x)^{2}}\Big{)}$
because, again, by integration by parts:
$\int_{x}^{+\infty}\dfrac{\rho_{P}(t)}{t^{\frac{1}{2}+\varepsilon}V^{\prime}(t)^{3}}dt=\rho_{P}(x)\dfrac{1}{x^{\frac{1}{2}+\varepsilon}V^{\prime}(x)^{3}}\dfrac{\rho_{P}}{\rho_{P}^{\prime}}(x)-\int_{x}^{+\infty}\underset{t\rightarrow+\infty}{O}\Big{(}\dfrac{\rho_{P}(t)}{t^{\frac{1}{2}+\varepsilon}V^{\prime}(t)^{4}}\Big{)}$
and finally
$\int_{x}^{+\infty}\dfrac{\rho_{P}(t)}{t^{\frac{1}{2}+\varepsilon}V^{\prime}(t)^{4}}dt\leqslant\dfrac{\rho_{P}(x)}{x^{\frac{1}{2}+\varepsilon}V^{\prime}(x)^{2}}\int_{x}^{+\infty}\dfrac{dt}{V^{\prime}(t)^{2}}=\underset{x\rightarrow+\infty}{O}\Big{(}\dfrac{\rho_{P}(x)}{x^{\frac{1}{2}+\varepsilon}V^{\prime}(x)^{2}}\Big{)}$
In the final step, we used the fact that
$x\mapsto\dfrac{\rho_{P}(x)}{x^{\frac{1}{2}+\varepsilon}V^{\prime}(x)^{2}}$ is
decreasing in a neighborhood of $+\infty$ (which can be checked by
differentiating) and that $x\mapsto\dfrac{1}{V^{\prime}(x)^{2}}$ is integrable
at $\infty$ by Assumption 1.1 iv). Hence
$R_{1}(x)=\underset{x\rightarrow+\infty}{O}\Big{(}\dfrac{1}{x^{\frac{1}{2}+\varepsilon}V^{\prime}(x)^{2}}\Big{)}$
(the exact same result can be shown at $-\infty$), which leads to
$u^{\prime}(x)=\underset{|x|\rightarrow+\infty}{O}\Big{(}\dfrac{1}{x^{\frac{1}{2}+\varepsilon}V^{\prime}(x)}\Big{)},\hskip
28.45274ptu^{\prime\prime}(x)=\underset{|x|\rightarrow+\infty}{O}\Big{(}\dfrac{1}{x^{\frac{1}{2}+\varepsilon}V^{\prime}(x)}\Big{)}\hskip
11.38092pt\text{and}\hskip
11.38092ptu^{(3)}(x)=\underset{|x|\rightarrow+\infty}{O}\Big{(}\dfrac{1}{x^{\frac{1}{2}+\varepsilon}}\Big{)}$
(65)
which establishes that these functions are in $L^{2}$ in a neighborhood of
$\infty$. Since we already showed that
$u\in\mathcal{C}^{3}({\mathbb{R}})\subset H^{3}_{\text{loc}}({\mathbb{R}})$,
it establishes that $u\in
H^{3}({\mathbb{R}})\cap\mathcal{C}^{3}({\mathbb{R}})$ with $u^{(3)}$ bounded.
To complete the proof, we just have to show that $(u^{\prime})^{2}V^{(3)}$,
$u^{\prime}u^{\prime\prime}V^{\prime\prime}$,
$(u^{\prime})^{2}V^{\prime\prime}$ and $u^{\prime}V^{\prime}$ are bounded,
which is easily checked using (65) and Assumption 1.1 iv). ∎
###### Remark 7.2.
We chose here functions that vanish at infinity at worst like
$|x|^{-1/2-\varepsilon}$, but functions like
$x\mapsto|x|^{-1/2}\ln^{-1/2-\varepsilon}|x|$ or
$x\mapsto|x|^{-1/2}\ln^{-1/2}|x|\ln^{-1/2-\varepsilon}\ln|x|$ also work, the
proof being the same. The only hypotheses that we use are that $f\in
H^{1}({\mathbb{R}})\cap\mathcal{C}^{1}({\mathbb{R}})$, that
$f^{\prime}(x)=\underset{|x|\rightarrow+\infty}{O}\left(f(x)\right)$, and that
$f$ is decreasing (resp. increasing) in a neighborhood of $+\infty$ (resp.
$-\infty$).
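For instance, when $V$ is even (so that $\rho_{P}$ is even), the odd function
$f(x)=\frac{x}{1+x^{2}}$
belongs to $\mathcal{T}$ with $\varepsilon=1/2$: it is
$\mathcal{C}^{1}({\mathbb{R}})$, both $f$ and $f^{\prime}$ are
$\underset{|x|\rightarrow\infty}{O}\left(|x|^{-1}\right)$, and
$\int_{\mathbb{R}}f\rho_{P}=0$ by symmetry.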
## Appendix A Appendix: proof of Theorem 6.2
In order to analyze $\mathcal{A}$, we let, for $u\in L^{2}({\mathbb{R}})$,
$\mathcal{S}u:=\rho_{P}^{1/2}\mathcal{A}\rho_{P}^{-1/2}u\,.$
Note that
$u\in\left(L^{2}({\mathbb{R}}),\|.\|_{L^{2}(dx)}\right)\mapsto\rho_{P}^{-1/2}u\in\left(L^{2}(\rho_{P}),\|.\|_{L^{2}(\rho_{P})}\right)$
is an isometry. It turns out to be easier to first study the
operator $\mathcal{S}$ in order to obtain the spectral properties of
$\mathcal{A}$.
###### Proposition A.1.
The operator $\mathcal{S}$ is a Schrödinger operator: it admits the following
expression for all $u\in C_{c}^{2}({\mathbb{R}})$:
$\mathcal{S}u=-u^{\prime\prime}+w_{P}u$ with
$w_{P}=\frac{1}{2}\left(\frac{1}{2}V^{\prime
2}-V^{\prime\prime}+2PV^{\prime}\mathcal{H}[\rho_{P}]-2P\mathcal{H}[\rho_{P}^{\prime}]+2P^{2}\mathcal{H}[\rho_{P}]^{2}\right)=\dfrac{1}{2}\Big{[}(\ln\rho_{P})^{\prime\prime}+\dfrac{1}{2}(\ln\rho_{P})^{\prime
2}\Big{]}\,.$
Furthermore, $w_{P}$ is continuous and we have
$w_{P}(x)\underset{\infty}{\sim}\dfrac{V^{\prime}(x)^{2}}{4}\underset{|x|\rightarrow\infty}{\longrightarrow}+\infty$.
###### Proof.
We compute directly
$\displaystyle\dfrac{\Big{(}\rho_{P}\big{(}\rho_{P}^{-1/2}u\big{)}^{\prime}\Big{)}^{\prime}}{\rho_{P}}$
$\displaystyle=\big{(}\rho_{P}^{-1/2}u\big{)}^{\prime\prime}+\dfrac{\rho_{P}^{\prime}}{\rho_{P}}\big{(}\rho_{P}^{-1/2}u\big{)}^{\prime}$
$\displaystyle=\big{(}\rho_{P}^{-1/2}u^{\prime}-\dfrac{1}{2}\rho_{P}^{-3/2}\rho_{P}^{\prime}u\big{)}^{\prime}+\rho_{P}^{\prime}\rho_{P}^{-3/2}u^{\prime}-\dfrac{1}{2}\rho_{P}^{-5/2}\big{(}\rho_{P}^{\prime}\big{)}^{2}u$
$\displaystyle=\rho_{P}^{-1/2}u^{\prime\prime}+\dfrac{1}{4}\rho_{P}^{-5/2}\big{(}\rho_{P}^{\prime}\big{)}^{2}u-\dfrac{1}{2}\rho_{P}^{-3/2}\rho_{P}^{\prime\prime}u$
$\displaystyle=\rho_{P}^{-1/2}\Big{[}u^{\prime\prime}+\dfrac{1}{4}\rho_{P}^{-2}\big{(}\rho_{P}^{\prime}\big{)}^{2}u-\dfrac{1}{2}\rho_{P}^{-1}\rho_{P}^{\prime\prime}u\Big{]}$
$\displaystyle=\rho_{P}^{-1/2}\Bigg{(}u^{\prime\prime}-\dfrac{1}{2}\Big{[}\big{(}\dfrac{\rho_{P}^{\prime\prime}}{\rho_{P}}\big{)}-\dfrac{1}{2}\big{(}\dfrac{\rho_{P}^{\prime}}{\rho_{P}}\big{)}^{2}\Big{]}u\Bigg{)}$
$\displaystyle=\rho_{P}^{-1/2}\Bigg{(}u^{\prime\prime}-\dfrac{1}{2}\Big{[}(\ln\rho_{P})^{\prime\prime}+\dfrac{1}{2}(\ln\rho_{P})^{\prime
2}\Big{]}u\Bigg{)}=\rho_{P}^{-1/2}\Bigg{(}u^{\prime\prime}-w_{P}u\Bigg{)}\,.$
Now, using Lemma 2.2, we have
$w_{P}=\frac{1}{2}\left(\frac{1}{2}V^{\prime
2}-V^{\prime\prime}+2PV^{\prime}\mathcal{H}[\rho_{P}]-2P\mathcal{H}[\rho_{P}^{\prime}]+2P^{2}\mathcal{H}[\rho_{P}]^{2}\right)\,.$
Notice that $\mathcal{H}[\rho_{P}^{\prime}]$ and $\mathcal{H}[\rho_{P}]$ are
bounded since they belong to $H^{1}({\mathbb{R}})$, as we showed in Lemma 2.2
that $\rho_{P}$ is in $H^{2}({\mathbb{R}})$. Along with Assumption 1.1 iii) and
Lemma 2.3, we deduce $w_{P}(x)\underset{\infty}{\sim}\dfrac{1}{4}V^{\prime
2}(x)$. ∎
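In the formally decoupled Gaussian case $P=0$, $V(x)=x^{2}/2$, the formula for
$w_{P}$ gives
$w_{0}(x)=\frac{1}{2}\Big{(}\frac{x^{2}}{2}-1\Big{)}=\frac{x^{2}}{4}-\frac{1}{2}\,,$
so that $\mathcal{S}$ is a shifted harmonic oscillator, with spectrum
$\\{n,\,n\geqslant 0\\}$ and the Hermite functions $H_{n}(x)e^{-x^{2}/4}$
(suitably normalized) as eigenstates; this is consistent with the
Ornstein–Uhlenbeck example following Theorem 6.2. Note also that
$w_{0}(0)=-1/2<0$, illustrating the remark below.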
###### Remark A.2.
Note that the function $w_{P}$ need not be positive on ${\mathbb{R}}$. In
fact, neglecting the terms involving the Hilbert transforms of $\rho_{P}$ and
$\rho_{P}^{\prime}$, $w_{P}$ would only be positive outside of a compact set.
However, using the positivity of $\mathcal{A}$, which will be shown further in
the article, we can show that the operator $-u^{\prime\prime}+w_{P}u$ is
itself positive on $L^{2}({\mathbb{R}})$. It can also be checked that, by
integration by parts, $\mathcal{S}$ is self-adjoint on
$\mathcal{C}_{c}^{\infty}({\mathbb{R}})$ with the inner product of
$L^{2}({\mathbb{R}})$.
We now introduce an extension of $\mathcal{S}$ by defining its associated
bilinear form.
###### Definition A.3 (Quadratic form associated to $\mathcal{S}$).
Let $\alpha\geqslant 0$ be such that $w_{P}+\alpha\geqslant 1$. We define the
quadratic form associated to $\mathcal{S}+\alpha I$, for all
$u\in\mathcal{C}_{c}^{\infty}({\mathbb{R}})$, by
$q_{\alpha}(u,u):=\int_{\mathbb{R}}u^{\prime
2}dx+\int_{\mathbb{R}}u^{2}(w_{P}+\alpha)dx$
This quadratic form can be extended to a larger domain denoted by
$Q(\mathcal{S}+\alpha I)$, called the form domain of the operator
$\mathcal{S}+\alpha I$. By the theory of Schrödinger operators, it is well-
known (see [Dav96, Theorem 8.2.1]) that such a domain is given by
$Q(\mathcal{S}+\alpha I)=\left\\{u\in
H^{1}({\mathbb{R}}),u(w_{P}+\alpha)^{1/2}\in
L^{2}({\mathbb{R}})\right\\}=\left\\{u\in H^{1}({\mathbb{R}}),uV^{\prime}\in
L^{2}({\mathbb{R}})\right\\}=:H^{1}_{V^{\prime}}({\mathbb{R}})\,.$
The space $H^{1}_{V^{\prime}}({\mathbb{R}})$ can be seen to be the completion
of $\mathcal{C}_{c}^{\infty}({\mathbb{R}})$ under the norm $q_{\alpha}$. Now
that the quadratic form associated to $\mathcal{S}+\alpha I$ has been extended
to its form domain, it is possible to go back to the operator and extend it by
its Friedrichs extension.
###### Theorem A.4 (Friedrichs extension of $\mathcal{S}+\alpha I$).
There exists an extension $(\mathcal{S}+\alpha I)_{F}$ of the operator
$\mathcal{S}+\alpha I$, called the Friedrichs extension of $\mathcal{S}+\alpha
I$, defined on $\mathcal{D}\Big{(}(\mathcal{S}+\alpha I)_{F}\Big{)}=\Big{\\{}u\in
H^{1}_{V^{\prime}}({\mathbb{R}}),-u^{\prime\prime}+(w_{P}+\alpha)u\in
L^{2}({\mathbb{R}})\Big{\\}}$.
###### Proof.
We denote
$\mathcal{D}\Big{(}(\mathcal{S}+\alpha I)_{F}\Big{)}\\\ :=\Big{\\{}v\in
H^{1}_{V^{\prime}}({\mathbb{R}}),u\in
H^{1}_{V^{\prime}}({\mathbb{R}})\longmapsto q_{\alpha}(u,v)\text{ can be
extended to a continuous linear form on }L^{2}({\mathbb{R}})\Big{\\}}$
If $v\in\mathcal{D}\Big{(}(\mathcal{S}+\alpha I)_{F}\Big{)}$, by Riesz’s theorem there
exists a unique $f_{v}\in L^{2}({\mathbb{R}})$ such that
$q_{\alpha}(u,v)=\braket{u,f_{v}}_{L^{2}(dx)}$ holds for all $u\in
H^{1}_{V^{\prime}}({\mathbb{R}})$, and we can set $(\mathcal{S}+\alpha I)_{F}v:=f_{v}$. Note that this
is indeed a way of extending $\mathcal{S}+\alpha I$ since for all
$u,v\in\mathcal{C}_{c}^{\infty}({\mathbb{R}})$,
$q_{\alpha}(u,v)=\braket{u,(\mathcal{S}+\alpha I)v}_{L^{2}(dx)}$.
We want to show that $\mathcal{D}\Big{(}(\mathcal{S}+\alpha I)_{F}\Big{)}=\Big{\\{}u\in
H^{1}_{V^{\prime}}({\mathbb{R}}),-u^{\prime\prime}+(w_{P}+\alpha)u\in
L^{2}({\mathbb{R}})\Big{\\}}$. Let $f\in\mathcal{D}\Big{(}(\mathcal{S}+\alpha
I)_{F}\Big{)}$ and $g:=(\mathcal{S}+\alpha I)_{F}f\in L^{2}({\mathbb{R}})$. By
definition of $q_{\alpha}$, for all
$u\in\mathcal{C}_{c}^{\infty}({\mathbb{R}})$:
$\int_{\mathbb{R}}gudx=\int_{\mathbb{R}}f^{\prime}u^{\prime}dx+\int_{\mathbb{R}}(w_{P}+\alpha)fudx=-\int_{\mathbb{R}}fu^{\prime\prime}dx+\int_{\mathbb{R}}(w_{P}+\alpha)fudx$
Therefore, in the sense of distributions, we get
$-f^{\prime\prime}+(w_{P}+\alpha)f=g$, which is a function in
$L^{2}({\mathbb{R}})$, hence $f\in\Big{\\{}u\in
H^{1}_{V^{\prime}}({\mathbb{R}}),-u^{\prime\prime}+(w_{P}+\alpha)u\in
L^{2}({\mathbb{R}})\Big{\\}}$. Conversely, if $f\in
H^{1}_{V^{\prime}}({\mathbb{R}})$ such that
$-f^{\prime\prime}+(w_{P}+\alpha)f\in L^{2}({\mathbb{R}})$, it is possible to
extend $u\mapsto q_{\alpha}(f,u)$ to a continuous linear form on
$L^{2}({\mathbb{R}})$ by
$u\mapsto\int_{\mathbb{R}}u\Big{(}-f^{\prime\prime}+f(w_{P}+\alpha)\Big{)}dx$
which shows that $\mathcal{D}\Big{(}(S+\alpha
I)_{F}\Big{)}=\Big{\\{}u\in
H^{1}_{V^{\prime}}({\mathbb{R}}),-u^{\prime\prime}+(w_{P}+\alpha)u\in
L^{2}({\mathbb{R}})\Big{\\}}$. ∎
In the following, we will deal only with $(\mathcal{S}+\alpha
I)_{F}:\mathcal{D}\Big{(}(S+\alpha I)_{F}\Big{)}\longrightarrow
L^{2}({\mathbb{R}})$ and denote it by $\mathcal{S}+\alpha
I:\mathcal{D}(\mathcal{S}+\alpha I)\longrightarrow L^{2}({\mathbb{R}})$.
###### Remark A.5.
Note that in the previous proof, the application of Riesz’s theorem does not
allow one to conclude that $(\mathcal{S}+\alpha
I):v\in\Big{(}\mathcal{D}(S+\alpha I),\|.\|_{q_{\alpha}}\Big{)}\mapsto
f_{v}\in\Big{(}L^{2}({\mathbb{R}}),\|.\|_{L^{2}(dx)}\Big{)}$, where
$\|.\|_{q_{\alpha}}$ stands for the norm associated to the bilinear positive
definite form $q_{\alpha}$, is continuous. This can be seen from the fact that
$v\in\Big{(}\mathcal{D}(S+\alpha I),\|.\|_{q_{\alpha}}\Big{)}\mapsto
q(.,v)\in\Big{(}L^{2}({\mathbb{R}})^{\prime},\|.\|_{L^{2}(dx)^{\prime}}\Big{)}$,
where $L^{2}({\mathbb{R}})^{\prime}$ stands for the topological dual of
$L^{2}({\mathbb{R}})$ equipped with its usual norm, is not continuous. Indeed,
the $\|.\|_{q_{\alpha}}$ norm does not control the second derivative of $v$
and hence does not provide any modulus of continuity for the
$L^{2}({\mathbb{R}})$-extended linear form $q(.,v)$.
Also note that, even though it would be convenient to have
$\mathcal{D}\Big{(}(S+\alpha
I)_{F}\Big{)}=L^{2}({\mathbb{R}},(w_{P}+\alpha)^{2}dx)\cap
H^{2}({\mathbb{R}})$, this is not true without further properties on $w_{P}$.
Such a result holds, for example, when $w_{P}$ belongs to $B_{2}$, the class
of reverse Hölder weights; see [ABA07][Theorem 1.1].
###### Theorem A.6 (Inversion of $\mathcal{S}+\alpha I$).
For every $f\in L^{2}({\mathbb{R}})$, there exists a unique
$u\in\mathcal{D}\Big{(}(S+\alpha I)_{F}\Big{)}$ such that $(\mathcal{S}+\alpha
I)u=f$. Furthermore, the map $(S+\alpha I)^{-1}$ is continuous from
$\big{(}L^{2}({\mathbb{R}}),\|.\|_{L^{2}(dx)}\big{)}$ to
$\big{(}\mathcal{D}(S+\alpha I),\|.\|_{q_{\alpha}}\big{)}$.
###### Proof.
Let $f\in L^{2}({\mathbb{R}})$. The map $u\longmapsto\braket{u,f}_{L^{2}(dx)}$
is continuous on
$\big{(}H_{V^{\prime}}^{1}({\mathbb{R}}),\|.\|_{q_{\alpha}}\big{)}$, which is
a Hilbert space. Therefore, by Riesz’s theorem, there exists a unique
$v_{f}\in H^{1}_{V^{\prime}}({\mathbb{R}})$ such that for all $u\in
H^{1}_{V^{\prime}}({\mathbb{R}})$,
$\braket{f,u}_{L^{2}(dx)}=q_{\alpha}(v_{f},u)$, from which we deduce that, in
the sense of distributions, $f=-v_{f}^{\prime\prime}+(w_{P}+\alpha)v_{f}$,
which implies that $v_{f}\in\mathcal{D}(S+\alpha I)$. Since
$v_{f}\in\mathcal{D}(S+\alpha I)$, we then have, for all $u\in
H^{1}_{V^{\prime}}({\mathbb{R}})$,
$\braket{f,u}_{L^{2}(dx)}=q_{\alpha}(v_{f},u)=\Braket{(\mathcal{S}+\alpha
I)v_{f},u}_{L^{2}(dx)}$,
hence $(\mathcal{S}+\alpha I)v_{f}=f$. Finally, by Riesz’s theorem, $f\in
L^{2}({\mathbb{R}})\mapsto v_{f}\in H^{1}_{V^{\prime}}({\mathbb{R}})$ is
continuous hence so is $(\mathcal{S}+\alpha I)^{-1}$. ∎
###### Remark A.7.
It would be tempting to use Banach’s isomorphism theorem to say that since
$(\mathcal{S}+\alpha I)^{-1}$ is bijective and continuous, so must be
$\mathcal{S}+\alpha I$. But since $\big{(}\mathcal{D}(S+\alpha
I),\|.\|_{q_{\alpha}}\big{)}$ is not a Banach space (it is not closed in
$H^{1}_{V^{\prime}}({\mathbb{R}})$), we cannot apply it.
We are now able to diagonalize the resolvent of $\mathcal{S}$.
###### Theorem A.8 (Diagonalization of $(\mathcal{S}+\alpha I)^{-1}$).
There exists a complete orthonormal set $(\psi_{n})_{n\geqslant 0}$ of
$L^{2}({\mathbb{R}})$ (meaning that
$\overline{\operatorname{span}\\{\psi_{n},\ n\geqslant
0\\}}^{\|.\|_{L^{2}(dx)}}=L^{2}({\mathbb{R}})$
and $\langle\psi_{i},\psi_{j}\rangle_{L^{2}(dx)}=\delta_{i,j}$), where each
$\psi_{n}\in\mathcal{D}(S+\alpha I)$ and
$\big{(}\mu_{n}(\alpha)\big{)}_{n\geqslant 0}\in[0,1]^{\mathbb{N}}$ with
$\mu_{n}(\alpha)\underset{n\rightarrow\infty}{\longrightarrow}0$, such that
$(\mathcal{S}+\alpha I)^{-1}\psi_{n}=\mu_{n}(\alpha)\psi_{n}$ for all
$n\geqslant 0$. We also have
${\left|\kern-1.07639pt\left|\kern-1.07639pt\left|\big{(}\mathcal{S}+\alpha
I\big{)}^{-1}\right|\kern-1.07639pt\right|\kern-1.07639pt\right|}_{\mathcal{L}\big{(}L^{2}(dx)\big{)}}\leqslant
1$.
###### Proof.
By Proposition A.1, $w_{P}+\alpha$ is continuous and goes to infinity at
infinity. By Rellich’s criterion [RS78][Theorem XIII.65], the unit ball of
$\mathcal{D}(S+\alpha I)$, i.e., the set
$\Big{\\{}u\in\mathcal{D}(S+\alpha I),\,\int_{\mathbb{R}}u^{\prime
2}+\int_{\mathbb{R}}(w_{P}+\alpha)u^{2}\leqslant 1\Big{\\}}$
considered as a subset of $L^{2}({\mathbb{R}})$ is relatively compact in
$\big{(}L^{2}({\mathbb{R}}),\|.\|_{L^{2}(dx)}\big{)}$. Hence, we can conclude
that the injection $\iota:\big{(}\mathcal{D}(S+\alpha
I),\|.\|_{q_{\alpha}}\big{)}\longrightarrow\big{(}L^{2}({\mathbb{R}}),\|.\|_{L^{2}(dx)}\big{)}$
is a compact operator. Since $(S+\alpha
I)^{-1}:\big{(}L^{2}({\mathbb{R}}),\|.\|_{L^{2}(dx)}\big{)}\longrightarrow\big{(}\mathcal{D}(S+\alpha
I),\|.\|_{q_{\alpha}}\big{)}$ is continuous then $(\mathcal{S}+\alpha I)^{-1}$
is compact from $\big{(}L^{2}({\mathbb{R}}),\|.\|_{L^{2}(dx)}\big{)}$ to
itself. The fact that $(\mathcal{S}+\alpha I)^{-1}$ is self-adjoint and
positive allows us to apply the spectral theorem to obtain
$\big{(}\mu_{n}(\alpha)\big{)}_{n\geqslant 0}$ positive eigenvalues verifying
$\mu_{n}(\alpha)\underset{n\rightarrow\infty}{\longrightarrow}0$ by
compactness and a Hilbertian basis $(\psi_{n})_{n\geqslant 0}\in
L^{2}({\mathbb{R}})^{\mathbb{N}}$, such that for all $n\geqslant 0$,
$(\mathcal{S}+\alpha I)^{-1}\psi_{n}=\mu_{n}(\alpha)\psi_{n}$. It is then easy
to see that for all $n$, $\psi_{n}\in\mathcal{D}(S+\alpha I)$ since they
belong to the range of $(\mathcal{S}+\alpha I)^{-1}$. Finally, since for all
$\phi\in L^{2}({\mathbb{R}})$, $\Braket{(\mathcal{S}+\alpha
I)\phi,\phi}_{L^{2}(dx)}\geqslant\|\phi\|_{L^{2}(dx)}^{2}$, the spectrum of
$(\mathcal{S}+\alpha I)^{-1}$ is contained in $[0,1]$. This allows us to
conclude that
${\left|\kern-1.07639pt\left|\kern-1.07639pt\left|(\mathcal{S}+\alpha
I)^{-1}\right|\kern-1.07639pt\right|\kern-1.07639pt\right|}_{L^{2}(dx)}\leqslant
1$. ∎
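The conclusion of Theorem A.8 can be checked numerically. The following is a
minimal sketch that discretizes $-u^{\prime\prime}+(w_{P}+\alpha)u$ by finite
differences on a truncated grid; the surrogate potential
$w(x)=\frac{1}{4}x^{6}+\alpha$ (mimicking the tail behaviour
$w_{P}\underset{\infty}{\sim}\frac{1}{4}V^{\prime 2}$ for the illustrative
choice $V(x)=x^{4}/4$ and ignoring the Hilbert-transform terms), the grid,
and the shift $\alpha$ are all assumptions made only for this illustration.

```python
import numpy as np

n, L = 800, 8.0                      # grid size and half-width of the box
x = np.linspace(-L, L, n)
h = x[1] - x[0]

alpha = 1.0                          # shift chosen so that w >= 1 on the grid
w = 0.25 * x**6 + alpha              # surrogate for w_P + alpha (assumption)

# Finite-difference discretization of -u'' + (w_P + alpha) u, Dirichlet walls.
S = (np.diag(2.0 / h**2 + w)
     - np.diag(np.ones(n - 1) / h**2, k=1)
     - np.diag(np.ones(n - 1) / h**2, k=-1))

eig = np.linalg.eigvalsh(S)          # ascending eigenvalues of the matrix
mu = 1.0 / eig                       # eigenvalues mu_n(alpha) of (S + alpha I)^{-1}

print(mu[:5])                        # the largest mu_n(alpha): all lie in (0, 1]
print(mu[-1])                        # the smallest: mu_n(alpha) -> 0 (compactness)
```

The printed values lie in $(0,1]$ and decay to zero, consistent with the norm
bound and the compactness established in the theorem.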
Since for all $u\in H^{1}_{V^{\prime}}({\mathbb{R}})$, $(\mathcal{S}+\alpha
I)u\in L^{2}({\mathbb{R}})$ iff $\mathcal{S}u\in L^{2}({\mathbb{R}})$, if we
define $\mathcal{D}(\mathcal{S})$ in the same manner as before, then
$\mathcal{D}(\mathcal{S})=\mathcal{D}(S+\alpha I)$. It is now straightforward
to see how to extend $\mathcal{A}=\rho_{P}^{-1/2}\mathcal{S}\rho_{P}^{1/2}$ on
$\mathcal{D}_{L^{2}({\mathbb{R}})}(\mathcal{A}):=\rho_{P}^{-1/2}\mathcal{D}(\mathcal{S})$
equipped with the norm $\|.\|_{q_{\alpha},\rho_{P}}$ to
$\big{(}L^{2}(\rho_{P}),\|.\|_{L^{2}(\rho_{P})}\big{)}$. The norm
$\|.\|_{q_{\alpha},\rho_{P}}$ is defined for all
$u\in\mathcal{D}_{L^{2}({\mathbb{R}})}(\mathcal{A})$ by
$\|u\|^{2}_{q_{\alpha},\rho_{P}}=\int_{\mathbb{R}}u^{\prime
2}\rho_{P}dx+\int_{\mathbb{R}}u^{2}(w_{P}+\alpha)\rho_{P}dx\,.$
It is easy to see that $(\mathcal{A}+\alpha I)^{-1}$ is continuous.
###### Remark A.9.
The kernel of $\mathcal{A}$ is generated by the function $\widetilde{1}$.
Indeed if $\phi\in\mathcal{D}_{L^{2}({\mathbb{R}})}(\mathcal{A})$ is in the
kernel of $\mathcal{A}$ then
$0=-\dfrac{\big{(}\phi^{\prime}\rho_{P}\big{)}^{\prime}}{\rho_{P}}\Rightarrow\exists
c\in{\mathbb{R}},\,\phi^{\prime}=\dfrac{c}{\rho_{P}}$
But since $\phi^{\prime}$ is in $L^{2}(\rho_{P})$, we must have $c=0$, which
implies that $\phi$ is constant. We must restrict $\mathcal{A}$ to the
orthogonal complement of $\operatorname{Ker}\mathcal{A}$ with respect to the
inner product of $L^{2}(\rho_{P})$, i.e.,
$\mathcal{D}_{L^{2}({\mathbb{R}}),0}(\mathcal{A}):=\left\\{u\in\mathcal{D}_{L^{2}({\mathbb{R}})}(\mathcal{A})\,|\,\int_{\mathbb{R}}u\rho_{P}=0\right\\}\,.$
Doing so makes $\mathcal{A}$ injective.
Before inverting $\mathcal{A}$, we need the following lemma:
###### Lemma A.10.
The following equality holds
$(\mathcal{A}+\alpha
I)\Big{(}\mathcal{D}_{L^{2}({\mathbb{R}}),0}(\mathcal{A})\Big{)}=L^{2}_{0}(\rho_{P}):=\Big{\\{}u\in
L^{2}(\rho_{P}),\,\int_{\mathbb{R}}u\rho_{P}dx=0\Big{\\}}$
###### Proof.
Let $\phi=\widetilde{c}$ for $c\in{\mathbb{R}}$. Then $(\mathcal{A}+\alpha
I)\phi=\widetilde{\alpha c}$, so $(\mathcal{A}+\alpha
I)({\mathbb{R}}.\widetilde{1})={\mathbb{R}}\widetilde{1}$. Hence, since
$\mathcal{A}+\alpha I$ is self-adjoint with respect to the inner product of
$L^{2}(\rho_{P})$ and ${\mathbb{R}}\widetilde{1}$ is stable under
$\mathcal{A}+\alpha I$, we get $(\mathcal{A}+\alpha
I)\Big{(}({\mathbb{R}}.\widetilde{1})^{\perp}\cap\mathcal{D}_{L^{2}({\mathbb{R}})}(\mathcal{A})\Big{)}\subset({\mathbb{R}}.\widetilde{1})^{\perp}$.
For the converse, let $u\in({\mathbb{R}}.\widetilde{1})^{\perp}$. Since
$\mathcal{A}+\alpha I$ is bijective, there exists
$v\in\mathcal{D}_{L^{2}({\mathbb{R}})}(\mathcal{A})$ such that
$u=(\mathcal{A}+\alpha I)v$. For all $w\in{\mathbb{R}}.\widetilde{1}$,
$0=\Braket{u,w}_{L^{2}(\rho_{P})}=\Braket{(\mathcal{A}+\alpha
I)v,w}_{L^{2}(\rho_{P})}=\Braket{v,(\mathcal{A}+\alpha I)w}_{L^{2}(\rho_{P})}$
Hence $v\in\big{[}(\mathcal{A}+\alpha
I)({\mathbb{R}}\widetilde{1})\big{]}^{\perp}={\mathbb{R}}\widetilde{1}^{\perp}$
and so $({\mathbb{R}}.\widetilde{1})^{\perp}\subset(\mathcal{A}+\alpha
I)\Big{(}({\mathbb{R}}.\widetilde{1})^{\perp}\Big{)}$. ∎
It is easy to see that $L_{0}^{2}(\rho_{P})$ is a closed subspace of
$L^{2}(\rho_{P})$ as it is the kernel of the linear form $\phi\in
L^{2}(\rho_{P})\mapsto\Braket{\phi,\widetilde{1}}_{L^{2}(\rho_{P})}$, making
it a Hilbert space.
###### Proposition A.11 (Diagonalization and invertibility of $\mathcal{A}$).
There exists a complete orthonormal set of
$\left(L_{0}^{2}(\rho_{P}),\Braket{.,.}_{L^{2}(\rho_{P})}\right)$,
$(\phi_{n})_{n\in\mathbb{N}}\in\mathcal{D}_{L^{2}({\mathbb{R}}),0}(\mathcal{A})^{\mathbb{N}}$
such that $\mathcal{A}\phi_{n}=\lambda_{n}\phi_{n}$ (meaning that
$\overline{\operatorname{span}\\{\phi_{n},\ n\geqslant
0\\}}^{\|.\|_{L^{2}(\rho_{P})}}=L^{2}_{0}(\rho_{P})$
and $\langle\phi_{i},\phi_{j}\rangle_{L^{2}(\rho_{P})}=\delta_{i,j}$).
Furthermore,
$\mathcal{A}:\mathcal{D}_{L^{2}({\mathbb{R}}),0}(\mathcal{A})\longrightarrow
L^{2}_{0}(\rho_{P}):=\Big{\\{}u\in
$L^{2}(\rho_{P}),\,\int_{\mathbb{R}}u\rho_{P}dx=0\Big{\\}}$ is bijective, and
$\mathcal{A}^{-1}$ is continuous when considered as an operator on
$L^{2}_{0}(\rho_{P})$.
###### Proof.
Since $(\mathcal{S}+\alpha I)^{-1}$, considered as an operator on
$L^{2}({\mathbb{R}})$, is compact, so is $(\mathcal{A}+\alpha I)^{-1}$ on
$L^{2}(\rho_{P})$, and since $\mathcal{A}$ is self-adjoint, by the spectral
theorem, $(\mathcal{A}+\alpha I)^{-1}$ is diagonalizable. With the notation
of Theorem A.8, $(\mathcal{A}+\alpha I)^{-1}$ has eigenvalues
$\big{(}\mu_{n}(\alpha)\big{)}_{n\geqslant 0}$ and corresponding
eigenfunctions
$\phi_{n}=\rho_{P}^{-1/2}\psi_{n}\in\mathcal{D}_{L^{2}({\mathbb{R}})}(\mathcal{A})$.
Hence for all $n\in\mathbb{N}$, $\mathcal{A}\phi_{n}=\lambda_{n}\phi_{n}$ with
$\lambda_{n}:=\big{(}\dfrac{1}{\mu_{n}(\alpha)}-\alpha\big{)}$. Now,
$\lambda_{n}\|\phi_{n}\|^{2}_{L^{2}(\rho_{P})}=\int_{\mathbb{R}}(\mathcal{A}\phi_{n})\phi_{n}\rho_{P}dx=-\int_{\mathbb{R}}(\rho_{P}\phi_{n}^{\prime})^{\prime}\phi_{n}=\int_{\mathbb{R}}\phi_{n}^{\prime
2}\rho_{P}\geqslant\,0\,.$
Furthermore, the kernel of $\mathcal{A}$ is ${\mathbb{R}}.\widetilde{1}$, thus
the spectrum of $\mathcal{A}$ restricted to
$\mathcal{D}_{L^{2}({\mathbb{R}}),0}(\mathcal{A})$ is positive. But since
$(\mathcal{A}+\alpha I)^{-1}$ is a compact operator on $L^{2}(\rho_{P})$ and
$(\mathcal{A}+\alpha I)$ maps ${\mathbb{R}}.\widetilde{1}^{\perp}$ to
${\mathbb{R}}.\widetilde{1}^{\perp}$ with respect to the inner product of
$L^{2}(\rho_{P})$ (see Lemma A.10), then $\big{(}\mathcal{A}+\alpha
I\big{)}^{-1}$ is compact as an operator from $L^{2}_{0}(\rho_{P})$ to itself.
By the Fredholm alternative, for every $\lambda\in{\mathbb{R}}$, $\lambda\neq
0$, either $(\mathcal{A}+\alpha I)^{-1}-\lambda I$ is bijective or $\lambda\in
Sp\big{(}(\mathcal{A}+\alpha I)^{-1}\big{)}$. These conditions are equivalent
to: either $\mathcal{A}+(\alpha-\dfrac{1}{\lambda})I$ is bijective as an
operator from $\mathcal{D}_{L^{2}({\mathbb{R}}),0}(\mathcal{A})$ to
$L^{2}_{0}(\rho_{P})$, or $-\alpha+\dfrac{1}{\lambda}\in
Sp\big{(}\mathcal{A}\big{)}$. If we set $\lambda=\dfrac{1}{\alpha}$, then
either $\mathcal{A}$ is bijective or $0\in Sp(\mathcal{A})$; since the latter
is false,
$\mathcal{A}:\mathcal{D}_{L^{2}({\mathbb{R}}),0}(\mathcal{A})\rightarrow
L^{2}_{0}(\rho_{P})$ is bijective. The spectrum of $\mathcal{A}$ is
$\left(\dfrac{1}{\mu_{n}(\alpha)}-\alpha\right)_{n\geqslant
0}\subset[\lambda_{1},+\infty)\subset(0,+\infty)$, where $\lambda_{1}$ is the
smallest eigenvalue, hence we deduce that
${\left|\kern-1.07639pt\left|\kern-1.07639pt\left|\mathcal{A}^{-1}\right|\kern-1.07639pt\right|\kern-1.07639pt\right|}_{\mathcal{L}\left(L^{2}(\rho_{P})\right)}\leqslant\lambda_{1}^{-1}$.
∎
## References
# A Meta-Learning Method for Estimation of Causal Excursion Effects to Assess
Time-Varying Moderation
Jieru Shi, Walter Dempsey
Department of Biostatistics, University of Michigan
###### Abstract
Twin revolutions in wearable technologies and smartphone-delivered digital
health interventions have significantly expanded the accessibility and uptake
of mobile health (mHealth) interventions across various health science
domains. Sequentially randomized experiments called micro-randomized trials
(MRTs) have grown in popularity to empirically evaluate the effectiveness of
these mHealth intervention components. MRTs have given rise to a new class of
causal estimands known as “causal excursion effects”, which enable health
scientists to assess how intervention effectiveness changes over time or is
moderated by individual characteristics, context, or responses in the past.
However, current data analysis methods for estimating causal excursion effects
require pre-specified features of the observed high-dimensional history to
construct a working model of an important nuisance parameter. While machine
learning algorithms are ideal for automatic feature construction, their naive
application to causal excursion estimation can lead to bias under model
misspecification, potentially yielding incorrect conclusions about
intervention effectiveness. To address this issue, this paper revisits the
estimation of causal excursion effects from a meta-learner perspective, where
the analyst remains agnostic to the choices of supervised learning algorithms
used to estimate nuisance parameters. The paper presents asymptotic properties
of the novel estimators and compares them theoretically and through extensive
simulation experiments, demonstrating relative efficiency gains and supporting
the recommendation for a doubly robust alternative to existing methods.
Finally, the practical utility of the proposed methods is demonstrated by
analyzing data from a multi-institution cohort of first-year medical residents
in the United States (NeCamp et al., 2020).
Keywords: Debiased/Orthogonal Estimation, Machine Learning, Double Robustness,
Causal Excursion Effect, Mobile Health, Time-Varying Treatment.
## 1 Introduction
The use of smart devices (e.g., smartphones, smartwatches) and other wearables
to deliver digital interventions to improve health outcomes has grown
significantly in the past few years. Low-cost, accessible digital
interventions can be delivered everywhere, anytime, and in any amount, even to
reticent or hard-to-reach populations. Interventions of this type are
hypothesized to result in meaningful short- and long-term behavior changes.
The assessment of such time-varying effects prompted the development of micro-
randomized trials (MRTs), in which individuals are randomized to receive
notifications at hundreds or thousands of decision points.
The MRT enables estimation of proximal or lagged effects of push notifications
on pre-specified outcomes of interest, referred to as “causal excursion
effects” (Boruvka et al., 2018; Qian et al., 2020; Dempsey et al., 2020; Shi
et al., 2022). Semiparametric inference of the causal excursion effects can be
conducted via a weighted, centered least squares (WCLS) criterion (Boruvka et
al., 2018). A key feature of implementing the WCLS criterion is that health
scientists must pre-specify features from the high-dimensional observed
history to formulate a linear working model for a critical nuisance parameter,
which is a challenging, non-trivial task.
Machine learning (ML) algorithms offer powerful tools to automatically
construct features for the nuisance components, but their naive application to
semiparametric inference can lead to bias in estimation. Chernozhukov et al.
(2018) shows that Neyman-orthogonal moments and cross-fitting can remove the
impact of regularization bias and overfitting caused by naive application of
the ML methods. Later, several meta-learner algorithms emerged that can take
advantage of supervised learning or regression methods in ML and statistics,
such as random forests, Bayesian Additive Regression Trees (BART), or neural
networks, to estimate the nuisance components (Hill, 2011; Semenova and
Chernozhukov, 2021; Künzel et al., 2019; Nie and Wager, 2021; Kennedy, 2020).
These papers provide flexible, well-performing methods for estimating
conditional average treatment effects (CATE) in randomized controlled trials
and observational studies, and illustrate how ML methods can be applied for
semiparametric inference.
While meta-learning approaches have been developed extensively for CATE
estimation, their application to longitudinal studies has been relatively
limited. Viviano and Bradic (2021) proposed a dynamic covariate balancing
method when high-dimensional covariates are present. Bodory et al. (2022) used
this approach to examine effects under dynamic treatment regimes using Double
Machine Learning (DML) (Chernozhukov et al., 2018) and semiparametrically
efficient estimation. In this setting, DML is used to control for observed and
time-varying covariates in a data-driven way, across treatment sequences in
different time periods. Lewis and Syrgkanis (2020) proposed a new DML approach
for estimating causal effect under dynamic treatment regimes by using
g-estimation – a sequential residualization approach that uses supervised
learning of debiased outcomes on debiased treatments over a specific time
period based on linear parameterization of blip functions.
While prior studies on DML methods for longitudinal analysis have mainly
focused on estimating the average treatment effect (ATE) on a distal outcome,
often under pre-specified or dynamic non-random treatment sequences, our
current work takes a different perspective. Specifically, we propose a meta-
learning framework that assesses time-varying causal effect moderation in
MRTs, providing a novel solution for estimating moderated treatment effects on
proximal outcomes using DML methods. The proposed method can help health
scientists improve their ability to answer critical scientific questions
regarding time-varying effect moderation, and determine when, in what
context, and with what content to deliver interventions to each person so as
to make the intervention most effective (Qian et al., 2022).
### 1.1 Outline
The rest of the paper proceeds as follows. Section 2 reviews existing analytic
techniques for MRTs and ML methods in causal inference. We then summarize our
main contributions in Section 3 and explain why allowing for the use of ML
approaches in WCLS is challenging. We propose two new inferential methods in
Section 4, which leverage supervised learning methods to improve the
efficiency of time-varying treatment effect estimation. In Section 5.1, we
outline the inferential algorithms. Sections 5.2 and 5.3 present the asymptotic
theory as well as discuss the relative efficiency gain of the proposed
methods. Section 6 discusses the extension of the proposed methods to settings
such as missing data, lagged effects, and binary outcomes. Section 7 uses
simulation studies to compare various estimators and standard errors. Section
8 illustrates the efficiency improvement using our proposed methods with a
recent MRT: the Intern Health Study (NeCamp et al., 2020). The paper concludes
with a brief discussion in Section 9. All technical proofs are collected in
the Supplementary Materials.
## 2 Preliminaries
### 2.1 Micro-Randomized Trials (MRT)
An MRT consists of a sequence of within-subject decision times $t=1,\ldots,T$
at which treatment options are randomly assigned (Liao et al., 2016).
Individual-level data can be summarized as
$\\{O_{0},O_{1},A_{1},O_{2},A_{2},\ldots,O_{T},A_{T},O_{T+1}\\}$ where $t$
indexes a sequence of decision points, $O_{0}$ is the baseline information,
$O_{t}$ is the information collected between time $t-1$ and $t$, and $A_{t}$
is the treatment option provided at time $t$; here we consider binary
treatment options, i.e., $A_{t}\in\\{0,1\\}$. In an MRT, $A_{t}$ is randomized
with randomization probabilities that may depend on the complete observed
history $H_{t}:=\\{O_{0},O_{1},A_{1},\ldots,A_{t-1},O_{t}\\}$, denoted
$\mathbf{p}=\\{p_{t}(A_{t}\,|\,H_{t})\\}_{t=1}^{T}$. Treatment options are
designed to impact a proximal response, denoted by $Y_{t+1}$, which is a
function of the observed history and the latest treatment, i.e.,
$Y_{t+1}=y(H_{t},A_{t})$ (Dempsey et al., 2020).
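To fix ideas, the following minimal sketch simulates data with exactly this
structure. The state dynamics, the history-dependent randomization rule, and
the effect sizes below are illustrative assumptions, not part of any MRT
discussed in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 50                                   # decision points per individual

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def simulate_individual():
    O = rng.normal()                     # baseline information O_0
    rows = []
    for t in range(1, T + 1):
        O = 0.5 * O + rng.normal()       # O_t: new information since t-1
        p = sigmoid(0.3 * O)             # p_t(1 | H_t): depends on history
        A = rng.binomial(1, p)           # randomized binary treatment A_t
        Y = O + (0.4 + 0.2 * O) * A + rng.normal()   # proximal outcome Y_{t+1}
        rows.append((t, O, p, A, Y))
    return rows

data = [simulate_individual() for _ in range(100)]   # n = 100 individuals
```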
### 2.2 Estimands and Inferential Methods: A Review
The class of estimands, referred to as “causal excursion effects”, was
developed to assess whether mobile health interventions influence the proximal
health outcomes they were designed to impact (Heron and Smyth, 2010). These
time-varying effects are a function of the decision point $t$ and a set of
moderators $S_{t}$ and marginalize over all other observed and unobserved
variables (Dempsey et al., 2020; Qian et al., 2020). We provide formal
definitions using potential outcomes (Rubin, 1978; Robins, 1986).
Let $Y_{t+1}(\bar{a}_{t-1})$ denote the potential outcome for the proximal
response under treatment sequence $\bar{a}_{t-1}$. Let $O_{t}(\bar{a}_{t-1})$
denote the potential information collected between time $t-1$ and $t$. Let
$S_{t}(\bar{a}_{t-1})$ denote the potential outcome for a time-varying effect
moderator which is a deterministic function of the potential history up to
time $t$, $H_{t}(\bar{a}_{t-1})$. We consider the setting in which the
potential outcomes are i.i.d over users according to a distribution
$\mathcal{P}$, i.e.,
$\left\\{O_{t}(\bar{a}_{t-1}),Y_{t+1}(\bar{a}_{t-1})\right\\}_{t=1}^{T}\overset{\text{i.i.d}}{\sim}\mathcal{P}$.
The causal excursion effect estimand is defined as:
$\beta_{\mathbf{p}}(t;s)=\mathbb{E}_{{\mathbf{p}}}\left[Y_{t+1}\left(\bar{A}_{t-1},A_{t}=1\right)-Y_{t+1}\left(\bar{A}_{t-1},A_{t}=0\right)\,|\,S_{t}(\bar{A}_{t-1})=s\right].$
(1)
Equation (1) is defined with respect to a reference distribution $\mathbf{p}$,
i.e., the joint distribution of treatments
$\bar{A}_{t-1}:=\\{A_{1},A_{2},\dots,A_{t-1}\\}$. We follow common practice in
observational mobile health studies where analyses such as GEEs (Liang and
Zeger, 1986) are conducted marginally over $\mathbf{p}$. To express the
proximal response in terms of the observed data, we assume positivity,
consistency, and sequential ignorability (Robins, 1994, 1997):
###### Assumption 2.1.
We assume consistency, positivity, and sequential ignorability:
* •
Consistency: For each $t\leq T$,
$\\{Y_{t+1}(\bar{A}_{t}),O_{t}(\bar{A}_{t-1}),A_{t}(\bar{A}_{t-1})\\}=\\{Y_{t+1},O_{t},A_{t}\\}$,
i.e., observed values equal the corresponding potential outcomes;
* •
Positivity: if the joint density of $\\{A_{t}=a_{t},H_{t}=h_{t}\\}$ is greater
than zero, then $P(A_{t}=a_{t}\,|\,H_{t}=h_{t})>0$;
* •
Sequential ignorability: For each $t\leq T$, the potential outcomes
$\\{Y_{t+1}(\bar{a}_{t}),O_{t+1}(\bar{a}_{t}),A_{t+1}(\bar{a}_{t}),\dots,Y_{T+1}(\bar{a}_{T})\\}$
are independent of $A_{t}$ conditional on the observed history $H_{t}$.
Under Assumption 2.1, (1) can be re-expressed in terms of observable data:
$\beta_{\mathbf{p}}(t;s)=\mathbb{E}\left[\mathbb{E}_{{\mathbf{p}}}\left[Y_{t+1}\mid
A_{t}=1,H_{t}\right]-\mathbb{E}_{{\mathbf{p}}}\left[Y_{t+1}\mid
A_{t}=0,H_{t}\right]\mid S_{t}=s\right].$ (2)
The causal excursion effect is typically assumed to take a known linear form,
i.e., $\beta_{{\mathbf{p}}}(t;s)=f_{t}(s)^{\top}\beta^{\star}$, where
$f_{t}(s)\in\mathbb{R}^{q}$ is a feature vector comprised of a $q$-dimensional
summary of observed information depending only on state $s$ and decision point
$t$. MRTs are experimental studies with pre-specified randomization
probabilities. It is therefore common to impose the following condition:
###### Assumption 2.2.
The randomization probability $p_{t}(A_{t}|H_{t})$ is known or correctly
specified via a parametric model $p_{t}(A_{t}|H_{t};\theta)$ for
$\theta\in\mathbb{R}^{d}$.
A consistent estimator $\hat{\beta}_{n}$ can be obtained by minimizing a
weighted and centered least squares (WCLS) criterion (Boruvka et al., 2018):
$\displaystyle\mathbb{P}_{n}\left[\sum_{t=1}^{T}W_{t}(Y_{t+1}-g_{t}(H_{t})^{\top}\alpha-(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})^{\top}\beta)^{2}\right],$
(3)
where $\mathbb{P}_{n}$ is an operator denoting the sample average,
$W_{t}=\tilde{p}_{t}(A_{t}|S_{t})/p_{t}(A_{t}|H_{t})$ is a weight where the
numerator is an arbitrary function with range $(0,1)$ that only depends on
$S_{t}$, and $g_{t}(H_{t})\in\mathbb{R}^{p}$ are $p$ control variables.
Important to this paper, the linear term $g_{t}(H_{t})^{\top}\alpha$ is a
working model for $\mathbb{E}[W_{t}Y_{t+1}|H_{t}]$, which can be viewed as a
nuisance function. A high-quality model of the nuisance function can help
reduce variance and construct more powerful test statistics. See Boruvka et
al. (2018) for more details on the estimand formulation and consistency,
asymptotic normality, and robustness properties of this method.
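As an illustration, the WCLS point estimate can be computed by weighted least
squares on the centered design. The sketch below assumes flattened arrays
over all (individual, $t$) pairs and a constant choice of
$\tilde{p}_{t}(1|S_{t})$; this layout is ours, and standard errors, which
require the individual-clustered sandwich estimator, are omitted.

```python
import numpy as np

def wcls(Y, A, p1, G, F, p_tilde=0.5):
    """WCLS point estimate from criterion (3).

    Y, A, p1: outcome, treatment, and known p_t(1 | H_t), flattened over
    (individual, t); G: rows g_t(H_t); F: rows f_t(S_t).
    """
    # Weight W_t = \tilde p_t(A_t | S_t) / p_t(A_t | H_t)
    pA = np.where(A == 1, p1, 1.0 - p1)
    pA_tilde = np.where(A == 1, p_tilde, 1.0 - p_tilde)
    W = pA_tilde / pA
    # Design: control columns for alpha, centered-treatment columns for beta.
    Z = np.hstack([G, (A - p_tilde)[:, None] * F])
    sw = np.sqrt(W)
    coef, *_ = np.linalg.lstsq(sw[:, None] * Z, sw * Y, rcond=None)
    q = F.shape[1]
    return coef[-q:]          # beta: causal excursion effect coefficients
```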
###### Remark 2.3.
Correct causal effect specification, i.e., $\beta_{{\bf
p}}(t;s)=f_{t}(s)^{\top}\beta^{\star}$ is not required. Instead, we can follow
prior literature (Dempsey et al., 2020; Shi et al., 2022) and interpret the
proposed linear form as a working model. Specifically, $\hat{\beta}$ is a
consistent and asymptotically normal estimator for
$\beta^{\star}=\arg\min_{\beta}\mathbb{E}\left[\sum_{t=1}^{T}\tilde{p}_{t}(1|S_{t})(1-\tilde{p}_{t}(1|S_{t}))(\beta(t;S_{t})-f_{t}(S_{t})^{\top}\beta)^{2}\right].$
Therefore, the working model can be interpreted as an $L_{2}$ projection of
the true causal excursion effect onto the space spanned by a $q$-dimensional
feature vector that only includes $t$ and $s$, denoted by
$f_{t}(s)^{\top}\beta^{\star}$ (Dempsey et al., 2020). Interpretation as a
projection or as a correctly specified causal effect can be viewed as a bias-
variance trade-off. The projection interpretation guarantees well-defined
parameter interpretation in practice.
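The projection interpretation can be made concrete numerically. The following
minimal sketch computes $\beta^{\star}$ for a hypothetical nonlinear true
effect $\beta(t;s)$ and a moderator-dependent $\tilde{p}_{t}(1|S_{t})$; all
quantities are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
s = rng.normal(size=10_000)                     # draws of the moderator S_t

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

pt = sigmoid(0.5 * s)                           # \tilde p_t(1 | S_t)
w = pt * (1.0 - pt)                             # projection weights
beta_ts = 0.4 + 0.2 * np.tanh(s)                # assumed true effect beta(t; s)

F = np.column_stack([np.ones_like(s), s])       # feature vector f_t(s) = (1, s)
sw = np.sqrt(w)
beta_star, *_ = np.linalg.lstsq(sw[:, None] * F, sw * beta_ts, rcond=None)
print(beta_star)     # coefficients of the weighted L2 projection of beta(t; s)
```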
### 2.3 Machine Learning and Causal Effect Estimation
Chernozhukov et al. (2018) provided a generic DML approach for obtaining valid
inferential statements about focal parameters, using Neyman-orthogonal scores
and cross-fitting, in settings where nuisance parameters are estimated using
ML methods. As a motivating example, here we consider the following partially
linear regression (PLR) model as in Robinson (1988):
$\displaystyle Y$
$\displaystyle=A\beta_{0}+m_{0}(X)+U,~{}~{}~{}~{}~{}~{}~{}\mathbb{E}[U|X,A]=0,$
$\displaystyle A$
$\displaystyle=p_{0}(X)+V,~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}\mathbb{E}[V|X]=0.$
Here, $Y$ is the outcome variable, $A$ represents the treatment indicator,
$X=(X_{1},...,X_{p})$ consists of baseline covariates, and $U$ and $V$ are
noise variables. The parameter of interest is $\beta_{0}$, i.e., the treatment
effect.
In many applications, the dimension of baseline covariates, $\dim(X)=p$, is
large relative to the sample size $N$. To capture this, modern theoretical
analyses consider $p$ increasing with sample size. Traditional analyses that
limit the complexity of the nuisance functions $g_{0}=(m_{0},p_{0})$ will fail
in these settings. Chernozhukov et al. (2018) apply Neyman orthogonality and
sample splitting to overcome the failures of traditional methods. Suppose, for
the sake of clarity, that we randomly split the sample into two parts: a main
part of size $n$, with observation numbers indexed by $i\in\mathcal{I}$, and an
auxiliary part of size $N-n$, with observations indexed by
$i\in\mathcal{I}^{\complement}$. Let $\hat{g}_{0}$ be the estimator of the
nuisance parameter $g_{0}$, which is obtained using the auxiliary sample, and
the estimator of $\beta_{0}$ is obtained using the main sample and satisfying:
$\frac{1}{n}\sum_{i\in\mathcal{I}}\psi(W_{i};\hat{\beta}_{0},\hat{g}_{0})=0,$
where $\psi$ is an orthogonalized or debiased score function, i.e., it
satisfies the property that the Gateaux derivative operator with respect to
$g$ vanishes when evaluated at the true parameter values:
$\partial_{g}\mathbb{E}\left[\psi(W;\beta_{0},g_{0})\right](g-g_{0})=0.$ (4)
Using moment conditions that satisfy Equation (4) to construct estimators and
inference procedures that are robust to mistakes in nuisance parameters has a
long history in statistics. We refer to property (4) as _Neyman orthogonality_
and to $\psi$ as the _Neyman orthogonal score function_ due to the fundamental
contributions in Neyman (1979), where this notion was introduced. The score
functions $\psi$ are not sensitive to biased estimation of $g_{0}$ in the
sense that (4) holds. Neyman orthogonality and sample splitting are the main
tools for establishing good behavior of an estimator for $\beta_{0}$.
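The following minimal cross-fitted sketch illustrates this for the PLR model
via Robinson-style partialling-out, which yields a Neyman-orthogonal score.
The random-forest learners and the two-fold split are illustrative choices,
not requirements.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

def dml_plr(Y, A, X, n_splits=2, seed=0):
    """Cross-fitted estimate of beta_0 in Y = A*beta_0 + m_0(X) + U."""
    Y_res = np.zeros_like(Y, dtype=float)
    A_res = np.zeros_like(A, dtype=float)
    for train, test in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        # ell_0(X) = E[Y | X] and p_0(X) = E[A | X], fit on the auxiliary fold
        ell = RandomForestRegressor(random_state=seed).fit(X[train], Y[train])
        p = RandomForestRegressor(random_state=seed).fit(X[train], A[train])
        Y_res[test] = Y[test] - ell.predict(X[test])
        A_res[test] = A[test] - p.predict(X[test])
    # Orthogonal moment: E[(Y - ell_0(X) - (A - p_0(X)) beta)(A - p_0(X))] = 0
    return np.sum(A_res * Y_res) / np.sum(A_res ** 2)
```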
## 3 Contributions
The objective of this paper is to advance the understanding of optimal methods
for estimating causal excursion effects. The main challenges in achieving this
are two-fold: First, in MRTs where treatments, responses, and moderators all
vary with time, there is a need to effectively incorporate supervised learning
methods and sample splitting into the causal excursion effect estimation
without introducing bias. Second, it is increasingly common for Assumption 2.2
to be violated due to unknown randomization probabilities or implemented
probabilities not matching the recorded probabilities. In these settings, the
current WCLS approach can only provide consistent estimates if the working
_linear_ outcome regression model for $\mathbb{E}[Y_{t+1}|H_{t},A_{t}]$ is
correctly specified. It is therefore important to develop an inferential
procedure that is robust to model misspecification.
This paper makes three original contributions: First, we introduce two
inferential procedures for estimating causal excursion effects that
incorporate DML techniques, called “R-WCLS” and “DR-WCLS”. To mitigate
overfitting bias, cross-fitting is employed to estimate the first stage of
data-driven plug-ins. Second, we demonstrate the $\sqrt{n}$-consistency and
asymptotic normality of the proposed estimators under regularity conditions,
while remaining agnostic to the specific supervised learning algorithm used to
learn the plug-in models. Third, we provide theoretical guarantees of double-
robustness and gains in estimation efficiency relative to the WCLS approach.
## 4 A Supervised Learning Algorithm Agnostic Approach to Moderation Analysis
We begin with a moment-based estimand, as defined in Equation (2), and assume
a parametric model for the causal excursion effect, denoted as
$\beta(t;s)=f_{t}(S_{t})^{\top}\beta^{\star}$, where
$\beta^{\star}\in\mathbb{R}^{q}$. The WCLS criterion provides a set of
estimating equations used to perform inference about the causal parameter
$\beta^{\star}$. This approach suggests that the nuisance parameter can be
expressed as a sequence of expectations
$\mathbf{g}=\\{g_{t}(H_{t})=\mathbb{E}[W_{t}Y_{t+1}|H_{t}]\\}_{t=1}^{T}$, with
a population value of $\mathbf{g}^{\star}$. To estimate these quantities, the
WCLS criterion only considers linear working models
$\\{g_{t}(H_{t})^{\top}\alpha\\}_{t=1}^{T}$.
Constructing linear working models, however, can pose a significant challenge
as researchers must pre-specify features from the high-dimensional observed
history $H_{t}$, which is a non-trivial task. To address this challenge and
increase the modeling flexibility of nuisance functions, we reframe the
estimating equation (3) in a more general form that puts no modeling
assumptions in $g_{t}(H_{t})$ and allows its dimensions to grow with sample
size. Here, we assume that $\beta^{\star}$ satisfies the moment conditions:
$\mathbb{E}[\psi(\beta^{\star};\mathbf{g}^{\star})]=0$, where
$\psi(\beta;\mathbf{g})$ is the estimating equation for $\beta$:
$\psi(\beta;\mathbf{g})=\sum_{t=1}^{T}W_{t}\Big{[}Y_{t+1}-g_{t}(H_{t})-(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})^{\top}\beta\Big{]}(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t}).$
(5)
We can recover WCLS by replacing $g_{t}(H_{t})$ with a linear working model
with fixed dimension, i.e., $g(H_{t})^{\top}\alpha$ for
$\alpha\in\mathbb{R}^{p}$. To ensure robustness and valid inference for
$\beta$, we require Neyman orthogonality for the estimating equation
$\psi(\beta;\mathbf{g})$ (Chernozhukov et al., 2015). The Gateaux derivative
operator with respect to $\mathbf{g}$ is:
$G(\mathbf{g})[\mathbf{g}-\mathbf{g}^{\star}]=-\mathbb{E}\left[\sum_{t=1}^{T}W_{t}\big{(}g_{t}(H_{t})-g^{\star}_{t}(H_{t})\big{)}(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})\right]=0,$
(6)
thus (5) satisfies Neyman orthogonality. Intuitively, Neyman orthogonality
implies that the moment conditions used to identify $\beta^{\star}$ are
sufficiently insensitive to the nuisance parameter estimates, allowing us to
directly plug in estimates of $\mathbf{g}^{\star}$ while still obtaining high-
quality inference for $\beta$.
We now consider estimating $g_{t}(H_{t})$. By definition,
$g_{t}(H_{t})=\mathbb{E}[W_{t}Y_{t+1}|H_{t}]$ is the conditional expectation
of the weighted proximal outcome. Let $g_{t}(H_{t},A_{t})$ denote a working
model for $\mathbb{E}[Y_{t+1}|H_{t},A_{t}]$, then we have the decomposition:
$g_{t}(H_{t})=\tilde{p}_{t}(1|S_{t})g_{t}(H_{t},1)+(1-\tilde{p}_{t}(1|S_{t}))g_{t}(H_{t},0)$.
Based on this, we propose the following _R-WCLS criterion_ , which minimizes:
$\mathbb{P}_{n}\left[\sum_{t=1}^{T}W_{t}\left(Y_{t+1}-\Big{(}\tilde{p}_{t}(1|S_{t})g_{t}(H_{t},1)+(1-\tilde{p}_{t}(1|S_{t}))g_{t}(H_{t},0)\Big{)}-(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})^{\top}\beta\right)^{2}\right].$
(7)
The proposed approach (7) allows us to learn these conditional expectations
without pre-specifying features to construct a parametric working model. This
reflects the fact that in practice, only a small subset of the parameters are
of key scientific interest, and an analyst may prefer to be agnostic about the
nuisance parameters. As an example, we can use supervised learning methods to
train the working model $g_{t}(H_{t},A_{t})$, along with cross-fitting to
obtain informative error analysis, avoid overfitting, and optimize out-of-
sample prediction accuracy, which is particularly useful when the dimension of
the complete observed history is high. Theorem 5.2 in Section 5.2 shows that
the estimator $\hat{\beta}^{(R)}_{n}$ obtained by minimizing Equation (7) is a
consistent estimator of the parameter of interest $\beta^{\star}$ under the
assumption that the randomization probability $p_{t}(A_{t}|H_{t})$ is either
known or correctly specified.
###### Remark 4.1 (Connection to the WCLS criterion).
The R-WCLS criterion replaces $g_{t}(H_{t})^{\top}\alpha$ in the WCLS
criterion, which was a linear working model for
$\mathbb{E}[W_{t}Y_{t+1}|H_{t}]$, with a general choice of working models.
Setting $g_{t}(H_{t},A_{t})$ to be the linear working model
$g_{t}(H_{t})^{\top}\alpha+(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})^{\top}\beta$,
the R-WCLS criterion recovers the original WCLS criterion. Thus, (7) is a
strict generalization of (3).
###### Remark 4.2 (Connection to the R-learner).
In traditional causal inference with a single treatment $A$, fully-observed
set of confounders $X$, and outcome $Y$, a two-stage estimator, referred to as
the R-Learner, was previously proposed by Nie and Wager (2021). Beyond our
extension to the time-varying setting, there are two key distinguishing
features of R-WCLS in (7) when compared with the R-Learner. First, we focus on
estimating a low-dimensional target parameter, whereas R-learner seeks to
estimate the conditional average treatment effect and allows it to be a
complex function of baseline covariates. Second, the weight $W_{t}$ in the
R-WCLS criterion implicitly depends on the propensity
$\tilde{p}_{t}(1|S_{t})$; we thereby replace the R-learner data-adaptive model for
$\mathbb{E}[W_{t}Y_{t+1}|H_{t}]$ with one for each
$\mathbb{E}[Y_{t+1}|H_{t},a]$, $a\in\\{0,1\\}$, which is invariant to
different choices of moderators $S_{t}$.
### 4.1 A Doubly-Robust Alternative
The above discussion relies on Assumption 2.2 holding. In many MRTs, one
may fail to correctly implement or collect the desired randomization
probabilities, leading to unknown randomization probabilities or uncertainty
in their recorded values. In such cases, the R-WCLS criterion in (7) can only
provide consistent estimates of $\beta^{\star}$ if the fully conditional model
for $\mathbb{E}[Y_{t+1}|H_{t},A_{t}]$ has been correctly specified:
$\mathbb{E}[Y_{t+1}|H_{t},A_{t}]=\left(\tilde{p}_{t}(1|S_{t})g_{t}^{\star}(H_{t},1)+(1-\tilde{p}_{t}(1|S_{t}))g_{t}^{\star}(H_{t},0)\right)+(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})^{\top}\beta^{\star}.$
This implies that
$g^{\star}_{t}(H_{t},1)-g^{\star}_{t}(H_{t},0)=f_{t}(S_{t})^{\top}\beta^{\star}$,
where the fully conditional treatment effect depends only on the specified
moderators $S_{t}$ and the linear model is correctly specified. However, in
practice, $S_{t}$ is often a subset of the potential moderators, so this
assumption is not expected to hold. Therefore, an estimating procedure that
does not rely on a correct model specification will be preferred.
In this section, we present a novel derivation of an alternative, doubly
robust estimator called the _DR-WCLS criterion_ , which is given by:
$\mathbb{P}_{n}\left[\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})\left(\frac{W_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))(Y_{t+1}-g_{t}(H_{t},A_{t}))}{\tilde{\sigma}^{2}_{t}(S_{t})}+\beta(t;H_{t})-f_{t}(S_{t})^{\top}\beta\right)f_{t}(S_{t})\right]=0,$
(8)
where $\beta(t;H_{t})\coloneqq g_{t}(H_{t},1)-g_{t}(H_{t},0)$ is the causal
excursion effect under the fully observed history $H_{t}$, and
$\tilde{\sigma}^{2}_{t}(S_{t})\coloneqq\tilde{p}_{t}(1|S_{t})(1-\tilde{p}_{t}(1|S_{t}))$.
Theorem 5.3 below shows that the estimator $\hat{\beta}^{(DR)}_{n}$ obtained
from solving (8) is doubly-robust, i.e., (8) will yield a consistent estimator
of $\beta^{\star}$ if either the randomization probability
$p_{t}(A_{t}|H_{t})$ _or_ the conditional expectation $g_{t}(H_{t},A_{t})$ is
correctly specified.
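For intuition, consider the fully marginal effect with $f_{t}(S_{t})\equiv
1$. Solving (8) then reduces to a $\tilde{\sigma}^{2}_{t}(S_{t})$-weighted
average of pseudo-outcomes, as in the minimal sketch below; the flattened
array layout and the assumption that nuisance estimates are already in hand
are ours.

```python
import numpy as np

def dr_wcls_marginal(Y, A, p1, pt, g1, g0):
    """Solve (8) with scalar f_t(S_t) = 1, given nuisance estimates:
    g1, g0 for g_t(H_t, 1), g_t(H_t, 0); p1 for p_t(1 | H_t); pt for
    \\tilde p_t(1 | S_t); all arrays flattened over (individual, t)."""
    gA = np.where(A == 1, g1, g0)                 # g_t(H_t, A_t)
    pA = np.where(A == 1, p1, 1.0 - p1)
    W = np.where(A == 1, pt, 1.0 - pt) / pA       # weight W_t
    sigma2 = pt * (1.0 - pt)                      # \tilde sigma_t^2(S_t)
    # Pseudo-outcome: residual correction plus plug-in effect g1 - g0.
    Y_dr = W * (A - pt) * (Y - gA) / sigma2 + (g1 - g0)
    return np.sum(sigma2 * Y_dr) / np.sum(sigma2)
```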
###### Remark 4.3 (Connection to the DR-learner).
The general DR-learner approach was first proposed by Van Der Laan and Rubin
(2006). In later research, the DR-learner, a two-stage doubly robust estimator
with a single treatment $A$, a fully observed set of confounders $X$, and an
outcome $Y$, was proposed (Kennedy, 2020). Beyond our extension to the time-
varying setting, there are two key distinguishing features of (8) when
compared with existing variants of the DR-learner. First, the causal excursion
effect is a marginal effect, so the weights $W_{t}$ and the treatment-
centering probability are dependent on the moderators, whereas the DR-learner
estimates a fully conditional causal effect. Second, time-varying treatments
and the projection interpretation (see Dempsey et al. (2020) for a detailed
discussion) in the feature space $f_{t}(S_{t})$ require the additional weights
$\tilde{\sigma}^{2}_{t}(S_{t})$ in the DR-WCLS estimating equations.
### 4.2 Connection Between R-WCLS and DR-WCLS
In recent work from Morzywolek et al. (2023), a unified framework was
presented for estimating heterogeneous treatment effects, resulting in a class
of weighted loss functions with nuisance parameters. They showed that the
R-Learner (Nie and Wager, 2021) and the DR-Learner (Kennedy, 2020) can be seen
as special cases resulting from particular weighting choices. Here, we present
a complementary viewpoint by showing a simple relationship between the two
proposed methods. We begin by adding and subtracting
$g_{t}(H_{t},A_{t})=A_{t}g_{t}(H_{t},1)+(1-A_{t})g_{t}(H_{t},0)$ from Equation
(7):
$\mathbb{P}_{n}\left[\sum_{t=1}^{T}W_{t}\left(Y_{t+1}-g_{t}(H_{t},A_{t})+\left(A_{t}-\tilde{p}_{t}(1|S_{t})\right)\left(\beta(t;H_{t})-f_{t}(S_{t})^{\top}\beta\right)\right)^{2}\right].$
One can then obtain an estimate of $\beta^{\star}$ by solving the following
estimating equation:
$\displaystyle\mathbb{P}_{n}\left[\sum_{t=1}^{T}W_{t}\left(Y_{t+1}-g_{t}(H_{t},A_{t})\right)\left(A_{t}-\tilde{p}_{t}(1|S_{t})\right)f_{t}(S_{t})\right]+$
(9)
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}\mathbb{P}_{n}\left[\sum_{t=1}^{T}W_{t}\left(A_{t}-\tilde{p}_{t}(1|S_{t})\right)^{2}\left(\beta(t;H_{t})-f_{t}(S_{t})^{\top}\beta\right)f_{t}(S_{t})\right].$
(10)
Under the correct specification of the randomization probabilities, the
Gateaux derivative with respect to $\mathbf{g}$ of both terms (9) and (10)
will be 0. However, if the randomization probabilities are not specified
correctly, term (10) may not have a Gateaux derivative of 0. To address this,
we replace the stochastic term $W_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))^{2}$ in
(10) with its expectation under the correct randomization probability:
$\displaystyle\mathbb{E}\left[\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})(\beta(t;H_{t})-f_{t}(S_{t})^{\top}\beta^{\star})f_{t}(S_{t})\right].$
After this substitution, we recover (8), and the Gateaux
derivative with respect to $\mathbf{g}$ of both terms is no longer
affected by the randomization probability specification. The above derivation
links the R-WCLS and DR-WCLS, showing that the doubly-robust estimators can be
constructed from R-learner methods. Finally, (7) and (8) yield estimation
procedures that are presented in Section 5.1.
## 5 Algorithm and Asymptotic Theory
This section presents the inferential algorithms for R-WCLS and DR-WCLS. In
addition, we provide the corresponding asymptotic theory that can be used for
hypothesis testing and constructing confidence intervals.
### 5.1 Algorithm
Both R-WCLS and DR-WCLS algorithms exploit the structure of (7) and (8) to
characterize the problem as a two-stage weighted regression estimation that
regresses estimated _pseudo-outcomes_ on a feature vector. Cross-fitting is
employed to obtain asymptotic theories and convergence results that are
agnostic of the supervised learning algorithm used for the estimation of the
nuisance parameters, avoiding the Donsker conditions that were prevalent in
the classic semi-parametric inference literature.
##### Step I
Let $K$ be a fixed integer. Form a $K$-fold random partition of
$\\{1,2,\dots,n\\}$ by dividing it into equal parts, each of size $n/K$,
assuming $n$ is a multiple of $K$. For each set $I_{k}$, let
$I_{k}^{\complement}$ denote the observation indices that are not in $I_{k}$.
##### Step II
Learn the appropriate working models for each fold $I_{k}$ using the
individuals in $I_{k}^{\complement}$. Let $\hat{g}_{t}^{(k)}(H_{t},A_{t})$,
$\hat{p}_{t}^{(k)}(1|H_{t})$, and $\hat{\tilde{p}}_{t}^{(k)}(1|S_{t})$ denote
the estimates for $\mathbb{E}[Y_{t+1}|H_{t},A_{t}]$,
$\mathbb{E}[A_{t}|H_{t}]$, and $\mathbb{E}[A_{t}|S_{t}]$ respectively, i.e.,
estimates of the nuisance parameters in the $k$th fold. Note that when
randomization probabilities are known, $\hat{p}_{t}(A_{t}|H_{t})$ is set equal
to $p_{t}(A_{t}|H_{t})$.
##### Step III
Construct the pseudo-outcomes and perform weighted regression estimation:
* •
R-WCLS: For individual $j$ at time $t$, define the pseudo-outcome:
$\tilde{Y}^{(R)}_{t+1,j}:=Y_{t+1,j}-\hat{g}^{(k)}_{t}(H_{t,j},A_{t,j})+\left(A_{t,j}-\hat{\tilde{p}}_{t}^{(k)}(1|S_{t,j})\right)\left(\hat{g}_{t}^{(k)}(H_{t,j},1)-\hat{g}_{t}^{(k)}(H_{t,j},0)\right),$
where $j\in I_{k}$. Then regress $\tilde{Y}^{(R)}_{t+1}$ on
$(A_{t}-\hat{\tilde{p}}_{t}^{(k)}(1|S_{t}))f_{t}(S_{t})^{\top}\beta$ with
weights
$\hat{W}_{t}^{(k)}=\hat{\tilde{p}}_{t}^{(k)}(A_{t}|S_{t})/\hat{p}_{t}^{(k)}(A_{t}|H_{t})$
to obtain estimate $\hat{\beta}_{n}^{(R)}$.
* •
DR-WCLS: For individual $j$ at time $t$, define the pseudo-outcome:
$\tilde{Y}^{(DR)}_{t+1,j}:=\frac{\hat{W}^{(k)}_{t,j}(A_{t,j}-\hat{\tilde{p}}^{(k)}_{t}(1|S_{t,j}))(Y_{t+1,j}-\hat{g}_{t}^{(k)}(H_{t,j},A_{t,j}))}{\hat{\tilde{p}}^{(k)}_{t}(1|S_{t,j})(1-\hat{\tilde{p}}^{(k)}_{t}(1|S_{t,j}))}+\left(\hat{g}_{t}^{(k)}(H_{t,j},1)-\hat{g}_{t}^{(k)}(H_{t,j},0)\right),$
where $j\in I_{k}$. Then regress $\tilde{Y}^{(DR)}_{t+1}$ on
$f_{t}(S_{t})^{\top}\beta$ with weights
$\hat{\tilde{p}}^{(k)}_{t}(1|S_{t})(1-\hat{\tilde{p}}^{(k)}_{t}(1|S_{t}))$ to
obtain $\hat{\beta}_{n}^{(DR)}$.
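Putting Steps I–III together, the following is a minimal sketch of the
cross-fitted DR-WCLS pipeline. The known randomization probabilities (so that
$\hat{p}_{t}=p_{t}$), the flattened data layout with an individual index for
fold assignment, and the random-forest learner for $g_{t}(H_{t},A_{t})$ are
illustrative choices rather than requirements of the method.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

def dr_wcls(Y, A, p1, pt, H, F, idx, K=2, seed=0):
    """Cross-fitted DR-WCLS; rows of H, F are (individual, t) pairs,
    idx holds the individual index of each row."""
    ids = np.unique(idx)
    g1 = np.zeros_like(Y, dtype=float)
    g0 = np.zeros_like(Y, dtype=float)
    for tr, te in KFold(K, shuffle=True, random_state=seed).split(ids):
        in_te = np.isin(idx, ids[te])
        in_tr = ~in_te
        # Step II: learn g_t(H_t, A_t) on the complement folds.
        g = RandomForestRegressor(random_state=seed).fit(
            np.column_stack([H[in_tr], A[in_tr]]), Y[in_tr])
        g1[in_te] = g.predict(np.column_stack([H[in_te], np.ones(in_te.sum())]))
        g0[in_te] = g.predict(np.column_stack([H[in_te], np.zeros(in_te.sum())]))
    # Step III: DR pseudo-outcome, then weighted regression on f_t(S_t).
    gA = np.where(A == 1, g1, g0)
    W = np.where(A == 1, pt, 1.0 - pt) / np.where(A == 1, p1, 1.0 - p1)
    sigma2 = pt * (1.0 - pt)
    Y_dr = W * (A - pt) * (Y - gA) / sigma2 + (g1 - g0)
    sw = np.sqrt(sigma2)
    beta, *_ = np.linalg.lstsq(sw[:, None] * F, sw * Y_dr, rcond=None)
    return beta
```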
###### Remark 5.1.
Without sample splitting, the estimated nuisance functions are now correlated,
hence introducing spurious correlation in Step III. Typical approaches would
constrain the function $g(\cdot)$ in Step II to belong to a function class
with relatively simple statistical complexity, typically referred to as a
Donsker function class. Then the Step III estimate is $\sqrt{n}$-consistent
and asymptotically normal. Chen et al. (2022) shows that neither the sample
splitting nor Donsker property is required if the estimate $\hat{g}(\cdot)$
satisfies leave-one-out stability properties and the moment function satisfies
the weak mean-squared-continuity property of Chernozhukov et al. (2021). This
allows for sample reuse, which can benefit moderately sized sample regimes.
Here we aim to stay agnostic about the choice of $g(\cdot)$, but we consider
extensions that do not require sample splitting as important future work.
### 5.2 Asymptotic Properties
Here, we demonstrate the asymptotic theory for both R-WCLS and DR-WCLS
estimators obtained using the algorithm described above. All asymptotic
statements assume $\hat{p}_{t}(A_{t}|H_{t})$ is bounded away from 0 and 1, $T$
and $K$ both finite and fixed, and $n$ increasing to infinity.
###### Theorem 5.2 (Asymptotic property of R-WCLS estimator).
Under Assumption 2.2, and given invertibility and moment conditions, the
estimator $\hat{\beta}^{(R)}_{n}$ that minimizes (7) is consistent and
asymptotically normal such that
$\sqrt{n}(\hat{\beta}^{(R)}_{n}-\beta^{\star})\rightarrow\mathcal{N}(0,\Sigma_{R})$,
where $\Sigma_{R}$ is defined in Appendix A. In particular, with the algorithm
outlined in Section 5.1, $\Sigma_{R}$ can be consistently estimated by:
$\left[\frac{1}{K}\sum_{k=1}^{K}\mathbb{P}_{n,k}\left\\{\dot{m}(\hat{\beta},\hat{\eta}_{k})\right\\}\right]^{-1}\times\left[\frac{1}{K}\sum_{k=1}^{K}\mathbb{P}_{n,k}\left\\{m(\hat{\beta},\hat{\eta}_{k})m(\hat{\beta},\hat{\eta}_{k})^{\top}\right\\}\right]\times\left[\frac{1}{K}\sum_{k=1}^{K}\mathbb{P}_{n,k}\left\\{\dot{m}(\hat{\beta},\hat{\eta}_{k})\right\\}\right]^{-1},$
(11)
where
$m(\hat{\beta},\hat{\eta}_{k})=\sum_{t=1}^{T}\hat{W}^{(k)}_{t}\left(\tilde{Y}^{(R)}_{t+1}-(A_{t}-\hat{\tilde{p}}_{t}^{(k)}(1|S_{t}))f_{t}(S_{t})^{\top}\hat{\beta}^{(R)}_{n}\right)(A_{t}-\hat{\tilde{p}}_{t}^{(k)}(1|S_{t}))f_{t}(S_{t})$,
$\dot{m}(\hat{\beta},\hat{\eta}_{k})=\sum_{t=1}^{T}\hat{\tilde{p}}_{t}^{(k)}(1|S_{t})(1-\hat{\tilde{p}}_{t}^{(k)}(1|S_{t}))f_{t}(S_{t})f_{t}(S_{t})^{\top}$,
and $\mathbb{P}_{n,k}\\{\bullet\\}$ refers to the empirical average within
fold $k$.
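The variance formula (11) is a standard sandwich computation. The sketch
below assumes the per-individual scores $m(\hat{\beta},\hat{\eta}_{k})$, each
already summed over $t$ within its fold, are stacked in an $n\times q$ array,
and that the averaged derivative matrices have been accumulated into `bread`.

```python
import numpy as np

def sandwich(scores, bread):
    """Sandwich estimator (11): scores is n x q, bread is q x q."""
    n = scores.shape[0]
    meat = scores.T @ scores / n            # average of m m^T
    B_inv = np.linalg.inv(bread)
    Sigma = B_inv @ meat @ B_inv.T          # estimate of the asymptotic variance
    se = np.sqrt(np.diag(Sigma) / n)        # standard errors for beta_hat
    return Sigma, se
```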
If Assumption 2.2 is violated, using the R-WCLS criterion in equation (7) will
only produce consistent estimates if the fully conditional model for
$\mathbb{E}[Y_{t+1}|H_{t},A_{t}]$ is correctly specified, which is difficult
to achieve in practice due to the complexity of the true data-generating
process. Therefore, the DR-WCLS estimator is especially valuable for
safeguarding against model misspecification. The asymptotic properties of the
DR-WCLS estimator are as follows:
###### Theorem 5.3 (Asymptotic property of DR-WCLS estimator).
Given invertibility and moment conditions, the estimator
$\hat{\beta}^{(DR)}_{n}$ that solves (8) is subject to an error term, which
(up to a multiplicative constant) is bounded above by:
$\mathbf{\hat{B}}=\mathbb{E}\left[\sum_{t=1}^{T}\sum_{a\in\\{0,1\\}}\left\|p_{t}(A_{t}=1|H_{t})-\hat{p}_{t}(A_{t}=1|H_{t})\right\|\left\|g_{t}(H_{t},a)-\hat{g}_{t}(H_{t},a)\right\|\right],$
(12)
where $\left\|X\right\|:=(\mathbb{E}[X^{2}])^{1/2}$. If
$\mathbf{\hat{B}}=o_{p}(n^{-1/2})$, then $\hat{\beta}^{(DR)}_{n}$ is
consistent and asymptotically normal such that
$\sqrt{n}(\hat{\beta}^{(DR)}_{n}-\beta^{\star})\rightarrow\mathcal{N}(0,\Sigma_{DR})$,
where $\Sigma_{DR}$ is defined in Appendix B. In particular, with the
algorithm outlined in Section 5.1, $\Sigma_{DR}$ can be consistently estimated
by formula (11) with
$m(\hat{\beta},\hat{\eta}_{k})=\sum_{t=1}^{T}\hat{\tilde{p}}_{t}^{(k)}(1|S_{t})(1-\hat{\tilde{p}}_{t}^{(k)}(1|S_{t}))(\tilde{Y}^{(DR)}_{t+1}-f_{t}(S_{t})^{\top}\hat{\beta}^{(DR)}_{n})f_{t}(S_{t})$,
and
$\dot{m}(\hat{\beta},\hat{\eta}_{k})=\sum_{t=1}^{T}\hat{\tilde{p}}_{t}^{(k)}(1|S_{t})(1-\hat{\tilde{p}}_{t}^{(k)}(1|S_{t}))f_{t}(S_{t})f_{t}(S_{t})^{\top}$.
It follows that $\hat{\beta}^{(DR)}_{n}$ is doubly robust since it is
consistent when either (1) the treatment model is correctly specified or (2)
the conditional model is correctly specified. Importantly, the model-agnostic
error bound applies to arbitrary first-stage estimators. The bound
$\mathbf{\hat{B}}$ on the DR-WCLS estimator error shows that it can only
deviate from $\beta^{\star}$ by at most a (smoothed) product of errors in the
estimation of treatment propensities and conditional expectation of outcomes,
thus allowing faster rates for estimating the causal effect even when the
nuisance estimates converge at slower rates. For detailed proofs of Theorems
5.2 and 5.3, please refer to Appendices A and B respectively.
###### Remark 5.4.
A variety of flexible options for nuisance estimates are available to attain
the convergence rate of $o_{p}(n^{-1/2})$. For example, if
$\left\|p_{t}(1|H_{t})-\hat{p}_{t}(1|H_{t})\right\|=o_{p}(n^{-1/4})$ and
$\left\|g_{t}(H_{t},a)-\hat{g}_{t}(H_{t},a)\right\|=o_{p}(n^{-1/4})$, then the
product term is $o_{p}(n^{-1/2})$ and is thus asymptotically negligible. This
occurs when both $\hat{g}(H_{t},a)$ and $\hat{p}_{t}(a|H_{t})$ are based on
correctly specified parametric models, but also achievable for many ML methods
under structured assumptions on the nuisance parameters, for example,
regularized estimators such as the lasso and random forests (Chernozhukov et
al., 2018; Athey et al., 2018). It is worth noting that, in this setting,
completely nonparametric estimators are usually not an option, as they tend
to converge at rates slower than $n^{-1/4}$ unless strong smoothness or
sparsity assumptions are in place (Kennedy, 2016).
### 5.3 Estimation Efficiency Comparison
Previous research (Qian et al., 2020; Shi et al., 2022) has demonstrated that
a locally efficient, semiparametric estimator for the fully conditional causal
effect (i.e., $S_{t}=H_{t}$) can be derived based on semiparametric efficiency
theory (Robins, 1994; Newey, 1990; Tsiatis, 2007). These findings provide the
motivation for the development of the proposed methods described above. In
this section, we investigate the relative efficiency between the proposed
estimators and WCLS, with a focus on the situation where $S_{t}\subset H_{t}$.
The term “more efficient” here means one method achieves lower asymptotic
variance than another method for any linear combination of the causal
parameter estimates, i.e., the asymptotic variance of $c^{\top}\hat{\beta}$ is
smaller for any $c\in\mathbb{R}^{q}$. This is equivalent to the difference
between the asymptotic variance matrices being negative semidefinite.
#### 5.3.1 An Augmented R-WCLS Estimator
To make full use of the estimates $g_{t}(H_{t},A_{t})$, we propose an
augmented version of the R-WCLS criterion. According to Equation (7),
$g_{t}(H_{t},A_{t})$ is only included to construct the plug-in estimator
$g_{t}(H_{t})$, but the difference between $g_{t}(H_{t},1)$ and
$g_{t}(H_{t},0)$, i.e., the causal excursion effect under fully observed
history, is not incorporated into the estimating equation. Therefore, an
augmented R-WCLS criterion can efficiently use this information in the
following manner:
$0=\mathbb{P}_{n}\left[\sum_{t=1}^{T}W_{t}\left(Y_{t+1}-g_{t}(H_{t})-\left(A_{t}-\tilde{p}_{t}(1|S_{t})\right)\left(f_{t}(S_{t})^{\top}\beta+\Delta_{t}^{\perp}\right)\right)\left(A_{t}-\tilde{p}_{t}(1|S_{t})\right)f_{t}(S_{t})\right],$
(13)
where $\Delta_{t}^{\perp}$ denotes the projection of the fully conditional
causal effect $\beta(t;H_{t})$ onto the orthogonal complement of
$f_{t}(S_{t})$. The definition of the orthogonal complement can be found in
(32), while the details on constructing $\Delta_{t}^{\perp}$ are provided in
Appendix D.1 for further reference. The estimator obtained from solving
Equation (13) has asymptotic properties similar to those of R-WCLS, which
leads to the following lemma; the proof can be found in Appendix D.2.
###### Lemma 5.5.
Let $\hat{\beta}^{(AR)}_{n}$ denote the augmented R-WCLS estimator obtained
from solving Equation (13). Under Assumption 2.2, given invertibility and
moment conditions, $\hat{\beta}^{(AR)}_{n}$ is consistent and asymptotically
normal such that
$\sqrt{n}(\hat{\beta}^{(AR)}_{n}-\beta^{\star})\rightarrow\mathcal{N}(0,\Sigma_{AR})$,
where $\Sigma_{AR}$ is defined in Appendix D.2.
#### 5.3.2 Efficiency Improvement
We compare the estimation efficiency of the meta-learning estimators proposed
with that of the existing WCLS methods. Before presenting the theorem, we
first list the additional required assumptions:
###### Assumption 5.6.
In addition to Assumption 2.2, we make the following assumptions:
1. (a)
The residual $e_{t}\coloneqq Y_{t+1}-g(H_{t},A_{t})$ is uncorrelated with
future states given history $H_{t}$ and treatment $A_{t}$, i.e.,
$\mathbb{E}[e_{t}f_{t^{\prime}}(S_{t^{\prime}})\Delta^{\perp}_{t^{\prime}}|H_{t},A_{t}]=0,~{}t<t^{\prime}$;
2. (b)
The estimator $\hat{g}(H_{t},A_{t})$ is consistent for the true conditional
expectation $g(H_{t},A_{t})$.
Assumption 5.6a implies that the residuals do not convey any additional
information beyond that which is already contained in the observed history.
Such an assumption would hold when considering Markov Decision Processes
(MDPs), where the current state and treatment determine future states.
Assumption 5.6b is derived from the discussion in Theorem 5.3, where we
require the error term to converge at a rate of $o_{p}(n^{-1/2})$. This
assumption, in conjunction with Assumption 2.2, ensures the consistency of the
DR-WCLS estimator.
###### Theorem 5.7 (Efficiency gain over WCLS estimator).
Under Assumption 2.2 and given invertibility and moment conditions:
1. (a)
if Assumption 5.6a also holds, the augmented R-WCLS estimator is guaranteed to
be at least as efficient as WCLS;
2. (b)
if Assumption 5.6b holds, the DR-WCLS estimator is guaranteed to be at least
as efficient as the R-WCLS estimator.
Theorem 5.7(a) requires Assumption 5.6(a) as a sufficient condition when
considering a smoothed model for the causal excursion effect
$f_{t}(S_{t})^{\top}\beta^{\star}$ over time. However, Assumption 5.6(a) is
not necessary, since efficiency gains can always be guaranteed if the causal
excursion effect is modeled nonparametrically over time, i.e.,
$f_{t}(S_{t})^{\top}\beta_{t}^{\star}$. Theorem 5.7(b) indicates further
asymptotic efficiency gains from employing our proposed doubly robust
alternative. Bang and Robins (2005) note that doubly robust estimators
(i.e., DR-WCLS) may be less efficient in finite samples than inverse
probability of treatment weighting (IPTW) estimators (i.e., R-WCLS) under
extremely strong model misspecification. In the current context, if
$\hat{p}(1|H_{t})$ is based on a correctly specified parametric model so that
$\left\|p_{t}(1|H_{t})-\hat{p}_{t}(1|H_{t})\right\|=O_{p}(n^{-1/2})$, then
$\hat{g}_{t}(H_{t},a_{t})$ need only be consistent, i.e.,
$\left\|g_{t}(H_{t},a_{t})-\hat{g}_{t}(H_{t},a_{t})\right\|=o_{p}(1)$, for
the DR-WCLS estimator to be at least as asymptotically efficient as the
R-WCLS estimator. Our model-agnostic approach reduces the risk of severe
model misspecification. Detailed proofs are provided in Appendices E.1 and E.2.
## 6 Extensions
### 6.1 Missing Data
In mHealth studies, it is common for both the proximal outcome $Y_{t+1}$ and
elements of the history $H_{t}$ to be missing. In the case study of a 6-month
MRT on medical interns presented in Section 8, for example, the proximal
outcomes are self-reported mood score and step count. Self-reports are often
missing due to non-response, while step count can be missing due to
individuals not wearing the wrist sensors. Methods for addressing missing
data in the context of MRTs are currently lacking (Boruvka et al., 2018;
Dempsey et al., 2020; Qian et al., 2020).
Here we extend the DR-WCLS criterion to be robust to missing data.
Specifically, we consider two types of missingness: (1) in the outcome
$Y_{t+1}$ and (2) in the observed history $H_{t}$ being used in the supervised
learning algorithm. We do _not_ consider missingness in the moderator set
$S_{t}$ which we assume is completely observed, but consider this important
future work. Let $R_{t}$ be the binary indicator of whether the proximal
outcome $Y_{t+1}$ is observed ($R_{t}=1$) or not ($R_{t}=0$) at decision time
$t$, and let $R_{t}(\bar{a}_{t})$ denote the potential observation status.
Clearly, missingness is a post-treatment variable, and therefore we require
additional assumptions:
###### Assumption 6.1.
We assume consistency, missing at random, and positivity:
* •
Consistency: For each $t\leq T$, $R_{t}(\bar{A}_{t})=R_{t}$, i.e., the
observed missing data indicator is equal to the corresponding potential
outcome observation status;
* •
Missing at random: For each $t\leq T$, $R_{t}(\bar{a}_{t})$ is independent of
$A_{t}$ conditional on the observed history $H_{t}$;
* •
Positivity: if the joint density of $\\{R_{t}=r_{t},H_{t}=h_{t},A_{t}=a_{t}\\}$ is
greater than zero, then $p(R_{t}=1|H_{t},A_{t})=p(R_{t}=1|H_{t})>0$.
Under Assumption 6.1, we can derive a doubly robust extension for missing data
by augmenting the DR-WCLS criterion:
$\begin{split}\mathbb{P}_{n}\bigg{[}\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})\bigg{(}&\frac{{\bf
1}(R_{t}=1)}{p(R_{t}|H_{t})}\frac{W_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))(Y_{t+1}-g_{t}(H_{t},A_{t}))}{\tilde{\sigma}^{2}_{t}(S_{t})}+\beta(t;H_{t})-f_{t}(S_{t})^{\top}\beta\bigg{)}f_{t}\bigg{]}=0.\end{split}$
(14)
Equation (14) is identical to (8) except that the first term is weighted by
the inverse probability of observing the outcome. As the missing-data
mechanism is a complex nuisance function, it too can be estimated as part of
the meta-learning algorithm. Theorem 5.3 extends to the current setting,
leading to Corollary 6.1.1. See Appendix F for the proofs.
###### Corollary 6.1.1.
(Asymptotic property for DR-WCLS estimator with missing data) Under Assumption
6.1, given invertibility and moment conditions, the estimator
$\hat{\beta}_{n}$ that solves (14) is subject to an error term, which (up to a
multiplicative constant) is bounded above by:
$\mathbf{\hat{B}}^{R}=\mathbb{E}\left[\sum_{t=1}^{T}\sum_{a\in\\{0,1\\}}\left\|p(R_{t}=1|H_{t})p_{t}(a|H_{t})-\hat{p}(R_{t}=1|H_{t})\hat{p}_{t}(a|H_{t})\right\|\left\|g_{t}(H_{t},a)-\hat{g}_{t}(H_{t},a)\right\|\right].$
(15)
If we further assume $\mathbf{\hat{B}}^{R}=o_{p}(n^{-1/2})$, $\hat{\beta}_{n}$
is consistent and asymptotically normal such that
$\sqrt{n}(\hat{\beta}_{n}-\beta^{\star})\rightarrow\mathcal{N}(0,\Sigma^{R}_{DR})$,
where $\Sigma^{R}_{DR}$ is defined in Appendix F.
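To make the missing-data extension concrete, the following is a minimal Python sketch of solving Equation (14); it is not the paper's implementation. Because (14) is linear in $\beta$, solving it reduces to one weighted least-squares step. The argument names are hypothetical, and `p_hat`, `pR_hat`, `g_hat`, and `beta_t_hat` stand in for cross-fitted nuisance estimates.

```python
import numpy as np

def dr_wcls_missing(Y, A, R, f, p_tilde, p_hat, pR_hat, g_hat, beta_t_hat):
    """Sketch of the missingness-augmented DR-WCLS estimator (Eq. 14).

    Arrays are stacked over individuals and decision times (one row per
    (i, t) pair); f is the (N, q) matrix of features f_t(S_t). The nuisance
    inputs are assumed to come from a cross-fitted supervised-learning step.
    """
    sigma2 = p_tilde * (1.0 - p_tilde)                      # sigma~_t^2(S_t)
    # W_t = p~_t(A_t | S_t) / p-hat_t(A_t | H_t)
    W = np.where(A == 1, p_tilde, 1 - p_tilde) / np.where(A == 1, p_hat, 1 - p_hat)
    resid = np.where(R == 1, Y - g_hat, 0.0)                # Y_{t+1} - g_t(H_t, A_t)
    # doubly robust pseudo-outcome with the inverse-probability-of-missingness weight
    U = (R / pR_hat) * W * (A - p_tilde) * resid / sigma2 + beta_t_hat
    # Eq. (14) is linear in beta, so solving it is a weighted least-squares step
    Q = (sigma2[:, None] * f).T @ f
    return np.linalg.solve(Q, (sigma2 * U) @ f)
```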
### 6.2 Lagged Effects
Beyond interest in proximal outcomes, additional attention has been paid to
lagged outcomes defined over future decision points with a fixed window
length $\Delta>1$, denoted $Y_{t,\Delta}$, which is a known function of the
observed history and the latest treatment:
$Y_{t,\Delta}=y(H_{t+\Delta-1},A_{t+\Delta-1})$. In practice, $\Delta$ is
chosen explicitly to avoid the curse of the horizon (Dempsey et al., 2020).
While this has been true to date, larger $\Delta$ will likely become more
common as MRT data sets grow in size and longer-term outcomes become of
primary interest. Under Assumption 2.1, the causal estimand for the lagged
effect can be expressed in terms of observable data (Shi et al., 2022):
$\beta_{\mathbf{p},\pi}(t+\Delta;s)=\mathbb{E}\left[\mathbb{E}_{{\mathbf{p}}}\left[W_{t,\Delta-1}Y_{t,\Delta}\mid
A_{t}=1,H_{t}\right]-\mathbb{E}_{{\mathbf{p}}}\left[W_{t,\Delta-1}Y_{t,\Delta}\mid
A_{t}=0,H_{t}\right]\mid S_{t}=s\right],$ (16)
where
$W_{t,u}=\prod_{s=1}^{u}\pi_{t}(A_{t+s}|H_{t+s})/p_{t}(A_{t+s}|H_{t+s})$, with
$W_{t,0}=1$. Here, we assume the reference distribution for treatment
assignments from $t+1$ to $t+\Delta-1$ ($\Delta>1$) is given by a
randomization probability generically represented by
$\\{\pi_{u}(a_{u}|H_{u})\\}_{u=t+1}^{t+\Delta-1}$. This generalization
contains previous definitions such as lagged effects (Boruvka et al., 2018)
where $\pi_{u}=p_{u}$ and deterministic choices such as
$a_{t+1:(t+\Delta-1)}={\bf 0}$ (Dempsey et al., 2020; Qian et al., 2020),
where $\pi_{u}={\bf 1}\\{a_{u}=0\\}$ and ${\bf 1}\\{\cdot\\}$ is the indicator
function.
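As an illustration, the following is a minimal sketch of computing the cumulative weights $W_{t,u}$ for a single individual; the argument names are hypothetical.

```python
import numpy as np

def lagged_weight(pi, p, t, u):
    """Sketch of W_{t,u} = prod_{s=1}^{u} pi(A_{t+s}|H_{t+s}) / p(A_{t+s}|H_{t+s}).

    pi and p are length-T arrays of the reference and behavior probabilities
    already evaluated at the observed treatments for one individual, indexed
    from t = 0; by convention W_{t,0} = 1.
    """
    if u == 0:
        return 1.0
    return float(np.prod(pi[t + 1 : t + u + 1] / p[t + 1 : t + u + 1]))
```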
A brief discussion in Shi et al. (2022) presented an approach to improve the
estimation efficiency of the lagged effect and alleviate the curse of the
horizon (Liu et al., 2018). Specifically, it was shown that an optimal
estimating function is orthogonal to the score functions for the treatment
selection probabilities (Bickel et al., 1993). This implies the estimator may
be improved by replacing the estimating function with itself minus its
projection onto the score functions for the treatment selection
probabilities (Murphy et al., 2001). For the DR-WCLS estimating equation,
this can be done as follows:
$\begin{split}\mathbb{P}_{n}\Bigg{[}\sum_{t=1}^{T-\Delta+1}\Bigg{[}&W_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))\Bigg{(}W_{t,\Delta-1}\left(Y_{t,\Delta}-g_{t+\Delta-1}(H_{t+\Delta-1},A_{t+\Delta-1})\right)\\\
&-\sum_{u=0}^{\Delta-2}W_{t,u}\left(g_{t+u}(H_{t+u},A_{t+u})-\sum_{a_{t+u+1}}\pi(a_{t+u+1}|H_{t+u+1})g_{t+u+1}(H_{t+u+1},a_{t+u+1})\right)\Bigg{)}\\\
&+\tilde{\sigma}_{t}^{2}(S_{t})\left(\beta(t+\Delta,H_{t})-f_{t}(S_{t})^{\top}\beta\right)\Bigg{]}f_{t}(S_{t})^{\top}\Bigg{]}=0,\end{split}$
(17)
where $g_{t+u}(H_{t+u},A_{t+u})$ is a working model for
$\mathbb{E}[W_{t+u+1:t+\Delta-1}Y_{t,\Delta}|H_{t+u},A_{t+u}]$. Specifically,
$g_{t+\Delta-1}(H_{t+\Delta-1},A_{t+\Delta-1})=\mathbb{E}[Y_{t,\Delta}|H_{t+\Delta-1},A_{t+\Delta-1}]$,
and
$\mathbb{E}[g_{t+u-1}(H_{t+u-1},A_{t+u-1})]=\mathbb{E}\left[\sum_{a_{t+u}}\pi_{t+u}(a_{t+u}|H_{t+u})g_{t+u}(H_{t+u},a_{t+u})\right]$.
The parameterized linear working model of the conditional expectation
$g_{t+u}(H_{t+u},A_{t+u})$ in Murphy et al. (2001) can be improved by
leveraging supervised learning algorithms to construct data-adaptive
estimates. Based on the estimating Equation (17), the estimator
$\hat{\beta}^{\Delta}_{n}$ has the following property:
###### Corollary 6.1.2 (Asymptotic property for DR-WCLS estimator for lagged
outcomes).
Given invertibility and moment conditions, the estimator
$\hat{\beta}^{\Delta}_{n}$ obtained by solving Equation (17) is subject to an
error term, which is (up to a multiplicative constant) bounded above by
$\sum_{u=0}^{\Delta-1}\mathbf{\hat{B}}_{u}$, where
$\begin{split}\mathbf{\hat{B}}_{u}=\mathbb{E}\Bigg{[}\sum_{t=1}^{T-u}\sum_{a_{t+u}}&\left\|\hat{p}_{t+u}(a_{t+u}|H_{t+u})-p_{t+u}(a_{t+u}|H_{t+u})\right\|\left\|\hat{g}_{t+u}(H_{t+u},a_{t+u})-g_{t+u}(H_{t+u},a_{t+u})\right\|\Bigg{]}.\end{split}$
(18)
If we assume that $\mathbf{\hat{B}}_{u}=o_{p}(n^{-1/2})$, the estimator
$\hat{\beta}^{\Delta}_{n}$ is consistent and asymptotically normal such that
$\sqrt{n}(\hat{\beta}^{\Delta}_{n}-\beta^{\star})\rightarrow\mathcal{N}(0,\Sigma^{\Delta}_{DR})$,
where $\Sigma^{\Delta}_{DR}$ is defined in Appendix G.
When $\Delta$ is large, correctly specifying the conditional expectation
model $g_{t}(H_{t},A_{t})$ is particularly useful: it prevents the variance
of the estimator from growing exponentially with the weight $W_{t,\Delta}$,
thus offering a remedy for the curse of the horizon.
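The backward recursion defining the working models $g_{t+u}$ can be implemented with any supervised learner. The following is a minimal sketch in the spirit of fitted-Q iteration, under the simplifying assumption that one excursion window is processed at a time; all names are hypothetical, and a real implementation would also cross-fit these models.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def backward_nuisance_models(H, A, Y_delta, pi, delta):
    """Sketch of the backward recursion for g_{t+u}(H_{t+u}, A_{t+u}).

    H[u]: (n, d) history features at step u; A[u]: (n,) binary treatments;
    pi[u][a]: (n,) reference probabilities pi_{t+u}(a | H_{t+u});
    Y_delta: (n,) lagged outcomes Y_{t,Delta}.
    """
    models = [None] * delta
    pseudo = np.asarray(Y_delta, dtype=float)   # last step regresses Y_{t,Delta}
    for u in range(delta - 1, -1, -1):
        X = np.column_stack([H[u], A[u]])
        models[u] = RandomForestRegressor(n_estimators=200).fit(X, pseudo)
        if u > 0:
            # pseudo-outcome for the previous step: sum_a pi(a|H) g_{t+u}(H, a)
            g0 = models[u].predict(np.column_stack([H[u], np.zeros_like(A[u])]))
            g1 = models[u].predict(np.column_stack([H[u], np.ones_like(A[u])]))
            pseudo = pi[u][0] * g0 + pi[u][1] * g1
    return models
```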
### 6.3 Binary Outcomes
Qian et al. (2020) proposed an estimator of the marginal excursion effect
(EMEE) by adopting a log relative risk model to examine whether a particular
time-varying intervention has an effect on a longitudinal binary outcome. The
causal excursion effect is defined by:
$\displaystyle\beta_{\mathbf{p}}(t;s)$
$\displaystyle=\log\frac{\mathbb{E}\left[Y_{t+1}(\bar{A}_{t-1},1)\,|\,S_{t}(\bar{A}_{t-1})=s\right]}{\mathbb{E}\left[Y_{t+1}(\bar{A}_{t-1},0)\,|\,S_{t}(\bar{A}_{t-1})=s\right]}$
(19)
$\displaystyle=\log\frac{\mathbb{E}\left[\mathbb{E}\left[Y_{t+1}\,|\,{A_{t}=1,H_{t}}\right]\,|\,S_{t}=s\right]}{\mathbb{E}\left[\mathbb{E}\left[Y_{t+1}\,|\,{A_{t}=0,H_{t}}\right]\,|\,S_{t}=s\right]}.$
(20)
Assuming $\beta_{{\bf p}}(t;s)=f_{t}(s)^{\top}\beta^{\star}$, where
$f_{t}(s)\in\mathbb{R}^{q}$ is a $q$-dimensional feature vector depending
only on the state $s$ and decision point $t$, a consistent estimator of
$\beta^{\star}$ can be obtained by solving a set of weighted estimating equations:
$\mathbb{P}_{n}\left[\sum_{t=1}^{T}W_{t}e^{-A_{t}f_{t}(S_{t})^{\top}\beta}\left(Y_{t+1}-e^{g_{t}(H_{t})^{\top}\alpha+A_{t}f_{t}(S_{t})^{\top}\beta}\right)\left(\begin{array}[]{c}g_{t}(H_{t})\\\
(A_{t}-\tilde{p}_{t}(1\mid S_{t}))f_{t}(S_{t})\end{array}\right)\right]=0.$
(21)
See Qian et al. (2020) for more details on the estimand formulation and
consistency, asymptotic normality, and robustness properties of the EMEE
estimation method.
Based on Equation (21), we propose a doubly robust alternative to EMEE,
termed “DR-EMEE”. A doubly robust estimator of the log relative risk is
constructed by solving the following set of estimating equations:
$\begin{split}\mathbb{P}_{n}\bigg{[}\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})\bigg{(}&\frac{W_{t}e^{-A_{t}f_{t}(S_{t})^{\top}\beta}(A_{t}-\tilde{p}_{t}(1|S_{t}))(Y_{t+1}-g_{t}(H_{t},A_{t}))}{\tilde{\sigma}^{2}_{t}(S_{t})}+\\\
&~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}e^{-f_{t}(S_{t})^{\top}\beta}g_{t}(H_{t},1)-g_{t}(H_{t},0)\bigg{)}f_{t}\bigg{]}=0.\end{split}$
(22)
###### Corollary 6.1.3 (Asymptotic property for DR-EMEE estimator).
If either the conditional expectation model $g_{t}(H_{t},A_{t})$ or the
treatment randomization probability $p_{t}(A_{t}|H_{t})$ is correctly
specified, then, given invertibility and moment conditions, the estimator
$\hat{\beta}_{n}$ obtained from solving Equation (22) is consistent and
asymptotically normal such that
$\sqrt{n}(\hat{\beta}_{n}-\beta^{\star})\rightarrow\mathcal{N}(0,\Sigma^{b}_{DR})$,
where $\Sigma^{b}_{DR}$ is defined in Appendix H.
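Unlike the linear DR-WCLS equations, Equation (22) is nonlinear in $\beta$ because of the exponential terms, so it must be solved numerically. A minimal sketch using a generic root finder follows; the argument names are hypothetical, with `g1_hat` and `g0_hat` standing in for cross-fitted estimates of $\mathbb{E}[Y_{t+1}|H_{t},A_{t}=a]$.

```python
import numpy as np
from scipy.optimize import root

def dr_emee(Y, A, f, p_tilde, p_hat, g1_hat, g0_hat):
    """Sketch of solving the DR-EMEE estimating equation (Eq. 22)."""
    sigma2 = p_tilde * (1.0 - p_tilde)
    W = np.where(A == 1, p_tilde, 1 - p_tilde) / np.where(A == 1, p_hat, 1 - p_hat)
    g_hat = np.where(A == 1, g1_hat, g0_hat)

    def estfun(beta):
        lin = f @ beta                                    # f_t(S_t)^T beta
        term = (W * np.exp(-A * lin) * (A - p_tilde) * (Y - g_hat) / sigma2
                + np.exp(-lin) * g1_hat - g0_hat)
        return (sigma2 * term) @ f                        # stacked over (i, t)

    return root(estfun, x0=np.zeros(f.shape[1])).x
```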
## 7 Simulation
### 7.1 Simulation Setup
To evaluate the proposed estimators, we extend the simulation setup in
Boruvka et al. (2018). We first present a base data generation model.
Consider an MRT with a known randomization probability, in which $g(H_{t})$
in the generative model is a complex function of high-dimensional history
information $H_{t}$. Let $S_{t}\in\\{-1,1\\}$ denote a single state variable,
which is an effect moderator, with $S_{t}\subset H_{t}$. The generative model is:
$Y_{t,j}=g_{t}(H_{t})+\big{(}A_{t,j}-p_{t}(1|H_{t})\big{)}(\beta_{10}+\beta_{11}S_{t,j})+e_{t,j}.$
(23)
The randomization probability is
$p_{t}(1|H_{t})=\text{expit}(\eta_{1}A_{t-1,j}+\eta_{2}S_{t,j})$ where
$\text{expit}(x)=(1+\exp(-x))^{-1}$; the state dynamics are given by
$\mathbb{P}(S_{t,j}=1|A_{t-1},H_{t-1})=1/2$ with $A_{0}=0$; and the error
term, independent across individuals, satisfies $e_{t,j}\sim\mathcal{N}(0,1)$
with $\text{Corr}(e_{u,j},e_{t,j^{\prime}})={\bf 1}[j=j^{\prime}]0.5^{|u-t|/2}$. As in
Boruvka et al. (2018), we set $\eta_{1}=-0.8,\eta_{2}=0.8,\beta_{10}=-0.2$,
and $\beta_{11}=0.2$. The marginal proximal effect is equal to
$\beta_{10}+\beta_{11}\mathbb{E}\left[S_{t,j}\right]=\beta_{10}=-0.2$. The
marginal treatment effect is thus constant in time and is given by
$\beta_{0}^{\star}=\beta_{10}=-0.2$.
In the following, we set the complex function $g(H_{t})$ to be a decision
tree; the flow chart in Figure 2 of Appendix I visualizes the
decision-making process as well as the outcomes. We consider estimation of
the fully marginal proximal treatment effect, so $f_{t}(S_{t})=1$ in Equation
(23) (i.e., $S_{t}=\emptyset$). The results below report the average point
estimate (Est), standard error (SE), and 95% confidence interval coverage
probability (CP) across 1000 replicates. Here, we report results with
$N=100$, showing the relative advantage of R-WCLS and DR-WCLS over WCLS.
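For concreteness, the following is a minimal Python sketch of the base generative model in Equation (23). The decision-tree $g(H_{t})$ of Figure 2 is replaced by a simple stand-in so the sketch runs on its own; the error process uses the AR(1) representation of $\text{Corr}(e_{u,j},e_{t,j})=0.5^{|u-t|/2}$.

```python
import numpy as np

def simulate_mrt(n=100, T=30, eta1=-0.8, eta2=0.8, beta10=-0.2, beta11=0.2, seed=0):
    """Sketch of the base generative model (Eq. 23) with a stand-in for g(H_t)."""
    rng = np.random.default_rng(seed)
    expit = lambda x: 1.0 / (1.0 + np.exp(-x))
    rho = np.sqrt(0.5)                        # Corr(e_u, e_t) = rho^{|u-t|}
    S = rng.choice([-1, 1], size=(n, T))      # P(S_t = 1) = 1/2
    A = np.zeros((n, T), dtype=int)
    Y = np.zeros((n, T))
    p = np.zeros((n, T))
    e = rng.standard_normal(n)                # stationary AR(1) start
    A_prev = np.zeros(n)
    for t in range(T):
        p[:, t] = expit(eta1 * A_prev + eta2 * S[:, t])
        A[:, t] = rng.binomial(1, p[:, t])
        g = 0.5 * S[:, t] - 0.3 * A_prev      # stand-in for the tree g(H_t)
        Y[:, t] = g + (A[:, t] - p[:, t]) * (beta10 + beta11 * S[:, t]) + e
        e = rho * e + np.sqrt(1 - rho**2) * rng.standard_normal(n)
        A_prev = A[:, t].astype(float)
    return S, A, p, Y
```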
### 7.2 Estimation Methods
##### WCLS
The estimation model in Boruvka et al. (2018) assumes a linear function for
the control variables, i.e., $g(H_{t};\alpha)=g(H_{t})^{\top}\alpha$. This
method is guaranteed to produce an unbiased estimate with a valid confidence
interval, and it therefore serves as the reference against which the
following methods are compared.
##### R-WCLS
This estimation model incorporates modern supervised learning techniques to
construct an estimate $\hat{g}(H_{t})$ for the control variables. The plug-in
estimators $\hat{g}(H_{t},1)$ and $\hat{g}(H_{t},0)$ are learned separately
on the training data corresponding to each treatment assignment. As the
number of control variables is relatively small in this simulation study, we
use random forests. The plug-in estimator is then
$\hat{g}(H_{t})=\tilde{p}_{t}(1|S_{t})\hat{g}(H_{t},1)+(1-\tilde{p}_{t}(1|S_{t}))\hat{g}(H_{t},0)$.
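A minimal sketch of this plug-in construction is below; names are hypothetical, and in practice the fits and predictions are separated across folds as in Appendix C.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def rwcls_plugin(H, A, Y, p_tilde):
    """Sketch of the R-WCLS plug-in g-hat(H_t) described above."""
    rf1 = RandomForestRegressor(n_estimators=500).fit(H[A == 1], Y[A == 1])
    rf0 = RandomForestRegressor(n_estimators=500).fit(H[A == 0], Y[A == 0])
    g1, g0 = rf1.predict(H), rf0.predict(H)
    g_hat = p_tilde * g1 + (1 - p_tilde) * g0     # plug-in g-hat(H_t)
    return g_hat, g1, g0
```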
##### DR-WCLS
The plug-in estimators $\hat{g}(H_{t},1)$ and $\hat{g}(H_{t},0)$ are obtained
in the same way as above; using these two estimators, we set
$\hat{\beta}(t;H_{t})=\hat{g}(H_{t},1)-\hat{g}(H_{t},0)$.
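Given these plug-ins, solving the DR-WCLS criterion for $\beta$ is again a single weighted least-squares step. A minimal sketch (hypothetical names; `p_hat` is the treatment probability, which is known in an MRT):

```python
import numpy as np

def dr_wcls_beta(Y, A, f, p_tilde, p_hat, g1_hat, g0_hat):
    """Sketch of solving the DR-WCLS estimating equation for beta."""
    sigma2 = p_tilde * (1.0 - p_tilde)
    W = np.where(A == 1, p_tilde, 1 - p_tilde) / np.where(A == 1, p_hat, 1 - p_hat)
    g_hat = np.where(A == 1, g1_hat, g0_hat)
    beta_t = g1_hat - g0_hat                          # beta-hat(t; H_t)
    U = W * (A - p_tilde) * (Y - g_hat) / sigma2 + beta_t
    Q = (sigma2[:, None] * f).T @ f
    return np.linalg.solve(Q, (sigma2 * U) @ f)
```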
### 7.3 Simulation Results
Table 1 reports the simulation results. “%RE gain” indicates the percentage of
times we achieve an efficiency gain out of 1000 Monte Carlo replicates. “mRE”
stands for the average relative efficiency, and “RSD” represents the relative
standard deviation between two estimates. The proposed R-WCLS and DR-WCLS
methods significantly improve the efficiency of the WCLS when estimating the
fully marginal causal effect. In addition, we find that mRE varies with
$\beta_{11}$. R-WCLS has higher mRE than DR-WCLS when $\beta_{11}$ is small,
and this reverses when $\beta_{11}$ increases. In our simulation, $\beta_{11}$
being large indicates that an important moderator $S_{t,j}$ was not included
in the causal effect model (i.e., $f_{t}(S_{t})^{\top}\beta=\beta_{0}$).
Therefore, when model misspecification occurs, DR-WCLS shows better
performance than R-WCLS.
Table 1: Fully marginal causal effect estimation efficiency comparison.

Method | $\beta_{11}$ | Est | SE | CP | %RE gain | mRE | RSD
---|---|---|---|---|---|---|---
WCLS | 0.2 | -0.198 | 0.049 | 0.946 | - | - | -
WCLS | 0.5 | -0.195 | 0.050 | 0.945 | - | - | -
WCLS | 0.8 | -0.193 | 0.053 | 0.951 | - | - | -
R-WCLS | 0.2 | -0.200 | 0.044 | 0.950 | 100% | 1.231 | 1.260
R-WCLS | 0.5 | -0.199 | 0.045 | 0.944 | 100% | 1.218 | 1.255
R-WCLS | 0.8 | -0.200 | 0.048 | 0.956 | 99.9% | 1.203 | 1.236
DR-WCLS | 0.2 | -0.200 | 0.045 | 0.954 | 99.7% | 1.216 | 1.249
DR-WCLS | 0.5 | -0.199 | 0.045 | 0.947 | 99.9% | 1.228 | 1.261
DR-WCLS | 0.8 | -0.200 | 0.047 | 0.954 | 99.7% | 1.254 | 1.282
## 8 Intern Health Study: A Worked Example
The Intern Health Study (IHS) is a 6-month micro-randomized trial on medical
interns (NeCamp et al., 2020), which aimed to investigate when to provide
mHealth interventions to individuals in stressful work environments to improve
their behavior and mental health. In this section, we evaluate the
effectiveness of targeted notifications in improving individuals’ moods and
step counts. The exploratory and MRT analyses conducted in this paper focus on
weekly randomization, thus, an individual was randomized to receive mood,
activity, sleep, or no notifications with equal probability ($1/4$ each) every
week. We choose the outcome $Y_{t+1,j}$ as the self-reported mood score (a
Likert scale taking values from 1 to 10) and step count (cubic root) for
individual $j$ in study week $t$.
Missing data occurred throughout the trial when interns did not complete the
self-reported mood survey or were not wearing their assigned Fitbit wrist-worn
device; thus, multiple imputation was originally used to impute missing daily
data. See NeCamp et al. (2020) for further details. The following analysis is
based on one of the imputed data sets. The data set used in the analyses
contains 1562 participants. The average weekly mood score is 7.14 when a
notification is delivered and 7.16 when there is no notification; the average
weekly step count (cubic root) is 19.1 when a notification is delivered and
also 19.1 when there is no notification. In Sections 8.1 and 8.2, we evaluate
the targeted notification treatment effect for medical interns using our
proposed methods and WCLS.
### 8.1 Comparison of the Marginal Effect Estimation
First, we are interested in assessing the fully marginal excursion effect
(i.e., $\beta(t)=\beta_{0}^{\star}$). For an individual $j$, the study week is
coded as a subscript $t$. $Y_{t+1,j}$ is the self-reported mood score or step
count (cubic root) for individual $j$ in study week $t+1$. $A_{t}$ is defined
as the indicator of receiving the specific type of notification that targets
the outcome. For example, if the outcome is the self-reported mood score,
sending a mood notification is the action, and thus $\mathbb{P}(A_{t}=1)=0.25$. We
analyze the marginal causal effect $\beta_{0}$ of the targeted notifications
on self-reported mood score and step count using the following model for WCLS:
$Y_{t+1,j}\sim g_{t}(H_{t,j})^{\top}\alpha+(A_{t,j}-\tilde{p}_{t})\beta_{0}.$
The term $g_{t}(H_{t})^{\top}\alpha$ represents a linear working model of
prognostic control variables including two baseline characteristics, study
week $t$, and the prior week’s outcome $Y_{t,j}$. For R-WCLS and DR-WCLS
methods, we include a total of 12 control variables and use random forests to
construct the plug-in estimators $\hat{g}(H_{t},A_{t})$ as described in
Section 7.2. For a detailed description of the control variables, see Appendix
J.
Table 2: IHS Study: Fully marginal treatment effect estimation.

Outcome | Method | Estimation | Std.err | P-value | RE
---|---|---|---|---|---
Mood | WCLS | -0.016 | $9.03\times 10^{-3}$ | 0.078 | -
Mood | R-WCLS | -0.017 | $8.14\times 10^{-3}$ | 0.038 | 1.23
Mood | DR-WCLS | -0.017 | $8.18\times 10^{-3}$ | 0.042 | 1.22
Steps | WCLS | 0.070 | $2.41\times 10^{-2}$ | 0.004 | -
Steps | R-WCLS | 0.065 | $2.34\times 10^{-2}$ | 0.005 | 1.06
Steps | DR-WCLS | 0.070 | $2.37\times 10^{-2}$ | 0.003 | 1.03
We report the various estimators in Table 2 and present more details in
Appendix J. Compared with WCLS, the R-WCLS and DR-WCLS estimators show a
tangible improvement in the standard error estimates. We conclude, with
statistical significance at the 95% confidence level, that sending activity
notifications increases (the cubic root of) step counts by 0.07, and that
sending mood notifications lowers users' mood scores by 0.017. In particular,
R-WCLS and DR-WCLS have enough power to detect a causal relationship between
sending mobile prompts and lower mood scores, while WCLS does not.
### 8.2 Time-varying Treatment Effect Estimation
For further analysis, we include study week in the moderated treatment effect
model: $\beta(t)=\beta_{0}^{\star}+\beta_{1}^{\star}t$, and examine how
treatment effect varies over time. Estimated time-varying treatment moderation
effects and their relative efficiency are shown in Figure 1. The shaded area
in Figure 1 represents the 95% confidence band of the moderation effects as a
function of study week. Narrower confidence bands were observed for estimators
constructed using both R-WCLS and DR-WCLS methods. Relative efficiencies
between 1.2 and 1.3 were observed over the study week.
Figure 1: Causal effect estimates with confidence intervals for R-WCLS (left)
and DR-WCLS (middle), and their relative efficiency in comparison with WCLS
(right).
Based on the results above, we conclude that sending notifications does not
have a significant impact on mood scores during the first 12 weeks, whereas
notifications sent later in the study are less likely to improve
participants' mood. In light of this, it may not be ideal to burden
participants over an extended period if the notifications do not serve any
therapeutic purpose. Additionally, we assessed the time-varying treatment
effect on step count, which is detailed in Appendix J.
### 8.3 Treatment Effect Estimation with Missing Data
Next, we apply our proposed methods to evaluate the treatment effect based on
the raw observed data rather than the imputed dataset. To maintain consistency
with previous analyses, we still use the weekly average mood score and step
count (cubic root) as outcomes. Self-reported mood scores and step counts are
collected every day; if no records were observed for an entire week, we mark
the weekly outcome as missing, and otherwise compute the average mood score
and step count (cubic root) as the outcomes. In total, 31.3% of person-weeks
are missing for the mood outcome and 48.1% for the step count outcome.
We carried out the same analysis as above for marginal treatment effects.
Inverse probability weighting is used when implementing estimation using WCLS
and R-WCLS criteria. Estimated treatment effects and their relative efficiency
are shown in Table 3. It is no longer evident that mood notifications have a
significant overall impact on participants’ moods, but the step count analysis
still indicates a positive effect of sending activity notifications on
participants’ physical activity levels.
Table 3: IHS Study: Fully marginal treatment effect estimation with missing outcomes.

Outcome | Method | Estimation | Std.err | P-value
---|---|---|---|---
Mood | WCLS | $7.71\times 10^{-3}$ | $1.73\times 10^{-2}$ | 0.655
Mood | R-WCLS | $1.81\times 10^{-3}$ | $1.62\times 10^{-2}$ | 0.911
Mood | DR-WCLS | $3.00\times 10^{-3}$ | $1.68\times 10^{-2}$ | 0.858
Steps | WCLS | $6.71\times 10^{-2}$ | $3.94\times 10^{-2}$ | 0.088
Steps | R-WCLS | $7.43\times 10^{-2}$ | $4.05\times 10^{-2}$ | 0.067
Steps | DR-WCLS | 0.104 | $4.09\times 10^{-2}$ | 0.011
## 9 Discussion
Scientists wish to leverage the large volume of data generated by mobile
health systems to better answer scientific questions regarding time-varying
intervention effects. ML algorithms can make good use of high-dimensional
mobile health data to provide high-quality predictions of proximal outcomes.
In this paper, we proposed two methods, termed R-WCLS and DR-WCLS
respectively, to estimate time-varying treatment effects, and we illustrated
both the theoretical and practical benefits of incorporating machine learning
algorithms into the inference procedure. In particular, both the R-WCLS and
DR-WCLS criteria provide substantial flexibility in their nuisance model
specifications and improve estimation efficiency over existing approaches. A
crucial feature of the DR-WCLS criterion that is not shared by the WCLS and
R-WCLS criteria is that it is doubly robust. This feature is critical even in
MRTs, where the treatment model may be misspecified and missing data are
common. The DR-WCLS criterion is especially powerful when both the treatment
randomization probability and the conditional expectation model are
accurately specified, which yields the highest relative asymptotic
efficiency.
Although this work represents a major step forward in analyzing MRT data,
there are still some interesting questions to explore in the future. For
example, all the discussion above relies on sample splitting having little to
no impact on the estimation asymptotically, but the effect of the particular
random split on the estimate can be important in finite samples. Chernozhukov
et al. (2018) introduced a finite-sample adjustment to incorporate uncertainty
induced by sample splitting, which can be a straightforward and useful
extension to our proposed methods. In addition, the accuracy of different
supervised learning algorithms in estimating nuisance components may differ,
and it is worth investigating which methods work best in practice. Relatedly,
the nuisance components can be learned with nonparametric models, such as
kernel regression; these methods yield an explicit error bound for the
DR-WCLS estimator, providing sufficient conditions on the nuisance model's
smoothness and dimension for the estimator to be $\sqrt{n}$-consistent
(Kennedy, 2020). Last but not least,
Han and Wang (2013) proposed a “multiply robust” estimator that is more robust
than doubly robust estimators, which allows multiple models for both the
propensity score and the outcome regression. This estimator is consistent if
any of the multiple models are correctly specified. It is worth considering
extending our proposed method to a multiply robust version.
## References
* Athey et al. (2018) Athey, S., G. W. Imbens, and S. Wager (2018, 02). Approximate Residual Balancing: Debiased Inference of Average Treatment Effects in High Dimensions. Journal of the Royal Statistical Society Series B: Statistical Methodology 80(4), 597–623.
* Bang and Robins (2005) Bang, H. and J. M. Robins (2005). Doubly robust estimation in missing data and causal inference models. Biometrics 61(4), 962–973.
* Bickel et al. (1993) Bickel, P. J., C. A. Klaassen, P. J. Bickel, Y. Ritov, J. Klaassen, J. A. Wellner, and Y. Ritov (1993). Efficient and adaptive estimation for semiparametric models, Volume 4. Springer.
* Bodory et al. (2022) Bodory, H., M. Huber, and L. Lafférs (2022). Evaluating (weighted) dynamic treatment effects by double machine learning. The Econometrics Journal 25(3), 628–648.
* Boruvka et al. (2018) Boruvka, A., D. Almirall, K. Witkiewitz, and S. A. Murphy (2018). Assessing time-varying causal effect moderation in mobile health. Journal of the American Statistical Association 113(523), 1112–1121.
* Chen et al. (2022) Chen, Q., V. Syrgkanis, and M. Austern (2022). Debiased machine learning without sample-splitting for stable estimators. arXiv preprint arXiv:2206.01825.
* Chernozhukov et al. (2018) Chernozhukov, V., D. Chetverikov, M. Demirer, E. Duflo, C. Hansen, W. Newey, and J. Robins (2018). Double/debiased machine learning for treatment and structural parameters. The Econometrics Journal 21(1), C1–C68.
* Chernozhukov et al. (2015) Chernozhukov, V., C. Hansen, and M. Spindler (2015). Post-selection and post-regularization inference in linear models with many controls and instruments. American Economic Review 105(5), 486–90.
* Chernozhukov et al. (2021) Chernozhukov, V., W. K. Newey, and R. Singh (2021). A simple and general debiased machine learning theorem with finite sample guarantees. arXiv preprint arXiv:2105.15197.
* Dempsey et al. (2020) Dempsey, W., P. Liao, S. Kumar, and S. A. Murphy (2020). The stratified micro-randomized trial design: sample size considerations for testing nested causal effects of time-varying treatments. The annals of applied statistics 14(2), 661.
* Han and Wang (2013) Han, P. and L. Wang (2013). Estimation with missing data: beyond double robustness. Biometrika 100(2), 417–430.
* Heron and Smyth (2010) Heron, K. E. and J. M. Smyth (2010). Ecological momentary interventions: incorporating mobile technology into psychosocial and health behaviour treatments. British journal of health psychology 15(1), 1–39.
* Hill (2011) Hill, J. L. (2011). Bayesian nonparametric modeling for causal inference. Journal of Computational and Graphical Statistics 20(1), 217–240.
* Kennedy (2016) Kennedy, E. H. (2016). Semiparametric theory and empirical processes in causal inference. In Statistical causal inferences and their applications in public health research, pp. 141–167. Springer.
* Kennedy (2020) Kennedy, E. H. (2020). Optimal doubly robust estimation of heterogeneous causal effects. arXiv preprint arXiv:2004.14497.
* Künzel et al. (2019) Künzel, S. R., J. S. Sekhon, P. J. Bickel, and B. Yu (2019). Metalearners for estimating heterogeneous treatment effects using machine learning. Proceedings of the national academy of sciences 116(10), 4156–4165.
* Lewis and Syrgkanis (2020) Lewis, G. and V. Syrgkanis (2020). Double/debiased machine learning for dynamic treatment effects via g-estimation. arXiv preprint arXiv:2002.07285.
* Liang and Zeger (1986) Liang, K.-Y. and S. L. Zeger (1986). Longitudinal data analysis using generalized linear models. Biometrika 73(1), 13–22.
* Liao et al. (2016) Liao, P., P. Klasjna, A. Tewari, and S. Murphy (2016). Micro-randomized trials in mhealth. Statistics in Medicine 35(12), 1944–71.
* Liu et al. (2018) Liu, Q., L. Li, Z. Tang, and D. Zhou (2018). Breaking the curse of horizon: Infinite-horizon off-policy estimation. Advances in Neural Information Processing Systems 31.
* Morzywolek et al. (2023) Morzywolek, P., J. Decruyenaere, and S. Vansteelandt (2023). On a general class of orthogonal learners for the estimation of heterogeneous treatment effects. arXiv preprint arXiv:2303.12687.
* Murphy et al. (2001) Murphy, S. A., M. J. van der Laan, J. M. Robins, and C. P. P. R. Group (2001). Marginal mean models for dynamic regimes. Journal of the American Statistical Association 96(456), 1410–1423.
* NeCamp et al. (2020) NeCamp, T., S. Sen, E. Frank, M. A. Walton, E. L. Ionides, Y. Fang, A. Tewari, and Z. Wu (2020). Assessing real-time moderation for developing adaptive mobile health interventions for medical interns: micro-randomized trial. Journal of medical Internet research 22(3), e15033.
* Newey (1990) Newey, W. K. (1990). Semiparametric efficiency bounds. Journal of applied econometrics 5(2), 99–135.
* Neyman (1979) Neyman, J. (1979). C ($\alpha$) tests and their use. Sankhyā: The Indian Journal of Statistics, Series A, 1–21.
* Nie and Wager (2021) Nie, X. and S. Wager (2021). Quasi-oracle estimation of heterogeneous treatment effects. Biometrika 108(2), 299–319.
* Qian et al. (2022) Qian, T., A. E. Walton, L. M. Collins, P. Klasnja, S. T. Lanza, I. Nahum-Shani, M. Rabbi, M. A. Russell, M. A. Walton, H. Yoo, et al. (2022). The microrandomized trial for developing digital interventions: Experimental design and data analysis considerations. Psychological methods.
* Qian et al. (2020) Qian, T., H. Yoo, P. Klasnja, D. Almirall, and S. A. Murphy (2020). Estimating time-varying causal excursion effect in mobile health with binary outcomes.
* Robins (1986) Robins, J. (1986). A new approach to causal inference in mortality studies with a sustained exposure period-application to control of the healthy worker survivor effect. Mathematical Modelling 7(9), 1393–1512.
* Robins (1994) Robins, J. M. (1994). Correcting for non-compliance in randomized trials using structural nested mean models. Communications in Statistics-Theory and methods 23(8), 2379–2412.
* Robins (1997) Robins, J. M. (1997). Causal inference from complex longitudinal data. In Latent variable modeling and applications to causality, pp. 69–117. Springer.
* Robinson (1988) Robinson, P. M. (1988). Root-n-consistent semiparametric regression. Econometrica: Journal of the Econometric Society, 931–954.
* Rubin (1978) Rubin, D. (1978). Bayesian inference for causal effects: The role of randomization. The Annals of Statistics 6(1), 34–58.
* Semenova and Chernozhukov (2021) Semenova, V. and V. Chernozhukov (2021). Debiased machine learning of conditional average treatment effects and other causal functions. The Econometrics Journal 24(2), 264–289.
* Shi et al. (2022) Shi, J., Z. Wu, and W. Dempsey (2022). Assessing time-varying causal effect moderation in the presence of cluster-level treatment effect heterogeneity and interference. Biometrika.
* Shi et al. (2023) Shi, J., Z. Wu, and W. Dempsey (2023). Incorporating auxiliary variables to improve the efficiency of time-varying treatment effect estimation. Manuscript in preparation.
* Tsiatis (2007) Tsiatis, A. (2007). Semiparametric Theory and Missing Data. Springer Science & Business Media.
* Van Der Laan and Rubin (2006) Van Der Laan, M. J. and D. Rubin (2006). Targeted maximum likelihood learning. The international journal of biostatistics 2(1).
* Van der Vaart (2000) Van der Vaart, A. W. (2000). Asymptotic statistics, Volume 3. Cambridge university press.
* Viviano and Bradic (2021) Viviano, D. and J. Bradic (2021). Dynamic covariate balancing: estimating treatment effects over time. arXiv preprint arXiv:2103.01280.
## Appendix A Proof of Theorem 5.2
Assume $\hat{\beta}^{(R)}_{n}$ minimizes the R-WCLS criterion:
$\mathbb{P}_{n}\left[\sum_{t=1}^{T}W_{t}\left(Y_{t+1}-\left(\tilde{p}_{t}(1|S_{t})g_{t}(H_{t},1)+(1-\tilde{p}_{t}(1|S_{t}))g_{t}(H_{t},0)\right)-(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})^{\top}\beta\right)^{2}\right].$
(24)
Denote
$g^{\star}_{t}(H_{t})=\tilde{p}_{t}(1|S_{t})g_{t}(H_{t},1)+(1-\tilde{p}_{t}(1|S_{t}))g_{t}(H_{t},0)=\mathbb{E}[W_{t}Y_{t+1}|H_{t}]$,
for which we apply a supervised-learning algorithm to obtain an estimator
$\hat{g}_{t}(H_{t})$. The asymptotic properties of the R-WCLS estimator follow
from the expansion:
$\displaystyle 0$
$\displaystyle=\mathbb{P}_{n}\left[\sum_{t=1}^{T}W_{t}\left(Y_{t+1}-\hat{g}(H_{t})-(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})^{\top}\hat{\beta}^{(R)}_{n}\right)(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})\right]$
$\displaystyle=\mathbb{P}_{n}\left[\sum_{t=1}^{T}W_{t}\left(Y_{t+1}-g^{\star}_{t}(H_{t})-(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})^{\top}\beta^{\star}\right)(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})\right]$
$\displaystyle~{}~{}~{}~{}~{}-\mathbb{P}_{n}\left[\sum_{t=1}^{T}W_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))^{2}f_{t}(S_{t})f_{t}(S_{t})^{\top}\right](\hat{\beta}^{(R)}_{n}-\beta^{\star})$
$\displaystyle~{}~{}~{}~{}~{}+\mathbb{P}_{n}\left[\sum_{t=1}^{T}W_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))(g^{\star}_{t}(H_{t})-\hat{g}(H_{t}))f_{t}(S_{t})\right]$
(25)
Because
$\displaystyle\mathbb{P}_{n}\left[\sum_{t=1}^{T}W_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))^{2}f_{t}(S_{t})f_{t}(S_{t})^{\top}\right]\overset{P}{\to}\mathbb{E}\left[\sum_{t=1}^{T}\tilde{p}_{t}(1|S_{t})(1-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})f_{t}(S_{t})^{\top}\right],$
$\displaystyle\mathbb{P}_{n}\left[\sum_{t=1}^{T}W_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})(g^{\star}(H_{t})-\hat{g}(H_{t}))\right]\overset{P}{\to}0.~{}~{}~{}~{}\text{(by
design)}$
The second property holds true for any $\hat{g}(H_{t})$ because:
$\displaystyle\mathbb{E}\left[\sum_{t=1}^{T}W_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})(g^{\star}(H_{t})-\hat{g}(H_{t}))\right]$
$\displaystyle=$
$\displaystyle\mathbb{E}\Bigg{[}\sum_{t=1}^{T}\mathbb{E}[\tilde{p}_{t}(1|S_{t})(1-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})(g^{\star}(H_{t})-\hat{g}(H_{t}))|A_{t}=1]$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}+\mathbb{E}[(1-\tilde{p}_{t}(1|S_{t}))(0-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})(g^{\star}(H_{t})-\hat{g}(H_{t}))|A_{t}=0]\Bigg{]}$
$\displaystyle=$ $\displaystyle 0.$
By solving (25), we obtain:
$\displaystyle n^{1/2}(\hat{\beta}^{(R)}_{n}-\beta^{\star})$
$\displaystyle=\left\\{\mathbb{P}_{n}\left[\sum_{t=1}^{T}W_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))^{2}f_{t}(S_{t})f_{t}(S_{t})^{\top}\right]\right\\}^{-1}\times$
$\displaystyle~{}~{}~{}~{}~{}~{}n^{1/2}\mathbb{P}_{n}\left[\sum_{t=1}^{T}W_{t}\left(Y_{t+1}-g^{\star}_{t}(H_{t})-(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})^{\top}\beta^{\star}\right)(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})\right].$
Denote $\tilde{Y}^{(R)}_{t+1}=Y_{t+1}-g^{\star}_{t}(H_{t})$, and we obtain
$\displaystyle n^{1/2}(\hat{\beta}^{(R)}_{n}-\beta^{\star})$
$\displaystyle=n^{1/2}~{}\mathbb{P}_{n}\Bigg{[}\sum_{t=1}^{T}~{}\mathbb{E}\left[\sum_{t=1}^{T}\tilde{p}_{t}(1|S_{t})(1-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})f_{t}(S_{t})^{\top}\right]^{-1}\times$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}W_{t}\left(\tilde{Y}^{(R)}_{t+1}-(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})^{\top}\beta^{\star}\right)(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})\Bigg{]}+o_{p}(1).$
By definition of $\beta^{\star}$:
$\mathbb{E}\left[\sum_{t=1}^{T}W_{t}\left(\tilde{Y}^{(R)}_{t+1}-(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})^{\top}\beta^{\star}\right)(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})\right]=0$
Consequently, under regularity conditions, the estimator
$\hat{\beta}^{(R)}_{n}\overset{P}{\to}\beta^{\star}$; that is,
$\hat{\beta}^{(R)}_{n}$ is a consistent estimator of $\beta^{\star}$. The
influence function for $\hat{\beta}^{(R)}_{n}$ is:
$\displaystyle\sum_{t=1}^{T}~{}\mathbb{E}\left[\sum_{t=1}^{T}\tilde{p}_{t}(1|S_{t})(1-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})f_{t}(S_{t})^{\top}\right]^{-1}\times$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}W_{t}\left(\tilde{Y}^{(R)}_{t+1}-(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})^{\top}\beta^{\star}\right)(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t}).$
(26)
Then under moment conditions, we have asymptotic normality with variance given
by $\Sigma_{R}=Q^{-1}WQ^{-1}$, where
$\displaystyle Q$
$\displaystyle=\mathbb{E}\left[\sum_{t=1}^{T}\tilde{p}_{t}(1|S_{t})(1-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})f_{t}(S_{t})^{\top}\right],$
$\displaystyle W$
$\displaystyle=\mathbb{E}\left[\left(\sum_{t=1}^{T}W_{t}\left(\tilde{Y}^{(R)}_{t+1}-(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})^{\top}\beta^{\star}\right)(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})\right)^{2}\right],$
and, for brevity, we use $\mathbb{E}[X^{2}]$ to denote
$\mathbb{E}[XX^{\top}]$. In conclusion, we establish that the estimator
minimizing the R-WCLS criterion, $\hat{\beta}^{(R)}_{n}$, is consistent and
asymptotically normal:
$n^{1/2}(\hat{\beta}^{(R)}_{n}-\beta^{\star})\rightarrow\mathcal{N}(0,\Sigma_{R}).$
We further prove that the variance is consistently estimated by Equation (11).
Using sample splitting, the estimating equation can be written as:
$\displaystyle 0$
$\displaystyle=\frac{1}{K}\sum_{k=1}^{K}\mathbb{P}_{n,k}\left[\sum_{t=1}^{T}W_{t}\left(Y_{t+1}-\hat{g}^{(k)}(H_{t})-(A_{t}-\tilde{p}^{(k)}_{t}(1|S_{t}))f_{t}(S_{t})^{\top}\hat{\beta}^{(R)}_{n}\right)(A_{t}-\tilde{p}^{(k)}_{t}(1|S_{t}))f_{t}(S_{t})\right]$
$\displaystyle=\frac{1}{K}\sum_{k=1}^{K}\mathbb{P}_{n,k}\left[\sum_{t=1}^{T}W_{t}\left(Y_{t+1}-g^{\star}_{t}(H_{t})-(A_{t}-\tilde{p}^{(k)}_{t}(1|S_{t}))f_{t}(S_{t})^{\top}\beta^{\star}\right)(A_{t}-\tilde{p}^{(k)}_{t}(1|S_{t}))f_{t}(S_{t})\right]$
$\displaystyle~{}~{}~{}~{}~{}-\frac{1}{K}\sum_{k=1}^{K}\mathbb{P}_{n,k}\left[\sum_{t=1}^{T}W_{t}(A_{t}-\tilde{p}^{(k)}_{t}(1|S_{t}))^{2}f_{t}(S_{t})f_{t}(S_{t})^{\top}\right](\hat{\beta}^{(R)}_{n}-\beta^{\star})$
$\displaystyle~{}~{}~{}~{}~{}+\frac{1}{K}\sum_{k=1}^{K}\mathbb{P}_{n,k}\left[\sum_{t=1}^{T}W_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))(g^{\star}_{t}(H_{t})-\hat{g}^{(k)}(H_{t}))f_{t}(S_{t})\right]$
(27)
Assume $K$ is finite and fixed; then, by the same reasoning as above:
$\displaystyle\mathbb{P}_{n,k}\left[\sum_{t=1}^{T}W_{t}(A_{t}-\tilde{p}^{(k)}_{t}(1|S_{t}))^{2}f_{t}(S_{t})f_{t}(S_{t})^{\top}\right]\overset{P}{\to}\mathbb{E}\left[\sum_{t=1}^{T}\tilde{p}^{(k)}_{t}(1|S_{t})(1-\tilde{p}^{(k)}_{t}(1|S_{t}))f_{t}(S_{t})f_{t}(S_{t})^{\top}\right],$
$\displaystyle\mathbb{P}_{n,k}\left[\sum_{t=1}^{T}W_{t}(A_{t}-\tilde{p}^{(k)}_{t}(1|S_{t}))f_{t}(S_{t})(g^{\star}(H_{t})-\hat{g}^{(k)}(H_{t}))\right]\overset{P}{\to}0.~{}~{}~{}~{}\text{(by
design)}$
Then we obtain the following:
$\displaystyle n^{1/2}(\hat{\beta}^{(R)}_{n}-\beta^{\star})$
$\displaystyle=\left\\{\frac{1}{K}\sum_{k=1}^{K}\mathbb{P}_{n,k}\left[\sum_{t=1}^{T}W_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))^{2}f_{t}(S_{t})f_{t}(S_{t})^{\top}\right]\right\\}^{-1}\times$
$\displaystyle
n^{1/2}\frac{1}{K}\sum_{k=1}^{K}\mathbb{P}_{n,k}\left[\sum_{t=1}^{T}W_{t}\left(Y_{t+1}-g^{\star}_{t}(H_{t})-(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})^{\top}\beta^{\star}\right)(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})\right]$
Further,
$\displaystyle n^{1/2}(\hat{\beta}^{(R)}_{n}-\beta^{\star})$
$\displaystyle=n^{1/2}~{}\frac{1}{K}\sum_{k=1}^{K}\mathbb{P}_{n,k}\Bigg{[}\sum_{t=1}^{T}~{}\left\\{\frac{1}{K}\sum_{k=1}^{K}\mathbb{E}\left[\sum_{t=1}^{T}\tilde{p}^{(k)}_{t}(1|S_{t})(1-\tilde{p}^{(k)}_{t}(1|S_{t}))f_{t}(S_{t})f_{t}(S_{t})^{\top}\right]\right\\}^{-1}\times$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}W_{t}\left(\tilde{Y}^{(R)}_{t+1}-(A_{t}-\tilde{p}^{(k)}_{t}(1|S_{t}))f_{t}(S_{t})^{\top}\beta^{\star}\right)(A_{t}-\tilde{p}^{(k)}_{t}(1|S_{t}))f_{t}(S_{t})\Bigg{]}+o_{p}(1).$
By definition of $\beta^{\star}$:
$\mathbb{E}\left[\sum_{t=1}^{T}W_{t}\left(\tilde{Y}^{(R)}_{t+1}-(A_{t}-\tilde{p}^{(k)}_{t}(1|S_{t}))f_{t}(S_{t})^{\top}\beta^{\star}\right)(A_{t}-\tilde{p}^{(k)}_{t}(1|S_{t}))f_{t}(S_{t})\right]=0$
Consequently, under regularity conditions, the estimator
$\hat{\beta}^{(R)}_{n}\overset{P}{\to}\beta^{\star}$; that is,
$\hat{\beta}^{(R)}_{n}$ is consistent. The influence function for
$\hat{\beta}^{(R)}_{n}$ is:
$\displaystyle\frac{1}{K}\sum_{k=1}^{K}\sum_{t=1}^{T}~{}\left\\{\frac{1}{K}\sum_{k=1}^{K}\mathbb{E}\left[\sum_{t=1}^{T}\tilde{p}^{(k)}_{t}(1|S_{t})(1-\tilde{p}^{(k)}_{t}(1|S_{t}))f_{t}(S_{t})f_{t}(S_{t})^{\top}\right]\right\\}^{-1}\times$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}W_{t}\left(\tilde{Y}^{(R)}_{t+1}-(A_{t}-\tilde{p}^{(k)}_{t}(1|S_{t}))f_{t}(S_{t})^{\top}\beta^{\star}\right)(A_{t}-\tilde{p}^{(k)}_{t}(1|S_{t}))f_{t}(S_{t}).$
(28)
Recall
$\displaystyle m$
$\displaystyle=\sum_{t=1}^{T}\hat{W}^{(k)}_{t}\left(\tilde{Y}^{(R)}_{t+1}-(A_{t}-\hat{\tilde{p}}_{t}^{(k)}(1|S_{t}))f_{t}(S_{t})^{\top}\hat{\beta}^{(R)}_{n}\right)(A_{t}-\hat{\tilde{p}}_{t}^{(k)}(1|S_{t}))f_{t}(S_{t}),$
$\displaystyle\dot{m}$ $\displaystyle=\frac{\partial
m(\beta,\eta)}{\partial\beta}=\sum_{t=1}^{T}\hat{\tilde{p}}_{t}^{(k)}(1|S_{t})(1-\hat{\tilde{p}}_{t}^{(k)}(1|S_{t}))f_{t}(S_{t})f_{t}(S_{t})^{\top}.$
Then the variance can be consistently estimated by:
$\left[\frac{1}{K}\sum_{k=1}^{K}\mathbb{P}_{n,k}\dot{m}(\hat{\beta},\hat{\eta}_{k})\right]^{-1}\times\left[\frac{1}{K}\sum_{k=1}^{K}\mathbb{P}_{n,k}m(\hat{\beta},\hat{\eta}_{k})m(\hat{\beta},\hat{\eta}_{k})^{\top}\right]\times\left[\frac{1}{K}\sum_{k=1}^{K}\mathbb{P}_{n,k}\dot{m}(\hat{\beta},\hat{\eta}_{k})\right]^{-1}.$
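A minimal sketch of this cross-fitted sandwich estimator follows; the names are hypothetical, and the per-fold arrays of estimating-function values and their derivatives are assumed precomputed.

```python
import numpy as np

def sandwich_variance(m_list, mdot_list):
    """Sketch of the cross-fitted sandwich variance above.

    m_list[k]: (n_k, q) per-individual values of m(beta-hat, eta-hat_k) on
    fold k; mdot_list[k]: (n_k, q, q) corresponding derivatives. Returns the
    asymptotic variance; divide by n for standard errors of beta-hat.
    """
    K = len(m_list)
    Q = sum(md.mean(axis=0) for md in mdot_list) / K                        # bread
    M = sum(np.einsum('ij,ik->jk', m, m) / m.shape[0] for m in m_list) / K  # meat
    Qinv = np.linalg.inv(Q)
    return Qinv @ M @ Qinv.T
```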
## Appendix B Proof of Theorem 5.3
### B.1 Double robustness property
The following is proof of the double robustness of the DR-WCLS estimator.
Assume $\hat{\beta}^{(DR)}_{n}$ minimizes the _DR-WCLS_ criterion:
$\mathbb{P}_{n}\left[\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})\left(\frac{W_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))(Y_{t+1}-g_{t}(H_{t},A_{t}))}{\tilde{\sigma}^{2}_{t}(S_{t})}+\beta(t;H_{t})-f_{t}(S_{t})^{\top}\beta\right)^{2}\right].$
Here the true randomization probability is $p_{t}(A_{t}|H_{t})$, and the
conditional expectation (also known as the outcome regression) is:
$\mathbb{E}[Y_{t+1}|H_{t},A_{t}]=g_{t}^{\star}(H_{t},A_{t}).$
Denote $\beta(t;H_{t})=g_{t}^{\star}(H_{t},1)-g_{t}^{\star}(H_{t},0)$. The
corresponding ML estimators are denoted as $\hat{g}_{t}(H_{t},A_{t})$ and
$\hat{\beta}(t;H_{t})$. We consider the estimating equation of the objective
function above:
$\displaystyle 0$
$\displaystyle=\mathbb{E}\left[\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})\left(\frac{W_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))(Y_{t+1}-g_{t}(H_{t},A_{t}))}{\tilde{\sigma}^{2}_{t}(S_{t})}+\beta(t;H_{t})-f_{t}(S_{t})^{\top}\beta^{(DR)}\right)f_{t}(S_{t})\right]$
$\displaystyle=\mathbb{E}\left[\sum_{t=1}^{T}W_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))(Y_{t+1}-g_{t}(H_{t},A_{t}))f_{t}(S_{t})\right]+\mathbb{E}\left[\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})\left(\beta(t;H_{t})-f_{t}(S_{t})^{\top}\beta^{(DR)}\right)f_{t}(S_{t})\right].$
If the conditional expectation $g_{t}^{\star}(H_{t},A_{t})$ is correctly
specified, we have:
$\displaystyle\mathbb{E}\left[\sum_{t=1}^{T}W_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))(Y_{t+1}-g_{t}^{\star}(H_{t},A_{t}))f_{t}(S_{t})\right]$
$\displaystyle=$
$\displaystyle\mathbb{E}\left[\sum_{t=1}^{T}W_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))\mathbb{E}[Y_{t+1}-g_{t}^{\star}(H_{t},A_{t})|H_{t},A_{t}]f_{t}(S_{t})\right]$
$\displaystyle=$ $\displaystyle 0,$
and we are left to solve:
$\displaystyle\mathbb{E}\left[\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})\left(\beta(t;H_{t})-f_{t}(S_{t})^{\top}\beta^{(DR)}\right)f_{t}(S_{t})\right]=0.$
Under regularity conditions, the estimator
$\hat{\beta}^{(DR)}_{n}\overset{P}{\to}\beta^{\star}$; that is,
$\hat{\beta}^{(DR)}_{n}$ is consistent. Another case is when the treatment
randomization probability is correctly specified. Then we have:
$\displaystyle\mathbb{E}\left[\sum_{t=1}^{T}W_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))(Y_{t+1}-\hat{g}_{t}(H_{t},A_{t}))f_{t}(S_{t})\right]+\mathbb{E}\left[\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})\left(\hat{\beta}(t;H_{t})-f_{t}(S_{t})^{\top}\beta^{(DR)}\right)f_{t}(S_{t})\right]$
$\displaystyle=$
$\displaystyle\mathbb{E}\left[\sum_{t=1}^{T}\tilde{p}_{t}(1|S_{t})(1-\tilde{p}_{t}(1|S_{t}))\Big{(}\mathbb{E}[Y_{t+1}|H_{t},A_{t}=1]-\mathbb{E}[Y_{t+1}|H_{t},A_{t}=0]-\hat{\beta}(t;H_{t})\Big{)}f_{t}(S_{t})\right]$
$\displaystyle+\mathbb{E}\left[\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})\left(\hat{\beta}(t;H_{t})-f_{t}(S_{t})^{\top}\beta^{(DR)}\right)f_{t}(S_{t})\right]$
$\displaystyle=$
$\displaystyle\mathbb{E}\left[\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})\left(\mathbb{E}[Y_{t+1}|H_{t},A_{t}=1]-\mathbb{E}[Y_{t+1}|H_{t},A_{t}=0]-f_{t}(S_{t})^{\top}\beta^{(DR)}\right)f_{t}(S_{t})\right]$
Under regularity conditions, the estimator
$\hat{\beta}^{(DR)}_{n}\overset{P}{\to}\beta^{\star}$; that is,
$\hat{\beta}^{(DR)}_{n}$ is consistent.
### B.2 Asymptotic properties for DR-WCLS estimators
Assume $\hat{\beta}^{(DR)}_{n}$ minimizes the _DR-WCLS_ criterion:
$\mathbb{P}_{n}\left[\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})\left(\frac{\hat{W}_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))(Y_{t+1}-\hat{g}_{t}(H_{t},A_{t}))}{\tilde{\sigma}^{2}_{t}(S_{t})}+\hat{\beta}(t;H_{t})-f_{t}(S_{t})^{\top}\beta^{(DR)}\right)^{2}\right].$
The estimated treatment randomization probability is denoted by
$\hat{p}_{t}=\hat{p}(A_{t}|H_{t})$; thus the weight $W_{t}$ is estimated by
$\hat{W}_{t}=\tilde{p}_{t}(A_{t}|S_{t})/\hat{p}_{t}(A_{t}|H_{t})$. The
estimating equation is:
$\displaystyle 0$
$\displaystyle=\mathbb{P}_{n}\left[\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})\left(\frac{\tilde{p}_{t}(A_{t}|S_{t})(A_{t}-\tilde{p}_{t}(1|S_{t}))(Y_{t+1}-\hat{g}_{t}(H_{t},A_{t}))}{\hat{p}_{t}(A_{t}|H_{t})\tilde{\sigma}^{2}_{t}(S_{t})}+\hat{\beta}(t;H_{t})-f_{t}(S_{t})^{\top}\beta^{(DR)}\right)f_{t}(S_{t})\right].$
(29)
Expanding the right-hand side, we have:
$\displaystyle\mathbb{P}_{n}\left[\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})\left(\frac{\tilde{p}_{t}(A_{t}|S_{t})(A_{t}-\tilde{p}_{t}(1|S_{t}))(Y_{t+1}-\hat{g}_{t}(H_{t},A_{t}))}{\hat{p}_{t}(A_{t}|H_{t})\tilde{\sigma}^{2}_{t}(S_{t})}+\hat{\beta}(t;H_{t})-f_{t}(S_{t})^{\top}\beta^{(DR)}\right)f_{t}(S_{t})\right]$
$\displaystyle=$
$\displaystyle\mathbb{P}_{n}\Bigg{[}\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})\Big{(}\frac{\tilde{p}_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))(Y_{t+1}-g^{\star}_{t}(H_{t},A_{t})+g^{\star}_{t}(H_{t},A_{t})-\hat{g}_{t}(H_{t},A_{t}))}{\tilde{\sigma}^{2}_{t}(S_{t})}\left(\frac{1}{\hat{p}_{t}}-\frac{1}{p_{t}}+\frac{1}{p_{t}}\right)+$
$\displaystyle\beta(t;H_{t})-f_{t}(S_{t})^{\top}\beta^{\star}+(\hat{\beta}(t;H_{t})-\beta(t;H_{t}))-f_{t}(S_{t})^{\top}(\beta^{(DR)}-\beta^{\star})\Big{)}f_{t}(S_{t})\Bigg{]}$
$\displaystyle=$
$\displaystyle\mathbb{P}_{n}\left[\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})\left(\frac{W_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))(Y_{t+1}-g^{\star}_{t}(H_{t},A_{t}))}{\tilde{\sigma}^{2}_{t}(S_{t})}+\beta(t;H_{t})-f_{t}(S_{t})^{\top}\beta^{\star}\right)f_{t}(S_{t})\right]+$
$\displaystyle\mathbb{P}_{n}\left[\sum_{t=1}^{T}\tilde{p}_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))(Y_{t+1}-g^{\star}_{t}(H_{t},A_{t}))\left(\frac{1}{\hat{p}_{t}}-\frac{1}{p_{t}}\right)f_{t}(S_{t})\right]+$
$\displaystyle\mathbb{P}_{n}\left[\sum_{t=1}^{T}\tilde{p}_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))(g^{\star}_{t}(H_{t},A_{t})-\hat{g}_{t}(H_{t},A_{t}))\left(\frac{1}{\hat{p}_{t}}-\frac{1}{p_{t}}\right)f_{t}(S_{t})\right]+$
$\displaystyle\mathbb{P}_{n}\left[\sum_{t=1}^{T}W_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))(g^{\star}(H_{t},A_{t})-\hat{g}_{t}(H_{t},A_{t}))f_{t}(S_{t})\right]+\mathbb{P}_{n}\left[\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})(\hat{\beta}(t;H_{t})-\beta(t;H_{t}))f_{t}(S_{t})\right]$
$\displaystyle-\mathbb{P}_{n}\left[\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})f_{t}(S_{t})f^{\top}_{t}(S_{t})\right](\hat{\beta}^{(DR)}_{n}-\beta^{\star})$
Because
$\displaystyle\mathbb{P}_{n}\left[\sum_{t=1}^{T}\tilde{p}_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))(Y_{t+1}-g^{\star}_{t}(H_{t},A_{t}))\left(\frac{1}{\hat{p}_{t}}-\frac{1}{p_{t}}\right)f_{t}(S_{t})\right]$
$\displaystyle\overset{P}{\to}0,~{}~{}~{}~{}\text{(correct model
specification)}$
$\displaystyle\mathbb{P}_{n}\left[\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})f_{t}(S_{t})f^{\top}_{t}(S_{t})\right]$
$\displaystyle\overset{P}{\to}\mathbb{E}\left[\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})f_{t}(S_{t})f^{\top}_{t}(S_{t})\right],$
and
$\displaystyle\mathbb{P}_{n}\left[\sum_{t=1}^{T}W_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))(g^{\star}(H_{t},A_{t})-\hat{g}_{t}(H_{t},A_{t}))f_{t}(S_{t})\right]+$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}\mathbb{P}_{n}\left[\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})(\hat{\beta}(t;H_{t})-\beta(t;H_{t}))f_{t}(S_{t})\right]\overset{P}{\to}0.~{}~{}~{}~{}\text{(terms
cancellation)}$
Apart from the well-behaved terms above, the only term that might be
problematic is:
$\mathbb{P}_{n}\left[\sum_{t=1}^{T}\tilde{p}_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))(g^{\star}_{t}(H_{t},A_{t})-\hat{g}_{t}(H_{t},A_{t}))\left(\frac{1}{\hat{p}_{t}}-\frac{1}{p_{t}}\right)f_{t}(S_{t})\right],$
which will converge to:
$\displaystyle\mathbb{E}\left[\sum_{t=1}^{T}\left(\underbrace{\sum_{a\in\\{0,1\\}}\frac{\tilde{\sigma}^{2}_{t}(S_{t})}{a\hat{p}_{t}(1|H_{t})+(1-a)(1-\hat{p}_{t}(1|H_{t}))}(p_{t}(1|H_{t})-\hat{p}_{t}(1|H_{t}))(g(H_{t},a)-\hat{g}(H_{t},a))}_{\text{(I)}}\right)f_{t}(S_{t})\right].$
In our context, $T$ is finite and fixed. Therefore, since
$\hat{p}_{t}(1|H_{t})$ is bounded away from zero and one, the Cauchy–Schwarz
inequality implies that, up to a multiplicative constant, term (I) is bounded
above by:
$\mathbf{\hat{B}}=\mathbb{E}\left[\sum_{t=1}^{T}\sum_{a\in\\{0,1\\}}\left\|p_{t}(1|H_{t})-\hat{p}_{t}(1|H_{t})\right\|\left\|g(H_{t},a)-\hat{g}(H_{t},a)\right\|\right].$
(30)
Assuming the nuisance estimates make $\mathbf{\hat{B}}$ asymptotically
negligible, the DR-WCLS estimator satisfies:
$\displaystyle n^{1/2}(\hat{\beta}^{(DR)}_{n}-\beta^{\star})$
$\displaystyle=n^{1/2}~{}\mathbb{P}_{n}\Bigg{[}\sum_{t=1}^{T}~{}\mathbb{E}\left[\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})f_{t}(S_{t})f^{\top}_{t}(S_{t})\right]^{-1}\tilde{\sigma}^{2}_{t}(S_{t})(\beta(t;H_{t})-f_{t}(S_{t})^{\top}\beta^{\star})f_{t}(S_{t})\Bigg{]}+o_{p}(1),$
and it is efficient with influence function:
$\displaystyle\sum_{t=1}^{T}~{}\mathbb{E}\left[\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})f_{t}(S_{t})f_{t}(S_{t})^{\top}\right]^{-1}\tilde{\sigma}^{2}_{t}(S_{t})(\beta(t;H_{t})-f_{t}(S_{t})^{\top}\beta^{\star})f_{t}(S_{t}).$
Under moment conditions, we have asymptotic normality with variance given by
$\Sigma_{DR}=Q^{-1}WQ^{-1}$, where
$\displaystyle Q$
$\displaystyle=\mathbb{E}\left[\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})f_{t}(S_{t})f_{t}(S_{t})^{\top}\right],$
$\displaystyle W$
$\displaystyle=\mathbb{E}\left[\left(\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})(\beta(t;H_{t})-f_{t}(S_{t})^{\top}\beta^{\star})f_{t}(S_{t})\right)^{2}\right].$
### B.3 Asymptotic variance using sample splitting
Building on the doubly robust property above, we know that if either the
conditional expectation model $g_{t}(H_{t},A_{t})$ or the treatment
randomization probability $p_{t}(A_{t}|H_{t})$ is correctly specified, we can
obtain a consistent estimator of $\beta^{\star}$. In this section, we derive
the asymptotic variance estimate under sample splitting. Without loss of
generality, we assume that the treatment randomization probability
$p_{t}(A_{t}|H_{t})$ is correctly specified. For simplicity, we use
$\tilde{\sigma}_{t}^{2(k)}$ to denote $\tilde{\sigma}^{2}_{t}(S_{t})^{(k)}$.
The asymptotic properties of the DR-WCLS estimator follow from the expansion:
$\displaystyle 0$
$\displaystyle=\frac{1}{K}\sum_{k=1}^{K}\mathbb{P}_{n,k}\left[\sum_{t=1}^{T}\tilde{\sigma}_{t}^{2(k)}\left(\frac{W_{t}(A_{t}-\tilde{p}^{(k)}_{t}(1|S_{t}))(Y_{t+1}-\hat{g}^{(k)}_{t}(H_{t},A_{t}))}{\tilde{\sigma}_{t}^{2(k)}}+\hat{\beta}^{(k)}(t;H_{t})-f_{t}(S_{t})^{\top}\hat{\beta}^{(DR)}_{n}\right)f_{t}(S_{t})\right]$
$\displaystyle=\frac{1}{K}\sum_{k=1}^{K}\mathbb{P}_{n,k}\Bigg{[}\sum_{t=1}^{T}\tilde{\sigma}_{t}^{2(k)}\Big{(}\frac{W_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))(Y_{t+1}-g^{\star}(H_{t},A_{t})+g^{\star}(H_{t},A_{t})-\hat{g}^{(k)}_{t}(H_{t},A_{t}))}{\tilde{\sigma}_{t}^{2(k)}}+$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}\beta(t;H_{t})+(\hat{\beta}^{(k)}(t;H_{t})-\beta(t;H_{t}))-f_{t}(S_{t})^{\top}\beta^{\star}-f_{t}(S_{t})^{\top}(\hat{\beta}^{(DR)}_{n}-\beta^{\star})\Big{)}f_{t}(S_{t})\Bigg{]}$
$\displaystyle=\frac{1}{K}\sum_{k=1}^{K}\mathbb{P}_{n,k}\left[\sum_{t=1}^{T}\tilde{\sigma}_{t}^{2(k)}\left(\frac{W_{t}(A_{t}-\tilde{p}^{(k)}_{t}(1|S_{t}))(Y_{t+1}-g^{\star}(H_{t},A_{t}))}{\tilde{\sigma}_{t}^{2(k)}}+\beta(t;H_{t})-f_{t}(S_{t})^{\top}\beta^{\star}\right)f_{t}(S_{t})\right]$
$\displaystyle+\frac{1}{K}\sum_{k=1}^{K}\mathbb{P}_{n,k}\left[\sum_{t=1}^{T}W_{t}(A_{t}-\tilde{p}^{(k)}_{t}(1|S_{t}))(g^{\star}(H_{t},A_{t})-\hat{g}^{(k)}_{t}(H_{t},A_{t}))f_{t}(S_{t})\right]$
$\displaystyle+\frac{1}{K}\sum_{k=1}^{K}\mathbb{P}_{n,k}\left[\sum_{t=1}^{T}\tilde{\sigma}_{t}^{2(k)}(\hat{\beta}^{(k)}(t;H_{t})-\beta(t;H_{t}))f_{t}(S_{t})\right]$
$\displaystyle-\frac{1}{K}\sum_{k=1}^{K}\mathbb{P}_{n,k}\left[\sum_{t=1}^{T}\tilde{\sigma}_{t}^{2(k)}f_{t}(S_{t})f^{\top}_{t}(S_{t})\right](\hat{\beta}^{(DR)}_{n}-\beta^{\star})$
Because
$\displaystyle\frac{1}{K}\sum_{k=1}^{K}\mathbb{P}_{n,k}\left[\sum_{t=1}^{T}W_{t}(A_{t}-\tilde{p}^{(k)}_{t}(1|S_{t}))(g^{\star}(H_{t},A_{t})-\hat{g}^{(k)}_{t}(H_{t},A_{t}))f_{t}(S_{t})\right]+$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}\frac{1}{K}\sum_{k=1}^{K}\mathbb{P}_{n,k}\left[\sum_{t=1}^{T}\tilde{\sigma}_{t}^{2(k)}(\hat{\beta}^{(k)}(t;H_{t})-\beta(t;H_{t}))f_{t}(S_{t})\right]\overset{P}{\to}0,$
and
$\displaystyle\mathbb{P}_{n,k}\left[\sum_{t=1}^{T}\tilde{\sigma}_{t}^{2(k)}f_{t}(S_{t})f^{\top}_{t}(S_{t})\right]$
$\displaystyle\overset{P}{\to}\mathbb{E}\left[\sum_{t=1}^{T}\tilde{\sigma}_{t}^{2(k)}f_{t}(S_{t})f^{\top}_{t}(S_{t})\right].$
Denote
$\tilde{Y}^{(DR)}_{t+1}=\frac{W_{t}(A_{t}-\tilde{p}^{(k)}_{t}(1|S_{t}))(Y_{t+1}-g^{\star}(H_{t},A_{t}))}{\tilde{\sigma}_{t}^{2(k)}}+\beta(t;H_{t}),$
we obtain:
$\displaystyle n^{1/2}(\hat{\beta}^{(DR)}_{n}-\beta^{\star})$
$\displaystyle=n^{1/2}~{}\frac{1}{K}\sum_{k=1}^{K}\mathbb{P}_{n,k}\Bigg{[}\sum_{t=1}^{T}~{}\left\\{\frac{1}{K}\sum_{k=1}^{K}\mathbb{E}\left[\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})f_{t}(S_{t})f^{\top}_{t}(S_{t})\right]\right\\}^{-1}\times$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}\tilde{\sigma}_{t}^{2(k)}(\tilde{Y}^{(DR)}_{t+1}-f_{t}(S_{t})^{\top}\beta^{\star})f_{t}(S_{t})\Bigg{]}+o_{p}(1).$
By definition of $\beta^{\star}$:
$\frac{1}{K}\sum_{k=1}^{K}\mathbb{E}\left[\sum_{t=1}^{T}\tilde{\sigma}_{t}^{2(k)}\left(\tilde{Y}^{(DR)}_{t+1}-f_{t}(S_{t})^{\top}\beta^{\star}\right)f_{t}(S_{t})\right]=0.$
Consequently, the influence function for $\hat{\beta}^{(DR)}_{n}$ is:
$\displaystyle\frac{1}{K}\sum_{k=1}^{K}\sum_{t=1}^{T}~{}\left\\{\frac{1}{K}\sum_{k=1}^{K}\mathbb{E}\left[\sum_{t=1}^{T}\tilde{\sigma}_{t}^{2(k)}f_{t}(S_{t})f_{t}(S_{t})^{\top}\right]\right\\}^{-1}\tilde{\sigma}_{t}^{2(k)}(\tilde{Y}^{(DR)}_{t+1}-f_{t}(S_{t})^{\top}\beta^{\star})f_{t}(S_{t}).$
(31)
Then under moment conditions, we have asymptotic normality with variance given
by Equation (11) where:
$\displaystyle m$
$\displaystyle=\sum_{t=1}^{T}\hat{\tilde{p}}_{t}^{(k)}(1|S_{t})(1-\hat{\tilde{p}}_{t}^{(k)}(1|S_{t}))(\tilde{Y}^{(DR)}_{t+1}-f_{t}(S_{t})^{\top}\hat{\beta}^{(DR)}_{n})f_{t}(S_{t}),$
$\displaystyle\dot{m}$ $\displaystyle=\frac{\partial
m(\beta,\eta)}{\partial\beta}=\sum_{t=1}^{T}\hat{\tilde{p}}_{t}^{(k)}(1|S_{t})(1-\hat{\tilde{p}}_{t}^{(k)}(1|S_{t}))f_{t}(S_{t})f_{t}(S_{t})^{\top}.$
In conclusion, we establish that the estimator minimizing the DR-WCLS
criterion $\hat{\beta}^{(DR)}_{n}$ is consistent and asymptotically normal.
## Appendix C Sample Splitting
The estimation technique developed in this paper relies on $K$-fold
cross-validation obtained by randomly partitioning the sample: we estimate
the nuisance models $\hat{g}_{t}(H_{t}),\hat{g}_{t}(H_{t},A_{t})$, and
$\hat{p}_{t}(A_{t}|H_{t})$ on one part of the data (the training data) and
estimate the parameter of interest $\hat{\beta}$ on the other part (the test
data).
Cross-fitting plays an important role here: the regression procedure as
defined estimates the pseudo-outcome on a separate sample, independent from
the one used in the second-stage regression (Kennedy, 2020), which allows for
informative error analysis while being agnostic about the first- and second-
stage methods.
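For concreteness, the sample-splitting scheme can be sketched in a few lines of Python; `fit_nuisance` and `estimate_beta` are placeholders for the first-stage supervised learning and second-stage regression described above, not calls to a particular library.

```python
import numpy as np
from sklearn.model_selection import KFold

def cross_fit(data, fit_nuisance, estimate_beta, K=5, seed=0):
    """K-fold cross-fitting: nuisance models are trained on the complement
    of each fold; the parameter of interest is estimated on the held-out fold."""
    estimates = []
    for train_idx, test_idx in KFold(K, shuffle=True, random_state=seed).split(data):
        nuisance = fit_nuisance(data[train_idx])                    # first stage (training data)
        estimates.append(estimate_beta(data[test_idx], nuisance))   # second stage (test data)
    return np.mean(estimates, axis=0)                               # aggregate over folds
```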
## Appendix D Proof of Lemma 5.5
### D.1 Implementation of the augmented R-WCLS criterion
Let $f_{t}(S_{t})^{\perp}$ denote the orthogonal complement of $f_{t}(S_{t})$ in $H_{t}$, i.e., the set of random variables in $H_{t}$ that are uncorrelated with $f_{t}(S_{t})$ when smoothing over time. A rigorous definition of the orthogonal complement of $f_{t}(S_{t})$ is given below (Shi et al., 2023):
$\displaystyle f_{t}(S_{t})^{\perp}\coloneqq\left\\{X_{t}\in
H_{t}:\mathbb{E}\left[\sum_{t=1}^{T}\tilde{p}_{t}(1-\tilde{p}_{t})X_{t}f_{t}(S_{t})\right]=0\right\\}.$
(32)
To construct $\Delta_{t}^{\perp}$, i.e., the projection of $\beta(t;H_{t})$
onto $f_{t}(S_{t})^{\perp}$, we can apply a linear working model as follows:
$\beta(t;H_{t})\sim(f_{t}(S_{t})^{\perp})^{\top}\eta+f_{t}(S_{t})^{\top}\beta.$
Therefore, $\Delta_{t}^{\perp}=(f_{t}(S_{t})^{\perp})^{\top}\eta$. This
approach allows us to effectively leverage the information from the nuisance
functions $\beta(t;H_{t})=g_{t}(H_{t},1)-g_{t}(H_{t},0)$, which can be
decomposed into $\Delta_{t}^{\perp}$ and $f_{t}(S_{t})^{\top}\beta$. Most
importantly, the inclusion of $\Delta_{t}^{\perp}$ in the estimating equation
does not compromise the consistency of the estimator $\hat{\beta}_{n}^{(AR)}$.
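A minimal sketch of this projection step is given below, assuming plug-in estimates of $\beta(t;H_{t})$ are available as an array; the function and argument names are illustrative.

```python
import numpy as np

def fit_delta_perp(beta_hat, f_perp, f_s, sigma2):
    """Weighted linear working model  beta(t; H_t) ~ (f_perp)' eta + f_t(S_t)' beta.
    beta_hat : (n,)   plug-in estimates of beta(t; H_t)
    f_perp   : (n, q) features orthogonal to f_t(S_t)
    f_s      : (n, p) moderator features f_t(S_t)
    sigma2   : (n,)   weights tilde_sigma_t^2(S_t)
    Returns Delta_t^perp = (f_perp)' eta_hat for each observation."""
    X = np.hstack([f_perp, f_s])
    w = np.sqrt(sigma2)
    coef, *_ = np.linalg.lstsq(w[:, None] * X, w * beta_hat, rcond=None)
    eta_hat = coef[: f_perp.shape[1]]            # keep only the f_perp coefficients
    return f_perp @ eta_hat
```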
### D.2 Asymptotic properties
Assume $\hat{\beta}^{(AR)}_{n}$ minimizes the augmented R-WCLS criterion and
therefore it solves:
$0=\mathbb{P}_{n}\left[\sum_{t=1}^{T}W_{t}\left(Y_{t+1}-g_{t}(H_{t})-\left(A_{t}-\tilde{p}_{t}(1|S_{t})\right)\left(f_{t}(S_{t})^{\top}\beta+\Delta_{t}^{\perp}\right)\right)\left(A_{t}-\tilde{p}_{t}(1|S_{t})\right)f_{t}(S_{t})\right].$
The asymptotic properties of the augmented R-WCLS estimator follow from the
expansion:
$\displaystyle 0$
$\displaystyle=\mathbb{P}_{n}\left[\sum_{t=1}^{T}W_{t}\left(Y_{t+1}-\hat{g}(H_{t})-(A_{t}-\tilde{p}_{t}(1|S_{t}))\left(f_{t}(S_{t})^{\top}\hat{\beta}^{(AR)}_{n}+\hat{\Delta}_{t}^{\perp}\right)\right)(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})\right]$
$\displaystyle=\mathbb{P}_{n}\left[\sum_{t=1}^{T}W_{t}\left(Y_{t+1}-g^{\star}_{t}(H_{t})-(A_{t}-\tilde{p}_{t}(1|S_{t}))\left(f_{t}(S_{t})^{\top}\beta^{\star}+\Delta_{t}^{\perp}\right)\right)(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})\right]$
$\displaystyle~{}~{}~{}~{}~{}+\mathbb{P}_{n}\left[\sum_{t=1}^{T}W_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))(g^{\star}_{t}(H_{t})-\hat{g}(H_{t}))f_{t}(S_{t})\right]$
$\displaystyle~{}~{}~{}~{}~{}-\mathbb{P}_{n}\left[\sum_{t=1}^{T}W_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))^{2}(\hat{\Delta}_{t}^{\perp}-\Delta_{t}^{\perp})f_{t}(S_{t})\right]$
$\displaystyle~{}~{}~{}~{}~{}-\mathbb{P}_{n}\left[\sum_{t=1}^{T}W_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))^{2}f_{t}(S_{t})f_{t}(S_{t})^{\top}\right](\hat{\beta}^{(AR)}_{n}-\beta^{\star})$
Because
$\displaystyle\mathbb{P}_{n}\left[\sum_{t=1}^{T}W_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))^{2}f_{t}(S_{t})f_{t}(S_{t})^{\top}\right]\overset{P}{\to}\mathbb{E}\left[\sum_{t=1}^{T}\tilde{p}_{t}(1|S_{t})(1-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})f_{t}(S_{t})^{\top}\right],$
$\displaystyle\mathbb{P}_{n}\left[\sum_{t=1}^{T}W_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})(g^{\star}(H_{t})-\hat{g}(H_{t}))\right]\overset{P}{\to}0,~{}~{}~{}~{}\text{(by
design)}$
$\displaystyle\mathbb{P}_{n}\left[\sum_{t=1}^{T}W_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))^{2}(\hat{\Delta}_{t}^{\perp}-\Delta_{t}^{\perp})f_{t}(S_{t})\right]\overset{P}{\to}0.~{}~{}~{}~{}\text{(orthogonal
projection)}$
We obtain:
$\displaystyle n^{1/2}(\hat{\beta}^{(AR)}_{n}-\beta^{\star})$
$\displaystyle=\left\\{\mathbb{P}_{n}\left[\sum_{t=1}^{T}W_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))^{2}f_{t}(S_{t})f_{t}(S_{t})^{\top}\right]\right\\}^{-1}\times$
$\displaystyle
n^{1/2}\mathbb{P}_{n}\left[\sum_{t=1}^{T}W_{t}\left(Y_{t+1}-g^{\star}_{t}(H_{t})-(A_{t}-\tilde{p}_{t}(1|S_{t}))\left(f_{t}(S_{t})^{\top}\beta^{\star}+\Delta_{t}^{\perp}\right)\right)(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})\right]$
Recall $\tilde{Y}^{(R)}_{t+1}=Y_{t+1}-g^{\star}_{t}(H_{t})$, and we obtain
$\displaystyle n^{1/2}(\hat{\beta}^{(AR)}_{n}-\beta^{\star})$
$\displaystyle=n^{1/2}~{}\mathbb{P}_{n}\Bigg{[}\sum_{t=1}^{T}~{}\mathbb{E}\left[\sum_{t=1}^{T}\tilde{p}_{t}(1|S_{t})(1-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})f_{t}(S_{t})^{\top}\right]^{-1}\times$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}W_{t}\left(\tilde{Y}^{(R)}_{t+1}-(A_{t}-\tilde{p}_{t}(1|S_{t}))\left(f_{t}(S_{t})^{\top}\beta^{\star}+\Delta_{t}^{\perp}\right)\right)(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})\Bigg{]}+o_{p}(1).$
By definition of $\beta^{\star}$:
$\mathbb{E}\left[\sum_{t=1}^{T}W_{t}\left(\tilde{Y}^{(R)}_{t+1}-(A_{t}-\tilde{p}_{t}(1|S_{t}))\left(f_{t}(S_{t})^{\top}\beta^{\star}+\Delta_{t}^{\perp}\right)\right)(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})\right]=0.$
Consequently, under regularity conditions, the estimator
$\hat{\beta}^{(AR)}_{n}\overset{P}{\to}\beta^{\star}$; that is,
$\hat{\beta}^{(AR)}_{n}$ is consistent. The influence function for
$\hat{\beta}^{(AR)}_{n}$ is:
$\displaystyle\sum_{t=1}^{T}~{}\mathbb{E}\left[\sum_{t=1}^{T}\tilde{p}_{t}(1|S_{t})(1-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})f_{t}(S_{t})^{\top}\right]^{-1}\times$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}W_{t}\left(\tilde{Y}^{(R)}_{t+1}-(A_{t}-\tilde{p}_{t}(1|S_{t}))\left(f_{t}(S_{t})^{\top}\beta^{\star}+\Delta_{t}^{\perp}\right)\right)(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t}).$
(33)
Then under moment conditions, we have asymptotic normality with variance given
by $\Sigma_{R}=Q^{-1}WQ^{-1}$, where
$\displaystyle Q$
$\displaystyle=\mathbb{E}\left[\sum_{t=1}^{T}\tilde{p}_{t}(1|S_{t})(1-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})f_{t}(S_{t})^{\top}\right],$
$\displaystyle W$
$\displaystyle=\mathbb{E}\left[\left(\sum_{t=1}^{T}W_{t}\left(\tilde{Y}^{(R)}_{t+1}-(A_{t}-\tilde{p}_{t}(1|S_{t}))\left(f_{t}(S_{t})^{\top}\beta^{\star}+\Delta_{t}^{\perp}\right)\right)(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})\right)^{2}\right].$
In conclusion, we establish that the estimator minimizing the augmented R-WCLS
criterion $\hat{\beta}^{(AR)}_{n}$ is consistent and asymptotically normal.
Under sample splitting, the asymptotic variance can be estimated by Equation
(11) with:
$\displaystyle m$
$\displaystyle=\sum_{t=1}^{T}\hat{W}^{(k)}_{t}\left(\tilde{Y}^{(R)}_{t+1}-(A_{t}-\hat{\tilde{p}}_{t}^{(k)}(1|S_{t}))\left(f_{t}(S_{t})^{\top}\hat{\beta}^{(AR)}_{n}+\hat{\Delta}_{t}^{\perp(k)}\right)\right)(A_{t}-\hat{\tilde{p}}_{t}^{(k)}(1|S_{t}))f_{t}(S_{t}),$
$\displaystyle\dot{m}$ $\displaystyle=\frac{\partial m(\beta,\eta)}{\partial\beta}=\sum_{t=1}^{T}\hat{\tilde{p}}_{t}^{(k)}(1|S_{t})(1-\hat{\tilde{p}}_{t}^{(k)}(1|S_{t}))f_{t}(S_{t})f_{t}(S_{t})^{\top}.$
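For concreteness, the plug-in computation of this sandwich form $Q^{-1}WQ^{-1}$ from per-subject values of $m$ and $\dot{m}$ can be sketched as follows (array names and shapes are illustrative):

```python
import numpy as np

def sandwich_variance(m, mdot):
    """m    : (n, p)    per-subject estimating-function values
    mdot : (n, p, p) per-subject derivatives dm/dbeta
    Returns the plug-in estimate of Q^{-1} W Q^{-1} / n."""
    n = m.shape[0]
    Q = mdot.mean(axis=0)                     # estimate of E[dm/dbeta]
    W = np.einsum("ni,nj->ij", m, m) / n      # estimate of E[m m^T]
    Q_inv = np.linalg.inv(Q)
    return Q_inv @ W @ Q_inv / n
```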
## Appendix E Proof of Theorem 5.7
### E.1 Efficiency gain over WCLS estimator
To reconcile the notations, we write the estimating equation in a general form, from which we can obtain a consistent estimate of $\beta^{\star}$ by solving:
$\mathbb{E}\left[\sum_{t=1}^{T}W_{t}\left(Y_{t+1}-g_{t}(H_{t})-\left(A_{t}-\tilde{p}_{t}(1|S_{t})\right)\left(f_{t}(S_{t})^{\top}\beta+\Delta_{t}^{\perp}\right)\right)\left(A_{t}-\tilde{p}_{t}(1|S_{t})\right)f_{t}(S_{t})\right]=0.$
For WCLS, denote the linear working model for
$\mathbb{E}[Y_{t+1}|H_{t},A_{t}]$ as $\tilde{g}_{t}(H_{t},A_{t})$. We can then
write the estimating equation as:
$\displaystyle\mathbb{E}\left[\sum_{t=1}^{T}W_{t}\left(Y_{t+1}-\tilde{g}_{t}(H_{t})-\left(A_{t}-\tilde{p}_{t}(1|S_{t})\right)(f_{t}(S_{t})^{\top}\beta+\tilde{\Delta}_{t}^{\perp})\right)\left(A_{t}-\tilde{p}_{t}(1|S_{t})\right)f_{t}(S_{t})\right]=0.$
For an augmented R-WCLS estimator, recall that $\tilde{Y}^{(R)}_{t+1}=Y_{t+1}-g(H_{t})$; the estimating equation can then be written as:
$\displaystyle 0$
$\displaystyle=\mathbb{E}\left[\sum_{t=1}^{T}W_{t}\left(Y_{t+1}-g_{t}(H_{t})-\left(A_{t}-\tilde{p}_{t}(1|S_{t})\right)\left(f_{t}(S_{t})^{\top}\beta+\Delta_{t}^{\perp}\right)\right)\left(A_{t}-\tilde{p}_{t}(1|S_{t})\right)f_{t}(S_{t})\right]$
$\displaystyle=\mathbb{E}\left[\sum_{t=1}^{T}W_{t}\left(\tilde{Y}^{(R)}_{t+1}-\left(A_{t}-\tilde{p}_{t}(1|S_{t})\right)\left(f_{t}(S_{t})^{\top}\beta+\Delta_{t}^{\perp}\right)\right)\left(A_{t}-\tilde{p}_{t}(1|S_{t})\right)f_{t}(S_{t})\right]$
Since both methods produce consistent estimates, we now compare their asymptotic variances. For WCLS, the asymptotic variance can be calculated as:
$\displaystyle\Sigma=$
$\displaystyle\mathbb{E}\left[\sum_{t=1}^{T}\tilde{p}_{t}(1|S_{t})(1-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})f_{t}(S_{t})^{\top}\right]^{-1}\times$
$\displaystyle~{}~{}~{}~{}~{}\mathbb{E}\left[\left(\sum_{t=1}^{T}W_{t}\left(Y_{t+1}-\tilde{g}_{t}(H_{t})-(A_{t}-\tilde{p}_{t}(1|S_{t}))(f_{t}(S_{t})^{\top}\beta^{\star}+\tilde{\Delta}_{t}^{\perp})\right)(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})\right)^{2}\right]\times$
$\displaystyle~{}~{}~{}~{}~{}\mathbb{E}\left[\sum_{t=1}^{T}\tilde{p}_{t}(1|S_{t})(1-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})f_{t}(S_{t})^{\top}\right]^{-1}$
and for the augmented R-WCLS, the asymptotic variance can be calculated as:
$\displaystyle\Sigma^{(AR)}=$
$\displaystyle\mathbb{E}\left[\sum_{t=1}^{T}\tilde{p}_{t}(1|S_{t})(1-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})f_{t}(S_{t})^{\top}\right]^{-1}\times$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}\mathbb{E}\left[\left(\sum_{t=1}^{T}W_{t}\left(\tilde{Y}^{(R)}_{t+1}-(A_{t}-\tilde{p}_{t}(1|S_{t}))(f_{t}(S_{t})^{\top}\beta^{\star}+\Delta_{t}^{\perp})\right)(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})\right)^{2}\right]\times$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}\mathbb{E}\left[\sum_{t=1}^{T}\tilde{p}_{t}(1|S_{t})(1-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})f_{t}(S_{t})^{\top}\right]^{-1}.$
Denoting $\epsilon(H_{t})=g(H_{t})-\tilde{g}(H_{t})$, we have the following derivation:
$\displaystyle\mathbb{E}\left[\left(\sum_{t=1}^{T}W_{t}\left(Y_{t+1}-\tilde{g}_{t}(H_{t})-(A_{t}-\tilde{p}_{t}(1|S_{t}))(f_{t}(S_{t})^{\top}\beta^{\star}+\tilde{\Delta}_{t}^{\perp})\right)(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})\right)^{2}\right]$
$\displaystyle=$
$\displaystyle\mathbb{E}\left[\left(\sum_{t=1}^{T}W_{t}\left(\tilde{Y}^{(R)}_{t+1}+\epsilon(H_{t})-(A_{t}-\tilde{p}_{t}(1|S_{t}))(f_{t}(S_{t})^{\top}\beta^{\star}+\tilde{\Delta}_{t}^{\perp}+\Delta_{t}^{\perp}-\Delta_{t}^{\perp})\right)(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})\right)^{2}\right]$
$\displaystyle=$
$\displaystyle\mathbb{E}\left[\left(\sum_{t=1}^{T}W_{t}\left(\tilde{Y}^{(R)}_{t+1}-(A_{t}-\tilde{p}_{t}(1|S_{t}))(f_{t}(S_{t})^{\top}\beta^{\star}+\Delta_{t}^{\perp})\right)(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})\right)^{2}\right]$
$\displaystyle~{}~{}~{}~{}~{}~{}+\mathbb{E}\Bigg{[}\left(\sum_{t=1}^{T}W_{t}\epsilon(H_{t},A_{t})(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})\right)^{2}\Bigg{]}$
$\displaystyle\geq$
$\displaystyle\mathbb{E}\left[\left(\sum_{t=1}^{T}W_{t}\left(\tilde{Y}^{(R)}_{t+1}-(A_{t}-\tilde{p}_{t}(1|S_{t}))(f_{t}(S_{t})^{\top}\beta^{\star}+\Delta_{t}^{\perp})\right)(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})\right)^{2}\right]$
where
$\epsilon(H_{t},A_{t})=\epsilon(H_{t})+(A_{t}-\tilde{p}_{t}(1|S_{t}))(\Delta_{t}^{\perp}-\tilde{\Delta}_{t}^{\perp})$.
The interaction term is 0 because:
$\displaystyle\mathbb{E}\left[\sum_{t,t^{\prime}}^{T}W_{t}\left(\tilde{Y}^{(R)}_{t+1}-(A_{t}-\tilde{p}_{t})(f_{t}(S_{t})^{\top}\beta^{\star}+\Delta_{t}^{\perp})\right)(A_{t}-\tilde{p}_{t})f_{t}(S_{t})W_{t^{\prime}}\epsilon(H_{t^{\prime}},A_{t^{\prime}})(A_{t^{\prime}}-\tilde{p}_{t^{\prime}})f_{t^{\prime}}(S_{t^{\prime}})^{\top}\right]$
$\displaystyle=$
$\displaystyle\mathbb{E}\left[\sum_{t,t^{\prime}}^{T}W_{t}\left(Y_{t+1}-g^{\star}_{t}(H_{t},A_{t})\right)(A_{t}-\tilde{p}_{t})f_{t}(S_{t})W_{t^{\prime}}\epsilon(H_{t^{\prime}},A_{t^{\prime}})(A_{t^{\prime}}-\tilde{p}_{t^{\prime}})f_{t^{\prime}}(S_{t^{\prime}})^{\top}\right]$
The step from the first line to the second uses the fact that $f_{t}(S_{t})^{\top}\beta^{\star}+\Delta_{t}^{\perp}=g_{t}(H_{t},1)-g_{t}(H_{t},0)$, which gives $\tilde{Y}^{(R)}_{t+1}-(A_{t}-\tilde{p}_{t})(f_{t}(S_{t})^{\top}\beta^{\star}+\Delta_{t}^{\perp})=Y_{t+1}-g^{\star}_{t}(H_{t},A_{t})$.
For $t\geq t^{\prime}$, by iterated expectation, we have:
$\displaystyle\mathbb{E}\left[\sum_{t,t^{\prime}}^{T}W_{t}\underbrace{\mathbb{E}\left[\left(Y_{t+1}-g^{\star}_{t}(H_{t},A_{t})\right)|H_{t},A_{t}\right]}_{=0}W_{t^{\prime}}\epsilon(H_{t^{\prime}},A_{t^{\prime}})(A_{t^{\prime}}-\tilde{p}_{t^{\prime}})f_{t^{\prime}}(S_{t^{\prime}})^{\top}(A_{t}-\tilde{p}_{t})f_{t}(S_{t})\right]$
$\displaystyle=0.$
For $t<t^{\prime}$, by iterated expectation, we have:
$\displaystyle\mathbb{E}\left[\sum_{t,t^{\prime}}^{T}W_{t}\left(Y_{t+1}-g^{\star}_{t}(H_{t},A_{t})\right)\mathbb{E}\left[W_{t^{\prime}}\epsilon(H_{t^{\prime}},A_{t^{\prime}})(A_{t^{\prime}}-\tilde{p}_{t^{\prime}})f_{t^{\prime}}(S_{t^{\prime}})^{\top}|H_{t^{\prime}}\right](A_{t}-\tilde{p}_{t})f_{t}(S_{t})\right]$
$\displaystyle=$
$\displaystyle\mathbb{E}\left[\sum_{t,t^{\prime}}^{T}W_{t}\left(Y_{t+1}-g^{\star}_{t}(H_{t},A_{t})\right)\mathbb{E}\left[\tilde{p}_{t^{\prime}}(1-\tilde{p}_{t^{\prime}})\Delta_{t^{\prime}}^{\perp}f_{t^{\prime}}(S_{t^{\prime}})^{\top}|H_{t^{\prime}}\right](A_{t}-\tilde{p}_{t})f_{t}(S_{t})\right]$
$\displaystyle=$
$\displaystyle\mathbb{E}\left[\sum_{t,t^{\prime}}^{T}W_{t}\mathbb{E}\left[\underbrace{\left(Y_{t+1}-g^{\star}_{t}(H_{t},A_{t})\right)\tilde{p}_{t^{\prime}}(1-\tilde{p}_{t^{\prime}})\Delta_{t^{\prime}}^{\perp}f_{t^{\prime}}(S_{t^{\prime}})^{\top}|A_{t},H_{t}}_{\text{conditionally
independent}}\right](A_{t}-\tilde{p}_{t})f_{t}(S_{t})\right]$ $\displaystyle=$
$\displaystyle\mathbb{E}\left[\sum_{t,t^{\prime}}^{T}W_{t}\underbrace{\mathbb{E}\left[Y_{t+1}-g^{\star}_{t}(H_{t},A_{t})|A_{t},H_{t}\right]}_{=0}\mathbb{E}\left[\tilde{p}_{t^{\prime}}(1-\tilde{p}_{t^{\prime}})\Delta_{t^{\prime}}^{\perp}f_{t^{\prime}}(S_{t^{\prime}})^{\top}|A_{t},H_{t}\right](A_{t}-\tilde{p}_{t})f_{t}(S_{t})\right]$
$\displaystyle=$ $\displaystyle 0$
Therefore, the derivation above shows that $\Sigma^{(AR)}-\Sigma$ is negative semidefinite. This indicates that using the augmented R-WCLS to estimate the treatment effect $\beta^{\star}$ is more efficient than using WCLS. When $\beta_{t}^{\star}$ is estimated nonparametrically rather than by smoothing over time, the interaction terms for $t\neq t^{\prime}$ do not appear, so the conclusion holds without the conditional independence assumption.
### E.2 Efficiency comparison between R-WCLS and DR-WCLS
As stated in Section 4.2, the DR-WCLS criterion differs from the R-WCLS criterion in that it replaces one term with its conditional expectation. Denote the following:
$\displaystyle
M_{DR,t}=\tilde{\sigma}^{2}_{t}(S_{t})\left(\beta(t;H_{t})-f_{t}(S_{t})^{\top}\beta\right)f_{t}(S_{t}),$
$\displaystyle
M_{R,t}=W_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))\left(\tilde{Y}_{t+1}^{(R)}-(A_{t}-\tilde{p}_{t})f_{t}(S_{t})^{\top}\beta\right)f_{t}(S_{t}).$
Here we apply the conditional Jensen's inequality: since $g(x)=x^{2}$ is convex, $\mathbb{E}[X^{2}|Z]\geq(\mathbb{E}[X|Z])^{2}$. We then have the following inequality:
$\displaystyle\mathbb{E}\left[\left(\sum_{t=1}^{T}M_{R,t}\right)^{2}\,|\,H_{t}\right]\geq\left(\mathbb{E}\left[\sum_{t=1}^{T}M_{R,t}\,|\,H_{t}\right]\right)^{2}$
By iterated expectation, we can show:
$\displaystyle\mathbb{E}\left[\sum_{t=1}^{T}M_{R,t}\,|\,H_{t}\right]=\sum_{t=1}^{T}M_{DR,t},$
thus, equivalently:
$\displaystyle\mathbb{E}\left[\left(\sum_{t=1}^{T}M_{R,t}\right)^{2}\,|\,H_{t}\right]\geq\left(\sum_{t=1}^{T}M_{DR,t}\right)^{2}$
Both sides of the inequality are functions of random variables collected in $H_{t}$; thus the inequality still holds after taking the expectation with respect to $H_{t}$, which yields:
$\displaystyle\mathbb{E}\left[\left(\sum_{t=1}^{T}M_{R,t}\right)^{2}\right]\geq\mathbb{E}\left[\left(\sum_{t=1}^{T}M_{DR,t}\right)^{2}\right]$
Following this, the asymptotic variance of the DR-WCLS estimator can be
calculated as:
$\displaystyle\Sigma^{(DR)}=$
$\displaystyle\mathbb{E}\left[\sum_{t=1}^{T}\tilde{p}_{t}(1|S_{t})(1-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})f_{t}(S_{t})^{\top}\right]^{-1}\times\mathbb{E}\left[\left(\sum_{t=1}^{T}M_{DR,t}\right)^{2}\right]\times$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}\mathbb{E}\left[\sum_{t=1}^{T}\tilde{p}_{t}(1|S_{t})(1-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})f_{t}(S_{t})^{\top}\right]^{-1}$
Recall $\Sigma^{(R)}$ from previous proof:
$\displaystyle\Sigma^{(R)}=$
$\displaystyle\mathbb{E}\left[\sum_{t=1}^{T}\tilde{p}_{t}(1|S_{t})(1-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})f_{t}(S_{t})^{\top}\right]^{-1}\times\mathbb{E}\left[\left(\sum_{t=1}^{T}M_{R,t}\right)^{2}\right]\times$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}\mathbb{E}\left[\sum_{t=1}^{T}\tilde{p}_{t}(1|S_{t})(1-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})f_{t}(S_{t})^{\top}\right]^{-1}$
Therefore $\Sigma^{(DR)}-\Sigma^{(R)}$ is negative semidefinite. In conclusion, we have proved that the DR-WCLS estimator is more efficient than the R-WCLS estimator when estimating the time-varying treatment effect.
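This variance ordering can be checked numerically in a toy one-timepoint example ($T=1$, $W_{t}=1$, $f_{t}=1$, $\tilde{p}_{t}=1/2$), where $\mathbb{E}[M_{R}\,|\,H]=M_{DR}$ holds by construction; the simulation below is purely illustrative and not part of the formal proof.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500_000
H = rng.normal(size=n)                      # history / covariate
A = rng.binomial(1, 0.5, size=n)            # randomized treatment, p_tilde = 1/2
Y = H + A * (1 + H) + rng.normal(size=n)    # outcome with effect beta(H) = 1 + H
g = H + 0.5 * (1 + H)                       # g*(H) = E[Y | H]
beta = 1.0                                  # working marginal effect f'beta with f = 1

Y_R = Y - g                                               # centered pseudo-outcome
M_R = (A - 0.5) * (Y_R - (A - 0.5) * beta)                # R-WCLS-type term
M_DR = 0.25 * ((1 + H) - beta)                            # sigma2 * (beta(H) - f'beta)

print(abs(M_R.mean()) < 1e-2, abs(M_DR.mean()) < 1e-2)    # both terms are mean zero
print(M_R.var() >= M_DR.var())                            # True: DR term has smaller second moment
```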
## Appendix F Proof of Corollary 6.1.1
### F.1 Double robustness property
We now derive the DR-WCLS criterion (14) with missing indicator $R_{t}$. Under Assumptions 2.1 and 6.1, the pseudo-outcome $\tilde{Y}_{t+1}^{(DR)}$ can be written as:
$\displaystyle\tilde{Y}_{t+1}^{(DR)}=$
$\displaystyle\beta(t;H_{t})+\frac{A_{t}R_{t}(Y_{t+1}-g(H_{t},A_{t}))}{p_{t}(A_{t},R_{t}|H_{t})}-\frac{(1-A_{t})R_{t}(Y_{t+1}-g(H_{t},A_{t}))}{p_{t}(A_{t},R_{t}|H_{t})}$
$\displaystyle=$
$\displaystyle\beta(t;H_{t})+\frac{A_{t}R_{t}(Y_{t+1}-g(H_{t},A_{t}))}{p(R_{t}|H_{t})p(A_{t}|H_{t})}-\frac{(1-A_{t})R_{t}(Y_{t+1}-g(H_{t},A_{t}))}{p(R_{t}|H_{t})p(A_{t}|H_{t})}$
$\displaystyle=$
$\displaystyle\beta(t;H_{t})+\frac{R_{t}}{p(R_{t}|H_{t})}\left[\frac{A_{t}(Y_{t+1}-g(H_{t},A_{t}))}{p(A_{t}|H_{t})}-\frac{(1-A_{t})(Y_{t+1}-g(H_{t},A_{t}))}{p(A_{t}|H_{t})}\right]$
$\displaystyle=$ $\displaystyle\beta(t;H_{t})+\frac{{\bf 1}(R_{t}=1)}{p(R_{t}|H_{t})}\frac{W_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))(Y_{t+1}-g_{t}(H_{t},A_{t}))}{\tilde{\sigma}^{2}_{t}(S_{t})}$
and the corresponding estimating equation is:
$\displaystyle\mathbb{P}_{n}\Bigg{[}\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})\Big{(}\frac{{\bf 1}(R_{t}=1)}{p(R_{t}|H_{t})}\frac{W_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))(Y_{t+1}-g_{t}(H_{t},A_{t}))}{\tilde{\sigma}^{2}_{t}(S_{t})}+\beta(t;H_{t})-f_{t}(S_{t})^{\top}\beta\Big{)}f_{t}(S_{t})\Bigg{]}=0.$
Furthermore, based on previous proofs, we can conclude that the
$\hat{\beta}_{n}$ obtained by solving the above estimating equation is doubly
robust.
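A sketch of constructing this pseudo-outcome at a single time point is given below; the argument names are illustrative, and unobserved outcomes are zeroed out first since they only enter multiplied by $R_{t}$.

```python
import numpy as np

def pseudo_outcome_dr(Y, A, R, beta_h, g_hat, p_a, p_tilde, p_r):
    """Doubly robust pseudo-outcome with missing-data indicator R_t.
    Y, A, R : outcome, treatment, observed-outcome indicator at time t
    beta_h  : plug-in estimate of beta(t; H_t)
    g_hat   : outcome-model prediction g_hat(H_t, A_t)
    p_a     : estimated treatment probability p(A_t = 1 | H_t)
    p_tilde : numerator probability p_tilde(1 | S_t)
    p_r     : estimated missingness model p(R_t = 1 | H_t)"""
    Y = np.where(R == 1, Y, 0.0)                   # unobserved outcomes never enter
    W = np.where(A == 1, p_tilde / p_a, (1 - p_tilde) / (1 - p_a))   # weight W_t
    sigma2 = p_tilde * (1 - p_tilde)               # tilde sigma_t^2(S_t)
    return beta_h + R * W * (A - p_tilde) * (Y - g_hat) / (p_r * sigma2)
```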
### F.2 Asymptotic normality
We use the same notation as in the previous section and assume $\hat{\beta}^{(DR)}_{n}$ solves:
$\displaystyle 0$
$\displaystyle=\mathbb{P}_{n}\Bigg{[}\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})\Big{(}\frac{{\bf 1}(R_{t}=1)}{p(R_{t}|H_{t})}\frac{W_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))(Y_{t+1}-g_{t}(H_{t},A_{t}))}{\tilde{\sigma}^{2}_{t}(S_{t})}+\beta(t;H_{t})-f_{t}(S_{t})^{\top}\hat{\beta}^{(DR)}_{n}\Big{)}f_{t}(S_{t})\Bigg{]}$
When the true randomization probability $p_{t}=p(A_{t}|H_{t})$ and missingness mechanism $p_{t}^{R}=p(R_{t}|H_{t})$ are unknown, the weight $W_{t}$ is estimated by $\hat{W}_{t}=\tilde{p}_{t}(A_{t}|S_{t})/\hat{p}_{t}(A_{t}|H_{t})$ and the missingness mechanism by $\hat{p}(R_{t}|H_{t})$. The estimating equation can then be decomposed as:
$\displaystyle\mathbb{P}_{n}\left[\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})\left(\frac{R_{t}}{\hat{p}(R_{t}|H_{t})}\frac{\tilde{p}_{t}(A_{t}|S_{t})(A_{t}-\tilde{p}_{t}(1|S_{t}))(Y_{t+1}-\hat{g}_{t}(H_{t},A_{t}))}{\hat{p}_{t}(A_{t}|H_{t})\tilde{\sigma}^{2}_{t}(S_{t})}+\hat{\beta}(t;H_{t})-f_{t}(S_{t})^{\top}\beta^{(DR)}\right)f_{t}(S_{t})\right]$
$\displaystyle=$
$\displaystyle\mathbb{P}_{n}\Bigg{[}\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})\Big{(}\frac{R_{t}\tilde{p}_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))(Y_{t+1}-g^{\star}_{t}(H_{t},A_{t})+g^{\star}_{t}(H_{t},A_{t})-\hat{g}_{t}(H_{t},A_{t}))}{\tilde{\sigma}^{2}_{t}(S_{t})}\left(\frac{1}{\hat{p}_{t}\hat{p}_{t}^{R}}-\frac{1}{p_{t}p_{t}^{R}}+\frac{1}{p_{t}p_{t}^{R}}\right)$
$\displaystyle+\beta(t;H_{t})-f_{t}(S_{t})^{\top}\beta^{\star}+(\hat{\beta}(t;H_{t})-\beta(t;H_{t}))-f_{t}(S_{t})^{\top}(\beta^{(DR)}-\beta^{\star})\Big{)}f_{t}(S_{t})\Bigg{]}$
$\displaystyle=$
$\displaystyle\mathbb{P}_{n}\left[\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})\left(\frac{R_{t}W_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))(Y_{t+1}-g^{\star}_{t}(H_{t},A_{t}))}{p_{t}^{R}\tilde{\sigma}^{2}_{t}(S_{t})}+\beta(t;H_{t})-f_{t}(S_{t})^{\top}\beta^{\star}\right)f_{t}(S_{t})\right]+$
$\displaystyle\mathbb{P}_{n}\left[\sum_{t=1}^{T}R_{t}\tilde{p}_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))(Y_{t+1}-g^{\star}_{t}(H_{t},A_{t}))\left(\frac{1}{\hat{p}_{t}^{R}\hat{p}_{t}}-\frac{1}{p_{t}^{R}p_{t}}\right)f_{t}(S_{t})\right]+$
$\displaystyle\mathbb{P}_{n}\left[\sum_{t=1}^{T}R_{t}\tilde{p}_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))(g^{\star}_{t}(H_{t},A_{t})-\hat{g}_{t}(H_{t},A_{t}))\left(\frac{1}{\hat{p}_{t}^{R}\hat{p}_{t}}-\frac{1}{p_{t}^{R}p_{t}}\right)f_{t}(S_{t})\right]+$
$\displaystyle\mathbb{P}_{n}\left[\sum_{t=1}^{T}\frac{R_{t}}{p_{t}^{R}}W_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))(g^{\star}(H_{t},A_{t})-\hat{g}_{t}(H_{t},A_{t}))f_{t}(S_{t})\right]+$
$\displaystyle\mathbb{P}_{n}\left[\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})(\hat{\beta}(t;H_{t})-\beta(t;H_{t}))f_{t}(S_{t})\right]-$
$\displaystyle\mathbb{P}_{n}\left[\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})f_{t}(S_{t})f^{\top}_{t}(S_{t})\right](\hat{\beta}^{(DR)}_{n}-\beta^{\star})$
Because
$\displaystyle\mathbb{P}_{n}\left[\sum_{t=1}^{T}R_{t}\tilde{p}_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))(Y_{t+1}-g^{\star}_{t}(H_{t},A_{t}))\left(\frac{1}{\hat{p}_{t}^{R}\hat{p}_{t}}-\frac{1}{p_{t}^{R}p_{t}}\right)f_{t}(S_{t})\right]\overset{P}{\to}0,$
$\displaystyle\mathbb{P}_{n}\left[\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})f_{t}(S_{t})f^{\top}_{t}(S_{t})\right]\overset{P}{\to}\mathbb{E}\left[\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})f_{t}(S_{t})f^{\top}_{t}(S_{t})\right],$
and
$\displaystyle\mathbb{P}_{n}\left[\sum_{t=1}^{T}\frac{R_{t}}{p_{t}^{R}}W_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))(g^{\star}(H_{t},A_{t})-\hat{g}_{t}(H_{t},A_{t}))f_{t}(S_{t})\right]+$
$\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}\mathbb{P}_{n}\left[\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})(\hat{\beta}(t;H_{t})-\beta(t;H_{t}))f_{t}(S_{t})\right]\overset{P}{\to}0.$
Apart from the well-behaved terms above, the only term that might be problematic is:
$\mathbb{P}_{n}\left[\sum_{t=1}^{T}R_{t}\tilde{p}_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))(g^{\star}_{t}(H_{t},A_{t})-\hat{g}_{t}(H_{t},A_{t}))\left(\frac{1}{\hat{p}_{t}^{R}\hat{p}_{t}}-\frac{1}{p_{t}^{R}p_{t}}\right)f_{t}(S_{t})\right],$
which will converge to:
$\displaystyle\mathbb{E}\left[\sum_{t=1}^{T}\left(\underbrace{\sum_{a\in\\{0,1\\}}{\bf c}(a)(p_{t}^{R}(1|H_{t})p_{t}(a|H_{t})-\hat{p}_{t}^{R}(1|H_{t})\hat{p}_{t}(a|H_{t}))(g^{\star}(H_{t},a)-\hat{g}(H_{t},a))}_{\text{(II)}}\right)f_{t}(S_{t})\right],$
where ${\bf c}(a)=\frac{\tilde{\sigma}^{2}_{t}(S_{t})/\hat{p}_{t}^{R}(1|H_{t})}{a\hat{p}_{t}(1|H_{t})+(a-1)(1-\hat{p}_{t}(1|H_{t}))}$.
In our context, $T$ is finite and fixed. Therefore, since $\hat{p}_{t}(1|H_{t})$ is bounded away from zero and one, the Cauchy–Schwarz inequality implies that, up to a multiplicative constant, term (II) is bounded above by:
$\mathbf{\hat{B}}^{R}=\mathbb{E}\left[\sum_{t=1}^{T}\sum_{a\in\\{0,1\\}}\left\|p_{t}^{R}(1|H_{t})p_{t}(a|H_{t})-\hat{p}_{t}^{R}(1|H_{t})\hat{p}_{t}(a|H_{t})\right\|\left\|g^{\star}(H_{t},a)-\hat{g}(H_{t},a)\right\|\right].$
(34)
By the same argument as in the previous section, if $\hat{p}(a|H_{t})$ and $\hat{p}_{t}^{R}(1|H_{t})$ are based on a correctly specified parametric model, so that $\left\|\hat{p}_{t}^{R}(1|H_{t})\hat{p}_{t}(a|H_{t})-p_{t}^{R}(1|H_{t})p_{t}(a|H_{t})\right\|=O_{p}(n^{-1/2})$, then we only need $\hat{g}(H_{t},a)$ to be consistent, $\left\|g^{\star}(H_{t},a)-\hat{g}(H_{t},a)\right\|=o_{p}(1)$, to make $\mathbf{\hat{B}}^{R}$ asymptotically negligible. Thus, if the treatment and missingness mechanisms are known, the outcome model can be very flexible. Another way to achieve efficiency is to have both $\left\|\hat{p}_{t}^{R}(1|H_{t})\hat{p}_{t}(a|H_{t})-p_{t}^{R}(1|H_{t})p_{t}(a|H_{t})\right\|=o_{p}(n^{-1/4})$ and $\left\|g^{\star}(H_{t},a)-\hat{g}(H_{t},a)\right\|=o_{p}(n^{-1/4})$, so that their product is $o_{p}(n^{-1/2})$ and asymptotically negligible (Kennedy, 2016). This of course occurs if both $\hat{g}(H_{t},a)$ and $\hat{p}_{t}^{R}(1|H_{t})\hat{p}_{t}(a|H_{t})$ are based on correctly specified models, but it can also hold for estimators that are very flexible and not based on parametric models.
Assuming the nuisance estimates make $\mathbf{\hat{B}}^{R}$ asymptotically negligible, the DR-WCLS estimator satisfies:
$\displaystyle n^{1/2}(\hat{\beta}^{(DR)}_{n}-\beta^{\star})$
$\displaystyle=n^{1/2}~{}\mathbb{P}_{n}\Bigg{[}\sum_{t=1}^{T}~{}\mathbb{E}\left[\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})f_{t}(S_{t})f^{\top}_{t}(S_{t})\right]^{-1}\tilde{\sigma}^{2}_{t}(S_{t})(\beta(t;H_{t})-f_{t}(S_{t})\beta^{\star})f_{t}(S_{t})\Bigg{]}+o_{p}(1),$
and it is efficient with influence function:
$\displaystyle\sum_{t=1}^{T}~{}\mathbb{E}\left[\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})f_{t}(S_{t})f_{t}(S_{t})^{\top}\right]^{-1}\tilde{\sigma}^{2}_{t}(S_{t})(\beta(t;H_{t})-f_{t}(S_{t})\beta^{\star})f_{t}(S_{t}).$
Under moment conditions, we have asymptotic normality with variance given by
$\Sigma^{R}_{DR}=Q^{-1}WQ^{-1}$, where
$\displaystyle Q$
$\displaystyle=\mathbb{E}\left[\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})f_{t}(S_{t})f_{t}(S_{t})^{\top}\right],$
$\displaystyle W$
$\displaystyle=\mathbb{E}\left[\left(\sum_{t=1}^{T}\tilde{\sigma}^{2}_{t}(S_{t})(\beta(t;H_{t})-f_{t}(S_{t})\beta^{\star})f_{t}(S_{t})\right)^{2}\right].$
### F.3 Algorithm
##### Step I
Let $K$ be a fixed integer. Form a K-fold random partition of $\\{1,2,\dots,N\\}$ by dividing it into equal parts, each of size $n:=N/K$, assuming $N$ is a multiple of $K$. For each set $I_{k}$, let $I_{k}^{\complement}$ denote the observation indices that are not in $I_{k}$.
##### Step II
For each fold, use any supervised learning algorithm to estimate the appropriate working models. Let $\hat{g}_{t}^{(k)}(H_{t},A_{t})$, $\hat{p}_{t}^{(k)}(1|H_{t})$, $\hat{p}_{t}^{(k)}(R_{t}|H_{t})$ and $\hat{\tilde{p}}_{t}^{(k)}(1|S_{t})$ denote the estimates of $\mathbb{E}[Y_{t+1}|H_{t},A_{t}]$, $\mathbb{E}[A_{t}|H_{t}]$, $\mathbb{E}[R_{t}|H_{t}]$, and $\mathbb{E}[A_{t}|S_{t}]$, respectively, obtained using individuals in $I_{k}^{\complement}$, i.e., the nuisance-parameter estimates for the $k$th fold.
##### Step III
Construct the pseudo-outcomes and perform the weighted regression estimation:
$\tilde{Y}^{(DR)}_{t+1}:=\frac{{\bf 1}(R_{t}=1)}{\hat{p}_{t}^{(k)}(R_{t}|H_{t})}\frac{\hat{W}^{(k)}_{t}(A_{t}-\hat{\tilde{p}}_{t}^{(k)}(1|S_{t}))(Y_{t+1}-\hat{g}_{t}^{(k)}(H_{t},A_{t}))}{\hat{\tilde{\sigma}}_{t}^{2(k)}(S_{t})}+\hat{\beta}^{(k)}(t;H_{t}),$
for individuals in $I_{k}$, and regress $\tilde{Y}^{(DR)}_{t+1}$ on $f_{t}(S_{t})$ with weights $\hat{\tilde{\sigma}}_{t}^{2(k)}(S_{t})$, pooling over the $K$ folds.
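Putting Steps I–III together, a minimal sketch of the pooled weighted regression is given below; `folds`, `build_pseudo`, `features` and `weights` are placeholders for the fold data and the nuisance-based quantities from Steps I–II.

```python
import numpy as np

def dr_wcls(folds, build_pseudo, features, weights):
    """Pool the K held-out folds and solve the weighted least-squares
    normal equations for hat beta^(DR).
    folds        : list of (data_k, nuisance_k) pairs from Steps I-II
    build_pseudo : returns tilde Y^(DR) for a fold (cf. pseudo_outcome_dr above)
    features     : returns the (n_k, p) design matrix f_t(S_t) for a fold
    weights      : returns tilde sigma_t^2(S_t) for a fold"""
    Q, b = 0.0, 0.0
    for data_k, nuis_k in folds:
        f = features(data_k, nuis_k)
        w = weights(data_k, nuis_k)
        y = build_pseudo(data_k, nuis_k)
        Q = Q + (w[:, None] * f).T @ f        # accumulates sum sigma^2 f f'
        b = b + (w[:, None] * f).T @ y        # accumulates sum sigma^2 f tilde_Y
    return np.linalg.solve(Q, b)              # hat beta^(DR)
```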
|
# LLaVA-Gemma: Accelerating Multimodal Foundation Models with a Compact
Language Model
Musashi Hinck* Matthew L. Olson* David Cobbley Shao-Yen Tseng Vasudev Lal
*Equal contributions, order decided by LLaVA-Gemma 2b
Cognitive AI, Intel Labs
Santa Clara, CA USA
{musashi.hinck,matthew.lyle.olson,david.j.cobbley,shao-<EMAIL_ADDRESS>
###### Abstract
We train a suite of multimodal foundation models (MMFM) using the popular
LLaVA framework with the recently released Gemma family of large language
models (LLMs). Of particular interest is the 2B parameter Gemma model, which
provides opportunities to construct capable small-scale MMFMs. In line with
findings from other papers in this space, we test the effect of ablating three
design features: pretraining the connector, utilizing a more powerful image
backbone, and increasing the size of the language backbone. The resulting
models, which we call LLaVA-Gemma, exhibit moderate performance on an array of
evaluations, but fail to improve past the current comparably-sized SOTA
models. Closer analysis of performance shows mixed effects; skipping
pretraining tends to reduce performance, larger vision models sometimes
improve performance, and increasing language model size has inconsistent
effects. We publicly release training recipes, code, and weights for the LLaVA-Gemma models (https://huggingface.co/intel/llava-gemma-2b/, https://huggingface.co/intel/llava-gemma-7b/).
## 1 Introduction
In this paper, we introduce LLaVA-Gemma, a suite of vision-language assistants
trained from the Gemma Large Language Model (LLM) variants, Gemma-2B and
Gemma-7B [17]. Our work is inspired by the rapid progress in small but capable
visual language models (VLMs), such as LLaVA-Phi [23], which have demonstrated
remarkable efficiency and effectiveness in various language understanding
tasks. LLaVA-Gemma distinguishes itself among small VLMs due to the public
release of similarly trained, different-sized LLMs Gemma-2B and Gemma-7B.
The unique release of the Gemma models offers an opportunity to contrast model
performance in relation to parameter size and visual encoding capabilities. By
possessing two variants with different parameter sizes, LLaVA-Gemma allows
researchers to investigate the trade-offs between computational efficiency and
the richness of visual and linguistic understanding. With these two variants,
we perform a deeper exploration of how varying levels of model complexity
influence the effectiveness of visual encoding, providing valuable insights
into the optimization of small VLMs for diverse tasks and environments.
Furthermore, the use of significantly more unique tokens, at $256k$, offers an opportunity to investigate how a massively increased token set affects multimodal performance.
Recent advancements in large language models (LLMs) [20] and multimodal foundation models (MMFMs)
[7] have propelled the interest and development of Large Multimodal Models
(LMMs). Notable models like GPT-4 [1], LLaVA [10, 9], and their derivatives
have demonstrated significant performance in vision-language tasks such as
Visual Question Answering (VQA) and image captioning [5]. However, the
computational demands of deploying these models have led to the exploration of
small-scale LMMs. Our work aims to provide a unified analysis of small-scale
LMMs, examining how model selections, training recipes, and data contribute to
performance, which is distinct from existing works such as LLaVA-Phi.
Our contributions are as follows:
1. 1.
We introduce LLaVA-Gemma, a MMFM that leverages the compact yet powerful Gemma
language models for efficient multimodal interactions.
2. 2.
We extensively evaluate the Gemma-2B and Gemma-7B model variants, providing valuable insights into the trade-offs between computational efficiency and the richness of visual and linguistic understanding in LLMs.
3. 3.
We present a deep exploration of alternate design choices and visualize attention with relevancy maps to enhance our understanding of the models' performance.
Language Backbone | Vision Backbone | Pretrain Connector | GQA | MME Cog. | MME Per. | MM-Vet | POPE Acc. | POPE F1 | VQAv2 | MMVP | ScienceQA Image
---|---|---|---|---|---|---|---|---|---|---|---
gemma-2b-it | CLIP | Yes | 0.531 | 236 | 1130 | 17.7 | 0.850 | 0.839 | 70.7 | 0.287 | 0.564
gemma-2b-it | CLIP | No | 0.481 | 249 | 935 | 13.1 | 0.784 | 0.762 | 61.7 | 0.180 | 0.549
gemma-2b-it | DinoV2 | Yes | 0.587 | 307 | 1133 | 19.1 | 0.853 | 0.838 | 71.4 | 0.227 | 0.555
gemma-2b-it | DinoV2 | No | 0.501 | 309 | 959 | 14.5 | 0.793 | 0.772 | 61.7 | 0.180 | 0.568
gemma-7b-it | CLIP | Yes | 0.472 | 254 | 895 | 18.2 | 0.848 | 0.829 | 68.7 | 0.327 | 0.625
gemma-7b-it | CLIP | No | 0.472 | 278 | 857 | 19.1 | 0.782 | 0.734 | 65.1 | 0.240 | 0.636
gemma-7b-it | DinoV2 | Yes | 0.519 | 257 | 1021 | 14.3 | 0.794 | 0.762 | 65.2 | 0.327 | 0.628
gemma-7b-it | DinoV2 | No | 0.459 | 226 | 771 | 12.2 | 0.693 | 0.567 | 57.4 | 0.267 | 0.598
Phi-2b | CLIP | Yes | - | - | 1335 | 28.9 | - | 0.850 | 71.4 | - | 0.684
Llama-2-7b | CLIP | Yes | 0.620 | 348 | 1511 | 30.6 | 0.850 | 0.859 | 78.5 | 46.1 | 0.704
Table 1: Performance of LLaVA-Gemma models across seven benchmarks.
Highlighted box indicates strongest performance amongst LLaVA-Gemma models.
Bottom two rows show self-reported performance of Llava Phi-2 and LLaVA-v1.5
respectively.
## 2 Methods
We follow the LLaVA framework [9] with a few design modifications. This
framework combines a pretrained vision encoder (such as CLIP [14]) and
a pretrained language model (such as Llama-2 [19]) into a multimodal model using an MLP connector and a two-stage training procedure.
The first stage pretrains the MLP connector by freezing the vision and language models and training on a custom dataset of 595k samples filtered from CC3M [15]. The second stage jointly finetunes the language model and connector using a custom mixture of 665k multimodal instruction-tuning examples. This dataset includes synthetically generated data [10], as well as examples from established vision-language training sets such as GQA [5] and TextCaps [16].
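For concreteness, a minimal PyTorch-style sketch of this recipe is given below; the two-layer GELU MLP mirrors the LLaVA-1.5 connector design, but the dimensions and helper names are illustrative rather than the exact released configuration.

```python
import torch.nn as nn

class MLPConnector(nn.Module):
    """Maps vision-encoder features into the language model's embedding space."""
    def __init__(self, vision_dim=1024, llm_dim=2048):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(vision_dim, llm_dim), nn.GELU(),
                                  nn.Linear(llm_dim, llm_dim))

    def forward(self, image_features):
        return self.proj(image_features)

def set_stage(vision_tower, connector, language_model, stage):
    """Stage 1: train only the connector. Stage 2: jointly finetune connector + LLM.
    The vision encoder stays frozen in both stages."""
    for p in vision_tower.parameters():
        p.requires_grad = False
    for p in connector.parameters():
        p.requires_grad = True
    for p in language_model.parameters():
        p.requires_grad = (stage == 2)
```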
We deviate from the original recipe in three ways: the language model, the
vision encoder and the pretraining stage. For the language backbone, we use
the recently released Gemma models [17]. Two aspects of Gemma make it an
interesting candidate for our experiments. Whereas LLaVA uses the 7 and 13-billion parameter Vicuña language models [22], Gemma offers 2 and 7-billion parameter versions. Next, Gemma uses a significantly larger token set than any other LLM, with 256k unique tokens (compared to a standard 50k), which offers a unique opportunity to see the effects of a massively more diverse embedding space. Other papers exploring the design space of Vision Language Models
(VLMs) find the vision encoder is important for achieving strong performance
[12]. Correspondingly, we explore the use of the larger 1-billion parameter
DINOv2 image encoder [13] as the vision tower. Related work on VLMs [6] finds
that skipping the initial pretraining stage improves downstream performance.
For all designs, we train a version with and without the initial pretraining
step.
## 3 Results
We evaluate the LLaVA-Gemma models on a similar collection of benchmarks to
other LMM works: GQA [5]; MME [3]; MM-Vet [21]; POPE (accuracy and F1) [8];
VQAv2 [4]; MMVP [18]; the image subset of ScienceQA [11]. Our experiments
provide insights into the efficacy of various design choices within the LLaVA
framework. As shown in Table 1, the performance of LLaVA-Gemma models across
seven benchmarks reveals interesting patterns, particularly concerning the
choice of vision encoder and the impact of pretraining the connector.
### 3.1 Influence of Vision Encoder on Performance
For the 2B backbone, exchanging the CLIP vision encoder for DinoV2 appears to
generally improve performance, with DinoV2 variants outperforming CLIP
variants on all benchmarks except POPE-F1 and MMVP. When using a 7B backbone,
the picture is murkier; although we see improvements for GQA and MME, we see a
decline in performance on MM-Vet, POPE, VQA and ScienceQA. This may suggest an
interaction between the capability of the language model and the richness of
the representation provided by the vision encoder, or point to the possibility that the 7B-Dino combination is undertrained.
### 3.2 Effects of Pretraining
We find that skipping the initial connector pretraining almost always reduces
model performance. With the exceptions of 2B-Dino on MME Cognition and 7B-CLIP
on MME Cognition, MM-Vet and ScienceQA, the variant with a pretrained
connector outperforms its counterpart that skipped pretraining. These results
do not support the hypothesis posited in Karamcheti et al. [6].
### 3.3 Comparison to Baselines
Contrasting the results of LLaVA-Gemma with the self-reported performances of
Phi-2b and Llama-2-7b models provides additional context. The LLaVA-Gemma
models only reach parity with comparably-sized baselines on the VQA benchmark among 2B models. Given the absence of strong a priori reasons to expect
Gemma-based LLaVA models to perform worse, understanding this “poor”
performance is a direction of future interest.
### 3.4 Speed of Training and Inference
We compare the training and evaluation speed for the two model sizes. In our experiments, the training time for the Gemma-2B model on 8 Intel Gaudi 2® AI accelerators was 4 hours, while the larger Gemma-7B model required 16 hours to train under the same conditions. The Gemma-7B model, with its increased parameter count, thus takes approximately four times longer to train, for a relative speed of 0.25x compared to the Gemma-2B model. These results highlight the trade-off between model size and training efficiency, with larger models requiring significantly more computational resources and time.
## 4 Analysis
### 4.1 Impact of Alternate Design Choices
Figure 1: Effect of design choices differs between evaluations. Point
indicates average change in probability of correct answer versus baseline
design.
Table 1 suggests that the gemma-2b-dino recipe generally provides stronger
evaluation results, but these results are mixed. To better assess the effect
of the design choices, we fit a collection of linear models to measure the
average associated change in the probability of a correct prediction as a
function of each of the three ablations: skipping pretraining, changing the
vision backbone, and increasing the size of the LM backbone from 2B to 7B. We
study these effects separately for each benchmark.
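A sketch of one such benchmark-level fit is shown below; the dataframe columns (`correct`, `skip_pretrain`, `dino`, `lm_7b`, `item_id`) are hypothetical names for the observation-level correctness indicator and the three ablation indicators, not part of our released code.

```python
import pandas as pd
import statsmodels.formula.api as smf

def ablation_effects(df: pd.DataFrame, benchmark: str):
    """OLS on a binary outcome is a linear probability model, so each
    coefficient is the average change in P(correct) associated with
    the corresponding design choice, relative to the baseline design."""
    sub = df[df["benchmark"] == benchmark]
    fit = smf.ols("correct ~ skip_pretrain + dino + lm_7b", data=sub).fit(
        cov_type="cluster", cov_kwds={"groups": sub["item_id"]})  # cluster on items
    return fit.params, fit.conf_int()
```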
Figure 1 shows the average effects of design choices for four benchmarks where
we have observation-level errors. Skipping pretraining appears to either have
a strong negative (GQA, POPE) or weak/insignificant effect (MME, ScienceQA).
Changing the vision encoder to DinoV2 improves performance on GQA and MME, but
slightly worsens performance on POPE and has no significant effect on the
probability of correct predictions on ScienceQA. Notably, in our experiments
increasing the LM backbone to the 7B parameter variant had a strong negative
effect on GQA, MME and POPE, but strong positive effect on ScienceQA. Taken
together, these heterogeneous results underscore the need for more granular
analysis of errors and design choices.
### 4.2 Visualizing Attention with Relevancy Maps
To better understand the differences between the LLaVA-Gemma models, we use relevancy maps [2] to visualize where the model focuses its attention.
These relevancy maps provide a token-wise understanding of the model's attention by highlighting the most relevant parts of the input, and are specifically designed to maintain total relevancy across layers in transformer-based models.
We present a qualitative example of these relevancy maps from the Eyes-wide-shut (MMVP) dataset. This dataset is of particular interest as it is designed
to find image-caption pairs that a CLIP model finds to be similar, but are
distinct. As the traditional LLaVA recipe uses CLIP, we compare our CLIP
backboned models to find a case where the Gemma 2b model fails, but Gemma 7b
is successful.
Figure 2: Relevancy map comparison between LLaVA-Gemma 2b (Left) and LLaVA-
Gemma 7b (Right) with gradients on the first relevant output token. For the
question “Is the duck floating? (a) Yes (b) No”, despite using the identical
CLIP vision encoder, the smaller model does not attend to the visual input.
Figure 2 shows an example of the differences in attention to the visual
aspects of the scene between the LLaVA-Gemma 2b and LLaVA-Gemma 7b models. The
relevancy maps for the LLaVA-Gemma 2b model show a dispersed and unfocused
pattern of attention, which correlates with its failure to accurately
interpret the scene. In contrast, the LLaVA-Gemma 7b model exhibits a more
concentrated and relevant pattern of attention, particularly focusing on the borders between objects: the duck, the water, and the rock being stood on. This
visualization not only highlights the superior performance of the LLaVA-Gemma
7b model, but also illuminates an interesting case where leveraging a more
powerful LLM ensures improved visual token attention.
## 5 Discussion
In this paper, we introduced LLaVA-Gemma, a compact vision-language model
leveraging the Gemma Large Language Model in two variants, Gemma-2B and
Gemma-7B. Our work provides a unique opportunity for researchers to explore
the trade-offs between computational efficiency and multimodal understanding
in small-scale models. The availability of both variants allows for a
comparative analysis that sheds light on how model size impacts performance in
various tasks. Our evaluations demonstrate the versatility and effectiveness
of LLaVA-Gemma across a range of datasets, highlighting its potential as a
benchmark for future research in small-scale vision-language models. With
these models, future practitioners can optimize the performance of small-scale
multimodal models more directly.
## References
* Achiam et al. [2023] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, et al. Gpt-4 technical report. _arXiv preprint arXiv:2303.08774_ , 2023.
* Chefer et al. [2021] Hila Chefer, Shir Gur, and Lior Wolf. Generic attention-model explainability for interpreting bi-modal and encoder-decoder transformers. In _Int. Conf. Comput. Vis._ , pages 397–406, 2021.
* Fu et al. [2023] Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, and Rongrong Ji. Mme: A comprehensive evaluation benchmark for multimodal large language models. _arXiv preprint arXiv:2306.13394_ , 2023.
* Goyal et al. [2017] Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the v in vqa matter: Elevating the role of image understanding in visual question answering. In _IEEE Conf. Comput. Vis. Pattern Recog._ , pages 6904–6913, 2017.
* Hudson and Manning [2019] Drew A Hudson and Christopher D Manning. Gqa: A new dataset for real-world visual reasoning and compositional question answering. In _IEEE Conf. Comput. Vis. Pattern Recog._ , pages 6700–6709, 2019.
* Karamcheti et al. [2024] Siddharth Karamcheti, Suraj Nair, Ashwin Balakrishna, Percy Liang, Thomas Kollar, and Dorsa Sadigh. Prismatic vlms: Investigating the design space of visually-conditioned language models. _arXiv preprint arXiv:2402.07865_ , 2024.
* Li et al. [2023a] Chunyuan Li, Zhe Gan, Zhengyuan Yang, Jianwei Yang, Linjie Li, Lijuan Wang, and Jianfeng Gao. Multimodal foundation models: From specialists to general-purpose assistants. _arXiv preprint arXiv:2309.10020_ , 1(2):2, 2023a.
* Li et al. [2023b] Yifan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Wayne Xin Zhao, and Ji-Rong Wen. Evaluating object hallucination in large vision-language models. _arXiv preprint arXiv:2305.10355_ , 2023b.
* Liu et al. [2023] Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. Improved baselines with visual instruction tuning. _arXiv preprint arXiv:2310.03744_ , 2023.
* Liu et al. [2024] Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. _Adv. Neural Inform. Process. Syst._ , 36, 2024.
* Lu et al. [2022] Pan Lu, Swaroop Mishra, Tanglin Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan. Learn to explain: Multimodal reasoning via thought chains for science question answering. _Adv. Neural Inform. Process. Syst._ , 35:2507–2521, 2022.
* McKinzie et al. [2024] Brandon McKinzie, Zhe Gan, Jean-Philippe Fauconnier, Sam Dodge, Bowen Zhang, Philipp Dufter, Dhruti Shah, Xianzhi Du, Futang Peng, Floris Weers, et al. Mm1: Methods, analysis & insights from multimodal llm pre-training. _arXiv preprint arXiv:2403.09611_ , 2024.
* Oquab et al. [2023] Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al. Dinov2: Learning robust visual features without supervision. _arXiv preprint arXiv:2304.07193_ , 2023.
* Radford et al. [2021] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In _International conference on machine learning_ , pages 8748–8763. PMLR, 2021.
* Sharma et al. [2018] Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In _ACL_ , pages 2556–2565, Melbourne, Australia, 2018. Association for Computational Linguistics.
* Sidorov et al. [2020] Oleksii Sidorov, Ronghang Hu, Marcus Rohrbach, and Amanpreet Singh. Textcaps: a dataset for image captioning with reading comprehension. In _Eur. Conf. Comput. Vis._ , pages 742–758. Springer, 2020.
* Team et al. [2024] Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya Pathak, Laurent Sifre, Morgane Rivière, Mihir Sanjay Kale, Juliette Love, et al. Gemma: Open models based on gemini research and technology. _arXiv preprint arXiv:2403.08295_ , 2024.
* Tong et al. [2024] Shengbang Tong, Zhuang Liu, Yuexiang Zhai, Yi Ma, Yann LeCun, and Saining Xie. Eyes wide shut? exploring the visual shortcomings of multimodal llms, 2024.
* Touvron et al. [2023] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, et al. Llama 2: Open foundation and fine-tuned chat models, 2023.
* Vaswani et al. [2017] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. _Adv. Neural Inform. Process. Syst._ , 30, 2017.
* Yu et al. [2023] Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, and Lijuan Wang. Mm-vet: Evaluating large multimodal models for integrated capabilities. _arXiv preprint arXiv:2308.02490_ , 2023.
* Zheng et al. [2023] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, et al. Judging llm-as-a-judge with mt-bench and chatbot arena, 2023.
* Zhu et al. [2024] Yichen Zhu, Minjie Zhu, Ning Liu, Zhicai Ou, Xiaofeng Mou, and Jian Tang. Llava-phi: Efficient multi-modal assistant with small language model. _arXiv preprint arXiv:2401.02330_ , 2024.
|
Hashing it Out: Predicting Unhealthy Conversations on Twitter
Steven Leung1, Filippos Papapolyzos2
1 UC Berkeley
2 UC Berkeley
<EMAIL_ADDRESS>
## Abstract
Personal attacks in the context of social media conversations often lead to
fast-paced derailment, leading to even more harmful exchanges being made.
State-of-the-art systems for the detection of such conversational derailment
often make use of Deep Learning approaches for prediction purposes. In this
paper, we show that an Attention-based BERT architecture, pre-trained on a
large Twitter corpus and fine-tuned on our task, is efficient and effective in
making such predictions. This model shows clear advantages in performance to
the existing LSTM model we use as a baseline. Additionally, we show that this
impressive performance can be attained through fine-tuning on a relatively
small, novel dataset, particularly after mitigating overfitting issues through
synthetic oversampling techniques. By introducing the first transformer-based
model for forecasting conversational events on Twitter, this work lays the
foundation for a practical tool to encourage better interactions on one of the
world’s most ubiquitous social media platforms.
## 1 Introduction
Social media has become one of the primary stages where peer-to-peer
discussions take place, spanning anything from everyday local matters to high
level philosophical topics. A very frequent phenomenon that is noticed in such
conversations is their rapid deterioration upon the presence of a personal
attack, often leading to personal threats and insults which heavily undermine
the level of discussion. In order to battle this issue, Social Media
companies, such as Twitter and Facebook, have come up with systems that make
use of human moderators who review content flagged by users. In the majority
of cases, content that goes against the company’s policy is removed [13].
The way this moderation tactic is set up presents a series of challenges and
limitations, the most obvious being the cost of moderation services. Secondly,
there have been cases of unreliable moderation in Twitter, with users
receiving temporary bans simply for mentioning some specific word in their
tweets. Lastly, it has to be appreciated that moderation can be quite an
emotionally taxing job as content under review will often be very disturbing.
The most basic limitation of this approach, however, is that moderation only happens a posteriori, and solely after users choose to report the abusive content. This leaves a significant window of time in which a conversation can escalate further and users might become the victims or perpetrators of personal attacks and/or verbal abuse. To the best of our knowledge, no effort is made by Twitter to prevent such harmful content rather than merely moderate it. We trust that if Social Media companies chose to adopt a preventative strategy to moderation, this would reduce the instances of personal attacks and improve the overall content quality.
There are a series of issues that make detecting potentially harmful language
a challenging topic. As mentioned in Twitter’s enforcement policy [13],
context is of utmost importance; the use of specific language might have a
different interpretation when the preceding conversation is better examined.
Twitter’s policy mentions that factors such as whether “the behavior is
directed at an individual, group, or protected category of people” and “the
severity of the violation” are also considered prior to taking action.
The aim of this project is to provide a Deep Learning approach to content
moderation, by means of an attention-based neural network model that predicts
whether a Twitter conversation is likely to deteriorate and lead to a personal
attack. We believe that this will not only provide an advantage to moderators
but could also potentially be utilized to give warnings to users when they are
about to tweet something which may lead to escalation.
## 2 Related Work
A lot of our inspiration for this project was drawn from the 2018 paper
Conversations Gone Awry: Detecting Early Signs of Conversational Failure
(Zhang et al., 2018) [17]. Specifically, in this paper the authors try to
identify predictors of conversational derailment and make predictions using a
variety of strategies. An interesting insight from the paper is that
conversations containing an attacker-initiated direct question are most likely
to derail. The authors also claim that human-based predictions of
conversational derailment are at an accuracy level of 72%.
Past work has also been done on forecasting hostility on a dataset of
Instagram comments (Liu et al., 2018) [7], analyzing features of personal
attack in Wikipedia comments (Wulczyn et al., 2017) [16] and the use of
attention models to improve the performance of RNN and CNN approaches to
content moderation (Pavlopoulos et al., 2017) [9].
In the 2019 paper titled Trouble on the Horizon: Forecasting the Derailment of
Online Conversations as they Develop, Chang et al. [2], the authors use a
Recurrent Neural Network (RNN) based approach to predict conversational
derailment on a dataset of Wikipedia talk page discussions developed as part
of the aforementioned 2018 paper, and a dataset of conversations in the
ChangeMyView subreddit. CRAFT, their model architecture, yields an accuracy
score of 66.5% which, to the best of our knowledge, is the highest accuracy
score achieved on this specific task.
Despite the success of the RNN-based CRAFT model, we choose to explore a pre-
trained self-attention model for our conversational task for several reasons.
First, work by Vaswani et al. [14] and others has shown performance advantages of transformers over RNNs. Namely, RNNs have displayed challenges with long-range dependencies and word sense disambiguation relative to transformers [12]. We deemed that using an attention model could allow us to sidestep the sequential architecture of RNNs, allowing the model to make use of previous context without information loss. This is grounded in the idea that
previous tweets in a conversation may have an equally strong effect on the
probability of derailment of a subsequent tweet. In addition, transfer
learning allows us to take advantage of pre-trained models, such as BERTweet,
which is trained on millions of examples in the unique language of Twitter,
thus improving the overall performance of our approach. Finally, due to the
linear scaling of self-attention models as opposed to the quadratic scaling of
RNNs, our model is more scalable to the vast conversational world of Twitter.
## 3 Dataset
For this project we have compiled our own dataset, consisting of 5656 tweets
tagged on one binary dimension of personal attack. Tweets are labelled with
conversation IDs, vital for our task of using prior tweets to predict the
outcome of a subsequent tweet. We acquired our data through the Twitter API.
In order to have a higher probability of finding positive examples of personal
attack, we selected conversations from a collection of controversial topics
such as Universal Basic Income, abortion, and immigration.
In tagging each tweet as either containing a personal attack or not, we used
the Wiktionary definition of a personal attack: “an abusive remark on or
relating to somebody’s person instead of providing evidence when examining
another person’s claims or comments”. We used three methods in tagging. First,
we made use of the participant recruitment platform Mechanical Turk. Secondly,
we were fortunate enough to find a dataset on Zenodo titled “Hate speech and
personal attack dataset in English social media” [3] that contained tweet
level tags. We were able to match these tweets to the rest of the conversation
through the Twitter API. Finally, we tagged the remainder of tweets ourselves.
From the 5656 tweets, we isolated 1177 positive examples, leaving 4479
negative examples which translated to a heavy class imbalance. Our budget and
project time frame were significant limitations to the quality and scale of
our data collection, forcing us to seek alternative methods of reducing the
effect of positive example under-representation, which came at their own
costs. Specifically, we made use of oversampling for positive examples, which
came at the expense of overfitting.
### 3.1 Synthetic Oversampling
A method we made use of to reduce class imbalance in our dataset was a
technique called synthetic oversampling. Specifically, we deemed that by
creating artificial context examples for the positive class and supplying them
to our model in the training phase we could reduce the effect of overfitting
while also providing a larger quantity of plausible training examples. This
technique works by replacing words in the original context with similar words
to create new contexts and resupplying them to the model as separate training
examples.
Our inspiration was drawn from the 2016 paper A Study of Synthetic
Oversampling for Twitter Imbalanced Sentiment Analysis (Ah-Pine & Soriano,
2016) [1] in which the authors present a general algorithm for creating
synthetic samples as follows: a random tweet x is drawn from a distribution P,
its nearest neighbors NN(x) are determined, a neighbor x’ is chosen “according
to a probability distribution over NN(x)” and a synthetic sample is created in
the form $y=x+a(x'-x)$, where $a\sim\mathrm{unif}(0,1)$. A popular version of this
algorithm is known as SMOTE (Synthetic Minority Oversampling TEchnique)
(Chawla, 2002) [4], which assumes a uniform distribution over P. The
application of this algorithm in the context of words requires a vector space
mapping which, in our case, was achieved using the GloVe Twitter Embeddings.
Specifically, the original context is first broken down into a series of
tokens which are tagged by part-of-speech and are filtered for stopwords,
which are not further processed. Using the GloVe Twitter embeddings, a
dictionary of k-nearest neighbors is computed for each token using the
Euclidean distance metric. We randomly pick one of the 3 closest neighbors
with non-uniform probability, giving larger probability to the closest
neighbor. We then randomly choose the filtered tokens to be replaced in the
original context with probability P = 0.2 and repeat this process n times to
produce a list of similar but non-identical contexts. Using this process we
developed 355 synthetic positive examples.
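To make the procedure concrete, below is a minimal Python sketch of this
replacement scheme. It assumes a `glove` dictionary mapping tokens to GloVe
Twitter vectors and a `stopwords` set, both loaded elsewhere; the rank weights
favoring the closest neighbor are illustrative, since we only require that
closer neighbors receive larger probability.

```python
import random
import numpy as np

def nearest_neighbors(token, glove, k=3):
    """Return the k tokens closest to `token` in Euclidean distance."""
    v = glove[token]
    dists = {w: np.linalg.norm(v - u) for w, u in glove.items() if w != token}
    return sorted(dists, key=dists.get)[:k]

def synthesize(tokens, glove, stopwords, p_replace=0.2):
    """Create one synthetic context by probabilistically swapping tokens."""
    out = []
    for tok in tokens:
        # Stopwords and out-of-vocabulary tokens are left untouched.
        if tok in stopwords or tok not in glove or random.random() > p_replace:
            out.append(tok)
            continue
        neighbors = nearest_neighbors(tok, glove, k=3)
        # Pick among the 3 closest neighbors non-uniformly, favoring the closest.
        weights = [0.5, 0.3, 0.2][:len(neighbors)]
        out.append(random.choices(neighbors, weights=weights, k=1)[0])
    return out
```

Calling `synthesize` n times per positive context yields the list of similar
but non-identical training examples described above.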
## 4 Transformer Model
Our general model for forecasting conversational events is BERTweet, a
BERT-base transformer model, fine-tuned on our conversational Twitter data.
### 4.1 Conversational Structure
##### Conversations
For the purposes of our model, a conversation is made up of an initial tweet
(what we call a top-level tweet) and all subsequent tweets following that
tweet that are in direct reply to one another. While Twitter allows
conversations to branch (at any point in a conversation, users can reply
directly to the top-level tweet, to a reply, to a reply of a reply, and so
on), producing complex conversational structures, we limit our model to a
single conversational branch per input.
Thus, given a top-level tweet $T_{1}$, a conversation consists of a sequence
of N tweets $C=\\{T_{1},...,T_{N}\\}$.
##### Context
Tweets are the individual components of the conversation, each made by a
single user. For forecasting, we use the two tweets prior to the tweet in
question to predict whether that tweet will contain a personal attack. These
two preceding tweets are referred to as the “context”, and they form the input
to our model.
The two context tweets are concatenated with the $</s>$ token in between to
preserve the delineation of tweets. The context is then tokenized using the
BERTweet tokenizer, with a normalization feature to handle common
conversational elements in Twitter such as username handles, hashtags and
urls. The classification token, $[CLS]$, is appended to the front of the input
for use in the classification process. Thus, given the tweet in question,
$T_{N}$, the two-tweet context, $T_{N-1}$ and $T_{N-2}$, is converted to a
variable-length sequence of tokens (up to M per tweet), yielding the
conversational context
$C_{N}=[CLS],T_{N-1,1},T_{N-1,2},\ldots,T_{N-1,M},</s>,T_{N-2,1},T_{N-2,2},\ldots,T_{N-2,M}$.
The most recent tweet in the context is placed in the front of the context to
avoid truncating the most recent tweet in the case that the context exceeds
BERTweet tokenizer’s 130 token length limit.
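A hedged sketch of this input construction with the Hugging Face BERTweet
tokenizer is shown below; the function and variable names are ours, and the
exact call options are a plausible configuration rather than a verbatim record
of our pipeline.

```python
from transformers import AutoTokenizer

# `normalization=True` applies BERTweet's tweet normalization (user handles,
# hashtags, URLs) before tokenization.
tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base", normalization=True)

def build_context(tweet_n_minus_1: str, tweet_n_minus_2: str):
    # Most recent tweet first, so truncation drops the older tweet's tail.
    text = tweet_n_minus_1 + " </s> " + tweet_n_minus_2
    return tokenizer(
        text,
        truncation=True,
        max_length=130,       # token limit cited in the text
        padding="max_length",
        return_tensors="pt",  # the tokenizer prepends the [CLS]-equivalent itself
    )
```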
Figure 1: Model architecture. The tokenized two tweet context is fed into pre-
trained BERTweet. Forecasting whether the subsequent tweet will be a personal
attack or not is treated as a classification problem through a logistic
regression head on the CLS token final embedding.
### 4.2 Fine-tuning BERTweet
The pre-trained BERTweet model that we use is a version of the
popular BERT model adapted to Twitter data. At a high-level, BERT is a bi-directional encoder model
trained to predict masked tokens and whether one sentence directly follows
another in the source text [5]. We use the base model configuration consisting
of 12 transformer blocks, 12 self-attention heads, and a hidden size of 768.
In the BERTweet implementation, a BERT-base model is pre-trained on 850M
English tweets, yielding strong performance on Twitter data for the tasks of
part-of-speech tagging, named-entity recognition, and text classification [8].
We choose this pre-trained model based on its superior performance, relative
to other transformer models such as RoBERTa-large and XLM-R-large, on Twitter
data for the tasks mentioned above.
During the fine-tuning process, we update the parameters of BERTweet on the
specific task of personal attack forecasting using Twitter conversation
context in accordance with the method proposed in Howard et al. [6]. We append
a classification head on top of BERTweet using the Hugging Face model
configuration BertForSequenceClassification [15]. In this implementation, the $[CLS]$ token
is used to classify whether the tweet in question is forecasted to be a
personal attack or not, in that the final hidden state $h$ of this token is
used to classify the sequence. A simple linear dense layer classification head
is added to the top of BERTweet to forecast the probability of a personal
attack before passing to a sigmoid transformation for binary classification:
${P}(\text{Attack}\mid h_{1\ldots 768})=\text{Sigmoid}(Wh_{1\ldots 768}+b)$
(1)
This classification method is discussed in further depth by Sun et al. [11].
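The following sketch shows one plausible way to set up this fine-tuning with
the Hugging Face Trainer API; it is a sketch under our stated hyper-parameters
(batch size 10, learning rate 5e-5, at most 4 epochs, see Section 5.2.2), not
a verbatim record of our training script. The two-logit softmax head is
equivalent, up to parameterization, to the single-logit sigmoid head in
Equation (1).

```python
from transformers import (AutoModelForSequenceClassification, Trainer,
                          TrainingArguments)

def finetune_bertweet(train_ds, eval_ds):
    # `train_ds` / `eval_ds` are assumed to be tokenized datasets of
    # (context, label) pairs built as in Section 4.1.
    model = AutoModelForSequenceClassification.from_pretrained(
        "vinai/bertweet-base", num_labels=2)  # linear head on the [CLS] state
    args = TrainingArguments(
        output_dir="bertweet-attack-forecast",  # illustrative path
        per_device_train_batch_size=10,         # batch size from Section 5.2.2
        learning_rate=5e-5,                     # default LR kept, per the text
        num_train_epochs=4,                     # training capped at 4 epochs
    )
    trainer = Trainer(model=model, args=args,
                      train_dataset=train_ds, eval_dataset=eval_ds)
    trainer.train()
    return trainer
```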
## 5 Evaluation and Analysis
### 5.1 Results
We test and evaluate three models to examine the effect of both the model
architecture and synthetic oversampling. We use the CRAFT model as our
baseline in the same form as proposed by Chang et al. [2]. CRAFT is trained and
evaluated on our Twitter conversation data using standard oversampling on the
positive class. We then fine-tune and evaluate BERTweet with standard
oversampling to measure the impact of a transformer implementation as opposed
to the LSTM implementation proposed in CRAFT. Finally, we fine-tune and
evaluate BERTweet on our Twitter conversation data with synthetic oversampling
of the positive class.
Model | A | P | R | F1 | AUPR
---|---|---|---|---|---
CRAFT | 0.76 | 0.52 | 0.57 | 0.54 | 0.55
BT | 0.82 | 0.62 | 0.76 | 0.68 | 0.76
BT SOS | 0.85 | 0.69 | 0.72 | 0.70 | 0.78
Table 1: Comparison of performance between CRAFT with random oversampling
(CRAFT), BERTweet fine-tuned with random oversampling (BT) and BERTweet fine-
tuned with synthetic oversampling (BT SOS). Classification threshold of .5.
We evaluate our model on several key metrics: total accuracy across the
positive and negative classes, and precision, recall, F1 score, and area under
the precision-recall curve for the positive class. While it is
of vital importance to flag as many upcoming personal attacks as possible
(recall), it is also of crucial importance to maintain credibility by not
flagging innocuous conversations (precision). Thus, we pay particular
attention to the area under the precision recall curve (AUPR) as a measure of
success. This method is preferable to the ROC curve due to the high class
imbalance. The AUPR metric was used heavily in the “Trouble on the Horizon”
paper, in which the authors achieved an AUPR of .70 on their final model [2],
although we do not make an explicit comparison to this result due to the
disparate nature of our conversational data.
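For reference, the positive-class metrics in Table 1 can be computed with
scikit-learn as sketched below; `y_true` and `y_score` stand for the test
labels and predicted attack probabilities.

```python
from sklearn.metrics import (auc, f1_score, precision_recall_curve,
                             precision_score, recall_score)

def positive_class_metrics(y_true, y_score, threshold=0.5):
    y_pred = [int(s >= threshold) for s in y_score]
    precision, recall, _ = precision_recall_curve(y_true, y_score)
    return {
        "P": precision_score(y_true, y_pred),
        "R": recall_score(y_true, y_pred),
        "F1": f1_score(y_true, y_pred),
        "AUPR": auc(recall, precision),  # area under the precision-recall curve
    }
```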
Figure 2: Precision-recall curves and the area under each curve.
The transformer based BERTweet model outperforms the LSTM based CRAFT model in
all metrics. Implementing synthetic oversampling also improves all metrics,
with the exception of recall on the positive class at the classification
threshold of .5. Given the higher AUPR score, however, we see that synthetic
oversampling has generally improved our model’s ability to classify the
positive class.
### 5.2 Analysis
We now examine the behavior of our model to better understand what aspects of
the data it is using for predictions and where it is falling short.
#### 5.2.1 Model Behavior
##### The Lead Up to an Attack
We first examine whether the model is using the full two tweet context we are
supplying to make its predictions or if a single tweet context is sufficient.
Intuitively, we expected only a minor drop in performance since our assumption
was that most of the signal indicating an attack was coming would be captured
in the tweet immediately preceding the attack tweet. We tested this hypothesis
by truncating the training data to a single tweet. We were surprised to see a
dramatic drop in performance, with the model performing at .74 accuracy and
only .50 AUPR. This indicates that the tweet two prior does indeed provide a
large amount of useful information in making predictions.
##### Conversational Dynamics
Secondly, we look at whether the model is using information about
conversational dynamics in the context to inform its predictions. To look at
this, we remove the tweet separator token, $</s>$, so that any indication of
delineation between speakers is lost. Model performance again drops, although
not as significantly as in our first test, with an accuracy of .77 and AUPR of
.76. This meets our expectation since the location of inflammatory language
that leads to a personal attack relative to the token (in other words, which
tweet in the context it is in) would presumably be valuable information for
the model to use in making a prediction on the current tweet.
#### 5.2.2 Model Limitations
##### Limited Dataset
While building our model, one of the major challenges we faced was the
relatively small size of the dataset we were fine-tuning on, particularly in
the positive class. The main consequence of this was a tendency of our model
to overfit the training dataset. We dealt with this issue through a variety of
methods, including synthetic oversampling (as mentioned earlier), reducing
batch size to 10, maintaining the learning rate at the default level of 5e-5
and ensuring that training did not exceed 4 epochs.
Even so, we occasionally see unusual spikes in the negative log likelihood
loss pattern on the validation set related to our limited dataset. These
spikes would occur despite an increasing accuracy and AUPR score. In other
words, our model is making more correct predictions on the positive and
negative class on our validation data but is also more over-confident in the
bad predictions it is making. We posit two potential explanations for this.
Firstly, despite the measures above, our model is likely still overfitting to
an extent on the training data. This leads to over-confident incorrect
predictions due to random noise as opposed to true signal in our validation
data. Secondly, due to the small number of positive examples in our validation
set (154), a small change in parameters between epochs could result in
dramatic swings in total loss. In other words, a few very bad predictions
could have a large impact on the loss calculation, resulting in the spike we
witnessed.
##### GIF Confusion
We observe a higher occurrence of url strings in the context strings where our
model makes misclassifications on the test set. The ratio of url strings to
context strings is .65 in the misclassified examples vs only .5 in the
correctly classified examples. This makes sense intuitively since these url
strings contain a large amount of information relative to whether a personal
attack is coming that is not interpretable by our model. The best example of
this is GIFs, or animated images. While these GIFs are represented in our data
as simple url strings, the actual content of the GIF could be highly
inflammatory, benign or even friendly, which could be highly indicative of the
presence or absence of a personal attack in the subsequent tweet. While we did
not have time to explore this further, we believe this would be a fruitful
area of future work, as we will discuss in more detail in the future work
section.
##### Nuanced Language
In their paper, Price et al. note the difficulty neural models have in
identifying nuanced language that indicates negative interactions, such as
sarcasm, dismissiveness, and condescension [10]. Notably, BERT was shown by the
authors to be particularly poor at identifying sarcasm. These are language
qualities that could be highly indicative of impending personal attacks. Since
this is not currently accounted for in our model, we believe this is likely a
source of confusion contributing to our existing loss.
## 6 Conclusions and Future Work
In summary, we introduced a transformer based model for forecasting
conversational events on a novel dataset of Twitter conversations. This model
indicates some ability to understand conversational dynamics between speakers.
It fills a void in the existing literature in that it provides state-of-the-
art predictive performance on Twitter, a platform that has not been studied in
the space of conversational forecasting.
Given Twitter’s stated goal of having healthier conversations on its platform,
we hope this study is a foundation for future work specific to this ubiquitous
channel of communications. One vital area of expansion will be to incorporate
additional topics of conversation into the model. While we focused our study
on controversial political and societal topics, additional work is needed to
ensure the model generalizes to more mundane topics, since the signals leading
towards a personal attack could be very different for these conversations.
While our model performed impressively with regard to our test data, we are
confident that future work could further improve performance of the model. Our
model could be improved by addressing all the sources of loss mentioned
earlier. Giving the model a sense of nuanced language, by appending an
attention head for detecting the six attributes of unhealthy conversations
noted in Price et al. [10], would help address this deficiency. Another clear
area of improvement, as referenced earlier, would be to communicate the
sentiment of GIFs as input to the model. Finally, the robustness of the model
would be greatly improved by anyone with the resources to reliably collect and
accurately tag additional tweets.
As the model becomes more capable, we hope it can become a practical tool to
assist Twitter users to interact with more civility and awareness. Given this
fine-tuned ability to recognize conversations headed for derailment, Twitter
could proactively warn users. We believe that this awareness would allow users
to guide themselves towards a more productive resolution, allowing all parties
involved to have a better, more positive experience.
Figure 3: A mock-up of a conversation warning notification on a sample
conversation.
Github repo: https://github.com/stevendleung/Hashing-It-Out/tree/main
## 7 References
* 1. J. Ah-Pine and E.-P. Soriano-Morales. A Study of Synthetic Oversampling for Twitter Imbalanced Sentiment Analysis. In Workshop on Interactions between Data Mining and Natural Language Processing (DMNLP 2016), Proceedings of the Workshop on Interactions between Data Mining and Natural Language Processing, Riva del Garda, Italy, Sept. 2016.
* 2. J. P. Chang and C. Danescu-Niculescu-Mizil. Trouble on the horizon: Forecasting the derailment of online conversations as they develop, 2019.
* 3. P. Charitidis, S. Doropoulos, S. Vologiannidis, I. Papastergiou, and S. Karakeva. Hate speech and personal attack dataset in English social media, Oct. 2019.
* 4. N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer. Smote: Synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 16:321–357, Jun 2002.
* 5. J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding, 2019.
* 6. J. Howard and S. Ruder. Universal language model fine-tuning for text classification, 2018.
* 7. P. Liu, J. Guberman, L. Hemphill, and A. Culotta. Forecasting the presence and intensity of hostility on instagram using linguistic and social features, 2018.
* 8. D. Q. Nguyen, T. Vu, and A. T. Nguyen. BERTweet: A pre-trained language model for English tweets, 2020.
* 9. J. Pavlopoulos, P. Malakasiotis, and I. Androutsopoulos. Deeper attention to abusive user content moderation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1125–1135, Copenhagen, Denmark, Sept. 2017. Association for Computational Linguistics.
* 10. I. Price, J. Gifford-Moore, J. Fleming, S. Musker, M. Roichman, G. Sylvain, N. Thain, L. Dixon, and J. Sorensen. Six attributes of unhealthy conversation, 2020.
* 11. C. Sun, X. Qiu, Y. Xu, and X. Huang. How to fine-tune bert for text classification?, 2020.
* 12. G. Tang, M. Müller, A. Rios, and R. Sennrich. Why self-attention? a targeted evaluation of neural machine translation architectures, 2018.
* 13. Twitter. Our approach to policy development and enforcement philosophy.
* 14. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. Attention is all you need. CoRR, abs/1706.03762, 2017.
* 15. T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, J. Davison, S. Shleifer, P. von Platen, C. Ma, Y. Jernite, J. Plu, C. Xu, T. L. Scao, S. Gugger, M. Drame, Q. Lhoest, and A. M. Rush. Huggingface’s transformers: State-of-the-art natural language processing, 2020.
* 16. E. Wulczyn, N. Thain, and L. Dixon. Ex machina: Personal attacks seen at scale, 2017.
* 17. J. Zhang, J. Chang, C. Danescu-Niculescu-Mizil, L. Dixon, Y. Hua, D. Taraborelli, and N. Thain. Conversations gone awry: Detecting early signs of conversational failure. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1350–1361, Melbourne, Australia, July 2018. Association for Computational Linguistics.
Plugin estimators for selective classification
with out-of-distribution detection
Harikrishna Narasimhan
Google Research, Mountain View
Aditya Krishna Menon
Google Research, New York
Wittawat Jitkrittum
Google Research, New York
Sanjiv Kumar
Google Research, New York
§ INTRODUCTION
§ BACKGROUND AND NOTATION
§ BAYES-OPTIMAL SELECTIVE CLASSIFICATION WITH OOD DETECTION
§ PLUG-IN ESTIMATORS TO THE BAYES-OPTIMAL SCOD RULE
§ EXPERIMENTAL RESULTS
§ DISCUSSION AND FUTURE WORK
§ PROOFS
We first define a joint marginal distribution $\Pcomb$ that samples from $\Pr_{\rm in}(x)$ and $\Pr_{\rm out}(x)$ with equal probabilities.
We then rewrite the objective in (<ref>) in terms of the joint marginal distribution:
\begin{align*}
{L_{\rm scod}(h, r)} &= \Ex_{x \sim \Pcomb}\left[ T_1( h( x ), r( x ) ) + T_2( h( x ), r( x ) ) \right] \\
T_1( h( x ), r( x ) ) &= ( 1 - \costin - \costout ) \cdot \Ex_{y|x \sim \Pr_{\rm in}}\left[\frac{\Pr_{\rm in}(x)}{\Pcomb(x)} \cdot \1( y \neq h( x ), h( x ) \neq \abstain) \right] \\
&= ( 1 - \costin - \costout ) \cdot \sum_{y \in [L]}\Pr_{\rm in}(y|x) \cdot \frac{\Pr_{\rm in}(x)}{\Pcomb(x)} \cdot \1( y \neq h( x ), h( x ) \neq \abstain) \\
T_2( h( x ), r( x ) ) &= \costin \cdot \frac{\Pr_{\rm in}(x)}{\Pcomb(x)} \cdot \1( h( x ) = \abstain ) + \costout \cdot \frac{\Pr_{\rm out}(x)}{\Pcomb(x)} \cdot \1( h( x ) \neq \abstain ).
\end{align*}
The conditional risk that a classifier $h$ incurs when abstaining (i.e., predicting $r( x ) = 1$) on a fixed instance $x$ is given by:
\[
\costin \cdot \frac{\Pr_{\rm in}(x)}{\Pcomb(x)}.
\]
The conditional risk associated with predicting a base class $y \in [L]$ on instance $x$ is given by:
\[
( 1 - \costin - \costout ) \cdot \frac{\Pr_{\rm in}(x)}{\Pcomb(x)} \cdot \left( 1 - \Pr_{\rm in}(y|x) \right)
+ \costout \cdot \frac{\Pr_{\rm out}(x)}{\Pcomb(x)}.
\]
The Bayes-optimal classifier then predicts the label with the lowest conditional risk.
When $\Pr_{\rm in}( x ) = 0$, this amounts to predicting abstain ($r( x ) = 1$).
When $\Pr_{\rm in}( x ) > 0$, the optimal classifier predicts $r( x ) = 1$ when:
\begin{align*}
&\costin \cdot \frac{\Pr_{\rm in}(x)}{\Pcomb(x)}
< ( 1 - \costin - \costout ) \cdot \frac{\Pr_{\rm in}(x)}{\Pcomb(x)} \cdot \min_{y \in [L]}\left( 1 - \Pr_{\rm in}(y|x) \right)
+ \costout \cdot \frac{\Pr_{\rm out}(x)}{\Pcomb(x)}
\\
\iff~&\costin \cdot {\Pr_{\rm in}(x)}
< ( 1 - \costin - \costout ) \cdot {\Pr_{\rm in}(x)} \cdot \min_{y \in [L]}\left( 1 - \Pr_{\rm in}(y|x) \right)
+ \costout \cdot {\Pr_{\rm out}(x)}
\\
\iff~&\costin \cdot {\Pr_{\rm in}(x)}
< ( 1 - \costin - \costout ) \cdot {\Pr_{\rm in}(x)} \cdot \left( 1 - \max_{y \in [L]}\Pr_{\rm in}(y|x) \right)
+ \costout \cdot {\Pr_{\rm out}(x)}
\\
\iff~&\costin
< ( 1 - \costin - \costout ) \cdot \left( 1 - \max_{y \in [L]}\Pr_{\rm in}(y|x) \right)
+ \costout \cdot \frac{\Pr_{\rm out}(x)}{\Pr_{\rm in}(x)}.
\end{align*}
Otherwise, the classifier does not abstain ($r( x ) = 0$),
and predicts $\argmax_{y \in [L]}\, \Pr_{\rm in}(y|x)$, as desired.
Recall that in open-set classification,
the outlier distribution is $\Pr_{\rm out}( x ) = \Pr_{\rm te}(x \mid y=L)$,
the training distribution is
\begin{align*}
\PTr( x \mid y ) &= \mathbb{P}_{\rm te}( x \mid y ) \\
\piTr( y ) &= \Pr_{\rm in}( y ) \\
&= \frac{1( y \neq L )}{1 - \pi_{\rm te}( L )} \cdot \pi_{\rm te}( y ).
\end{align*}
We will find it useful to derive the following quantities.
\begin{align*}
\PTr( x, y ) &= \piTr( y ) \cdot \PTr( x \mid y ) \\
&= \frac{1( y \neq L ) }{1 - \pi_{\rm te}( L )} \cdot \pi_{\rm te}( y ) \cdot \mathbb{P}_{\rm te}( x \mid y ) \\
&= \frac{1( y \neq L ) }{1 - \pi_{\rm te}( L )} \cdot \mathbb{P}_{\rm te}( x, y ) \\
\PTr( x ) &= \sum_{y \in [L]} \PTr( x, y ) \\
&= \sum_{y \in [L]} \piTr( y ) \cdot \PTr( x \mid y ) \\
&= \frac{1}{1 - \pi_{\rm te}( L )} \sum_{y \neq L} \pi_{\rm te}( y ) \cdot \mathbb{P}_{\rm te}( x \mid y ) \\
&= \frac{1}{1 - \pi_{\rm te}( L )} \sum_{y \neq L} \mathbb{P}_{\rm te}( y \mid x ) \cdot \mathbb{P}_{\rm te}( x ) \\
&= \frac{\mathbb{P}_{\rm te}( y \neq L \mid x )}{1 - \pi_{\rm te}( L )} \cdot \mathbb{P}_{\rm te}( x ) \\
\PTr( y \mid x ) &= \frac{\PTr( x, y )}{\PTr( x )} \\
&= \frac{1( y \neq L ) }{1 - \pi_{\rm te}( L )} \cdot \frac{1 - \pi_{\rm te}( L )}{\mathbb{P}_{\rm te}( y \neq L \mid x )} \cdot \frac{\mathbb{P}_{\rm te}( x, y )}{\mathbb{P}_{\rm te}( x )} \\
&= \frac{1( y \neq L )}{\mathbb{P}_{\rm te}( y \neq L \mid x )} \cdot \mathbb{P}_{\rm te}( y \mid x ).
\end{align*}
The first part follows from standard results in cost-sensitive learning <cit.>:
\begin{align*}
r^*(x) = 1
&\iff \costin \cdot \Pr_{\rm in}( x ) - \costout \cdot \Pr_{\rm out}( x ) < 0 \\
&\iff \costin \cdot \Pr_{\rm in}( x ) < \costout \cdot \Pr_{\rm out}( x ) \\
&\iff \costin \cdot \Pr_{\rm te}( x \mid y \neq L ) < \costout \cdot \Pr_{\rm te}( x \mid y = L ) \\
&\iff \costin \cdot \Pr_{\rm te}( y \neq L \mid x ) \cdot \Pr_{\rm te}( y = L ) < \costout \cdot \Pr_{\rm te}( y = L \mid x ) \cdot \Pr_{\rm te}( y \neq L ) \\
&\iff \frac{\costin \cdot \Pr_{\rm te}( y = L )}{\costout \cdot \Pr_{\rm te}( y \neq L )} < \frac{\Pr_{\rm te}( y = L \mid x )}{\Pr_{\rm te}( y \neq L \mid x )} \\
&\iff \mathbb{P}_{\rm te}( y = L \mid x ) > F\left( \frac{\costin \cdot \Pr_{\rm te}( y = L )}{\costout \cdot \Pr_{\rm te}( y \neq L )} \right).
\end{align*}
We further have for threshold $t^*_{\rm osc} \defEq F\left( \frac{\costin \cdot \Pr_{\rm te}( y = L )}{\costout \cdot \Pr_{\rm te}( y \neq L )} \right)$,
\begin{align*}
\mathbb{P}_{\rm te}( y = L \mid x ) \geq t^*_{\rm osc} &\iff \mathbb{P}_{\rm te}( y \neq L \mid x ) \leq 1 - t^*_{\rm osc} \\
&\iff \frac{1}{\mathbb{P}_{\rm te}( y \neq L \mid x )} \geq \frac{1}{1 - t^*_{\rm osc}} \\
&\iff \frac{\max_{y' \neq L} \mathbb{P}_{\rm te}( y' \mid x)}{\mathbb{P}_{\rm te}( y \neq L \mid x )} \geq \frac{\max_{y' \neq L} \mathbb{P}_{\rm te}( y' \mid x)}{1 - t^*_{\rm osc}} \\
&\iff \max_{y' \neq L} \PTr( y' \mid x ) \geq \frac{\max_{y' \neq L} \mathbb{P}_{\rm te}( y' \mid x)}{1 - t^*_{\rm osc}}.
\end{align*}
That is, we want to reject when the maximum softmax probability is higher than some (sample-dependent) threshold.
Fix $\epsilon \in (0,1)$.
We consider two cases for threshold $t_{\rm msp}$:
Case (i): $t_{\rm msp} \leq \frac{1}{L-1}$. Consider a distribution where for all instances $x$, $\mathbb{P}_{\rm te}( y = L \mid x ) = 1 - \epsilon$ and $\mathbb{P}_{\rm te}( y' \mid x) = \frac{\epsilon}{L-1}, \forall y' \ne L$. Then the Bayes-optimal classifier accepts any instance $x$ for all thresholds $t \in \big(0, 1-\epsilon\big)$. In contrast, Chow's rule would compute $\max_{y \ne L}\PTr( y \mid x) = \frac{1}{L-1},$ and thus reject all instances $x$.
Case (ii): $t_{\rm msp} > \frac{1}{L-1}$. Consider a distribution where for all instances $x$, $\mathbb{P}_{\rm te}( y = L \mid x ) = \epsilon$ and $\mathbb{P}_{\rm te}( y' \mid x) = \frac{1-\epsilon}{L-1}, \forall y' \ne L$. Then the Bayes-optimal classifier would reject any instance $x$ for thresholds $t \in \big(\epsilon, 1\big)$, whereas Chow's rule would accept all instances.
Taking $\epsilon \rightarrow 0$ completes the proof.
Let $\Pr^*$ denote the joint distribution that draws a sample from $\Pr_{\rm in}$ and $\Pr_{\rm out}$ with equal probability. Denote $\gamma_{\rm in}(x) = \frac{ \Pr_{\rm in}(x) }{ \Pr_{\rm in}(x) + \Pr_{\rm out}(x) }$.
The joint risk in (<ref>) can be written as:
\begin{align*}
\lefteqn{L_{\rm scod}(h, r) }\\
&= (1 - \costin - \costout) \cdot \Pr_{\rm in}( y \neq {h}( x ), r( x ) = 0 ) +
\costin \cdot \Pr_{\rm in}( r( x ) = 1 ) + \costout \cdot \Pr_{\rm out}( r( x ) = 0 )\\
&= \Ex_{x \sim \Pr^*}\Big[ (1 - \costin - \costout) \cdot \gamma_{\rm in}(x)
\cdot
\sum_{y \ne h(x)} \Pr_{\rm in}( y \mid x) \cdot \1( r( x ) = 0 ) \\[-5pt]
& \hspace{3.5cm} +
\costin \cdot \gamma_{\rm in}(x) \cdot \1( r( x ) = 1 )
+ \costout \cdot (1 - \gamma_{\rm in}(x)) \cdot \1( r( x ) = 0 ) \Big].
\end{align*}
For class probability estimates $\hat{\Pr}_{\rm in}(y \mid x) \approx \Pr_{\rm in}(y \mid x)$, and scorers
$\hat{s}_{\rm sc}(x) = \max_{y \in [L]} \hat{\Pr}_{\rm in}(y \mid x)$ and
$\hat{s}_{\rm ood}(x) \approx \frac{ \Pr_{\rm in}(x) }{ \Pr_{\rm out}(x)}$, we construct a classifier $\hat{h}(x) \in \argmax_{y \in [L]} \hat{\Pr}_{\rm in}(y \mid x)$ and black-box rejector:
\begin{equation}
\label{eqn:plug-in-black-box-rewritten}
\hat{r}_{\rm BB}( x ) = 1 \iff ( 1 - \costin - \costout ) \cdot (1 - \hat{s}_{\rm sc}( x )) + {\costout} \cdot \left( \frac{ 1 }{ \hat{s}_{\rm ood}( x ) } \right) > c_{\rm in}.
\end{equation}
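For concreteness, a minimal sketch of this black-box rejector is given below;
`p_in` and `s_ood` stand for the outputs of the separately estimated
class-probability model and density-ratio scorer, and the helper names are
ours.

```python
import numpy as np

def plugin_bb_reject(p_in: np.ndarray, s_ood: float,
                     c_in: float, c_out: float) -> bool:
    """Black-box plug-in rejector: True means abstain (r(x) = 1)."""
    s_sc = p_in.max()  # maximum softmax probability under the inlier model
    return (1.0 - c_in - c_out) * (1.0 - s_sc) + c_out / s_ood > c_in

def plugin_bb_classify(p_in: np.ndarray) -> int:
    """Classifier used when the rejector accepts: argmax of the inlier probs."""
    return int(p_in.argmax())
```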
Let $(h^*, r^*)$ denote the optimal classifier and rejector as defined in (<ref>).
We then wish to bound the following regret:
\begin{align*}
L_{\rm scod}(\hat{h}, \hat{r}_{\rm BB}) - L_{\rm scod}(h^*, r^*) &=
\underbrace{L_{\rm scod}(\hat{h}, \hat{r}_{\rm BB}) - L_{\rm scod}(h^*, \hat{r}_{\rm BB})}_{ \text{term}_1 } + \underbrace{L_{\rm scod}(h^*, \hat{r}_{\rm BB}) - L_{\rm scod}(h^*, r^*)}_{ \text{term}_2 }.
\end{align*}
We first bound the first term:
\begin{align*}
\text{term}_1 &=
\Ex_{x \sim \Pr^*}\left[ (1 - \costin - \costout) \cdot \gamma_{\rm in}(x)
\cdot \1( \hat{r}_{\rm BB}( x ) = 0 ) \cdot
\Big( \sum_{y \ne \hat{h}(x)} \Pr_{\rm in}( y \mid x)
- \sum_{y \ne h^*(x)} \Pr_{\rm in}( y \mid x) \Big)
\right] \\
&= \Ex_{x \sim \Pr^*}\left[ \omega(x) \cdot
\Big( \sum_{y \ne \hat{h}(x)} \Pr_{\rm in}( y \mid x)
- \sum_{y \ne h^*(x)} \Pr_{\rm in}( y \mid x) \Big)
\right],
\end{align*}
where we denote $\omega(x) = (1 - \costin - \costout) \cdot \gamma_{\rm in}(x)
\cdot \1( \hat{r}_{\rm BB}( x ) = 0 )$.
Furthermore, we can write:
\begin{align*}
\lefteqn{\text{term}_1}\\
&= \Ex_{x \sim \Pr^*}\left[ \omega(x) \cdot
\Big( \sum_{y \ne \hat{h}(x)} \Pr_{\rm in}( y \mid x)
- \sum_{y \ne h^*(x)} \hat{\Pr}_{\rm in}( y \mid x)
+ \sum_{y \ne h^*(x)} \hat{\Pr}_{\rm in}( y \mid x)
- \sum_{y \ne h^*(x)} \Pr_{\rm in}( y \mid x) \Big)
\right] \\
&\leq \Ex_{x \sim \Pr^*}\left[ \omega(x) \cdot
\Big( \sum_{y \ne \hat{h}(x)} \Pr_{\rm in}( y \mid x)
- \sum_{y \ne \hat{h}(x)} \hat{\Pr}_{\rm in}( y \mid x)
+ \sum_{y \ne h^*(x)} \hat{\Pr}_{\rm in}( y \mid x)
- \sum_{y \ne h^*(x)} \Pr_{\rm in}( y \mid x) \Big)
\right]\\
&\leq 2 \cdot \Ex_{x \sim \Pr^*}\left[ \omega(x) \cdot
\sum_{y \in [L]}
\left| \Pr_{\rm in}( y \mid x)
- \hat{\Pr}_{\rm in}( y \mid x) \right|
\right]\\
&\leq 2 \cdot \Ex_{x \sim \Pr^*}\left[
\sum_{y \in [L]}
\left| \Pr_{\rm in}( y \mid x)
- \hat{\Pr}_{\rm in}( y \mid x) \right|
\right],
\end{align*}
where the third step uses the definition of $\hat{h}$ and the fact that $\omega(x) > 0$; the last step uses the fact that $\omega(x) \leq 1$.
We bound the second term now. For this, we first define:
\begin{align*}
L_{\rm rej}(r)
\,\defEq\,
\Ex_{x \sim \Pr^*}\bigg[
\left(
(1 - \costin - \costout) \cdot \gamma_{\rm in}(x) \cdot ( 1 - \max_{y \in [L]} \Pr_{\rm in}(y \mid x) ) + \costout \cdot (1 - \gamma_{\rm in}(x)) \right) \cdot \1(r(x) = 0)\\
& \hspace{10cm}
+ \costin \cdot \gamma_{\rm in}(x) \cdot \1(r(x) = 1) \bigg],
\end{align*}
and its plug-in counterpart
\begin{align*}
\hat{L}_{\rm rej}(r)
\,\defEq\,
\Ex_{x \sim \Pr^*}\bigg[
\left(
(1 - \costin - \costout) \cdot \hat{\gamma}_{\rm in}(x) \cdot ( 1 - \max_{y \in [L]} \hat{\Pr}_{\rm in}(y \mid x) ) + \costout \cdot (1 - \hat{\gamma}_{\rm in}(x)) \right) \cdot \1(r(x) = 0)\\
& \hspace{10cm}
+ \costin \cdot \hat{\gamma}_{\rm in}(x) \cdot \1(r(x) = 1) \bigg],
\end{align*}
where we denote $\hat{\gamma}_{\rm in}(x) = \frac{\hat{s}_{\rm ood}(x)}{1 + \hat{s}_{\rm ood}(x)}$.
Notice that $r^*$ minimizes $L_{\rm rej}(r)$ over all rejectors $r: \mathcal{X} \rightarrow \{0, 1\}$. Similarly, note that $\hat{r}_{\rm BB}$ minimizes $\hat{L}_{\rm rej}(r)$ over all rejectors $r: \mathcal{X} \rightarrow \{0, 1\}$.
Then the second term can be written as:
\begin{align*}
\text{term}_2
&= L_{\rm rej}(\hat{r}_{\rm BB}) - L_{\rm rej}(r^*)\\
&= L_{\rm rej}(\hat{r}_{\rm BB}) -
\hat{L}_{\rm rej}(r^*)
+ \hat{L}_{\rm rej}(r^*)
- L_{\rm rej}(r^*)
\\
&\leq L_{\rm rej}(\hat{r}_{\rm BB}) -
\hat{L}_{\rm rej}(\hat{r}_{\rm BB})
+ \hat{L}_{\rm rej}(r^*)
- L_{\rm rej}(r^*)\\
&\leq 2 \cdot(1 - \costin - \costout) \cdot \left|\max_{y \in [L]} \Pr_{\rm in}(y \mid x) - \max_{y \in [L]} \hat{\Pr}_{\rm in}(y \mid x) \right|\cdot|\gamma_{\rm in}(x) - \hat{\gamma}_{\rm in}(x)|\\
&\hspace{6cm}+
2 \cdot \big(
(1 - \costin - \costout) + \costout + \costin \big) \cdot |\gamma_{\rm in}(x) - \hat{\gamma}_{\rm in}(x)|
\\
&\leq 2 \cdot(1 - \costin - \costout) \cdot (1) \cdot|\gamma_{\rm in}(x) - \hat{\gamma}_{\rm in}(x)| +
2 \cdot (1) \cdot|\gamma_{\rm in}(x) - \hat{\gamma}_{\rm in}(x)|
\\
&\leq 4 \cdot |\gamma_{\rm in}(x) - \hat{\gamma}_{\rm in}(x)|\\
&= 4 \cdot \left|
\frac{ \Pr_{\rm in}(x) }{ \Pr_{\rm in}(x) + \Pr_{\rm out}(x) } - \frac{\hat{s}_{\rm ood}(x)}{1 + \hat{s}_{\rm ood}(x)}
\right|,
\end{align*}
where the third step follows from $\hat{r}_{\rm BB}$ being a minimizer of $\hat{L}_{\rm rej}(r)$, the fourth step uses the fact that $\left|\max_{y \in [L]} \Pr_{\rm in}(y \mid x) - \max_{y \in [L]} \hat{\Pr}_{\rm in}(y \mid x) \right| \leq 1$, and the fifth step uses the fact that $c_{\rm in} + c_{\rm out} \leq 1$.
Combining the bounds on $\text{term}_1$ and $\text{term}_2$ completes the proof.
We first note that $f^*(x) \propto \log(\mathbb{P}_{\rm in}(y \mid x))$ and $s^*(x) = \log\big( \frac{ \mathbb{P}^*(z = 1 \mid x) }{ \mathbb{P}^*(z = -1 \mid x) } \big)$.
Regret Bound 1: We start with the first regret bound. We expand the multi-class cross-entropy loss to get:
\begin{align*}
\mathbb{E}_{( x, y ) \sim \mathbb{P}_{\rm in}}\left[ \ell_{\rm mc}( y, f( x ) )
\right] &=
\mathbb{E}_{x \sim \mathbb{P}_{\rm in}}\left[ -\sum_{y \in [L]} \mathbb{P}_{\rm in}(y \mid x) \cdot \log\left( p_y( x ) \right)
\right] \\
\mathbb{E}_{( x, y ) \sim \mathbb{P}_{\rm in}}\left[ \ell_{\rm mc}( y, f^*( x ) )
\right] &=
\mathbb{E}_{x \sim \mathbb{P}_{\rm in}}\left[ -\sum_{y \in [L]} \mathbb{P}_{\rm in}(y \mid x) \cdot \log\left( \mathbb{P}_{\rm in}(y \mid x) \right)
\right].
\end{align*}
The right-hand side of the first bound can then be expanded as:
\begin{align}
\mathbb{E}_{( x, y ) \sim \mathbb{P}_{\rm in}}\left[ \ell_{\rm mc}( y, f( x ) )
\right] -
\mathbb{E}_{( x, y ) \sim \mathbb{P}_{\rm in}}\left[ \ell_{\rm mc}( y, f^*( x ) )
\right]
= \mathbb{E}_{x \sim \mathbb{P}_{\rm in}}\left[ \sum_{y \in [L]} \mathbb{P}_{\rm in}(y \mid x) \cdot \log\left( \frac{ \mathbb{P}_{\rm in}(y \mid x) }{ p_y( x ) } \right)
\right],
\label{eqn:kl-rewritten}
\end{align}
which is the KL-divergence between $\mathbb{P}_{\rm in}(y \mid x)$ and $p_y(x)$.
The KL-divergence between two probability mass functions $p$ and $q$ over $\mathcal{U}$ can be lower bounded by:
\begin{equation}
\text{KL}(p || q) \geq \frac{1}{2} \left( \sum_{u \in \mathcal{U}} |p(u) - q(u)| \right)^2.
\label{eqn:kld-bound}
\end{equation}
Applying (<ref>) to (<ref>), we have:
\begin{align*}
\sum_{y \in [L]} \mathbb{P}_{\rm in}(y \mid x) \cdot \log\left( \frac{ \mathbb{P}_{\rm in}(y \mid x) }{ p_y( x ) } \right)
\geq \frac{1}{2}\left(
\sum_{y \in [L]} \left|\mathbb{P}_{\rm in}(y \mid x) - p_y( x ) \right|
\right)^2,
\end{align*}
and therefore:
\begin{align*}
\mathbb{E}_{( x, y ) \sim \mathbb{P}_{\rm in}}\left[ \ell_{\rm mc}( y, f( x ) )
\right] -
\mathbb{E}_{( x, y ) \sim \mathbb{P}_{\rm in}}\left[ \ell_{\rm mc}( y, f^*( x ) )
\right]
&\geq \frac{1}{2}\cdot\mathbb{E}_{x \sim \mathbb{P}_{\rm in}}\left[ \left(
\sum_{y \in [L]} \left|\mathbb{P}_{\rm in}(y \mid x) - p_y( x ) \right|
\right)^2
\right] \\
&\geq \frac{1}{2}
\left(
\mathbb{E}_{x \sim \mathbb{P}_{\rm in}}\left[
\sum_{y \in [L]} \left|\mathbb{P}_{\rm in}(y \mid x) - p_y( x ) \right|
\right]
\right)^2,
\end{align*}
where the second step uses Jensen's inequality. Rearranging, we obtain:
\[
\mathbb{E}_{x \sim \mathbb{P}_{\rm in}}\left[
\sum_{y \in [L]} \big|
\Pr_{\rm in}(y \mid x) - p_y(x) \big| \right]
\leq
\sqrt{2}\sqrt{
\mathbb{E}_{( x, y ) \sim \mathbb{P}_{\rm in}}\left[ \ell_{\rm mc}( y, f( x ) )
\right]
\,-\,
\mathbb{E}_{( x, y ) \sim \mathbb{P}_{\rm in}}\left[ \ell_{\rm mc}( y, f^*( x ) )
\right]
}.
\]
Regret Bound 2: We expand the binary sigmoid cross-entropy loss to get:
\begin{align*}
\mathbb{E}_{( x, z ) \sim \mathbb{P}^*}\left[ \ell_{\rm bc}(z , s( x ) )\right]
&= \mathbb{E}_{x \sim \mathbb{P}^*}\left[ -\mathbb{P}^*(z = 1 \mid x) \cdot \log\left( p_\perp( x ) \right)
\,-\,
\mathbb{P}^*(z = -1 \mid x) \cdot \log\left( 1 - p_\perp( x ) \right)
\right]\\
\mathbb{E}_{( x, z ) \sim \mathbb{P}^*}\left[ \ell_{\rm bc}(z , s^*( x ) )\right]
&= \mathbb{E}_{x \sim \mathbb{P}^*}\left[ -\mathbb{P}^*(z = 1 \mid x) \cdot \log\left( \mathbb{P}^*(z = 1 \mid x) \right)
\,-\,
\mathbb{P}^*(z = -1 \mid x) \cdot \log\left( \mathbb{P}^*(z = -1 \mid x) \right)
\right],
\end{align*}
and furthermore
\begin{align*}
\lefteqn{
\mathbb{E}_{( x, z ) \sim \mathbb{P}^*}\left[ \ell_{\rm bc}( z, s( x ) )
\right]
\,-\,
\mathbb{E}_{( x, z ) \sim \mathbb{P}^*}\left[ \ell_{\rm bc}(z , s^*( x ) )\right]}\\
&= \mathbb{E}_{x \sim \mathbb{P}^*}\left[
\mathbb{P}^*(z = 1 \mid x) \cdot \log\left( \frac{ \mathbb{P}^*(z = 1 \mid x) }{ p_\perp(x) } \right) \,+\,
\mathbb{P}^*(z = -1 \mid x) \cdot \log\left( \frac{ \mathbb{P}^*(z = -1 \mid x) }{ 1 - p_\perp(x) } \right)
\right]
\\
&\geq
\mathbb{E}_{x \sim \mathbb{P}^*}\left[
\frac{1}{2}\left( |\mathbb{P}^*(z = 1 \mid x) - p_\perp(x)| + |\mathbb{P}^*(z = -1 \mid x) - (1 - p_\perp(x))|\right)^2
\right]
\\
&=
\mathbb{E}_{x \sim \mathbb{P}^*}\left[
\frac{1}{2}\left( |\mathbb{P}^*(z = 1 \mid x) - p_\perp(x)| + |(1 - \mathbb{P}^*(z = 1 \mid x)) - (1 - p_\perp(x))|\right)^2
\right]
\\
&= 2 \cdot \mathbb{E}_{x \sim \mathbb{P}^*}\left[
|\mathbb{P}^*(z = 1 \mid x) - p_\perp(x)|^2
\right]\\
&\geq 2\cdot \left(\mathbb{E}_{x \sim \mathbb{P}^*}\left[
|\mathbb{P}^*(z = 1 \mid x) - p_\perp(x)|
\right]\right)^2,
\end{align*}
where the second step uses the bound in (<ref>) and the last step uses Jensen's inequality. Taking square-root on both sides and noting that $\mathbb{P}^*(z = 1 \mid x) = \frac{\Pr_{\rm in}( x )}{\Pr_{\rm in}( x ) + \Pr_{\rm out}( x )}$ completes the proof.
§ TECHNICAL DETAILS: COUPLED LOSS
Our second loss function
seeks to learn an augmented scorer $\bar{f} \colon \XCal \to \Real^{L+1}$, with the additional score corresponding to a “reject class”, denoted by $\perp$, and is based on the following simple observation: define
$$ z_{y'}( x ) = \begin{cases}
(1 - \costin - \costout) \cdot \Pr_{\rm in}( y' \mid x ) & \text{ if } y' \in [ L ] \\
(1 - 2 \cdot \costin - \costout) + \costout \cdot \frac{\Pr_{\rm out}( x )}{\Pr_{\rm in}( x )} & \text{ if } y' = \perp,
\end{cases} $$
and let
$\zeta_{y'}( x ) = \frac{ z_{y'}( x ) }{ Z( x ) }$, where
$Z( x ) \defEq {\sum_{y'' \in [ L ] \cup \{ \perp \}} z_{y''}( x ) }$.
Now suppose that one has an estimate
$\hat{\zeta}$ of $\zeta$.
This yields an alternate plug-in estimator of the Bayes-optimal SCOD rule (<ref>):
\begin{equation}
\label{eqn:reject-coupled}
\hat{r}( x ) = 1 \iff \max_{y' \in [L]} \hat{\zeta}_{y'}( x ) < \hat{\zeta}_{\perp}( x ).
\end{equation}
One may readily estimate $\zeta$
with a standard multi-class loss $\ell_{\rm mc}$,
with suitable modification:
\begin{equation}
\label{eqn:css-surrogate-repeat}
\E{(x,y) \sim \Pr_{\rm in}}{ \ell_{\rm mc}( y, \bar{f}( x ) ) } + (1 - \costin) \cdot \E{x \sim \Pr_{\rm in}}{ \ell_{\rm mc}( \perp, \bar{f}( x ) ) } + \costout \cdot \E{x \sim \Pr_{\rm out}}{ \ell_{\rm mc}( \perp, \bar{f}( x ) ) }.
\end{equation}
Compared to the decoupled loss (<ref>), the key difference is that the penalties on the rejection logit $\bar{f}_\perp( x )$ involve the classification logits as well.
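A hedged per-batch sketch of this coupled surrogate, with cross-entropy as
$\ell_{\rm mc}$ and index $L$ standing in for the reject class $\perp$, is
given below; PyTorch is assumed, and the names are ours.

```python
import torch
import torch.nn.functional as F

def coupled_loss(logits_in, y_in, logits_out, c_in, c_out, L):
    # Index L plays the role of the reject class in the (L+1)-way head.
    reject_in = torch.full((logits_in.shape[0],), L, dtype=torch.long,
                           device=logits_in.device)
    reject_out = torch.full((logits_out.shape[0],), L, dtype=torch.long,
                            device=logits_out.device)
    loss = F.cross_entropy(logits_in, y_in)                  # inlier classification
    loss = loss + (1.0 - c_in) * F.cross_entropy(logits_in, reject_in)   # inlier reject
    loss = loss + c_out * F.cross_entropy(logits_out, reject_out)        # OOD reject
    return loss
```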
§ TECHNICAL DETAILS: ESTIMATING THE OOD MIXING WEIGHT $\PI_{\RM MIX}$
To obtain the latter,
we apply a simple transformation as follows. Suppose
$\Pr_{\rm mix} = \pi_{\rm mix} \cdot \Pr_{\rm in} + (1-\pi_{\rm mix}) \cdot \Pr_{\rm out}$ with $\pi_{\rm mix} < 1$. Then,
if $\Pr_{\rm in}( x ) > 0$,
\[
\frac{ \Pr_{\rm out}(x) }{ \Pr_{\rm in}(x) } = \frac{1}{1-\pi_{\rm mix}} \cdot \left( \frac{ \Pr_{\rm mix}(x) }{ \Pr_{\rm in}(x) } - \pi_{\rm mix} \right).
\]
The above transformation requires knowing the mixing proportion $\pi_{\rm mix}$ of inlier samples in the unlabeled dataset.
However, as it measures the fraction of OOD samples during deployment,
$\pi_{\rm mix}$ is typically unknown.
We may however estimate this with (A2).
Observe that for a strictly inlier example $x \in S^*_{\rm in}$,
we have $\frac{ \Pr_{\rm mix}(x) }{ \Pr_{\rm in}(x)} = \pi_{\rm mix}$, i.e., $\exp( -\hat{s}(x) ) \approx \pi_{\rm mix}$.
Therefore, we can estimate
\begin{align*}
\hat{s}_{\rm ood}(x) = \left(\frac{1}{1-\hat{\pi}_{\rm mix}} \cdot \left( \exp( -\hat{s}(x) ) - \hat{\pi}_{\rm mix} \right)\right)^{-1}
\quad
\text{where}
\quad
\hat{\pi}_{\rm mix} = \frac{1}{|S^*_{\rm in}|}\sum_{x \in S^*_{\rm in}} \exp(-\hat{s}(x)).
\end{align*}
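A minimal sketch of these two estimators is given below; `s_hat` denotes the
learned scorer with $\exp(-\hat{s}(x)) \approx \Pr_{\rm mix}(x)/\Pr_{\rm in}(x)$,
and `strict_inliers` the set $S^*_{\rm in}$.

```python
import numpy as np

def estimate_pi_mix(s_hat, strict_inliers):
    """Average exp(-s_hat(x)) over the strict-inlier set S*_in."""
    return float(np.mean([np.exp(-s_hat(x)) for x in strict_inliers]))

def s_ood_hat(x, s_hat, pi_mix_hat):
    """Plug-in estimate of P_in(x) / P_out(x) via the mixture transformation."""
    ratio_out_in = (np.exp(-s_hat(x)) - pi_mix_hat) / (1.0 - pi_mix_hat)
    return 1.0 / ratio_out_in
```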
We remark here that this problem is roughly akin to class prior estimation for PU learning <cit.>,
and noise rate estimation for label noise <cit.>.
As in those literatures,
estimating $\pi_{\rm mix}$ without any assumptions is challenging.
Our assumption on the existence of a Strict Inlier set $S^*_{\rm in}$ is analogous to assuming the existence of a golden label set in the label noise literature <cit.>.
Expanding the right-hand side, we have:
\begin{align*}
\frac{1}{1-\pi_{\rm mix}} \cdot \left( \frac{ \Pr_{\rm mix}(x) }{ \Pr_{\rm in}(x) } - \pi_{\rm mix} \right)
&= \frac{1}{1-\pi_{\rm mix}} \cdot \left( \frac{ \pi_{\rm mix} \cdot \Pr_{\rm in}(x) + (1-\pi_{\rm mix}) \cdot \Pr_{\rm out}(x) }{ \Pr_{\rm in}(x) } - \pi_{\rm mix} \right)\\
&= \frac{ \Pr_{\rm out}(x) }{ \Pr_{\rm in}(x) },
\end{align*}
as desired.
§ TECHNICAL DETAILS: PLUG-IN ESTIMATORS WITH AN ABSTENTION BUDGET
Observe that (<ref>) is equivalent to solving the Lagrangian:
\begin{align}
\label{eqn:budget-constrainted-ood}
\min_{h, r} \max_{\lambda} \left[ F( h, r; \lambda ) \right]& \\
\nonumber
F( h, r; \lambda ) \defEq ( 1 - \costFN ) \cdot \Pr_{\rm in}( y \neq {h}( x ), r( x ) = 0 ) & +
\costin(\lambda) \cdot \Pr_{\rm out}( {r}( x ) = 0 ) +
\costout(\lambda) \cdot \Pr_{\rm in}( r( x ) = 1 ) +
\nu_\lambda
\\
\nonumber
\left( \costin(\lambda), \costout(\lambda), \nu_\lambda \right) \defEq ( \costFN - \lambda \cdot (1 - \pi^*_{\rm in}),
\lambda \cdot \pi^*_{\rm in}, \lambda \cdot (1 - \pi^*_{\rm in}) - \lambda \cdot b_{\rm rej} ).
\end{align}
Solving (<ref>) requires optimising over both $(h, r)$ and $\lambda$.
Suppose momentarily that $\lambda$ is fixed.
Then, $F( h, r; \lambda )$ is exactly a scaled version of
the soft-penalty objective (<ref>).
Hence, we can use Algorithm <ref> to construct a plug-in classifier that minimizes the above joint risk.
To find the optimal $\lambda$,
we only need to implement the surrogate minimisation step in Algorithm <ref> once to estimate the relevant probabilities.
We can then construct multiple
plug-in classifiers for different values of $\lambda$,
and perform an inexpensive threshold search:
amongst the classifiers satisfying the budget constraint,
we pick
the one that minimises (<ref>).
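The following sketch illustrates this search; `make_rejector`, `abstain_rate`
and `joint_risk` are assumed helpers that build a plug-in rejector from the
already-estimated probabilities for a given $\lambda$ and evaluate it on a
validation sample.

```python
def search_lambda(lambdas, make_rejector, abstain_rate, joint_risk, b_rej):
    """Pick, among budget-feasible lambdas, the rejector with smallest risk."""
    feasible = []
    for lam in lambdas:
        rejector = make_rejector(lam)        # thresholding only; no retraining
        if abstain_rate(rejector) <= b_rej:  # abstention budget satisfied?
            feasible.append((joint_risk(rejector), lam, rejector))
    if not feasible:
        raise ValueError("no lambda satisfies the abstention budget")
    return min(feasible, key=lambda t: t[0])[2]
```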
The above requires estimating $\pi^*_{\rm in}$, the fraction of inliers observed during deployment.
Following (A2), one plausible estimate is $\pi_{\rm mix}$, the fraction of inliers in the “wild” mixture set $S_{\rm mix}$.
Remark. The previous work of <cit.> for OOD detection also seeks to solve an optimization problem with explicit constraints on abstention rates.
However, there are some subtle, but important, technical differences between their formulation and ours.
Like us, <cit.> also seek to jointly learn a classifier and an OOD scorer, with constraints on the classification and abstention rates, given access to samples from $\Pr_{\rm in}$ and $\Pr_{\rm mix}$.
For a joint classifier $h: \XCal \rightarrow [L]$ and rejector $r: \XCal \rightarrow \{0, 1\}$, their formulation can be written as:
\begin{align}
\lefteqn{
\min_{h}~
\Pr_{\rm out}\left( r(x) = 0 \right) }
\label{eq:ks-original}
\\
\text{s.t.}\hspace{20pt}
& \Pin\left( r(x) = 1 \right) \leq \kappa
\nonumber
\\
& \Pin\left( {h}(x) \ne y,\, r(x) = 0 \right) \leq \tau,
\nonumber
\end{align}
for given targets $\kappa, \tau \in (0,1)$.
While $\Pr_{\rm out}$ is not directly available,
they provide a simple solution to (<ref>) using only access to $\Pr_{\rm mix}$ and $\Pr_{\rm in}$. They show that under some mild assumptions, replacing $\Pr_{\rm out}$ with $\Pr_{\rm mix}$ in the above problem does not alter the optimal solution. The intuition behind this is that when the first constraint on the inlier abstention rate is satisfied with equality, we have $\Pr_{\rm mix}\left( r(x) = 0 \right) = \pi_{\rm mix} \cdot (1 - \kappa) + (1 - \pi_{\rm mix}) \cdot \Pr_{\rm out}\left( r(x) = 0 \right)$, and minimizing this objective is equivalent to minimizing the OOD objective in (<ref>).
This simple trick of replacing $\Pr_{\rm out}$ with $\Pr_{\rm mix}$ will only work when we have an explicit constraint on the inlier abstention rate, and will not work for the formulation we are interested in (<ref>). This is because, in our formulation, we impose a budget on the overall abstention rate (as this is a more intuitive quantity for a practitioner to constrain), and do not explicitly control the abstention rate on $\Pr_{\rm in}$.
In comparison to <cit.>, the plug-in based approach we prescribe is more general, and can be applied to optimize any objective that involves a weighted combination of the mis-classification error and the abstention rates on the inlier and OOD samples. This includes both the budget-constrained problem we consider in (<ref>), and the constrained problem in (<ref>).
§ TECHNICAL DETAILS: RELATION OF PROPOSED LOSSES TO EXISTING LOSSES
Equation <ref> generalises
several existing proposals in the SC and OOD detection literature.
In particular, it reduces to the
loss proposed in <cit.>
when $\Pr_{\rm in} = \Pr_{\rm out}$,
i.e., when one only wishes to abstain on low-confidence ID samples.
In this case, it also corresponds to the decoupled loss
for OOD detection
in <cit.>;
crucially, however,
these works reject only based on whether $\bar{f}_{\perp}( x ) < 0$,
rather than comparing $\bar{f}_{\perp}( x )$ and $\max_{y' \in [L]} \bar{f}_{y'}( x )$.
The latter is essential to match the Bayes-optimal predictor in (<ref>).
Similarly, the coupled loss in (<ref>) reduces to the
cost-sensitive softmax cross-entropy
in <cit.>
when $\costout = 0$,
and to the OOD detection
loss of <cit.> when $\costin = 0, \costout = 1$.
§ ADDITIONAL EXPERIMENTS
We provide details about the hyper-parameters and dataset splits used in the experiments, as well as additional experimental results and plots that were not included in the main text. The in-training experimental results are averaged over 5 random trials.
§.§ Hyper-parameter choices
We provide details of the learning rate (LR) schedule and other hyper-parameters used in our experiments.
Dataset       Model            LR   Schedule  Epochs  Batch size
CIFAR-40/100  CIFAR ResNet 56  1.0  anneal    256     1024
We use SGD with momentum as the optimization
algorithm for all models. For the annealing schedule, the specified learning
rate (LR) is the initial rate, which is then decayed by a factor of
ten after each epoch in a specified list. For CIFAR, these epochs
are 15, 96, 192 and 224.
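A sketch of this schedule in PyTorch is given below; the momentum value is an
assumption, as the text only specifies SGD with momentum.

```python
import torch

def make_optimizer(params, lr=1.0, milestones=(15, 96, 192, 224), gamma=0.1):
    # Momentum 0.9 is an assumed value, not stated in the text.
    opt = torch.optim.SGD(params, lr=lr, momentum=0.9)
    sched = torch.optim.lr_scheduler.MultiStepLR(
        opt, milestones=list(milestones), gamma=gamma)  # decay LR by 10x
    return opt, sched
```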
§.§ Baseline details
We provide further details about the baselines we compare with. The following baselines are trained on only the inlier data.
* MSP or Chow's rule: Train a scorer $f: \cX \rightarrow \R^L$ using CE loss, and threshold the MSP
to decide to abstain <cit.>.
* MaxLogit: Same as above, but instead threshold the maximum logit $\max_{y \in [L]} f_y(x)$ <cit.>.
* Energy score: Same as above, but instead threshold the energy function $-\log\sum_y \exp(f_y(x))$.
* ODIN: Train a scorer $f: \cX \rightarrow \R^L$ using CE loss, and use a combination of input noise and temperature-scaled MSP to decide when to abstain <cit.>.
* SIRC: Train a scorer $f: \cX \rightarrow \R^L$ using CE loss, and compute a post-hoc deferral rule that combines the MSP score with either the $L_1$-norm or the residual score of the embedding layer from the scorer $f$ <cit.>.
* CSS: Minimize the cost-sensitive softmax L2R loss of <cit.> using only the inlier dataset to learn a scorer $f \colon \XCal \to \Real^{L + 1}$, augmented with a rejection score $f_\perp( x )$, and abstain iff $f_{\perp}( x ) > \max_{y' \in [ L ]} f_{y'}( x ) + t$, for threshold $t$.
The following baselines additionally use the unlabeled data containing a mix of inlier and OOD samples.
* Coupled CE (CCE): Train a scorer $f \colon \XCal \to \Real^{L + 1}$, augmented with a rejection score $f_\perp( x )$ by optimizing the CCE loss of <cit.>, and abstain iff $f_{\perp}( x ) > \max_{y' \in [ L ]} f_{y'}( x ) + t$, for threshold $t$.
* De-coupled CE (DCE): Same as above but uses the DCE loss of <cit.> for training.
* Outlier Exposure (OE): Train a scorer using the OE loss of <cit.> and threshold the MSP.
§.§ Data split details
For the CIFAR-100 experiments
where we use a wild sample containing a mix of ID and OOD examples, we split the original CIFAR-100 training set into two halves, use one half as the inlier sample, and use the other half to construct the wild sample. For evaluation, we combine the original CIFAR-100 test set with the respective OOD test set. In each case, the larger of the ID and OOD datasets is down-sampled to match the desired ID-OOD ratio. The experimental results are averaged over 5 random trials.
For the pre-trained ImageNet experiments, we sample an equal number of examples from the ImageNet validation sample and the OOD dataset, and annotate them with the pre-trained model. The number of samples is set to the smaller of the size of the OOD dataset and 5000.
§.§ Comparison to CSS and ODIN baselines
We present some representative results in Table <ref> comparing our proposed methods against the cost-sensitive softmax (CSS) loss of <cit.>, a representative learning-to-reject baseline, and the ODIN method of <cit.>, an OOD detection baseline. As expected, the CSS baseline, which does not have OOD detection capabilities, under-performs. The ODIN baseline, on the other hand, is occasionally competitive.
AUC-RC ($\downarrow$) for CIFAR-100 as ID, and a “wild” set comprising 90% ID and only 10% OOD.
The OOD part of the wild set is drawn from the same OOD dataset from which the test set is drawn.
We compare the proposed methods with the cost-sensitive softmax (CSS) learning-to-reject loss of <cit.> and the ODIN method of <cit.>.
We set $c_{\rm fn} = 0.75$.
ID + OOD training with
$\Pr^{\rm tr}_{\rm out} = \Pr^{\rm te}_{\rm out}$
Method / $\Pr^{\rm te}_{\rm out}$ SVHN Places OpenImages
CSS 0.286 0.263 0.254
ODIN 0.218 0.217 0.217
Plug-in BB [$L_1$] 0.196 0.210 0.222
Plug-in BB [Res] 0.198 0.236 0.251
Plug-in LB* 0.221 0.199 0.225
§.§ Experimental plots
We present experimental plots in Figure <ref> of the joint risk in Section <ref> as a function of the fraction of samples abstained. We also plot the inlier accuracy, the OOD precision, and the OOD recall as a function of samples abstained. These metrics are described below:
\begin{align*}
\text{inlier-accuracy}(h, r) &= \frac{
\sum_{(x,y) \in S_{\rm in}}\1(y = h(x), r(x) = 0)
}{
\sum_{x \in S_{\rm all}}\1( r(x) = 0 )
}, \\
\text{ood-precision}(h, r) &= \frac{
\sum_{x \in S_{\rm out}}\1( r(x) = 1)
}{
\sum_{x \in S_{\rm all}}\1( r(x) = 1)
}, \\
\text{ood-recall}(h, r) &= \frac{
\sum_{x \in S_{\rm out}}\1( r(x) = 1)
}{
|S_{\rm out}|
},
\end{align*}
where $S_{\rm all} = \{x: (x, y) \in S_{\rm in}\} \cup S_{\rm out}$ is the combined set of ID and OOD instances.
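These metrics can be computed directly from the test samples, as in the
following sketch; `S_in`, `S_out`, `h` and `r` mirror the notation above.

```python
def scod_metrics(S_in, S_out, h, r):
    """Compute inlier-accuracy, ood-precision and ood-recall on test samples."""
    n_accepted = sum(r(x) == 0 for x, _ in S_in) + sum(r(x) == 0 for x in S_out)
    n_rejected = sum(r(x) == 1 for x, _ in S_in) + sum(r(x) == 1 for x in S_out)
    rejected_out = sum(r(x) == 1 for x in S_out)
    correct_in = sum(h(x) == y and r(x) == 0 for x, y in S_in)
    return {
        "inlier-accuracy": correct_in / max(n_accepted, 1),
        "ood-precision": rejected_out / max(n_rejected, 1),
        "ood-recall": rejected_out / max(len(S_out), 1),
    }
```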
One can see a few general trends.
The joint risk decreases with more abstentions; the inlier accuracy increases with abstentions.
The OOD precision is the highest initially when the abstentions are on the OOD samples, but decreases when the OOD samples are exhausted, and the abstentions are on the inlier samples; the opposite is true for OOD recall.
[Panels, one per OOD dataset, with $\Pr_{\rm in}$: CIFAR-100 and $\Pr_{\rm out}$ one of SVHN, Places365, LSUN, Texture, Open Images, CelebA.]
Plots of classification and OOD detection metrics as a function of the fraction of abstained samples (averaged over 5 trials). We use CIFAR-100 as the ID sample, and a mix of CIFAR-100 and each of SVHN, Places365, LSUN, LSUN-R, Texture, Open Images and CelebA as the wild sample, and evaluate on the respective OOD dataset. The wild sample contains 90% ID and only 10% OOD samples. The test set contains equal proportions of ID and OOD samples. For the joint risk, lower values are better; for all other metrics, higher values are better.
We set $c_{\rm fn} = 0.75$.
§.§ Varying OOD mixing proportion in test set
We repeat the experiments in Table <ref> on CIFAR-100 and 300K Random Images with varying proportions of OOD samples in the test set, and present the results in Table <ref>. One of the proposed plug-in methods continues to perform the best.
Area Under the Risk-Coverage Curve (AUC-RC) for methods trained with CIFAR-100 as the ID sample and a mix of CIFAR-100 and 300K Random Images as the wild sample, and with the proportion of OOD samples in test set varied.
The wild set contains 10% ID and 90% OOD.
Base model is ResNet-56.
We set $c_{\rm fn} = 0.75$.
A * against a method indicates that it uses both ID and OOD samples for training.
Lower values are better.
                                   Test OOD proportion = 0.25      | Test OOD proportion = 0.75
Method / $\Pr^{\rm te}_{\rm out}$  SVHN  Places LSUN  LSUN-R Texture | SVHN  Places LSUN  LSUN-R Texture
MSP                                0.171 0.186 0.176 0.222 0.192 | 0.501 0.518 0.506 0.564 0.532
MaxLogit                           0.156 0.175 0.163 0.204 0.183 | 0.464 0.505 0.478 0.545 0.512
Energy                             0.158 0.177 0.162 0.206 0.181 | 0.467 0.502 0.477 0.538 0.509
SIRC [$L_1$]                       0.158 0.181 0.159 0.218 0.180 | 0.480 0.513 0.485 0.560 0.509
SIRC [Res]                         0.141 0.181 0.152 0.219 0.194 | 0.456 0.516 0.476 0.561 0.535
CCE*                               0.175 0.191 0.153 0.131 0.154 | 0.460 0.487 0.425 0.374 0.429
DCE*                               0.182 0.200 0.155 0.136 0.162 | 0.467 0.498 0.414 0.372 0.428
OE*                                0.179 0.174 0.147 0.117 0.148 | 0.492 0.487 0.440 0.371 0.440
Plug-in BB [$L_1$]                 0.127 0.164 0.128 0.180 0.134 | 0.395 0.457 0.397 0.448 0.414
Plug-in BB [Res]                   0.111 0.175 0.129 0.182 0.248 | 0.377 0.484 0.407 0.449 0.645
Plug-in LB*                        0.160 0.169 0.133 0.099 0.132 | 0.468 0.489 0.418 0.351 0.430
§.§ Varying OOD cost parameter
We repeat the experiments in Table <ref> on CIFAR-100 and 300K Random Images with varying values of the cost parameter $c_{\rm fn}$, and present the results in Table <ref>. One of the proposed plug-in methods continues to perform the best, although the gap between the best and second-best methods increases with $c_{\rm fn}$.
Area Under the Risk-Coverage Curve (AUC-RC) for methods trained with CIFAR-100 as the ID sample and a mix of CIFAR-100 and 300K Random Images as the wild sample, and for different values of cost parameter $c_{\rm fn}$.
The wild set contains 10% ID and 90% OOD.
Base model is ResNet-56.
                                   $c_{\rm fn} = 0.5$              | $c_{\rm fn} = 0.9$
Method / $\Pr^{\rm te}_{\rm out}$  SVHN  Places LSUN  LSUN-R Texture | SVHN  Places LSUN  LSUN-R Texture
MSP                                0.261 0.271 0.265 0.299 0.278 | 0.350 0.374 0.360 0.448 0.394
MaxLogit                           0.253 0.271 0.259 0.293 0.277 | 0.304 0.350 0.318 0.410 0.360
Energy                             0.254 0.273 0.262 0.293 0.277 | 0.303 0.349 0.317 0.407 0.359
SIRC [$L_1$]                       0.252 0.270 0.257 0.298 0.267 | 0.319 0.368 0.327 0.440 0.358
SIRC [Res]                         0.245 0.270 0.251 0.297 0.282 | 0.286 0.371 0.311 0.440 0.397
CCE*                               0.296 0.307 0.283 0.269 0.286 | 0.282 0.318 0.233 0.179 0.240
DCE*                               0.303 0.317 0.285 0.270 0.292 | 0.289 0.331 0.225 0.177 0.238
OE*                                0.287 0.283 0.270 0.255 0.272 | 0.327 0.315 0.252 0.173 0.251
Plug-in BB [$L_1$]                 0.237 0.258 0.239 0.267 0.244 | 0.207 0.280 0.207 0.266 0.226
Plug-in BB [Res]                   0.228 0.266 0.241 0.269 0.321 | 0.185 0.322 0.218 0.266 0.599
Plug-in LB*                        0.256 0.265 0.243 0.222 0.245 | 0.299 0.326 0.234 0.165 0.246
§.§ Confidence intervals
In Table <ref>, we report 95% confidence intervals for the experiments on CIFAR-100 and 300K Random Images
from Table <ref>. In each case, the differences between the best performing plug-in method and the baselines are statistically significant.
Area Under the Risk-Coverage Curve (AUC-RC) for methods trained with CIFAR-100 as the ID sample and a mix of CIFAR-100 and 300K Random Images as the wild sample, with 95% confidence intervals included.
The wild set contains 10% ID and 90% OOD.
The test sets contain 50% ID and 50% OOD samples.
Base model is ResNet-56.
We set $c_{\rm fn} = 0.75$.
Method / $\Pr^{\rm te}_{\rm out}$ SVHN Places LSUN LSUN-R Texture
MSP 0.317 $\pm$ 0.023 0.336 $\pm$ 0.010 0.326 $\pm$ 0.005 0.393 $\pm$ 0.018 0.350 $\pm$ 0.004
MaxLogit 0.286 $\pm$ 0.012 0.321 $\pm$ 0.011 0.299 $\pm$ 0.009 0.365 $\pm$ 0.016 0.329 $\pm$ 0.013
Energy 0.286 $\pm$ 0.012 0.320 $\pm$ 0.013 0.296 $\pm$ 0.008 0.364 $\pm$ 0.015 0.326 $\pm$ 0.014
SIRC [$L_1$] 0.294 $\pm$ 0.021 0.331 $\pm$ 0.010 0.300 $\pm$ 0.007 0.387 $\pm$ 0.017 0.326 $\pm$ 0.006
SIRC [Res] 0.270 $\pm$ 0.019 0.332 $\pm$ 0.009 0.289 $\pm$ 0.007 0.384 $\pm$ 0.019 0.353 $\pm$ 0.003
CCE* 0.288 $\pm$ 0.017 0.315 $\pm$ 0.018 0.252 $\pm$ 0.004 0.213 $\pm$ 0.001 0.255 $\pm$ 0.004
DCE* 0.295 $\pm$ 0.015 0.326 $\pm$ 0.028 0.246 $\pm$ 0.004 0.212 $\pm$ 0.001 0.260 $\pm$ 0.005
OE* 0.313 $\pm$ 0.015 0.304 $\pm$ 0.006 0.261 $\pm$ 0.001 0.204 $\pm$ 0.002 0.260 $\pm$ 0.002
Plug-in BB [$L_1$] 0.223 $\pm$ 0.004 0.286 $\pm$ 0.013 0.227 $\pm$ 0.007 0.294 $\pm$ 0.021 0.240 $\pm$ 0.006
Plug-in BB [Res] 0.205 $\pm$ 0.002 0.309 $\pm$ 0.009 0.235 $\pm$ 0.005 0.296 $\pm$ 0.012 0.457 $\pm$ 0.008
Plug-in LB* 0.290 $\pm$ 0.017 0.306 $\pm$ 0.016 0.243 $\pm$ 0.003 0.186 $\pm$ 0.001 0.248 $\pm$ 0.006
§.§ AUC and FPR95 metrics for OOD scorers
Table <ref> reports the AUC-ROC and FPR@95TPR metrics for the OOD scorers used by different methods, treating OOD samples as positives and ID samples as negatives. Note that the CCE, DCE and OE methods, which are trained with both ID and OOD samples, perform the best on these metrics. However, this superior performance in OOD detection does not often translate to good performance on the SCOD problem (as measured by AUC-RC). This is because these methods abstain solely based on their estimates of the ID-OOD density ratio, and do not trade off between accuracy and OOD detection performance.
AUC-ROC ($\uparrow$) and FPR@95TPR ($\downarrow$) metrics for the OOD scorers used by different methods. We use CIFAR-100 as the ID sample and a mix of 50% CIFAR-100 and 50% 300K Random Images as the wild sample.
Base model is ResNet-56.
We set $c_{\rm fn} = 0.75$ in the plug-in methods. The CCE, DCE and OE methods, which are trained with both ID and OOD samples, perform the best on these metrics; however, this does not often translate to good performance on the SCOD problem (as measured by AUC-RC in Table <ref>).
                                   OOD AUC-ROC                     | OOD FPR95
Method / $\Pr^{\rm te}_{\rm out}$  SVHN  Places LSUN  LSUN-R Texture | SVHN  Places LSUN  LSUN-R Texture
MSP                                0.629 0.602 0.615 0.494 0.579 | 0.813 0.868 0.829 0.933 0.903
MaxLogit                           0.682 0.649 0.692 0.564 0.634 | 0.688 0.846 0.754 0.916 0.864
Energy                             0.685 0.654 0.698 0.568 0.645 | 0.680 0.843 0.742 0.915 0.850
SIRC [$L_1$]                       0.699 0.621 0.700 0.516 0.663 | 0.788 0.871 0.819 0.930 0.882
SIRC [Res]                         0.777 0.613 0.735 0.513 0.566 | 0.755 0.870 0.800 0.929 0.900
CCE*                               0.772 0.725 0.878 0.995 0.883 | 0.647 0.775 0.520 0.022 0.570
DCE*                               0.770 0.709 0.905 0.998 0.888 | 0.693 0.807 0.466 0.007 0.562
OE*                                0.699 0.725 0.861 0.998 0.873 | 0.797 0.792 0.689 0.004 0.706
Plug-in BB [$L_1$]                 0.897 0.718 0.896 0.684 0.876 | 0.473 0.716 0.496 0.717 0.580
Plug-in BB [Res]                   0.963 0.667 0.885 0.680 0.432 | 0.251 0.777 0.559 0.726 0.996
Plug-in LB*                        0.710 0.683 0.860 0.997 0.853 | 0.749 0.801 0.653 0.009 0.697
§.§ Results on CIFAR-40 ID sample
Following <cit.>, we present in Table <ref> results of experiments where we use CIFAR-40 (a subset of CIFAR-100 with 40 classes) as the ID-only training dataset, and evaluate on CIFAR-60 (the remaining 60 classes), SVHN, Places, LSUN-C and LSUN-R as OOD datasets.
Area Under the Risk-Coverage Curve (AUC-RC) for different methods with CIFAR-40 as the inlier dataset and a training set comprising only inlier samples, evaluated on the following OOD datasets: CIFAR60, SVHN, Places, LSUN-C and LSUN-R. The test sets contain 50% ID samples and 50% OOD samples. We set $c_{\rm fn} = 0.75$. The last two rows contain results for the proposed methods.
Test OOD dataset
Method CIFAR60 SVHN Places LSUN-C LSUN-R
MSP 0.262 0.238 0.252 0.282 0.243
MaxLogit 0.272 0.223 0.242 0.252 0.231
Energy 0.266 0.221 0.244 0.248 0.230
SIRC [$\|z\|_1$] 0.263 0.226 0.249 0.266 0.241
SIRC [Res] 0.258 0.209 0.250 0.244 0.241
SIRC [$\|z\|_1$, Bayes-opt] 0.290 0.195 0.243 0.191 0.228
SIRC [Res, Bayes-opt] 0.309 0.175 0.279 0.204 0.247
§.§ Additional results on pre-trained ImageNet models
Following <cit.>, we present additional results with pre-trained models with ImageNet-200 (a subset of ImageNet with 200 classes) as the inlier dataset in Table <ref>. The base model is a ResNet-50.
AUC-RC ($\downarrow$) for methods trained with ImageNet-200 as the inlier dataset and without OOD samples. The base model is a pre-trained ResNet-50 model.
Lower values are better.
ID-only training
Method / $\Pr^{\rm te}_{\rm out}$ Places LSUN CelebA Colorectal iNaturalist-O Texture ImageNet-O Food32
MSP 0.183 0.186 0.156 0.163 0.161 0.172 0.217 0.181
MaxLogit 0.173 0.184 0.146 0.149 0.166 0.162 0.209 0.218
Energy 0.176 0.185 0.145 0.146 0.172 0.166 0.211 0.225
SIRC [$L_1$] 0.185 0.195 0.155 0.165 0.166 0.172 0.214 0.184
SIRC [Res] 0.180 0.179 0.137 0.140 0.151 0.167 0.219 0.174
Plug-in BB [$L_1$] 0.262 0.261 0.199 0.225 0.228 0.270 0.298 0.240
Plug-in BB [Res] 0.184 0.172 0.135 0.138 0.145 0.194 0.285 0.164
ID-only training
Method / $\Pr^{\rm te}_{\rm out}$ Near-ImageNet-200 Caltech65 Places32 Noise
MSP 0.209 0.184 0.176 0.188
MaxLogit 0.220 0.171 0.170 0.192
Energy 0.217 0.175 0.169 0.190
SIRC [$L_1$] 0.205 0.182 0.174 0.191
SIRC [Res] 0.204 0.177 0.173 0.136
Plug-in BB [$L_1$] 0.264 0.242 0.256 0.344
Plug-in BB [Res] 0.247 0.202 0.171 0.136
§ ILLUSTRATING THE FAILURE OF MSP FOR OOD DETECTION
§.§ Illustration of MSP failure for open-set classification
Figure <ref> shows a graphical illustration of the example discussed in Example <ref>,
wherein the MSP baseline can fail for open-set classification.
Examples of two open-set classification settings (a) and (b) with $L=10$ classes, where the inlier class distributions $\PTr( y \mid x )= \frac{\mathbb{P}_{\rm te}( y \mid x )}{\mathbb{P}_{\rm te}( y \neq 10 \mid x )}$ over the first 9 classes are identical, but the unknown class density $\Pr^*(10|x)$ is significantly different. Consequently, the MSP baseline, which relies only on the inlier class probabilities, will output the same rejection decision for both settings, whereas the Bayes-optimal classifier, which rejects by thresholding $\Pr^*(10|x)$, may output different decisions for the two settings.
(a) Uniform outlier distribution $\Pr_{\rm out}$. (b) Open-set classification.
Example of two settings where the maximum softmax probability (MSP) baseline fails for OOD detection.
Setting (a) considers low-density OOD detection, with positive and negative samples drawn from a one-dimensional Gaussian distribution. Samples away from the origin have $\Pr( x ) \sim 0$, and are thus outliers under the Bayes-optimal OOD detector. In contrast, the MSP baseline will deem samples near the origin to be outliers, as these have minimal $\max_{y} \Pr( y \mid x )$. This illustrates the distinction between abstentions favoured by L2R (low label certainty) and OOD detection (low density).
Setting (b) considers open-set classification with $L = 4$ total classes, where the fourth class (denoted by ${\color{red} \blacktriangledown}$) is assumed to comprise outliers not seen during training. Each class-conditional is an isotropic Gaussian (left). Note that the maximum inlier class-probability $\PTr( y \mid x )$ scores OOD samples significantly higher than ID samples (right). Consequently, the MSP baseline, which declares samples with low $\max_{y} \PTr( y \mid x )$ as outliers, will perform poorly.
§.§ Illustration of maximum logit failure for open-set classification
For the same setting as Figure <ref>,
we show
in Figure <ref>
the maximum logit computed over the inlier distribution.
As with the maximum probability, the outlier samples tend to get a higher score than the inlier samples.
For the same reason, rejectors that threshold the margin between the highest and the second-highest probabilities, instead of the maximum class probability, can also fail.
The use of other SC methods such as the cost-sensitive softmax cross-entropy <cit.> may not be successful either, because the optimal solutions for these methods have the same form as MSP.
§ ILLUSTRATING THE IMPACT OF ABSTENTION COSTS
§.§ Impact of varying abstention costs $\costin, \costout$
Our joint objective, which allows for abstentions on both “hard” and “outlier” samples, is controlled by the parameters $\costin, \costout$.
These reflect the costs of not correctly abstaining on samples from either class of anomalous sample.
Figures <ref> and <ref> show the impact of varying one of these parameters while the other is fixed, for the synthetic open-set classification example of Figure <ref>.
The results are intuitive:
varying $\costin$ tends to favour abstaining on samples that are at the class boundaries,
while varying $\costout$ tends to favour abstaining on samples from the outlier class.
Figure <ref> confirms that when both $\costin, \costout$ are varied, we achieve abstentions on both samples at the class boundaries, and samples from the outlier class.
Impact of varying $\costin$ for a fixed $\costout = 0.0$.
The left plot shows the standard dataset, with $\costin = 1.0$.
For intermediate $\costin = 0.5$ (middle), we abstain (denoted by $\times$) only on the samples at the class boundaries.
For $\costin = 0.0$ (right), we abstain on all samples.
Impact of varying $\costout$ for a fixed $\costin = 1.0$.
The left plot shows the standard dataset, with $\costout = 0.0$.
For intermediate $\costout = 1.0$ (middle), we abstain (denoted by $\times$) only on the samples from the outlier class.
For larger $\costout = 10.0$ (right), we start abstaining on inlier samples as well.
Impact of varying both $\costin$ and $\costout$.
The left plot shows the standard dataset, with $\costin = 1.0, \costout = 0.0$.
Setting $\costin = 0.5, \costout = 1.0$ (middle) and $\costin = 0.5, \costout = 10.0$ (right) favours abstaining (denoted by $\times$) on both the samples at class boundaries and the outlier samples.
§.§ Impact of $\costout$ on OOD Detection Performance
For the same setting as Figure <ref>, we consider the OOD detection performance of the score $s( x ) = \max_{y \in [L]} \Pr_{\rm in}( y \mid x ) - \costout \cdot \frac{\Pr_{\rm in}( x )}{\Pr_{\rm out}( x )}$ as $\costout$ is varied.
Note that thresholding this score yields the Bayes-optimal classifier.
Rather than pick a fixed threshold, we use the score to compute the AUC-ROC for detecting whether or not a sample is from the outlier class.
As expected, as $\costout$ increases (i.e., there is a greater penalty for not rejecting an OOD sample), the AUC-ROC improves.
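As a concrete illustration, the short Python sketch below reproduces this trend on synthetic data. The two-Gaussian inlier mixture, the uniform outlier density, and the grid of $\costout$ values are all assumptions made purely for illustration; only the form of the score $s(x)$ comes from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D setting (assumed for illustration): two inlier classes with
# Gaussian class-conditionals, and a uniform outlier density on [-8, 8].
n = 2000
x_in = np.concatenate([rng.normal(-2, 1, n // 2), rng.normal(+2, 1, n // 2)])
x_out = rng.uniform(-8.0, 8.0, n)

def gauss(x, mu, sd=1.0):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def score(x, c_out):
    # s(x) = max_y P_in(y | x) - c_out * P_in(x) / P_out(x), as in the text.
    p1, p2 = gauss(x, -2.0), gauss(x, +2.0)
    p_in = 0.5 * (p1 + p2)                     # inlier marginal density
    p_out = np.full_like(x, 1.0 / 16.0)        # uniform density on [-8, 8]
    max_prob = np.maximum(p1, p2) / (p1 + p2)  # max_y P_in(y | x), equal priors
    return max_prob - c_out * p_in / p_out

def auc_roc(pos, neg):
    # AUC-ROC via the Mann-Whitney rank-sum statistic (NumPy only).
    s = np.concatenate([pos, neg])
    ranks = np.argsort(np.argsort(s)) + 1.0
    return (ranks[:len(pos)].sum()
            - len(pos) * (len(pos) + 1) / 2.0) / (len(pos) * len(neg))

# The density-ratio term pushes inlier scores down as c_out grows, so the
# outlier class (scored by s itself) separates better and the AUC improves.
for c_out in (0.0, 0.1, 1.0, 10.0):
    print(f"c_out = {c_out:5.1f}  AUC-ROC = "
          f"{auc_roc(score(x_out, c_out), score(x_in, c_out)):.3f}")
```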
§ LIMITATIONS AND BROADER IMPACT
Recall that our proposed plug-in rejectors seek to optimize for overall classification and OOD detection accuracy while keeping the total fraction of abstentions within a limit. However, the improved overall accuracy may come at the cost of poorer performance on smaller sub-groups. For example, <cit.> show that Chow's rule or the MSP scorer “can magnify existing accuracy disparities between various groups within a population, especially in the presence of spurious correlations”.
It would be of interest to carry out a similar study with the two plug-in based rejectors proposed in this paper, and to understand how both their inlier classification accuracy and their OOD detection performance varies across sub-groups. It would also be of interest to explore variants of our proposed rejectors that mitigate such disparities among sub-groups.
Another limitation of our proposed plug-in rejectors is that they are only as good as the estimators we use for the density ratio $\frac{\Pr_{\rm in}(x)}{\Pr_{\rm out}(x)}$. When our estimates of the density ratio are inaccurate, the plug-in rejectors are often seen to perform worse than the SIRC baseline that uses the same estimates. Exploring better ways of estimating the density ratio is an important direction for future work.
Beyond SCOD, the proposed rejection strategies are also applicable to the growing literature on adaptive inference <cit.>. With the wide adoption of large-scale machine learning models with billions of parameters, it is becoming increasingly important to speed up inference for these models. To this end, adaptive inference strategies have gained popularity, wherein one varies the amount of compute the model spends on an example, for instance by exiting early on “easy” examples.
The proposed approaches for SCOD may be adapted to equip early-exit models to not only exit early on high-confidence “easy” samples, but also exit early on samples that are deemed to be outliers. In the future, it would be interesting to explore the design of such early-exit models that are equipped with an OOD detector to aid in their routing decisions.
# Geometric-algebraic approach to aqueous solutions of diprotic acids and their buffer mixtures
Juan C. Morales [ Carlos A. Arango [<EMAIL_ADDRESS>
###### Abstract
A closed-form analytical expression for $\ce{[H3O+]}$ has been obtained for
aqueous solutions of diprotic acids and their soluble salts. This formula
makes it possible to calculate the pH of aqueous solutions of diprotic acids,
their buffer solutions, and the titrations of these two by a strong base, from
the values of p$K_{1}$, p$K_{2}$, and the effective concentrations of the acid
and the base, $\bar{C}_{\mathrm{a}}$ and $\bar{C}_{\mathrm{b}}$ respectively.
It is shown that a strong base titration of an acid, or of its buffer
solutions, is always a linear path in the
$\bar{C}_{\mathrm{a}}$–$\bar{C}_{\mathrm{b}}$ plane, which allows a simple
analysis of the pH stability of buffer solutions. The mathematical analysis of
the equilibrium equations for the dissolution of a diprotic acid in water,
together with the physical constraints, yields two approximate equations for
diprotic acids. One of the approximations is useful for acids with
$\mathrm{p}K_{2}-\mathrm{p}K_{1}\geq\log_{10}4$, the other for acids with
$\mathrm{p}K_{2}-\mathrm{p}K_{1}\leq-\log_{10}4$.
###### keywords:
diprotic weak acids, diprotic acid buffer solutions, acid-base titration,
buffer stability
Department of Pharmaceutical and Chemical Sciences, Universidad Icesi, Cali, Colombia
## 1 Introduction
Diprotic acids are of central importance in biochemistry, physiology, and
industrial and environmental chemistry. In biochemistry, several amino acids
behave as diprotic acids with two dissociable protons: one on the
$\alpha$ amino group and one on the $\alpha$ carboxyl group [1]. In
physiology, the regulation of blood pH cannot be understood without
considering the buffer made by carbonic acid, $\ce{H2CO3}$, and the
bicarbonate ion, $\ce{HCO3-}$ [2]. In environmental chemistry, the current
model for understanding ocean acidification is based on the aqueous chemical
equilibrium between $\ce{CO2}$, $\ce{H2CO3}$, $\ce{HCO3-}$, and $\ce{CO3^{2-}}$
[3].
A Brønsted diprotic acid is a chemical substance $\ce{H2B}$ that partially
dissolves in water producing hydronium ion, $\ce{H3O+}$, and the conjugate
base, $\ce{HB-}$. This conjugate base further dissociates partially producing
the second conjugate base, $\ce{B^{2-}}$. In the state of equilibrium, the
concentrations of the chemical species are constant [4, 5]. The equilibrium
concentrations of the chemical species are given by the equations of chemical
equilibrium, and the chemical and electric balance [6]. The aqueous
dissociation of a diprotic acid and its soluble salts involves five chemical
species and five mathematical relations between these species; therefore, in
principle, it is possible to obtain the equilibrium concentrations of all the
chemical species by solving this system of equations. In practice, the system
of equations involves nonlinear terms, making it difficult to obtain exact
mathematical expressions for the equilibrium concentrations. For the
dissociation of a diprotic acid and its soluble salts, the algebraic
manipulation of the system of equations gives a quartic equation for the
concentration of $\ce{H3O+}$, $\ce{[H3O+]}$. The equilibrium concentration of
the hydronium ion is obtained by finding the roots of its quartic equation.
Although there is a quartic formula that gives the explicit roots of a quartic
equation, it is not practical to use due to its complexity. The quartic
formula gives four roots, each of which implies the execution of at least 57
mathematical operations. Although this type of calculation is a simple task
for modern computers, the formulas obtained from the quartic equation are not
simplified, which causes accumulation of numerical error.
On the other hand, graphical and numerical solutions are easily obtained using
numerical calculators and computer software [7]. Although the
graphical-numerical approach is fast and efficient for calculating
concentrations as functions of the pH, it has some disadvantages compared with
an analytical closed-form solution. The analytical solution can be
differentiated to study buffers and buffer stability, or can easily be
function-composed to study titrations of acids and buffers by strong bases
[8]. Another advantage of an analytical closed form is the possibility of
analyzing mathematically the effect on the pH of parameters such as the
concentrations and the acid dissociation constants.
In this work it has been found that the constraint
p$K_{2}-$p$K_{1}\geq\log_{10}{4}$ on the p$K$s of the acid has an important
effect on the nature of the roots of the quartic polynomial for $\ce{[H3O+]}$.
This constraint has been previously obtained by considering isomerization of
the ionization products and a detailed equilibrium scheme in which the
micro-equilibrium constants correspond to partial equilibria [9, 10, 11].
Direct inspection of the experimental values of p$K_{1}$ and p$K_{2}$ of a
large set of diprotic acids shows that several compounds, in particular the
nitrogenous organics, follow the constraint p$K_{2}-$p$K_{1}\leq-\log_{10}{4}$.
The main result of this paper is a closed-form analytical solution for
$\ce{[H3O+]}$ for the full chemical equilibrium of the aqueous dissociation of
a diprotic acid and its monobasic and dibasic salts. The use of effective acid
and base concentrations allows a single mathematical expression to cover
aqueous dissolutions of diprotic acids as well as buffer dissolutions of
diprotic acids and their soluble salts. It is also shown how this unified
approach to diprotic acids and their buffers makes it possible to study the pH
stability of buffer solutions in relation to the equivalent acid dissolution.
This article is organized as follows. In the Theory and Methods section, the
first subsection is dedicated to establishing the notation and the fundamental
equations of chemical equilibrium and the physical constraints. In this
subsection a unified notation is introduced, allowing the same equations to be
used for aqueous solutions of diprotic acids, buffers of diprotic acids, and
titrations with strong bases. In the second subsection of Theory and Methods,
a mathematical expression for $\ce{[H3O+]}$ is obtained and analyzed, and the
complete expression for $\ce{[H3O+]}$ is shown. The final expressions for
$\ce{[H3O+]}$ are written in algebraic terms using basic arithmetic operations
and radicals (square and cube roots). These expressions can be used, with the
help of computer algebra software, to obtain the pH without approximations.
The reader interested in the mathematical details and the procedures used to
obtain these equations is referred to the Appendix. The third and final
subsection introduces exact expressions for the titration functions of the
aqueous solution and the buffer solution of diprotic acids. The Results and
Discussion section presents the results obtained using these expressions. The
first subsection shows that although most diprotic acids obey the condition
$\mathrm{p}K_{2}-\mathrm{p}K_{1}\geq\log_{10}4$, there are some acids that
follow the condition $\mathrm{p}K_{2}-\mathrm{p}K_{1}\leq-\log_{10}{4}$. In
the next subsection, the physical constraints of diprotic acids are used to
obtain two approximations for $\ce{[H3O+]}$; these approximations are used to
obtain analytical expressions for the upper and lower limits of the pH. In the
next subsection, we discuss the common approach of neglecting the second
dissociation constant and show that in the case of micro-molar concentrations
this approach fails. The following subsection shows the use of the exact
closed forms of the pH and the titration functions to analyze the
neutralization of aqueous solutions of diprotic acids and their buffer
mixtures. The differences between the exact expressions of this work and the
approximate results of recent works are shown in detail for two cases: maleic
acid and 1,8-Octanediamine. Finally, the last subsection of Results and
Discussion presents an analysis of the pH stability of diprotic acid buffer
solutions, in which the pH stability is analyzed as a parametric curve in the
plane formed by the pH of the acid and the pH of the corresponding buffer
solution.
## 2 Theory and Methods
### 2.1 Aqueous solutions of weak diprotic acids and their salts
The aqueous dissociation equilibrium of a diprotic weak acid $\ce{H2B}$ is
given by the chemical equations
$$\ce{H2B + H2O <=> H3O+ + HB-},\qquad\ce{HB- + H2O <=> H3O+ + B^{2-}}.$$
Relevant chemical species are $\ce{H3O+}$, $\ce{OH-}$, $\ce{H2B}$, $\ce{HB-}$,
and $\ce{B^{2-}}$ with equilibrium molar concentrations $\ce{[H3O+]}$,
$\ce{[OH^{-}]}$, $\ce{[H2B]}$, $\ce{[HB^{-}]}$, and $\ce{[B^{2-}]}$,
respectively. These two equilibria are effective equilibria, since the two
protons of $\ce{H2B}$ can dissociate separately and not necessarily
consecutively [9, 1, 11].
A solution of the acid $\ce{H2B}$ is prepared in water at analytical molar
concentration $C_{\mathrm{a}}$. Once the system reaches chemical equilibrium,
the concentrations of the chemical species are given by five physical
conditions: the two weak acid dissociation constants $K_{\mathrm{1}}$ and
$K_{\mathrm{2}}$, the water auto-ionization constant $K_{\mathrm{w}}$, the
electric neutrality, and the mass balance,
$$K_{\mathrm{1}}=\frac{\ce{[H3O+]}}{C^{\circ}}\frac{\ce{[HB^{-}]}}{C^{\circ}}\left(\frac{\ce{[H2B]}}{C^{\circ}}\right)^{-1},\quad(1)$$
$$K_{\mathrm{2}}=\frac{\ce{[H3O+]}}{C^{\circ}}\frac{\ce{[B^{2-}]}}{C^{\circ}}\left(\frac{\ce{[HB^{-}]}}{C^{\circ}}\right)^{-1},\quad(2)$$
$$K_{\mathrm{w}}=\frac{\ce{[H3O^{+}]}}{C^{\circ}}\frac{\ce{[OH^{-}]}}{C^{\circ}},\quad(3)$$
$$\ce{[H3O+]}=\ce{[OH^{-}]}+\ce{[HB^{-}]}+2\ce{[B^{2-}]},\quad(4)$$
$$C_{\mathrm{a}}=\ce{[H2B]}+\ce{[HB^{-}]}+\ce{[B^{2-}]},\quad(5)$$
respectively. The standard molar concentration is ${C^{\circ}=1\,\mathrm{M}}$.
The acid constants $K_{\mathrm{1}}$ and $K_{\mathrm{2}}$ are dimensionless,
and their values typically range between $10^{-10}$ and $10^{-1}$. In this
work, the biochemical standard state
$C^{\standardstate}=C^{\circ}\sqrt{K_{\mathrm{w}}}$ is used to define the
dimensionless variables: ${x=\ce{[H3O^{+}]}/C^{\standardstate}}$,
${y=\ce{[OH^{-}]}/C^{\standardstate}}$,
${z_{0}=\ce{[H2B]}/C^{\standardstate}}$,
${z_{1}=\ce{[HB^{-}]}/C^{\standardstate}}$,
${z_{2}=\ce{[B^{2-}]}/C^{\standardstate}}$, and the parameter
${c_{\mathrm{a}}=C_{\mathrm{a}}/C^{\standardstate}}$. These definitions make
the equilibrium constants ${k_{1}=K_{\mathrm{1}}/\sqrt{K_{\mathrm{w}}}}$,
${k_{2}=K_{\mathrm{2}}/\sqrt{K_{\mathrm{w}}}}$, and ${k_{\mathrm{w}}=1}$. In
terms of the new variables and constants, equations (1)–(5) are replaced by
$$k_{\mathrm{1}}=\frac{xz_{1}}{z_{0}},\quad(6)\qquad k_{\mathrm{2}}=\frac{xz_{2}}{z_{1}},\quad(7)\qquad k_{\mathrm{w}}=xy=1,\quad(8)$$
$$x=y+z_{1}+2z_{2},\quad(9)\qquad c_{\mathrm{a}}=z_{0}+z_{1}+z_{2}.\quad(10)$$
The equations for electric neutrality (9) and mass balance (10) are explicitly
affected by the presence of a strong base and salts of the conjugate bases
$\ce{HB-}$ and $\ce{B^{2-}}$, _e.g._ $\ce{NaOH}$, $\ce{NaHB}$ and $\ce{Na2B}$
respectively. If the dimensionless concentrations of the strong base and salts
are ${c_{\mathrm{b}}=\ce{[NaOH]}/C^{\standardstate}}$,
${s_{\mathrm{1}}=\ce{[NaHB]}/C^{\standardstate}}$ and
${s_{\mathrm{2}}=\ce{[Na2B]}/C^{\standardstate}}$, the charge and mass balance
equations are modified to
$$x+\bar{c}_{\mathrm{b}}=y+z_{1}+2z_{2},\quad(11)\qquad\bar{c}_{\mathrm{a}}=z_{0}+z_{1}+z_{2},\quad(12)$$
with effective concentrations
$\bar{c}_{\mathrm{a}}=c_{\mathrm{a}}+s_{\mathrm{1}}+s_{\mathrm{2}}$ and
$\bar{c}_{\mathrm{b}}=c_{\mathrm{b}}+s_{\mathrm{1}}+2s_{\mathrm{2}}$. These
effective dimensionless variables are related to the effective molar
concentrations by
$\bar{C}_{\mathrm{a}}=C^{\standardstate}\bar{c}_{\mathrm{a}}$, and
$\bar{C}_{\mathrm{b}}=C^{\standardstate}\bar{c}_{\mathrm{b}}$.
The use of $y=1/x$, obtained from equation (8), in equations (6), (7), (11)
and (12) gives a non-linear system $\mathcal{S}_{4}$ of four equations with
four unknowns $x$, $z_{0}$, $z_{1}$ and $z_{2}$.
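Before turning to the algebraic route of the next subsection, note that $\mathcal{S}_{4}$ can also be solved directly with a generic numerical root finder. The Python sketch below does this with SciPy for a 1 mM oxalic acid solution; the acid constants are those quoted in Section 3.2, and the all-equal initial guess is a crude assumption that may need adjusting in other regimes.

```python
import numpy as np
from scipy.optimize import fsolve

# Dimensionless parameters for 1 mM oxalic acid: K1 = 5.62e-2, K2 = 1.54e-4.
KW = 1.0e-14
k1, k2 = 5.62e-2 / np.sqrt(KW), 1.54e-4 / np.sqrt(KW)
ca_bar, cb_bar = 1.0e-3 / np.sqrt(KW), 0.0

def residuals(v):
    x, z0, z1, z2 = v
    return [x * z1 - k1 * z0,                      # equation (6)
            x * z2 - k2 * z1,                      # equation (7)
            x + cb_bar - 1.0 / x - z1 - 2.0 * z2,  # equation (11) with y = 1/x
            z0 + z1 + z2 - ca_bar]                 # equation (12)

# Crude positive initial guess; fsolve may need a better one in other regimes.
x, z0, z1, z2 = fsolve(residuals, [ca_bar, ca_bar, ca_bar, ca_bar])
print("pH =", 7.0 - np.log10(x))
```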
### 2.2 Mathematical expression for $\ce{[H3O+]}$
Before obtaining the full solution of the non-linear system $\mathcal{S}_{4}$
it is useful to analyze the linear subsystem $\mathcal{S}_{3}$ made by
equations (6), (7) and (12). This subsystem can be easily solved to obtain the
concentrations $z_{0}$, $z_{1}$, $z_{2}$ in terms of $x$. The linear system
$\mathcal{S}_{3}$ can be expressed as $\mathsf{K}\bm{z}=\bm{c}$ with
$\mathsf{K}=\mathsf{K}(x)$ given by
$$\mathsf{K}=\begin{pmatrix}k_{1}&-x&0\\0&k_{2}&-x\\1&1&1\end{pmatrix},\quad(13)$$
$\bm{z}=\left(z_{0},z_{1},z_{2}\right)^{\intercal}$, and
$\bm{c}=\left(0,0,\bar{c}_{\mathrm{a}}\right)^{\intercal}$. Solving for
$\bm{z}$ gives
$$\bm{z}=\frac{\bar{c}_{\mathrm{a}}}{\det{\mathsf{K}}}\begin{pmatrix}x^{2}\\k_{1}x\\k_{1}k_{2}\end{pmatrix},\quad(14)$$
with $\det{\mathsf{K}}=x^{2}+k_{1}x+k_{1}k_{2}$ as the determinant of
$\mathsf{K}$. It is convenient to write this determinant as
${\det{\mathsf{K}}=(x-\kappa_{1})(x-\kappa_{2})}$ with
$\kappa_{1,2}=\tfrac{k_{1}}{2}\left(-1\pm\sqrt{1-4\kappa}\right),$ (15)
and $\kappa=k_{2}/k_{1}$ as the ratio of the diprotic dissociation constants.
These $\kappa_{1,2}$ are related to the Simms constants [12, 7], $g_{1,2}$, by
$g_{1,2}=-\kappa_{1,2}$.
Given the condition $\kappa\leq 1/4$, _i.e._ $k_{1}\geq 4k_{2}$, the roots
$\kappa_{1,2}$ are both non-positive real numbers, $\kappa_{1,2}\leq 0$;
otherwise these roots are a pair of complex conjugate numbers with negative
real part, _i.e._ $\kappa_{1}=\kappa_{2}^{*}$ and
$\mathrm{re}{(\kappa_{1})}=\mathrm{re}{(\kappa_{2})}<0$. The inequality
$k_{1}\geq 4k_{2}$ has been obtained previously by Adams in his analysis of
polyprotic acid dissociations [9].
The solution of $\mathcal{S}_{3}$ gives the concentrations
$\bm{z}=\left(z_{0},z_{1},z_{2}\right)^{\intercal}$ as functions of $x$,
$k_{1}$, $k_{2}$ and $\bar{c}_{\mathrm{a}}$. Although $\bm{z}$ does not depend
explicitly on $\bar{c}_{\mathrm{b}}$, it depends implicitly through $x$. This
dependency is specified by using $y=1/x$ in equation (11), which gives
$x-\tfrac{1}{x}+\bar{c}_{\mathrm{b}}=-\bm{q}\cdot\bm{z},$ (16)
with $\bm{q}=\left(0,-1,-2\right)^{\intercal}$ as the vector of electric
charges of $z_{0}$, $z_{1}$, and $z_{2}$. This equation bears some similarity
to equation (41) used in the work of Kalka [7]. However, unlike that
article, in this paper closed analytic solutions for $x$ are obtained instead
of graphical or numerical solutions.
Multiplying equation (16) by $x\det{\mathsf{K}}$, and expanding the scalar
product, produces
$\left(x-\kappa_{1}\right)\left(x-\kappa_{2}\right)\left(x^{2}+\bar{c}_{\mathrm{b}}x-1\right)=\bar{c}_{\mathrm{a}}k_{1}x\left(x+2k_{2}\right),$
(17)
which can be written
$\left(x-\kappa_{1}\right)\left(x-\kappa_{2}\right)\left(x-\sigma_{1}\right)\left(x-\sigma_{2}\right)=\bar{c}_{\mathrm{a}}k_{1}x\left(x+2k_{2}\right),$
(18)
with
$\sigma_{1,2}=\tfrac{1}{2}\left(-\bar{c}_{\mathrm{b}}\pm\sqrt{\bar{c}_{\mathrm{b}}^{2}+4}\right)$.
The roots $\sigma_{1,2}$ are both real numbers with $0<\sigma_{1}\leq 1$ and
$\sigma_{2}\leq-1$. The case of $\bar{c}_{\mathrm{b}}=0$ gives
$\sigma_{1,2}=\pm 1$.
Expansion of equation (18) gives $P=0$ with
$P=x^{4}+c_{3}x^{3}+c_{2}x^{2}+c_{1}x+c_{0},$ (19)
and
$$c_{3}=\bar{c}_{\mathrm{b}}+k_{1},\quad(20)\qquad c_{2}=-\left(1+k_{1}\left(\bar{c}_{\mathrm{a}}-\bar{c}_{\mathrm{b}}-k_{2}\right)\right),\quad(21)$$
$$c_{1}=-k_{1}\left(1+k_{2}\left(2\bar{c}_{\mathrm{a}}-\bar{c}_{\mathrm{b}}\right)\right),\quad(22)\qquad c_{0}=-k_{1}k_{2},\quad(23)$$
with $c_{4}=1$.
Before finding the roots of the equation $P=0$ it is helpful to analyze the
nature of its roots, which is studied by considering the 5-tuple of its
coefficients,
$\operatorname{coef}[P]=\left(c_{4},c_{3},c_{2},c_{1},c_{0}\right),$ (24)
and its signs
$\operatorname{sgn}(\operatorname{coef}[P])=\left(\operatorname{sgn}{c_{4}},\operatorname{sgn}{c_{3}},\operatorname{sgn}{c_{2}},\operatorname{sgn}{c_{1}},\operatorname{sgn}{c_{0}}\right).$
(25)
It is straightforward to see that $\operatorname{sgn}{c_{4}}=+$,
$\operatorname{sgn}{c_{3}}=+$, and $\operatorname{sgn}{c_{0}}=-$. The sign of
$c_{2}$ and $c_{1}$ requires a careful analysis. There are four possible
cases: $\left(\operatorname{sgn}{c_{2}},\operatorname{sgn}{c_{1}}\right)$:
$\left(+,+\right)$, $\left(+,-\right)$, $\left(-,+\right)$, and
$\left(-,-\right)$. The 5-tuple $\operatorname{sgn}(\operatorname{coef}[P])$
can have four possible outcomes: $\left(+,+,+,+,-\right)$,
$\left(+,+,+,-,-\right)$, $\left(+,+,-,+,-\right)$, and
$\left(+,+,-,-,-\right)$. These 5-tuples display one or three changes of sign
along the sequence of elements. Descartes' rule of signs states that the number
number of positive roots of a polynomial, $P$, is either equal to the number
of sign changes of $\operatorname{sgn}(\operatorname{coef}[P])$, or is less
than it by an even number. The application of Descartes’ rule to $P$ gives
either one or three positive roots. It can be proved that the polynomial $P$
has only one positive real root by a careful analysis of equation (18). The
left hand side of equation (18) is a fourth degree polynomial $P_{L}(x)$ with
only one positive root $\sigma_{1}$, one negative root $\sigma_{2}$ and two
roots $\kappa_{1,2}$ that could be either negative or a complex conjugate
pair. The right hand side of equation (18) is an upward parabola $P_{R}(x)$
with roots at zero and $-2k_{2}$. The coefficients of the quartic term of
$P_{L}(x)$ and the quadratic term of $P_{R}$ are both positive numbers,
therefore $P_{L}(x)$ and $P_{R}(x)$ must tend to infinity as $x$ goes to
positive or negative infinity. Since the quartic function always grows faster
than the quadratic function, and given that $P_{L}(0)=-k_{1}k_{2}<0$, the
polynomials $P_{L}(x)$ and $P_{R}(x)$ must be equal at exactly one positive $x$.
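This uniqueness claim is easy to spot-check numerically; the sketch below (a sanity check under randomly sampled parameters, not part of the proof) builds the coefficients (20)–(23) and counts the positive real roots of $P$.

```python
import numpy as np

rng = np.random.default_rng(1)

def quartic_coeffs(k1, k2, ca, cb):
    # Coefficients c4..c0 of P, equations (19)-(23) with c4 = 1.
    return [1.0,
            cb + k1,
            -(1.0 + k1 * (ca - cb - k2)),
            -k1 * (1.0 + k2 * (2.0 * ca - cb)),
            -k1 * k2]

for _ in range(10_000):
    k1, k2 = 10.0 ** rng.uniform(-2, 6, size=2)
    ca, cb = 10.0 ** rng.uniform(-1, 5, size=2)
    r = np.roots(quartic_coeffs(k1, k2, ca, cb))
    # Keep real roots (up to a numerical tolerance) and count the positive ones.
    real = r[np.abs(r.imag) <= 1e-8 * (1.0 + np.abs(r.real))].real
    assert (real > 0).sum() == 1, (k1, k2, ca, cb, r)
print("exactly one positive real root in every sampled case")
```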
In the Appendix it is shown that, using Ferrari's method [13, 14], the quartic
equation $P=0$ can be reduced to an associated resolvent cubic equation $R=0$
with
$$R=y^{3}-c_{2}y^{2}+\left(c_{1}c_{3}-4c_{0}\right)y+\left(4c_{0}c_{2}-c_{0}c_{3}^{2}-c_{1}^{2}\right).\quad(26)$$
The cubic equation $R=0$ can be solved by Cardano’s method [8], for which the
change of variable $y=\bar{y}+\frac{c_{2}}{3}$ is necessary to obtain a
depressed cubic equation $R_{\mathrm{dc}}=0$ with
$R_{\mathrm{dc}}=\bar{y}^{3}+\bar{p}\bar{y}+\bar{q},$ (27)
where
$$\bar{p}=c_{1}c_{3}-\frac{c_{2}^{2}}{3}-4c_{0},\quad(28)\qquad\bar{q}=\frac{8c_{0}c_{2}}{3}+\frac{c_{1}c_{2}c_{3}}{3}-\frac{2c_{2}^{3}}{27}-c_{1}^{2}-c_{0}c_{3}^{2},\quad(29)$$
and the discriminant of $P$ is $\Delta=-4\bar{p}^{3}-27\bar{q}^{2}$ [13, 14].
The positive root of $P=0$ is given by three cases depending on the sign of
$\Delta$ and the signs of the functions $\xi_{1,2}$,
$$\xi_{1,2}=-\frac{\bar{q}}{2}\pm\frac{1}{2}\sqrt{-\frac{\Delta}{27}}.\quad(30)$$
The quantities $\Delta$, $\xi_{1}$, and $\xi_{2}$ are functions of the
equilibrium constants, $k_{1}$ and $k_{2}$, and the effective concentrations,
$\bar{c}_{\mathrm{a}}$ and $\bar{c}_{\mathrm{b}}$. Explicitly, the positive
root of $P=0$ is given by
$$x=\begin{cases}x_{1},&\Delta>0,\\ x_{1},&\Delta<0,\;\xi_{1}>0,\;\xi_{2}>0,\\ x_{3},&\Delta<0,\;\xi_{1}<0,\;\xi_{2}<0,\\ x_{3},&\Delta<0,\;\xi_{1}>0,\;\xi_{2}<0.\end{cases}\quad(31)$$
The roots $x_{1}$ and $x_{3}$ are:
$$x_{1}=\tfrac{1}{2}\left(-\left(\tfrac{\bar{c}_{\mathrm{b}}+k_{1}}{2}-t_{1}\right)+\sqrt{\left(\tfrac{\bar{c}_{\mathrm{b}}+k_{1}}{2}-t_{1}\right)^{2}-2y_{1}+\tfrac{(\bar{c}_{\mathrm{b}}+k_{1})y_{1}+2k_{1}\left(1+(2\bar{c}_{\mathrm{a}}-\bar{c}_{\mathrm{b}})k_{2}\right)}{t_{1}}}\right),\quad(32)$$
$$x_{3}=\tfrac{1}{2}\left(-\left(\tfrac{\bar{c}_{\mathrm{b}}+k_{1}}{2}+t_{1}\right)+\sqrt{\left(\tfrac{\bar{c}_{\mathrm{b}}+k_{1}}{2}+t_{1}\right)^{2}-2y_{1}-\tfrac{(\bar{c}_{\mathrm{b}}+k_{1})y_{1}+2k_{1}\left(1+(2\bar{c}_{\mathrm{a}}-\bar{c}_{\mathrm{b}})k_{2}\right)}{t_{1}}}\right),\quad(33)$$
with
$$t_{1}=\sqrt{1+\tfrac{1}{4}(\bar{c}_{\mathrm{b}}+k_{1})^{2}+k_{1}\left(\bar{c}_{\mathrm{a}}-\bar{c}_{\mathrm{b}}-k_{2}\right)+y_{1}},\quad(34)\qquad y_{1}=\bar{y}_{1}-\tfrac{1+k_{1}\left(\bar{c}_{\mathrm{a}}-\bar{c}_{\mathrm{b}}-k_{2}\right)}{3},\quad(35)$$
and $\bar{y}_{1}$:
$$\bar{y}_{1}=\begin{cases}\tfrac{2}{3}\sqrt{1+k_{1}Q_{1}+k_{1}^{2}Q_{2}}\cos{\left(\tfrac{\theta}{3}\right)},&\Delta>0,\\ \sqrt[3]{|\xi_{1}|}+\sqrt[3]{|\xi_{2}|},&\Delta<0,\;\xi_{1}>0,\;\xi_{2}>0,\\ -(\sqrt[3]{|\xi_{1}|}+\sqrt[3]{|\xi_{2}|}),&\Delta<0,\;\xi_{1}<0,\;\xi_{2}<0,\\ \sqrt[3]{|\xi_{1}|}-\sqrt[3]{|\xi_{2}|},&\Delta<0,\;\xi_{1}>0,\;\xi_{2}<0.\end{cases}\quad(36)$$
The functions $\theta$, $Q_{1}$, and $Q_{2}$ are given in the Appendix. For
the most common case of a diprotic acid, with $k_{1}>4k_{2}$, the
concentration $x$ obeys $x=x_{1}$ over most of the concentration range.
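For readers who prefer a numerical route, the sketch below computes the same pH by extracting the unique positive root of the quartic (19)–(23) with NumPy's companion-matrix root finder, in place of the closed form (31)–(36). The example uses 1 mM oxalic acid, with p$K$ values computed from the $K_{1}$ and $K_{2}$ quoted in Section 3.2.

```python
import numpy as np

KW = 1.0e-14  # water auto-ionization constant at 25 C

def ph_diprotic(Ca_bar, Cb_bar, pK1, pK2):
    """pH from the effective molar concentrations, via the quartic (19)-(23).

    Numerical stand-in for the closed form (31)-(36): the unique positive
    root of P is extracted from np.roots.
    """
    c_std = np.sqrt(KW)                      # biochemical standard state (M)
    ca, cb = Ca_bar / c_std, Cb_bar / c_std  # dimensionless concentrations
    k1, k2 = 10.0 ** (-pK1) / c_std, 10.0 ** (-pK2) / c_std
    r = np.roots([1.0,
                  cb + k1,
                  -(1.0 + k1 * (ca - cb - k2)),
                  -k1 * (1.0 + k2 * (2.0 * ca - cb)),
                  -k1 * k2])
    x = r[(np.abs(r.imag) <= 1e-8 * (1.0 + np.abs(r.real)))
          & (r.real > 0)].real[0]
    return 7.0 - np.log10(x)

# 1 mM oxalic acid: K1 = 5.62e-2, K2 = 1.54e-4, i.e. pK1 ~ 1.25, pK2 ~ 3.81.
print(ph_diprotic(1.0e-3, 0.0, 1.25, 3.81))
```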
### 2.3 Strong base titration of dissolutions of diprotic acids and their buffer mixtures
The titration by a strong base of an acid solution, or of an acid buffer
solution, can be analyzed by using the effective concentrations
$\bar{c}_{\mathrm{a}}$ and $\bar{c}_{\mathrm{b}}$. Recall that the effective
dimensionless concentrations are related to the effective molar concentrations
by $\bar{c}_{\mathrm{a}}=\bar{C}_{\mathrm{a}}/C^{\standardstate}$ and
$\bar{c}_{\mathrm{b}}=\bar{C}_{\mathrm{b}}/C^{\standardstate}$. A buffer solution is
made of a volume $V_{\mathrm{a0}}$ of an acid solution with analytical
concentration $C_{\mathrm{a0}}$, and volumes $V_{10}$ and $V_{20}$ of salt
solutions with analytical concentrations $C_{10}$ and $C_{20}$, respectively.
The total volume of the buffer solution is
$V_{\mathrm{B0}}=V_{\mathrm{a0}}+V_{\mathrm{10}}+V_{\mathrm{20}}$. The case of
an acid solution is obtained by using $V_{10}=0$ and $V_{20}=0$ in the
volume of the buffer: $V_{\mathrm{B0}}=V_{\mathrm{a0}}$. The buffer effective
concentrations are given by
$$\bar{c}_{\mathrm{a0}}=\left(c_{\mathrm{a0}}V_{\mathrm{a0}}+s_{\mathrm{10}}V_{\mathrm{10}}+s_{\mathrm{20}}V_{\mathrm{20}}\right)/V_{\mathrm{B0}},\quad(37)$$
$$\bar{c}_{\mathrm{b0}}=\left(s_{\mathrm{10}}V_{\mathrm{10}}+2s_{\mathrm{20}}V_{\mathrm{20}}\right)/V_{\mathrm{B0}}.\quad(38)$$
These expressions for the effective concentrations are obtained from the
analytical concentrations simply by using the scaling factor
$1/C^{\standardstate}$.
A volume $V_{\mathrm{b}}$ of a strong base with analytical concentration
$C_{\mathrm{b0}}$ is added to the buffer of volume $V_{\mathrm{B0}}$ and
effective concentrations $\bar{c}_{\mathrm{a0}}$ and $\bar{c}_{\mathrm{b0}}$.
The addition of the strong base changes the volume of the buffer to
$V_{\mathrm{B}}=V_{\mathrm{B0}}+V_{\mathrm{b}}$, and the effective
concentrations to the titration effective concentrations
$$\bar{c}_{\mathrm{a}}=\left(c_{\mathrm{a0}}V_{\mathrm{a0}}+s_{\mathrm{10}}V_{\mathrm{10}}+s_{\mathrm{20}}V_{\mathrm{20}}\right)/V_{\mathrm{B}},\quad(39)$$
$$\bar{c}_{\mathrm{b}}=\left(c_{\mathrm{b0}}V_{\mathrm{b}}+s_{\mathrm{10}}V_{\mathrm{10}}+2s_{\mathrm{20}}V_{\mathrm{20}}\right)/V_{\mathrm{B}}.\quad(40)$$
The use of the buffer effective concentrations, equations (37) and (38), in
the titration effective concentrations, equations (39) and (40), gives
$$\frac{\bar{c}_{\mathrm{a0}}}{\bar{c}_{\mathrm{a}}}-1=\frac{V_{\mathrm{b}}}{V_{\mathrm{B0}}},\quad(41)\qquad\left(\frac{c_{\mathrm{b0}}}{\bar{c}_{\mathrm{b}}}-1\right)\frac{V_{\mathrm{b}}}{V_{\mathrm{B0}}}=1-\frac{\bar{c}_{\mathrm{b0}}}{\bar{c}_{\mathrm{b}}}.\quad(42)$$
The titration effective concentrations can be combined to obtain an equation
for $\bar{c}_{\mathrm{b}}$ in terms of $\bar{c}_{\mathrm{a}}$,
$c_{\mathrm{b0}}$ and the buffer effective concentrations,
$\bar{c}_{\mathrm{b}}=\frac{\bar{c}_{\mathrm{a}}}{\bar{c}_{\mathrm{a0}}}\left(\bar{c}_{\mathrm{b0}}-c_{\mathrm{b0}}\right)+c_{\mathrm{b0}}.$
(43)
This is the equation of a straight line with slope
$\left(\bar{c}_{\mathrm{b0}}-c_{\mathrm{b0}}\right)/\bar{c}_{\mathrm{a0}}$ and
ordinate intercept $c_{\mathrm{b0}}$. The slope of this straight line can be
negative, zero, or positive, depending on the concentrations of the buffer,
$\bar{c}_{\mathrm{b0}}=s_{\mathrm{10}}+2s_{\mathrm{20}}$, and of the titrating
base, $c_{\mathrm{b0}}$.
In addition to the case of the titration of buffer solutions, this equation
can be used for the titration of acid solutions. The case of an acid solution
is obtained by taking $\bar{c}_{\mathrm{b0}}=0$ and
$\bar{c}_{\mathrm{a0}}=c_{\mathrm{a0}}$, to obtain
$\bar{c}_{\mathrm{b}}=c_{\mathrm{b0}}-\left(\frac{c_{\mathrm{b0}}}{c_{\mathrm{a0}}}\right)\bar{c}_{\mathrm{a}},$
(44)
where $\bar{c}_{\mathrm{a}}=c_{\mathrm{a}}$ and
${\bar{c}_{\mathrm{b}}=c_{\mathrm{b}}}$. This is the equation of a straight
line with slope ${-c_{\mathrm{b0}}/c_{\mathrm{a0}}}$ and ordinate intercept
$c_{\mathrm{b0}}$.
Equations (43) and (44) describe the titrations as straight line paths on the
$(\bar{c}_{\mathrm{a}},\bar{c}_{\mathrm{b}})$ plane. The addition of the base,
with concentration $c_{\mathrm{b0}}$, increases $\bar{c}_{\mathrm{b}}$ along a
straight line, from $\bar{c}_{\mathrm{b0}}$ to $c_{\mathrm{b0}}$, meanwhile
decreases $\bar{c}_{\mathrm{a}}$, from $\bar{c}_{\mathrm{a0}}$ to zero. The
use of the contours of constant pH on the
$(\bar{c}_{\mathrm{a}},\bar{c}_{\mathrm{b}})$ plane and the trajectory
described by equation (43), or (44), give the full description of the
titration experiments. However, it is more practical to describe the titration
experiment as a function of the pH instead of the added strong base. For this,
equation (16) is used, which relates $x$, and hence the pH, to the effective
concentrations $\bar{c}_{\mathrm{a}}$ and $\bar{c}_{\mathrm{b}}$. After some
rearrangement, equation (16) gives
$x-\tfrac{1}{x}+\bar{c}_{\mathrm{b}}=\frac{\bar{c}_{\mathrm{a}}k_{1}\left(x+2k_{2}\right)}{x^{2}+k_{1}x+k_{1}k_{2}}.$
(45)
The use of the ratio between the effective dimensionless concentrations,
$\bar{n}={\bar{c}_{\mathrm{b}}}/{\bar{c}_{\mathrm{a}}}$, in equation (45)
gives
$\bar{n}=\frac{k_{1}\left(x+2k_{2}\right)}{x^{2}+k_{1}x+k_{1}k_{2}}+\frac{1-x^{2}}{x\bar{c}_{\mathrm{a}}}.$
(46)
This equation has been reported previously by Kalka [7], and works well in the
case of constant $\bar{c}_{\mathrm{a}}$. However, in the titration experiment
the concentrations $\bar{c}_{\mathrm{a}}$ and $\bar{c}_{\mathrm{b}}$ are not
constant. The last term on the right hand side of this equation depends on
$\bar{c}_{\mathrm{a}}$. This dependence can be eliminated using equation (43).
For this purpose, equation (43) must be written in terms of $\bar{n}$,
$\bar{n}=\bar{n}_{0}+c_{\mathrm{b0}}\left(\frac{1}{\bar{c}_{\mathrm{a}}}-\frac{1}{\bar{c}_{\mathrm{a0}}}\right),$
(47)
with $\bar{n}_{0}={\bar{c}_{\mathrm{b0}}}/{\bar{c}_{\mathrm{a0}}}$. Algebraic
manipulation of equation (47) gives
$\frac{1}{\bar{c}_{\mathrm{a}}}=\frac{\bar{n}-\bar{n}_{0}}{c_{\mathrm{b0}}}+\frac{1}{\bar{c}_{\mathrm{a0}}},$
(48)
which can be used in the last term of equation (46) to obtain an expression
for $\bar{n}$ in terms of the pH
$\bar{n}=\left(\bar{n}_{0}-\frac{c_{\mathrm{b0}}}{\bar{c}_{\mathrm{a0}}}\right)\frac{P_{\mathrm{a0}}}{P_{\mathrm{b0}}},$
(49)
where $\bar{n}\geq\bar{n}_{0}$ and
$$P_{\mathrm{a0}}=P\left(x=10^{7-\mathrm{pH}},\;\bar{c}_{\mathrm{a}}=\frac{\bar{c}_{\mathrm{a0}}c_{\mathrm{b0}}}{c_{\mathrm{b0}}-\bar{c}_{\mathrm{b0}}},\;\bar{c}_{\mathrm{b}}=0\right),\quad(50)$$
$$P_{\mathrm{b0}}=P\left(x=10^{7-\mathrm{pH}},\;\bar{c}_{\mathrm{a}}=0,\;\bar{c}_{\mathrm{b}}=c_{\mathrm{b0}}\right),\quad(51)$$
with $P$ the polynomial given by equation (19). The case of the acid titration
is given by considering $\bar{c}_{\mathrm{b0}}=0$ in equations (49)–(51).
The equivalence points for a diprotic acid occur at $\bar{n}=1,2$ in equation
(49). The first equivalence point occurs when the acid and base concentrations
are equal, $\bar{n}=1$; the second occurs when the base concentration is twice
the acid concentration, $\bar{n}=2$. Since $\bar{n}(\mathrm{pH})$ must be a
monotonically increasing function of the pH, it must fulfill the condition
$\bar{n}^{\prime}(\mathrm{pH})>0$.
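A titration curve can therefore be generated by stepping the added volume $V_{\mathrm{b}}$, updating the effective concentrations through equations (39)–(40), and evaluating the pH at each step. The sketch below does this for the maleic acid example of the Results section; it reuses the `ph_diprotic` function from the Section 2.2 sketch, which is assumed to be in scope.

```python
import numpy as np

# Titration of Va0 = 10 mL of a Ca0 = 1 mM maleic acid solution
# (pK1 = 1.92, pK2 = 6.23) with a Cb0 = 1 mM strong base.
# ph_diprotic: the numerical root-finding sketch from Section 2.2.
Va0, Ca0, Cb0 = 10.0, 1.0e-3, 1.0e-3   # mL, M, M
V10 = V20 = 0.0                        # no salts: a pure acid solution

for Vb in np.linspace(0.0, 40.0, 9):
    VB = Va0 + V10 + V20 + Vb
    Ca_bar = Ca0 * Va0 / VB            # equation (39) with s10 = s20 = 0
    Cb_bar = Cb0 * Vb / VB             # equation (40) with s10 = s20 = 0
    print(f"Vb = {Vb:4.1f} mL   pH = "
          f"{ph_diprotic(Ca_bar, Cb_bar, 1.92, 6.23):.2f}")
```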
## 3 Results and discussion
The validity of equations (31)–(36) has been tested by calculating the pH, at
different effective concentrations $\bar{c}_{\mathrm{a}}$ and
$\bar{c}_{\mathrm{b}}$, for 180 diprotic acids with reported values of
p$K_{1}$ and p$K_{2}$ [10], and comparing the results with the numerical
solution. The average absolute error in the pH, $\epsilon_{\mathrm{pH}}$, at
millimolar concentrations $\bar{c}_{\mathrm{a}}$ and $\bar{c}_{\mathrm{b}}$,
is $\epsilon_{\mathrm{pH}}\lesssim 10^{-5}$ pH units with a standard deviation
$\lesssim 10^{-4}$ pH units. The small error in the calculated pH is caused
mainly by the diprotic acids with
$\mathrm{p}K_{2}-\mathrm{p}K_{1}<-\log_{10}4$.
### 3.1 Aqueous dissolution of a diprotic acid
The case of a diprotic acid is given by using the conditions
$\bar{c}_{\mathrm{b}}=0$ and $\bar{c}_{\mathrm{a}}=c_{\mathrm{a}}$ in
equations (31)–(36). The condition $k_{1}\geq 4k_{2}$, _i.e._ ,
$$\mathrm{p}K_{2}-\mathrm{p}K_{1}\geq\log_{10}4\approx 0.602,\quad(52)$$
makes the discriminant of $P$ a positive number, $\Delta>0$. This is the
condition for a quartic equation with four distinct real roots. An inspection
of the values of $\mathrm{p}K_{1}$ and $\mathrm{p}K_{2}$ of tabulated diprotic
weak acids indicates that condition (52) is fulfilled by many diprotic weak
acids [10, 9, 11]. Figure 1 displays, on the p$K_{1}$–p$K_{2}$ plane, the
region given by condition (52) as the light blue region above the blue line.
It is clear from the Figure that most of the diprotic weak acids (open black
circles) fulfill this condition; however, a simple visual inspection of
Figure 1 shows that there are weak diprotic acids that fulfill the condition
${\mathrm{p}K_{2}-\mathrm{p}K_{1}\leq-\log_{10}4}$ (light red region). This
condition can be expressed in terms of the acid constants as $k_{2}/k_{1}\geq
4$, _i.e._ $K_{2}/K_{1}\geq 4$. Diprotic acids in the light red region have
$\mathrm{p}K_{1}>\mathrm{p}K_{2}$; examples are several nitrogenous organic
compounds such as Piperazine, Quinine, Phenylbiguanide, $L$-Nicotine,
$p$-Benzidine, Sulfamethazine, $m$-Phenylenediamine, $p$-Phenylenediamine,
1,2-Propanediamine, 1,3-Propanediamine, 1,4-Butanediamine, 1,6-Hexanediamine,
1,8-Octanediamine, cis-2,5-Dimethylpiperazine, trans-1,2-Cyclohexanediamine,
cis-1,2-Cyclohexanediamine, and the alcohol 1,3-Diamino-2-propanol [10].
Figure 1: p$K_{1}$–p$K_{2}$ plane for a set of diprotic acids in aqueous
solution [10]. The light blue region is given by the condition
$\mathrm{p}K_{2}-\mathrm{p}K_{1}\geq\log_{10}4$; the light red region is given
by the condition $\mathrm{p}K_{2}-\mathrm{p}K_{1}\leq-\log_{10}4$.
### 3.2 Concentrations $\ce{[H2B]}$, $\ce{[HB^{-}]}$, and $\ce{[B^{2-}]}$ in
terms of $\ce{[H3O+]}$
Equation (18) can be written to give an expression for $\det{\mathsf{K}}$,
$\det{\mathsf{K}}=\frac{\bar{c}_{\mathrm{a}}k_{1}x\left(x+2k_{2}\right)}{\left(x-\sigma_{1}\right)\left(x-\sigma_{2}\right)}.$
(53)
This equation can be used in equation (14) to obtain
$$z_{0}=\frac{x\left(x-\sigma_{1}\right)\left(x-\sigma_{2}\right)}{k_{1}(x+2k_{2})},\quad(54)\qquad z_{1}=\frac{\left(x-\sigma_{1}\right)\left(x-\sigma_{2}\right)}{x+2k_{2}},\quad(55)\qquad z_{2}=\frac{k_{2}\left(x-\sigma_{1}\right)\left(x-\sigma_{2}\right)}{x\left(x+2k_{2}\right)}.\quad(56)$$
These concentrations are constrained to be positive numbers. Since
$0<\sigma_{1}\leq 1$ and $\sigma_{2}\leq-1$, it is necessary to have
$x>\sigma_{1}$.
It is also possible to obtain a parametric dependence on $\bar{c}_{\mathrm{a}}$,
$\bar{c}_{\mathrm{b}}$ and $k_{2}$ by using equation (12), which gives as a
result equations (55), (56), and
$z_{0}=\bar{c}_{\mathrm{a}}-\frac{\left(x+k_{2}\right)\left(x-\sigma_{1}\right)\left(x-\sigma_{2}\right)}{x\left(x+2k_{2}\right)}$
(57)
instead of (54). The case of a dissolution of the diprotic acid gives
$\sigma_{1}=1$ and $\sigma_{2}=-1$, with $\bar{c}_{\mathrm{b}}=0$ and
$\bar{c}_{\mathrm{a}}=c_{\mathrm{a}}$.
It is convenient to employ logarithmic scaling to describe concentrations and
equilibrium constants of highly diluted solutions and weak acids. The
p-function of $\mathrm{Q}$ is defined as
$$\mathrm{pQ}=-\log_{10}a_{\mathrm{Q}}=-\log_{10}\frac{\gamma_{\mathrm{Q}}\ce{[Q]}}{C^{\circ}},\quad(58)$$
with $a_{\mathrm{Q}}$ and $\gamma_{\mathrm{Q}}$ as the activity and the
activity coefficient of $\mathrm{Q}$ respectively [4]. Since equilibrium
constants are dimensionless, it is possible to define the p-function of $K$ as
$\mathrm{p}K=-\log_{10}K$ [6].
The case of weak acids and low concentrations allows the use of the ideal
solution approximation, $\gamma_{\mathrm{Q}}\approx 1$; hence the pH is given
by
$$\mathrm{pH}\approx-\log_{10}\frac{\ce{[H3O+]}}{C^{\circ}}\approx-\log_{10}\frac{C^{\standardstate}x}{C^{\circ}}\approx 7-\log_{10}x.\quad(59)$$
The p-functions for $\ce{[H_{2}B]}$, $\ce{[HB^{-}]}$ and $\ce{[B^{2-}]}$ are
given by $\mathrm{pH_{2}B}\approx 7-\log_{10}z_{0}$, $\mathrm{pHB^{-}}\approx
7-\log_{10}z_{1}$, and $\mathrm{pB^{2-}}\approx 7-\log_{10}z_{2}$,
respectively. The p-function $\mathrm{pH_{2}B}$ can be expressed in two ways,
either using $z_{0}$ from equation (54) to obtain
$\mathrm{pH_{2}B}(k_{1},k_{2})$, or using $z_{0}$ from equation (57) to get
$\mathrm{pH_{2}B}(c_{\mathrm{a}},k_{2})$.
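The following sketch evaluates these p-functions for a pure acid dissolution ($\sigma_{1}=1$, $\sigma_{2}=-1$, $\bar{c}_{\mathrm{b}}=0$) using the oxalic acid constants quoted below; the 1 mM concentration is an illustrative assumption, and the constraint $x>\sigma_{1}$ restricts the curves to $\mathrm{pH}<7$.

```python
import numpy as np

KW = 1.0e-14
K1, K2 = 5.62e-2, 1.54e-4                 # oxalic acid [10]
k1, k2 = K1 / np.sqrt(KW), K2 / np.sqrt(KW)
ca = 1.0e-3 / np.sqrt(KW)                 # 1 mM, dimensionless (assumed)

pH = np.linspace(0.5, 6.95, 300)          # x > sigma_1 = 1 requires pH < 7
x = 10.0 ** (7.0 - pH)                    # equation (59)

z1 = (x - 1.0) * (x + 1.0) / (x + 2.0 * k2)     # equation (55)
z0 = x * z1 / k1                                # equation (54)
z2 = k2 * z1 / x                                # equation (56)
z0_alt = ca - (x + k2) * (x - 1.0) * (x + 1.0) / (x * (x + 2.0 * k2))  # (57)

pH2B, pHBm, pB2m = (7.0 - np.log10(z) for z in (z0, z1, z2))
# The pH of the 1 mM solution is where z0 (eq. (54)) meets z0_alt (eq. (57)),
# i.e. the labeled intersections of Figure 2.
i = np.argmin(np.abs(z0 - z0_alt))
print("pH of the 1 mM solution is close to", pH[i])
```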
Figure 2 displays the behaviour of pH2B, pHB- and pB2- as functions of the pH
for different concentrations $c_{\mathrm{a}}=2^{n}$ ($n=0,1,\dots,23$) of
oxalic acid, $\ce{H2C2O4}$, which has $K_{1}=5.62\times 10^{-2}$ and
$K_{2}=1.54\times 10^{-4}$ [10]. In this Figure the intersections between the
$\mathrm{pH_{2}B}(k_{1},k_{2})$ curve (red) and the
$\mathrm{pH_{2}B}(c_{\mathrm{a}},k_{2})$ curves (pink) are shown as labeled
black points. These intersections give the pH for the different concentrations
$c_{\mathrm{a}}$.
Figure 2: Behaviour of pH2B (red and pink), pHB- (green) and pB2- (blue) as
functions of the pH for oxalic acid, $\ce{H2C2O4}$. Labeled black dots
indicate the intersection between pH2B$(k_{1},k_{2})$ (red) and
pH2B$(c_{\mathrm{a}},k_{2})$ (pink) at different concentrations,
$c_{\mathrm{a}}=2^{n}$, $n=0,1,2,\dots,23$. Recall that
$C_{\mathrm{a}}=C^{\standardstate}c_{\mathrm{a}}$; hence for $n=0$,
$C_{\mathrm{a}}=10^{-7}\,\mathrm{M}$, and for $n=23$, $C_{\mathrm{a}}\approx
0.84\,\mathrm{M}$.
### 3.3 Use of physical constraints of the system to obtain approximate
expressions for $\ce{[H3O+]}$
For the diprotic acid, the combined use of equation (57) and the condition
$z_{0}>0$ gives the inequality $P_{z_{0}}<0$ with $P_{z_{0}}$ given by the
monic cubic polynomial
$P_{z_{0}}=x^{3}+\left(k_{2}-c_{\mathrm{a}}\right)x^{2}-\left(1+2c_{\mathrm{a}}k_{2}\right)x-k_{2}.$
(60)
Although this polynomial goes to infinity as $x$ goes to infinity, there are
values of $x$ for which the inequality $P_{z_{0}}<0$ is satisfied. This can be
seen by analyzing the 4-tuple of $P_{z_{0}}$ coefficients,
$$\operatorname{coef}[P_{z_{0}}]=\left(a_{3},a_{2},a_{1},a_{0}\right)=\left(1,\;k_{2}-c_{\mathrm{a}},\;-(1+2c_{\mathrm{a}}k_{2}),\;-k_{2}\right).\quad(61)$$
The signs of (61) are given by
$$\operatorname{sgn}[P_{z_{0}}]=\left(\operatorname{sgn}a_{3},\operatorname{sgn}a_{2},\operatorname{sgn}a_{1},\operatorname{sgn}a_{0}\right)=\left(+,\pm,-,-\right).\quad(62)$$
Regardless of the value of $\operatorname{sgn}a_{2}$, there is only one change
of sign in (62), from positive to negative; in this case Descartes' rule of
signs gives that $P_{z_{0}}$ must have only one positive root. This positive
root is a function of $k_{2}$ and $c_{\mathrm{a}}$, and gives the upper bound
of $x$. Using the method of Caicedo et al., the upper bound of $x$ is given by
$x_{\mathrm{ub}}=\tfrac{2}{3}\sqrt{(k_{2}-c_{\mathrm{a}})^{2}+6c_{\mathrm{a}}k_{2}+3}\cos{\left({\theta_{z_{0}}}/{3}\right)}-\frac{k_{2}-c_{\mathrm{a}}}{3},$
(63)
with
$$p_{z_{0}}=-\tfrac{1}{3}(k_{2}-c_{\mathrm{a}})^{2}-2c_{\mathrm{a}}k_{2}-1,\quad(64)\qquad q_{z_{0}}=\tfrac{2}{27}(k_{2}-c_{\mathrm{a}})^{3}+\tfrac{1}{3}(k_{2}-c_{\mathrm{a}})(1+2c_{\mathrm{a}}k_{2})-k_{2},\quad(65)$$
$$\Delta_{z_{0}}=-4p_{z_{0}}^{3}-27q_{z_{0}}^{2},\quad(66)\qquad\theta_{z_{0}}=\arctan\left(-\frac{q_{z_{0}}}{2},\frac{\sqrt{\Delta_{z_{0}}}}{6\sqrt{3}}\right).\quad(67)$$
The use of Wolfram Mathematica makes it possible to prove that
$\Delta_{z_{0}}>0$ for $c_{\mathrm{a}}>0$ and $k_{2}>0$. Since
$\Delta_{z_{0}}>0$, equation (67) gives $0<\theta_{z_{0}}<\pi$, hence
$\cos{(\theta_{z_{0}}/3)}\geq 1/2$. Furthermore, the same software shows that
$\lim_{c_{\mathrm{a}}\to 0}x_{\mathrm{ub}}=1$.
It was shown above that the dissociation constants of many diprotic acids are
constrained by the condition ${k_{1}\geq 4k_{2}}$. The use of equations (6)
and (7) in the inequality ${k_{1}\geq 4k_{2}}$ leads to the constraint
${z_{1}^{2}\geq 4z_{0}z_{2}}$ between the concentrations of the
acid and its conjugate bases. The use of equations (55), (56) and (57) in the
inequality ${z_{1}^{2}\geq 4z_{0}z_{2}}$ gives the inequality $P_{z}>0$ with
$P_{z}=x^{3}+2k_{2}x^{2}-(1+4c_{\mathrm{a}}k_{2})x-2k_{2}.$ (68)
Figure 3: Upper (blue) and lower (green) bounds of the pH for oxalic acid
(left) and 1,5-Pentanediamine (right) as functions of the molar concentration
$C_{\mathrm{a}}$. The orange curve represents the exact pH.
The polynomial $P_{z}$ is the same polynomial as for a monoprotic weak acid
with $k_{\mathrm{a}}=2k_{2}$. Since
$\operatorname{sgn}[P_{z}]=\left(+,+,-,-\right)$, Descartes' rule of signs
indicates that $P_{z}=0$ has only one positive root. Using the method of
Caicedo et al., this root is given by
$x_{\mathrm{lb}}=\tfrac{2}{3}\left(\sqrt{4k_{2}^{2}+3c_{\mathrm{a}}k_{2}+3}\cos{(\theta_{z}/3)}-k_{2}\right),$
(69)
with
$$p_{z}=-\tfrac{4k_{2}^{2}}{3}-c_{\mathrm{a}}k_{2}-1,\quad(70)\qquad q_{z}=\tfrac{16k_{2}^{3}}{27}+\tfrac{2c_{\mathrm{a}}k_{2}^{2}}{3}-\tfrac{k_{2}}{3},\quad(71)$$
$$\Delta_{z}=-4p_{z}^{3}-27q_{z}^{2},\quad(72)\qquad\theta_{z}=\arctan\left(-\frac{q_{z}}{2},\frac{\sqrt{\Delta_{z}}}{6\sqrt{3}}\right).\quad(73)$$
The discriminant $\Delta_{z}$ is a positive quantity; in fact, by using
Wolfram Mathematica it is shown that $\Delta_{z}\geq 4$. Furthermore, using
the same software it is shown that $\lim_{c_{\mathrm{a}}\to 0,k_{2}\to
0}{x_{\mathrm{lb}}}=1$ and that $\lim_{c_{\mathrm{a}}\to
0,k_{2}\to\infty}{x_{\mathrm{lb}}}=1/\sqrt{2}$.
The lower and upper bounds to the pH are obtained by
${7-\log_{10}{x_{\mathrm{ub}}}}$ and ${7-\log_{10}{x_{\mathrm{lb}}}}$
respectively. Figure 3 displays the lower and upper bounds to the pH as a
function of the molar concentration $C_{\mathrm{a}}$ for oxalic acid (left)
and 1,5-Pentanediamine (right). In the case of oxalic acid, the exact pH and
the lower pH bound (green) are nearly identical for concentrations
$C_{\mathrm{a}}<10^{-2}\,\mathrm{M}$. On the other hand, for the compound
1,5-Pentanediamine the exact pH and the upper pH bound (blue) are nearly
identical for concentrations $C_{\mathrm{a}}<10^{-4}\,\mathrm{M}$. It is
interesting to notice from both panels of Figure 3 that the upper bound pH
curve (blue) overestimates the pH by a constant difference for concentrations
greater than $10^{-2}\,\mathrm{M}$.
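Both bounds are cheap to evaluate directly. The sketch below implements equations (63)–(67) and (69)–(73); the two-argument $\arctan$ is read in Mathematica's `ArcTan[x, y]` convention, i.e. NumPy's `arctan2(y, x)`, which is an assumption about the paper's notation.

```python
import numpy as np

def x_upper(ca, k2):
    # Equations (63)-(67): positive root of P_{z0}, the upper bound on x.
    d = k2 - ca
    p = -d * d / 3.0 - 2.0 * ca * k2 - 1.0
    q = 2.0 * d ** 3 / 27.0 + d * (1.0 + 2.0 * ca * k2) / 3.0 - k2
    disc = -4.0 * p ** 3 - 27.0 * q ** 2
    th = np.arctan2(np.sqrt(disc) / (6.0 * np.sqrt(3.0)), -q / 2.0)
    return (2.0 / 3.0) * np.sqrt(d * d + 6.0 * ca * k2 + 3.0) \
        * np.cos(th / 3.0) - d / 3.0

def x_lower(ca, k2):
    # Equations (69)-(73): positive root of P_z, the lower bound on x.
    p = -4.0 * k2 ** 2 / 3.0 - ca * k2 - 1.0
    q = 16.0 * k2 ** 3 / 27.0 + 2.0 * ca * k2 ** 2 / 3.0 - k2 / 3.0
    disc = -4.0 * p ** 3 - 27.0 * q ** 2
    th = np.arctan2(np.sqrt(disc) / (6.0 * np.sqrt(3.0)), -q / 2.0)
    return (2.0 / 3.0) * (np.sqrt(4.0 * k2 ** 2 + 3.0 * ca * k2 + 3.0)
                          * np.cos(th / 3.0) - k2)

# pH bounds for 1 mM oxalic acid (K2 = 1.54e-4): the pH is bracketed by
# 7 - log10(x_upper) from below and 7 - log10(x_lower) from above.
KW = 1.0e-14
ca, k2 = 1.0e-3 / np.sqrt(KW), 1.54e-4 / np.sqrt(KW)
print(7.0 - np.log10(x_upper(ca, k2)), 7.0 - np.log10(x_lower(ca, k2)))
```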
### 3.4 Analysis of the dependence of the $\mathrm{pH}$ on p$K_{2}$
Figure 4: Contours of constant pH (red lines) for different concentrations
$C_{\mathrm{a}}$: (a) $0.1\,\mathrm{M}$; (b) $0.01\,\mathrm{M}$; (c)
$10^{-3}\,\mathrm{M}$; (d) $10^{-6}\,\mathrm{M}$. Contours of
$\delta\mathrm{pH}_{2}$ are shown in shading colors from blue to yellow: dark
blue 0.01, grey 0.02, dark orange 0.06, orange 0.1, light orange 0.14, dark
yellow 0.15, and yellow 0.16. The light blue region has
$\delta\mathrm{pH}_{2}<0.01$. The open circle markers are the values of
$\left(\mathrm{p}K_{1},\mathrm{p}K_{2}\right)$ for different weak diprotic
acids [10].
The pH is calculated by $\mathrm{pH}=7-\log_{10}x$ with $x$ given by equation
(31). The partial derivatives
$$\delta\mathrm{pH}_{1}=\left(\frac{\partial\mathrm{pH}}{\partial\mathrm{p}K_{1}}\right)_{C_{\mathrm{a}},\mathrm{p}K_{2}},\quad(74)\qquad\delta\mathrm{pH}_{2}=\left(\frac{\partial\mathrm{pH}}{\partial\mathrm{p}K_{2}}\right)_{C_{\mathrm{a}},\mathrm{p}K_{1}},\quad(75)$$
measure how much the pH depends on p$K_{1}$ or p$K_{2}$, respectively.
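These derivatives are straightforward to approximate by central finite differences. The sketch below does so for $\delta\mathrm{pH}_{2}$ along the boundary line $\mathrm{p}K_{2}=\mathrm{p}K_{1}+\log_{10}4$, reusing the `ph_diprotic` function from the Section 2.2 sketch (assumed to be in scope); the step size and the p$K_{1}$ grid are illustrative.

```python
import numpy as np

# Central finite difference for equation (75); ph_diprotic is the numerical
# root-finding sketch from Section 2.2 (Ca in M, cb = 0 for a pure acid).
def d_pH2(pK1, pK2, Ca, h=1.0e-4):
    return (ph_diprotic(Ca, 0.0, pK1, pK2 + h)
            - ph_diprotic(Ca, 0.0, pK1, pK2 - h)) / (2.0 * h)

# Sensitivity along the boundary pK2 = pK1 + log10(4), cf. Figure 4.
for pK1 in (2.0, 4.0, 6.0):
    print(pK1, d_pH2(pK1, pK1 + np.log10(4.0), Ca=0.1))
```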
Figure 4 displays contours of constant $\delta\mathrm{pH}_{2}$ as a function
of p$K_{1}$ and p$K_{2}$ for different acid concentrations, $C_{\mathrm{a}}$:
(a) $0.1\,\mathrm{M}$; (b) $0.01\,\mathrm{M}$; (c) $10^{-3}\,\mathrm{M}$; (d)
$10^{-6}\,\mathrm{M}$. Only the acids that fulfill the condition
$\mathrm{p}K_{2}\geq\mathrm{p}K_{1}+\log_{10}4$ are shown. This Figure also
shows contours of constant pH as red curves, with their respective call-outs
indicating the value of the pH. In all the panels, the contours of the
derivative $\delta\mathrm{pH}_{2}$ are shown with contour shading from blue to
yellow: the dark blue has $0.01<\delta\mathrm{pH}_{2}<0.02$, the grey has
$0.02<\delta\mathrm{pH}_{2}<0.06$, the dark orange
$0.06<\delta\mathrm{pH}_{2}<0.1$, the orange $0.1<\delta\mathrm{pH}_{2}<0.14$,
the light orange $0.14<\delta\mathrm{pH}_{2}<0.15$, the dark yellow
$0.15<\delta\mathrm{pH}_{2}<0.16$, and the yellow
$0.16<\delta\mathrm{pH}_{2}$. The open circle markers in all the panels of
Figure 4 are the values of $\left(\mathrm{p}K_{1},\mathrm{p}K_{2}\right)$ for
different diprotic weak acids [10]. The maximum value of
$\delta\mathrm{pH}_{2}$ is obtained by evaluating $\delta\mathrm{pH}_{2}$
along the line ${\mathrm{p}K_{2}=\mathrm{p}K_{1}+\log_{10}4}$. By doing this,
a function $\delta\mathrm{pH}_{2}(\mathrm{p}K_{1},C_{\mathrm{a}})$ is
obtained. The use of the function $\mathsf{NMaximize}$ of Wolfram Mathematica
gives
${\max{\left(\delta\mathrm{pH}_{2}(\mathrm{p}K_{1},C_{\mathrm{a}})\right)}\approx
0.17153}$ regardless of the values of p$K_{1}$ and $C_{\mathrm{a}}$.
Panel (a) of Figure 4 shows that, for $C_{\mathrm{a}}=0.1\,\mathrm{M}$,
p$K_{2}$ has a weak influence on the pH for p$K_{1}>4$. This is evident from
the contours of constant pH being practically vertical lines for
$\mathrm{pH}>2.5$, and also from the fact that $\delta\mathrm{pH}_{2}<0.01$
for the same values of the pH. In the same panel it can be observed that the
strongest influence of p$K_{2}$ on the pH, _i.e._
$0.1<\delta\mathrm{pH}_{2}\lesssim 0.17153$, is seen in the regions with
orange and yellow contour shading, and pH$<1.5$. The pH contours in this
region are curved instead of straight vertical lines. Panel (a) shows that the
approximation of considering the pH independent of p$K_{2}$ is very good for
all the acids at a concentration $C_{\mathrm{a}}=0.1\,\mathrm{M}$. The highest
observed value of $\delta\mathrm{pH}_{2}$ is about 0.17153 units of pH per
unit of p$K_{2}$. This value of $\delta\mathrm{pH}_{2}$ indicates that a
change of 0.5 units of p$K_{2}$ would produce at most a change of about 0.086
units in the pH. This change in the pH is sufficiently small to be within the
experimental error; hence, the heuristic approximation of considering the pH
dependent only on p$K_{1}$ is a good approximation at relatively high
concentrations of the acid.
Panels (b) and (c) of Figure 4 are similar in shape to panel (a). It can be
seen that for these concentrations the pH is insensitive to the value of
p$K_{2}$ for p$K_{1}>5$ and p$K_{1}>6$, for concentrations
$C_{\mathrm{a}}=10^{-2}$ (b) and $C_{\mathrm{a}}=10^{-3}$ (c), respectively.
Regarding the pH contours, they are insensitive to the p$K_{2}$ value for
pH$>3.5$ and pH$>4.5$ for concentrations $C_{\mathrm{a}}=10^{-2}$ (b) and
$C_{\mathrm{a}}=10^{-3}$ (c), respectively. Panel (d),
$C_{\mathrm{a}}=10^{-6}\,\mathrm{M}$, shows evident differences with respect
to panels (a) to (c). At this low acid concentration the region with
$5.73<\mathrm{pH}<6.5$ displays strong sensitivity to the value of p$K_{2}$,
with the strongest effect for pH$\approx 5.9$ and p$K_{1}\approx 6$.
Panels (a) to (c) of Figure 4 show that the pH contour with
$\mathrm{pH}=-\log_{10}C_{\mathrm{a}}$ is a straight line with negative slope.
This line is a boundary between two regions: one with pH contours that are
asymptotically independent of p$K_{2}$ (vertical lines) and another with pH
contours that are asymptotically independent of p$K_{1}$ (horizontal lines).
Although panel (d) does not display this straight line with negative slope,
it is clear that there are also regions with asymptotic independence of
p$K_{1}$ and p$K_{2}$.
### 3.5 Strong base titration of diprotic acids and their buffer mixtures
Figure 5: Contours of constant pH on the concentration plane
$(C_{\mathrm{a}},C_{\mathrm{b}})$ for diprotic acids. Left: maleic acid,
p$K_{1}=1.92$, p$K_{2}=6.23$; Right: 1,8-Octanediamine, p$K_{1}=11$,
p$K_{2}=10.1$. The red region is given by $\Delta_{\mathrm{dc}}>0$, the green
region by $\xi_{1}<0$ and $\xi_{2}<0$, and the blue region by $\xi_{1}>0$ and
$\xi_{2}<0$. The titration line is shown dashed in both panels. The
equivalence points $n=1,2$ are shown as cyan open circles. The
half-equivalence points $n=1/2,3/2,5/2$ are shown as magenta open circles.
Figure 5 displays the contours of constant pH for maleic acid (left panel) and
1,8-Octanediamine (right panel). Maleic acid has p$K_{1}=1.92$ and
p$K_{2}=6.23$, that is, p$K_{2}-$p$K_{1}\geq\log_{10}4$, whereas the compound
1,8-Octanediamine has p$K_{1}=11$ and p$K_{2}=10.1$, so that
p$K_{2}-$p$K_{1}\leq-\log_{10}4$. Maleic acid is an example of a pH given
uniquely by equation (32) with $\bar{y}_{1}$ given by the first case of
equation (36), therefore maleic acid displays only one region in the
$(C_{\mathrm{a}},C_{\mathrm{b}})$ plane. On the other hand, the compound
1,8-Octanediamine displays three regions on the
$(C_{\mathrm{a}},C_{\mathrm{b}})$ plane: the red region is given by using
equation (32) with $\bar{y}_{1}$ given by the first case of (36), the green
region given by equation (33) with $\bar{y}_{1}$ given by the second case of
equation (36), and the blue region is given by (33) with $\bar{y}_{1}$ given
by the third case of equation (36).
Figure 6: Titration curves for 10$\,\mathrm{ml}$ of diprotic acids at
concentration $C_{\mathrm{a}}=1\,\mathrm{mM}$ using a volume $V_{\mathrm{b}}$
of strong base with concentration $C_{\mathrm{b}}=1\,\mathrm{mM}$. (Left)
Maleic acid, p$K_{1}=1.92$, p$K_{2}=6.23$; (Right) 1,8-Octanediamine
p$K_{1}=11$, p$K_{2}=10.1$. The red, green and blue curves are calculated
using the first, third, and fourth case of equation (36), respectively.
Figure 6 shows the titration curves obtained by adding a volume
$V_{\mathrm{b}}$ of a strong base with concentration
$C_{\mathrm{b0}}=1\,\mathrm{mM}$ to a volume $V_{\mathrm{a0}}=10\,\mathrm{ml}$
of solution $C_{\mathrm{a0}}=1\,\mathrm{mM}$ of maleic acid (left) and
1,8-Octanediamine (right). These titration curves are given by the pH along
the dashed lines of Figure 5 for the case $C_{\mathrm{a0}}=1\,\mathrm{mM}$ and
$C_{\mathrm{b0}}=1\,\mathrm{mM}$.
The left panel of Figure 6 displays the two equivalence points typical of
diprotic acids. The first equivalence point occurs at
$V_{\mathrm{b}}\approx 10\,\mathrm{ml}$ with $\mathrm{pH}\approx 5$, and the
second at $V_{\mathrm{b}}\approx 20\,\mathrm{ml}$ with $\mathrm{pH}\approx 8$.
In contrast, the right panel of Figure 6 does not display equivalence points.
The titration curve for 1,8-Octanediamine is made by joining three different
titration curves: red, green, and blue (from left to right). The initial
solution of 1,8-Octanediamine has a pH slightly below 7; as the base is added,
the pH grows rapidly, reaching values above 10. This behaviour
is described by the red curve of Figure 6 (right). As the volume of added base
increases, the pH grows from 10 to approximately 10.5, following the green
curve. Finally, for $V_{\mathrm{b}}>25\,\mathrm{ml}$ the titration experiment
follows the blue curve reaching a final pH slightly above 10.5 at
$V_{\mathrm{b}}=40\,\mathrm{ml}$.
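As a sanity check, the maleic-acid curve of the left panel can be reproduced numerically from the charge balance alone, without the closed-form expressions. The sketch below is our own code and assumes the standard dilution relations $\bar{c}_{\mathrm{a}}=C_{\mathrm{a0}}V_{\mathrm{a0}}/(V_{\mathrm{a0}}+V_{\mathrm{b}})$ and $\bar{c}_{\mathrm{b}}=C_{\mathrm{b0}}V_{\mathrm{b}}/(V_{\mathrm{a0}}+V_{\mathrm{b}})$ for equations (39) and (40):

```python
# Numerical titration curve (our code), solving the charge balance
# [H+] + [Na+] = [OH-] + [HA-] + 2[A2-] at each added volume of base.
import numpy as np
from scipy.optimize import brentq

KW = 1e-14

def pH_mixture(ca, cb, K1, K2):
    def charge_balance(h):
        d = h*h + K1*h + K1*K2
        return h + cb - KW/h - ca*(K1*h + 2.0*K1*K2)/d
    return -np.log10(brentq(charge_balance, 1e-14, 1.0))

K1, K2 = 10.0**-1.92, 10.0**-6.23        # maleic acid
Ca0, Cb0, Va0 = 1e-3, 1e-3, 10.0         # M, M, ml
for Vb in (0.0, 5.0, 10.0, 15.0, 20.0, 30.0, 40.0):
    ca = Ca0*Va0/(Va0 + Vb)              # diluted acid concentration
    cb = Cb0*Vb/(Va0 + Vb)               # concentration of added strong base
    print(f"Vb = {Vb:5.1f} ml   pH = {pH_mixture(ca, cb, K1, K2):.2f}")
```

The printed values at $V_{\mathrm{b}}=10$ and $20\,\mathrm{ml}$ should match the equivalence points of Figure 6 (left).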
Figure 7: Titration functions $\bar{n}(\mathrm{pH})$ and
$\bar{n}_{\mathrm{K}}$, obtained from equations (49) (green and red) and (46)
(cyan and magenta), respectively. The first and second equivalence points
occur at $\bar{n},\bar{n}_{K}=1,2$; the half-equivalence points occur at
$\bar{n},\bar{n}_{\mathrm{K}}=1/2,3/2,5/2$. Maleic acid (red and magenta)
displays the typical titration curve of a diprotic acid; 1,8-Octanediamine
(green and cyan) does not display equivalence points.
The use of equation (49) allows one to obtain the pH at the equivalence points
for acid and buffer solutions. The equivalence points $\bar{n}=1,2$, and the
half-equivalence points ${\bar{n}=1/2,3/2}$, as functions of the pH, are
displayed in Figure 7 for maleic acid, p$K_{1}=1.92$, p$K_{2}=6.23$, and for
1,8-Octanediamine, p$K_{1}=11$, p$K_{2}=10.1$. It is seen in this Figure that
the points $\bar{n},\bar{n}_{\mathrm{K}}=3/2$ coincide for maleic acid, but
not for 1,8-Octanediamine. This Figure also shows that the use of equation
(46) gives a wrong pH at the equivalence points $\bar{n}=1,2$: the first
equivalence point is shifted to more acidic pH values, while the second is
shifted to more basic ones.
### 3.6 pH stability of buffer solutions
In the titration experiment a volume $V_{\mathrm{b}}$ of a base solution, with
concentration $c_{\mathrm{b0}}$, is added to a volume $V_{\mathrm{B0}}$ of a
buffer solution with concentrations $\bar{c}_{\mathrm{a0}}$ and
$\bar{c}_{\mathrm{b0}}$. The concentrations $\bar{c}_{\mathrm{a}}$ and
$\bar{c}_{\mathrm{b}}$ as functions of $V_{\mathrm{b}}$ are given by equations
(39) and (40). Although equation (49) can be used to analyze the stability of
a buffer solution, it is more convenient to use the parametric curve
$\beta(V_{\mathrm{b}})=\left\\{\mathrm{pH}_{\mathrm{acid}}\left(V_{\mathrm{b}}\right),\mathrm{pH}_{\mathrm{buffer}}\left(V_{\mathrm{b}}\right)\right\\},$
(76)
where $\mathrm{pH}_{\mathrm{acid}}\left(V_{\mathrm{b}}\right)$ is the pH of
the acid as function of added base, and
$\mathrm{pH}_{\mathrm{buffer}}\left(V_{\mathrm{b}}\right)$ is the pH of the
buffer as function of added base. Figure 8 displays $\beta(V_{\mathrm{b}})$
for acid and buffer solutions prepared with the same number of moles of the
acid, $C_{\mathrm{a}}V_{\mathrm{a0}}=7.5\times 10^{-3}$ moles, and titrated
with the same strong base, $C_{\mathrm{b0}}=7.5\,\mathrm{mM}$. The buffer
solutions of panel (a) are prepared by adding $C_{\mathrm{10}}V_{10}=2.5\times
10^{-3}$ moles of the monobasic salt only; the buffer solutions of panel (b)
are prepared by adding $C_{\mathrm{20}}V_{20}=2.5\times 10^{-3}$ moles of the
dibasic salt only; and the buffer solutions of panel (c) are prepared by adding
$C_{\mathrm{10}}V_{10}=2.5\times 10^{-3}$ moles of the monobasic salt and
$C_{\mathrm{20}}V_{20}=2.5\times 10^{-3}$ moles of the dibasic salt. The three
panels show four curves for different acids, all with p$K_{1}=1$ and with
p$K_{2}=1$ (red), p$K_{2}=4$ (green), p$K_{2}=6$ (blue), and p$K_{2}=8$ (cyan).
Figure 8: $\beta(V_{\mathrm{b}})$ curves for buffer solutions of different
acids prepared with: (a) only monobasic salt, (b) only dibasic salt, and (c)
both monobasic and dibasic salts. All the acids have p$K_{1}=1$ and:
p$K_{2}=1$ (red), p$K_{2}=4$ (green), p$K_{2}=6$ (blue), and p$K_{2}=8$
(cyan).
A buffer solution displays pH stability when the $\beta(V_{\mathrm{b}})$ curve
is horizontal, _i.e._ , regardless of the change in the pH of the acid
solution, the pH of the buffer remains stable. The red curves of panels (a)
and (c) of Figure 8 display the best pH stability. These red curves are
produced by acids with p$K_{1}=1$ and p$K_{2}=1$. The red
$\beta(V_{\mathrm{b}})$ curves of panels (a) and (c) display buffer stability
at $\mathrm{pH}_{\mathrm{buffer}}\approx 3$ and
$\mathrm{pH}_{\mathrm{acid}}>4$. The green $\beta(V_{\mathrm{b}})$ curves of
panels (a) and (c) display buffer stability at
$\mathrm{pH}_{\mathrm{buffer}}\approx 4.7$ and
$\mathrm{pH}_{\mathrm{acid}}>6$, for an acid with p$K_{1}=1$ and p$K_{2}=4$.
The cyan curves of panels (b) and (c) of Figure 8 show that the dibasic salt
produces basic pH stability for acids with higher p$K_{2}$.
## 4 Declarations
### 4.1 Ethical Approval
This work does not involve studies in humans and/or animals. There was no need
for ethics committee approval.
### 4.2 Competing interests
The authors declare no competing interests.
### 4.3 Authors’ contributions
Juan C. Morales made analytical and numerical calculations. Carlos A. Arango
performed analytical and numerical calculations, wrote the manuscript,
prepared the figures, and performed the analysis of results.
### 4.4 Funding
This work has been financed by the OMICAS program, Project ID:
FP44842-217-2018, and the internal research grants of Universidad Icesi. The
OMICAS program acronym stands for “In-silico Multiscale Optimization of
Sustainable Agricultural Crops”, a member of the Scientific Colombia
Ecosystem, sponsored by the World Bank, and the Colombian Ministries of
Science, Technology and Innovation (Minciencias), Education, Industry and
Tourism, and the ICETEX.
### 4.5 Availability of data and materials
The data and Wolfram Mathematica codes used for this study are available from
the corresponding author on request.
## 5 Appendix
### 5.1 Mathematical solution of $P=0$
#### 5.1.1 The resolvent cubic equation
The solution of equation $P=0$ by Ferrari’s method requires finding its
resolvent cubic equation [13, 14]. The standard procedure to obtain the
resolvent cubic of a quartic equation begins by writing $P=0$ in its
equivalent form
$\left(x^{2}+\tfrac{1}{2}c_{3}x\right)^{2}=\left(\tfrac{1}{4}c_{3}^{2}-c_{2}\right)x^{2}-c_{1}x-c_{0}.$
(77)
The addition of a quantity $y/2$ inside the squared term of the left hand
side, and the addition of the compensation terms on the right hand side gives,
after simplification,
$\left(x^{2}+\tfrac{1}{2}c_{3}x+\tfrac{y}{2}\right)^{2}=\left(\tfrac{1}{4}c_{3}^{2}-c_{2}+y\right)x^{2}+(\tfrac{1}{2}c_{3}y-c_{1})x-c_{0}+\tfrac{1}{4}y^{2}.$
(78)
The left hand side of this equation can be written as a complete square, that
is, equation (78) can be written
$\left(tx+\frac{c_{3}y-2c_{1}}{4t}\right)^{2}=\left(\tfrac{1}{4}c_{3}^{2}-c_{2}+y\right)x^{2}+(\tfrac{1}{2}c_{3}y-c_{1})x-c_{0}+\tfrac{1}{4}y^{2},$
(79)
with $t=t(y)\neq 0$, given that $t^{2}=\tfrac{1}{4}c_{3}^{2}-c_{2}+y$, and
$\left(\frac{c_{3}y-2c_{1}}{4t}\right)^{2}=\tfrac{1}{4}y^{2}-c_{0}.$ (80)
The expansion of equation (80) gives, after simplification, the resolvent
cubic $R=0$, with
$R=y^{3}-c_{2}y^{2}+\left(c_{1}c_{3}-4c_{0}\right)y+\left(4c_{0}c_{2}-c_{0}c_{3}^{2}-c_{1}^{2}\right).$
(81)
This equation has three roots $y_{i}$, $i=1,2,3$. The use of one of these
roots in equations (78) and (79) gives
$\left(x^{2}+\tfrac{1}{2}c_{3}x+\tfrac{y_{i}}{2}\right)^{2}=\left(t_{i}x+\frac{c_{3}y_{i}-2c_{1}}{4t_{i}}\right)^{2},$
(82)
with $i=1,2,3$ and $t_{i}=t(y_{i})$. Each of these equations splits into two
quadratic equations,
$\displaystyle
x^{2}+\left(\tfrac{1}{2}c_{3}-t_{i}\right)x+\tfrac{1}{2}y_{i}-\frac{c_{3}y_{i}-2c_{1}}{4t_{i}}$
$\displaystyle=0,$ (83) $\displaystyle
x^{2}+\left(\tfrac{1}{2}c_{3}+t_{i}\right)x+\tfrac{1}{2}y_{i}+\frac{c_{3}y_{i}-2c_{1}}{4t_{i}}$
$\displaystyle=0,$ (84)
with $i=1,2,3$. The roots of $P=0$ satisfy these quadratic equations [13, 14].
The discriminants of the quartic equation, $\Delta$, and its resolvent cubic
equation, $\Delta_{\mathrm{rc}}$, are identical [13, 14]. The restriction
$k_{1}\geq 4k_{2}$ gives that $\Delta>0$, hence $\Delta_{\mathrm{rc}}>0$ and
the resolvent cubic equation (81) must have three distinct real roots. For the
case $\Delta_{\mathrm{rc}}<0$ the cubic $R=0$ has one real root and two non-
real complex conjugate roots [13, 14].
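Ferrari's construction can be exercised numerically. The sketch below is our own code: it builds the resolvent cubic (81) from arbitrary quartic coefficients, splits the quartic into the two quadratics (83)-(84) using one real root of the cubic, and compares the four roots against a general-purpose solver:

```python
# Numerical illustration (our code) of Ferrari's method, eqs. (77)-(84).
import numpy as np

def quartic_roots_ferrari(c3, c2, c1, c0):
    # resolvent cubic R = y^3 - c2 y^2 + (c1 c3 - 4 c0) y + (4 c0 c2 - c0 c3^2 - c1^2)
    yroots = np.roots([1.0, -c2, c1*c3 - 4.0*c0, 4.0*c0*c2 - c0*c3**2 - c1**2])
    # pick a real root with t(y) != 0, where t^2 = c3^2/4 - c2 + y
    y = next(r.real for r in yroots
             if abs(r.imag) < 1e-9 and abs(0.25*c3**2 - c2 + r.real) > 1e-12)
    t = np.sqrt(complex(0.25*c3**2 - c2 + y))
    w = (c3*y - 2.0*c1)/(4.0*t)
    roots = []
    for s in (-1.0, 1.0):                      # eqs. (83) and (84)
        b, c = 0.5*c3 + s*t, 0.5*y + s*w
        disc = np.sqrt(b*b - 4.0*c + 0j)
        roots += [0.5*(-b + disc), 0.5*(-b - disc)]
    return np.array(roots)

c3, c2, c1, c0 = 2.0, -7.0, -8.0, 12.0          # (x-1)(x+2)(x-2)(x+3)
print(np.sort_complex(quartic_roots_ferrari(c3, c2, c1, c0)))
print(np.sort_complex(np.roots([1.0, c3, c2, c1, c0])))
```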
#### 5.1.2 Solution of the resolvent cubic equation $R=0$
The third order polynomial equation $R=0$ can be solved by Cardano’s method.
The change of variable $y=\bar{y}+\frac{c_{2}}{3}$ gives the depressed cubic
equation $R_{\mathrm{dc}}=0$, with
$R_{\mathrm{dc}}=\bar{y}^{3}+\bar{p}\bar{y}+\bar{q},$ (85)
and
$\displaystyle\bar{p}$ $\displaystyle=c_{1}c_{3}-\frac{c_{2}^{2}}{3}-4c_{0},$
(86) $\displaystyle\bar{q}$
$\displaystyle=\frac{8c_{0}c_{2}}{3}+\frac{c_{1}c_{2}c_{3}}{3}-\frac{2c_{2}^{3}}{27}-c_{1}^{2}-c_{0}c_{3}^{2},$
(87)
and discriminant $\Delta_{\mathrm{dc}}=-4\bar{p}^{3}-27\bar{q}^{2}$, which is
equal to $\Delta$ and $\Delta_{\mathrm{rc}}$ [13, 14].
The use of Vieta’s substitution, $\bar{y}=\bar{z}-\frac{\bar{p}}{3\bar{z}}$,
gives the polynomial equation
$\bar{z}^{3}-\frac{\bar{p}^{3}}{27\bar{z}^{3}}+\bar{q}=0.$ (88)
Multiplication of (88) by $\bar{z}^{3}$ gives
$\bar{z}^{6}+\bar{q}\bar{z}^{3}-\frac{\bar{p}^{3}}{27}=0,$ (89)
which is equivalent to the quadratic equation
$\xi^{2}+\bar{q}\xi-\tfrac{\bar{p}^{3}}{27}=0$, in the variable
$\xi=\bar{z}^{3}$, with roots
$\begin{split}\xi_{1,2}&=-\frac{\bar{q}}{2}\pm\sqrt{\frac{27\bar{q}^{2}+4\bar{p}^{3}}{108}}\\\
&=-\frac{\bar{q}}{2}\pm\frac{1}{2}\sqrt{-\frac{\Delta_{\mathrm{dc}}}{27}}.\end{split}$
(90)
The physical case of diprotic acids with $k_{1}\geq 4k_{2}$ [9, 11] gives
$\Delta_{\mathrm{dc}}>0$ for $\bar{c}_{\mathrm{a}}\geq 0$ and
$\bar{c}_{\mathrm{b}}\geq 0$, therefore $\xi_{1,2}$ are a complex conjugate
pair. On the other hand, for the less common case of diprotic acids with
$k_{1}<4k_{2}$ it is possible to have $\Delta_{\mathrm{dc}}<0$, hence
$\xi_{1,2}$ are real, on part of the plane
$\bar{c}_{\mathrm{a}}$-$\bar{c}_{\mathrm{b}}$, with $\xi_{1}>\xi_{2}$.
It is convenient to define $\zeta=\xi_{1}$ and $\zeta^{*}=\xi_{2}$ for the
case $k_{1}\geq 4k_{2}$, and $\xi=\xi_{1}$ and $\bar{\xi}=\xi_{2}$ for the case
$k_{1}<4k_{2}$, with $\zeta^{*}$ and $\bar{\xi}$ being the complex and real
conjugates of $\zeta$ and $\xi$, respectively.
The use of the polar representation for $\zeta$ gives
${\zeta=\|\zeta\|e^{i\theta}}$ for the case $k_{1}\geq 4k_{2}$, with
$\displaystyle\|\zeta\|$
$\displaystyle=\frac{1}{2}\sqrt{\bar{q}^{2}+\frac{\Delta_{\mathrm{dc}}}{27}}=\sqrt{\frac{-\bar{p}^{3}}{27}},$
(91) $\displaystyle\theta$
$\displaystyle=\arctan{\left(-\frac{\bar{q}}{2},\frac{\sqrt{\Delta_{\mathrm{dc}}}}{6\sqrt{3}}\right)},$
(92)
with $\theta\in(0,\pi)$ as the angle between $\zeta$ and the positive real
axis on the Argand plane. The angle $\theta$ is related to the trigonometric
solution obtained by Nickalls for the roots of the cubic equation [15]. The
polar representation of $\xi$ and $\bar{\xi}$, case $k_{1}<4k_{2}$, gives
$\xi=|\xi_{1}|e^{i\theta_{1}}$ and $\bar{\xi}=|\xi_{2}|e^{i\theta_{2}}$ with
$\theta_{1,2}=\frac{\pi}{2}(1-\operatorname{sgn}{\xi_{1,2}})$, and
$\xi>\bar{\xi}$.
The roots $\bar{y}$ of the depressed cubic equation $R_{\mathrm{dc}}=0$ are
given by Cardano’s formula
$\bar{y}=\alpha+\beta,$ (93)
where $\alpha=\sqrt[3]{\zeta}$ and $\beta=\sqrt[3]{\zeta^{*}}$ for $k_{1}\geq
4k_{2}$, and $\alpha=\sqrt[3]{\xi}$ and $\beta=\sqrt[3]{\bar{\xi}}$ for
$k_{1}<4k_{2}$. The cubic roots $\alpha$ and $\beta$ have three values each,
$\alpha_{n}$ and $\beta_{n}$, with $n=0,1,2$. The combined use of the three
roots $\alpha$ and the three roots $\beta$ must give the three roots of
$R_{\mathrm{dc}}=0$.
The cubic roots $\alpha$ and $\beta$, for the case $k_{1}\geq 4k_{2}$, are
given by
$\displaystyle\alpha_{n}$
$\displaystyle=\sqrt[3]{\|\zeta\|}\exp{\left(i\left(\frac{\theta}{3}+\frac{2n\pi}{3}\right)\right)},$
(94) $\displaystyle\beta_{n}$
$\displaystyle=\sqrt[3]{\|\zeta\|}\exp{\left(i\left(-\frac{\theta}{3}+\frac{2n\pi}{3}\right)\right)},$
(95)
with $n=0,1,2$, and
$\sqrt[3]{\|\zeta\|}=\tfrac{1}{3}\sqrt{1+k_{1}Q_{1}+k_{1}^{2}Q_{2}}.$ (96)
where $Q_{1}$ and $Q_{2}$ are
$\displaystyle Q_{1}$
$\displaystyle=-3k_{2}\bar{c}_{\mathrm{b}}^{2}+(6\bar{c}_{\mathrm{a}}k_{2}+1)\bar{c}_{\mathrm{b}}+2\bar{c}_{\mathrm{a}}-14k_{2},$
(97) $\displaystyle Q_{2}$
$\displaystyle=k_{2}^{2}+(4\bar{c}_{\mathrm{a}}-\bar{c}_{\mathrm{b}})k_{2}+3+(\bar{c}_{\mathrm{a}}-\bar{c}_{\mathrm{b}})^{2}.$
(98)
The case $k_{1}<4k_{2}$ has
$\displaystyle\alpha_{n}$
$\displaystyle=\sqrt[3]{|\xi_{1}|}\exp{\left(i\left(\frac{\theta_{1}}{3}+\frac{2n\pi}{3}\right)\right)},$
(99) $\displaystyle\beta_{n}$
$\displaystyle=\sqrt[3]{|\xi_{2}|}\exp\left(i\left(\frac{\theta_{2}}{3}+\frac{2n\pi}{3}\right)\right),$
(100)
with $n=0,1,2$.
Since for the case $k_{1}\geq 4k_{2}$ the cubic equation $R_{\mathrm{dc}}=0$
has three real roots, the addition of two cubic roots $\alpha_{n}+\beta_{m}$
must give a real number. This is possible only if
$\operatorname{Im}(\alpha_{n})=-\operatorname{Im}(\beta_{m})$. There are only
three possible combinations that fulfill this requirement:
$\displaystyle\alpha_{0}+\beta_{0}$
$\displaystyle=2\sqrt[3]{\|\zeta\|}\cos{\left(\tfrac{\theta}{3}\right)},$
(101) $\displaystyle\alpha_{2}+\beta_{1}$
$\displaystyle=-2\sqrt[3]{\|\zeta\|}\cos{\left(\tfrac{\theta+\pi}{3}\right)},$
(102) $\displaystyle\alpha_{1}+\beta_{2}$
$\displaystyle=2\sqrt[3]{\|\zeta\|}\cos{\left(\tfrac{\theta+2\pi}{3}\right)},$
(103)
which are the roots of $R_{\mathrm{dc}}=0$: $\bar{y}_{1}$, $\bar{y}_{2}$, and
$\bar{y}_{3}$, respectively, with $\bar{y}_{1}>\bar{y}_{2}>\bar{y}_{3}$.
The case $k_{1}<4k_{2}$ has only one real solution, and a complex conjugate
pair. Since $\xi>\bar{\xi}$, there are three possibilities:
* •
$\theta_{1}=\theta_{2}=0$: the roots are $\alpha_{i}+\beta_{i}$ with
$i=0,1,2$. The root $\bar{y}_{1}=\alpha_{0}+\beta_{0}$ is the only real
solution, $\bar{y}_{1}=\sqrt[3]{|\xi_{1}|}+\sqrt[3]{|\xi_{2}|}$.
* •
$\theta_{1}=\theta_{2}=\pi$: the roots are $\alpha_{i}+\beta_{i}$ with
$i=0,1,2$. The root $\bar{y}_{1}=\alpha_{1}+\beta_{1}$ is the only real
solution, $\bar{y}_{1}=-(\sqrt[3]{|\xi_{1}|}+\sqrt[3]{|\xi_{2}|})$.
* •
$\theta_{1}=0$, $\theta_{2}=\pi$: the roots are $\alpha_{0}+\beta_{1}$,
$\alpha_{1}+\beta_{0}$, and $\alpha_{2}+\beta_{2}$. The root
$\bar{y}_{1}=\alpha_{0}+\beta_{1}$ is the only real solution,
$\bar{y}_{1}=\sqrt[3]{|\xi_{1}|}-\sqrt[3]{|\xi_{2}|}$.
In summary, the solution $\bar{y}_{1}$ of the depressed cubic equation is
$\bar{y}_{1}=\begin{cases}\tfrac{2}{3}\sqrt{1+k_{1}Q_{1}+k_{1}^{2}Q_{2}}\cos{\left(\tfrac{\theta}{3}\right)},&\Delta_{\mathrm{dc}}>0,\\\
\sqrt[3]{|\xi_{1}|}+\sqrt[3]{|\xi_{2}|},&\Delta_{\mathrm{dc}}<0,\;\xi_{1}>0,\;\xi_{2}>0,\\\
-(\sqrt[3]{|\xi_{1}|}+\sqrt[3]{|\xi_{2}|}),&\Delta_{\mathrm{dc}}<0,\;\xi_{1}<0,\;\xi_{2}<0,\\\
\sqrt[3]{|\xi_{1}|}-\sqrt[3]{|\xi_{2}|},&\Delta_{\mathrm{dc}}<0,\;\xi_{1}>0,\;\xi_{2}<0.\end{cases}$
(104)
The root $y_{1}$ of the resolvent cubic equation $R=0$ is given by
$y_{1}=\bar{y}_{1}-\tfrac{1+k_{1}\left(\bar{c}_{\mathrm{a}}-\bar{c}_{\mathrm{b}}-k_{2}\right)}{3}.$
(105)
This root is substituted in the quadratic equations (83) and (84), with
$t_{1}=t(y_{1})$ given by
$t_{1}=\sqrt{1+\tfrac{1}{4}(\bar{c}_{\mathrm{b}}+k_{1})^{2}+k_{1}\left(\bar{c}_{\mathrm{a}}-\bar{c}_{\mathrm{b}}-k_{2}\right)+y_{1}}.$
(106)
The four roots of the quartic equation $P=0$ are given by
$\displaystyle x_{1,2}$
$\displaystyle=\tfrac{1}{2}\left(-\left(\tfrac{\bar{c}_{\mathrm{b}}+k_{1}}{2}-t_{1}\right)\pm\sqrt{\left(\tfrac{\bar{c}_{\mathrm{b}}+k_{1}}{2}-t_{1}\right)^{2}-2y_{1}+\tfrac{(\bar{c}_{\mathrm{b}}+k_{1})y_{1}+2k_{1}\left(1+(2\bar{c}_{\mathrm{a}}-\bar{c}_{\mathrm{b}})k_{2}\right)}{t_{1}}}\right),$
(107) $\displaystyle x_{3,4}$
$\displaystyle=\tfrac{1}{2}\left(-\left(\tfrac{\bar{c}_{\mathrm{b}}+k_{1}}{2}+t_{1}\right)\pm\sqrt{\left(\tfrac{\bar{c}_{\mathrm{b}}+k_{1}}{2}+t_{1}\right)^{2}-2y_{1}-\tfrac{(\bar{c}_{\mathrm{b}}+k_{1})y_{1}+2k_{1}\left(1+(2\bar{c}_{\mathrm{a}}-\bar{c}_{\mathrm{b}})k_{2}\right)}{t_{1}}}\right).$
(108)
Only the roots $x_{1}$ and $x_{3}$ can have physical significance, and $x$ is
given by
$x=\begin{cases}x_{1},&\Delta_{\mathrm{dc}}>0,\\\
x_{1},&\Delta_{\mathrm{dc}}<0,\;\xi_{1}>0,\;\xi_{2}>0,\\\
x_{3},&\Delta_{\mathrm{dc}}<0,\;\xi_{1}<0,\;\xi_{2}<0,\\\
x_{3},&\Delta_{\mathrm{dc}}<0,\;\xi_{1}>0,\;\xi_{2}<0.\end{cases}$ (109)
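The full chain of equations (86)-(90) and (104)-(109) can be checked end to end. In the sketch below (our own code), the quartic coefficients in the scaled variables of the text are reconstructed from the charge balance of a diprotic acid titrated with a strong base; this reconstruction is our assumption, and the physical root is compared against a general-purpose polynomial solver:

```python
# End-to-end numerical check (our code) of the closed-form root. Scaled variables:
# x = [H+]/sqrt(Kw), k_i = K_i/sqrt(Kw), c = C/sqrt(Kw). The coefficients c3..c0
# below are our reconstruction from the charge balance, not quoted from the paper.
import numpy as np

def x_closed_form(ca, cb, k1, k2):
    c3 = cb + k1
    c2 = -1.0 - k1*(ca - cb - k2)
    c1 = -k1*(1.0 + (2.0*ca - cb)*k2)
    c0 = -k1*k2
    p = c1*c3 - c2**2/3.0 - 4.0*c0                                        # eq. (86)
    q = 8.0*c0*c2/3.0 + c1*c2*c3/3.0 - 2.0*c2**3/27.0 - c1**2 - c0*c3**2  # eq. (87)
    ddc = -4.0*p**3 - 27.0*q**2
    if ddc > 0:  # first case of eq. (104), trigonometric form
        theta = np.arctan2(np.sqrt(ddc)/(6.0*np.sqrt(3.0)), -q/2.0)       # eq. (92)
        ybar = 2.0*np.sqrt(-p/3.0)*np.cos(theta/3.0)
        use_x1 = True
    else:        # eq. (90); signed cube roots cover the three Delta_dc < 0 cases
        s = 0.5*np.sqrt(-ddc/27.0)
        xi1, xi2 = -q/2.0 + s, -q/2.0 - s
        ybar = np.cbrt(xi1) + np.cbrt(xi2)
        use_x1 = xi1 > 0 and xi2 > 0
    y1 = ybar + c2/3.0                                                    # eq. (105)
    t1 = np.sqrt(0.25*c3**2 - c2 + y1)                                    # eq. (106)
    u = (c3*y1 - 2.0*c1)/t1
    x1 = 0.5*(-(0.5*c3 - t1) + np.sqrt((0.5*c3 - t1)**2 - 2.0*y1 + u))    # eq. (107)
    x3 = 0.5*(-(0.5*c3 + t1) + np.sqrt((0.5*c3 + t1)**2 - 2.0*y1 - u))    # eq. (108)
    return (x1 if use_x1 else x3), (c3, c2, c1, c0)                       # eq. (109)

kw = 1e-14
k1, k2 = 10**-1.92/np.sqrt(kw), 10**-6.23/np.sqrt(kw)  # maleic acid, k1 >= 4 k2
ca, cb = 1e-3/np.sqrt(kw), 0.5e-3/np.sqrt(kw)
x, coeffs = x_closed_form(ca, cb, k1, k2)
print("closed form:", x, "  pH:", -np.log10(x*np.sqrt(kw)))
print("numpy.roots:", [r.real for r in np.roots([1.0, *coeffs])
                       if abs(r.imag) < 1e-8*(1.0 + abs(r)) and r.real > 0])
```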
## References
* Scholz and Kahlert [2018] Scholz, F.; Kahlert, H. Acid–base equilibria of amino acids: microscopic and macroscopic acidity constants. _ChemTexts_ 2018, _4_ , 1–9
* Hamm et al. [2015] Hamm, L.; Nakhoul, N.; Hering-Smith, K. Acid-Base Homeostasis. _Clin J Am Soc Nephrol_ 2015, _10_ , 2232–42
* Doney et al. [2020] Doney, S. C.; Busch, D. S.; Cooley, S. R.; Kroeker, K. J. The Impacts of Ocean Acidification on Marine Ecosystems and Reliant Human Communities. _Annual Review of Environment and Resources_ 2020, _45_ , 83–112
* Denbigh [1981] Denbigh, K. G. _The Principles of Chemical Equilibrium: With Applications in Chemistry and Chemical Engineering_ , 4th ed.; Cambridge University Press, 1981
* Burgot [2012] Burgot, J.-L. _Ionic Equilibria in Analytical Chemistry_ ; Springer, 2012
* Skoog et al. [2022] Skoog, D. A.; West, D. M.; Holler, F. J.; Crouch, S. R. _Fundamentals of Analytical Chemistry_ , 10th ed.; Cengage, 2022
* Kalka [2021] Kalka, H. Polyprotic Acids and Beyond—An Algebraic Approach. _Chemistry_ 2021, _3_ , 454–508
* Caicedo et al. [2023] Caicedo, A.; Morales, J. C.; Arango, C. A. Closed-form expressions for monoprotic weak acid aqueous solutions. _ChemTexts_ 2023, _9_
* Adams [1916] Adams, E. Q. Relations Between the Constants of Dibasic Acids and of Amphoteric Electrolytes. _Journal of the American Chemical Society_ 1916, _38_ , 1503–1510
* CRC Handbook [2007] _CRC Handbook of Chemistry and Physics_ , 88th ed.; CRC Press, 2007
* Mchedlov-Petrossyan [2019] Mchedlov-Petrossyan, N. Polyprotic acids in solution: is the inversion of the constants of stepwise dissociation possible? _Ukrainian Chemistry Journal_ 2019, _85_ , 3–45
* Simms [1926] Simms, H. S. Dissociation of Polyvalent Substances II. Relation of Constants to Chemical Structure. _Journal of the American Chemical Society_ 1926, _48_ , 1251–1261
* Dickson [1914] Dickson, L. E. _Elementary Theory of Equations_ , 1st ed.; John Wiley and Sons, Inc., 1914
* Dickson [1922] Dickson, L. E. _First Course in the Theory of Equations_ , 1st ed.; John Wiley and Sons, Inc., 1922
* Nickalls [1993] Nickalls, R. W. D. A new approach to solving the cubic: Cardan’s solution revealed. _The Mathematical Gazette_ 1993, _77_ , 354–359
# Photon-photon Correlations from a Pair of Strongly Coupled Two-Level
Emitters
Elnaz Darsheshdar<EMAIL_ADDRESS>Departamento de Física, Universidade
Federal de São Carlos, P.O. Box 676, 13565-905, São Carlos, São Paulo, Brazil
Mathilde Hugbart<EMAIL_ADDRESS>Université Côte d’Azur,
CNRS, INPHYNI, France Romain Bachelard<EMAIL_ADDRESS>Departamento de Física, Universidade Federal de São Carlos, P.O. Box 676,
13565-905, São Carlos, São Paulo, Brazil Celso Jorge Villas-Boas
<EMAIL_ADDRESS>Departamento de Física, Universidade Federal de São
Carlos, P.O. Box 676, 13565-905, São Carlos, São Paulo, Brazil
###### Abstract
We investigate two-color photon correlations in the light emitted by strongly
coupled two-level emitters. Spectral filtering allows us to manipulate the
collected light statistics and we show that the resonances induced by dipole-
dipole interactions give rise to specific correlations, where the time-
symmetry of the correlations is broken. Based on the collective dressed
states, our study encompasses both the case of real processes, where the
photons are associated with specific resonances and are classically correlated
with each other, and that of virtual processes, where pairs of photons are
emitted with non-classical correlations.
## I Introduction
Two-level emitters essentially behave as classical oscillators when they are
weakly driven, and the light elastically scattered by such systems presents a
range of optical phenomena that can be understood using the tools of linear
optics: dispersion, Rayleigh scattering, but also cooperative phenomena such
as superradiance Agarwal (1974a). In contrast, for a single emitter, a strong
drive results in a significant inelastic component in the scattering, with the
emergence of a “Mollow triplet” composed of a carrier centered at the laser
frequency, and two symmetric sidebands shifted away by the Rabi frequency of
the driving field Mollow (1969). Of particular interest are the strong
correlations between the photons emitted from the two sidebands: originally
measured in atomic systems Aspect _et al._ (1980) and extensively studied
theoretically Cohen-Tannoudji _et al._ (1979); Dalibard and Reynaud (1983);
Schrama _et al._ (1992), the field has seen a resurgence of attention in the
context of quantum dots, with a recent measurement of such photon
(anti)correlations between sidebands Ulhaq _et al._ (2012), thus
demonstrating the potential of artificial atoms as sources of heralded
photons.
In this context, the coupling of emitters gives access to new control
parameters, as interaction-induced resonances arise, along with interference
phenomena Wolf _et al._ (2020). Indeed, the coupling of the emitters through
common radiation modes results in the commonly called dipole-dipole
interactions, which manifest in both the exchange of excitations and
cooperative decay processes Lehmberg (1970a, b). As a consequence, the
fluorescence spectrum of strongly driven atoms presents new sidebands Senitzky
(1978); Agarwal _et al._ (1980); Ben-Aryeh and Bowden (1988) which, for a
weak interaction, appear at twice the Rabi frequency from the carrier. Such
effects are expected to show up, for instance, in many-atom extended cloud,
with the resonant optical thickness playing the role of cooperativity
parameter Pucci _et al._ (2017).
Nevertheless, the diversity of photon-photon correlations emitted from
strongly-interacting systems has barely been explored. The recent development
of the so-called sensor method del Valle _et al._ (2012), where the photons
emitted in a given mode are monitored by introducing an artificial two-level
system resonant with its frequency (analogously to a Fabry-Perot cavity), has
allowed extensive exploration of multi-photon correlations for single emitters
Carreño _et al._ (2017). In particular, the potential of virtual transitions,
where photons are emitted in bundles, as a source of quantum correlations has
been pointed out. As for interacting emitters, the quantum correlations which
emerge for two weakly interacting emitters have been investigated in the
specific configuration of a pump driving a single emitter, although the
fluorescence spectrum is not substantially affected by the interaction in this
configuration an Peng _et al._ (2019).
In this work, we investigate two-color correlations in the light emitted by
two strongly-driven strongly-interacting emitters. The correlations are
interpreted by introducing the collective dressed states picture, which allows
us to describe both bunching and anti-bunching based on the transitions between
them. In contrast to weakly-interacting emitters, the strong interaction lifts
the degeneracy of the energy differences between the different states, leading
to a temporal symmetry breaking of the correlations: photons of different
frequencies may not be emitted in an arbitrary order. Finally, we show that most of the
virtual processes, which involve pairs of photons, yield non-classical
correlations when the sum of their energies fits any of the interaction-
induced sidebands.
## II Modeling and detection scheme
In our work, we consider two identical two-level systems. Experimentally, this
can be either two atoms, two molecules or two quantum dots. For this last
example, although quantum dots are promising single-emitter platforms, it
remains very challenging to produce very similar dots, i.e., with very similar
transition frequencies and linewidths. On the other hand, laser cooling has
allowed to bring the interactions between cold atoms under a very high degree
of control. In this paper, the two-level system is considered to be a
motionless atom.
The system under study is thus composed of two identical driven two-level
systems (TLS) at positions ${{\textbf{r}}_{i}}$, with transition frequency
$\omega_{a}$ and linewidth $\Gamma$. Each atom is described by the spin-half
angular momentum algebra, with $\sigma_{i}^{-}$ ($\sigma_{i}^{+}$) the
lowering (raising) operator of the $i$th atom ($i=1,2$). In the Born, Markov
and rotating-wave approximations, the master equation which describes the
dynamics of its density matrix $\rho$, in the laser reference frame, is given
by Agarwal (1974b) (we set $\hbar=1$ along the paper):
$\frac{\partial\rho}{\partial t}=i\left[\rho,H\right]+{\mathcal{L}}\rho,$ (1)
where the coherent and incoherent parts are encoded in the following
at-resonance Hamiltonian and Lindblad super-operator, respectively:
$\displaystyle H$ $\displaystyle=$
$\displaystyle\frac{1}{2}\sum\limits_{i}[\Omega^{*}(\mathbf{r}_{i})\sigma_{i}^{-}+\Omega(\mathbf{r}_{i})\sigma_{i}^{+}]+\Gamma\sum\limits_{i,j\neq
i}{{\delta}_{ij}}\sigma_{i}^{+}\sigma_{j}^{-},$ (2)
$\displaystyle\mathcal{L}\rho$ $\displaystyle=$
$\displaystyle\frac{\Gamma}{2}\sum\limits_{i}(2\sigma_{i}^{-}\rho\sigma_{i}^{+}-\sigma_{i}^{+}\sigma_{i}^{-}\rho-\rho\sigma_{i}^{+}\sigma_{i}^{-})$
(3) $\displaystyle+\frac{\Gamma}{2}\sum\limits_{i,j\neq
i}{{\gamma}_{ij}}(2\sigma_{j}^{-}\rho\sigma_{i}^{+}-\sigma_{i}^{+}\sigma_{j}^{-}\rho-\rho\sigma_{i}^{+}\sigma_{j}^{-}).$
The atoms are here resonantly driven by a monochromatic plane wave
$\Omega(\mathbf{r})=\Omega e^{i\mathbf{k}_{L}\cdot\mathbf{r}}$, where $\Omega$
stands for the Rabi frequency and $\mathbf{k}_{L}$ for the light wavevector. The
dipole-dipole interactions give rise to both a coherent and incoherent
coupling
$\displaystyle{{\delta}_{ij}}=-\frac{3}{4}\left(1-\cos^{2}{{\theta_{ij}}}\right)\frac{\cos{(k{r}_{ij})}}{{k{r}_{ij}}}$
(4)
$\displaystyle+\frac{3}{4}\left(1-3\cos^{2}{{\theta_{ij}}}\right)\left[\frac{\sin{(k{r}_{ij})}}{(kr_{ij})^{2}}+\frac{\cos{(k{r}_{ij})}}{(kr_{ij})^{3}}\right],$
$\displaystyle{{\gamma}_{ij}}=\frac{3}{2}\left(1-\cos^{2}{{\theta_{ij}}}\right)\frac{\sin{(k{r}_{ij})}}{{k{r}_{ij}}}$
$\displaystyle+\frac{3}{2}\left(1-3\cos^{2}{{\theta_{ij}}}\right)\left[\frac{\cos{(k{r}_{ij})}}{(kr_{ij})^{2}}-\frac{\sin{(k{r}_{ij})}}{(kr_{ij})^{3}}\right],$
with $\lambda=2\pi/k$ the transition wavelength ($k\approx k_{L}$), $r_{ij}$
the distance between the atoms, and $\theta_{ij}$ the angle between their
dipole moments and the vector joining them,
$\mathbf{r}_{ij}=\mathbf{r}_{j}-\mathbf{r}_{i}$.
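For concreteness, a minimal numerical sketch of eq. (4) reads as follows (the code and function name are ours); for the orientation $\cos\theta_{12}=1/\sqrt{3}$ used throughout the paper, the $1/r^{2}$ and $1/r^{3}$ terms drop out, and one recovers the strong-interaction values $|\delta_{12}|\gg 1$, $\gamma_{12}\approx 1$ for $kr_{12}\ll 1$:

```python
# Coherent (delta) and incoherent (gamma) dipole-dipole couplings of eq. (4).
import numpy as np

def dipole_couplings(kr, theta):
    c2 = np.cos(theta)**2
    s, c = np.sin(kr), np.cos(kr)
    delta = (-0.75*(1.0 - c2)*c/kr
             + 0.75*(1.0 - 3.0*c2)*(s/kr**2 + c/kr**3))
    gamma = (1.5*(1.0 - c2)*s/kr
             + 1.5*(1.0 - 3.0*c2)*(c/kr**2 - s/kr**3))
    return delta, gamma

d12, g12 = dipole_couplings(0.05, np.arccos(1.0/np.sqrt(3.0)))
print(d12, g12)  # approximately -9.99 and 1.00: strong-interaction regime
```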
Solving the master equation provides the scattered electric field, which is
given, in the far field limit and in direction $\hat{\mathbf{n}}$, by
${{E}^{\dagger}}\left(t\right)=\sum\limits_{j=1}^{2}{\sigma_{j}^{-}\left(t\right){{e}^{-ik\mathbf{\hat{n}}.{{\mathbf{r}}_{j}}}}}.$
(5)
The dependence on $\hat{\mathbf{n}}$ is hereafter kept implicit. Its temporal
coherence is captured by the first-order and second-order two-time correlation
functions:
$\displaystyle{g}^{(1)}\left(\tau\right)$ $\displaystyle=$
$\displaystyle\lim_{t\to\infty}\frac{\left\langle
E\left(t\right){{E}^{\dagger}}\left(t+\tau\right)\right\rangle}{\left\langle
E\left(t\right){{E}^{\dagger}}\left(t\right)\right\rangle},$ (6)
$\displaystyle{g}^{(2)}\left(\tau\right)$ $\displaystyle=$
$\displaystyle\lim_{t\to\infty}\frac{\left\langle
E\left(t\right)E\left(t+\tau\right){{E}^{\dagger}}\left(t+\tau\right){{E}^{\dagger}}\left(t\right)\right\rangle}{\left\langle
E\left(t\right){{E}^{\dagger}}\left(t\right)\right\rangle^{2}},$ (7)
here computed in the steady state. In particular, the fluorescence spectrum,
sometimes referred to as the one-photon spectrum (1PS), is obtained from the
Fourier transform of the first-order correlation function
$S\left(\omega\right)=\underset{T\to\infty}{\mathop{\lim}}\,\int_{-T}^{T}{{{g}^{(1)}}\left(\tau\right){{e}^{-i\omega\tau}}d\tau}.$
(8)
The 1PS gives the spectral energy distribution of the light scattered
elastically and inelastically, whereas the second-order correlation function
$g^{(2)}$ contains details on the correlations between the emitted photons,
with the antibunching in the train of photons emitted by a single emitter being
a hallmark of the non-classicality of this emission Kimble _et al._ (1977).
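As a hedged illustration, the 1PS of eq. (8) can be computed with QuTiP (the toolbox used in this work). The sketch below is our own code: it assumes the steady-state emission spectrum returned by `qutip.spectrum` (the Fourier transform of $\langle a(\tau)b(0)\rangle$), rewrites the collective decay of eq. (3) in terms of symmetric/antisymmetric jump operators, and drops the phase factors of eq. (5), which is valid for $kr_{12}\ll 1$:

```python
# 1PS sketch (our code) for the parameters of Fig. 1(b); frequencies in units
# of Gamma, laser frame. Peaks are expected at 0 and +-Delta_ij.
import numpy as np
from qutip import qeye, sigmam, tensor, spectrum

Gamma, Omega, kr = 1.0, 30.0, 0.05
# eq. (4) at theta = arccos(1/sqrt(3)), where the 1/r^2 and 1/r^3 terms vanish
d12, g12 = -np.cos(kr)/(2.0*kr), np.sin(kr)/kr
sm1 = tensor(sigmam(), qeye(2))
sm2 = tensor(qeye(2), sigmam())
E_minus = sm1 + sm2                       # eq. (5) with phase factors dropped
H = 0.5*Omega*(E_minus + E_minus.dag()) \
    + Gamma*d12*(sm1.dag()*sm2 + sm2.dag()*sm1)
c_ops = [np.sqrt(Gamma*(1.0 + g12)/2.0)*(sm1 + sm2),   # symmetric channel
         np.sqrt(Gamma*(1.0 - g12)/2.0)*(sm1 - sm2)]   # antisymmetric channel
wlist = np.linspace(-80.0, 80.0, 4001)
S = spectrum(H, wlist, c_ops, E_minus.dag(), E_minus)  # seven-peak spectrum
```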
The problem of frequency-resolved observables is, however, more challenging.
Indeed, as one introduces the field operator in the reciprocal space
$\tilde{E}(\omega)=\int_{t=-\infty}^{\infty}e^{-i\omega t}E(t)dt$, the problem
of studying two-color photon-photon correlations brings in the calculation of
a four-time correlator, $\left\langle
E(t_{1})E(t_{2}){E}^{\dagger}(t_{3}){E}^{\dagger}(t_{4})\right\rangle$. Then,
the use of the quantum regression theorem, commonly used for two-time
observables, may become a daunting task Gisin (1993); Brun and Gisin (1996);
Breuer _et al._ (1997). This is a strong restriction on the study of photon-photon
correlations, which has long limited rigorous results to single-emitter
physics Bel and Brown (2009).
An elegant solution was found in the “sensor method”, which allows one to
investigate theoretically frequency-resolved correlations in greater detail
del Valle _et al._ (2012). It relies on the introduction in the system of
extra two-level systems which behave as sensors, as described by the
Hamiltonian
${{H}_{S}}=\sum_{s}\omega_{s}\xi_{s}^{\dagger}{{\xi}_{s}^{-}}+\varepsilon\sum_{s}\left(E\xi_{s}^{-}+E^{\dagger}\xi_{s}^{\dagger}\right),$
(9)
with $\xi_{s}$ ($\xi_{s}^{\dagger}$) the lowering (raising) operator for sensor
$s$, and $\omega_{s}$ its resonant frequency, in the rotating frame at the
laser frequency. (We here consider sensors which all couple to the field
radiated in the same direction, but a generalization to two-direction photon-photon
correlations can be obtained by introducing sensors which couple to the
field (5) emitted in different directions.) The $\varepsilon$ parameter
corresponds to the coupling strength between the sensors and the atomic
system, which must be made sufficiently small so as not to perturb
significantly the dynamics of the latter and to avoid the saturation of the
sensors ($\varepsilon={{10}^{-4}}$ throughout the paper). The sensors are also
characterized by their linewidth $\Gamma_{s}$, a parameter of importance as we
shall see later, which manifests in an extra Lindblad term:
$\mathcal{L}_{S}\rho=\frac{\Gamma_{s}}{2}\sum_{s}\left(2\xi_{s}^{-}\rho\xi_{s}^{\dagger}-\xi_{s}^{\dagger}\xi_{s}^{-}\rho-\rho\xi_{s}^{\dagger}\xi_{s}^{-}\right).$
(10)
The sensor contributions (9-10) are then added to the master equation (1),
with $\rho$ now describing the density matrix of the whole system (atoms plus
sensors).
The steady-state two-photon time- and frequency-resolved correlation is then
obtained from the second-order correlation function of the sensor
operators:
$\displaystyle g_{s}^{(2)}(\omega_{1},\omega_{2},\tau)=$ (11)
$\displaystyle\lim_{t\to\infty}\frac{\left\langle\xi_{1}^{\dagger}(\omega_{1},t)\xi_{2}^{\dagger}(\omega_{2},t+\tau){{\xi}_{2}}(\omega_{2},t+\tau){{\xi}_{1}}(\omega_{1},t)\right\rangle}{\left\langle\xi_{1}^{\dagger}(\omega_{1},t){{\xi}_{1}}(\omega_{1},t)\right\rangle\left\langle\xi_{2}^{\dagger}(\omega_{2},t){{\xi}_{2}}(\omega_{2},t)\right\rangle}.$
Equal-time correlations ($\tau=0$) characterize the simultaneous emission of
photons of frequencies $\omega_{1}$ and $\omega_{2}$; they are hereafter
called 2PFC (two-photon frequency-resolved correlations) and denoted
$g_{s}^{(2)}(\omega_{1},\omega_{2})$ for simplicity. Thus, at the expense of
two extra degrees of freedom, equal-time frequency-resolved correlations
$g_{s}^{(2)}(\omega_{1},\omega_{2})$ are contained in the steady-state values
of the density matrix, while time- and frequency-resolved ones (i.e.,
$g_{s}^{(2)}(\omega_{1},\omega_{2},\tau)$) are obtained as two-time
correlators, using the “standard” (two-time) quantum regression theorem
Gardiner and Zoller (2014).
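A minimal QuTiP sketch of the sensor method at $\tau=0$ is given below. It is our own code, not the authors' scripts: the two atoms and two sensors span a $2^{4}$ Hilbert space, the collective decay of eq. (3) is rewritten with symmetric/antisymmetric jump operators, laser phase factors are dropped ($kr_{12}\ll 1$), and all frequencies are in units of $\Gamma$:

```python
# Sensor-method sketch (our code): equal-time 2PFC of eq. (11) via steadystate().
import numpy as np
from qutip import qeye, sigmam, tensor, steadystate, expect

def g2_sensors(w1, w2, Omega=30.0, kr=0.05, Gamma=1.0, Gs=1.0, eps=1e-4):
    # theta = arccos(1/sqrt(3)): the 1/r^2, 1/r^3 terms of eq. (4) vanish
    d12, g12 = -np.cos(kr)/(2.0*kr), np.sin(kr)/kr
    sm1, sm2, x1, x2 = [tensor(*[sigmam() if j == k else qeye(2)
                                 for j in range(4)]) for k in range(4)]
    E_minus = sm1 + sm2                      # eq. (5), common phase dropped
    H = 0.5*Omega*(E_minus + E_minus.dag())  # resonant drive, eq. (2)
    H += Gamma*d12*(sm1.dag()*sm2 + sm2.dag()*sm1)   # dipole-dipole shift
    H += w1*x1.dag()*x1 + w2*x2.dag()*x2             # sensors, eq. (9)
    H += eps*(E_minus*(x1 + x2).dag() + E_minus.dag()*(x1 + x2))
    c_ops = [np.sqrt(Gamma*(1.0 + g12)/2.0)*(sm1 + sm2),  # Gamma_S channel
             np.sqrt(Gamma*(1.0 - g12)/2.0)*(sm1 - sm2),  # Gamma_A channel
             np.sqrt(Gs)*x1, np.sqrt(Gs)*x2]              # eq. (10)
    rho = steadystate(H, c_ops)
    n1 = expect(x1.dag()*x1, rho)
    n2 = expect(x2.dag()*x2, rho)
    return expect(x1.dag()*x2.dag()*x2*x1, rho)/(n1*n2)   # eq. (11), tau = 0

# opposite sidebands at +-Delta_12 (~25.4 Gamma here, our numerical estimate):
print(g2_sensors(25.4, -25.4))  # expected bunched, cf. Fig. 2(a)
```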
Experimentally, frequency-resolved signals can be obtained using frequency
filters such as Fabry-Perot cavities whose resonance frequency and linewidth
correspond to those of the sensors, $\omega_{s}$ and $\Gamma_{s}$, but also from
time-resolved measurements using beatnote techniques for the $g^{(1)}(\tau)$
function Ortiz-Gutiérrez _et al._ (2019); Ferreira _et al._ (2020), for
example. Throughout this work, the different correlation functions were
computed using the QuTiP toolbox Johansson _et al._ (2012, 2013) and the
Matlab® software, using a solver to reach the steady state.
## III Strongly interacting atoms
### III.1 Fluorescence spectrum
The radiation spectrum of a strongly driven two-level emitter has a rather
intuitive interpretation in the dressed state picture: although the light
modes were traced out to obtain Eqs. (2-3) Lehmberg (1970a, b), in this picture
the photon number is restored to obtain hybrid atom-field states. The resulting
atom-field eigenstates have been discussed extensively for single emitters
Compagno _et al._ (1995), and the coupling of light to the atom leads to the
following eigenstates for the Hamiltonian at resonance
$\left|\pm\right\rangle=\frac{1}{\sqrt{2}}\left(\left|\uparrow,n-1\right\rangle\pm\left|\downarrow,n\right\rangle\right),$
(12)
where $\left|\downarrow\right\rangle$ and $\left|\uparrow\right\rangle$ denote
the single-atom ground and excited states, respectively, and $n$ is the photon
number in the driving field (i.e., the laser). This pair of eigenstates forms
the $n$-excitation manifold: in each manifold the eigenstates are split by the
Rabi frequency of the driving field (unless the Cavity Quantum Electrodynamics
regime is reached Jaynes and Cummings (1963); Brune _et al._ (1996)).
For a pair of atoms, the dipole-dipole interaction in Eqs. (2-3) generates two
collective single-excitation eigenstates, labelled symmetric and anti-
symmetric:
$\displaystyle\ket{S}$ $\displaystyle=$
$\displaystyle(\ket{\uparrow\downarrow}+\ket{\downarrow\uparrow})/\sqrt{2},$
$\displaystyle\ket{A}$ $\displaystyle=$
$\displaystyle(\ket{\uparrow\downarrow}-\ket{\downarrow\uparrow})/\sqrt{2},$
(13)
which present linewidths $\Gamma_{S}=\Gamma(1+\gamma_{12})$ and
$\Gamma_{A}=\Gamma(1-\gamma_{12})$, and energy shifts
$\Delta_{S}=\Gamma\delta_{12}$ and $\Delta_{A}=-\Gamma\delta_{12}$. Throughout
this work we have fixed $\cos(\theta_{12})=1/\sqrt{3}$, which implies
$\delta_{12}<0$ for the interatomic distances considered and, consequently,
$\Delta_{S}<\Delta_{A}$, as shown in Fig. 1.
We here consider the case of two very close, strongly interacting atoms
($kr_{12}\ll 1$ or, more specifically, $\left|{{\delta}_{12}}\right|\gg 1$ and
${{\gamma}_{12}}\approx 1$), in the presence of a strong resonant driving,
characterized by $\Omega^{2}>\Gamma^{2}+4|\Gamma\delta_{12}|^{2}$. Following
the approach of Ref. Compagno _et al._ (1995), we consider the following
basis
$\displaystyle\left|\phi_{n}^{1}\right\rangle$
$\displaystyle=\left|\uparrow\uparrow,n-2\right\rangle,$ (14a)
$\displaystyle\left|\phi_{n}^{2}\right\rangle$
$\displaystyle=\left|S,n-1\right\rangle,$ (14b)
$\displaystyle\left|\phi_{n}^{3}\right\rangle$
$\displaystyle=\left|A,n-1\right\rangle,$ (14c)
$\displaystyle\left|\phi_{n}^{4}\right\rangle$
$\displaystyle=\left|\downarrow\downarrow,n\right\rangle.$ (14d)
which spans the four-dimensional eigenspace, with eigenvalue $n$, of the operator
${{N}_{T}}=N_{\nu}+1/2+\sum\limits_{i=1,2}(1+\sigma_{i}^{+}\sigma_{i}^{-})/2$,
where $N_{\nu}$ is the photon number operator. In this
basis, the eigenstates of the atom-light system are composed of the collective
dressed states, incorporating the eigenstates of Hamiltonian (2) for two atoms
with light-mediated interactions and the photon number states of the light
field, i.e., the $n$-excitation manifold for our system is given by:
$\displaystyle\left|u_{n}^{1}\right\rangle=a_{1}\left|\uparrow\uparrow,n-2\right\rangle+a_{2}\sqrt{2}\left|S,n-1\right\rangle+a_{1}\left|\downarrow\downarrow,n\right\rangle,$
(15a) $\displaystyle\left|u_{n}^{a}\right\rangle=\left|A,n-1\right\rangle,$
(15b)
$\displaystyle\left|u_{n}^{2}\right\rangle=-\frac{1}{\sqrt{2}}\left|\uparrow\uparrow,n-2\right\rangle+\frac{1}{\sqrt{2}}\left|\downarrow\downarrow,n\right\rangle,$
(15c)
$\displaystyle\left|u_{n}^{3}\right\rangle=a_{2}\left|\uparrow\uparrow,n-2\right\rangle-
a_{1}\sqrt{2}\left|S,n-1\right\rangle+a_{2}\left|\downarrow\downarrow,n\right\rangle,$
(15d)
with $a_{1}$ and $a_{2}$ two coefficients obtained by diagonalization of the
Hamiltonian (the lengthy expressions for $a_{1}$ and $a_{2}$ are not shown
here, and the normalization of the eigenstates imposes
$a_{1}^{2}+a_{2}^{2}=1/2$).
These dressed states are characterized by an entanglement between atomic and
field states, apart from the one containing the antisymmetric atomic state,
$\left|u_{n}^{a}\right\rangle$. Since the latter state is not entangled with
the field states, nor is it coupled to the other states through the
Hamiltonian, it does not participate in the dressing. Furthermore, in the
limit of strong coupling of the atoms considered here, it can be shown that
this anti-symmetric state does not contribute substantially to the steady-state
fluorescence spectrum. Although it is not driven directly by the laser
($kr_{12}\ll 1$ leads to a rather homogeneous phase profile of the laser on
the atoms, thus addressing the symmetric state), it gets substantially
populated by decay from the atomic state $\ket{\uparrow\uparrow}$ and its long
lifetime allows it to hold a substantial population Cipris _et al._ (2020).
Nevertheless, its narrow linewidth also translates into a low number of emitted
photons. Thus, unless specified (see Sec. III.2), we hereafter neglect this
state in our analysis.
This leads us to introduce the collective operators
$\sigma_{S}^{\pm}=(\sigma_{1}^{\pm}+\sigma_{2}^{\pm})/\sqrt{2}$ and simplify the
Lindbladian (3) into
${\mathcal{L}_{\sigma_{S}}}\rho=\frac{\Gamma_{S}}{2}(2\sigma_{S}^{-}\rho\sigma_{S}^{+}-\sigma_{S}^{+}\sigma_{S}^{-}\rho-\rho\sigma_{S}^{+}\sigma_{S}^{-})$
in the strong interaction regime. As a consequence, the $n$-excitation manifold
reduces to the triplet $(\ket{u_{n}^{1}},\ket{u_{n}^{2}},\ket{u_{n}^{3}})$,
with the frequency differences hereafter denoted
$\Delta_{ij}=-\Delta_{ji}\equiv E_{n}^{i}-E_{n}^{j}.$ (16)
The dressed energy levels are then composed of the $n$-excitation manifolds,
each consisting of the above triplet, with successive manifolds separated by
the energy $\omega_{L}$: the dressed states and the equivalent bare collective
energy levels for two interacting atoms are presented in Fig. 1(a).
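The splittings $\Delta_{ij}$ can be obtained numerically by diagonalizing the resonant drive plus the dipole-dipole shift restricted to the $\{\left|\uparrow\uparrow\right\rangle,\left|S\right\rangle,\left|\downarrow\downarrow\right\rangle\}$ subspace; the short sketch below (our code, for the parameters of Fig. 1) illustrates this:

```python
# Dressed-triplet splittings Delta_ij (our sketch), laser frame, units of Gamma.
import numpy as np

Gamma, Omega, kr = 1.0, 30.0, 0.05
d12 = -np.cos(kr)/(2.0*kr)      # eq. (4) with the 1/r^2, 1/r^3 terms vanishing
g = Omega/np.sqrt(2.0)          # drive matrix element <uu|H|S> = <S|H|dd>
H3 = np.array([[0.0, g,         0.0],
               [g,   Gamma*d12, g  ],
               [0.0, g,         0.0]])
E = np.sort(np.linalg.eigvalsh(H3))[::-1]   # E1 > E2 > E3
print("Delta_12, Delta_23, Delta_13 =", E[0]-E[1], E[1]-E[2], E[0]-E[2])
```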
Figure 1: (a) Collective dressed (left) and bare (right) states for two
strongly-interacting atoms, in the rotating frame of the laser and in the lab
frame, respectively. (b) 1PS of two interacting atoms. The colored arrows
above the peaks correspond to the transitions depicted in (a). Simulations
carried out for two atoms driven with a field of Rabi frequency
$\Omega=30\Gamma$, separated by a distance $kr_{12}=0.05$, with dipole
orientation $\theta_{12}=\cos^{-1}(1/\sqrt{3})$.
The 1PS is obtained by solving the master equation from Eqs. (1-3) combined
with the quantum regression theorem or, equivalently, monitoring the
population of a sensor whose resonant frequency $\omega_{s}$ is tuned. The
fluorescence spectrum for two strongly interacting atoms is depicted in
Fig.1(b), where the different peaks can be interpreted in the dressed state
picture. Similarly to the single-atom case, the central peak originates in the
$\ket{u_{n}^{i}}\to\ket{u_{n-1}^{i}}$ transitions ($i=1,\ 2,\ 3$), which do
not alter the atomic state and are characterized by the emission of a photon
at the laser frequency ${{\omega}_{L}}$.
The transformation of the doublet of states into a triplet of states for the
$n$-excitation manifold, due to the interactions, leads to a seven-peak 1PS.
The six sidebands are collective, corresponding to resonant frequencies
$\pm\Delta_{ij}$ not present for single atoms (all transitions are hereafter
given in the laser frame), and the corresponding transitions
$\ket{u_{n}^{i}}\to\ket{u_{n-1}^{j\neq i}}$ are presented schematically in the
dressed state picture of Fig.1(a).
### III.2 Photon-photon correlations
Let us now study the specific correlations which occur between these different
emission processes. While the transitions from one manifold to the next one
are the origin of the 1PS, the correlations in the emitted photons are the
essence of the 2PFC, $g_{s}^{(2)}\left(\omega_{1},\omega_{2}\right)$, computed
using Eq.(11). The 2PFC corresponding to the situation of Fig.1 is presented
in Fig. 2(a), the complexity of which reflects the diversity in photon-photon
correlations.
Figure 2: (a) Steady-state photon-photon correlations
$g^{(2)}_{s}(\omega_{1},\omega_{2})$ for a pair of strongly-interacting
strongly-driven atoms. (b) Cascade processes involving the emission of two
photons, according to the energy levels of the dressed states picture (allowed
cascades with plain lines, forbidden processes with dashed/dotted ones, see
text). The associated $g^{(2)}_{s}(\omega_{1},\omega_{2})$ is given by the
same symbols in (a). (c) Time-resolved
$g_{s}^{(2)}(\omega_{1},\omega_{2},\tau)$ for the transitions involving
photons of frequency $(\omega_{1},\omega_{2})=(\Delta_{13},-\Delta_{23})$ (
$+$ symbol in (a) and (b)) with $\Gamma_{s}=\Gamma$. Inset: same curves, for a
broader time window. Simulations carried out for two atoms with $kr_{12}=0.05$,
$\theta_{12}=\cos^{-1}(1/\sqrt{3})$, $\Omega=30\Gamma$, and
$\Gamma_{s}=\Gamma$ for (a); $kr=0.006$, $\theta=\cos^{-1}(1/\sqrt{3})$,
$\Omega=250\Gamma$, and $\Gamma_{s}=5\Gamma$ for (c).
_Opposite-sideband correlations –_ We first discuss the correlation between
opposite sidebands, i.e., with frequency $+{{\Delta}_{ij}}$ and
$-{{\Delta}_{ij}}$, as shown by the $\bullet$ symbol for $(i,j)=(1,2)$ in Fig.
2(a): it corresponds to the two-photon cascade
$\left|u_{n}^{1}\right\rangle\to\left|u_{n-1}^{2}\right\rangle\to\left|u_{n-2}^{1}\right\rangle$,
shown in Fig. 2(b). Being an allowed path of relaxation, it leads to photon-
photon bunching,
$g_{s}^{\left(2\right)}\left({{\Delta}_{12}},-{{\Delta}_{12}}\right)>1$: this
case is similar to the opposite-sideband bunching effect reported for single
emitters Schrama _et al._ (1992); Ulhaq _et al._ (2012). The same holds true
for other transitions of the form
$\left|u_{n}^{i}\right\rangle\to|u_{n-1}^{j}\rangle\to|u_{n-2}^{i}\rangle$,
corresponding to the other sidebands.
_Equal-sideband correlations –_ In contrast, photons emitted from the same
sideband come antibunched, as in all cases the associated relaxation path is
blocked (as long as there is no degeneracy, i.e.,
$\Delta_{12}\neq\Delta_{23}$). An analogous effect is observed for single
atoms. For instance, a photon of frequency ${{\Delta}_{13}}$ automatically
leads the system to the state $\left|u_{n-1}^{3}\right\rangle$, so the next photon
cannot be emitted at the same frequency, since that would require the system to be in
a state $\left|u_{n-1}^{1}\right\rangle$ (states
$\left|u_{n-1}^{1}\right\rangle$ and $\left|u_{n-1}^{3}\right\rangle$ are
orthogonal). For this reason, the associated path of relaxation is considered
blocked (see $\circ$ symbol cascade in Fig. 2(b)), and it is characterized by
antibunched photons:
$g_{s}^{\left(2\right)}\left({{\Delta}_{13}},{{\Delta}_{13}}\right)<1$
($\circ$ symbol in Fig. 2(a)).
Nevertheless, as can be seen in Fig. 2(a), photons from the same sidebands
lie on the “indistinguishability bunching line” of the 2PFC.
Indeed, two photons with the same frequency cannot be distinguished by the
sensor, which in turn leads to bunching effects. This manifests as the
“overbunched” diagonal line in Fig. 2(a).
_Cross-sideband correlations –_ Let us now discuss processes which involve
photons from two different sidebands, corresponding to
$g_{s}^{\left(2\right)}\left(\pm{{\Delta}_{ij}},\pm{{\Delta}_{i^{\prime}j^{\prime}}}\right)$
with $(i,j)\neq(i^{\prime},j^{\prime})$. For these processes which involve the
three atomic states, a more careful analysis is needed, as photons of
different frequencies may be emitted in a specific order. For instance, the
double transition
$\left|u_{n}^{1}\right\rangle\to\left|u_{n-1}^{3}\right\rangle\to\left|u_{n-2}^{2}\right\rangle$,
indicated by a + symbol in the dressed state representation of Fig. 2(b), is
allowed, and thus permits the successive emission of photons of frequency
$\Delta_{13}$ and $-\Delta_{23}$, in that order. Differently, a photon of
frequency $-\Delta_{23}$ cannot be followed by one of frequency $\Delta_{13}$,
since this would correspond to the successive transitions
$\left|u_{n}^{3}\right\rangle\to\left|u_{n-1}^{2}\right\rangle$ and
$\left|u_{n-1}^{1}\right\rangle\to\left|u_{n-2}^{3}\right\rangle$ (see
$\otimes$ symbol in Fig. 2(b)), which is a blocked path since
$\left|u_{n-1}^{2}\right\rangle$ and $\left|u_{n-1}^{1}\right\rangle$ are
orthogonal.
Monitoring the zero-delay photon-photon correlations
$g_{s}^{(2)}(\Delta_{ij},\Delta_{i^{\prime}j^{\prime}})$ does not allow one to
distinguish the two processes, but its time-resolved version does. As
illustrated by the computation of $g_{s}^{(2)}(\Delta_{13},-\Delta_{23},\tau)$
in Fig. 2(c), we observe a strong bunching at delays $\tau\sim+1/\Gamma$, but
a below-unity $g_{s}^{(2)}$ for $\tau\sim-1/\Gamma$ (negative times
correspond to the reverse order, since
$g_{s}^{(2)}(\Delta_{13},-\Delta_{23},\tau)=g_{s}^{(2)}(-\Delta_{23},\Delta_{13},-\tau))$.
The same phenomenon is observed for transitions with photon pairs of frequency
$(\pm\Delta_{12},\mp\Delta_{13})$ and $(\pm\Delta_{12},\pm\Delta_{23})$. Thus,
these double transitions, which involve the three different atomic states,
present a time-symmetry breaking for the $g_{s}^{(2)}$ function, which
corresponds to a specific ordering of the emitted photons. It is due to the
interaction between the emitters, which leads to a splitting of the energy
levels of the atomic system.
It is interesting to note that on timescales $\tau$ of several single-atom
excited-state lifetimes, the correlator
$g_{s}^{(2)}(\Delta_{13},-\Delta_{23},\tau)$ does not go to $1$ as one would
expect: this is the signature of the anti-symmetric state holding a
substantial part of the atomic excitations, which are released only on the long
timescale of this mode Cipris _et al._ (2020) (see inset of Fig. 2(c)).
Finally, double processes that depart twice from the same atomic
state but go to the two other atomic states (i.e.,
$\left|u_{n}^{i}\right\rangle\to|u_{n-1}^{j}\rangle$ and
$\left|u_{n-1}^{i}\right\rangle\to|u_{n-2}^{l}\rangle$, with $i$, $j$ and $l$
all different) are naturally anti-bunched. Indeed, both possible orders for
the double transition are blocked. Consequently,
$g_{s}^{(2)}(\Delta_{ij},\Delta_{il})$ is below unity, as can be observed in
Fig. 2(a).
_Sideband-central peak correlations –_ Finally, cascades which involve one
sideband photon plus a central peak photon can in principle occur
successively, since the latter does not involve a change in the atomic state
(i.e., $\left|u_{n}^{i}\right\rangle\to\left|u_{n-1}^{i}\right\rangle$).
Furthermore, both orders of emission for the photons could equally occur.
Nonetheless, these pairs of photons come out anti-bunched. As discussed for the
case of single emitters Arnoldus and Nienhuis (1984), this effect originates
from a destructive interference, due to the fact that the state of the system
is not modified by the Rayleigh emission. Thus, although the two cascades involving
photons of frequency ${\Delta_{ij}}$ and $0$ are degenerate (they have the
same initial and final states), the interference between their transition
amplitudes suppresses the process instead of favoring it (see
Fig. 2(a)).
Finally, we point out that a clear observation of antibunching, and other
kinds of photon-photon correlations as we investigate in this paper, requires
the use of sensors of linewidth at least comparable to the atomic linewidth
(they are here taken equal: $\Gamma_{s}=\Gamma$). Indeed, it has recently been
shown that antibunching (and other photon-photon correlations), whether in the
temporal domain Carreño _et al._ (2018); Hanschke _et al._ (2020); Phillips
_et al._ (2020) (as given by $g^{(2)}(\tau)$) is strongly reduced in case of a
sublinewidth filtering, since it results in long integration time, which in
turn averages out the correlations Muñoz _et al._ (2014).
Figure 3: (a) Two-photon “leapfrog” processes, with the system transiting
through virtual dressed levels. (b) Ratio $R_{s}$ from the Cauchy-Schwarz
inequality, and (c) $B_{s}$ from the Bell inequality, when tuning the
frequency of each sensor. Simulations carried out for $kr=0.05$,
$\theta=\cos^{-1}(1/\sqrt{3})$, $\Omega=30\Gamma$ and $\Gamma_{s}=\Gamma$.
### III.3 Leapfrog processes
The cascades described above involve two-photon emission processes that
encompass real transitions through intermediate states, where the system state
is described by the manifolds from the dressed atom picture. There exist other
kinds of two-photon transitions, where the system does not transit through one
of these intermediate states, but rather through a “virtual” manifold,
labelled “leapfrog processes” Gonzalez-Tudela _et al._ (2013). These
transitions are characterized by the joint emission of two photons, and have
recently been observed in single quantum dots Peiris _et al._ (2015). Most of
these two-photon collective processes yield correlations much stronger than
those of real transitions, and their quantum nature has been demonstrated for
single emitters using Cauchy-Schwarz and Bell inequalities Muñoz _et al._
(2014).
For these leapfrog processes, the energy of each photon does not need to be
related to a specific level transition energy; only their sum needs to obey
the following relation:
$\omega_{1}+\omega_{2}=0,\ \pm{{\Delta}_{ij}},$ (17)
where the frequency in the laboratory frame is obtained by adding
$2\omega_{L}$. The leapfrog transitions correspond to the anti-diagonal lines
marked by color arrows in Fig. 3(b) and (c), and the associated (virtual)
transitions are depicted in Fig. 3(a). Note that if, in addition to condition
(17), the energy of each photon belongs to the allowed real transitions, the
photon emission process is that described in the previous section, and the
correlations between the photons are classical.
To characterize the non-classicality of these correlations, we use the Cauchy-Schwarz
inequality (CSI) for the second-order correlation functions at zero
delay $g^{(2)}_{kl}=g^{(2)}_{s}(\omega_{k},\omega_{l})$:
$\left[g^{(2)}_{12}\right]^{2}\leq g^{(2)}_{11}g^{(2)}_{22},$ (18)
which we monitor by studying the ratio
$R_{s}=\frac{\left[g^{(2)}_{12}\right]^{2}}{g^{(2)}_{11}g^{(2)}_{22}}.$ (19)
Values of $R_{s}$ larger than unity are a signature of non-classical
correlations between the two emitted photons Muñoz _et al._ (2014).
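A short sketch of this quantifier, reusing the hypothetical `g2_sensors` function from the sensor-method sketch of Sec. II (assumed to be in scope), reads:

```python
# Cauchy-Schwarz ratio R_s of eq. (19); assumes g2_sensors from the earlier
# sketch has been defined in the same session.
def cauchy_schwarz_ratio(w1, w2, **kwargs):
    g12 = g2_sensors(w1, w2, **kwargs)
    return g12**2/(g2_sensors(w1, w1, **kwargs)*g2_sensors(w2, w2, **kwargs))

# a point on the leapfrog line w1 + w2 = Delta_12 (~25.4 Gamma, our estimate)
print(cauchy_schwarz_ratio(10.0, 15.4))  # R_s > 1 signals a CSI violation
```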
In Fig. 3(a), leapfrog processes which involve different initial and final
atomic states are presented: the system does not emit photons from specific
(“real”) transitions; only the sum of the two photon energies corresponds to a
transition between specific states in the dressed states picture. As one can
observe from the anti-diagonal lines in Fig.3(b), which correspond to
$\omega_{1}+\omega_{2}=0,\ \pm{{\Delta}_{ij}}$, the CSI is violated for most
of these joint emission processes ($R_{s}>1$). Nevertheless, the CSI is not
violated for antibunched real transitions, i.e., for photon pairs with
frequencies $(0,\pm{{\Delta}_{ij}})$ or $(\pm{{\Delta}_{ij}},0)$ (see, for
example, the $\oslash$ symbol for frequencies $({{\Delta}_{13}},0)$). Nor is it
violated for pairs of real photons with frequencies
$(\pm{{\Delta}_{12}},\pm{{\Delta}_{23}})$. Also, as we can observe in Fig.
4(a-b), a sublinewidth filtering leads to weaker violations of CSI Muñoz _et
al._ (2014).
Figure 4: Quantifier $R_{s}$ for the Cauchy-Schwarz inequalities (with a
violation above the dashed line) for ${{\Gamma}_{s}}=\Gamma$ (solid blue) and
${{\Gamma}_{s}}=\Gamma/10$ (dotted red), for the leapfrog lines (a)
${{\omega}_{1}}+{{\omega}_{2}}={{\Delta}_{12}}$ and (b)
${{\omega}_{1}}+{{\omega}_{2}}=0$. (c) Quantifier $B_{s}$ for the Bell
inequalities (with a violation above the dashed line) for
${{\Gamma}_{s}}=\Gamma$ (solid blue) and ${{\Gamma}_{s}}=\Gamma/10$ (dotted
red) on the leapfrog line ${{\omega}_{1}}+{{\omega}_{2}}=0$. The vertical
lines with a symbol at the top refer to the processes discussed in Figs. 2(a)
and 3(b) and (c). Simulations carried out for $kr=0.05$,
$\theta=\cos^{-1}(1/\sqrt{3})$ and $\Omega=30\Gamma$.
Finally, one observes that the inequality is also violated for some emission
processes which involve real transitions, where the correlations between
emitted photons are classical (see $+$ or $\otimes$ symbols, for example, in
Fig. 3(b)). In order to properly observe the classicality of these
correlations, as they correspond to photons of real transitions located on the
leapfrog lines, it is necessary to use sensors with a better resolution.
Indeed, the use of a sensor linewidth $\Gamma_{s}=\Gamma$ leads to an
averaging over processes with different kinds of correlations. To illustrate
this point, we show in Fig. 4(a) the change in the CSI as the sensor linewidth is
changed from $\Gamma$ to $\Gamma/10$: the above-mentioned transitions for the
real photons no longer violate the CSI, showing the classical nature of
their correlations Muñoz _et al._ (2014).
Furthermore, as for single emitters Muñoz _et al._ (2014), violations of CSI
may appear for transitions of the central antidiagonal line
($\omega_{1}+\omega_{2}=0$), even for real transitions and for well-resolved
frequencies: Fig. 4(b) presents such violations of CSI (see $\bullet$, $\ast$
and $\times$ symbols). This failure of the CSI to detect classical correlations
can be addressed by using the Bell inequality (BI), as monitored by a quantifier
adapted to the sensor approach Muñoz _et al._ (2014):
$B_{s}=\sqrt{2}\left|\frac{B_{1111}+B_{2222}-4B_{1221}-B_{1122}-B_{2211}}{B_{1111}+B_{2222}+2B_{1221}}\right|,$
(20)
with
$B_{jklm}=\langle\xi_{1}^{\dagger}\left(\omega_{j}\right)\xi_{2}^{\dagger}\left(\omega_{k}\right)\xi_{2}\left(\omega_{l}\right)\xi_{1}\left(\omega_{m}\right)\rangle$.
Values $B_{s}>2$ violate the BI and are considered a true signature of quantum
correlations. As can be seen in Fig. 3(c) for ${{\Gamma}_{s}}=\Gamma$, the BI
is violated only in specific areas of the central antidiagonal line, yet not
for the real transitions marked $\bullet$ and $\ast$. This behaviour is
similar to the single-emitter case Muñoz _et al._ (2014), confirming that only
transitions involving virtual states carry true quantum correlations between
the emitted photons. Note that the BI and CSI are sensitive to the frequency
resolution of the sensors, as narrow-linewidth sensors correspond to long
averaging times, which in turn wash out the correlations
Muñoz _et al._ (2014).
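A companion sketch for the Bell quantifier of Eq. (20) (again purely illustrative; the callable `B`, which returns the correlators $B_{jklm}$, stands in for the output of a hypothetical sensor-method simulation):

```python
import math

def bell_quantifier(B):
    """Bell quantifier B_s of Eq. (20), given a callable B(j, k, l, m)
    returning <xi_1^dag(w_j) xi_2^dag(w_k) xi_2(w_l) xi_1(w_m)>.
    B_s > 2 signals a violation of the Bell inequality."""
    num = (B(1, 1, 1, 1) + B(2, 2, 2, 2) - 4 * B(1, 2, 2, 1)
           - B(1, 1, 2, 2) - B(2, 2, 1, 1))
    den = B(1, 1, 1, 1) + B(2, 2, 2, 2) + 2 * B(1, 2, 2, 1)
    return math.sqrt(2) * abs(num / den)
```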
As illustrated by the pair of photons
$(\pm{{\Delta}_{13}},\mp{{\Delta}_{13}})$, a sensor linewidth
$\Gamma_{s}=\Gamma$ yields a violation of the BI, yet reducing the sensor
linewidth to $\Gamma/10$ removes the violation, see Fig. 4(c), where the pair
$({{\Delta}_{13}},-{{\Delta}_{13}})$ is indicated by $\times$. This
highlights again that narrow-linewidth sensors resolve a finer structure of
the quantum quantifiers, and the necessity of using sensors with a linewidth
comparable to that of the atomic transition in order to detect the strongest
correlations.
## IV Conclusion and perspectives
Strong interactions between two two-level emitters give rise to a series of
new sidebands in the fluorescence spectrum, whose shift from the atomic
transition depends on both the interaction strength and the driving field.
Similarly to single emitters, the leapfrog processes with frequencies that sum
to zero or to the frequency of one of the interaction-induced sidebands are
characterized by strong correlations, which can be either classical or
quantum. This suggests that strongly coupled emitters are potential sources of
heralded photons, with extra control parameters through the interaction, as
compared to single emitters.
Another regime of interest is that of a weak dipole-dipole interaction, i.e.,
when the collective dressed levels are equally spaced in energy
($\Delta_{12}=\Delta_{23}$), which occurs when the distance between the
emitters is comparable to or larger than the optical wavelength. In this
regime, the first correction to the single-atom fluorescence spectrum is the
emergence of sidebands shifted from the laser frequency by twice the Rabi
frequency ($\omega=\pm 2\Omega$). This phenomenon was predicted to scale with
the resonant optical depth for dilute extended clouds, as a signature of the
growing two-atom quantum correlations Pucci _et al._ (2017). We have
monitored photon-photon correlations $g^{(2)}(\omega_{1},\omega_{2})$ for a
pair of atoms at a distance of the order of $\lambda$ ($kr=0.4-1$ and
$\theta=\cos^{-1}(1/\sqrt{3})$) and strongly driven, yet the photon-photon
correlations appear to be largely dominated by single-atom physics. This is
reminiscent of the antibunching phenomenon, which vanishes for a large number
of independent emitters unless specific conditions for their interference are
met Grangier _et al._ (1986). Furthermore, the leapfrog processes associated
with the new sidebands, $\omega_{1}+\omega_{2}=\pm 2\Omega$, present no
violation of the Cauchy-Schwarz inequality (not shown here). This suggests
that although these sidebands result from correlations between the quantum
fluctuations of the two dipoles Pucci _et al._ (2017), the photons associated
with these processes may be only classically correlated.
The variety of sidebands and photon-photon correlations encountered for a pair
of atoms calls for a dedicated study of larger systems. Indeed, although the
coherent manipulation of atoms at scales below the diffraction limit is
experimentally challenging, schemes have been proposed to overcome these
limitations, based on transparency-induced dark states Agarwal and Kapale
(2006); Cho (2007); Yavuz and Proite (2007); Gorshkov _et al._ (2008), which
have already allowed for the generation of subwavelength cold-atom structures
Miles _et al._ (2013); Wang _et al._ (2018); Subhankar _et al._ (2019);
Tsui _et al._ (2020). In this context, strongly-interacting cold-atom
ensembles may be a promising tunable source of entangled pairs of photons,
but also of larger bunches of photons Carreño _et al._ (2017).
## Acknowledgment
M. H., R. B. and C. J. V.-B. acknowledge funding from the French National
Research Agency (projects QuaCor ANR19-CE47-0014-01). E. D., R. B. and C. J.
V.-B. benefited from Grants from São Paulo Research Foundation (FAPESP,
Grants Nos. 2018/10813-2, 2018/01447-2, 2018/15554-5, 2019/13143-0, and
2019/11999-5) and from the National Council for Scientific and Technological
Development (CNPq, Grant Nos. 302981/2017-9, 409946/2018-4, and
307077/2018-7). M. H. and R. B. received support from the project CAPES-
COFECUB (Ph879-17/CAPES 88887.130197/2017-01).
## References
* Agarwal (1974a) G. S. Agarwal, in _Springer Tracts in Modern Physics_ (Springer Berlin Heidelberg, 1974) pp. 1–128.
* Mollow (1969) B. R. Mollow, Phys. Rev. 188, 1969 (1969).
* Aspect _et al._ (1980) A. Aspect, G. Roger, S. Reynaud, J. Dalibard, and C. Cohen-Tannoudji, Phys. Rev. Lett. 45, 617 (1980).
* Cohen-Tannoudji _et al._ (1979) C. Cohen-Tannoudji, S. Reynaud, R. K. Bullough, and J. M. Vaughan, Philos. Trans. R. Soc. A 293, 223 (1979).
* Dalibard and Reynaud (1983) J. Dalibard and S. Reynaud, J. Phys. 44, 1337 (1983).
* Schrama _et al._ (1992) C. A. Schrama, G. Nienhuis, H. A. Dijkerman, C. Steijsiger, and H. G. M. Heideman, Phys. Rev. A 45, 8045 (1992).
* Ulhaq _et al._ (2012) A. Ulhaq, S. Weiler, S. M. Ulrich, R. Roßbach, M. Jetter, and P. Michler, Nat. Photonics 6, 238 (2012).
* Wolf _et al._ (2020) S. Wolf, S. Richter, J. von Zanthier, and F. Schmidt-Kaler, Phys. Rev. Lett. 124, 063603 (2020).
* Lehmberg (1970a) R. H. Lehmberg, Phys. Rev. A 2, 883 (1970a).
* Lehmberg (1970b) R. H. Lehmberg, Phys. Rev. A 2, 889 (1970b).
* Senitzky (1978) I. R. Senitzky, Phys. Rev. Lett. 40, 1334 (1978).
* Agarwal _et al._ (1980) G. S. Agarwal, R. Saxena, L. M. Narducci, D. H. Feng, and R. Gilmore, Phys. Rev. A 21, 257 (1980).
* Ben-Aryeh and Bowden (1988) Y. Ben-Aryeh and C. Bowden, IEEE J. Quantum Electron. 24, 1376 (1988).
* Pucci _et al._ (2017) L. Pucci, A. Roy, T. S. do Espirito Santo, R. Kaiser, M. Kastner, and R. Bachelard, Phys. Rev. A 95, 053625 (2017).
* del Valle _et al._ (2012) E. del Valle, A. Gonzalez-Tudela, F. P. Laussy, C. Tejedor, and M. J. Hartmann, Phys. Rev. Lett. 109, 183601 (2012).
* Carreño _et al._ (2017) J. C. L. Carreño, E. del Valle, and F. P. Laussy, Laser Photonics Rev. 11, 1700090 (2017).
* an Peng _et al._ (2019) Z. an Peng, G. qing Yang, Q. lin Wu, and G. xiang Li, Phys. Rev. A 99, 033819 (2019).
* Agarwal (1974b) G. S. Agarwal, in _Springer Tracts in Modern Physics_ (Springer Berlin Heidelberg, 1974).
* Kimble _et al._ (1977) H. J. Kimble, M. Dagenais, and L. Mandel, Physical Review Letters 39, 691 (1977).
* Gisin (1993) N. Gisin, Journal of Modern Optics 40, 2313 (1993).
* Brun and Gisin (1996) T. A. Brun and N. Gisin, Journal of Modern Optics 43, 2289 (1996).
* Breuer _et al._ (1997) H.-P. Breuer, B. Kappler, and F. Petruccione, Phys. Rev. A 56, 2334 (1997).
* Bel and Brown (2009) G. Bel and F. L. H. Brown, Phys. Rev. Lett. 102, 018303 (2009).
* Note (1) We here consider sensors which all couple to the field radiated in the same direction, but a generalization to two-direction photon-photon correlations can be obtained by introducing sensors which couple to the field (5) emitted in different directions.
* Gardiner and Zoller (2014) C. Gardiner and P. Zoller, _The Quantum World of Ultra-Cold Atoms and Light Book I: Foundations of Quantum Optics_ (Imperial College Press, 2014).
* Ortiz-Gutiérrez _et al._ (2019) L. Ortiz-Gutiérrez, R. C. Teixeira, A. Eloy, D. F. da Silva, R. Kaiser, R. Bachelard, and M. Fouché, New Journal of Physics 21, 093019 (2019).
* Ferreira _et al._ (2020) D. Ferreira, R. Bachelard, W. Guerin, R. Kaiser, and M. Fouché, Am. J. of Phys. 0, in press (2020).
* Johansson _et al._ (2012) J. Johansson, P. Nation, and F. Nori, Computer Physics Communications 183, 1760 (2012).
* Johansson _et al._ (2013) J. Johansson, P. Nation, and F. Nori, Computer Physics Communications 184, 1234 (2013).
* Compagno _et al._ (1995) G. Compagno, R. Passante, and F. Persico, in _Atom-Field Interactions and Dressed Atoms_ (Cambridge University Press, 1995).
* Jaynes and Cummings (1963) E. Jaynes and F. Cummings, Proceedings of the IEEE 51, 89 (1963).
* Brune _et al._ (1996) M. Brune, F. Schmidt-Kaler, A. Maali, J. Dreyer, E. Hagley, J. M. Raimond, and S. Haroche, Phys. Rev. Lett. 76, 1800 (1996).
* Cipris _et al._ (2020) A. Cipris, N. A. Moreira, T. S. do Espirito Santo, P. Weiss, C. J. Villas-Boas, R. Kaiser, W. Guerin, and R. Bachelard, Subradiance with saturated atoms: population enhancement of the long-lived states (2020), arXiv:2009.05172 .
* Arnoldus and Nienhuis (1984) H. F. Arnoldus and G. Nienhuis, J. Phys. B: At. Mol. Phys. 17, 963 (1984).
* Carreño _et al._ (2018) J. C. L. Carreño, E. Z. Casalengua, F. P. Laussy, and E. del Valle, Quantum Science and Technology 3, 045001 (2018).
* Hanschke _et al._ (2020) L. Hanschke, L. Schweickert, J. C. L. Carreño, E. Schöll, K. D. Zeuner, T. Lettner, E. Z. Casalengua, M. Reindl, S. F. C. da Silva, R. Trotta, J. J. Finley, A. Rastelli, E. del Valle, F. P. Laussy, V. Zwiller, K. Müller, and K. D. Jöns, Phys. Rev. Lett. 125, 170402 (2020).
* Phillips _et al._ (2020) C. L. Phillips, A. J. Brash, D. P. S. McCutcheon, J. Iles-Smith, E. Clarke, B. Royall, M. S. Skolnick, A. M. Fox, and A. Nazir, Phys. Rev. Lett. 125, 043603 (2020).
* Muñoz _et al._ (2014) C. S. Muñoz, E. del Valle, C. Tejedor, and F. P. Laussy, Phys. Rev. A 90, 052111 (2014).
* Gonzalez-Tudela _et al._ (2013) A. Gonzalez-Tudela, F. P. Laussy, C. Tejedor, M. J. Hartmann, and E. del Valle, New J. Phys. 15, 033036 (2013).
* Peiris _et al._ (2015) M. Peiris, B. Petrak, K. Konthasinghe, Y. Yu, Z. C. Niu, and A. Muller, Phys. Rev. B 91, 195125 (2015).
* Grangier _et al._ (1986) P. Grangier, G. Roger, A. Aspect, A. Heidmann, and S. Reynaud, Phys. Rev. Lett. 57, 687 (1986).
* Agarwal and Kapale (2006) G. S. Agarwal and K. T. Kapale, Journal of Physics B: Atomic, Molecular and Optical Physics 39, 3437 (2006).
* Cho (2007) J. Cho, Physical Review Letters 99, 020502 (2007).
* Yavuz and Proite (2007) D. D. Yavuz and N. A. Proite, Physical Review A 76, 041802 (2007).
* Gorshkov _et al._ (2008) A. V. Gorshkov, L. Jiang, M. Greiner, P. Zoller, and M. D. Lukin, Physical Review Letters 100, 093005 (2008).
* Miles _et al._ (2013) J. A. Miles, Z. J. Simmons, and D. D. Yavuz, Physical Review X 3, 031014 (2013).
* Wang _et al._ (2018) Y. Wang, S. Subhankar, P. Bienias, M. Łacki, T.-C. Tsui, M. Baranov, A. Gorshkov, P. Zoller, J. Porto, and S. Rolston, Physical Review Letters 120, 083601 (2018).
* Subhankar _et al._ (2019) S. Subhankar, P. Bienias, P. Titum, T.-C. Tsui, Y. Wang, A. V. Gorshkov, S. L. Rolston, and J. V. Porto, New Journal of Physics 21, 113058 (2019).
* Tsui _et al._ (2020) T.-C. Tsui, Y. Wang, S. Subhankar, J. V. Porto, and S. L. Rolston, Physical Review A 101, 113058 (2020).
# Torsion subgroups of Whitehead groups of graded division algebras
Huynh Viet Khanh , Nguyen Duc Anh Khoa and Nguyen Dinh Anh Khoi
Department of Mathematics and Informatics, HCMC University of Education, 280 An
Duong Vuong Str., Dist. 5, Ho Chi Minh City, Vietnam<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract.
In this paper, we study the torsion subgroup, which is denoted by ${\rm
TK}_{1}(E)$, of the Whitehead group $E^{*}/[E^{*},E^{*}]$ of a graded division
algebra $E$ which is finite dimensional over its center. In particular, we
provide formulas for ${\rm TK}_{1}(E)$ in some special cases such as when $E$
is unramified, totally ramified or semiramified. Using the formulas, we are
able to compute the group ${\rm TK}_{1}(D)$ of a valued division ring $D$,
which is finite dimensional over a henselian center.
###### Key words and phrases:
division ring; graded division ring; valuation theory; reduced K-theory
2020 Mathematics Subject Classification. 16W60; 19B99; 16K20
## 1\. Introduction and statement of results
Let $D$ be a division ring which is finite dimensional over its center $K$,
and ${\rm Nrd}_{D}:D\to K$ be the reduced norm map. The Whitehead group ${\rm
K}_{1}(D)$ and the reduced Whitehead group ${\rm SK}_{1}(D)$ of $D$ are
respectively defined to be the quotient groups
${\rm K}_{1}(D)=D^{*}/[D^{*},D^{*}]\;\;\;\text{ and }\;\;\;{\rm
SK}_{1}(D)=D^{(1)}/[D^{*},D^{*}],$
where $D^{(1)}$ is the set of elements of $D$ with the reduced norm $1$; that
is,
$D^{(1)}=\\{a\in D^{*}:{\rm Nrd}_{D}(a)=1\\}.$
According to Draxl (see [2, p.125]), the history of reduced ${\rm
K}_{1}$-theory begins with the paper [13], published in 1943 by Y. Matsushima
and T. Nakayama, who proved that ${\rm SK}_{1}(D)$ is trivial in case $K$ is a
$p$-adic field. Later, in 1950, Wang Shianghaw [19] obtained a similar result
in case $K$ is an algebraic number field. Due to these results, it was widely
believed that ${\rm SK}_{1}(D)$ is trivial in general. The problem of finding
a suitable division algebra with non-trivial reduced Whitehead group remained
unsolved for a long time and was often referred to as the Tannaka-Artin
Problem. In 1975, the first example of a valued division algebra with non-
trivial reduced Whitehead group was given by Platonov in [14] and [15], with
proofs in [16]. Later, Ershov provided a successful approach to computing
${\rm SK}_{1}$ for valued division algebras over henselian fields ([3]), where
he recovered Platonov’s example and produced further examples as well.
Given a division algebra $D$ with a valuation, one can construct its
associated graded division algebra ${\rm
gr}(D)=\bigoplus_{\gamma\in\Gamma_{D}}{\rm gr}(D)_{\gamma}$, where
$\Gamma_{D}$ is the value group of $D$ and each ${\rm gr}(D)_{\gamma}$ arises
from the filtration on $D$ induced by the valuation. As has been illustrated,
this graded ring is often easier to work with than $D$ itself, and not much is
lost in the passage from $D$ to its corresponding graded division algebra ${\rm
gr}(D)$. Therefore, one can find many works by well-known authors devoted to
a systematic study of this correspondence. For example, Boulagouaz ([1]),
Hwang, Tignol and Wadsworth ([7], [6], [18]) have all made significant
contributions to this area. Motivated by these, Hazrat and Wadsworth provided
in [5] a systematic study of the functor ${\rm SK}_{1}$ of a graded division
algebra whose grade group is totally ordered abelian. In this study, they
established analogues of Ershov’s results in the graded setting, easily
deducing formulas for ${\rm SK}_{1}$ of graded division algebras in some
specific cases. The study also gave them a shortcut to compute the group ${\rm
SK}_{1}({\rm gr}(D))$ of the corresponding graded division algebra ${\rm
gr}(D)$ of a valued division algebra $D$. From this, they obtained a lot of
information about the group ${\rm SK}_{1}(D)$ of the valued division algebra
$D$ ([5, Corollary 4.10]).
Let $E=\bigoplus_{\gamma\in\Gamma}E_{\gamma}$ be a graded division algebra
(with $\Gamma$ a torsion-free abelian group) having finite dimension
$n^{2}$ over its center $T$, a graded field. (Note that the dimension of a
graded division algebra over its center is a square; see Subsection
2.1 for background on graded division algebras.) Because $E$ is
an Azumaya algebra ([7, Corollary 2.1]), the reduced norm map ${\rm
Nrd}_{E}:E\to T$ exists for it. Now, denote by $E^{*}$ the multiplicative
group of $E$, consisting of the non-zero homogeneous elements of $E$.
Accordingly, one can define, analogously to the non-graded case, the Whitehead
group ${\rm K}_{1}(E)$ and the reduced Whitehead group ${\rm SK}_{1}(E)$ of
$E$ as
${\rm K}_{1}(E)=E^{*}/[E^{*},E^{*}]\;\;\;\text{ and }\;\;\;{\rm
SK}_{1}(E)=E^{(1)}/[E^{*},E^{*}],$
in which the group $E^{(1)}$ is defined as follows:
$E^{(1)}=\\{a\in E^{*}:{\rm Nrd}_{E}(a)=1\\}.$
It was proved in [5] that ${\rm SK}_{1}(E)$ is a torsion group of bounded
exponent $n$. Denote by ${\rm TK_{1}}(E)$ the torsion subgroup of the Whitehead
group ${\rm K}_{1}(E)$. As ${\rm SK}_{1}(E)$ is $n$-torsion, it follows that
${\rm SK}_{1}(E)$ is contained in ${\rm TK_{1}}(E)$. It can be observed that
the map ${\rm Nrd}_{E}$ then induces a homomorphism, which is again denoted by
${\rm Nrd}_{E}:{\rm TK}_{1}(E)\to\tau(T^{*})$, where $\tau(T^{*})$ is the
torsion subgroup of the multiplicative group $T^{*}$ of the graded field $T$.
Since the kernel of this homomorphism is ${\rm SK}_{1}(E)$, the following
sequence is exact:
(1.1) $\displaystyle 1\xrightarrow{\;\;\;\;\;\;}{\rm
SK}_{1}(E)\xrightarrow{\;\;\;\;\;\;\;}{\rm TK}_{1}(E)\xrightarrow{\;\;{\rm
Nrd}_{E}\;\;}{\rm Nrd}_{E}({\rm TK}_{1}(E))\xrightarrow{\;\;\;\;\;\;\;}1.$
Because $T$ is an integral domain, it has a quotient field, say $q(T)$. It
follows that $\tau(T^{*})$ embeds in $\tau(q(T)^{*})$, the torsion
subgroup of the multiplicative group of a field, which is well understood. (In
particular, this implies that $\tau(T^{*})$ is a locally cyclic group.) Thus,
the short exact sequence suggests a close relationship between ${\rm
TK}_{1}(E)$ and ${\rm SK}_{1}(E)$. The group ${\rm SK}_{1}(E)$ was studied in
depth by Hazrat and Wadsworth in [5]. Motivated by this, in this paper we
provide a study of the group ${\rm TK}_{1}(E)$ of a graded division algebra
$E$. As it will turn out, the group ${\rm TK}_{1}(E)$ enjoys the most
important functorial properties of ${\rm SK}_{1}(E)$. Before stating the main
results, we make the following assumption in order to simplify the exposition.
Let $E=\bigoplus_{\gamma\in\Gamma}E_{\gamma}$ be a graded division algebra,
where $\Gamma$ is a torsion-free abelian group, with center $T$, a graded
field. Then $E_{0}$ is a division ring whose center $Z(E_{0})$ is a field
containing the field $T_{0}$. Let $E^{*}$ and $E_{0}^{*}$ be the
multiplicative groups of $E$ and $E_{0}$, respectively. It will be proved in
Section 3 that the commutator subgroup $[E^{*},E^{*}]$ is contained in
$E_{0}^{*}$. Also, for an extension of fields $K/F$, we denote by
$N_{K/F}:K\to F$ the norm map of $K$ into $F$. In what follows, we briefly
describe our principal results. Regarding the group ${\rm TK}_{1}(E)$ of a
graded division algebra $E$, we have the following theorem:
###### Theorem 1.1.
Let $E$ be a graded division ring with center $T$. Then $Z(E_{0})$ is Galois
over $T_{0}$; let $\mathcal{G}={\rm Gal}(Z(E_{0})/T_{0})$. Let
$\widetilde{N}=N_{Z(E_{0})/T_{0}}\circ{\rm Nrd}_{E_{0}}:E_{0}^{*}\to
T_{0}^{*}$. Let $G$ be the subgroup of $E^{*}$ such that ${\rm
TK}_{1}(E)=G/[E^{*},E^{*}]$. Then, $G$ is a subgroup of $E_{0}^{*}$ for which
$\tau(E_{0}^{*}/[E^{*},E^{*}])=G/[E^{*},E^{*}]$, and the row and column of the
following diagram are exact:
The row is
$\Gamma_{E}/\Gamma_{T}\land\Gamma_{E}/\Gamma_{T}\xrightarrow{\;\;\alpha\;\;}G/[E_{0}^{*},E^{*}]\xrightarrow{\;\;\bar{\varphi}\;\;}{\rm TK}_{1}(E)\xrightarrow{\;\;\;\;\;\;}1,$
and the column, which shares the middle term $G/[E_{0}^{*},E^{*}]$ with the row, is
$1\xrightarrow{\;\;\;\;\;\;}{\rm ker}\widetilde{N}/[E_{0}^{*},E^{*}]\xrightarrow{\;\;\iota\;\;}G/[E_{0}^{*},E^{*}]\xrightarrow{\;\;\bar{N}\;\;}\tau(T_{0}^{*})\cap\widetilde{N}(E_{0}^{*})\xrightarrow{\;\;\;\;\;\;}1.$
Diagram 1
The map $\alpha$ is given by the following rule: for
$\gamma,\delta\in\Gamma_{E}$, take any non-zero $x_{\gamma}\in E_{\gamma}$ and
$x_{\delta}\in E_{\delta}$. Then,
$\alpha((\gamma+\Gamma_{T})\land(\delta+\Gamma_{T}))=[x_{\gamma},x_{\delta}]$
mod $[E_{0}^{*},E^{*}]$.
As a consequence of this theorem, in analogy with what Hazrat and Wadsworth
have done for ${\rm SK}_{1}(E)$ in [5, Corollary 3.6], we have the following
corollary, which provides formulas for calculating ${\rm TK}_{1}(E)$ in
certain cases:
###### Corollary 1.2.
Let $E$ be a graded division ring with center $T$. Then, the following
assertions hold:
1. (i)
If $E$ is unramified, then ${\rm TK}_{1}(E)\cong{\rm TK}_{1}(E_{0})$.
2. (ii)
If $E$ is totally ramified, then ${\rm
TK}_{1}(E)\cong\tau(T_{0}^{*})/\mu_{e}(T_{0})$, where $e={\rm
exp}(\Gamma_{E}/\Gamma_{T})$ is the exponent of the group
$\Gamma_{E}/\Gamma_{T}$.
3. (iii)
If $E$ is semiramified, then $E_{0}$ is a field which is Galois over $T_{0}$,
and for $\mathcal{G}={\rm Gal}(E_{0}/T_{0})\cong\Gamma_{E}/\Gamma_{T}$, the
following sequence is exact:
$\mathcal{G}\land\mathcal{G}\longrightarrow
G/I_{\mathcal{G}}(E_{0}^{*})\longrightarrow{\rm TK}_{1}(E)\longrightarrow 1,$
where $G$ is a subgroup of $E_{0}^{*}$ such that
$\tau(E_{0}^{*}/[E^{*},E^{*}])=G/[E^{*},E^{*}]$, and
$I_{\mathcal{G}}(E_{0}^{*})=\left\langle c\sigma(c)^{-1}:c\in
E_{0}^{*},\sigma\in\mathcal{G}\right\rangle$.
4. (iv)
If $E$ has maximal graded subfields $L$ and $K$ which are respectively
unramified and totally ramified over $T$, then $E$ is semiramified and
$\widehat{H}^{-1}(\mathcal{G},E_{0}^{*})$, the $-1$-Tate cohomology group of
$\mathcal{G}$ with respect to $E_{0}^{*}$, is isomorphic to ${\rm SK}_{1}(E)$;
and the following sequence is exact:
$1\longrightarrow\widehat{H}^{-1}(\mathcal{G},E_{0}^{*})\longrightarrow{\rm
TK}_{1}(E)\longrightarrow\tau(T_{0}^{*})\cap
N_{E_{0}/T_{0}}(E_{0}^{*})\longrightarrow 1.$
As a bridge to the ungraded case, we also establish in the next theorem a
criterion for the group ${\rm TK}_{1}$ of a strongly tame valued division
algebra over a henselian field to be isomorphic to ${\rm TK}_{1}$ of its
associated graded division algebra:
###### Theorem 1.3.
Let $F$ be a field with henselian valuation $v$, and let $D$ be a strongly
tame division algebra with center $F$. If ${\rm TK}_{1}(D)$ contains no
element of order ${\rm char}(\overline{F})$, where $\overline{F}$ is the
residue field of $v$ on $F$, then
${\rm TK}_{1}(D)\cong{\rm TK}_{1}({\rm gr}(D)).$
Using Theorems 1.1 and 1.3, we are able to give ways to calculate the group
${\rm TK}_{1}(D)$ of a valued division algebra $D$, via its associated graded
division algebra ${\rm gr}(D)$, in some specific cases:
###### Corollary 1.4.
Let $F$ be a field with henselian valuation, and $D$ be a strongly tame
division algebra with center $F$. If ${\rm TK}_{1}(D)$ contains no element of
order ${\rm char}(\overline{F})$, then the following assertions hold:
1. (i)
If $D$ is unramified, then ${\rm TK}_{1}(D)\cong{\rm TK}_{1}(\overline{D})$.
2. (ii)
If $D$ is totally ramified, then ${\rm
TK}_{1}(D)\cong\tau(\overline{F}^{*})/\mu_{e}(\overline{F})$, where $e={\rm
exp}(\Gamma_{D}/\Gamma_{F})$.
3. (iii)
If $D$ is semiramified, then $\overline{D}$ is a field which is Galois over
$\overline{F}$, and for $\mathcal{G}={\rm
Gal}(\overline{D}/\overline{F})\cong\Gamma_{D}/\Gamma_{F}$, the following
sequence is exact:
(1.2) $\mathcal{G}\land\mathcal{G}\longrightarrow
G/I_{\mathcal{G}}(\overline{D}^{*})\longrightarrow{\rm
TK}_{1}(D)\longrightarrow 1,$
where $G$ is the subgroup of $\overline{D}^{*}$ such that
$\tau(\overline{D}^{*}/[D^{*},D^{*}])=G/[D^{*},D^{*}]$ and
$I_{\mathcal{G}}(\overline{D}^{*})=\left\langle
c\sigma(c)^{-1}:c\in\overline{D}^{*},\sigma\in\mathcal{G}\right\rangle$.
4. (iv)
If $D$ is nicely semiramified, then $\mathcal{G}={\rm
Gal}(\overline{D}/\overline{F})\cong\Gamma_{D}/\Gamma_{F}$, and
$\widehat{H}^{-1}(\mathcal{G},\overline{D}^{*})$, the $-1$-Tate cohomology
group of $\mathcal{G}$ with respect to $\overline{D}^{*}$, is isomorphic to
${\rm SK}_{1}(D)$; and the following sequence is exact:
$1\longrightarrow\widehat{H}^{-1}(\mathcal{G},\overline{D}^{*})\longrightarrow{\rm
TK}_{1}(D)\longrightarrow\tau(\overline{F}^{*})\cap
N_{\overline{D}/\overline{F}}(\overline{D}^{*})\longrightarrow 1.$
The group ${\rm TK}_{1}(D)$ of a valued division algebra $D$ with a henselian
center was also studied by Motiee in [12] (where such a group is denoted by
$T(D)$). In that paper, Motiee completely determined all torsion abelian
groups that can occur as the torsion group of the Whitehead group of a
division algebra of prime index ([12, Theorem 2]). This result can be
considered a generalization of May’s Theorem about the torsion group of the
multiplicative group of a field ([10, Theorem 3]). The key idea used to prove
[12, Theorem 2] is that, for a given torsion locally cyclic group $A$, Motiee
constructed a valued division algebra $D$ of prime index over a henselian
field such that ${\rm TK}_{1}(D)\cong A$, using [12, Theorem 10], which
provides a connection between ${\rm TK}_{1}$ of the valued division ring $D$
and that of the residue division ring $\overline{D}$. In the current paper, we
give another way to obtain this result, as well as extend it, using the study
in the graded setting.
The paper is organized as follows. In Section 2, we briefly recall some
background on the theory of graded division algebras indexed by a totally
ordered abelian group, as well as the theory of valued division rings. For a
full reference on these topics, we refer the reader to the excellent book by
Tignol and Wadsworth [18]. Section 3 is devoted to a study of the group ${\rm
TK}_{1}$ of a graded division algebra $E$ which is finite dimensional over its
center. The main purpose of this section is to give a proof of Theorem 1.1,
which is an analogue of [5, Theorem 3.4] by Hazrat and Wadsworth,
yielding the formulas for ${\rm TK}_{1}$ of unramified, semiramified, and
totally ramified graded division algebras given in Corollary 1.2. In Section
4, we provide the proofs of Theorem 1.3 and of Corollary 1.4. Also, in each
section, we present some examples of calculating ${\rm TK}_{1}$ of graded
division algebras and valued division algebras.
## 2\. Preliminaries
This section is devoted to recalling basic concepts from the theory of graded
division algebras indexed by a totally ordered abelian group that will be used
throughout the current paper. For an extensive study of such algebras, we
refer the readers to an excellent book by Tignol and Wadsworth [18].
### 2.1. Graded division algebras
A unital ring $R$ is said to be a graded ring if
$R=\bigoplus_{\gamma\in\Gamma}R_{\gamma}$, where $\Gamma$ is an abelian group,
and each $R_{\gamma}$ is a subgroup of $(R,+)$ such that
$R_{\gamma}R_{\delta}\subseteq R_{\gamma+\delta}$ for all
$\gamma,\delta\in\Gamma$. Put
$\displaystyle\Gamma_{R}=\\{\gamma\in\Gamma:R_{\gamma}\neq 0\\},\text{ the
grade set of $R$};$ $\displaystyle
R^{h}=\bigcup_{\gamma\in\Gamma_{R}}R_{\gamma},\text{ the set of homogeneous
elements of $R$}.$
For a homogeneous element of degree $\gamma$, that is, an $r\in
R_{\gamma}\backslash\\{0\\}$, we write $\deg(r)=\gamma$. Note that $R_{0}$ is
a subring of $R$, and for each $\gamma\in\Gamma_{R}$, the additive group
$R_{\gamma}$ is a left and right $R_{0}$-module. A subring $S$ of $R$ is a
graded subring if $S=\bigoplus_{\gamma\in\Gamma_{R}}(S\cap R_{\gamma})$. For
example, the center of $R$, denoted by $Z(R)$, is a graded subring of $R$.
Let $R$ be a graded ring. Then, a graded left $R$-module $M$ is a left
$R$-module with a grading $M=\bigoplus_{\gamma\in\Gamma}M_{\gamma}$, where
each $M_{\gamma}$ is an abelian group, such that
$R_{\gamma}M_{\delta}\subseteq M_{\gamma+\delta}$ for all
$\gamma,\delta\in\Gamma$. In analogy with $\Gamma_{R}$ and $R^{h}$, the
grade set $\Gamma_{M}$ and the set of homogeneous elements $M^{h}$ of $M$ are
defined similarly. We say that $M$ is a graded free $R$-module if it has a
basis as a free $R$-module consisting of homogeneous elements.
A graded ring $E=\bigoplus_{\gamma\in\Gamma}E_{\gamma}$ is called a graded
division ring if $\Gamma$ is a torsion-free abelian group and every non-zero
homogeneous element of $E$ has a multiplicative inverse. Recall that the grade
set $\Gamma_{E}$ is then actually a group. Note also that $E_{0}$ is a
division ring, and each $E_{\gamma}$ is a 1-dimensional left and right
$E_{0}$-vector space for every $\gamma\in\Gamma_{E}$. The requirement that
$\Gamma$ be torsion-free is made because in this paper we regularly consider
graded division algebras arising from valued division rings, for which all the
grade groups appearing are torsion-free. It can be proved (see e.g. [18,
Proposition 2.3, p.35]) that $E$ has no zero divisors and that the
multiplicative group of units $E^{*}$ of $E$ coincides with
$E^{h}\backslash\\{0\\}$. Therefore, the degree map
${\rm deg}:E^{*}\to\Gamma_{E}$
is a group epimorphism with kernel $E_{0}^{*}$.
It can be proved that every graded module $M$ over a graded division ring $E$
is graded free, and that every two homogeneous bases have the same cardinality.
Thus, it can be said that $M$ is a graded vector space over $E$, and so we can
write ${\rm dim}_{E}(M)$ for the rank of $M$ as a graded free $E$-module. Let
$S\subseteq E$ be a graded subring which is a graded division ring. Then, we
can view $E$ as a graded left $S$-vector space, and we write $[E:S]$ for
$\dim_{S}(E)$. Then, we have the “Fundamental Equality”,
$[E:S]=[E_{0}:S_{0}]\,|\Gamma_{E}:\Gamma_{S}|,$
where $[E_{0}:S_{0}]$ is the dimension of $E_{0}$ as a vector space over
$S_{0}$ and $|\Gamma_{E}:\Gamma_{S}|$ denotes the index in the group
$\Gamma_{E}$ of its subgroup $\Gamma_{S}$ (see [18, Section 2.1.2]).
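As a quick sanity check of this equality (a toy example, not taken from [18]):
let $T=k[x,x^{-1}]$ be the graded field of Laurent polynomials over a field
$k$, with $\deg x=1$, and let $S=k[x^{2},x^{-2}]$. Then $T_{0}=S_{0}=k$,
$\Gamma_{T}=\mathbb{Z}$ and $\Gamma_{S}=2\mathbb{Z}$, so that
$[T:S]=[T_{0}:S_{0}]|\Gamma_{T}:\Gamma_{S}|=1\cdot 2=2$, as witnessed by the
homogeneous basis $\\{1,x\\}$ of $T$ over $S$.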
A graded field $T$ is a commutative graded division ring. Such a $T$ is an
integral domain, so it has a quotient field, which is denoted by $q(T)$. If
$E$ is a graded division algebra, then its center $Z(E)$ is clearly a graded
subfield of $E$. A graded division algebra $E$ with center $T$ is said to be
unramified if $\Gamma_{E}=\Gamma_{T}$. In this case, it follows from the above
equality that $[E:T]=[E_{0}:T_{0}]$. On the other hand, $E$ is said to be
totally ramified if $E_{0}=T_{0}$. In an intermediate case, $E$ is said to be
semiramified if $E_{0}$ is a field and
$[E_{0}:T_{0}]=|\Gamma_{E}:\Gamma_{T}|={\rm ind}(E)$, where the index of $E$
is defined by ${\rm ind}(E)^{2}=[E:T]$.
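For instance, the Laurent extension $E=D[x,x^{-1}]$ of Example 1 below is
unramified over its center $F[x,x^{-1}]$, whereas the graded division algebras
obtained from the $\mathcal{T}$ construction in Example 2 are totally
ramified.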
### 2.2. Some group theory and the group ${\rm TK}_{1}$ of graded division
algebras
Before giving the definition of the group ${\rm TK}_{1}$ of a graded division
algebra, it is convenient to recall some basic concepts from group theory. For
a group $G$ and $a,b\in G$, we set $[a,b]=aba^{-1}b^{-1}$, the multiplicative
commutator of $a$ and $b$. If $H$ and $K$ are subgroups of $G$, then we let
$[H,K]$ denote the subgroup of $G$ generated by all commutators of the form
$[a,b]$, where $a\in H$ and $b\in K$; that is,
$[H,K]=\left\langle[h,k]:h\in H,k\in K\right\rangle.$
In the case when $K=H=G$, we have $[G,G]=G^{\prime}$, the commutator
subgroup of $G$.
Every group $G$ has a unique maximal torsion normal subgroup, denoted by
$\tau(G)$. The next lemma establishes the existence of such a subgroup
$\tau(G)$ and, in particular, determines $\tau(G)$ in the case when $G$ is an
abelian group. Although the proof is elementary, we include it for the
reader’s convenience.
###### Lemma 2.1.
Let $G$ be an arbitrary group. Then the unique maximal torsion normal subgroup
$\tau(G)$ of $G$ always exists, and it contains all torsion normal subgroups
of $G$. Further, $\tau(G)$ is a characteristic subgroup of $G$. If $G$ is
moreover assumed to be abelian, then $\tau(G)$ consists of all torsion
elements of $G$.
###### Proof.
Let $\mathcal{B}$ be the family of subgroups which are torsion and normal in
$G$; that is,
$\mathcal{B}=\left\\{X\leq G\;|\;X\text{ is torsion and }X\unlhd G\right\\}.$
Then $(\mathcal{B},\subseteq)$ is a partially ordered set, which is non-empty
as the subgroup $\\{1\\}\in\mathcal{B}$. Let $T$ be a chain of subgroups in
$\mathcal{B}$. Put $B=\cup_{X\in T}X$. Then, it is straightforward to check
that $B$ is a torsion subgroup which is normal in $G$, and so
$B\in\mathcal{B}$. As $X\leq B$ for all $X\in T$, we get that $B$ is an upper
bound for $T$ in $\mathcal{B}$. By Zorn’s Lemma, $\mathcal{B}$ has a maximal
element, say $\tau(G)$, which is a maximal torsion normal subgroup of $G$. To
see the uniqueness of $\tau(G)$, take $H$ to be any maximal torsion normal
subgroup of $G$. Our aim is to prove that $H=\tau(G)$. Indeed, it is clear
that $\tau(G)\leq H\tau(G)$. Because both $H$ and $\tau(G)$ are torsion and
$H$ normalizes $\tau(G)$, the group $H\tau(G)$ is torsion too. Moreover, the
normality of $H$ and $\tau(G)$ in $G$ implies that $H\tau(G)\unlhd G$. It
follows that $H\tau(G)$ belongs to $\mathcal{B}$, yielding that
$H\tau(G)=\tau(G)$ by the maximality of $\tau(G)$ in $\mathcal{B}$. This
implies that $\tau(G)=H$, proving the uniqueness of $\tau(G)$. Note that the
same argument shows that $X\tau(G)=\tau(G)$, i.e., $X\subseteq\tau(G)$, for
every torsion normal subgroup $X$ of $G$.
Next, we prove that $\tau(G)$ is characteristic in $G$. For this purpose, pick
$\varphi\in{\rm Aut}(G)$. Then $\varphi(\tau(G))$ is certainly a torsion
normal subgroup of $G$, so $\varphi(\tau(G))\subseteq\tau(G)$ by the
observation above; applying the same to $\varphi^{-1}$ gives the reverse
inclusion. Hence $\varphi(\tau(G))=\tau(G)$, proving that $\tau(G)$ is a
characteristic subgroup of $G$.
For the final conclusion, assume that $G$ is abelian, and denote by $T(G)$ the
set of all torsion elements of $G$. Then, it is clear that $\tau(G)\subseteq
T(G)$. Moreover, as $G$ is abelian, it is easy to see that $T(G)$ is indeed a
subgroup, which is normal in $G$. It follows that $T(G)=\tau(G)$ by the
uniqueness of $\tau(G)$. ∎
In view of Lemma 2.1, every abelian group $G$ has a unique subgroup $\tau(G)$
consisting of all torsion elements of $G$, which is called the torsion
subgroup of $G$. If $\tau(G)=G$, then $G$ is a torsion group; such a group can
be expressed in just one way as a direct product $G=\prod G_{p}$, where $p$
runs over all primes and $G_{p}$, the $p$-primary component of $G$, consists
of all elements of $p$-power order. If the group $G$ is locally cyclic, i.e.,
all its finitely generated subgroups are cyclic, then its $p$-primary
component $G_{p}$ is either cyclic of order $p^{r}$, where $r$ is a non-
negative integer, or of type $Z_{p^{\infty}}$ (the group of all $p^{r}$-th
roots of 1 in $\mathbb{C}$, for all $r$). It is well-known that the torsion
subgroup $\tau(F^{*})$ of the multiplicative group $F^{*}$ of a field $F$ is
always locally cyclic.
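For example, $\tau(\mathbb{Q}^{*})=\\{\pm 1\\}$;
$\tau(\mathbb{F}_{q}^{*})=\mathbb{F}_{q}^{*}$ is cyclic of order $q-1$; and
$\tau(\mathbb{C}^{*})$ is the group of all roots of unity, each of whose
$p$-primary components is of type $Z_{p^{\infty}}$.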
Now, we return our attention to the definition of the group ${\rm
TK}_{1}$ of a graded division algebra. Let
$E=\bigoplus_{\gamma\in\Gamma}E_{\gamma}$ be a graded division ring with
center $T$, where $\Gamma$ is a torsion-free abelian group. The group ${\rm
TK}_{1}(E)$ is defined to be the torsion subgroup $\tau({\rm K}_{1}(E))$ of
the Whitehead group ${\rm K}_{1}(E)$. Then, because the Whitehead group ${\rm
K}_{1}(E)$ is abelian, as a consequence of Lemma 2.1 the group ${\rm
TK}_{1}(E)$ consists of all elements of finite order of ${\rm K}_{1}(E)$.
Moreover, in view of the exact sequence (1.1), the factor group ${\rm
TK}_{1}(E)/{\rm SK}_{1}(E)$ is isomorphic to a subgroup of $\tau(T^{*})$,
which is a locally cyclic group.
### 2.3. Valued division rings and the associated graded division rings
Let $D$ be a division algebra which is finite-dimensional over its center $F$,
with a valuation $v:D^{*}\to\Gamma$. Then, by the definition of a valuation,
for all $a,b\in D^{*}$ we have
1. (1)
$v(ab)=v(a)+v(b)$;
2. (2)
$v(a+b)\geq{\rm min}\\{v(a),v(b)\\}\;\;(b\neq-a)$.
For the valuation $v$, let
$\displaystyle V_{D}=\\{a\in D^{*}:v(a)\geq 0\\}\cup\\{0\\},\text{ the value
ring of }v;$ $\displaystyle M_{D}=\\{a\in D^{*}:v(a)>0\\}\cup\\{0\\},\text{
the unique maximal ideal of }V_{D};$
$\displaystyle\overline{D}=V_{D}/M_{D},\text{ the residue division ring of
}v\text{ on }D;\text{ and }$ $\displaystyle\Gamma_{D}={\rm im}(v),\text{ the
value group of the valuation}.$ $\displaystyle\text{Set }U_{D}=\\{d\in
D^{*}:v(d)=0\\}\text{ so that }U_{D}=V_{D}^{*}.$
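As a standard illustration (recorded here only for orientation), take
$D=\mathbb{Q}$ with the $p$-adic valuation $v_{p}$: then
$V_{D}=\mathbb{Z}_{(p)}$, the localization of $\mathbb{Z}$ at the prime $p$,
$M_{D}=p\mathbb{Z}_{(p)}$, $\overline{D}=\mathbb{F}_{p}$, and
$\Gamma_{D}=\mathbb{Z}$.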
The graded division ring ${\rm gr}(D)$ associated to the valued division
algebra $D$ is defined as follows: For each $\gamma\in\Gamma_{D}$, put
$\displaystyle D^{\geq\gamma}=\\{d\in
D^{*}:v(d)\geq\gamma\\}\cup\\{0\\},\text{ an additive subgroup of }D;$
$\displaystyle D^{>\gamma}=\\{d\in D^{*}:v(d)>\gamma\\}\cup\\{0\\},\text{ an
additive subgroup of }D^{\geq\gamma};\text{ and }$ $\displaystyle{\rm
gr}(D)_{\gamma}=D^{\geq\gamma}/D^{>\gamma}.$
Define
${\rm gr}(D)=\bigoplus_{\gamma\in\Gamma_{D}}{\rm gr}(D)_{\gamma}.$
Then, the inclusion
$D^{>\gamma}D^{\geq\delta}+D^{\geq\gamma}D^{>\delta}\subseteq
D^{>(\gamma+\delta)}$, for all $\gamma,\delta\in\Gamma_{D}$, ensures that the
multiplication on ${\rm gr}(D)$ induced by the multiplication on $D$ is well-
defined. It can be checked that ${\rm gr}(D)$ is indeed a graded division
ring, called the associated graded division ring of $D$. Also, it is clear
that ${\rm gr}(D)_{0}=\overline{D}$ and $\Gamma_{{\rm gr}(D)}=\Gamma_{D}$. For
each $d\in D^{*}$, we write $\tilde{d}$ for the image $d+D^{>v(d)}$ of $d$ in
${\rm gr}(D)_{v(d)}$. Then, the map
$D^{*}\to{\rm gr}(D)^{*}\;\;\;\text{ given by }\;\;\;d\mapsto\tilde{d}$
is a group epimorphism with kernel $1+M_{D}$.
is a group epimorphism with kernel $1+M_{D}$. It was shown in [5, Corollary
4.4] that the reduced norm maps for $D$ and ${\rm gr}(D)$ are related by
(2.1) $\displaystyle{\rm Nrd}_{{\rm gr}(D)}(\widetilde{a})=\widetilde{{\rm
Nrd}_{D}(a)},\text{ for all }a\in D^{*}.$
The restriction $v|_{F}$ of the valuation $v$ on $D$ to its center $F$ is a
valuation on $F$, which induces a corresponding graded field ${\rm gr}(F)$. It
can be checked that ${\rm gr}(D)$ is a graded ${\rm gr}(F)$-algebra which
satisfies:
(2.2) $\displaystyle[{\rm gr}(D):{\rm
gr}(F)]=[\overline{D}:\overline{F}]|\Gamma_{D}/\Gamma_{F}|\leq[D:F]<\infty.$
Let $F$ be a field with a henselian valuation $v$. Recall that a finite degree
field extension $L$ of $F$ is defined to be unramified over $F$, with respect
to the unique extension of $v$ to $L$, if $[\overline{L}:\overline{F}]=[L:F]$
and $\overline{L}$ is separable over $\overline{F}$ (hence $L$ is separable
over $F$). Also, a field extension $L$ of $F$ of finite degree $n$ is said to
be tamely ramified or tame over $F$ if $\overline{L}$ is a separable field
extension of $\overline{F}$ and ${\rm char}(\overline{F})\nmid
n/[\overline{L}:\overline{F}]$. Such an $L$ satisfies the equality
$[L:F]=[\overline{L}:\overline{F}]|\Gamma_{L}/\Gamma_{F}|=[{\rm gr}(L):{\rm
gr}(F)]$.
Let $D$ be a division ring with center $F$ and $[D:F]<\infty$. Then the
valuation $v$ on $F$ extends uniquely to a valuation on $D$. With respect to
this valuation, we say that $D$ is tamely ramified or tame if
$Z(\overline{D})$ is separable over $\overline{F}$ and ${\rm
char}(\overline{F})\nmid{\rm ind}(D)/({\rm
ind}(\overline{D})[Z(\overline{D}):\overline{F}])$. Recall from [9,
Proposition 1.7] that whenever the field extension
$Z(\overline{D})/\overline{F}$ is separable, it is abelian Galois. Also, it
follows from [7, Proposition 4.3] that $D$ is tame if and only if $[{\rm
gr}(D):{\rm gr}(F)]=[D:F]$ and $Z({\rm gr}(D))={\rm gr}(F)$, if and only if
$D$ is split by a maximal tamely ramified extension of $F$, if and only if
${\rm char}(\overline{F})=0$, or ${\rm char}(\overline{F})=p\neq 0$ and the
$p$-primary component of $D$ is inertially split, i.e., split by a maximal
unramified extension of $F$. We say that $D$ is strongly tame if ${\rm
char}(\overline{F})\nmid{\rm ind}(D)$. It is clear that strong tameness
implies tameness. Note that the term “tameness” used in [12] corresponds
exactly to the term “strong tameness” here.
Let $D$ be a division ring with a henselian center $F$. Then the valuation on
$F$ extends uniquely to a valuation on $D$. With respect to this valuation, we
say that $D$ is unramified or inertial over $F$ if
$[\overline{D}:\overline{F}]=[D:F]$ and $Z(\overline{D})$ is separable over
$\overline{F}$. We say that $D$ is semiramified if $\overline{D}$ is a field
and $[\overline{D}:\overline{F}]=|\Gamma_{D}:\Gamma_{F}|={\rm ind}(D)$. At the
other extreme, $D$ is said to be totally ramified over $F$ if
$|\Gamma_{D}:\Gamma_{F}|=[D:F]$, or equivalently, $\overline{D}=\overline{F}$
and $D$ is defectless over $F$. Recall from [9, p.128] that a division algebra
is called nicely semiramified if it has a maximal subfield inertial over $F$
and another maximal subfield totally ramified over $F$.
## 3\. Torsion subgroup of graded division algebras
For further use, let us make some observations about certain subgroups of the
multiplicative group $E^{*}$ of a graded division ring $E$. Recall first that
$E^{*}$ coincides with the set of all non-zero homogeneous elements of $E$.
The next remark follows directly from [5, Proposition 3.2] and is used
regularly throughout the present paper.
###### Remark 1.
Let $E$ be a graded division ring with center $T$, and let $n={\rm ind}(E)$.
1. (i)
If $K$ is any graded subfield of $E$ containing $T$ and $a\in K$, then
${\rm Nrd}_{E}(a)=N_{K/T}(a)^{n/[K:T]}.$
2. (ii)
For $\gamma\in\Gamma_{E}$, if $a\in E_{\gamma}$ then ${\rm Nrd}_{E}(a)\in
E_{n\gamma}$.
3. (iii)
Set $\delta={\rm ind}(E)/({\rm ind}(E_{0})[Z(E_{0}):T_{0}])$. If $a\in E_{0}$,
then
${\rm Nrd}_{E}(a)=N_{Z(E_{0})/T_{0}}{\rm Nrd}_{E_{0}}(a)^{\delta}\in T_{0}.$
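For instance, taking $K=T$ in (i) gives ${\rm Nrd}_{E}(a)=a^{n}$ for every
$a\in T^{*}$.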
Remark 1 helps us to prove the following useful lemma:
###### Lemma 3.1.
Let $E$ be a graded division ring and $a\in E^{*}$. If $a$ is radical over
$E^{(1)}$, that is, if some positive power of $a$ lies in $E^{(1)}$, then
$a\in E_{0}^{*}$. In particular, we have
$[E_{0}^{*},E^{*}]\subseteq[E^{*},E^{*}]\subseteq E^{(1)}\subseteq E_{0}^{*}$
and $\tau(E^{*})\subseteq E_{0}^{*}.$
###### Proof.
As we have mentioned before, $E^{*}$ consists only of homogeneous elements of
$E$. Since $a$ is a unit, it follows that $a\in E_{\gamma}$ for some
$\gamma\in\Gamma_{E}$. Let $k\geq 1$ be an integer such that $a^{k}=b\in
E^{(1)}$. We have
$1={\rm Nrd}_{E}(b)={\rm Nrd}_{E}(a^{k})={\rm Nrd}_{E}(a)^{k}.$
Since ${\rm Nrd}_{E}(a)\in E_{n\gamma}$ (Remark 1(ii)), we have $1={\rm
Nrd}_{E}(a)^{k}\in E_{nk\gamma}$. This implies that $\gamma=0$, and so $a\in
E_{0}^{*}$. The remaining assertions are immediate. ∎
###### Corollary 3.2.
Let $E$ be a graded division ring. Then $E_{0}^{*}$ is a normal subgroup of
$E^{*}$.
###### Proof.
The assertion follows from the fact that $[E^{*},E^{*}]\subseteq E_{0}^{*}$. ∎
###### Lemma 3.3.
Let $E$ be a graded division ring. If $G$ and $G_{0}$ are subgroups of $E^{*}$
and $E_{0}^{*}$ such that $\tau(E^{*}/[E^{*},E^{*}])=G/[E^{*},E^{*}]$ and
$\tau(E_{0}^{*}/[E^{*},E^{*}])=G_{0}/[E^{*},E^{*}]$ respectively, then
$G=G_{0}$.
###### Proof.
For any $x\in G$, the coset $\overline{x}=x\left[{E^{*},E^{*}}\right]$ is a
torsion element of $E^{*}/[E^{*},E^{*}]$, so there exists $k\in\mathbb{N}$
such that
$\overline{x}^{k}=x^{k}\left[{E^{*},E^{*}}\right]=\left[{E^{*},E^{*}}\right],\text{ that is, }x^{k}\in\left[{E^{*},E^{*}}\right].$
Thus, $G$ is radical over $[E^{*},E^{*}]\subseteq E^{(1)}$; and so, by Lemma
3.1, we obtain that $G\subseteq E_{0}^{*}$. It follows that $G/[E^{*},E^{*}]$
is a (torsion) subgroup of $E_{0}^{*}/[E^{*},E^{*}]$. As
$G/[E^{*},E^{*}]\unlhd E_{0}^{*}/[E^{*},E^{*}]$, we get that
$G/[E^{*},E^{*}]\subseteq G_{0}/[E^{*},E^{*}]$, which implies that $G\subseteq
G_{0}$. For the opposite inclusion, we note first that
$E_{0}^{*}/[E^{*},E^{*}]\unlhd E^{*}/[E^{*},E^{*}]$, as $E_{0}^{*}\unlhd
E^{*}$. Now, because $G_{0}/[E^{*},E^{*}]$ is characteristic in
$E_{0}^{*}/[E^{*},E^{*}]$, it follows that $G_{0}/[E^{*},E^{*}]\unlhd
E^{*}/[E^{*},E^{*}]$. Thus, $G_{0}/[E^{*},E^{*}]$ is a torsion subgroup which
is normal in $E^{*}/[E^{*},E^{*}]$. It follows that
$G_{0}/[E^{*},E^{*}]\subseteq G/[E^{*},E^{*}]$, and so $G_{0}\subseteq G$. The
lemma is proved. ∎
###### Remark 2.
Let $E$ be a graded division ring with center $T$. Then, the degree map
$E^{*}\to\Gamma_{E}$ induces a surjective map $E^{*}\to\Gamma_{E}/\Gamma_{T}$
given by $a\mapsto{\rm deg}(a)+\Gamma_{T}$ with kernel $T^{*}E_{0}^{*}$.
Therefore, we have the following short exact sequence:
(3.1) $\displaystyle
1\xrightarrow{\;\;\;\;\;\;}T^{*}E_{0}^{*}\xrightarrow{\;\;\;\;\;\;\;}E^{*}\xrightarrow{\;\;\;\;\;\;\;}\Gamma_{E}/\Gamma_{T}\xrightarrow{\;\;\;\;\;\;\;}1.$
Next, we need to recall the definition of the cohomology group. Let $G$ be a
finite abelian group and $M$ a $G$-module, with the operation written
multiplicatively; that is, the group action $G\times M\to
M:(\sigma,c)\mapsto\sigma(c)$ satisfies
$\sigma(m_{1}m_{2})=\sigma(m_{1})\sigma(m_{2})$ and
$(\sigma_{1}\sigma_{2})(m)=\sigma_{1}(\sigma_{2}(m))$.
###### Lemma 3.4.
Let $G$ be a finite group and $M$ a $G$-module. Let $N_{G}:M\to M$ be the
$G$-norm map given by $c\mapsto\prod_{\sigma\in G}\sigma(c)$. Put
$I_{G}(M)=\left\langle c\sigma(c)^{-1}:c\in M,\sigma\in G\right\rangle$. Then
$I_{G}(M)$ is a subgroup of ${\rm ker}N_{G}$.
###### Proof.
It suffices to show that $I_{G}(M)\subseteq{\rm ker}N_{G}$. For this purpose,
we need to prove that $N_{G}(c\sigma(c)^{-1})=1$ for any $c\in M$ and
$\sigma\in G$. Indeed, since the map $\sigma_{1}\mapsto\sigma_{1}\sigma$ is a
bijection of $G$, we have
$\displaystyle N_{G}(c\sigma(c)^{-1})=\prod\limits_{\sigma_{1}\in G}{\sigma_{1}\left({c\sigma\left(c\right)^{-1}}\right)}=\prod\limits_{\sigma_{1}\in G}{\sigma_{1}\left(c\right)}\prod\limits_{\sigma_{1}\in G}{\sigma_{1}\left({\sigma\left(c\right)^{-1}}\right)}=\prod\limits_{\sigma_{1}\in G}{\sigma_{1}\left({\sigma\left(c\right)}\right)}\prod\limits_{\sigma_{1}\in G}{\sigma_{1}\left({\sigma\left(c\right)^{-1}}\right)}=\prod\limits_{\sigma_{1}\in G}{\sigma_{1}\left(1\right)}=1.$
∎
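As a concrete sanity check of this computation (a toy verification, not part of the argument of the paper), the following Python snippet confirms $N_{G}(c\sigma(c)^{-1})=1$ for the module $M=\mathbb{F}_{9}^{*}$ over $G={\rm Gal}(\mathbb{F}_{9}/\mathbb{F}_{3})$:

```python
# Represent a + b*i in F_9 = F_3(i), i^2 = -1, as the pair (a, b), a, b in {0,1,2}.

def mul(x, y):
    (a, b), (c, d) = x, y
    return ((a * c - b * d) % 3, (a * d + b * c) % 3)

def frob(x):                 # the nontrivial automorphism sigma: a + b*i -> a - b*i
    a, b = x
    return (a, (-b) % 3)

def inv(x):                  # inverse in F_9^*: x^{-1} = x^7, since |F_9^*| = 8
    y = x
    for _ in range(6):       # y runs through x^2, ..., x^7
        y = mul(y, x)
    return y

units = [(a, b) for a in range(3) for b in range(3) if (a, b) != (0, 0)]
for c in units:
    t = mul(c, inv(frob(c)))   # t = c * sigma(c)^{-1}
    norm = mul(t, frob(t))     # N_G(t) = t * sigma(t)
    assert norm == (1, 0)
print("N_G(c sigma(c)^{-1}) = 1 for all c in F_9^*")
```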
###### Definition 3.5.
The $-1$-Tate cohomology group of $G$ with respect to $M$ is defined to be the
following factor group:
$\widehat{H}^{-1}(G,M)={\rm ker}N_{G}/I_{G}(M).$
Let $K/F$ be a finite Galois extension of fields. Set $\mathcal{G}={\rm
Gal}(K/F)$. Then, for any $c\in K$, it is well-known that the norm of $c$ is
given by
$N_{K/F}(c)=\prod_{\sigma\in\mathcal{G}}\sigma(c).$
###### Lemma 3.6.
Let $K/F$ be a finite Galois extension of fields. Set $\mathcal{G}={\rm
Gal}(K/F)$, so that $|\mathcal{G}|=[K:F]<\infty$. Define an action
$\mathcal{G}\times K^{*}\to K^{*}$ by $(\sigma,c)\mapsto\sigma(c)$. Then
$K^{*}$ becomes a $\mathcal{G}$-module. Moreover, the $\mathcal{G}$-norm map
$N_{\mathcal{G}}:K^{*}\to K^{*}$ coincides with the ordinary field norm
$N_{K/F}$, and $I_{\mathcal{G}}(K^{*})=\left\langle c\sigma(c)^{-1}:c\in
K^{*},\sigma\in\mathcal{G}\right\rangle$. Consequently, we have
$\widehat{H}^{-1}(\mathcal{G},K^{*})={\rm ker}N_{K/F}/I_{\mathcal{G}}(K^{*}).$
###### Proof.
For each $a\in K^{*}$, we have
$N_{\mathcal{G}}(a)=\prod_{\sigma\in\mathcal{G}}\sigma(a)=N_{K/F}(a).$
This implies that $N_{\mathcal{G}}$ coincides with $N_{K/F}$. Also, it is
clear that $I_{\mathcal{G}}(K^{*})=\left\langle c\sigma(c)^{-1}:c\in
K^{*},\sigma\in\mathcal{G}\right\rangle$. The final conclusion follows
immediately. ∎
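For example (a standard fact, recorded here only for illustration), when
$\mathcal{G}$ is cyclic, Hilbert’s Theorem 90 states precisely that ${\rm
ker}N_{K/F}=I_{\mathcal{G}}(K^{*})$, so that
$\widehat{H}^{-1}(\mathcal{G},K^{*})=1$. Concretely, for
$K/F=\mathbb{C}/\mathbb{R}$ with $\mathcal{G}=\\{{\rm
id},z\mapsto\bar{z}\\}$, we have $N_{\mathbb{C}/\mathbb{R}}(z)=z\bar{z}=|z|^{2}$,
the kernel is the unit circle, and every $w$ with $|w|=1$ can be written as
$w=z\bar{z}^{-1}$ (take $z=e^{i\arg(w)/2}$), so the quotient is indeed
trivial.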
Before giving the proof of Theorem 1.1, let us explain the notation
$\Gamma_{E}/\Gamma_{T}\land\Gamma_{E}/\Gamma_{T}$ appearing in Diagram 1. Such
an explanation can be found in [5, Lemma 3.5]. However, for the reader’s
convenience, we include the construction of the group
$\Gamma_{E}/\Gamma_{T}\land\Gamma_{E}/\Gamma_{T}$ here. Let $P$ be a group,
and $S$ a subgroup of $P$ such that $[P,P]\subseteq S$. Let $Q=P/(SZ(P))$.
Then, we have $[[P,P],[P,P]]\subseteq[S,P]$, from which it follows that
$[S,P]$ is a normal subgroup of $[P,P]$ with abelian factor. Consider the map
$\eta:P\times P\to[P,P]/[S,P]\text{\;\;\;\; given by \;\;\;\;}(a,b)\mapsto
aba^{-1}b^{-1}[S,P].$
For any $a,b,c\in P$, we have $[a,bc]=[a,b][b,[a,c]][a,c]$. As
$[b,[a,c]]\in[S,P]$, we have $\eta(a,bc)=\eta(a,b)\eta(a,c)$. So $\eta$ is
multiplicative in the second variable. Likewise, $\eta$ is multiplicative in
the first variable.
Let $x$ be a generator of $[SZ(P),P]$; then there exist $a\in S$, $b\in Z(P)$
and $c\in P$ such that $x=abc(ab)^{-1}c^{-1}=abcb^{-1}a^{-1}c^{-1}$. As $b\in
Z(P)$, we have $x=aca^{-1}c^{-1}\in[S,P]$, so $[SZ(P),P]\subseteq[S,P]$.
Let $\eta^{\prime}:Q\times Q\to[P,P]/[S,P]$ be given by
$(aSZ(P),bSZ(P))\mapsto[a,b][S,P]$. Take any $a,b,c,d\in P$ such that
$(aSZ(P),bSZ(P))=(cSZ(P),dSZ(P))$. We have
$[a,b][S,P]=[a,b][a^{-1}c,b][c,b^{-1}d][S,P]=[c,b][c,b^{-1}d][S,P]=[c,d][S,P]$,
which means that $\eta^{\prime}$ is well-defined and multiplicative. So
$\eta^{\prime}$ induces a well-defined group homomorphism
$\eta^{\prime\prime}:Q\otimes_{\mathbb{Z}}Q\to[P,P]/[S,P]$ given by
$aSZ(P)\otimes bSZ(P)\mapsto[a,b][S,P]$, which is obviously surjective. As
$\eta^{\prime\prime}(\zeta\otimes\zeta)=[S,P]$ for all $\zeta\in Q$, we obtain
an induced epimorphism $Q\land Q\to[P,P]/[S,P]$. Applying this with
$P=E^{*}$ and $S=E_{0}^{*}$, so that $Z(E^{*})=T^{*}$, $SZ(P)=E_{0}^{*}T^{*}$
and $Q\cong\Gamma_{E}/\Gamma_{T}$ by Remark 2, yields precisely the map
$\alpha$ of Diagram 1.
The proof of Theorem 1.1. That $G$ is a subgroup of $E_{0}^{*}$ follows
immediately from Lemma 3.1, and that
$\tau(E_{0}^{*}/[E^{*},E^{*}])=G/[E^{*},E^{*}]$ follows from Lemma 3.3.
Now, we consider the column in Diagram 1. For convenience, we present the
sequence here:
$1\xrightarrow{\;\;\;\;\;\;\;\;}{\rm
ker}\widetilde{N}/[E_{0}^{*},E^{*}]\xrightarrow{\;\;\;\;\iota\;\;\;\;}G/[E_{0}^{*},E^{*}]\xrightarrow{\;\;\;\;\bar{N}\;\;\;\;}\tau(T_{0}^{*})\cap\widetilde{N}(E_{0}^{*})\xrightarrow{\;\;\;\;\;\;\;\;\;}1$
We first show how the map $\iota$ is defined. In view of Remark 1(iii), we
obtain that ${\rm ker}\widetilde{N}\subseteq E^{(1)}$. Because ${\rm
SK}_{1}(E)=E^{(1)}/[E^{*},E^{*}]$ is a torsion group, we get that
$E^{(1)}\subseteq G$. It follows that ${\rm ker}\widetilde{N}\subseteq G$,
which implies that the embedding $\iota$ in the above sequence is
well-defined.
Now we construct the surjective map $\bar{N}$ in the column. In view of Lemma
3.3, we conclude that $G\subseteq E_{0}^{*}$. For each $g\in G$, there exists
$m\geq 1$ such that $g^{m}\in[E^{*},E^{*}]$, and so $1={\rm
Nrd}_{E}(g^{m})={\rm Nrd}_{E}(g)^{m}$. Thus, by Remark 1(iii), we get that
$1={\rm Nrd}_{E}(g^{m})={\rm Nrd}_{E}(g)^{m}=N_{Z(E_{0})/T_{0}}{\rm
Nrd}_{E_{0}}(g)^{\delta m}=\widetilde{N}(g)^{\delta m}.$
This means that
$\widetilde{N}(G)\subseteq\tau(T_{0}^{*})\cap\widetilde{N}(E_{0}^{*})$.
Moreover, as $[E_{0}^{*},E^{*}]\subseteq{\rm ker}\widetilde{N}$, the map
$\bar{N}:G/[E_{0}^{*},E^{*}]\to\tau(T_{0}^{*})\cap\widetilde{N}(E_{0}^{*})\text{
\;\;\;\; given by \;\;\;\; }a[E_{0}^{*},E^{*}]\mapsto\widetilde{N}(a)$
is well-defined. To prove that this map is surjective, we take $b$ to be an
arbitrary element in $\tau(T_{0}^{*})\cap\widetilde{N}(E_{0}^{*})$. As $b$ is
an element of $\widetilde{N}(E_{0}^{*})$, there exists an element $a$ of
$E_{0}^{*}$ such that $b=\widetilde{N}(a)$. Moreover, because
$b\in\tau(T_{0}^{*})$, there is an integer $k$ for which $b^{k}=1$, yielding
that $\widetilde{N}\left(a^{k}\right)=1$. It follows that
$a^{k}\in\ker\widetilde{N}\subseteq G$, and so we can find an integer $m$ such
that $a^{km}=\left(a^{k}\right)^{m}\in[E^{*},E^{*}]$. Hence,
$a[E^{*},E^{*}]\in\tau(E^{*}_{0}/[E^{*},E^{*}])=G/[E^{*},E^{*}]$, which leads
to $a\in G$. Therefore, we finally obtain that
$\bar{N}(a[E_{0}^{*},E^{*}])=b$, proving that the map $\bar{N}$ is surjective.
Next, we prove that the column is exact at $G/[E_{0}^{*},E^{*}]$. To prove
this, as ${\rm im}\,\iota={\rm ker}\widetilde{N}/[E_{0}^{*},E^{*}]$, we need to
show that ${\rm ker}\bar{N}={\rm ker}\widetilde{N}/[E_{0}^{*},E^{*}]$. Let us
make the following computation:
$\displaystyle{\rm ker}\bar{N}$ $\displaystyle=\\{a[E_{0}^{*},E^{*}]|a\in
G\text{ and }\bar{N}(a[E_{0}^{*},E^{*}])=1\\}$
$\displaystyle=\\{a[E_{0}^{*},E^{*}]|a\in G\text{ and }\widetilde{N}(a)=1\\}$
$\displaystyle=\\{a[E_{0}^{*},E^{*}]|a\in\ker\widetilde{N}\\}$
$\displaystyle={\rm ker}\widetilde{N}/[E_{0}^{*},E^{*}].$
The above computations show that the column is exact. Now, we consider the row
of Diagram 1:
$\Gamma_{E}/\Gamma_{T}\land\Gamma_{E}/\Gamma_{T}\xrightarrow{\;\;\;\alpha\;\;\;}G/[E_{0}^{*},E^{*}]\xrightarrow{\;\;\;\;\bar{\varphi}\;\;\;\;}{\rm
TK}_{1}(E)\xrightarrow{\;\;\;\;\;\;\;\;\;}1$
First, let us explain the construction of the map $\bar{\varphi}$. Let
$\varphi:G\to G/[E^{*},E^{*}]={\rm TK}_{1}(E)$ be the natural epimorphism.
Because $[E_{0}^{*},E^{*}]\subseteq[E^{*},E^{*}]$, the map $\varphi$ induces
an epimorphism
$\bar{\varphi}:G/[E_{0}^{*},E^{*}]\to{\rm TK}_{1}(E)\text{\;\;\;\; given by
\;\;\;\;}a[E_{0}^{*},E^{*}]\mapsto a[E^{*},E^{*}].$
Now, we show that the sequence is exact at $G/[E_{0}^{*},E^{*}]$. By the
definition of $\alpha$, a simple calculation shows that ${\rm
im}\alpha=[E^{*},E^{*}]/[E_{0}^{*},E^{*}]$. Moreover, we have
$\displaystyle\ker\bar{\varphi}$
$\displaystyle=\\{a[E_{0}^{*},E^{*}]|a[E^{*},E^{*}]=[E^{*},E^{*}]\\}$
$\displaystyle=\\{a[E_{0}^{*},E^{*}]|a\in[E^{*},E^{*}]\\}$
$\displaystyle=[E^{*},E^{*}]/[E_{0}^{*},E^{*}].$
This proves the exactness of the row. The proof of Theorem 1.1 is complete.
Before giving a proof of Corollary 1.2, let us record a lemma for further use.
###### Lemma 3.7.
Let $E$ be a graded division ring with center $T$. If $E$ is semiramified,
then $E_{0}$ is a field which is Galois over $T_{0}$, and for
$\mathcal{G}={\rm Gal}(E_{0}/T_{0})\cong\Gamma_{E}/\Gamma_{T}$, we have
$E^{(1)}/[E_{0}^{*},E^{*}]=\widehat{H}^{-1}(\mathcal{G},E_{0}^{*})$. In
particular, we have $[E_{0}^{*},E^{*}]=I_{\mathcal{G}}(E_{0}^{*})$ and
$E^{(1)}={\rm ker}N_{E_{0}/T_{0}}$.
###### Proof.
Since $E$ is semiramified, we have $[E_{0}:T_{0}]=|\Gamma_{E}/\Gamma_{T}|={\rm
ind}(E)$, and $E_{0}$ is a field which is Galois over $T_{0}$. Therefore, the
group $E_{0}^{*}$ becomes a $\mathcal{G}$-module with the action defined as in
Lemma 3.6, and the map $N_{\mathcal{G}}$ coincides with $N_{E_{0}/T_{0}}$.
Consider the map $\widetilde{N}$ defined in Theorem 1.1. Because $E_{0}$ is a
field, the map ${\rm Nrd}_{E_{0}}$ is the identity map on $E_{0}$, from which
it follows that $\widetilde{N}=N_{E_{0}/T_{0}}$. Consequently, we have ${\rm
ker}N_{\mathcal{G}}={\rm ker}N_{E_{0}/T_{0}}={\rm ker}\widetilde{N}$.
Moreover, it was proved in [5, Corollary 3.6(iii)] that, if $E$ is
semiramified, then
$\widehat{H}^{-1}(\mathcal{G},E_{0}^{*})={\rm
ker}N_{\mathcal{G}}/I_{\mathcal{G}}(E_{0}^{*})={\rm
ker}\widetilde{N}/[E_{0}^{*},E^{*}]=E^{(1)}/[E_{0}^{*},E^{*}].$
This implies that $[E_{0}^{*},E^{*}]=I_{\mathcal{G}}(E_{0}^{*})$ and
$E^{(1)}={\rm ker}N_{E_{0}/T_{0}}$, and so
$E^{(1)}/[E_{0}^{*},E^{*}]=\widehat{H}^{-1}(\mathcal{G},E_{0}^{*}).$
The proof of the lemma is now complete. ∎
The proof of Corollary 1.2. (i) Since $E$ is unramified, we have
$\Gamma_{E}=\Gamma_{T}$, and so $\Gamma_{E}/\Gamma_{T}=1$. Thus, the row of
Diagram 1 says that $G/[E_{0}^{*},E^{*}]={\rm TK}_{1}(E)$. Moreover, it
follows from (3.1) that $E^{*}=E_{0}^{*}T^{*}$, and so
$[E^{*},E^{*}]=[E_{0}^{*},E^{*}]=[E_{0}^{*},E_{0}^{*}]$. Let $G_{0}$ be the
subgroup of $E_{0}^{*}$ such that
$G_{0}/[E_{0}^{*},E^{*}]=\tau(E_{0}^{*}/[E_{0}^{*},E^{*}])$. As
$[E^{*},E^{*}]=[E_{0}^{*},E_{0}^{*}]$, we obtain that
$G_{0}/[E_{0}^{*},E^{*}]={\rm TK}_{1}(E_{0})$. Furthermore, in view of Lemma
3.3, we get that $G=G_{0}$, and so ${\rm TK}_{1}(E)={\rm TK}_{1}(E_{0})$.
Part (i) is proved.
(ii) Consider the column in Diagram 1. Since $E_{0}=T_{0}\subseteq T$, we get
that $\widetilde{N}$ is the identity map on $T_{0}$ and
$[E^{*},E_{0}^{*}]=[E^{*},T_{0}^{*}]=1$. Therefore, the exactness of the
column implies that $G\cong\tau(T_{0}^{*})$. Moreover, in view of [7,
Proposition 2.1], we have $[E^{*},E^{*}]\cong\mu_{e}(T_{0})$. Thus, ${\rm
TK}_{1}(E)=G/[E^{*},E^{*}]\cong\tau(T_{0}^{*})/\mu_{e}(T_{0})$.
(iii) This assertion follows from Lemma 3.7 and the row of Diagram 1.
(iv) As $L$ is unramified over $T$, we get that
$[L:T]=[L_{0}:T_{0}]\leq[E_{0}:T_{0}]$. The maximality of $L$ in $E$ implies
that ${\rm ind}(E)=[L:T]$, and so
(3.2) ${\rm ind}(E)=[L_{0}:T_{0}]\leq[E_{0}:T_{0}].$
Similarly, because $K$ is totally ramified over $T$, we have
$[K:T]=|\Gamma_{K}:\Gamma_{T}|\leq|\Gamma_{E}:\Gamma_{T}|$. As $K$ is maximal
in $E$, we conclude that ${\rm ind}(E)=[L:T]=[K:T]$, and so
(3.3) ${\rm ind}(E)=|\Gamma_{K}:\Gamma_{T}|\leq|\Gamma_{E}:\Gamma_{T}|.$
Moreover, it follows from the Fundamental Equality that
(3.4) ${\rm ind}(E)^{2}=[E:T]=[E_{0}:T_{0}]|\Gamma_{E}:\Gamma_{T}|.$
It follows from (3.2), (3.3) and (3.4) that $[L_{0}:T_{0}]=[E_{0}:T_{0}]$ and
$|\Gamma_{K}:\Gamma_{T}|=|\Gamma_{E}:\Gamma_{T}|$, which implies that
$E_{0}=L_{0}$ and $\Gamma_{K}=\Gamma_{E}$. Hence $E$ is semiramified over $T$,
and (iii) applies. In particular, we have ${\rm
ker}\widetilde{N}/[E_{0}^{*},E^{*}]\cong\widehat{H}^{-1}(\mathcal{G},E_{0}^{*})$
and $\mathcal{G}\cong\Gamma_{E}/\Gamma_{T}$. Take
$\nu,\eta\in\Gamma_{E}/\Gamma_{T}$, and let $a,b\in E^{*}$ be inverse images
of $\nu,\eta$ under the degree map. The left map of the sequence in (iii)
sends $\nu\wedge\eta$ to $(aba^{-1}b^{-1})I_{\mathcal{G}}(E_{0}^{*})$. Because
$\Gamma_{E}=\Gamma_{K}$, the elements $a$ and $b$ can be chosen in $K$, so
that they commute. Thus, the left map of the sequence in (iii) is trivial,
which implies that ${\rm TK}_{1}(E)\cong
G/I_{\mathcal{G}}(E_{0}^{*})=G/[E_{0}^{*},E^{*}]$. Substituting this into the
exact column of Diagram 1 yields the desired exact sequence. Finally, the fact
that $\widehat{H}^{-1}(\mathcal{G},E_{0}^{*})\cong{\rm SK}_{1}(E)$ was proved
in [5, Corollary 3.6].
###### Remark 3.
Let $E$ be a graded division ring with center $T$. Then, the finite-
dimensionality assures that $E$ has a quotient division ring $q(E)$ obtained
by central localization; that is, $q(E)=E\otimes_{T}q(T)$, where $q(T)$ is the
quotient field of $T$. Then, we have $Z(q(E))=q(T)$ and ${\rm ind}(E)={\rm
ind}(q(E))$. It was shown in [5] that ${\rm SK}_{1}(q(E))\cong{\rm
SK}_{1}(E)$, which is a deep result. This motivates the question of whether
${\rm TK}_{1}(q(E))\cong{\rm TK}_{1}(E)$. It is not hard to see that ${\rm
TK}_{1}(E)\subseteq{\rm TK}_{1}(q(E))$. However, whether the reverse inclusion
holds is not clear.
Now, let us give some examples to demonstrate how to use Corollary 1.2 to
compute the group ${\rm TK}_{1}(E)$ for some graded division algebras $E$. The
constructions of graded division algebras used in these examples are adapted
directly from [17, Examples 11.13 and 11.14, p.544].
###### Example 1.
Let $D$ be a division ring which is finite-dimensional over its center $F$,
and $x$ an indeterminate. Let $E=D[x,x^{-1}]$ be the ring of Laurent
polynomials with coefficients taken from $D$. Then, with the usual
$\mathbb{Z}$ grading, $E$ is a graded division ring which is unramified over
its center $F[x,x^{-1}]$ with $E_{0}=D$. Thus, by Corollary 1.2(i), we have
${\rm TK}_{1}(E)\cong{\rm TK}_{1}(D)$.
In the next example, we use the $\mathcal{T}$ construction which was
investigated in [17, 9.1.3 Example, p.461]. For convenience, we briefly
present this construction here. Let $n_{1},\dots,n_{r}$ be integers with
$n_{i}\geq 2$ for all $i$. Let
$n=n_{1}\dots n_{r}\text{ and }m={\rm lcm}(n_{1},\dots,n_{r}).$
Let $k$ be any field containing a primitive $m$-th root of unity $\omega$, and
let $\omega_{i}=\omega^{m/n_{i}}$ for $i=1,\dots,r$; so $\omega_{i}$ is a
primitive $n_{i}$-th root of unity. Let $x_{1},y_{1},\dots,x_{r},y_{r}$ be
$2r$ independent indeterminates over $k$. Put
$F=k((x_{1}))((y_{1}))\dots((x_{r}))((y_{r})),$
the iterated Laurent series field (cf. [17, Example 1.1.3, p.4]); and put
$\mathcal{T}(k;n_{1},\dots,n_{r})=(x_{1},y_{1}/F)_{\omega_{1},n_{1}}\otimes_{F}\cdots\otimes_{F}(x_{r},y_{r}/F)_{\omega_{r},n_{r}},$
where each $(x_{i},y_{i}/F)_{\omega_{i},n_{i}}$ is the graded symbol algebra
of degree $n_{i}$ (see [17, Definition 2.18, p.49]). Then, the field $F$ is
henselian for the $(x_{1},y_{1},\dots,x_{r},y_{r})$-adic valuation, with
residue field $\overline{F}=k$ and $\Gamma_{F}=\mathbb{Z}^{2r}$
lexicographically ordered from right to left.
###### Example 2.
Let $D=\mathcal{T}(k;n_{1},\dots,n_{r})$ be as above. Then, $D$ is a tame
totally ramified division algebra with center $Z(D)=F$. Let $E={\rm gr}(D)$,
which is a graded division algebra totally ramified over its center $T={\rm
gr}(F)$. According to [17, Proposition 9.8], we have ${\rm ind}(E)={\rm
ind}(D)=n$ and ${\rm exp}(\Gamma_{E}/\Gamma_{T})=m$. It follows from Corollary
1.2(ii) that ${\rm TK}_{1}(E)\cong\tau(k^{*})/\mu_{m}(k)$. It was shown in
[17, Example 11.14] that ${\rm SK}_{1}(E)\cong\mu_{n}(k)/\mu_{m}(k)$, where
$\mu_{m}(k)$ is the group of $m$-th roots of unity; and thus ${\rm SK}_{1}(E)$
is a cyclic group of order $|\mu_{n}(k)|/m$. Therefore, the following short
sequence is exact:
$1\xrightarrow{\;\;\;\;\;\;}\mu_{n}(k)/\mu_{m}(k)\xrightarrow{\;\;\;\;\;\;\;}{\rm
TK}_{1}(E)\xrightarrow{\;\;\;\;\;\;}\tau(k^{*})/\mu_{n}(k)\xrightarrow{\;\;\;\;\;\;\;}1.$
Thus, by using the $\mathcal{T}$ construction, one may produce many examples of
totally ramified graded division algebras $E$ for which ${\rm TK}_{1}(E)$ is
an extension of a finite cyclic group by a locally cyclic group.
Before closing this section, let us make the following remark.
###### Remark 4.
Let $E$ be a graded division ring of prime index $p$ with center $T$. In view
of the short exact sequence (1.1), we get that
${\rm TK}_{1}(E)/{\rm SK}_{1}(E)\cong{\rm Nrd}_{D}(\tau(T^{*})),$
which is a locally cyclic group. Let $q(E)$ be the quotient division ring of
$E$. Then, we have ${\rm ind}(E)={\rm ind}(q(E))=p$. It was proved in [5,
Theorem 5.7] that ${\rm SK}_{1}(E)\cong{\rm SK}_{1}(q(E))$. Thus, by Wang’s
Theorem, we have ${\rm SK}_{1}(q(E))=1$, and hence ${\rm SK}_{1}(E)=1$. It
follows that ${\rm TK}_{1}(E)$ is a locally cyclic group. Therefore, the
torsion group of the Whitehead group of a graded division algebra of prime
index is a locally cyclic group. At this point, a question is naturally
raised: Given a locally cyclic group $A$, is there a graded division graded
algebra $E$ of prime index with ${\rm TK}_{1}(E)\cong A$? This may be
considered as a graded version of [12, Theorem 2] by Motiee, where he has
completely determined all torsion abelian groups that can occur as the torsion
subgroup of the Whitehead group of a division algebra of prime index.
## 4\. ${\rm TK}_{1}$ of associated graded division ring of a valued division
algebra
Let $F$ be a field with a valuation $v$. For each $a\in\tau(F^{*})$, there
exists $m\in\mathbb{Z}$ such that $a^{m}=1$. It follows that
$mv(a)=v(a^{m})=v(1)=0$. Since $\Gamma_{F}$ is torsion free, we get that
$v(a)=0$, and so $a\in U_{F}$. Thus, the reduction map $F\to\overline{F}$
induces a map
$U_{F}\to\overline{F}^{*}\text{\;\;\;\; given by \;\;\;\;}a\mapsto a+M_{F}$
with kernel $1+M_{F}$. In other words, we have a short exact sequence:
$\displaystyle
1\xrightarrow{\;\;\;\;\;\;}1+M_{F}\xrightarrow{\;\;\;\;\;\;\;}U_{F}\xrightarrow{\;\;\;\;\;\;\;}\overline{F}^{*}\xrightarrow{\;\;\;\;\;\;\;}1.$
Before proving the main theorems of the current section, we first give some
auxiliary results which focus on certain subgroups of $\tau(F^{*})$ for a
valued field $F$. Although these results are intended as preparation for
further use, they are interesting in their own right.
###### Lemma 4.1.
Let $F$ be a valued field. If there exists $0\neq a\in M_{F}$ such that
$(1+a)^{k}=1$ for some $k\in\mathbb{Z}$, then $0\neq{\rm
char}(\overline{F})|k$.
###### Proof.
Put $p={\rm char}(\overline{F})$. Since $a\in M_{F}$, we get $v(a)>0$. Writing
$1=(1+a)^{k}=1+ka+ba$ with $b\in M_{F}$, we get that $(k+b)a=0$, and so
$k+b=0$. It follows that $v(k)=v(b)>0$. This implies that
$\overline{k}=\overline{0}$ in $\overline{F}$, and so ${\rm
char}\overline{F}=p>0$ and $p|k$. ∎
###### Corollary 4.2.
Let $F$ be a valued field. If $\tau(1+M_{F})\neq 1$, then $p={\rm
char}(\overline{F})>0$.
###### Proof.
Let $1+a$ be a non-trivial torsion element of $1+M_{F}$. Let $k$ be the order
of $1+a$. The non-triviality implies that $a\neq 0$. It follows from the above
lemma that $p|k$. ∎
Recall that if ${\rm char}F=p>0$, then ${\rm char}\overline{F}=p$; and, in the
case that ${\rm char}F=0$, ${\rm char}\overline{F}$ is either $0$ or $p$ for
some prime $p$. So, if we write the characteristics of $F$ and $\overline{F}$
as a pair, then we get $({\rm char}F,{\rm
char}\overline{F})\in\\{(0,0),\;(0,p),\;(p,p)\\}$.
###### Corollary 4.3.
Let $F$ be a valued field. If $-1\in\tau(F^{*})_{p}$, the $p$-primary
component of $\tau(F^{*})$ for some odd prime $p$, then ${\rm char}F=2$.
###### Proof.
Since $-1\in\tau(1+M_{F})$, Corollary 4.2 implies that ${\rm
char}\overline{F}=p>0$. Let $k\in\mathbb{Z}$ such that $(-1)^{p^{k}}=1$. Since
$p^{k}$ and $2$ are coprime, there are $m,n\in\mathbb{Z}$ such that $2m+p^{k}n=1$.
Then,
$-1=(-1)^{2m+p^{k}n}=(-1)^{2m}(-1)^{p^{k}n}=1.$
It follows that $2=0$ in $F$, and so $p={\rm char}\overline{F}={\rm char}F=2$.
∎
To clarify the situation of Corollary 4.3, we give an example of a henselian
field $F$ with $\tau(1+M_{F})\neq 1$.
###### Example 3.
Let $(\mathbb{Q}_{2},v_{2})$ be the field of $2$-adic numbers, and $v_{2}$ the
$2$-adic valuation on $\mathbb{Q}_{2}$. Then $(\mathbb{Q}_{2},v_{2})$ becomes
a henselian valued field with $\overline{\mathbb{Q}_{2}}=\mathbb{F}_{2}$, the
field of $2$ elements. Then, we have $v_{2}(-2)=v_{2}(2)=1>0$, which implies
that $-2\in M_{\mathbb{Q}_{2}}$. It follows that
$-1=1+(-2)\in\tau(1+M_{\mathbb{Q}_{2}})$, so $\tau(1+M_{\mathbb{Q}_{2}})\neq
1$.
###### Corollary 4.4.
Let $F$ be a field with a henselian valuation $v$, and let $D$ be a strongly
tame $F$-central division algebra with ${\rm ind}(D)=n$. Suppose that $G$ is a
subgroup of $D^{*}$ containing $[D^{*},D^{*}]$ such that $G/[D^{*},D^{*}]$ is
$n$-torsion. Then $G\cap(1+M_{F})=1$.
###### Proof.
Assume by contradiction that $G\cap(1+M_{F})\neq 1$. Then, there exists $0\neq
a\in M_{F}$ such that $(1+a)^{n}\in[D^{*},D^{*}]$. It follows that
$1={\rm Nrd}_{D}((1+a)^{n})=(1+a)^{n^{2}}.$
It follows from Lemma 4.1 that ${\rm char}(\overline{F})|n$, a contradiction.
∎
###### Corollary 4.5.
Let $F$ be a field with a henselian valuation $v$, and let $D$ be a strongly
tame $F$-central division algebra with ${\rm ind}(D)=n$. Suppose that $G$ is a
subgroup of $D^{*}$ containing $[D^{*},D^{*}]$ such that $G/[D^{*},D^{*}]$ is
torsion. If $G/[D^{*},D^{*}]$ contains no element of order ${\rm
char}(\overline{F})$, then $G\cap(1+M_{F})=1$.
###### Proof.
Assume by contradiction that $G\cap(1+M_{F})\neq 1$. Then, there exists $0\neq
a\in M_{F}$ such that $(1+a)^{m}\in[D^{*},D^{*}]$ for some $m\in\mathbb{Z}$.
It follows that
$1={\rm Nrd}_{D}((1+a)^{m})=(1+a)^{mn}.$
It follows from Lemma 4.1 that ${\rm char}(\overline{F})|{mn}$. Since $D$ is
strongly tame, we conclude that ${\rm char}(\overline{F})|m$, which would
imply that $G/[D^{*},D^{*}]$ contains an element of order ${\rm
char}(\overline{F})$, a contradiction. ∎
###### Lemma 4.6 ([4, Lemma 1]).
Let $F$ be a field and $D$ be an $F$-central division algebra with ${\rm
ind}(D)=n$. If $N$ is a normal subgroup of $D^{*}$, then
$N^{n}\subseteq(F^{*}\cap N)[D^{*},N]$.
Next, we present another version of the Congruence Theorem, which may be
considered an analogue of the Congruence Theorem for ${\rm SK}_{1}(D)$ ([4,
Theorem 2]) and of [12, Theorem 9]. The proof of this theorem follows the
techniques used by Motiee in the proof of [12, Theorem 9].
###### Theorem 4.7 (Congruence Theorem).
Let $F$ be a field with a henselian valuation $v$, and let $D$ be a strongly
tame $F$-central division algebra with ${\rm ind}(D)=n$. Suppose that $G$ is a
subgroup of $D^{*}$ containing $[D^{*},D^{*}]$ such that $G/[D^{*},D^{*}]$ is
torsion. If $G/[D^{*},D^{*}]$ contains no element of order ${\rm
char}(\overline{F})$, then $(1+M_{D})\cap G\subseteq[D^{*},D^{*}]$.
###### Proof.
The proof for this case was given in the proof of [12, Theorem 9]. For
completeness we include a proof here. Since $1+M_{D}$ is a normal subgroup of
$D^{*}$, we may apply Lemma 4.6 to conclude that
$(1+M_{D})^{n}\subseteq(1+M_{F})[D^{*},(1+M_{D})]$. Since ${\rm
char}(\overline{F})\nmid n$, Hensel’s Lemma shows that
$(1+M_{D})^{n}=(1+M_{D})$. It follows that
$(1+M_{D})\subseteq(1+M_{F})[D^{*},(1+M_{D})]$. Moreover, in view of Corollary
4.5, we have $(1+M_{F})\cap G=1$. Hence, any $g\in(1+M_{D})\cap G$ can be
written as $g=fc$ with $f\in 1+M_{F}$ and $c\in[D^{*},(1+M_{D})]\subseteq[D^{*},D^{*}]$;
then $f=gc^{-1}\in(1+M_{F})\cap G=1$, and so $g\in[D^{*},D^{*}]$. It follows
that $(1+M_{D})\cap G\subseteq[D^{*},D^{*}]$. ∎
Theorem 4.7 has two corollaries which are interesting results proved by Motiee
and Hazrat respectively. Before presenting these corollaries, we need the
Primary Decomposition Theorem which decomposes the group ${\rm TK}_{1}(D)$ of
a central division ring $D$ into its primary components, given by Motiee in
[12, Theorem 5]. Recall that if $D$ is a division ring with center $F$ of
index $n=p_{1}^{r_{1}}p_{2}^{r_{2}}\dots p_{k}^{r_{k}}$, with distinct prime
numbers $p_{j}$’s, then it can be decomposed as
$D=D_{p_{1}}\otimes_{F}D_{p_{2}}\otimes_{F}\dots\otimes_{F}D_{p_{k}}$, where
each $D_{p_{j}}$ is a division ring with center $F$ of index $p_{j}^{r_{j}}$.
###### Theorem 4.8 (Primary decomposition).
Let $F$ be a field, and $D$ be an $F$-central division algebra. If
$D=D_{p_{1}}\otimes_{F}D_{p_{2}}\otimes_{F}\dots\otimes_{F}D_{p_{k}}$ is the
primary decomposition of $D$, then
${\rm TK}_{1}(D)=\left(\prod_{j=1}^{k}{\rm
TK}_{1}(D_{p_{j}})_{p_{j}}\right)\times\left(\prod_{p\in\mathfrak{P}}\tau(F^{*})_{p}\right),$
where $\mathfrak{P}$ is the set of all primes in
$\mathbb{N}\backslash\\{p_{1},p_{2},\dots,p_{k}\\}$.
###### Corollary 4.9 ([12, Theorem 9]).
Let $F$ be a field with a henselian valuation $v$, and let $D$ be a strongly
tame $F$-central division algebra. Suppose that $G$ is a subgroup of $D^{*}$
containing $[D^{*},D^{*}]$ such that $G/[D^{*},D^{*}]$ is torsion. If ${\rm
char}(F)={\rm char}(\overline{F})$, then $(1+M_{D})\cap
G\subseteq[D^{*},D^{*}]$.
###### Proof.
We will prove that $G/[D^{*},D^{*}]$ contains no element of order ${\rm
char}(\overline{F})$. In fact, there is nothing to prove if ${\rm
char}(\overline{F})=0$. Thus, we may assume that ${\rm
char}(\overline{F})=p>0$. Assume by contradiction that there exists $g\in
G\backslash[D^{*},D^{*}]$ such that $g^{p}\in[D^{*},D^{*}]$. It follows that
the $p$-component ${\rm TK}_{1}(D)_{p}$ of ${\rm TK}_{1}(D)$ is non-trivial.
By the Primary Decomposition Theorem, we get that $\tau(F^{*})_{p}\cong{\rm
TK}_{1}(D)_{p}\neq 1$. This is impossible because it is well-known that if
${\rm char}F=p>0$ then $\tau(F^{*})$ contains no element of order $p$. ∎
###### Corollary 4.10 ([4, Theorem 2]).
Let $F$ be a field with a henselian valuation $v$, and let $D$ be a strongly
tame $F$-central division algebra. Then $(1+M_{D})\cap
D^{(1)}\subseteq[D^{*},D^{*}]$.
###### Proof.
Let $n={\rm ind}(D)$. Because ${\rm SK}_{1}(D)=D^{(1)}/[D^{*},D^{*}]$ is
$n$-torsion and $D$ is strongly tame, we conclude that $D^{(1)}/[D^{*},D^{*}]$
contains no element of order ${\rm char}(\overline{F})$. Therefore the result
follows from Theorem 4.7. ∎
In view of the proof of Corollary 4.9, we conclude that if ${\rm char}(F)={\rm
char}(\overline{F})$, then $G/[D^{*},D^{*}]$ contains no element of order
${\rm char}(\overline{F})$. However, the converse does not hold, as we will
see in Example 5 later. Therefore, the result obtained in Theorem 4.7 properly
covers that of Corollary 4.9.
###### Remark 5.
The statement in Corollary 4.10 is also true if the hypothesis “strongly tame”
is replaced by the weaker hypothesis “tame”. This is a very beautiful result
which was proved by Hazrat and Wadsworth in [5, Theorem B.1] with a very long
and complex argument which is difficult to follow. The authors of the current
paper still do not know whether Theorem 4.7 remains true if “strongly tame” is
replaced by “tame”. It seems that the arguments used in the proof of [5,
Theorem B.1] cannot be applied here.
Before giving a proof of Theorem 1.3, we need to record an auxiliary lemma.
###### Lemma 4.11.
Let $F$ be a field with a henselian valuation $v$, and let $D$ be an $F$-central
division algebra. If $D$ is strongly tame, then ${\rm TK}_{1}({\rm gr}(D))$
contains no element of order ${\rm char}(\overline{F})$.
###### Proof.
There is nothing to prove if ${\rm char}(\overline{F})=0$. Thus, assume that
${\rm char}(\overline{F})=q>0$. Let $Q$ be the quotient division ring of ${\rm
gr}(D)$. Then ${\rm ind}(Q)={\rm ind}({\rm gr}(D))$, and so, by (2.2), we have
$\displaystyle{\rm ind}(Q)^{2}=[{\rm gr}(D):{\rm
gr}(F)]=[\overline{D}:\overline{F}]|\Gamma_{D}/\Gamma_{F}|.$
Moreover, according to [11, Theorem 3], we have the “Ostrowski theorem”:
$\displaystyle{\rm
ind}(D)^{2}=[D:F]=q^{k}[\overline{D}:\overline{F}]|\Gamma_{D}/\Gamma_{F}|,$
where $k\in\mathbb{Z}$ with $k\geq 0$. The two equations show that ${\rm
ind}(Q)$ divides ${\rm ind}(D)$. Assume by contradiction that ${\rm
TK}_{1}({\rm gr}(D))$ contains an element of order $q$; that is, there exists
$a\in{\rm gr}(D)^{*}\setminus[{\rm gr}(D)^{*},{\rm gr}(D)^{*}]$ such that
$a^{q}\in[{\rm gr}(D)^{*},{\rm gr}(D)^{*}]$. Consider $a$ as an element of
$Q^{*}$. Because $[{\rm
gr}(D)^{*},{\rm gr}(D)^{*}]\leq[Q^{*},Q^{*}]$, we get that
$a^{q}\in[Q^{*},Q^{*}]$. This implies that the $q$-primary component ${\rm
TK}_{1}(Q)_{q}$ of ${\rm TK}_{1}(Q)$ is non-trivial. On the other hand, as
$\overline{F}$ is contained in $Q$, we conclude that ${\rm char}(Q)={\rm
char}(\overline{F})$. It follows that $Z(Q)$ is a field with ${\rm
char}(Z(Q))=q$, and so $\tau(Z(Q)^{*})_{q}=1$. Put $n={\rm ind}(Q)$ and write
$n=p_{1}^{r_{1}}p_{2}^{r_{2}}\dots p_{k}^{r_{k}}$. Let
$Q=Q_{p_{1}}\otimes_{Z(Q)}Q_{p_{2}}\otimes_{Z(Q)}\dots\otimes_{Z(Q)}Q_{p_{k}},$
where each $Q_{p_{j}}$ is a division ring with center $Z(Q)$ of index
$p_{j}^{r_{j}}$. By the Primary Decomposition Theorem applied to $Q$, we get
${\rm TK}_{1}(Q)=\left(\prod_{j=1}^{k}{\rm
TK}_{1}(Q_{p_{j}})_{p_{j}}\right)\times\left(\prod_{p\in\mathfrak{P}}\tau((Z(Q))^{*})_{p}\right),$
where $\mathfrak{P}$ is the set of all primes in
$\mathbb{N}\backslash\\{p_{1},p_{2},\dots,p_{k}\\}$. Because
$\tau(Z(Q)^{*})_{q}=1$ and ${\rm TK}_{1}(Q)_{q}\neq 1$, we conclude that $q$
must be one of the $p_{j}$’s in the decomposition of ${\rm TK}_{1}(Q)$. This
implies that $q$ divides ${\rm ind}(Q)$. Moreover, as ${\rm ind}(Q)$ divides
${\rm ind}(D)$, we obtain that $q$ divides ${\rm ind}(D)$. This violates the
assumption that $D$ is strongly tame over $F$. ∎
With Theorem 4.7 in hand, we are now able to give a proof of Theorem 1.3,
which indicates that the ${\rm TK}_{1}$ of a valued division ring coincides
with that of its associated graded division ring under certain circumstances.
The proof of Theorem 1.3. Let $G$ and $\mathbf{G}$ be the subgroups of $D^{*}$
and ${\rm gr}(D)^{*}$ such that ${\rm TK}_{1}(D)=G/[D^{*},D^{*}]$ and ${\rm
TK}_{1}({\rm gr}(D))=\mathbf{G}/[{\rm gr}(D)^{*},{\rm gr}(D)^{*}]$,
respectively. Consider the surjective group homomorphism $\rho:D^{*}\to{\rm
gr}(D)^{*}$ given by $a\mapsto\widetilde{a}$ with ${\rm ker}\rho=1+M_{D}$. We
claim that $\rho(G)=\mathbf{G}$. Consider the following diagram:
$\begin{array}{ccccccccc}1&\longrightarrow&(1+M_{D})\cap D^{\prime}&\xrightarrow{\;\iota\;}&D^{\prime}&\xrightarrow{\;\rho\;}&{\rm gr}(D)^{\prime}&\longrightarrow&1\\\ &&\downarrow&&\downarrow&&\downarrow&&\\\ 1&\longrightarrow&(1+M_{D})\cap G&\xrightarrow{\;\iota\;}&G&\xrightarrow{\;\rho\;}&\mathbf{G}&\longrightarrow&1\end{array}$
where $D^{\prime}=[D^{*},D^{*}]$, ${\rm gr}(D)^{\prime}=[{\rm gr}(D)^{*},{\rm
gr}(D)^{*}]$, $\iota$ denotes the inclusion map, and the vertical maps are
inclusions. Since ${\rm ker}\rho=1+M_{D}$, the kernel of the restriction of
$\rho$ to $D^{\prime}$ is $(1+M_{D})\cap D^{\prime}=\mathrm{im}\,\iota$.
Moreover, $\iota$ is a monomorphism and $\rho(D^{\prime})={\rm gr}(D)^{\prime}$,
so the top row of the diagram is exact.
Next, we prove that the lower row of the diagram is exact. We claim that
$\rho(G)=\mathbf{G}$. First, we prove that $\rho(G)\subseteq\mathbf{G}$.
Indeed, for any $g\in G$, there exists $m\in\mathbb{Z}$ such that $g^{m}=a$
for some $a\in[D^{*},D^{*}]$. As $\rho([D^{*},D^{*}])\subseteq[{\rm
gr}(D)^{*},{\rm gr}(D)^{*}]$, we get that
$\rho(g)^{m}=\rho(g^{m})=\rho(a)\in[{\rm gr}(D)^{*},{\rm gr}(D)^{*}].$
This implies that $\rho(G)\subseteq\mathbf{G}$. For the converse, we note
first, by Lemma 4.11, that ${\rm TK}_{1}({\rm gr}(D))$ contains no element of
order ${\rm char}(\overline{F})$. Let $b\in\mathbf{G}$. According to Lemma
3.1, we get that $\mathbf{G}\subseteq{\rm gr}(D)_{0}$. Then, there exists
$k\in\mathbb{Z}$ such that $b^{k}\in[{\rm gr}(D)^{*},{\rm gr}(D)^{*}]$. As ${\rm
gr}(D)_{0}=V_{D}/M_{D}=\overline{D}$, there exists $a\in U_{D}$ such that
$\widetilde{a}=b$. By (2.1), we have
$\overline{{\rm Nrd}_{D}(a^{k})}=\widetilde{{\rm Nrd}_{D}(a^{k})}={\rm
Nrd}_{{\rm gr}(D)}(\widetilde{a}^{k})={\rm Nrd}_{{\rm gr}(D)}(b^{k})=1,$
which implies that ${\rm Nrd}_{D}(a^{k})\in 1+M_{F}={\rm Nrd}_{D}(1+M_{D})$.
As ${\rm TK_{1}}({\rm gr}(D))$ contains no element of order ${\rm
char}(\bar{F})$, we have ${\rm char}(\bar{F})\nmid k$. By Hensel’s Lemma, we
obtain that $\left(1+M_{D}\right)^{k}=1+M_{D}$. So, let $c\in 1+M_{D}$ such
that ${\rm Nrd}_{D}(c)={\rm Nrd}_{D}(a^{k})^{-1}$. Then, there exists $d\in
1+M_{D}$ such that $d^{k}=c$. We have ${\rm Nrd}_{D}((ad)^{k})={\rm
Nrd}_{D}\left(a^{k}\right){\rm Nrd}_{D}\left(c\right)=1$, which implies
$(ad)^{k}\in D^{(1)}\subseteq G$. It follows that $(ad)^{kl}\in[D^{*},D^{*}]$
for some $l\in\mathbb{Z}$. Hence, $ad\in G$ and
$\rho(ad)=\rho(a)\rho(d)=\rho(a)=b,$
which means that $b\in\rho(G)$. Therefore, the lower row is exact at
$\mathbf{G}$. Moreover, we observe that the kernel of the restriction of
$\rho$ to $G$ is $(1+M_{D})\cap G$, which shows that the lower row is also
exact at $G$. Therefore, the lower row of the diagram is exact.
Moreover, by the Congruence Theorem, we get that the left vertical map of the
diagram is an isomorphism. Thus, by the snake lemma, we obtain that ${\rm
TK}_{1}(D)\cong{\rm TK}_{1}({\rm gr}(D))$. The theorem is proved.
The proof of Corollary 1.4. As $D$ is strongly tame, we have ${\rm ind}({\rm
gr}(D))={\rm ind}(D)$ and $Z({\rm gr}(D))={\rm gr}(F)$. Therefore, we may
apply Corollary 1.2 to the graded division algebra ${\rm gr}(D)$, and then all
assertions follow from the fact that ${\rm TK}_{1}(D)\cong{\rm TK}_{1}({\rm
gr}(D))$.
(i) As $D$ is unramified, by definition, we get that
$[\overline{D}:\overline{F}]=[D:F]$, from which it follows that $[{\rm
gr}(D):Z({\rm gr}(D))]=[\overline{D}:\overline{F}]=[{\rm gr}(D)_{0}:Z({\rm
gr}(D))_{0}]$. This means that ${\rm gr}(D)$ is an unramified graded division
algebra, and so (i) follows from Corollary 1.2(i).
(ii) Because $D$ is totally ramified, we have ${\rm
gr}(D)_{0}=\overline{D}=\overline{F}=Z({\rm gr}(D))_{0}$, which implies that
${\rm gr}(D)$ is a totally ramified graded division algebra, and so (ii)
follows from Corollary 1.2(ii).
(iii) Assume that $D$ is semiramified. Then, by definition, we get that
$\overline{D}$ is a field and
$[\overline{D}:\overline{F}]=|\Gamma_{D}:\Gamma_{F}|={\rm ind}(D)$. As
$\Gamma_{{\rm gr}(D)}=\Gamma_{D}$ and $\Gamma_{{\rm gr}(F)}=\Gamma_{F}$, it
follows that $[{\rm gr}(D)_{0}:Z({\rm
gr}(D))_{0}]=[\overline{D}:\overline{F}]=[\Gamma_{{\rm gr}(D)}:\Gamma_{{\rm
gr}(F)}]={\rm ind}({\rm gr}(D))$. It follows that ${\rm gr}(D)$ is a
semiramified graded division algebra, and so (iii) follows from Corollary
1.2(iii).
(iv) As $D$ is nicely semiramified, it has a maximal subfield $K$ unramified
over $F$ and another maximal subfield $L$ totally ramified over $F$. Then, it
can be checked that ${\rm gr}(K)$ and ${\rm gr}(L)$ are graded maximal
subfields of ${\rm gr}(D)$, with ${\rm gr}(K)$ unramified over ${\rm gr}(F)$
and ${\rm gr}(L)$ totally ramified over ${\rm gr}(F)$. Therefore the assertion
(iv) follows immediately from Corollary 1.2(iv).
Now, we present some examples of strongly tame division algebras over a
henselian center, and compute the torsion subgroups of their Whitehead groups
using Corollary 1.4. We start by using the $\mathcal{T}$ construction to
produce examples of strongly tame totally ramified division rings.
###### Example 4.
Let $D=\mathcal{T}(k;n_{1},\dots,n_{r})$ be the division ring considered in
Example 2. Then, we have $F:=Z(D)=k((x_{1}))((y_{1}))\dots((x_{r}))((y_{r}))$
which is henselian for the $(x_{1},y_{1},\dots,x_{r},y_{r})$-adic valuation,
with residue field $\overline{F}=k$. Because ${\rm ind}(D)=n_{1}\dots n_{r}$,
by suitable choices of $k$ and the $n_{i}$’s, we get ${\rm char}(k)\nmid{\rm
ind}(D)$; that is, $D$ is strongly tame. Moreover, because ${\rm char}(F)={\rm
char}(k)={\rm char}(\overline{F})$, the proof of Corollary 4.9 shows that
${\rm TK}_{1}(D)$ contains no element of order ${\rm char}(\overline{F})$, and
so Corollary 1.4(ii) applies. It follows that there is a short exact
sequence:
$1\xrightarrow{\;\;\;\;\;\;}\mu_{n}(k)/\mu_{m}(k)\xrightarrow{\;\;\;\;\;\;\;}{\rm
TK}_{1}(D)\xrightarrow{\;\;\;\;\;\;}\tau(k^{*})/\mu_{n}(k)\xrightarrow{\;\;\;\;\;\;\;}1,$
in which $\mu_{n}(k)/\mu_{m}(k)$ is a finite cyclic group and
$\tau(k^{*})/\mu_{n}(k)$ is a locally cyclic group. Using this, one may
produce many examples of strongly tame totally ramified division rings $D$ for
which ${\rm TK}_{1}(D)$ is an extension of a finite cyclic group by a locally
cyclic group.
In what follows, we give an example of a nicely semiramified division ring $D$
over a henselian valued field and compute ${\rm TK}_{1}(D)$ using Corollary
1.4(iv).
###### Example 5.
Let $(\mathbb{Q}_{p},v_{p})$ be the field of $p$-adic numbers with $p$ a
prime, and $v_{p}$ the $p$-adic valuation on $\mathbb{Q}_{p}$. Then
$(\mathbb{Q}_{p},v_{p})$ becomes a henselian valued field with
$\overline{\mathbb{Q}_{p}}=\mathbb{F}_{p}$, the field of $p$ elements. Let $L$
be an unramified extension of $\mathbb{Q}_{p}$ of degree $2$. Then, with
respect to the valuation $v$ extending the valuation $v_{p}$ on
$\mathbb{Q}_{p}$, we have $\overline{L}=\mathbb{F}_{p^{2}}$, and $L$ is cyclic
Galois over $\mathbb{Q}_{p}$ as the valuation is henselian and $\overline{L}$
is cyclic Galois over $\overline{\mathbb{Q}_{p}}$. Let ${\rm
Gal}(L/\mathbb{Q}_{p})=\langle\sigma\rangle$. Take $\pi\in\mathbb{Q}_{p}$ such
that $v(\pi)=1$. Let $D$ be the cyclic algebra
$D=(L/\mathbb{Q}_{p},\sigma,\pi)=L\oplus Lx$, where $xc=\sigma(c)x$ for all
$c\in L$ and $x^{2}=\pi$ (see [8, p.218] for the definition of a cyclic
algebra). Then, $v$ extends to a valuation on $D$ given by
$v(a+bx)=\min\\{v(a),v(b)+1/2\\}$. Moreover, if we set $K=\mathbb{Q}_{p}(x)$,
then $K$ is a maximal subfield of $D$ totally ramified over $\mathbb{Q}_{p}$.
It follows from [9, Theorem 4.4] that $D$ is a nicely semiramified valued
division ring with $\overline{D}=\overline{L}=\mathbb{F}_{p^{2}}$, and thus
Corollary 1.4(iv) applies here. As ${\rm ind}(D)=2$, according to Wang’s
Theorem (see [2, Corollary 4, p.164]), we get that ${\rm SK}_{1}(D)=1$. Thus,
Corollary 1.4(iv) implies that ${\rm
TK}_{1}(D)\cong\tau(\mathbb{F}_{p}^{*})\cap
N_{\mathbb{F}_{p^{2}}/\mathbb{F}_{p}}(\mathbb{F}_{p^{2}}^{*})$ which contains
no element of order $p$.
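The conclusion of this example can be checked numerically. The sketch below (an illustrative aside, not part of the paper) models $\mathbb{F}_{p^{2}}^{*}$ as the cyclic group $\mathbb{Z}/(p^{2}-1)$ of exponents of a fixed generator, under which the norm $N_{\mathbb{F}_{p^{2}}/\mathbb{F}_{p}}(x)=x^{p+1}$ becomes multiplication by $p+1$; it confirms that the norm is onto $\mathbb{F}_{p}^{*}$, so the group above is all of $\tau(\mathbb{F}_{p}^{*})=\mathbb{F}_{p}^{*}$, of order $p-1$, and hence has no element of order $p$.

```python
# Exponent model: F_{p^2}^* = <g>, x = g^e; the norm x -> x^(p+1) sends the
# exponent e to (p+1)e mod (p^2-1), and F_p^* is the subgroup generated by
# g^(p+1), i.e. the exponents that are multiples of p+1.
for p in [3, 5, 7, 11]:
    n = p * p - 1
    norm_image = {(p + 1) * e % n for e in range(n)}       # image of the norm
    fp_star = {(p + 1) * j for j in range(p - 1)}          # exponents of F_p^*
    print(p, norm_image == fp_star, len(fp_star) == p - 1)  # True True for each p
```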
## References
* [1] M. Boulagouaz, Le gradué d’une algèbre à division valuée, Comm. Algebra 23 (1995) 4275-4300.
* [2] P. Draxl, Skew Fields, London Math. Soc. Lecture Note Ser., 81 Cambridge University Press, Cambridge, 1983.
* [3] Yu. Ershov, Henselian valuations of division rings and the group ${\rm SK}_{1}$, Mat. Sb. (N.S.) 117 (1982) 60-68 (Russian), Math. USSR Sb. 45 (1983) 63-71 (English translation).
* [4] R. Hazrat, Wedderburn’s factorization theorem application to reduced K-theory, Proc. Amer. Math. Soc. 130(2) (2002) 311-314.
* [5] R. Hazrat and A. R. Wadsworth, $\rm SK_{1}$ of graded division algebras, Israel J. Math. 183 (2011) 117-163.
* [6] Y.-S. Hwang and A. R. Wadsworth, Algebraic extensions of graded and valued fields, Comm. Algebra 27 (1999) 821-840.
* [7] Y.-S. Hwang and A. R. Wadsworth, Correspondences between valued division algebras and graded division algebras, J. Algebra 220 (1999) 73-114.
* [8] T. Y. Lam, A first course in noncommutative rings, 2nd edn, GTM 131, Springer-Verlag, New York, 2001.
* [9] B. Jacob and A. R. Wadsworth, Division algebras over henselian fields, J. Algebra 128 (1990) 126-179.
* [10] W. May, Multiplicative groups of fields, Proc. London Math. Soc. 24(3) (1972) 295-306.
* [11] P. Morandi, The Henselization of a valued division algebra, J. Algebra 122 (1989) 232-243.
* [12] M. Motiee, On torsion subgroups of Whitehead groups of division algebras, Manuscripta Math. 141(3-4) (2013) 717-726.
* [13] T. Nakayama and Y. Matsushima, Über die multiplikative Gruppe einer $p$-adischen Divisions algebra, Proc. Imp. Acad. Japan 19 (1943) 622-628.
* [14] V. P. Platonov, On the Tannaka-Artin problem. Dokl. Akad. Nauk SSSR 221(5) (1975) 1038-1041. English trans.: Soviet Math. Dokl. 16 (1975) 468-473.
* [15] V. P. Platonov, The Tannaka-Artin problem, and groups of projective conorms. Dokl. Akad. Nauk SSSR 222(6) (1975) 1299-1302. English trans.: Soviet Math. Dokl. 16 (1975) 782-786.
* [16] V. P. Platonov, Reduced K-theory and approximation in algebraic groups. Trudy Mat. Inst. Steklov. 142 (1976) 198-207. English trans.: Proc. Steklov Inst. Math. 142 (1979) 213-224.
* [17] J.-P. Tignol and A. R. Wadsworth, Totally ramified valuations on finite-dimensional division algebras, Trans. Amer. Math. Soc. 302(1) (1987) 223-250.
* [18] J.-P. Tignol and A. R. Wadsworth, Value functions and associated graded rings for semisimple algebras, Trans. Amer. Math. Soc. 362 (2010) 687-726.
* [19] Wang Shianghaw, On the commutator group of a simple algebra, Amer. J. Math. 72 (1950) 323-334.
11institutetext: Department of Mathematics and Computing Science, Eindhoven
University of Technology, Eindhoven, The Netherlands. 22institutetext:
Department of Electrical Engineering, Federal University of Campina Grande,
Campina Grande, Paraíba, Brazil.
22email<EMAIL_ADDRESS>
# Entanglement-assisted Quantum Codes from Cyclic Codes
Francisco Revson F. Pereira 1122
###### Abstract
Entanglement-assisted quantum (QUENTA) codes are a subclass of quantum error-correcting
codes which use entanglement as a resource. These codes can provide an
error-correction capability higher than that of codes derived from the
traditional stabilizer formalism. In this paper, we show a general method to
construct QUENTA codes from cyclic codes. Afterwards, the method is applied to
Reed-Solomon codes, BCH codes, and general cyclic codes. We use the Euclidean
and Hermitian constructions of QUENTA codes. Two of the families of QUENTA
codes obtained are maximal distance separable (MDS), and one is almost MDS or
almost near MDS. The comparison of the codes in this paper is mostly based on
the quantum Singleton bound.
MSC: 81P70, 81P40, 94B15, 94B27.
###### Keywords:
Quantum Codes · Reed-Solomon Codes · BCH Codes · Maximal Distance Separable ·
Maximal Entanglement.
## 1 Introduction
Practical implementations of most quantum communication schemes and quantum
computers will only be possible if such systems incorporate quantum error-correcting
codes. Quantum error-correcting codes restore quantum states from the
action of a noisy quantum channel. One of the best-known and most used methods
to create quantum codes from classical block codes is the CSS method [23].
Unfortunately, it requires a (Euclidean or Hermitian) dual-containing
condition on the classical codes used. One way to overcome this constraint is
via entanglement. It is also possible to show that entanglement improves the
error-correction capability of quantum codes. These codes are called
entanglement-assisted quantum (QUENTA) codes, also denoted by EAQEC codes in
the literature. The first proposals of QUENTA codes were made by Bowen [1] and
Fattal, _et al._ [7]. Subsequently, Brun _et al._ [2] developed an
entanglement-assisted stabilizer formalism for these codes, which was recently
generalized by Galindo, _et al._ [8].
This formalism created a method to construct QUENTA codes from classical block
codes, which has led to the construction of several families of QUENTA codes
[26, 6, 4, 20, 5, 4, 18, 10, 16]. The majority of them utilized constacyclic
codes [6, 5, 20] or negacyclic codes [4, 20] as the classical counterpart.
However, only a few of them have used cyclic codes and described the
parameters of the quantum code constructed via the defining set of the cyclic
code. This can lead to a straightforward relation between the parameters of
the classical and quantum codes, and to a method to create MDS QUENTA codes.
Li _et al._ used BCH codes to construct QUENTA codes via decomposing the
defining set of the BCH code used [14]. Lu and Li constructed QUENTA codes
from primitive quaternary BCH codes [17]. Recently, Lu et al. [20], using not
cyclic but constacyclic MDS codes as the classical counterpart, proposed four
families of MDS QUENTA codes.
The main goal of this paper is to describe any cyclic code, such as Reed-Solomon
and BCH codes, under the same framework via the defining-set
description, and to show, using two classical codes from one of these
families, how to construct QUENTA codes from them. We have used the Euclidean
and Hermitian methods to construct QUENTA codes. As will be shown, the QUENTA
codes from Reed-Solomon codes are MDS codes, and the ones from BCH codes are
new in two senses. The first is that there is no work in the literature with
the same parameters. The second is that we use two BCH codes to derive the
QUENTA code, which gives more freedom in the choice of parameters. Two more
families of QUENTA codes are constructed using the Hermitian construction. One
of these families can generate codes which are almost MDS or almost near MDS;
i.e., the Singleton defect for these codes is equal to one or two units. The
last family created has maximal entanglement and length proportional to a high
power of the cardinality of the field, which makes these codes suitable to
achieve the hashing bound [15]. Lastly, we would like to highlight that the
description made in this paper gives a more direct relation between cyclic
codes and the entanglement-assisted quantum codes constructed from them.
Furthermore, such a relation can be extended to constacyclic and negacyclic
codes with a few adjustments.
The paper is organized as follows. In Section 2, we review Reed-Solomon and
BCH codes and describe their parameters via defining sets. Additionally, we
present construction methods of QUENTA codes from classical codes. Applying
these methods to classical cyclic codes, new QUENTA codes are constructed in
Section 3. In Section 4, a comparison of these codes with the quantum
Singleton bound is carried out. In particular, families of MDS and almost MDS
QUENTA codes are exhibited. We also create a family of QUENTA codes which can
be applied to achieve the hashing bound [15]. Lastly, the conclusion is
presented in Section 5.
_Notation._ Throughout this paper, $p$ denotes a prime number and $q\neq 2$ is
a power of $p$. Let $\mathbb{F}_{q}$ be the finite field with $q$ elements. A
linear code $C$ with parameters $[n,k,d]_{q}$ is a $k$-dimensional subspace of
$\mathbb{F}_{q}^{n}$ with minimum distance $d$. For cyclic codes, $Z(C)$
denotes the defining set and $g(x)$ is the generator polynomial. Lastly, an
$[[n,k,d;c]]_{q}$ quantum code is a $q^{k}$-dimensional subspace of
$\mathbb{C}^{q^{n}}$ with minimum distance $d$ that utilizes $c$ pre-shared
entangled pairs.
## 2 Preliminaries
In this section, we review some ideas related to linear complementary dual
(LCD) codes, cyclic codes, and entanglement-assisted quantum (QUENTA) codes.
As it will be shown, LCD codes give an important source to construct QUENTA
codes with interesting properties, such as maximal distance separability and
maximal entanglement (see Section 3). But before giving a description of LCD
codes, we need to define the Euclidean and Hermitian dual of a linear code.
###### Definition 1.
Let $C$ be a linear code over $\mathbb{F}_{q}$ with length $n$. The
(Euclidean) dual of $C$ is defined as
$C^{\perp}=\\{\ {\bf x}\in\mathbb{F}_{q}^{n}\ |\ {\bf x}\cdot{\bf c}=0\mbox{
for all }{\bf c}\in C\\}.$ (1)
If the finite field has cardinality equal to $q^{2}$, an even power of a
prime, then we can define the Hermitian dual of $C$. This dual code is defined
by
$C^{\perp_{h}}=\\{\ {\bf x}\in\mathbb{F}_{q^{2}}^{n}\ |\ {\bf x}\cdot{\bf
c}^{q}=0\mbox{ for all }{\bf c}\in C\ \\},$ (2)
where ${\bf c}^{q}=(c_{1}^{q},\ldots,c_{n}^{q})$ for ${\bf
c}\in\mathbb{F}_{q^{2}}^{n}$.
These types of dual codes can be used to derive quantum codes from the
stabilizer formalism [23]. The requirement in this formalism is that the
classical code be self-orthogonal; i.e., $C\subseteq C^{\perp}$ or $C\subseteq
C^{\perp_{h}}$. However, there is a different relationship between a code and
its (Euclidean or Hermitian) dual that can be useful in the construction of
QUENTA codes. This relation is complementary duality and is defined in the
following.
###### Definition 2.
The hull of a linear code $C$ is given by $hull(C)=C^{\perp}\cap C$. The code
is called a linear complementary dual (LCD) code if the hull is trivial; i.e.,
$hull(C)=\\{{\bf 0}\\}$. Similarly, one defines
$hull_{H}(C)=C^{\perp_{h}}\cap C$ and the notion of a Hermitian LCD code.
Now, we can define cyclic codes and some properties that can be used to
extract the parameters of the quantum code constructed from them.
### 2.1 Cyclic codes
A linear code $C$ with parameters $[n,k,d]_{q}$ is called cyclic if
$(c_{0},c_{1},\ldots,c_{n-1})\in C$ implies
$(c_{n-1},c_{0},c_{1},\ldots,c_{n-2})\in C$. Defining a map from
$\mathbb{F}_{q}^{n}$ to $\mathbb{F}_{q}[x]/(x^{n}-1)$, which takes ${\bf
c}=(c_{0},c_{1},\ldots,c_{n-1})\in\mathbb{F}_{q}^{n}$ to
$c(x)=c_{0}+c_{1}x+\cdots+c_{n-1}x^{n-1}\in\mathbb{F}_{q}[x]/(x^{n}-1)$, we
can see that a linear code $C$ is cyclic if and only if it corresponds to an
ideal of the ring $\mathbb{F}_{q}[x]/(x^{n}-1)$. Since any ideal in
$\mathbb{F}_{q}[x]/(x^{n}-1)$ is principal, any cyclic code $C$ is
generated by a polynomial $g(x)|(x^{n}-1)$, which is called the generator
polynomial. This polynomial is monic and has the smallest degree among all the
generators of $C$.
A characterization of the parameters of a cyclic code can be given from the
generator polynomial and its defining set. For the description of this set,
consider the following: Let $m=ord_{n}(q)$, $\alpha$ be a generator of the
multiplicative group $\mathbb{F}_{q^{m}}^{*}$, and assume
$\beta=\alpha^{\frac{q^{m}-1}{n}}$; i.e., $\beta$ is a primitive $n$-th root
of unity. Then the defining set of $C$, which is denoted by $Z(C)$, is defined
as $Z(C)=\\{i\in\mathbb{Z}_{n}\colon c(\beta^{i})=0\text{ for all }c(x)\in
C\\}$.
BCH and Reed-Solomon codes are particular cases of cyclic codes, where the
generator polynomial has some additional properties. See Definitions 3 and 5.
###### Definition 3.
Let $b\geq 0$, $\delta\geq 1$, and $\alpha\in\mathbb{F}_{q^{m}}$, where
$m=ord_{n}(q)$. A cyclic code $C$ of length $n$ over $\mathbb{F}_{q}$ is a BCH
code with designed distance $\delta$ if
$g(x)=\text{lcm}\\{m_{b}(x),m_{b+1}(x),\ldots,m_{b+\delta-2}(x)\\}$
where $m_{i}(x)$ is the minimal polynomial of $\alpha^{i}$ over
$\mathbb{F}_{q}$. If $n=q^{m}-1$, then the BCH code is called primitive, and
if $b=1$ it is called narrow-sense.
Before relating the parameters of an BCH code with the defining set, we need
to introduce the idea of cyclotomic coset. It comes from the observation that
the minimal polynomial $m_{i}(x)$ of $\alpha^{i}$ can be the minimal
polynomial of other powers of $\alpha$. The reason for this is that $\alpha$
belongs to an extension of $\mathbb{F}_{q}$ while the polynomial
$m_{i}(x)\in\mathbb{F}_{q}[x]$. The set of all zeros of $m_{i}(x)$ in the
field $\mathbb{F}_{q^{m}}$ is given by the cyclotomic coset of $i$. Thus, the
defining set of a BCH code $C$ is the union of the cyclotomic cosets of
$b,b+1,\ldots,b+\delta-2$. The following definition describes this set.
###### Definition 4.
The $q$-ary cyclotomic coset $\mod n$ containing an element $i$ is defined by
$\mathbb{C}_{i}=\\{i,iq,iq^{2},iq^{3},\ldots,iq^{m_{i}-1}\\},$ (3)
where $m_{i}$ is the smallest positive integer such that $iq^{m_{i}}\equiv
i\mod n$.
For the parameters of a BCH code, it is shown that the dimension is equal to
$n-|Z(C)|$ and the minimal distance of $C$ is at least $\delta$ [24]. Thus, we
can see that important properties of a BCH code can be obtained from the
defining set. The same characterization applies to the Euclidean or Hermitian
dual of a cyclic code. Propositions 1 and 2 focus on this.
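The defining-set computations above are easy to mechanize. The following short Python sketch (an illustrative aside, not part of the paper; the instance $q=4$, $n=15$, $\delta=3$ is an assumed example) computes $q$-ary cyclotomic cosets per Definition 4, assembles the defining set of a narrow-sense BCH code per Definition 3, and reads off the dimension $n-|Z(C)|$:

```python
def cyclotomic_coset(i, q, n):
    """q-ary cyclotomic coset of i modulo n (Definition 4)."""
    coset, j = {i % n}, (i * q) % n
    while j not in coset:
        coset.add(j)
        j = (j * q) % n
    return sorted(coset)

def bch_defining_set(q, n, b, delta):
    """Union of the cosets of b, b+1, ..., b+delta-2 (Definition 3)."""
    Z = set()
    for i in range(b, b + delta - 1):
        Z.update(cyclotomic_coset(i, q, n))
    return sorted(Z)

# Narrow-sense BCH code of length 15 over F_4 with designed distance 3:
Z = bch_defining_set(q=4, n=15, b=1, delta=3)
print(Z, "dimension:", 15 - len(Z))  # [1, 2, 4, 8] dimension: 11
```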
###### Proposition 1.
[24, Proposition 4.3.8] Let $C$ be a cyclic code of length $n$ and defining
set $Z(C)$. Then the defining set of $C^{\perp}$ is given by
$Z(C^{\perp})=\mathbb{Z}_{n}\setminus\\{-i|i\in Z(C)\\}.$
For BCH codes, the generator polynomial of $C^{\perp}$ is given by the lcm of
the minimal polynomials over $\mathbb{F}_{q}$ of the elements $\alpha^{j}$
such that $j\in Z(C^{\perp})$.
###### Proposition 2.
Let $C$ be a cyclic code over $\mathbb{F}_{q^{2}}$ with defining set $Z(C)$.
Then
$Z(C^{\perp_{h}})=\mathbb{Z}_{n}\setminus\\{-i|i\in qZ(C)\\}.$
###### Proof.
Let ${\bf c}\in\mathbb{F}_{q^{2}}^{n}$ be a codeword of $C$. Expressing ${\bf
c}^{q}$ as a polynomial we have that
$c^{(q)}(x)=c_{0}^{q}+c_{1}^{q}x+\cdots+c_{n-1}^{q}x^{n-1}$. So,
$i\in\mathbb{Z}_{n}$ belongs to $Z(C^{q})$ if and only if
$\displaystyle c^{(q)}(\alpha^{i})=0$
$\displaystyle\iff c_{0}^{q}+c_{1}^{q}\alpha^{i}+\cdots+c_{n-1}^{q}\alpha^{i(n-1)}=0$
$\displaystyle\iff(c_{0}^{q}+c_{1}^{q}\alpha^{i}+\cdots+c_{n-1}^{q}\alpha^{i(n-1)})^{q}=0$
$\displaystyle\iff c_{0}+c_{1}\alpha^{iq}+\cdots+c_{n-1}\alpha^{iq(n-1)}=0$
$\displaystyle\iff iq\in Z(C).$
This shows that $Z(C^{q})=qZ(C)$. Since $C^{\perp_{h}}=(C^{q})^{\perp}$, we
have from Proposition 1 that
$Z(C^{\perp_{h}})=\mathbb{Z}_{n}\setminus\\{-i|i\in qZ(C)\\}$. ∎
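The formula of Proposition 2 can also be checked numerically; the sketch below (illustrative, with an assumed toy instance $q=3$, $n=8$, $Z(C)=\\{1,2\\}$ for a code over $\mathbb{F}_{9}$) evaluates it directly:

```python
def hermitian_dual_defining_set(q, n, Z):
    """Z(C^{perp_h}) = Z_n minus { -i mod n : i in q*Z(C) } (Proposition 2)."""
    minus_qZ = {(-q * i) % n for i in Z}
    return sorted(set(range(n)) - minus_qZ)

# A cyclic code over F_9 (q = 3) of length n = 8 with Z(C) = {1, 2}:
print(hermitian_dual_defining_set(q=3, n=8, Z={1, 2}))  # [0, 1, 3, 4, 6, 7]
```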
The other class of cyclic codes used in this paper, Reed-Solomon codes, can be
viewed as a subclass of BCH codes. Thus, a similar characterization in terms
of the defining set can be given; see Definition 5 and Corollary 1. One
property of such codes that makes them important is that they are maximal
distance separable (MDS) codes; i.e., fixing the length and the dimension,
they have the maximal minimal distance possible. As shown in Section 3, using
such codes to construct QUENTA codes will result in MDS quantum codes.
###### Definition 5.
Let $b\geq 0$, $n=q-1$, and $1\leq k\leq n$. A cyclic code $RS_{k}(n,b)$ of
length $n$ over $\mathbb{F}_{q}$ is a _Reed-Solomon code_ with minimal
distance $n-k+1$ if
$g(x)=(x-\alpha^{b})(x-\alpha^{b+1})\cdot\cdots\cdot(x-\alpha^{b+n-k-1}),$
where $\alpha$ is a primitive element of $\mathbb{F}_{q}$.
A particular application of Proposition 1 to Reed-Solomon codes is described
in Corollary 1, where the parameters and defining set of an Euclidean dual of
a Reed-Solomon is derived.
###### Corollary 1.
Let $RS_{k}(n,b)$ be a Reed-Solomon code. Then its Euclidean dual can be
described as
$RS_{k}(n,b)^{\perp}=RS_{n-k}(n,n-b+1).$
In particular, the defining set of $RS_{k}(n,b)^{\perp}$ is given by
$Z(RS_{k}(n,b)^{\perp})=\\{n-b+1,n-b+2,\ldots,n-b+k\\}$.
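Corollary 1 can be cross-checked against Proposition 1 in a few lines; the sketch below (illustrative; the instance $n=8$, $k=3$, $b=1$ over $\mathbb{F}_{9}$ is an assumption) confirms that the two descriptions of $Z(RS_{k}(n,b)^{\perp})$ agree:

```python
def dual_defining_set(n, Z):
    """Proposition 1: Z(C^perp) = Z_n minus { -i mod n : i in Z(C) }."""
    return sorted(set(range(n)) - {(-i) % n for i in Z})

n, k, b = 8, 3, 1
Z = [(b + j) % n for j in range(n - k)]              # Z(RS_k(n,b)) = {b,...,b+n-k-1}
lhs = dual_defining_set(n, Z)
rhs = sorted((n - b + 1 + j) % n for j in range(k))  # {n-b+1,...,n-b+k} mod n
print(lhs == rhs, lhs)                               # True [0, 1, 2]
```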
As will be seen in the next subsection, the amount of entanglement in a QUENTA
code is computed from the dimension of the intersection of two codes. The last
proposition of this subsection addresses this subject.
###### Proposition 3.
[12, Exercise 239, Chapter 4] Let $C_{1}$ and $C_{2}$ be cyclic codes with
defining set $Z(C_{1})$ and $Z(C_{2})$, respectively. Then the defining set of
$C_{1}\cap C_{2}$ is given by $Z(C_{1})\cup Z(C_{2})$. In particular,
$\dim(C_{1}\cap C_{2})=n-|Z(C_{1})\cup Z(C_{2})|$.
### 2.2 Entanglement-assisted quantum codes
###### Definition 6.
A quantum code $\mathcal{Q}$ is called an $[[n,k,d;c]]_{q}$ entanglement-assisted
quantum (QUENTA) code if it encodes $k$ logical qudits into $n$
physical qudits using $c$ copies of maximally entangled states and can correct
up to $\lfloor(d-1)/2\rfloor$ quantum errors. A QUENTA code is said to have
maximal entanglement when $c=n-k$.
Formulating a stabilizer paradigm for QUENTA codes gives a way to use
classical codes to construct these quantum codes [3]. In particular, we have
the next two procedures by Galindo, _et al._ [8].
###### Proposition 4.
[8, Theorem 4] Let $C_{1}$ and $C_{2}$ be two linear codes over
$\mathbb{F}_{q}$ with parameters $[n,k_{1},d_{1}]_{q}$ and
$[n,k_{2},d_{2}]_{q}$ and parity check matrices $H_{1}$ and $H_{2}$,
respectively. Then there is a QUENTA code with parameters
$[[n,k_{1}+k_{2}-n+c,d;c]]_{q}$, where
$d=\min\\{d_{H}(C_{1}\setminus(C_{1}\cap
C_{2}^{\perp})),d_{H}(C_{2}\setminus(C_{1}^{\perp}\cap C_{2}))\\}$, with
$d_{H}$ as the minimum Hamming weight of the vectors in the set, and
$c={\rm{rank}}(H_{1}H_{2}^{T})=\dim C_{1}^{\perp}-\dim(C_{1}^{\perp}\cap
C_{2})$ (4)
is the number of required maximally entangled states.
###### Proposition 5.
[8, Proposition 3 and Corollary 1] Let $C$ be a linear code over
$\mathbb{F}_{q^{2}}$ with parameters $[n,k,d]_{q^{2}}$, $H$ be a parity check
matrix for $C$, and $H^{*}$ be the $q$-th power of the transpose matrix of
$H$. Then there is a QUENTA code with parameters
$[[n,2k-n+c,d^{\prime};c]]_{q}$, where $d^{\prime}=d_{H}(C\setminus(C\cap
C^{\perp_{h}}))$, with $d_{H}$ as the minimum Hamming weight of the vectors in
the set, and
$c={\rm{rank}}(HH^{*})=\dim C^{\perp_{h}}-\dim(C^{\perp_{h}}\cap C)$ (5)
is the number of required maximally entangled states.
A measurement of goodness for a QUENTA code is the quantum Singleton bound
(QSB). Let $[[n,k,d;c]]_{q}$ be a QUENTA code, then the QSB is given by
$d\leq\Big{\lfloor}\frac{n-k+c}{2}\Big{\rfloor}+1.$ (6)
The difference between the QSB and $d$ is called _quantum Singleton defect_.
When the quantum Singleton defect is equal to zero (resp. one) the code is
called maximum distance separable quantum code (resp. almost maximum distance
separable quantum code) and it is denoted MDS quantum code (resp. almost MDS
quantum code).
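The following small helper (an illustrative sketch, not part of the paper) evaluates the bound in Eq. (6) and the resulting quantum Singleton defect; the two parameter sets used in the example are entries of Table 1 below:

```python
def singleton_defect(n, k, d, c):
    """Quantum Singleton defect of an [[n, k, d; c]]_q QUENTA code, Eq. (6)."""
    return (n - k + c) // 2 + 1 - d   # 0: MDS, 1: almost MDS

print(singleton_defect(3, 1, 3, 2))   # 0 -> [[3,1,3;2]]_3 is MDS
print(singleton_defect(16, 3, 9, 3))  # 0 -> [[16,3,9;3]]_4 is MDS
```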
## 3 New Entanglement-Assisted Quantum Error Correcting Cyclic Codes
In this section, we show the construction of QUENTA codes from cyclic codes.
We make use of the Euclidean and Hermitian constructions, which give codes
with different parameters when compared over the same field.
### 3.1 Euclidean Construction
A straightforward application of cyclic codes to Proposition 4 via the
defining-set description can produce some interesting results; see Theorem 3.1
and Corollary 2.
###### Theorem 3.1.
Let $C_{1}$ and $C_{2}$ be two cyclic codes with parameters
$[n,k_{1},d_{1}]_{q}$ and $[n,k_{2},d_{2}]_{q}$, respectively. Then there is
a QUENTA code with parameters $[[n,k_{1}-|Z(C_{1}^{\perp})\cap
Z(C_{2})|,\min\\{d_{1},d_{2}\\};n-k_{2}-|Z(C_{1}^{\perp})\cap
Z(C_{2})|]]_{q}$.
###### Proof.
From Proposition 3 we have that $\dim(C_{1}^{\perp}\cap
C_{2})=n-|Z(C_{1}^{\perp})\cup
Z(C_{2})|=n-|Z(C_{2})|-|Z(C_{1}^{\perp})|+|Z(C_{1}^{\perp})\cap
$Z(C_{2})|=k_{2}-k_{1}+|Z(C_{1}^{\perp})\cap Z(C_{2})|$. So, the amount of
entanglement used in a QUENTA code constructed from these two cyclic codes can
be computed from Proposition 4: $c=n-k_{2}-|Z(C_{1}^{\perp})\cap Z(C_{2})|$.
Substituting this value of $c$ in the parameters of the QUENTA code in
Proposition 4, we obtain an $[[n,k_{1}-|Z(C_{1}^{\perp})\cap
Z(C_{2})|,\min\\{d_{1},d_{2}\\};n-k_{2}-|Z(C_{1}^{\perp})\cap Z(C_{2})|]]_{q}$
QUENTA code. ∎
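The bookkeeping in this proof can be mechanized as follows (an illustrative sketch; the minimum distances are supplied by hand, since the defining sets alone only bound them, and the toy instance over $\mathbb{F}_{9}$ is an assumption):

```python
def quenta_from_cyclic(n, Z1, Z2, d1, d2):
    """Theorem 3.1 parameters from defining sets Z(C1), Z(C2) of length-n cyclic codes."""
    Z1perp = set(range(n)) - {(-i) % n for i in Z1}   # Proposition 1
    k1, k2 = n - len(set(Z1)), n - len(set(Z2))
    t = len(Z1perp & set(Z2))                         # |Z(C1^perp) /\ Z(C2)|
    return (n, k1 - t, min(d1, d2), n - k2 - t)       # [[n, k, d; c]]

# C1 = C2 = RS_3(8, 1) over F_9, with Z = {1,...,5} and classical d = 6:
print(quenta_from_cyclic(8, {1, 2, 3, 4, 5}, {1, 2, 3, 4, 5}, 6, 6))
# (8, 1, 6, 3): an [[8,1,6;3]]_9 code with quantum Singleton defect 0
```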
###### Corollary 2.
Let $C$ be an LCD cyclic code with parameters $[n,k,d]_{q}$. Then there is a
maximal entanglement QUENTA code with parameters $[[n,k,d;n-k]]_{q}$. In
particular, if $C$ is MDS, then so is the QUENTA code derived from it.
###### Proof.
Let $C_{1}=C_{2}=C$ in Theorem 3.1. Since $C$ is LCD, we have
$|Z(C_{1}^{\perp})\cap Z(C_{2})|=0$. From Theorem 3.1, there is a QUENTA code
with parameters $[[n,k,d;n-k]]_{q}$. ∎
###### Theorem 3.2.
Let $C_{1}=RS_{k_{1}}(n,b_{1})$ and $C_{2}=RS_{k_{2}}(n,b_{2})$ be two Reed-
Solomon codes over $\mathbb{F}_{q}$ with $0\leq b_{1}\leq k_{1}$, $b_{2}\geq
0$, and $b_{1}+b_{2}\leq k_{2}+1$. Then we have two possible cases:
1. 1.
For $k_{1}-b_{1}\geq b_{2}$, there is a QUENTA code with parameters
$[[n,b_{1}+b_{2}-1,n-\min\\{k_{1},k_{2}\\}+1;n+b_{1}+b_{2}-k_{1}-k_{2}-1]]_{q};$
2. 2.
For $k_{1}-b_{1}<b_{2}$, there is a QUENTA code with parameters
$[[n,k_{1},n-\min\\{k_{1},k_{2}\\}+1;n-k_{2}]]_{q}.$
###### Proof.
From Corollary 1, we have that
$Z(C_{1}^{\perp})=\\{n-b_{1}+1,n-b_{1}+2,\ldots,n-b_{1}+k_{1}\\}$. First of
all, notice that the restriction $b_{1}+b_{2}\leq k_{2}+1$ implies that the
first element in the defining set $Z(C_{1}^{\perp})$ comes after the last
element in $Z(C_{2})$. Since $0\leq b_{1}\leq k_{1}$, we have that
$n-b_{1}+k_{1}\geq n$, which yields a defining set for $C_{1}^{\perp}$ equal to
$Z(C_{1}^{\perp})=\\{n-b_{1}+1,n-b_{1}+2,\ldots,n-1,0,1,\ldots,k_{1}-b_{1}\\}$.
Thus, $Z(C_{1}^{\perp})$ intersects $Z(C_{2})$ if and only if
$k_{1}-b_{1}\geq b_{2}$. In the case that it does, the intersection has
cardinality $|Z(C_{1}^{\perp})\cap Z(C_{2})|=k_{1}-(b_{1}+b_{2})+1$. The
remaining claims are obtained by using these results in Theorem 3.1. ∎
###### Corollary 3.
Let $C=RS_{k}(n,b)$ be a Reed-Solomon code over $\mathbb{F}_{q}$ with
$0<b\leq(k+1)/2$ and $0<k<n\leq q$. Then there is an MDS QUENTA code with
parameters $[[n,2b-1,n-k+1;n+2b-2k-1]]_{q}$. In particular, for $b=(k+1)/2$,
there is a maximal entanglement MDS QUENTA code.
###### Proof.
Let $C_{1}=C_{2}=RS_{k}(n,b)$ in Theorem 3.2. For $0<b<(k+1)/2$, the classical
codes fall into the first case of Theorem 3.2; and for $b=(k+1)/2$, we are in
the second case of Theorem 3.2. Thus, substituting the values of
$k_{1},k_{2}$ and $b_{1},b_{2}$ by $k$ and $b$, respectively, the result
follows. ∎
In a similar way, we can use BCH codes to construct QUENTA codes. The gain in
using BCH codes is that the length of the code is not bounded by the
cardinality of the finite field used. However, creating classical or quantum
MDS codes from BCH codes is a difficult task. Our purpose in having BCH codes
as the classical counterpart in this paper is to show how to use two BCH codes
to construct QUENTA codes. In addition, maximal entanglement QUENTA codes are
also constructed. In order to do this, we show suitable properties concerning
some cyclotomic cosets for $n=q^{2}-1$.
###### Lemma 1.
Let $n=q^{2}-1$ with $q>2$. Then the $q$-ary cyclotomic coset $\mathbb{C}_{0}$
has one element, and $\mathbb{C}_{i}=\\{i,iq\\}$ for any $1\leq i\leq q-1$.
###### Proof.
The first claim is trivial. For the second one, notice that $iq^{2}\equiv
i\mod(q^{2}-1)$ and that $iq\not\equiv i\mod(q^{2}-1)$ for $1\leq i\leq q-1$,
since $0<i(q-1)<q^{2}-1$. Thus, the only elements in $\mathbb{C}_{i}$ are $i$
and $iq$, for $1\leq i\leq q-1$. ∎
From Lemma 1, we can construct QUENTA codes with length $n=q^{2}-1$. See
Theorem 3.3.
###### Theorem 3.3.
Let $n=q^{2}-1$ with $q>2$. Assume $a,b$ are integers such that $0\leq a\leq
q-1$ and $1\leq b\leq q$. Then there is a QUENTA code with parameters
* •
$[[n,2(q-b)-1,b+1;2(q-a-1)]]_{q}$, if $a\geq q-b$ and $b<q$;
* •
$[[n,2a+1,b+1;2b-\lfloor\frac{b}{q}\rfloor]]_{q}$, if $a<q-b$.
###### Proof.
First of all, assume that $C_{1}^{\perp}$ has defining set given by
$Z(C_{1}^{\perp})=\cup_{i=0}^{a}\mathbb{C}_{i}$ and the defining set of
$C_{2}$ is equal to $Z(C_{2})=\cup_{i=1}^{b}\mathbb{C}_{q-i}$. From Lemma 1 we
have that $|Z(C_{1}^{\perp})|=2a+1$ and
$|Z(C_{2})|=2b-\lfloor\frac{b}{q}\rfloor$. Thus, the dimensions of $C_{1}$ and
$C_{2}$ are equal to $k_{1}=|Z(C_{1}^{\perp})|=2a+1$ and
$k_{2}=n-|Z(C_{2})|=n-2b+\lfloor\frac{b}{q}\rfloor$, respectively. To compute
$|Z(C_{1}^{\perp})\cap Z(C_{2})|$, we have to consider two cases. If $a\geq
q-b$, then we have that $Z(C_{1}^{\perp})\cap
Z(C_{2})=\cup_{i=q-b}^{a}\mathbb{C}_{i}$, which has cardinality given by
$|Z(C_{1}^{\perp})\cap Z(C_{2})|=2(a-(q-b)+1)-\lfloor\frac{b}{q}\rfloor$,
because $|\mathbb{C}_{0}|=1$. On the other hand, if $a<q-b$, then
$|Z(C_{1}^{\perp})\cap Z(C_{2})|=0$. Lastly, since $a,b\leq q$,
$Z(C_{1}^{\perp})=\cup_{i=0}^{a}\mathbb{C}_{i}$, and $n=q^{2}-1$ with $q>2$,
we can see that $d_{1}>d_{2}=b+1$. Now, applying these results to Theorem 3.1,
we have that there is a QUENTA code with parameters
$[[n,2(q-b)-1+\lfloor\frac{b}{q}\rfloor,b+1;2(q-a-1)]]_{q}$ if $a\geq q-b$,
or a QUENTA code with parameters
$[[n,2a+1,b+1;2b-\lfloor\frac{b}{q}\rfloor]]_{q}$ if $a<q-b$. ∎
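The second family of Theorem 3.3 can be tabulated directly (an illustrative sketch; the three instances reproduce entries of Table 2 in Section 4):

```python
def theorem_3_3_case2(q, a, b):
    """[[q^2-1, 2a+1, b+1; 2b - floor(b/q)]]_q for 0 <= a < q - b (Theorem 3.3)."""
    assert 1 <= b <= q and 0 <= a < q - b
    n = q * q - 1
    return (n, 2*a + 1, b + 1, 2*b - b // q)

print(theorem_3_3_case2(4, 2, 1))   # (15, 5, 2, 2)    -> [[15,5,2;2]]_4
print(theorem_3_3_case2(8, 3, 4))   # (63, 7, 5, 8)    -> [[63,7,5;8]]_8
print(theorem_3_3_case2(16, 9, 6))  # (255, 19, 7, 12) -> [[255,19,7;12]]_16
```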
### 3.2 Hermitian Construction
In the same way as before, it is possible to use cyclic codes to construct
QUENTA codes via the Hermitian construction method of Proposition 5. See the
following theorem.
###### Theorem 3.4.
Let $C$ be a cyclic code with parameters $[n,k,d]_{q^{2}}$. Then there is a
QUENTA code with parameters $[[n,k-|Z(C^{\perp_{h}})\cap
Z(C)|,d;n-k-|Z(C^{\perp_{h}})\cap Z(C)|]]_{q}$.
###### Proof.
First of all, from Proposition 3 we have $\dim(C^{\perp_{h}}\cap
C)=n-|Z(C^{\perp_{h}})\cup Z(C)|=n-|Z(C)|-|Z(C^{\perp_{h}})|+|Z(C^{\perp_{h}})\cap
Z(C)|=k-k+|Z(C^{\perp_{h}})\cap Z(C)|=|Z(C^{\perp_{h}})\cap Z(C)|$. So,
$c=\dim(C^{\perp_{h}})-\dim(C^{\perp_{h}}\cap C)=n-k-|Z(C^{\perp_{h}})\cap Z(C)|$.
Using an $[n,k,d]_{q^{2}}$ code to construct a QUENTA code via Proposition 5,
we derive a code with parameters $[[n,k-|Z(C^{\perp_{h}})\cap
Z(C)|,d;n-k-|Z(C^{\perp_{h}})\cap Z(C)|]]_{q}$. ∎
###### Corollary 4.
Let $C$ be an LCD cyclic code with parameters $[n,k,d]_{q^{2}}$. Then there is
a maximal entanglement QUENTA code with parameters $[[n,k,d;n-k]]_{q}$.
###### Proof.
From the proof of Theorem 3.4, we have that $\dim(C^{\perp_{h}}\cap
C)=|Z(C^{\perp_{h}})\cap Z(C)|$. Since $C$ is LCD, $|Z(C^{\perp_{h}})\cap
Z(C)|=0$, and the result follows from Theorem 3.4. ∎
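At the level of defining sets, Corollary 4 reduces to a disjointness test, since the Hermitian hull has dimension $n-|Z(C)\cup Z(C^{\perp_{h}})|$. The sketch below (illustrative; it reuses hermitian_dual_defining_set from the sketch after Proposition 2, and the length-8 instances over $\mathbb{F}_{9}$ are assumptions) implements the test:

```python
def is_hermitian_lcd(q, n, Z):
    """Hermitian LCD <=> Z(C) and Z(C^{perp_h}) are disjoint (zero Hermitian hull)."""
    return not set(Z) & set(hermitian_dual_defining_set(q, n, Z))

print(is_hermitian_lcd(3, 8, {1, 5}))  # True: gives a maximal entanglement [[8,6,d;2]]_3
print(is_hermitian_lcd(3, 8, {1, 2}))  # False: the Hermitian hull is nonzero
```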
Differently from the construction of QUENTA codes from Euclidean duals of
cyclic codes, the construction via Hermitian duals can be more delicate, even
for Reed-Solomon codes. Even so, we are going to show that it is possible to
construct QUENTA codes from a particular case of Reed-Solomon codes and from
some cyclic codes.
###### Theorem 3.5.
Let $q$ be a prime power and let $C=RS_{k}(n,0)$ be a Reed-Solomon code
over $\mathbb{F}_{q^{2}}$ with $k=qt+r<q^{2}$, where $t\geq 1$ and $0\leq
r\leq q-1$, and $n=q^{2}$. Then we have the following:
* •
If $t\geq q-r-1$, then there exists an MDS QUENTA code with parameters
$[[q^{2},(t+1)^{2}-2(q-r)+1,q(q-t)-r+1;(q-t-1)^{2}+1]]_{q}.$
* •
If $t<q-r-1$, then there exists an MDS QUENTA code with parameters
$[[q^{2},t^{2}-1,q(q-t)-r+1;(q-t)^{2}-2r-1]]_{q}.$
###### Proof.
Since $C=RS_{k}(n,0)$, we have that $Z(C)=\\{0,1,2,\ldots,n-k-1\\}$. From
Proposition 2, we also have that
$Z(C^{\perp_{h}})=qZ(C^{\perp})=\\{q,2q,\ldots,kq\\}$. From $n=q^{2}$ and
$k=qt+r$, we can rewrite these two defining sets as $Z(C)=\\{qi+j|0\leq i\leq
q-t-2,0\leq j\leq q-1\\}\cup\\{(q-t-1)q+j|0\leq j\leq q-r-2\\}$ and
$Z(C^{\perp_{h}})=\\{qi+j|0\leq i\leq q-1,0\leq j\leq t-1\\}\cup\\{qi+t|0\leq
i\leq r\\}$. Using this description, we can compute $|Z(C)\cap
Z(C^{\perp_{h}})|$. To do so, we have to consider two cases separately, $t\geq
q-r-1$ and $t<q-r-1$. For the first case, the intersection is given by the
following set $Z(C)\cap Z(C^{\perp_{h}})=\\{qi+j|0\leq i\leq q-t-2,0\leq j\leq
t\\}\cup\\{(q-t-1)q+j|0\leq j\leq q-r-2\\}$. Thus, $|Z(C)\cap
Z(C^{\perp_{h}})|=(q-t-1)(t+1)+q-r-1$. Similarly, for the case $t<q-r-1$, we
have $Z(C)\cap Z(C^{\perp_{h}})=\\{qi+j|0\leq i\leq q-t-1,0\leq j\leq
t-1\\}\cup\\{qi+t|0\leq i\leq r\\}$, which implies that $|Z(C)\cap
Z(C^{\perp_{h}})|=(q-t)t+r+1$. Applying the previous computations, and using
the fact that $C$ has parameters $[q^{2},k,q^{2}-k+1]_{q^{2}}$, to Theorem
3.4, we have that there exists a QUENTA code with parameters
* •
$[[q^{2},(t+1)^{2}-2(q-r)+1,q(q-t)-r+1;(q-t-1)^{2}+1]]_{q}$, for $t\geq
q-r-1$; and
* •
$[[q^{2},t^{2}-1,q(q-t)-r+1;(q-t)^{2}-2r-1]]_{q}$, for $t<q-r-1$.
∎
###### Theorem 3.6.
Let $n=q^{4}-1$ and let $q\geq 3$ be a prime power. There exists a QUENTA code
with parameters $[[n,n-4(a-1)-3,d\geq a+1;1]]_{q}$, where $2\leq a\leq q^{2}-1$.
###### Proof.
Let $C_{a}$ be a cyclic code with defining set
$Z(C_{a})=\mathbb{C}_{0}\cup\mathbb{C}_{q^{2}+1}\cup(\cup_{i=2}^{a}\mathbb{C}_{q^{2}+i})$,
for $2\leq a\leq q^{2}-1$. From Ref. [9], we have that
$\mathbb{C}_{q^{2}+1}=\\{q^{2}+1\\}$ and
$\mathbb{C}_{q^{2}+i}=\\{q^{2}+i,1+iq^{2}\\}$. It is trivial to show that
$\mathbb{C}_{0}=\\{0\\}$. From $-qZ(C_{a})\cap Z(C_{a})=\mathbb{C}_{0}$ [9],
we can see that $Z(C_{a}^{\perp_{h}})\cap
Z(C_{a})=Z(C_{a})\setminus\mathbb{C}_{0}$. Hence, $|Z(C_{a}^{\perp_{h}})\cap
Z(C_{a})|=2(a-1)+1$. From the assumption on the defining set, the dimension
and minimal distance of the classical code are $k=n-2(a-1)-2$ and $d\geq a+1$,
respectively. Thus, applying these quantities to Theorem 3.4, we have that
there exists a QUENTA code with parameters $[[n,n-4(a-1)-3,d\geq
a+1;1]]_{q}$. ∎
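The coset facts quoted from [9] are straightforward to verify computationally (an illustrative sketch; it reuses cyclotomic_coset from the sketch in Section 2.1, with $q=3$ an assumed test value; since the code is over $\mathbb{F}_{q^{2}}$, the cosets are $q^{2}$-ary):

```python
q = 3
n = q**4 - 1                                    # n = 80
print(cyclotomic_coset(q**2 + 1, q**2, n))      # [10]: C_{q^2+1} = {q^2+1}
for a in range(2, q**2):
    print(cyclotomic_coset(q**2 + a, q**2, n))  # {q^2+a, 1+a*q^2} in each case
```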
Two important statements can be made about Theorem 3.6. Comparing the bound
given for the minimal distance with the Singleton bound for QUENTA codes, we
see that the difference between these two values is equal to $a-1$. So, for
small values of $a$ (such as $a=2$ or $a=3$) the QUENTA codes have minimal
distance close to optimal; e.g., if $a=2$ (or $a=3$), the family of QUENTA
codes is almost MDS (or almost near MDS). The second point is that the codes
in Theorem 3.6 can be seen as a generalization of the result by Qian and Zhang
[25].
In the following, we use LCD cyclic codes to construct maximal entanglement
QUENTA codes. The families obtained have an interesting range of possible
parameters.
###### Theorem 3.7.
Let $q$ be a prime power, $m\geq 2$, $2\leq\delta\leq
q^{2\lceil\frac{m}{2}\rceil}+1$, and
$\kappa=q^{2m}-2-2(\delta-1-\Big{\lfloor}\frac{\delta-1}{q^{2}}\Big{\rfloor})m$.
Then,
1. 1.
For $m$ odd and $1\leq u\leq q-1$, there is a maximal entanglement QUENTA code
with parameters
$[[q^{2m}-1,k,d\geq\delta+1+\lfloor\frac{\delta-1}{q}\rfloor;q^{2m}-1-k]]_{q}$,
where
where
$\displaystyle k=\left\\{\begin{array}[]{ll}\kappa,&\text{if }2\leq\delta\leq
q^{m}-1;\\\ \kappa+u^{2}m,&\text{if }uq^{m}\leq\delta\leq(u+1)(q^{m}-1);\\\
\kappa+(u^{2}+2v+1)m,&\text{if }\delta=(u+1)(q^{m}-1)+v+1\text{ for }0\leq
v\leq u-1;\\\ \kappa+q^{2}m,&\text{if }\delta=q^{m+1}\text{ or
}q^{m+1}+1.\end{array}\right.$ (11)
2. 2.
For $m$ even, there is a maximal entanglement QUENTA code with parameters
$[[q^{2m}-1,\kappa,d\geq\delta+1+\lfloor\frac{\delta-1}{q}\rfloor;2(\delta-1-\lfloor\frac{\delta-1}{q^{2}}\rfloor)m+1]]_{q}.$
(12)
###### Proof.
From Li [13], we have that there are LCD cyclic codes with parameters
$[q^{2m}-1,k,\delta+1+\lfloor\frac{\delta-1}{q}\rfloor]_{q^{2}}$, where $k$ is
as in Eqs. 11 and 12 for $m$ odd and even, respectively. Thus, applying these
LCD codes to Corollary 4, we obtain the mentioned codes. ∎
## 4 Code Examples
In Table 1, we present some MDS QUENTA codes obtained from Corollary 3 and
Theorem 3.5. The codes in the first column are obtained from the Euclidean
construction and the ones in the second from the Hermitian construction. As
can be seen, the latter have a larger length when compared over the same
field, so they can be used in applications where the quantum system has low
degrees of freedom. On the other hand, the codes in the first column can reach
values that the ones from the Hermitian construction cannot. Thus, these two
classes of QUENTA codes are suitable for their specific applications.
Table 1: Some new MDS maximal entanglement QUENTA codes from Reed-Solomon codes New QUENTA codes – Corollary 3 | New QUENTA codes – Theorem 3.5
---|---
$[[n,2b-1,n-k+1;n+2b-2k-1]]_{q}$ | $[[q^{2},t^{2}-1,q(q-t)-r+1;(q-t)^{2}-2r-1]]_{q}$
$0<b\leq(k+1)/2$ and $0<k<n\leq q$ | $qt+r<q^{2}$, where $1\leq t<q-r-1$ and $0\leq r\leq q-1$
Examples
${[[3,1,3;2]]}_{3}$ | ${[[16,3,9;3]]}_{4}$
${[[4,3,2;1]]}_{4}$ | ${[[64,35,17;3]]}_{8}$
${[[7,3,5;4]]}_{7}$ | ${[[64,15,31;11]]}_{8}$
${[[8,5,4;3]]}_{8}$ | ${[[256,195,33;3]]}_{16}$
${[[11,9,3;2]]}_{11}$ | ${[[256,120,78;18]]}_{16}$
${[[13,9,5;4]]}_{13}$ | ${[[1024,783,129;15]]}_{32}$
${[[16,13,3;2]]}_{16}$ | ${[[1024,624,220;38]]}_{32}$
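As a sanity check on the second column of Table 1, the parameters of Theorem 3.5 can be verified programmatically against the quantum Singleton bound. The sketch below (ours) assumes the QSB of Eq. 6 takes the usual entanglement-assisted form $2(d-1)\leq n-k+c$; under that reading, the identity holds with equality for every admissible $(q,t,r)$, so all such codes are MDS.

```python
# A consistency sketch (assuming the quantum Singleton bound of Eq. 6 reads
# 2(d-1) <= n - k + c): the Theorem 3.5 parameters
# [[q^2, t^2 - 1, q(q-t) - r + 1; (q-t)^2 - 2r - 1]]_q attain it with equality.
for q in (4, 8, 16, 32):
    for r in range(q):                    # 0 <= r <= q - 1
        for t in range(1, q - r - 1):     # 1 <= t < q - r - 1
            n, k = q**2, t**2 - 1
            d, c = q * (q - t) - r + 1, (q - t)**2 - 2 * r - 1
            assert 2 * (d - 1) == n - k + c
print("all Theorem 3.5 codes are MDS under the assumed bound")
```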
One family of QUENTA codes derived from BCH codes has been constructed; see
Theorem 3.3. Some examples of these QUENTA codes are shown in Table 2. As can
be seen from Table 1 in Ref. [21] (and the references therein), the QUENTA
codes derived from Theorem 3.3 have new parameters when compared with QUENTA
codes in the literature. So, even though their parameters are not as good as
the ones in our Table 1, these codes are new. One advantage of our codes over
the ones in the literature is that, since they were constructed from two BCH
codes rather than only one, as is commonly found in the literature, we have
more freedom in the choice of parameters. This property can help adjust the
QUENTA code to the framework where it will be used; see Table 2 for some
examples.
Table 2: Some new QUENTA codes from BCH codes New QUENTA codes – Theorem 3.3
---
$[[q^{2}-1,2a+1,b+1;2b-\lfloor\frac{b}{q}\rfloor]]_{q}$
$1\leq b\leq q$ and $0\leq a<q-b$
Examples
${[[15,5,2;2]]}_{4}$
${[[48,9,3;4]]}_{7}$
${[[63,7,5;8]]}_{8}$
${[[80,13,3;4]]}_{9}$
${[[255,19,7;12]]}_{16}$
The remaining QUENTA codes constructed in this paper are the ones derived from
cyclic codes that are neither Reed-Solomon nor BCH codes. Two families of such
codes were created, both of them using the Hermitian construction. Some
examples of parameters that can be obtained from these codes are presented in
Table 3. Codes in the first column are almost MDS or almost near MDS; i.e., the
Singleton defect, which is the difference between the quantum Singleton bound
(QSB) presented in Eq. 6 and the minimal distance of the code, is equal to one
or two units. Lastly, the second column of Table 3 displays some codes from
Theorem 3.7. All codes in Theorem 3.7 are maximal entanglement; thus, they can
be used to achieve the hashing bound [15]. Since their length is proportional
to a high power of the cardinality of the field, these codes are expected to
achieve low error probability. Comparing our parameters with the ones in the
literature [20, 11, 19, 22], we see that our codes are new.
Table 3: Some QUENTA codes from Cyclic Codes via Hermitian Construction New QUENTA codes – Theorem 3.6 | New QUENTA codes – Theorem 3.7
---|---
Examples
${[[80,73,3;1]]}_{3}$ | ${[[80,42,14;38]]}_{3}$
${[[80,69,4;1]]}_{3}$ | ${[[80,50,10;30]]}_{3}$
${[[255,248,3;1]]}_{4}$ | ${[[255,193,20;62]]}_{4}$
${[[255,244,4;1]]}_{4}$ | ${[[255,237,7;18]]}_{4}$
## 5 Conclusion
This paper has been devoted to the use of cyclic codes in the construction of
QUENTA codes. General construction methods of QUENTA codes from cyclic codes
via defining sets have been presented, using both the Euclidean and Hermitian
duals of the classical codes. As an application of these methods, five families
of QUENTA codes were created. Two of them were derived from Reed-Solomon codes,
which resulted in MDS codes. An additional family of almost MDS or almost near
MDS QUENTA codes was derived from general cyclic codes. One of the remaining
families used BCH codes as the classical counterpart; its construction used two
BCH codes, which provided more freedom in the parameters of the quantum code.
Lastly, we constructed a family of maximal entanglement QUENTA codes that can
be useful in reaching the hashing bound.
## 6 Acknowledgements
I am grateful to Ruud Pellikaan for proposing this research problem and for
interesting discussions which helped me to clarify some points of view. This
work was supported by the _Conselho Nacional de Desenvolvimento Científico e
Tecnológico_ , grant No. 201223/2018-0.
## References
* [1] Bowen, G.: Entanglement required in achieving entanglement-assisted channel capacities. Physical Review A 66, 052313–1–052313–8 (Nov 2002)
* [2] Brun, T., Devetak, I., Hsieh, M.H.: Correcting quantum errors with entanglement. Science 314(5798), 436–439 (Oct 2006)
* [3] Brun, T.A., Devetak, I., Hsieh, M.H.: Catalytic quantum error correction. IEEE Transactions on Information Theory 60(6), 3073–3089 (Jun 2014)
* [4] Chen, J., Huang, Y., Feng, C., Chen, R.: Entanglement-assisted quantum MDS codes constructed from negacyclic codes. Quantum Information Processing 16(12), 303 (Nov 2017)
* [5] Chen, X., Zhu, S., Kai, X.: Entanglement-assisted quantum MDS codes constructed from constacyclic codes. Quantum Information Processing 17(10), 273 (Oct 2018)
* [6] Fan, J., Chen, H., Xu, J.: Constructions of $q$-ary entanglement-assisted quantum MDS codes with minimum distance greater than $q+1$. Quantum Information and Computation 16(5&6), 423–434 (2016)
* [7] Fattal, D., Cubitt, T.S., Yamamoto, Y., Bravyi, S., Chuang, I.L.: Entanglement in the stabilizer formalism (Jun 2004)
* [8] Galindo, C., Hernando, F., Matsumoto, R., Ruano, D.: Entanglement-assisted quantum error-correcting codes over arbitrary finite fields. Quantum Information Processing 18(116), 1–18 (Apr 2019)
* [9] Guardia, G.G.L.: Constructions of new families of nonbinary quantum codes. Physical Review A 80(4), 042331–1–042331–11 (Oct 2009)
* [10] Guenda, K., Jitman, S., Gulliver, T.A.: Constructions of good entanglement-assisted quantum error correcting codes. Designs, Codes and Cryptography 86(1), 121–136 (Jan 2018)
* [11] Guo, L., Fu, Q., Li, R., Lu, L.: Maximal entanglement entanglement-assisted quantum codes of distance three. International Journal of Quantum Information 13(01), 1550002–1–1550002–7 (Feb 2015)
* [12] Huffman, W.C., Pless, V.: Fundamentals of Error-Correcting Codes. University Press, Cambridge (2003)
* [13] Li, C.: Hermitian lcd codes from cyclic codes. Designs, Codes and Cryptography 86(10), 2261–2278 (Oct 2018)
* [14] Li, R., Zuo, F., Liu, Y.: A study of skew asymmetric $q^{2}$-cyclotomic coset and its application. Journal of Air Force Engineering University (Natural Science Edition) 12(1), 87–89 (2011)
* [15] Li, R., Guo, L., Xu, Z.: Entanglement-assisted quantum codes achieving the quantum Singleton bound but violating the quantum Hamming bound. Quantum Information & Computation 14(13), 1107–1116 (Oct 2014)
* [16] Liu, X., Yu, L., Hu, P.: New entanglement-assisted quantum codes from $k$-Galois dual codes. Finite Fields and Their Applications 55, 21–32 (Jan 2019)
* [17] Lu, L., Li, R.: Entanglement-assisted quantum codes constructed from primitive quaternary BCH codes. International Journal of Quantum Information 12(3), 1450015 (Jun 2014)
* [18] Lu, L., Li, R., Guo, L.: Entanglement-assisted quantum codes from quaternary codes of dimension five. International Journal of Quantum Information 15(3), 1750017 (April 2017)
* [19] Lu, L., Li, R., Guo, L., Fu, Q.: Maximal entanglement entanglement-assisted quantum codes constructed from linear codes. Quantum Information Processing 14(1), 165–182 (Jan 2015)
* [20] Lu, L., Ma, W., Li, R., Ma, Y., Liu, Y., Cao, H.: Entanglement-assisted quantum MDS codes from constacyclic codes with large minimum distance. Finite Fields and Their Applications 53, 309–325 (Sep 2018)
* [21] Luo, G., Cao, X.: Two new families of entanglement-assisted quantum MDS codes from generalized Reed–Solomon codes. Quantum Information Processing 18(3), 89 (Feb 2019)
* [22] Lv, L., Li, R., Fu, Q., Li, X., Li, X.: Maximal entanglement entanglement-assisted quantum codes from quaternary BCH codes. In: Proceedings of IEEE Advanced Information Technology, Electronic and Automation Control Conference (2015)
* [23] Nielsen, M.A., Chuang, I.L.: Quantum Computation and Quantum Information. Cambridge University Press (2011)
* [24] Pellikaan, R., Wu, X.W., Bulygin, S., Jurrius, R.: Codes, Cryptology and Curves with Computer Algebra. Cambridge University Press (2017)
* [25] Qian, J., Zhang, L.: Constructions of new entanglement-assisted quantum MDS and almost MDS codes. Quantum Information Processing 18(3), 71 (Jan 2019)
* [26] Wilde, M.M., Brun, T.A.: Optimal entanglement formulas for entanglement-assisted quantum coding. Physical Review A 77(6), 064302–1–064302–4 (Jun 2008)
# SympOCnet: Solving optimal control problems with applications to high-dimensional multi-agent path planning problems††thanks: Submitted to the editors. This work was supported by OSD/AFOSR MURI grant FA9550-20-1-0358.
Tingwei Meng (Division of Applied Mathematics, Brown University, Providence,
RI 02912, USA), Zhen Zhang (Division of Applied Mathematics, Brown University,
Providence, RI 02912, USA), Jérôme Darbon (Division of Applied Mathematics,
Brown University, Providence, RI 02912, USA; corresponding author), and George
Em Karniadakis (Division of Applied Mathematics and School of Engineering,
Brown University, Providence, RI 02912, USA). Tingwei Meng and Zhen Zhang
contributed equally to this work.
###### Abstract
Solving high-dimensional optimal control problems in real-time is an important
but challenging problem, with applications to multi-agent path planning
problems, which have drawn increased attention given the growing popularity of
drones in recent years. In this paper, we propose a novel neural network
method called SympOCnet that applies the Symplectic network to solve high-
dimensional optimal control problems with state constraints. We present
several numerical results on path planning problems in two-dimensional and
three-dimensional spaces. Specifically, we demonstrate that our SympOCnet can
solve a problem with more than $500$ dimensions in 1.5 hours on a single GPU,
which shows the effectiveness and efficiency of SympOCnet. The proposed method
is scalable and has the potential to solve truly high-dimensional path
planning problems in real-time.
###### keywords:
deep neural networks, optimal control, path planning, physics-informed
learning
49M99, 68T07
## 1 Introduction
Optimal control methods are widely applied in many practical problems, which
include path planning [17, 83, 43, 25, 75, 59, 65], humanoid robot control
[30, 34, 56, 36, 33, 26], and robot manipulator control [61, 48, 54, 63, 12].
With the increasing impact of drones in everyday tasks as well as specialized
military tasks, the path planning problem of multiple drones has become
increasingly important. However, it is challenging to model this problem using
the optimal control formulation because the dimensionality is usually very
high, causing difficulty in applying traditional numerical methods, such as
the very accurate pseudospectral (PS) method [31, 32, 77, 80, 38], to solve
it. For instance, for a multi-agent path planning problem with $M$ drones, the
motion of each drone has to be described by a state variable in
$\mathbb{R}^{m}$, therefore the dimension of the corresponding optimal control
problem will be $Mm$, which is huge when the number of drones $M$ is large.
Computational complexity of traditional numerical methods may scale
exponentially with respect to the dimension, which makes the high-dimensional
problems intractable. This is an open issue called “curse-of-dimensionality”
(CoD) [7]. Hence, new approaches are required to tackle high-dimensional
optimal control problems efficiently.
In the literature, there are several algorithms for solving high-dimensional optimal
control problems (or the corresponding Hamilton-Jacobi PDEs), which include
optimization methods [18, 24, 21, 23, 15, 14, 87, 60, 55], max-plus methods
[1, 2, 29, 35, 39, 67, 66, 68, 69], tensor decomposition techniques [28, 44,
86], sparse grids [10, 37, 53], polynomial approximation [51, 52], model order
reduction [4, 57], optimistic planning [9], dynamic programming and
reinforcement learning [13, 11, 3, 8, 89], as well as methods based on neural
networks [5, 6, 27, 47, 42, 45, 46, 58, 73, 78, 81, 84, 62, 20, 22, 19, 71,
72, 49, 50, 74]. However, path planning problems are in general hard to solve
and cannot be solved directly by applying most of the aforementioned
algorithms. One difficulty comes from the state constraints given by the
complicated environment. Therefore, solving high-dimensional optimal control
problems with complicated state constraints is an important yet challenging
open problem. The difficulty of high dimensionality can be avoided by solving
the sequential path planning problem instead of the simultaneous planning
problem [13, 11, 79]. The main idea of sequential planning is to assign
different levels of priority to different drones and then sequentially solve
the low-dimensional path planning problem involving each drone according to
its priority. However, sequential planning only provides a feasible solution
to the original simultaneous planning problem, which may not be optimal.
Hence, how to solve high-dimensional simultaneous planning problems with
complicated state constraints is still an unsolved problem.
Recently, neural networks have achieved great success in solving high-
dimensional PDEs and inverse problems and bear some promise in overcoming the
CoD. We can make further progress if we take advantage of the knowledge about
the underlying data generation process and encode such information in the
neural network architectures and the loss functions [49, 88, 40, 70]. As a
specific example, the physics-informed neural network (PINN), which was first
introduced in [76], encodes physical information directly into the loss
function of neural networks using the residuals of ODEs/PDEs and automatic
differentiation. In this work, we leverage PINNs and propose a novel method
using a Symplectic optimal control network called SympOCnet to solve high-
dimensional optimal control problems. SympOCnet is designed based on the
Symplectic network (SympNet), which is a neural network architecture proposed
in [50] and is able to approximate arbitrary symplectic maps. In the original
paper, SympNet is used to solve the inverse parameter identification problem
by acting as a surrogate model for the data generation procedure. In this
paper, we present a novel way to utilize SympNet for solving Hamiltonian ODEs
in a forward manner. By applying the SympOCnet, we encode our knowledge about
optimal control problems and Hamilton-Jacobi theory in the neural network. We
then apply SympOCnet on multi-agent path planning problems and show its
effectiveness and efficiency in solving high-dimensional problems.
Specifically, we can solve a path planning problem involving $256$ drones,
which corresponds to a $512$-dimensional optimal control problem.
The organization of this paper is given as follows. In Section 2, we briefly
summarize some preliminary background which will be used in the paper,
including Hamiltonian systems and symplectic maps in Section 2.1 and SympNet
in Section 2.2. In Section 3, we present our proposed SympOCnet method for the
optimal control problems without state constraints, which serves as a building
block for the full version of the SympOCnet method. The full version is then
introduced in Section 4, which is proposed for solving the optimal control
problems with state constraints. To be specific, we propose four loss
functions, which are compared in the numerical experiments. Our numerical
experiments on multi-agent path planning problems are presented in Section 5.
The first example in Section 5.1 is used to compare the four loss functions.
The second example in Section 5.2 demonstrates the generalizability of our
method when only partial data is available in the training process. Then, in
Sections 5.3 and 5.4, we demonstrate the effectiveness and efficiency of
SympOCnet on high-dimensional problems, with dimensions ranging from $64$ to
$512$. A summary is provided in Section 6.
## 2 Preliminary background
In this paper, we solve optimal control problems with SympOCnet, a neural
network with SympNet architecture and loss function given by the corresponding
Hamiltonian ODEs. This section provides some mathematical materials used in
the remainder of the paper. In Section 2.1, we provide a brief summary about
symplectic maps, Hamiltonian ODEs and their relations. Then, in Section 2.2,
we review the SympNet architectures. For more details regarding the theory and
applications of symplectic methods, we refer the readers to [41].
### 2.1 Hamiltonian systems and symplectic maps
###### Definition 2.1 (Symplectic maps).
Let $U$ be an open set in $\mathbb{R}^{2n}$. A differentiable map
$\phi:U\rightarrow\mathbb{R}^{2n}$ is called symplectic if the Jacobian matrix
$\nabla\phi$ satisfies
(1)
$\nabla\phi^{T}(\bm{z})J\nabla\phi(\bm{z})=J\quad\forall\bm{z}\in\mathbb{R}^{2n},$
where $J$ is a matrix with $2n$ rows and $2n$ columns defined by
(2) $J:=\begin{pmatrix}\bm{0}&I_{n}\\\ -I_{n}&\bm{0}\end{pmatrix},$
and $I_{n}$ denotes the identity matrix with $n$ rows and $n$ columns.
The Hamiltonian ODE system is a dynamical system taking the form
$\dot{\bm{z}}(s)=J\nabla H(\bm{z}(s)),$
where $J$ is the matrix defined in (2) and $H:U\rightarrow\mathbb{R}$ is a
function called the Hamiltonian. Hamiltonian systems and symplectic maps are
closely related. To be specific, the Hamiltonian structure is
preserved under the change of variable using any symplectic map. This result
is stated in the following theorem.
###### Theorem 2.2.
[41, Theorem 2.8 on p. 187] Let $U$ and $V$ be two open sets in
$\mathbb{R}^{2n}$. Let $\phi:U\rightarrow V$ be a change of coordinates such
that $\phi$ and $\phi^{-1}$ are continuously differentiable functions. If
$\phi$ is symplectic, the Hamiltonian ODE system $\dot{\bm{z}}(s)=J\nabla
H(\bm{z}(s))$ can be written in the new variable $\bm{w}=\phi(\bm{z})$ as
(3) $\dot{\bm{w}}(s)=J\nabla\tilde{H}(\bm{w}(s)),$
where the new Hamiltonian $\tilde{H}$ is defined by
(4) $\tilde{H}(\bm{w})=H(\bm{z})=H(\phi^{-1}(\bm{w}))\quad\forall\bm{w}\in V.$
Conversely, if $\phi$ transforms every Hamiltonian system to another
Hamiltonian system by (3) and (4), then $\phi$ is symplectic.
Theorem 2.2 indicates that a symplectic map can transform a Hamiltonian ODE
system to another system which is again Hamiltonian and potentially in a
simpler form. This is the starting point of our proposed method. It is well
known from the literature that the optimal trajectory of an optimal control
problem is related to a Hamiltonian ODE system under certain assumptions. Such
systems may be solved more easily if written in the right coordinates. We
propose to learn the corresponding coordinate transformation through a
parameterized family of symplectic maps $\phi_{\theta}$, where $\theta$
represents the unknown parameters to be learned, and solve the Hamiltonian
ODEs with new coordinates. The solution to the original problem can be
obtained by mapping the trajectory back to original phase space through
$\varphi_{\theta}=\phi_{\theta}^{-1}$.
### 2.2 Symplectic networks (SympNets)
SympNet is a neural network architecture proposed in [50] to approximate
symplectic transformations. There are different kinds of SympNet
architectures. In this paper, we use the G-SympNet as the class of our
symplectic transformation $\varphi_{\theta}$. For other architectures, we
refer the readers to [50].
Define a function
$\hat{\sigma}_{K,\bm{a},\bm{b}}\colon\mathbb{R}^{n}\to\mathbb{R}^{n}$ by
$\hat{\sigma}_{K,\bm{a},\bm{b}}(\bm{x}):=K^{T}(\bm{a}\odot\sigma(K\bm{x}+\bm{b}))$
for any $\bm{x}\in\mathbb{R}^{n}$, where $\sigma$ is the activation function,
and $\odot$ denotes the componentwise multiplication. Here $\bm{a},\bm{b}$ are
vectors in $\mathbb{R}^{l}$, $K$ is a matrix with $l$ rows and $n$ columns.
Any G-SympNet is an alternating composition of the following two parameterized
functions:
(5) $\begin{split}&\mathcal{G}_{up}\begin{pmatrix}\bm{x}\\\
\bm{p}\end{pmatrix}=\begin{pmatrix}\bm{x}\\\
\bm{p}+\hat{\sigma}_{K,\bm{a},\bm{b}}(\bm{x})\end{pmatrix}\quad\forall\bm{x},\bm{p}\in\mathbb{R}^{n},\\\
&\mathcal{G}_{low}\begin{pmatrix}\bm{x}\\\
\bm{p}\end{pmatrix}=\begin{pmatrix}\hat{\sigma}_{K,\bm{a},\bm{b}}(\bm{p})+\bm{x}\\\
\bm{p}\end{pmatrix}\quad\forall\bm{x},\bm{p}\in\mathbb{R}^{n},\end{split}$
where the learnable parameters are the matrix $K$ and the vectors
$\bm{a},\bm{b}\in\mathbb{R}^{l}$. The dimension $l$ (which is the dimension of
$\bm{a},\bm{b}$ as well as the number of rows in $K$) is a positive integer
that can be tuned, called the width of SympNet. In [50], it is proven that
G-SympNets are universal approximators within the family of symplectic maps.
Note that it is easy to obtain the inverse map of a G-SympNet, since we have
explicit formulas for $\mathcal{G}_{up}^{-1}$ and $\mathcal{G}_{low}^{-1}$
given as follows
(6) $\mathcal{G}_{up}^{-1}\begin{pmatrix}\bm{x}\\\
\bm{p}\end{pmatrix}=\begin{pmatrix}\bm{x}\\\
\bm{p}-\hat{\sigma}_{K,\bm{a},\bm{b}}(\bm{x})\end{pmatrix},\quad\mathcal{G}_{low}^{-1}\begin{pmatrix}\bm{x}\\\
\bm{p}\end{pmatrix}=\begin{pmatrix}\bm{x}-\hat{\sigma}_{K,\bm{a},\bm{b}}(\bm{p})\\\
\bm{p}\end{pmatrix}.$
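To make the construction concrete, the following PyTorch sketch (our own illustration, not the reference SympNet code; the initialization scales are arbitrary choices, and ReLU is used as in the experiments of Section 5) implements the shear layers $\mathcal{G}_{up}$ and $\mathcal{G}_{low}$ of Eq. (5) together with their closed-form inverses of Eq. (6), and checks that composing a stack of layers with its inverse recovers the input.

```python
# A sketch (ours) of the G-SympNet layers in Eq. (5) and inverses in Eq. (6).
import torch

class GLayer(torch.nn.Module):
    def __init__(self, n, l, up=True):
        super().__init__()
        self.K = torch.nn.Parameter(torch.randn(l, n) * 0.1)
        self.a = torch.nn.Parameter(torch.randn(l) * 0.1)
        self.b = torch.nn.Parameter(torch.zeros(l))
        self.up = up

    def sigma_hat(self, z):                 # K^T (a ⊙ σ(K z + b)), Eq. (5)
        return (self.a * torch.relu(z @ self.K.T + self.b)) @ self.K

    def forward(self, x, p):
        if self.up:                         # G_up:  p <- p + σ̂(x)
            return x, p + self.sigma_hat(x)
        return x + self.sigma_hat(p), p     # G_low: x <- x + σ̂(p)

    def inverse(self, x, p):                # closed-form inverses, Eq. (6)
        if self.up:
            return x, p - self.sigma_hat(x)
        return x - self.sigma_hat(p), p

# Each layer is a shear whose off-diagonal Jacobian block K^T diag(·) K is
# symmetric, hence symplectic; an alternating stack is a G-SympNet.
n, l = 4, 16
layers = [GLayer(n, l, up=(i % 2 == 0)) for i in range(6)]
x, p = torch.randn(3, n), torch.randn(3, n)
y, q = x, p
for lay in layers:
    y, q = lay(y, q)
for lay in reversed(layers):
    y, q = lay.inverse(y, q)
print(torch.allclose(y, x, atol=1e-5), torch.allclose(q, p, atol=1e-5))
```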
## 3 SympOCnet for optimal control problems without state constraints
In this section, we consider the following optimal control problem without
state constraints
(7)
$\begin{split}&\min\left\\{\int_{0}^{T}\ell(\bm{x}(s),\bm{v}(s))ds\colon\bm{x}(0)=\bm{x}_{0},\bm{x}(T)=\bm{x}_{T},\dot{\bm{x}}(s)=\bm{v}(s)\in
U\,\forall s\in(0,T)\right\\}\\\ =\
&\min\left\\{\int_{0}^{T}\ell(\bm{x}(s),\dot{\bm{x}}(s))ds\colon\bm{x}(0)=\bm{x}_{0},\bm{x}(T)=\bm{x}_{T},\dot{\bm{x}}(s)\in
U\,\forall s\in(0,T)\right\\},\end{split}$
where $\bm{v}\colon[0,T]\to U$ is the control function taking values in the
control set $U\subseteq\mathbb{R}^{n}$, $\bm{x}\colon[0,T]\to\mathbb{R}^{n}$
is the corresponding trajectory, and $\ell\colon\mathbb{R}^{n}\times
U\to\mathbb{R}$ is called the running cost.
It is well-known in the literature that the solution $\bm{x}$ to the optimal
control problem (7) satisfies the following Hamiltonian ODE system
(8)
$\begin{dcases}\dot{\bm{x}}(s)=\nabla_{\bm{p}}H(\bm{x}(s),\bm{p}(s)),&s\in[0,T],\\\
\dot{\bm{p}}(s)=-\nabla_{\bm{x}}H(\bm{x}(s),\bm{p}(s)),&s\in[0,T],\end{dcases}$
where the function $H\colon\mathbb{R}^{2n}\to\mathbb{R}$ is called Hamiltonian
defined by
(9) $H(\bm{x},\bm{p})=\sup_{\bm{v}\in
U}\\{\langle\bm{v},\bm{p}\rangle-\ell(\bm{x},\bm{v})\\},\quad\forall\bm{x},\bm{p}\in\mathbb{R}^{n}.$
A method to solve the optimal control problem (7) in high dimensions is to
solve the corresponding Hamiltonian ODE system (8) instead. However, when the
Hamiltonian is too complicated, the Hamiltonian ODEs may be hard to solve. To
tackle this difficulty, we propose a method called SympOCnet, which uses the
SympNet architecture to solve the optimal control problem (7) in high
dimensions. The key idea is to perform a change of variables in the phase
space using a symplectic map represented by a SympNet, and then solve the
Hamiltonian ODEs in the new coordinate system.
Given a symplectic map $\phi\colon\mathbb{R}^{2n}\to\mathbb{R}^{2n}$, we
define the change of variables $(\bm{y},\bm{q})=\phi(\bm{x},\bm{p})$ for any
$\bm{x},\bm{p}\in\mathbb{R}^{n}$. If the phase trajectory
$t\mapsto(\bm{x}(t),\bm{p}(t))$ is the solution to the Hamiltonian ODEs (8),
then by Theorem 2.2, the phase trajectory $t\mapsto(\bm{y}(t),\bm{q}(t))$
under the new coordinate system solves the Hamiltonian ODEs with a new
Hamiltonian $\tilde{H}\colon\mathbb{R}^{2n}\to\mathbb{R}$ defined by
$\tilde{H}(\bm{y},\bm{q})=H(\phi^{-1}(\bm{y},\bm{q}))\quad\forall\bm{y},\bm{q}\in\mathbb{R}^{n}.$
In other words, the function $t\mapsto(\bm{y}(t),\bm{q}(t))$ satisfies
(10)
$\begin{dcases}\dot{\bm{y}}(s)=\nabla_{\bm{q}}\tilde{H}(\bm{y}(s),\bm{q}(s)),&s\in[0,T],\\\
\dot{\bm{q}}(s)=-\nabla_{\bm{y}}\tilde{H}(\bm{y}(s),\bm{q}(s)),&s\in[0,T].\end{dcases}$
Here, we assume that the new Hamiltonian $\tilde{H}$ has a simpler form such
that the corresponding Hamiltonian ODE system (10) is easier to solve. To be
specific, we assume that $\tilde{H}$ does not depend on the variable $\bm{y}$,
and hence the Hamiltonian ODEs become
(11) $\begin{dcases}\dot{\bm{y}}(s)=\nabla\tilde{H}(\bm{q}(s)),&s\in[0,T],\\\
\dot{\bm{q}}(s)=0,&s\in[0,T].\end{dcases}$
The solution $s\mapsto\bm{q}(s)$ is a constant function, and the solution
$s\mapsto\bm{y}(s)$ is an affine function. Therefore, the solution
$s\mapsto(\bm{y}(s),\bm{q}(s))$ can be represented using three parameters in
$\mathbb{R}^{n}$, which are denoted by $\bm{q}_{0}=\bm{q}(0)$,
$\bm{y}_{0}=\bm{y}(0)$ and $\bm{u}=\dot{\bm{y}}(s)$. With these three
parameters, the solution to (11) is given by $\bm{y}(s)=\bm{y}_{0}+s\bm{u}$
and $\bm{q}(s)=\bm{q}_{0}$ for any $s\in[0,T]$.
Under this framework, we explain our proposed method in the remainder of this
section. In Section 3.1, we apply the SympOCnet method to approximate the
unknown function $\phi$ and the unknown parameters $\bm{u},\bm{y}_{0}$ and
$\bm{q}_{0}$. Then, after the training process, to further enhance the
accuracy we can also apply the pseudospectral method for post-processing (for
low-dimensional problems), as we explain in Section 3.2.
### 3.1 SympOCnet method
In this section, we describe how to apply SympOCnet method to solve the
optimal control problem (7). SympOCnet uses a SympNet architecture to
represent the inverse of the unknown function $\phi$, which is also a
symplectic map. In other words, we approximate $\phi^{-1}$ using a SympNet
denoted by $\varphi=(\varphi_{1},\varphi_{2})$. Here, $\varphi_{1}$ denotes
the first $n$ components in $\varphi$, which is a map from $(\bm{y},\bm{q})$
to $\bm{x}$, and $\varphi_{2}$ denotes the last $n$ components in $\varphi$,
which is a map from $(\bm{y},\bm{q})$ to $\bm{p}$. Other learnable parameters
include $\bm{u},\bm{y}_{0},\bm{q}_{0}\in\mathbb{R}^{n}$ mentioned above, which
respectively correspond to $\dot{\bm{y}}(s)$, $\bm{y}(0)$ and $\bm{q}(s)$ in
the new Hamiltonian ODE system (11). With the SympNet $\varphi$ and the
variables $\bm{u},\bm{y}_{0},\bm{q}_{0}$, the solution to the original
Hamiltonian ODEs (8) is given by
$s\mapsto\varphi(\bm{y}_{0}+s\bm{u},\bm{q}_{0})$.
To learn the SympNet $\varphi$ and the parameters $\bm{u},\bm{y}_{0}$ and
$\bm{q}_{0}$, we apply the PINN loss [76] on the Hamiltonian ODEs (8), and get
$\begin{split}\mathcal{L}_{res}&=\sum_{j=1}^{N}\left\lVert\frac{d\bm{x}(s_{j})}{ds}-\nabla_{\bm{p}}H(\bm{x}(s_{j}),\bm{p}(s_{j}))\right\rVert^{2}+\sum_{j=1}^{N}\left\lVert\frac{d\bm{p}(s_{j})}{ds}+\nabla_{\bm{x}}H(\bm{x}(s_{j}),\bm{p}(s_{j}))\right\rVert^{2}\\\
&=\sum_{j=1}^{N}\left\lVert\nabla_{\bm{y}}\varphi_{1}(\bm{y}_{0}+s_{j}\bm{u},\bm{q}_{0})\bm{u}-\nabla_{\bm{p}}H(\varphi(\bm{y}_{0}+s_{j}\bm{u},\bm{q}_{0}))\right\rVert^{2}\\\
&\quad\quad+\sum_{j=1}^{N}\left\lVert\nabla_{\bm{y}}\varphi_{2}(\bm{y}_{0}+s_{j}\bm{u},\bm{q}_{0})\bm{u}+\nabla_{\bm{x}}H(\varphi(\bm{y}_{0}+s_{j}\bm{u},\bm{q}_{0}))\right\rVert^{2},\end{split}$
where the second equality holds by the following chain rule
$\begin{split}\frac{d\bm{x}(s_{j})}{ds}&=\nabla_{\bm{y}}\bm{x}(\bm{y}_{0}+s_{j}\bm{u},\bm{q}_{0})\frac{d\bm{y}(s_{j})}{ds}+\nabla_{\bm{q}}\bm{x}(\bm{y}_{0}+s_{j}\bm{u},\bm{q}_{0})\frac{d\bm{q}(s_{j})}{ds}\\\
&=\nabla_{\bm{y}}\varphi_{1}(\bm{y}_{0}+s_{j}\bm{u},\bm{q}_{0})\bm{u},\\\
\frac{d\bm{p}(s_{j})}{ds}&=\nabla_{\bm{y}}\bm{p}(\bm{y}_{0}+s_{j}\bm{u},\bm{q}_{0})\frac{d\bm{y}(s_{j})}{ds}+\nabla_{\bm{q}}\bm{p}(\bm{y}_{0}+s_{j}\bm{u},\bm{q}_{0})\frac{d\bm{q}(s_{j})}{ds}\\\
&=\nabla_{\bm{y}}\varphi_{2}(\bm{y}_{0}+s_{j}\bm{u},\bm{q}_{0})\bm{u}.\end{split}$
Here, the data points are given by $s_{1},\dots,s_{N}\in[0,T]$. The gradients
$\nabla_{\bm{y}}\varphi_{1}(\bm{y}_{0}+s_{j}\bm{u},\bm{q}_{0})$ and
$\nabla_{\bm{y}}\varphi_{2}(\bm{y}_{0}+s_{j}\bm{u},\bm{q}_{0})$ are calculated
by the automatic differentiation of the SympNet $\varphi$. Moreover, we have
another loss term for the initial and terminal conditions of $\bm{x}$, defined
by
$\begin{split}\mathcal{L}_{bd}&=\left\lVert\bm{x}(0)-\bm{x}_{0}\right\rVert^{2}+\left\lVert\bm{x}(T)-\bm{x}_{T}\right\rVert^{2}\\\
&=\left\lVert\varphi_{1}(\bm{y}_{0},\bm{q}_{0})-\bm{x}_{0}\right\rVert^{2}+\left\lVert\varphi_{1}(\bm{y}_{0}+T\bm{u},\bm{q}_{0})-\bm{x}_{T}\right\rVert^{2}.\end{split}$
The total loss function $\mathcal{L}$ is a weighted sum of the two loss terms,
defined by
(12) $\mathcal{L}=\mathcal{L}_{res}+\lambda\mathcal{L}_{bd},$
where $\lambda$ is a tunable positive hyperparameter. After training, the
optimal trajectory is obtained from
(13) $\bm{x}(s)=\varphi_{1}(\bm{y}_{0}+s\bm{u},\bm{q}_{0})\quad\forall
s\in[0,T],$
where the function $\varphi_{1}$ contains the first $n$ components of the
SympNet $\varphi$. An illustration of our proposed SympOCnet method is shown
in Figure 1.
Figure 1: An illustration of the SympOCnet method. We propose to use the
SympNet $\phi$ to map a curvy trajectory in the phase space to a simpler
trajectory in new coordinates, and to solve the corresponding Hamiltonian
system of the optimal control problem there.
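A compact sketch of the resulting training objective is given below. It is our own illustration, not the authors' code: `sympnet(y, q)` is assumed to return the pair $(\bm{x},\bm{p})=\varphi(\bm{y},\bm{q})$ for batched inputs, `H(x, p)` is assumed to return the Hamiltonian per sample, and the shapes of $\bm{y}_{0},\bm{q}_{0},\bm{u}$ (here $(1,n)$ trainable tensors) are a convenience; the default weight $\lambda=600$ matches the value reported in Section 5.

```python
# A sketch (ours) of the loss in Eq. (12). Assumptions: sympnet(y, q) returns
# (x, p) = φ(y, q) with shapes (N, n); H(x, p) returns shape (N,); y0, q0, u
# are (1, n) tensors with requires_grad=True; x0, xT have shape (n,).
import torch

def time_derivative(f, s):
    # df/ds for f of shape (N, n); row j depends only on s[j], so summing
    # each column over the batch before autograd.grad gives the derivative.
    cols = [torch.autograd.grad(f[:, i].sum(), s, create_graph=True)[0]
            for i in range(f.shape[1])]
    return torch.cat(cols, dim=1)

def sympocnet_loss(sympnet, H, y0, q0, u, x0, xT, T=1.0, N=50, lam=600.0):
    s = torch.linspace(0.0, T, N, requires_grad=True).unsqueeze(1)
    y = y0 + s * u                        # y(s) = y0 + s u   (Eq. 11)
    q = q0.expand(N, -1)                  # q(s) = q0         (Eq. 11)
    x, p = sympnet(y, q)                  # trajectory in the original space
    dHdx, dHdp = torch.autograd.grad(H(x, p).sum(), (x, p), create_graph=True)
    res = ((time_derivative(x, s) - dHdp) ** 2).sum() \
        + ((time_derivative(p, s) + dHdx) ** 2).sum()   # PINN residual of (8)
    bd = ((x[0] - x0) ** 2).sum() + ((x[-1] - xT) ** 2).sum()
    return res + lam * bd                 # Eq. (12)
```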
### 3.2 Post-processing using the pseudospectral method
In Section 3.1, we introduced the SympOCnet to solve the optimal control
problem (7). In this section, we explain how to apply the pseudospectral
method to postprocess SympOCnet results to improve feasibility and optimality.
The pseudospectral method is a popular method proposed to solve a general
optimal control problem [31, 32, 77, 80, 38]. It applies the quadrature rule
to discretize the integration and the ODE in a continuous optimal control
problem. In this way, the continuous optimal control problem is converted to a
finite-dimensional optimization problem with constraints. Then, some
optimization algorithms such as sequential quadratic programming are applied
to solve the finite-dimensional problem.
When the dimension is high, it is, in general, hard to solve the converted
finite-dimensional optimization problem. The difficulty comes from the number
of variables and constraints in the optimization problem, as well as the non-
convexity of the problem. However, there are some theoretical guarantees when
the initialization is good enough. Therefore, it can be applied as a
postprocessing procedure by taking the trajectory obtained from Section 3.1 as
an initialization.
We will apply this procedure in the numerical experiments in Sections 5.1 and
5.2. From the numerical results, we observe that the pseudospectral method
preserves the shape of the trajectories obtained from the training step of the
SympOCnet method, and it makes the trajectories smoother while improving the
optimality. Therefore, the pseudospectral method performs well as a post-
processing method. It is also observed in the experiments that pseudospectral
method itself with a linear initialization may not converge to the correct
solution, which justifies the choice of SympOCnet as a good initializer. Note
that we do not apply this post-processing to the experiments in Sections 5.3
and 5.4 due to the high dimensionality.
## 4 SympOCnet for optimal control problems with state constraints
In this section, we focus on the following optimal control problem with state
constraints
(14)
$\begin{split}&\min\Bigg{\\{}\int_{0}^{T}\ell(\bm{x}(s),\bm{v}(s))ds\colon
h(\bm{x}(s))\geq 0,\ \dot{\bm{x}}(s)=\bm{v}(s)\in U\ \forall s\in[0,T],\\\
&\qquad\qquad\bm{x}(0)=\bm{x}_{0},\ \bm{x}(T)=\bm{x}_{T}\Bigg{\\}}\\\
=\,&\min\Bigg{\\{}\int_{0}^{T}\ell(\bm{x}(s),\dot{\bm{x}}(s))ds\colon
h(\bm{x}(s))\geq 0,\ \dot{\bm{x}}(s)\in U\ \forall s\in[0,T],\\\
&\qquad\qquad\bm{x}(0)=\bm{x}_{0},\ \bm{x}(T)=\bm{x}_{T}\Bigg{\\}},\end{split}$
where $\ell\colon\mathbb{R}^{n}\times U\to\mathbb{R}$ is the running cost, and
$h\colon\mathbb{R}^{n}\to\mathbb{R}^{m}$ is a function providing the
constraint on the state variable $\bm{x}$. For instance, to avoid obstacles,
$h$ can be defined using the signed distance function to the obstacles.
In order to apply the SympOCnet method in Section 3, we need to convert the
original problem (14) to an unconstrained optimal control problem, which is
achieved using the soft penalty method described in Section 4.1. The soft
penalty method enforces the constraints only in an asymptotic sense, i.e., when
the hyperparameters converge to zero under some conditions, which is
impractical in experiments. Since the hyperparameters in the soft penalty are
fixed during the training process, we introduce extra terms in the loss
function in Sections 4.2 and 4.3 to improve the feasibility of the output
trajectory. The selection of these extra loss terms may differ from problem to
problem. Later in Section 5.1, we compare the performance of different loss
functions under our path planning problem setup to choose a suitable loss
function for the other experiments.
### 4.1 Soft penalty method for the optimal control problem
To convert a constrained problem to an unconstrained one, the soft penalty
method replaces the hard constraint by a soft penalty term in the objective
function. Here, we consider the log penalty function [82, 85], denoted by
$\beta_{a}\colon\mathbb{R}\to\mathbb{R}$ and defined by
(15) $\beta_{a}(x):=\begin{dcases}-\log(x),&\text{if}\ x>a,\\\
-\log(a)+\frac{1}{2}\left(\left(\frac{x-2a}{a}\right)^{2}-1\right),&\text{if}\
x\leq a,\end{dcases}$
where $a$ is a positive hyperparameter. The high-dimensional log penalty
function is the summation of the one-dimensional function acting on each
component, i.e., for any $m>1$, the log penalty function
$\beta_{a}\colon\mathbb{R}^{m}\to\mathbb{R}$ is defined by
(16)
$\beta_{a}(\bm{x}):=\sum_{i=1}^{m}\beta_{a}(x_{i})\quad\forall\bm{x}=(x_{1},\dots,x_{m})\in\mathbb{R}^{m}.$
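For reference, Eqs. (15)-(16) translate directly into a few lines of code. The sketch below (ours) uses torch so that the penalty can be dropped into the training loss, and guards the logarithm so that gradients stay finite on the quadratic branch.

```python
# A direct transcription (sketch) of the log penalty in Eqs. (15)-(16).
import math
import torch

def log_penalty(x, a):
    """beta_a applied componentwise and summed over the last axis; a > 0."""
    smooth = -math.log(a) + 0.5 * (((x - 2.0 * a) / a) ** 2 - 1.0)
    safe_log = -torch.log(torch.clamp(x, min=a))  # equals -log(x) when x > a
    return torch.where(x > a, safe_log, smooth).sum(dim=-1)
```

In Section 5.1 the reported hyperparameters are $a=0.004$ and $\epsilon=0.0004$, so the penalty enters the objective as `eps * log_penalty(h_x, a)` applied to the constraint values $h(\bm{x}(s))$.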
With this log penalty term, the problem (14) is converted to
(17)
$\begin{split}\min\Bigg{\\{}\int_{0}^{T}\Big(\ell(\bm{x}(s),\dot{\bm{x}}(s))+\epsilon\beta_{a}(h(\bm{x}(s)))\Big)ds\colon\dot{\bm{x}}(s)\in
U\ \forall s\in[0,T],\\\
\bm{x}(0)=\bm{x}_{0},\ \bm{x}(T)=\bm{x}_{T}\Bigg{\\}},\end{split}$
where $\epsilon$ and $a$ are positive hyperparameters. This problem (17) is in
the form of (7), and hence can be solved using the SympOCnet method in Section
3. By straightforward calculation, for a sequence of parameters $\epsilon_{k}$
and $a_{k}$ such that they both converge to zero and the following equations
hold
$\lim_{k\to\infty}\epsilon_{k}\log
a_{k}=0,\quad\quad\lim_{k\to\infty}\frac{a_{k}^{2}}{\epsilon_{k}}=0,$
the penalty term $\epsilon_{k}\beta_{a_{k}}$ converges to the indicator
function of $[0,+\infty)$, which takes the value $0$ in $[0,+\infty)$ and
$+\infty$ in $(-\infty,0)$. Therefore, the objective function in (17) with
parameters $\epsilon_{k}$, $a_{k}$ converges to the objective function in (14)
with the hard constraint given by $h$.
With this penalty term, the Hamiltonian corresponding to the optimal control
problem (17) equals
(18) $H_{\epsilon,a}(\bm{x},\bm{p})=\sup_{\bm{v}\in
U}\\{\langle\bm{v},\bm{p}\rangle-\ell(\bm{x},\bm{v})-\epsilon\beta_{a}(h(\bm{x}))\\}=H(\bm{x},\bm{p})-\epsilon\beta_{a}(h(\bm{x})),$
where $H$ is the Hamiltonian (9) corresponding to the running cost $\ell$
appearing in (17).
###### Remark 4.1.
Note that another widely used method to convert a constrained optimization
problem to an unconstrained one is the augmented Lagrangian method. In our
problem, since the constraint is enforced for any time $s\in[0,T]$, to apply
the augmented Lagrangian method, the Lagrange multiplier needs to be a
function of time. Then, with the added Lagrangian term, the new running cost
also depends on the time variable, which requires a more complicated
symplectic method to handle time-dependent Hamiltonians. Therefore, we do not
apply the augmented Lagrangian method for the constrained optimal control
problem (14) in this paper. Instead, we add extra terms in the loss function
(see Sections 4.2 and 4.3) to enforce the constraint.
### 4.2 Penalty method in the training process of SympOCnet
To enforce the state constraint $h(\bm{x}(s))\geq 0$, one way is to add a
penalty term to the loss function of the training process. Here, we introduce
two penalty terms, which include the log penalty $\beta_{a}$ defined in (16)
and the quadratic penalty. The loss function with the log penalty term
corresponding to the state constraint $h(\bm{x}(s))\geq 0$ is defined by
(19)
$\begin{split}\mathcal{L}_{log}(\varphi,\bm{y}_{0},\bm{q}_{0},\bm{u})&=\mathcal{L}(\varphi,\bm{y}_{0},\bm{q}_{0},\bm{u})+\frac{\tilde{\lambda}}{N}\sum_{j=1}^{N}\epsilon\beta_{a}(h(\varphi_{1}(\bm{y}_{0}+s_{j}\bm{u},\bm{q}_{0})))\\\
&\quad\quad+\epsilon\beta_{a}(\bm{x}_{0}-\varphi_{1}(\bm{y}_{0},\bm{q}_{0}))+\epsilon\beta_{a}(\varphi_{1}(\bm{y}_{0},\bm{q}_{0})-\bm{x}_{0})\\\
&\quad\quad+\epsilon\beta_{a}(\bm{x}_{T}-\varphi_{1}(\bm{y}_{0}+T\bm{u},\bm{q}_{0}))+\epsilon\beta_{a}(\varphi_{1}(\bm{y}_{0}+T\bm{u},\bm{q}_{0})-\bm{x}_{T}),\end{split}$
where $\beta_{a}$ is the penalty function defined in (16), $\mathcal{L}$ is
the loss function defined in Section 3.1, and $\tilde{\lambda}$ is a positive
hyperparameter. Similarly, the loss function with a quadratic penalty is
defined by
(20)
$\mathcal{L}_{quad}(\varphi,\bm{y}_{0},\bm{q}_{0},\bm{u})=\mathcal{L}(\varphi,\bm{y}_{0},\bm{q}_{0},\bm{u})+\frac{\tilde{\lambda}}{N}\sum_{j=1}^{N}\|\min\\{h(\varphi_{1}(\bm{y}_{0}+s_{j}\bm{u},\bm{q}_{0})),0\\}\|^{2},$
where we do not add the penalty term for the initial and terminal conditions,
since the quadratic penalty for them coincides with the term $\mathcal{L}_{bd}$
in the loss function $\mathcal{L}$.
### 4.3 Augmented Lagrangian method in the training process of SympOCnet
Another method to enforce the state constraint $h(\bm{x}(s))\geq 0$ is to
apply the augmented Lagrangian method in the training process. The loss
function with the Lagrangian term is defined as follows:
(21)
$\begin{split}&\mathcal{L}_{aug}(\varphi,\bm{y}_{0},\bm{q}_{0},\bm{u},\bm{\mu},\bm{\lambda}_{1},\bm{\lambda}_{2})\\\
=\,&\mathcal{L}(\varphi,\bm{y}_{0},\bm{q}_{0},\bm{u})+\frac{1}{2\rho_{1}}\sum_{j=1}^{N}\|\max\\{\bm{0},\bm{\mu}(s_{j})-\rho_{1}h(\varphi_{1}(\bm{y}_{0}+s_{j}\bm{u},\bm{q}_{0}))\\}\|^{2}\\\
&\quad+\frac{1}{2\rho_{2}}\left(\|\bm{\lambda}_{1}-\rho_{2}(\bm{x}_{0}-\varphi_{1}(\bm{y}_{0},\bm{q}_{0}))\|^{2}+\|\bm{\lambda}_{2}-\rho_{2}(\bm{x}_{T}-\varphi_{1}(\bm{y}_{0}+T\bm{u},\bm{q}_{0}))\|^{2}\right),\end{split}$
where $\bm{\mu}(s_{j})\in\mathbb{R}^{m}$,
$\bm{\lambda}_{1},\bm{\lambda}_{2}\in\mathbb{R}^{n}$ are the augmented
Lagrange multipliers for the constraints $h(\bm{x}(s_{j}))\geq 0$,
$\bm{x}(0)=\bm{x}_{0}$ and $\bm{x}(T)=\bm{x}_{T}$, respectively, and
$\rho_{1},\rho_{2}>0$ are positive hyperparameters in the augmented Lagrangian
method. The training process becomes an iterative scheme where the $k$-th
iteration contains the following two steps
* •
Train the SympNet $\varphi$ and parameters
$\bm{y}_{0},\bm{q}_{0},\bm{u}\in\mathbb{R}^{n}$ for several iterations using
the loss function
$\mathcal{L}_{aug}(\varphi,\bm{y}_{0},\bm{q}_{0},\bm{u},\bm{\mu}^{k},\bm{\lambda}_{1}^{k},\bm{\lambda}_{2}^{k})$;
* •
Update the Lagrange multiplier by
$\begin{split}\bm{\mu}^{k+1}(s_{j})&=\max\\{\bm{0},\bm{\mu}^{k}(s_{j})-\rho_{1}h(\varphi_{1}(\bm{y}_{0}+s_{j}\bm{u},\bm{q}_{0}))\\}\quad\forall
j=1,\dots,N,\\\
\bm{\lambda}_{1}^{k+1}&=\bm{\lambda}_{1}^{k}-\rho_{2}(\bm{x}_{0}-\varphi_{1}(\bm{y}_{0},\bm{q}_{0})),\\\
\bm{\lambda}_{2}^{k+1}&=\bm{\lambda}_{2}^{k}-\rho_{2}(\bm{x}_{T}-\varphi_{1}(\bm{y}_{0}+T\bm{u},\bm{q}_{0})).\end{split}$
Note that in practice, we need to tune the hyperparameters $\rho_{1}$ and
$\rho_{2}$. In our implementation, these hyperparameters are tuned
automatically using a similar strategy as in [16].
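The outer loop above can be sketched as follows. This is our own schematic, with hypothetical helper names: `loss_aug` is assumed to evaluate Eq. (21) for given multipliers, `constraint()` to return $h(\varphi_{1}(\bm{y}_{0}+s_{j}\bm{u},\bm{q}_{0}))$ at all collocation points as an $(N,m)$ tensor, and `phi1_at(t)` to evaluate the current trajectory (13) at time $t$; the fixed $\rho_{1},\rho_{2}$ stand in for the automatic tuning mentioned above.

```python
# A schematic sketch (ours) of the alternating scheme in Section 4.3.
import torch

def train_aug_lagrangian(optimizer, loss_aug, constraint, phi1_at, x0, xT,
                         rho1=1.0, rho2=1.0, T=1.0, outer=20, inner=500):
    mu = torch.zeros_like(constraint())        # multipliers for h(x(s_j)) >= 0
    lam1, lam2 = torch.zeros_like(x0), torch.zeros_like(xT)
    for _ in range(outer):
        for _ in range(inner):                 # step 1: minimize L_aug (Eq. 21)
            optimizer.zero_grad()
            loss_aug(mu, lam1, lam2).backward()
            optimizer.step()
        with torch.no_grad():                  # step 2: multiplier updates
            mu = torch.clamp(mu - rho1 * constraint(), min=0.0)
            lam1 = lam1 - rho2 * (x0 - phi1_at(0.0))
            lam2 = lam2 - rho2 * (xT - phi1_at(T))
    return mu, lam1, lam2
```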
## 5 Applications in path planning problems with obstacle and collision
avoidance
In this section, we apply our method to path planning problems with multiple
drones. We assume that each drone is represented by a ball in the physical
space $\mathbb{R}^{m}$ with radius $C_{d}$. Note that the meaning of the
notation $m$ in this section (i.e., the dimension of the physical space) is
different from $m$ in Section 4 (which is the number of state constraints). We
set the state variable to be
$\bm{x}=(\bm{x}_{1},\dots,\bm{x}_{M})\in\mathbb{R}^{Mm}$, where $M$ is the
number of drones, and each $\bm{x}_{j}\in\mathbb{R}^{m}$ denotes the position
of the center of each drone. In other words, the state variable is the
concatenation of the position variables of all drones. Similarly, the control
variable $\bm{v}$ is the concatenation of the velocity variables for all
drones, i.e., $\bm{v}$ equals $(\bm{v}_{1},\dots,\bm{v}_{M})\in U^{M}$, where
each $\bm{v}_{j}$ is the velocity of the $j$-th drone, and
$U\subseteq\mathbb{R}^{m}$ is the space for all admissible velocities for each
drone. In our numerical experiments, the control space $U$ is the ball in
$\mathbb{R}^{m}$ with zero center and radius $C_{\bm{v}}>0$, i.e., the norm of
the velocity of each drone has the upper bound $C_{\bm{v}}$.
We want to solve for the optimal trajectory with minimal energy, and hence the
running cost is set to be
$\ell(\bm{x},\bm{v})=\frac{1}{2}\|\bm{v}\|^{2}\quad\forall\bm{x}\in\mathbb{R}^{Mm},\bm{v}\in
U^{M}.$
The corresponding Hamiltonian equals
$\begin{split}H(\bm{x},\bm{p})&=\sup_{\bm{v}\in
U^{M}}\left\\{\langle\bm{v},\bm{p}\rangle-\frac{1}{2}\|\bm{v}\|^{2}\right\\}=\sum_{i=1}^{M}\sup_{\bm{v}_{i}\in
U}\left\\{\langle\bm{v}_{i},\bm{p}_{i}\rangle-\frac{1}{2}\|\bm{v}_{i}\|^{2}\right\\}\\\
&=\sum_{i=1}^{M}\begin{dcases}\frac{1}{2}\|\bm{p}_{i}\|^{2}&\text{if
}\|\bm{p}_{i}\|\leq C_{\bm{v}},\\\
C_{\bm{v}}\|\bm{p}_{i}\|-\frac{C_{\bm{v}}^{2}}{2}&\text{if
}\|\bm{p}_{i}\|>C_{\bm{v}},\end{dcases}\end{split}$
for any $\bm{p}=(\bm{p}_{1},\cdots,\bm{p}_{M})\in\mathbb{R}^{Mm}$, where each
$\bm{p}_{i}$ is the momentum variable in $\mathbb{R}^{m}$ for the $i$-th
drone.
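In code, this Hamiltonian reduces to a per-drone Huber-like function of $\|\bm{p}_{i}\|$. A short sketch (ours) for momenta stacked as a $(\text{batch}, Mm)$ tensor:

```python
# A sketch (ours) of the Hamiltonian above; C_v = 25 as in the experiments.
import torch

def hamiltonian(p, M, m, Cv=25.0):
    pn = p.reshape(-1, M, m).norm(dim=-1)            # ||p_i|| for each drone
    per_drone = torch.where(pn <= Cv,
                            0.5 * pn ** 2,           # unconstrained branch
                            Cv * pn - 0.5 * Cv ** 2) # speed-limited branch
    return per_drone.sum(dim=-1)                     # sum over the M drones
```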
To avoid obstacles and collisions among drones, we set the constraint function
$h$ to be $h=(h_{1},h_{2})$, where $h_{1}$ is for obstacle avoidance, and
$h_{2}$ is for avoiding collisions among drones. If there are $n_{o}$
obstacles, denoted by $E_{1},\dots,E_{n_{o}}$, then the function
$h_{1}\colon\mathbb{R}^{Mm}\to\mathbb{R}^{Mn_{o}}$ is defined by
(22)
$h_{1}(\bm{x}_{1},\dots,\bm{x}_{M})=(d(\bm{x}_{1}),d(\bm{x}_{2}),\dots,d(\bm{x}_{M}))\quad\forall\bm{x}_{1},\dots,\bm{x}_{M}\in\mathbb{R}^{m},$
where the function $d\colon\mathbb{R}^{m}\to\mathbb{R}^{n_{o}}$ is defined by
(23)
$d(\bm{x})=(d_{1}(\bm{x}),\dots,d_{n_{o}}(\bm{x}))\quad\forall\bm{x}\in\mathbb{R}^{m},$
and each function $d_{j}$ is defined such that $d_{j}(\bm{x})<0$ implies the
collision of the drone whose center is at the position $\bm{x}$ with the
$j$-th obstacle $E_{j}$. For instance, $d_{j}$ can be defined using the signed
distance function to $E_{j}$. The definition of the function $d_{j}$ depends
on the shape of the constraint set $E_{j}$, and hence we do not specify it now
but postpone its definition to each example in later sections.
The function $h_{2}\colon\mathbb{R}^{Mm}\to\mathbb{R}^{M(M-1)/2}$ is the
constraint function for collision avoidance; each component of the constraint
function gives a constraint for avoiding collision between a pair of drones.
Since each drone can be seen as a ball centered at a point in
$\mathbb{R}^{m}$, two drones collide if and only if the distance between their
centers is less than the sum of their radii. Hence, we set the $k$-th
component of $h_{2}$ to be
(24)
$(h_{2})_{k}(\bm{x}_{1},\dots,\bm{x}_{M})=\|\bm{x}_{i}-\bm{x}_{j}\|^{2}-(2C_{d})^{2}\quad\forall\bm{x}_{1},\dots,\bm{x}_{M}\in\mathbb{R}^{m},$
where $C_{d}$ is the radius of a drone, and $k=i+(j-1)(j-2)/2$ for any $1\leq
i<j\leq M$ is the corresponding constraint index for the pair $(i,j)$. Note
that the constraints may be duplicated to simplify the implementation. As a
result, $(h_{2})_{k}(\bm{x}_{1},\dots,\bm{x}_{M})<0$ is equivalent to the
collision between the $i$-th and $j$-th drones. In other words, such defined
constraint function $h_{2}$ can avoid collisions among drones.
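The pairwise structure of $h_{2}$ is conveniently vectorized. The sketch below (ours) enumerates all pairs $i<j$ at once for states stacked as a $(\text{batch},M,m)$ tensor:

```python
# A sketch (ours) of the collision constraint h2 in Eq. (24).
import torch

def h2(x, Cd):
    """x: (batch, M, m); returns (batch, M(M-1)/2), negative iff a collision."""
    M = x.shape[1]
    i, j = torch.triu_indices(M, M, offset=1)     # all pairs i < j
    diff = x[:, i, :] - x[:, j, :]
    return (diff ** 2).sum(dim=-1) - (2.0 * Cd) ** 2
```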
In the following sections, we present several numerical results. In Section
5.1, we compare the performance of different loss functions $\mathcal{L}$,
$\mathcal{L}_{log}$, $\mathcal{L}_{quad}$ and $\mathcal{L}_{aug}$ to choose a
suitable loss function for the remaining experiments. In Section 5.2, we
present an offline-online training strategy for an optimal control problem
whose initial conditions are not known in the training process of the SympNet.
This experiment shows the generalizability of SympOCnet. Then, in Section 5.3,
we demonstrate the performance of SympOCNet on a high-dimensional problem
reported in [79]. From the results in Section 5.3, we observe that SympOCnet
can handle the path planning problem whose state space has dimension $512$,
and hence the method can potentially mitigate the CoD. In Section 5.4, we
apply SympOCnet to a swarm path planning problem in [74], and demonstrate good
performance and efficiency in path planning problems, where the agents move in
a three-dimensional space.
In Sections 5.1 and 5.2 we use a $6$-layer SympNet with $60$ hidden neurons in
each layer and ReLU activation function. In Sections 5.3 and 5.4 we use a
$6$-layer SympNet with $200$ hidden neurons in each layer and ReLU activation
function. In all these experiments, the parameter $\lambda$ in (12) is set to
be $600$, and the parameter $\tilde{\lambda}$ in (19) and (20) is set to be
$200$. By default, the time horizon $T$ is set to be $1$, and the speed limit
$C_{\bm{v}}$ is $25$. All the numerical experiments are run using a shared
NVIDIA GeForce RTX 3090 GPU and a shared NVIDIA RTX A6000 GPU. In each
experiment, we only show figures for a subset of the testing cases. More
numerical results and code are provided at
https://github.com/zzhang222/SympOCNet. Video animations of these examples are
available online at
https://github.com/zzhang222/SympOCNet/tree/main/saved_results_animation.
### 5.1 Comparison among different loss functions
In this section, we consider the path planning problem with two obstacles and
four drones in $\mathbb{R}^{2}$. Each obstacle $E_{j}\subset\mathbb{R}^{2}$ is
a convex set containing points whose distance to a line segment (denoted by
$l_{j}\subset\mathbb{R}^{2}$) is less than a constant $C_{o}$. The initial
positions of the four drones are $(-2,-2)$, $(2,-2)$, $(2,2)$, $(-2,2)$, and
their terminal positions are $(2,2),(-2,2),(-2,-2),(2,-2)$. Therefore, the
optimal control problem (14) is an $8$-dimensional problem, whose initial
state $\bm{x}_{0}$ is $(-2,-2,2,-2,2,2,-2,2)$ and terminal state $\bm{x}_{T}$
is $(2,2,-2,2,-2,-2,2,-2)$. The obstacles and the initial positions of the
four drones are shown in Figure 2.
Figure 2: Initial positions of drones and obstacles. The four drones located
at their initial positions are represented by the four colored circles. The
two obstacles are represented by round cornered rectangles in the middle.
Figure 3: Output trajectories with loss $\mathcal{L}_{aug}$ and pseudospectral
method. The left three figures show the trajectories of our proposed SympOCnet
method at different time $t=\frac{1}{3},\frac{2}{3},1$, and the right three
figures show the outputs of the post-process (pseudospectral method) at
different time $t=\frac{1}{3},\frac{2}{3},1$. In each figure, the four colored
circles show the positions of the four drones at time $t$, and the curves
connected to the circles show the trajectories before time $t$.
To avoid the collisions of drones with obstacles, we define the first part of
the constraint function $h$ by (22), where each component $d_{j}$ of the
function $d$ in (23) is defined by
$d_{j}(\bm{x})=\min_{\bm{y}\in
l_{j}}\|\bm{x}-\bm{y}\|^{2}-(C_{o}+C_{d})^{2}\quad\forall\bm{x}\in\mathbb{R}^{2}.$
The term $\min_{\bm{y}\in l_{j}}\|\bm{x}-\bm{y}\|^{2}$ in $d_{j}$ gives the
squared distance between $\bm{x}$ and the line segment $l_{j}$. Here, we use
the squared distance instead of the distance function to improve the smoothness
of the constraint function $h$. Since the obstacle $E_{j}$ contains all points
whose distance to the line segment $l_{j}$ is less than $C_{o}$, a drone
collides with $E_{j}$ if and only if the distance between its center and the
line segment $l_{j}$ is less than $C_{o}+C_{d}$. Therefore, the function
$d_{j}$ defined above provides a constraint that avoids collisions of drones
with the $j$-th obstacle $E_{j}$.
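The squared point-to-segment distance in $d_{j}$ has a standard closed form via projection onto the segment. A short sketch (ours, taking the endpoints $A,B$ of $l_{j}$ as an assumed parameterization):

```python
# A sketch (ours) of d_j: squared distance from x to the segment [A, B],
# minus (C_o + C_d)^2; negative values indicate a collision with obstacle j.
import torch

def d_seg(x, A, B, Co, Cd):
    AB = B - A
    t = ((x - A) @ AB) / (AB @ AB)        # parameter of the projection onto AB
    t = t.clamp(0.0, 1.0).unsqueeze(-1)   # clip to stay on the segment
    closest = A + t * AB
    return ((x - closest) ** 2).sum(dim=-1) - (Co + Cd) ** 2
```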
Under this problem setup, we compare four different loss functions, i.e., the
function $\mathcal{L}$ defined in (12), $\mathcal{L}_{log}$ in (19),
$\mathcal{L}_{quad}$ in (20), and $\mathcal{L}_{aug}$ in (21). With each loss
function, we train the neural network for $100,000$ iterations, and then run
the pseudospectral method to improve the results. By comparing the performance
of the four loss functions in this example, we choose a better loss function
for the other experiments in the remainder of this paper.
The outputs of the four loss functions are shown in Table 1, where the
hyperparameters in the penalty term $\epsilon\beta_{a}(h(\bm{x}(s)))$ are set
to be $a=0.004$ and $\epsilon=0.0004$. Results for different loss functions
are shown in different columns. For each loss function, we repeat the
experiment $10$ times with different random seeds, and then average to
obtain the statistics in the table. We observe convergence of the
pseudospectral method in all repeated experiments, which shows the robustness
of our SympOCnet method in providing optimal or suboptimal solutions in this
example. The second to fourth lines in Table 1 show the minimal constraint
value, the cost value in the optimal control problem, and the running time of
the training process of our proposed SympOCnet method. The last four lines in
Table 1 show the minimal constraint value, the cost value in the optimal
control problem, the number of iterations, and the running time of the post-
process using pseudospectral method.
loss function | $\mathcal{L}$ | $\mathcal{L}_{log}$ | $\mathcal{L}_{quad}$ | $\mathcal{L}_{aug}$
---|---|---|---|---
SympOCnet | min constraint | -0.834 | -0.00704 | -0.0824 | -4.84E-04
cost | 79.62 | 94.15 | 76.50 | 93.98
running time (s) | 1229 | 1776 | 1361 | 1708
PS | min constraint | -4.70E-09 | -9.87E-12 | -2.04E-12 | -8.02E-13
cost | 73.47 | 91.00 | 76.95 | 80.91
# iterations | 49 | 19 | 17 | 23
running time (s) | 104 | 44 | 38 | 47
Table 1: The comparison among results of four loss functions with the SympOCnet method and pseudospectral post-process. We show the minimal constraint value $\min_{s}h(\bm{x}(s))$, the cost $\int_{s=0}^{T}\frac{||\bm{v}(s)||^{2}}{2}ds$, and the running time of the SympOCnet, as well as those statistics after the post-process. The loss function $\mathcal{L}_{aug}$ provides the solution with the least amount of constraint violation and reasonable cost values. It can be seen that in all cases, the SympOCnet provides a good initialization for the pseudospectral method.

minimal constraint | cost | number of iterations | running time (s)
---|---|---|---
-1.00E+00 | 64.00 | 5 | 44
Table 2: The result of the pseudospectral method with the linear
initialization. We show the minimal constraint value $\min_{s}h(\bm{x}(s))$,
the cost $\int_{s=0}^{T}\frac{||\bm{v}(s)||^{2}}{2}ds$, the number of
iterations, and the running time of the pseudospectral method with the linear
initialization (i.e., the initial trajectory is an affine function of the time
variable, and the initial velocity is a constant function). The minimal
constraint value is $-1.00$, which shows that the constraint is not satisfied,
and the pseudospectral method with the linear initialization does not converge
to a feasible solution.
Comparing the results of the four loss functions, we observe that the penalty
term and the augmented Lagrangian term in the loss function indeed improve the
feasibility of the obtained trajectory. The constraints are better satisfied
using the loss $\mathcal{L}_{aug}$ in (21), while the corresponding cost value
may be slightly higher than the results of $\mathcal{L}_{quad}$. In the other
experiments, the problems are more complicated and the constraints are harder
to satisfy. Therefore, we use the loss function $\mathcal{L}_{aug}$ for the
experiments in the remainder of the paper to help enforce the state
constraints. For simpler problems, other loss functions may be preferable for
faster running time; how to choose the loss function adaptively is a possible
future direction. Although the augmented Lagrangian and log penalty loss
functions are slightly slower than the other two, it takes less than $30$
minutes to run $100,000$ training iterations, which shows the efficiency of
SympOCnet.
To show the improvement using our proposed SympOCnet method as an
initialization of the pseudospectral method, we solve the same problem using
the pseudospectral method with the linear initialization, i.e., we set the
initial guess of the pseudospectral method to be
$\bm{v}(t)=\frac{\bm{x}_{T}-\bm{x}_{0}}{T}$ and
$\bm{x}(t)=\frac{(\bm{x}_{T}-\bm{x}_{0})t}{T}$ for any $t\in[0,T]$. The
minimal constraint value, the cost value in the optimal control problem, the
number of iterations, and the running time are shown in Table 2. The minimal
constraint value in Table 2 is $-1.00$, which shows that the pseudospectral
method with the linear initialization does not converge to a feasible
solution. Therefore, compared with the linear initialization, our SympOCnet
method, no matter which loss function in $\mathcal{L}$, $\mathcal{L}_{log}$,
$\mathcal{L}_{quad}$, and $\mathcal{L}_{aug}$ we choose, provides a better
initial guess for the pseudospectral method.
Moreover, we show the obtained results in one trial using SympOCnet with loss
$\mathcal{L}_{aug}$ and the post-process using the pseudospectral method in
Figure 3. The left three figures show the trajectories of our proposed
SympOCnet method at different times, while the right three figures correspond
to the final results from the pseudospectral method at different times. In
each figure, the four colored circles show the positions of the four drones at
the specific time, and the curves connected to the circles show the
trajectories before the current time. We observe that the shape of the
trajectory given by our SympOCnet method does not change much after the
post-processing, which shows that SympOCnet provides a suboptimal solution to
the optimal control problem in this example.
### 5.2 Generalizability and offline-online training
Figure 4: Initial positions of drones, obstacles, and training data. In the
six figures, we show the initial positions of the four drones (represented by
colored circles), the positions of the two obstacles (represented by the round
cornered rectangles), and the randomly sampled training data (represented by
the yellow dots) in the six testing cases in Section 5.2.
Figure 5: Output trajectories of offline-online training and post-process. The
first and second rows correspond to the first and fourth case, respectively.
The left three columns are the output from L-BFGS, and the right three columns
are the output from the pseudospectral method. In each figure, the four
colored circles show the positions of the four drones at time $t$, and the
curves connected to the circles show the trajectories before time $t$.
In this section, we consider the same problem setup as in Section 5.1. To test
the generalizability of SympOCnet, we assume that the exact initial position
is unknown in the training process. Using multiple initial positions sampled
from a uniform distribution centered at the initial position in Section 5.1,
we train one SympNet $\varphi$ and several variable tuples
$\\{(\bm{y}_{0k},\bm{u}_{k},\bm{q}_{0k})\\}_{k}$, where the $k$-th trainable
tuple $(\bm{y}_{0k},\bm{u}_{k},\bm{q}_{0k})$ corresponds to the $k$-th sampled
initial position. We refer to this training process of the SympNet and all
trainable variables as “offline training”. After $100,000$ steps of offline
training, the exact initial positions are provided. To obtain an improved
solution in reasonable running time using the trained SympNet $\varphi$ as
well as the new information, we fix the SympNet $\varphi$ and apply the L-BFGS
method to train the variables $(\bm{y}_{0},\bm{u},\bm{q}_{0})$ with the exact
initial and terminal positions. We call this training process using L-BFGS
method the “online training”. The optimal trajectory is then computed using
(13), where $\varphi_{1}$ comes from the SympNet $\varphi$ trained in the
offline training, and the parameters $(\bm{y}_{0},\bm{u},\bm{q}_{0})$ are
outputted from the online training. After the online training, we also apply
the pseudospectral method to improve the quality of the results.
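A hedged PyTorch-style sketch of this online step (the network, loss, and dimensions below are placeholder stand-ins, not the paper's actual components): the SympNet is frozen and L-BFGS fits only $(\bm{y}_{0},\bm{u},\bm{q}_{0})$.

```python
import torch

# Hypothetical stand-ins (our assumptions, not the authors' code):
# a frozen "SympNet" and a placeholder loss in place of Eq. (13).
sympnet = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.Tanh(),
                              torch.nn.Linear(16, 16))
for p in sympnet.parameters():
    p.requires_grad_(False)                 # offline-trained network stays fixed

y0 = torch.zeros(16, requires_grad=True)    # trainable tuple (y0, u, q0)
u = torch.zeros(16, requires_grad=True)
q0 = torch.zeros(16, requires_grad=True)

x0_exact = torch.ones(16)                   # newly revealed exact initial state
opt = torch.optim.LBFGS([y0, u, q0], max_iter=1000)

def closure():
    opt.zero_grad()
    # placeholder for the trajectory reconstruction and the true loss
    loss = ((sympnet(y0 + u + q0) - x0_exact) ** 2).sum()
    loss.backward()
    return loss

opt.step(closure)
```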
To generate training data for the initial positions, we sample $100$ points
from the uniform distribution in $[x-1,x+1]\times[y-1,y+1]$ where $(x,y)$ is
the initial position of a drone in Section 5.1. To test the generalizability
of our method, we have six cases. In the first three cases, the exact initial
positions are randomly sampled from the same distribution used to generate the
training data. In other words, the exact initial positions lie within the
area covered by the training data. In the last three cases, we set the initial conditions
for the state variable to be $(-2,-4,2,-4,2,4,-2,4)$, $(-4,-4,0,-4,0,4,-4,4)$,
and $(0,-4,0,-2,0,2,0,4)$, respectively. These three initial positions are not
in the range of the training data. Therefore, the results with these three
initial positions show the generalizability of our SympOCnet method when the
test data is not covered by the area of the training data. The initial
positions of the drones, the positions of the obstacles, and the training data
are shown in Figure 4, where the four colored circles denote the drones, the
two round cornered rectangles denote the obstacles, and the yellow dots denote
the sampled training data.
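For concreteness, the sampling described above can be sketched as follows (NumPy assumed; the nominal positions below are illustrative placeholders for the Section 5.1 values):

```python
import numpy as np

rng = np.random.default_rng(0)
# placeholder nominal initial positions (x, y) of the four drones
nominal = np.array([[-2.0, -4.0], [2.0, -4.0], [2.0, 4.0], [-2.0, 4.0]])
# 100 samples per drone, uniform in [x-1, x+1] x [y-1, y+1]
samples = nominal[None, :, :] + rng.uniform(-1.0, 1.0, size=(100, 4, 2))
```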
In our experiments, we found that the performance is improved if we increase
the dimension of the problem by adding some latent variables. We increase the
dimension of the state space from $8$ to $16$, and define the Hamiltonian on
the larger space by
$H(\bm{x}_{1},\bm{x}_{2},\bm{p}_{1},\bm{p}_{2})=H_{\epsilon,a}(\bm{x}_{1},\bm{p}_{1})+\frac{1}{2}\|\bm{p}_{2}\|^{2}\quad\forall\bm{x}_{1},\bm{x}_{2},\bm{p}_{1},\bm{p}_{2}\in\mathbb{R}^{8},$
where $(\bm{x}_{1},\bm{p}_{1})$ are the variables in the original problem,
$(\bm{x}_{2},\bm{p}_{2})$ are the latent variables we add, and
$H_{\epsilon,a}$ is the function defined in (18). The minimal constraint, cost
in the optimal control problem, and running time of the L-BFGS method and
pseudospectral method are shown in Table 3. As in Section 5.1, we run $10$
repeated experiments and put their average values in the table. In the online
training, we run $1,000$ L-BFGS iterations to train the parameters of all the
six cases simultaneously. Note that we can train them all at once, since they
share the same SympNet, and the loss function is the summation of the loss
functions in all cases. Then, we run the pseudospectral method on each case
until it converges. The statistics for the L-BFGS training and the
pseudospectral method for the six cases are shown in each line of Table 3. The
minimal constraint value of the L-BFGS method is not as good as the results in
Section 5.1, which is expected, since the initial positions are not known when
the SympNet is trained. However, from the minimal constraint values of the
pseudospectral post-process in the six cases, we conclude that our offline-
online training still provides a good initialization for the pseudospectral
method, which shows a good generalizability of our proposed method.
| minimal constraint | cost | time (s)
---|---|---|---
L-BFGS | -1.27E-04 | 231.44 | 158
PS (case 1) | -1.97E-11 | 102.00 | 59
PS (case 2) | -1.74E-11 | 94.59 | 77
PS (case 3) | -3.36E-11 | 90.95 | 79
PS (case 4) | -6.00E-09 | 141.22 | 65
PS (case 5) | -7.19E-08 | 140.47 | 111
PS (case 6) | 1.64E-13 | 112.41 | 67
Table 3: The results given by the offline-online training (L-BFGS) and the post-process (pseudospectral method) in six generalizability tests. The offline-online training using the L-BFGS method provides a good initialization for the pseudospectral method. With this initialization, the pseudospectral method in the post-process improves the feasibility and optimality of the solution in all six cases.
| minimal constraint | cost | time (s)
---|---|---|---
PS (linear initialization, case 1) | -7.71E-09 | 85.19 | 134
PS (linear initialization, case 2) | 2.95E-07 | 83.98 | 173
PS (linear initialization, case 3) | -1.94E-10 | 79.67 | 178
PS (linear initialization, case 4) | -5.55E-16 | 80.44 | 157
PS (linear initialization, case 5) | -8.08E-14 | 136.16 | 147
PS (linear initialization, case 6) | -1.00 | 60.00 | 28
Table 4: The results given by the pseudospectral method with the linear
initialization in six generalizability tests. Although the pseudospectral
method with the linear initialization performs slightly better than our
proposed method in the first five cases, it violates the constraint in the
last case. Therefore, our proposed method provides a more robust
initialization for the pseudospectral method compared with the linear
initialization.
We also compare our results with the pseudospectral method whose
initialization is given by the linear interpolation (as described in Section
5.1) in the six generalizability tests. The minimal constraint value, cost
value in the optimal control problem, and running time of the pseudospectral
method with the linear initialization are shown in Table 4. Comparing Table 3
with Table 4, we see that although our proposed method has slightly larger
cost values in the first five cases, the pseudospectral method with the linear
initialization does not converge to a feasible solution in the last case.
Therefore, our proposed method provides a more robust initialization for the
pseudospectral method compared with the linear initialization.
Moreover, we plot the output trajectories of one trial of the first and fourth
cases computed using our proposed method in Figure 5. The left three columns
show the output trajectory given by the offline-online training, and the right
three columns are the output trajectory from the post-process. In each figure,
the yellow dots are the initial positions of the sampled training data.
As in Section 5.1, the four colored circles show the current positions
of the four drones, and the curves connected to them illustrate the
trajectories before the current time. The first row in Figure 5 is for the
first case where the exact initial positions are in the area of the sampled
training data. The second row is for the fourth case where the exact initial
positions are outside the area of the training data. We observe that the
offline-online training process still provides feasible trajectories in both
cases, and the post-process then improves the smoothness of the trajectories,
yielding solutions closer to optimal. Therefore, our proposed SympOCnet
method generalizes to unseen data that are close to the training data.
### 5.3 Multiple drones in a two-dimensional room
In this section, we consider the path planning problem for multiple drones in
a room, which is inspired by [79]. To be specific, we consider $M$ drones
moving in a room $[-C_{r},C_{r}]^{2}$ for some $C_{r}>0$. In our experiments,
we set the constant $C_{r}=5$ (i.e., the room has size $10\times 10$). We need
to avoid the collisions among the drones as well as the collisions between
drones and the walls. The constraint function $h=(h_{1},h_{2})$ contains two
parts: the function $h_{2}$ is defined by (24), and the function $h_{1}$ is
the obstacle constraint provided by the four walls. Hence, we set $h_{1}$ in
the form of (22), where the function $d$ is defined by
$d(\bm{x})=(x_{1}+C_{r},C_{r}-x_{1},x_{2}+C_{r},C_{r}-x_{2})$ for any
$\bm{x}=(x_{1},x_{2})\in\mathbb{R}^{2}$. The initial positions of the drones
are near the boundary of the room, and the terminal positions are the opposite
locations, i.e., we set $\bm{x}_{T}=-\bm{x}_{0}$. This is a high-dimensional
example (the dimension of the state space is $n=2M$, which ranges from $64$ to
$512$ in our experiments), and hence we apply our SympOCnet method without the
post-process.
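A minimal sketch of the wall-constraint function $d$ defined above (each component is nonnegative exactly when the corresponding wall is respected):

```python
import numpy as np

def d_walls(x, Cr=5.0):
    """d(x) = (x1 + Cr, Cr - x1, x2 + Cr, Cr - x2) for x = (x1, x2)."""
    x1, x2 = x
    return np.array([x1 + Cr, Cr - x1, x2 + Cr, Cr - x2])
```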
First, we set the drone radius $C_{d}$ to be $0.3$. We run $10$ repeated
experiments (with $100,000$ training iterations in each experiment) with drone
numbers $M=32$ and $64$ to test the robustness of our SympOCnet method. The
results are shown in Table 5. In each trial, we compute the minimal normalized
distance of any pair of drones at any time grid, i.e., we compute
$\mathcal{D}$ defined by
$\mathcal{D}=\frac{1}{2C_{d}}\min_{1\leq i<j\leq M}\min_{1\leq k\leq
N}\|\bm{x}_{i}(t_{k})-\bm{x}_{j}(t_{k})\|,$
where $\bm{x}_{i}(t_{k})$ denotes the center position for the $i$-th drone at
time $t_{k}$. Then, we compute the average and standard deviation of these
minimal normalized distances $\mathcal{D}$ in the $10$ trials, which are shown
in the second and third columns in Table 5. We also compute the scaled cost in
the optimal control problem, which is defined by
$\frac{1}{M}\int_{0}^{T}\frac{1}{2}\|\dot{\bm{x}}(t)\|^{2}dt$, where
$\bm{x}(\cdot)$ is the output trajectory for the state variable $\bm{x}$ in
the optimal control problem. The average of the scaled cost is shown in the
fourth column in Table 5. In the last column of Table 5, we show the averaged
running time of the training process. From the results, we observe that our
SympOCnet method provides feasible results in reasonable running time for this
high-dimensional problem. Note that a collision between two drones occurs
whenever the minimal normalized distance is less than $1$. Therefore,
collisions generally do not occur for $32$ drones, but they may occur for
$64$ or more drones. To provide a more feasible solution, in the following
experiments, we put a safe zone outside each drone by setting $C_{d}$
slightly larger than the actual drone radius.
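A sketch of how $\mathcal{D}$ can be computed from a trajectory array (the array shape and the drone radius below are assumptions for illustration):

```python
import numpy as np

def min_normalized_distance(traj, Cd=0.3):
    """traj has assumed shape (N, M, 2): N time grids, M drones, 2D positions."""
    N, M, _ = traj.shape
    best = np.inf
    for k in range(N):
        diff = traj[k, :, None, :] - traj[k, None, :, :]   # pairwise offsets
        dist = np.linalg.norm(diff, axis=-1)               # (M, M) distance matrix
        iu = np.triu_indices(M, k=1)                       # pairs with i < j
        best = min(best, dist[iu].min())
    return best / (2.0 * Cd)                               # D < 1 means a collision
```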
# drones | $\mathbb{E}(\mathcal{D})$ | std($\mathcal{D}$) | scaled cost | time (s)
---|---|---|---|---
32 | 1.001 | 0.00390 | 85.84 | 1905
64 | 0.986 | 0.00347 | 100.43 | 2214
Table 5: The results for the path planning problem with $32$ and $64$ drones
in a room. We show the expected value and standard deviation of the minimal
normalized distances. The collision does not happen for 32 drones, but it may
happen for 64 or more drones. Therefore, we need to put a safe zone near each
drone to avoid collision when the number of drones is large. The computation
time increases only by 309 seconds when scaling from 32 to 64 drones.
We also tested our SympOCnet method on this path planning problem with $128$
drones (whose radius is $0.075$) and four obstacles. The initial and terminal
positions are similar to the cases of $32$ or $64$ drones (see Figure 6 (a)
and (d)). The four obstacles are balls inside the room, which are represented
by the black circles in Figure 6. The constraint function $d_{j}$ in (23)
corresponding to the obstacle $E_{j}$ is defined by
$d_{j}(\bm{x})=\|\bm{z}_{j}-\bm{x}\|^{2}-(C_{o}+C_{d})^{2}$ for any
$\bm{x}\in\mathbb{R}^{n}$, where $\bm{z}_{j}\in\mathbb{R}^{2}$ and $C_{o}>0$
are the center and radius of the obstacle $E_{j}$, respectively. The results
are shown in Figure 6. Each figure shows the current positions (represented by
colored circles) and the previous trajectories (represented by grey curves) of
the drones at different time $t=0,\frac{1}{3},\frac{2}{3},1$. To provide a
feasible solution, we set $C_{d}$ in the constraint function to be $0.1$
instead of $0.075$. It takes $4161$ seconds to finish $100,000$ training
iterations. From the numerical results, we found that there are no collisions
between obstacles and drones, and hence SympOCnet performs well and
efficiently in this high-dimensional problem.
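The ball-obstacle constraint above admits a one-line sketch (default values illustrative): $d_{j}(\bm{x})\geq 0$ keeps the drone center at least $C_{o}+C_{d}$ away from the obstacle center.

```python
import numpy as np

def d_ball(x, z_j, Co, Cd=0.1):
    """d_j(x) = ||z_j - x||^2 - (Co + Cd)^2; nonnegative means no collision."""
    return float(np.dot(z_j - x, z_j - x) - (Co + Cd) ** 2)
```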
[Figure 6 panels: $t=0$, $t=\frac{1}{3}$, $t=\frac{2}{3}$, $t=1$]
Figure 6: Path planning of 128 drones with four obstacles on the plane. We
plot the predicted positions of 128 drones at time
$t=0,\frac{1}{3},\frac{2}{3},1$. The four black circles represent the four
obstacles, and the colored circles represent the drones. The paths from the
initial conditions to the current positions of all drones are plotted as gray
lines. There are no collisions in the three clipped snapshots, and the planned
trajectories are all inside the $10\times 10$ square.
Then, we tested our proposed method on a higher dimensional problem. We set
the drone number $M=256$, and hence the dimension of the state space is
$n=2M=512$. We solve this path planning problem with $256$ drones (whose
radius is $0.09$) in the two-dimensional room $[-5,5]^{2}$. The initial
positions are shown in Figure 7 (a), and the terminal condition is
$\bm{x}_{T}=-\bm{x}_{0}$. The results are shown in Figure 7. To provide a
feasible solution, we add a safe zone and set $C_{d}$ in the state constraint
to be $0.1$ instead of $0.09$. From this result, we observe no collisions
among the drones. It takes $5481$ seconds to obtain this feasible solution in
$100,000$ training iterations. Therefore, SympOCnet is able to efficiently
solve this $512$-dimensional path planning problem.
[Figure 7 panels: $t=0$, $t=\frac{1}{3}$, $t=\frac{2}{3}$, $t=1$]
Figure 7: Path planning of 256 drones on the plane. We plot the predicted
positions of 256 drones at time $t=0,\frac{1}{3},\frac{2}{3},1$. The drones
are represented by colored circles. The paths from the initial positions to
the current positions of all drones are plotted as gray lines. There are no
collisions in the three clipped snapshots, and the planned trajectories are
inside the $10\times 10$ square.
### 5.4 Multiple drones with obstacle avoidance in a three-dimensional space
[Figure 8 panels: $t=\frac{1}{3}$, $t=\frac{2}{3}$, $t=1$]
Figure 8: Path planning of 100 drones in a 3d space. We plot the predicted
positions of 100 drones at time $t=\frac{1}{3},\frac{2}{3},1$. The paths from
the initial positions to the current positions of all drones are plotted as
colored lines. The destinations are marked as red crosses. The drones reach
their destinations without any collision.
We consider the three-dimensional swarm path planning example in [74]. To be
specific, we consider $M=100$ drones with radius $0.18$. Their goal is to
avoid two obstacles which are located between their initial positions and
terminal positions, as illustrated in Figure 8. The total dimension of the
state space is $n=3M=300$. The $j$-th obstacle $E_{j}$ is defined to be the
rectangular set
$[C^{j}_{11},C^{j}_{12}]\times[C^{j}_{21},C^{j}_{22}]\times[C^{j}_{31},C^{j}_{32}]$,
where $C^{j}_{ik}$ is a constant scalar. We set the constraint function
$h=(h_{1},h_{2})$ in the same way as described before, where the function
$d_{j}$ in (23) is defined by
$\begin{split}&d_{j}(\bm{x})=\max\\{C^{j}_{i1}-C_{d}-x_{i},x_{i}-C^{j}_{i2}-C_{d}\colon
i=1,2,3\\},\end{split}$
for each $j=1,2$ and $\bm{x}=(x_{1},x_{2},x_{3})\in\mathbb{R}^{3}$. Note that
$d_{j}(\bm{x})\leq 0$ if and only if
$x_{i}\in[C^{j}_{i1}-C_{d},C^{j}_{i2}+C_{d}]$ for each $i=1,2,3$, which holds
whenever the ball centered at $\bm{x}$ with radius $C_{d}$ intersects the
obstacle $E_{j}$. Therefore, the constraint $d_{j}(\bm{x})\geq 0$ forces the
drones to avoid collisions with the obstacle $E_{j}$. In our experiment, we
set
$\begin{split}(C^{1}_{11},C^{1}_{12},C^{1}_{21},C^{1}_{22},C^{1}_{31},C^{1}_{32})&=(-1.8,1.8,-0.3,0.3,0.2,6.8),\\\
(C^{2}_{11},C^{2}_{12},C^{2}_{21},C^{2}_{22},C^{2}_{31},C^{2}_{32})&=(2.2,3.8,-0.8,0.8,0.2,3.8).\end{split}$
In other words, the constraint sets are
$[-1.8,1.8]\times[-0.3,0.3]\times[0.2,6.8]$ and
$[2.2,3.8]\times[-0.8,0.8]\times[0.2,3.8]$. We apply our proposed SympOCnet
method to solve this $300$-dimensional problem, and plot the result in Figure
8. Due to the high dimensionality, we do not apply the post-process. To
provide a feasible solution, we set $C_{d}$ in the state constraint to be
$0.2$ instead of $0.18$. From the numerical results, we observe no collisions
among the drones and the obstacles. The minimal distance between every pair of
drones is $0.3994$, which is larger than twice the radius. It takes $3633$
seconds to finish $150,000$ training iterations. Therefore, our proposed
SympOCnet method provides a feasible solution to this high-dimensional swarm
path planning problem in reasonable time.
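For reference, a sketch of the rectangular-obstacle constraint $d_{j}$ used in this section, instantiated with the first obstacle's constants (with $C_{d}=0.2$, the safe-zone value above): the max over the six face terms is nonpositive exactly when $\bm{x}$ lies in the box inflated by $C_{d}$, so requiring $d_{j}(\bm{x})\geq 0$ keeps drones clear.

```python
import numpy as np

C1 = np.array([[-1.8, 1.8], [-0.3, 0.3], [0.2, 6.8]])   # rows are [C_i1, C_i2]

def d_box(x, C=C1, Cd=0.2):
    """max over the six face terms; d_j(x) >= 0 keeps the drone clear."""
    lo = C[:, 0] - Cd - x          # C_i1 - Cd - x_i
    hi = x - C[:, 1] - Cd          # x_i - C_i2 - Cd
    return float(max(np.max(lo), np.max(hi)))
```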
## 6 Summary
We have proposed a novel SympOCnet method for efficiently solving high-
dimensional optimal control problems with state constraints. We applied
SympOCnet to several multi-agent simultaneous path planning problems with
obstacle avoidance. The numerical results show SympOCnet’s ability to solve
high-dimensional problems efficiently with dimension more than $500$. These
first results reveal the potential of SympOCnet for solving high-dimensional
optimal control problems in real time. In future work, we plan to explore
combinations with the DeepONet architecture [64] to avoid
any training cost during inference and to endow the predictive system with
uncertainties, such as uncertain initial and terminal positions in path
planning problems.
## Acknowledgement
The simulations were run on GeForce RTX 3090 and RTX A6000 GPU donated to us
by NVIDIA.
## References
* [1] M. Akian, R. Bapat, and S. Gaubert, Max-plus algebra, Handbook of linear algebra, 39 (2006).
* [2] M. Akian, S. Gaubert, and A. Lakhoua, The max-plus finite element method for solving deterministic optimal control problems: basic properties and convergence analysis, SIAM Journal on Control and Optimization, 47 (2008), pp. 817–848.
* [3] A. Alla, M. Falcone, and L. Saluzzi, An efficient DP algorithm on a tree-structure for finite horizon optimal control problems, SIAM Journal on Scientific Computing, 41 (2019), pp. A2384–A2406.
* [4] A. Alla, M. Falcone, and S. Volkwein, Error analysis for POD approximations of infinite horizon problems via the dynamic programming approach, SIAM Journal on Control and Optimization, 55 (2017), pp. 3091–3115.
* [5] A. Bachouch, C. Huré, N. Langrené, and H. Pham, Deep neural networks algorithms for stochastic control problems on finite horizon: numerical applications, arXiv preprint arXiv:1812.05916, (2018).
* [6] S. Bansal and C. J. Tomlin, Deepreach: A deep learning approach to high-dimensional reachability, in 2021 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2021, pp. 1817–1824.
* [7] R. E. Bellman, Adaptive control processes: a guided tour, Princeton university press, 1961.
* [8] D. P. Bertsekas, Reinforcement learning and optimal control, Athena Scientific, Belmont, Massachusetts, (2019).
* [9] O. Bokanowski, N. Gammoudi, and H. Zidani, Optimistic Planning Algorithms For State-Constrained Optimal Control Problems. working paper or preprint, July 2021, https://hal.archives-ouvertes.fr/hal-03283075.
* [10] O. Bokanowski, J. Garcke, M. Griebel, and I. Klompmaker, An adaptive sparse grid semi-Lagrangian scheme for first order Hamilton-Jacobi Bellman equations, Journal of Scientific Computing, 55 (2013), pp. 575–605.
* [11] M. Chen, J. F. Fisac, S. Sastry, and C. J. Tomlin, Safe sequential path planning of multi-vehicle systems via double-obstacle Hamilton-Jacobi-Isaacs variational inequality, in 2015 European Control Conference (ECC), IEEE, 2015, pp. 3304–3309.
* [12] M. Chen, Q. Hu, J. F. Fisac, K. Akametalu, C. Mackin, and C. J. Tomlin, Reachability-based safety and goal satisfaction of unmanned aerial platoons on air highways, Journal of Guidance, Control, and Dynamics, 40 (2017), pp. 1360–1373.
* [13] M. Chen and C. J. Tomlin, Exact and efficient Hamilton-Jacobi reachability for decoupled systems, in 2015 54th IEEE Conference on Decision and Control (CDC), IEEE, 2015, pp. 1297–1303.
* [14] P. Chen, J. Darbon, and T. Meng, Hopf-type representation formulas and efficient algorithms for certain high-dimensional optimal control problems, arXiv preprint arXiv:2110.02541, (2021).
* [15] P. Chen, J. Darbon, and T. Meng, Lax-Oleinik-type formulas and efficient algorithms for certain high-dimensional optimal control problems, arXiv preprint arXiv:2109.14849, (2021).
* [16] A. R. Conn, N. I. Gould, and P. Toint, A globally convergent augmented Lagrangian algorithm for optimization with general constraints and simple bounds, SIAM Journal on Numerical Analysis, 28 (1991), pp. 545–572.
* [17] M. Coupechoux, J. Darbon, J.-M. Kélif, and M. Sigelle, Optimal trajectories of a UAV base station using Lagrangian mechanics, in IEEE INFOCOM 2019-IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), IEEE, 2019, pp. 626–631.
* [18] J. Darbon, On convex finite-dimensional variational methods in imaging sciences and Hamilton–Jacobi equations, SIAM Journal on Imaging Sciences, 8 (2015), pp. 2268–2293, https://doi.org/10.1137/130944163.
* [19] J. Darbon, P. M. Dower, and T. Meng, Neural network architectures using min plus algebra for solving certain high dimensional optimal control problems and Hamilton-Jacobi PDEs, arXiv preprint arXiv:2105.03336, (2021).
* [20] J. Darbon, G. P. Langlois, and T. Meng, Overcoming the curse of dimensionality for some Hamilton-Jacobi partial differential equations via neural network architectures, Res. Math. Sci., 7 (2020), p. 20, https://doi.org/10.1007/s40687-020-00215-6.
* [21] J. Darbon and T. Meng, On decomposition models in imaging sciences and multi-time Hamilton–Jacobi partial differential equations, SIAM Journal on Imaging Sciences, 13 (2020), pp. 971–1014, https://doi.org/10.1137/19M1266332.
* [22] J. Darbon and T. Meng, On some neural network architectures that can represent viscosity solutions of certain high dimensional Hamilton–Jacobi partial differential equations, Journal of Computational Physics, 425 (2021), p. 109907, https://doi.org/10.1016/j.jcp.2020.109907.
* [23] J. Darbon, T. Meng, and E. Resmerita, On Hamilton-Jacobi PDEs and image denoising models with certain non-additive noise, arXiv preprint arXiv:2105.13997, (2021).
* [24] J. Darbon and S. Osher, Algorithms for overcoming the curse of dimensionality for certain Hamilton-Jacobi equations arising in control theory and elsewhere, Research in the Mathematical Sciences, 3 (2016), pp. 1–26, https://doi.org/10.1186/s40687-016-0068-7.
* [25] D. Delahaye, S. Puechmorel, P. Tsiotras, and E. Féron, Mathematical models for aircraft trajectory design: A survey, in Air Traffic Management and Systems, Springer, 2014, pp. 205–247.
* [26] J. Denk and G. Schmidt, Synthesis of a walking primitive database for a humanoid robot using optimal control techniques, in Proceedings of IEEE-RAS International Conference on Humanoid Robots, 2001, pp. 319–326.
* [27] B. Djeridane and J. Lygeros, Neural approximation of PDE solutions: An application to reachability computations, in Proceedings of the 45th IEEE Conference on Decision and Control, Dec 2006, pp. 3034–3039, https://doi.org/10.1109/CDC.2006.377184.
* [28] S. Dolgov, D. Kalise, and K. K. Kunisch, Tensor decomposition methods for high-dimensional Hamilton–Jacobi–Bellman equations, SIAM Journal on Scientific Computing, 43 (2021), pp. A1625–A1650, https://doi.org/10.1137/19M1305136.
* [29] P. M. Dower, W. M. McEneaney, and H. Zhang, Max-plus fundamental solution semigroups for optimal control problems, in 2015 Proceedings of the Conference on Control and its Applications, SIAM, 2015, pp. 368–375.
* [30] A. El Khoury, F. Lamiraux, and M. Taix, Optimal motion planning for humanoid robots, in 2013 IEEE International Conference on Robotics and Automation, IEEE, 2013, pp. 3136–3141.
* [31] F. Fahroo and I. M. Ross, Costate estimation by a Legendre pseudospectral method, Journal of Guidance, Control, and Dynamics, 24 (2001), pp. 270–277.
* [32] F. Fahroo and I. M. Ross, Direct trajectory optimization by a Chebyshev pseudospectral method, Journal of Guidance, Control, and Dynamics, 25 (2002), pp. 160–166.
* [33] M. Fallon, S. Kuindersma, S. Karumanchi, M. Antone, T. Schneider, H. Dai, C. P. D’Arpino, R. Deits, M. DiCicco, D. Fourie, T. Koolen, P. Marion, M. Posa, A. Valenzuela, K.-T. Yu, J. Shah, K. Iagnemma, R. Tedrake, and S. Teller, An architecture for online affordance-based perception and whole-body planning, Journal of Field Robotics, 32 (2015), pp. 229–254.
* [34] S. Feng, E. Whitman, X. Xinjilefu, and C. G. Atkeson, Optimization based full body control for the atlas robot, in 2014 IEEE-RAS International Conference on Humanoid Robots, IEEE, 2014, pp. 120–127.
* [35] W. Fleming and W. McEneaney, A max-plus-based algorithm for a Hamilton–Jacobi–Bellman equation of nonlinear filtering, SIAM Journal on Control and Optimization, 38 (2000), pp. 683–710, https://doi.org/10.1137/S0363012998332433.
* [36] K. Fujiwara, S. Kajita, K. Harada, K. Kaneko, M. Morisawa, F. Kanehiro, S. Nakaoka, and H. Hirukawa, An optimal planning of falling motions of a humanoid robot, in 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, 2007, pp. 456–462.
* [37] J. Garcke and A. Kröner, Suboptimal feedback control of PDEs by solving HJB equations on adaptive sparse grids, Journal of Scientific Computing, 70 (2017), pp. 1–28.
* [38] D. Garg, Advances in global pseudospectral methods for optimal control, PhD thesis, University of Florida Gainesville, FL, 2011.
* [39] S. Gaubert, W. McEneaney, and Z. Qu, Curse of dimensionality reduction in max-plus based approximation methods: Theoretical estimates and improved pruning algorithms, in 2011 50th IEEE Conference on Decision and Control and European Control Conference, IEEE, 2011, pp. 1054–1061.
* [40] S. Greydanus, M. Dzamba, and J. Yosinski, Hamiltonian neural networks, in Advances in Neural Information Processing Systems, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, eds., vol. 32, Curran Associates, Inc., 2019, https://proceedings.neurips.cc/paper/2019/file/26cd8ecadce0d4efd6cc8a8725cbd1f8-Paper.pdf.
* [41] E. Hairer, M. Hochbruck, A. Iserles, and C. Lubich, Geometric numerical integration, Oberwolfach Reports, 3 (2006), pp. 805–882.
* [42] J. Han, A. Jentzen, and W. E, Solving high-dimensional partial differential equations using deep learning, Proceedings of the National Academy of Sciences, 115 (2018), pp. 8505–8510, https://doi.org/10.1073/pnas.1718942115.
* [43] M. Hofer, M. Muehlebach, and R. D’Andrea, Application of an approximate model predictive control scheme on an unmanned aerial vehicle, in 2016 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2016, pp. 2952–2957.
* [44] M. B. Horowitz, A. Damle, and J. W. Burdick, Linear Hamilton Jacobi Bellman equations in high dimensions, in 53rd IEEE Conference on Decision and Control, IEEE, 2014, pp. 5880–5887.
* [45] C. Huré, H. Pham, A. Bachouch, and N. Langrené, Deep neural networks algorithms for stochastic control problems on finite horizon: Convergence analysis, SIAM Journal on Numerical Analysis, 59 (2021), pp. 525–557, https://doi.org/10.1137/20M1316640.
* [46] C. Huré, H. Pham, and X. Warin, Some machine learning schemes for high-dimensional nonlinear PDEs, arXiv preprint arXiv:1902.01599, (2019).
* [47] F. Jiang, G. Chou, M. Chen, and C. J. Tomlin, Using neural networks to compute approximate and guaranteed feasible Hamilton-Jacobi-Bellman PDE solutions, arXiv preprint arXiv:1611.03158, (2016).
* [48] L. Jin, S. Li, J. Yu, and J. He, Robot manipulator control using neural networks: A survey, Neurocomputing, 285 (2018), pp. 23–34, https://doi.org/10.1016/j.neucom.2018.01.002.
* [49] P. Jin, Z. Zhang, I. G. Kevrekidis, and G. E. Karniadakis, Learning Poisson systems and trajectories of autonomous systems via Poisson neural networks, arXiv preprint arXiv:2012.03133, (2020).
* [50] P. Jin, Z. Zhang, A. Zhu, Y. Tang, and G. E. Karniadakis, SympNets: Intrinsic structure-preserving symplectic networks for identifying Hamiltonian systems, Neural Networks, 132 (2020), pp. 166–179.
* [51] D. Kalise, S. Kundu, and K. Kunisch, Robust feedback control of nonlinear PDEs by numerical approximation of high-dimensional Hamilton-Jacobi-Isaacs equations, arXiv preprint arXiv:1905.06276, (2019).
* [52] D. Kalise and K. Kunisch, Polynomial approximation of high-dimensional Hamilton–Jacobi–Bellman equations and applications to feedback control of semilinear parabolic PDEs, SIAM Journal on Scientific Computing, 40 (2018), pp. A629–A652.
* [53] W. Kang and L. C. Wilcox, Mitigating the curse of dimensionality: sparse grid characteristics method for optimal feedback control and HJB equations, Computational Optimization and Applications, 68 (2017), pp. 289–315.
* [54] Y. H. Kim, F. L. Lewis, and D. M. Dawson, Intelligent optimal control of robotic manipulators using neural networks, Automatica, 36 (2000), pp. 1355–1364, https://doi.org/10.1016/S0005-1098(00)00045-5.
* [55] M. R. Kirchner, M. J. Debord, and J. P. Hespanha, A Hamilton–Jacobi formulation for optimal coordination of heterogeneous multiple vehicle systems, in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, 2020, pp. 11623–11630.
* [56] S. Kuindersma, R. Deits, M. Fallon, A. Valenzuela, H. Dai, F. Permenter, T. Koolen, P. Marion, and R. Tedrake, Optimization-based locomotion planning, estimation, and control design for the atlas humanoid robot, Autonomous robots, 40 (2016), pp. 429–455.
* [57] K. Kunisch, S. Volkwein, and L. Xie, HJB-POD-based feedback design for the optimal control of evolution problems, SIAM Journal on Applied Dynamical Systems, 3 (2004), pp. 701–722.
* [58] P. Lambrianides, Q. Gong, and D. Venturi, A new scalable algorithm for computational optimal control under uncertainty, Journal of Computational Physics, 420 (2020), p. 109710, https://doi.org/10.1016/j.jcp.2020.109710.
* [59] D. Lee and C. J. Tomlin, A Hopf-Lax formula in Hamilton–Jacobi analysis of reach-avoid problems, IEEE Control Systems Letters, 5 (2020), pp. 1055–1060.
* [60] D. Lee and C. J. Tomlin, A Computationally Efficient Hamilton-Jacobi-based Formula for State-Constrained Optimal Control Problems, arXiv preprint arXiv:2106.13440, (2021).
* [61] F. Lewis, D. Dawson, and C. Abdallah, Robot Manipulator Control: Theory and Practice, Control engineering, Marcel Dekker, 2004, https://books.google.com/books?id=BDS_PQAACAAJ.
* [62] A. Li, S. Bansal, G. Giovanis, V. Tolani, C. Tomlin, and M. Chen, Generating robust supervision for learning-based visual navigation using Hamilton-Jacobi reachability, in Learning for Dynamics and Control, PMLR, 2020, pp. 500–510.
* [63] F. Lin and R. D. Brandt, An optimal control approach to robust control of robot manipulators, IEEE Transactions on robotics and automation, 14 (1998), pp. 69–77.
* [64] L. Lu, P. Jin, G. Pang, Z. Zhang, and G. E. Karniadakis, Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators, Nature Machine Intelligence, 3 (2021), pp. 218–229.
* [65] V. R. Makkapati, J. Ridderhof, P. Tsiotras, J. Hart, and B. van Bloemen Waanders, Desensitized trajectory optimization for hypersonic vehicles, in 2021 IEEE Aerospace Conference (50100), 2021, pp. 1–10, https://doi.org/10.1109/AERO50100.2021.9438511.
* [66] W. McEneaney, A curse-of-dimensionality-free numerical method for solution of certain HJB PDEs, SIAM Journal on Control and Optimization, 46 (2007), pp. 1239–1276, https://doi.org/10.1137/040610830.
* [67] W. M. McEneaney, Max-plus methods for nonlinear control and estimation, Systems & Control: Foundations & Applications, Birkhäuser Boston, Inc., Boston, MA, 2006.
* [68] W. M. McEneaney, A. Deshpande, and S. Gaubert, Curse-of-complexity attenuation in the curse-of-dimensionality-free method for HJB PDEs, in 2008 American Control Conference, IEEE, 2008, pp. 4684–4690.
* [69] W. M. McEneaney and L. J. Kluberg, Convergence rate for a curse-of-dimensionality-free method for a class of HJB PDEs, SIAM Journal on Control and Optimization, 48 (2009), pp. 3052–3079.
* [70] X. Meng and G. E. Karniadakis, A composite neural network that learns from multi-fidelity data: Application to function approximation and inverse PDE problems, Journal of Computational Physics, 401 (2020), p. 109020.
* [71] T. Nakamura-Zimmerer, Q. Gong, and W. Kang, Adaptive deep learning for high-dimensional Hamilton–Jacobi–Bellman equations, SIAM Journal on Scientific Computing, 43 (2021), pp. A1221–A1247, https://doi.org/10.1137/19M1288802.
* [72] T. Nakamura-Zimmerer, Q. Gong, and W. Kang, QRnet: Optimal regulator design with LQR-augmented neural networks, IEEE Control Systems Letters, 5 (2021), pp. 1303–1308, https://doi.org/10.1109/LCSYS.2020.3034415.
* [73] K. N. Niarchos and J. Lygeros, A neural approximation to continuous time reachability computations, in Proceedings of the 45th IEEE Conference on Decision and Control, Dec 2006, pp. 6313–6318, https://doi.org/10.1109/CDC.2006.377358.
* [74] D. Onken, L. Nurbekyan, X. Li, S. W. Fung, S. Osher, and L. Ruthotto, A neural network approach for high-dimensional optimal control, arXiv preprint arXiv:2104.03270, (2021).
* [75] C. Parzani and S. Puechmorel, On a Hamilton-Jacobi-Bellman approach for coordinated optimal aircraft trajectories planning, Optimal Control Applications and Methods, 39 (2018), pp. 933–948.
* [76] M. Raissi, P. Perdikaris, and G. E. Karniadakis, Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations, Journal of Computational Physics, 378 (2019), pp. 686–707.
* [77] A. V. Rao, A survey of numerical methods for optimal control, Advances in the Astronautical Sciences, 135 (2009), pp. 497–528.
* [78] C. Reisinger and Y. Zhang, Rectified deep neural networks overcome the curse of dimensionality for nonsmooth value functions in zero-sum games of nonlinear stiff systems, Analysis and Applications, 18 (2020), pp. 951–999, https://doi.org/10.1142/S0219530520500116.
* [79] D. R. Robinson, R. T. Mar, K. Estabridis, and G. Hewer, An efficient algorithm for optimal trajectory generation for heterogeneous multi-agent systems in non-convex environments, IEEE Robotics and Automation Letters, 3 (2018), pp. 1215–1222.
* [80] I. M. Ross and M. Karpenko, A review of pseudospectral optimal control: From theory to flight, Annual Reviews in Control, 36 (2012), pp. 182–197, https://doi.org/10.1016/j.arcontrol.2012.09.002.
* [81] V. R. Royo and C. Tomlin, Recursive regression with neural networks: Approximating the HJI PDE solution, arXiv preprint arXiv:1611.02739, (2016).
* [82] A. Rucco, G. Notarstefano, and J. Hauser, An efficient minimum-time trajectory generation strategy for two-track car vehicles, IEEE Transactions on Control Systems Technology, 23 (2015), pp. 1505–1519.
* [83] A. Rucco, P. Sujit, A. P. Aguiar, J. B. De Sousa, and F. L. Pereira, Optimal rendezvous trajectory for unmanned aerial-ground vehicles, IEEE Transactions on Aerospace and Electronic Systems, 54 (2017), pp. 834–847.
* [84] J. Sirignano and K. Spiliopoulos, DGM: A deep learning algorithm for solving partial differential equations, Journal of Computational Physics, 375 (2018), pp. 1339 – 1364, https://doi.org/10.1016/j.jcp.2018.08.029.
* [85] S. Spedicato and G. Notarstefano, Minimum-time trajectory generation for quadrotors in constrained environments, IEEE Transactions on Control Systems Technology, 26 (2017), pp. 1335–1344.
* [86] E. Todorov, Efficient computation of optimal actions, Proceedings of the national academy of sciences, 106 (2009), pp. 11478–11483.
* [87] I. Yegorov and P. M. Dower, Perspectives on characteristics based curse-of-dimensionality-free numerical approaches for solving Hamilton–Jacobi equations, Applied Mathematics & Optimization, (2017), pp. 1–49.
* [88] Z. Zhang, Y. Shin, and G. E. Karniadakis, GFINNs: GENERIC formalism informed neural networks for deterministic and stochastic dynamical systems, arXiv preprint arXiv:2109.00092, (2021).
* [89] M. Zhou, J. Han, and J. Lu, Actor-critic method for high dimensional static Hamilton–Jacobi–Bellman partial differential equations based on neural networks, arXiv preprint arXiv:2102.11379, (2021).
1: Department of Astrophysics, IMAPP, Radboud University, PO Box 9010, 6500 GL Nijmegen, The Netherlands
2: Instituto de Astrofísica, Facultad de Física, Pontificia Universidad Católica de Chile, Casilla 306, Santiago 22, Chile
3: Center for Astrophysics $|$ Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA
4: Black Hole Initiative, Harvard University, 20 Garden Street, Cambridge, MA 02138, USA
5: NASA Hubble Fellowship Program, Einstein Fellow
6: Center for Computational Astrophysics, Flatiron Institute, 162 Fifth Avenue, New York, NY 10010, USA
7: Department of Astronomy and Columbia Astrophysics Laboratory, Columbia University, 550 W 120th St, New York, NY 10027, USA
8: Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany
9: ASTRON, The Netherlands Institute for Radio Astronomy, Postbus 2, NL-7990 AA Dwingeloo, The Netherlands
# Imaging the event horizon of M87* from space on different timescales
A. Shlentsova$^{1,2}$, F. Roelofs$^{3,4,1}$, S. Issaoun$^{3,5,1}$, J. Davelaar$^{6,7,1}$, H. Falcke$^{1,8,9}$
(Received June 16, 2023; accepted February 4, 2024)
###### Abstract
Context. The concept of a new space very long baseline interferometry (SVLBI)
system named the Event Horizon Imager (EHI) has been proposed to dramatically
improve black hole imaging and provide precise tests of the theory of general
relativity.
Aims. This paper presents imaging simulations for the EHI. We investigate the
ability to make high-resolution movies of the black hole shadow and jet
launching region around the supermassive black hole M87* and other black hole
jets with a three-satellite EHI configuration. We aim to identify orbital
configurations to optimize the $uv$-coverage to image variable sources.
Methods. Observations of general relativistic magnetohydrodynamics (GRMHD)
models were simulated for the configuration, consisting of three satellites in
circular medium Earth orbits with an orbital plane perpendicular to the line
of sight. The expected noise was based on preliminary system parameters. Movie
frames, for which a part of the $uv$-coverage may be excessively sparse, were
reconstructed with algorithms that recover missing information from other
frames. Averaging visibilities accumulated over multiple epochs of
observations with an appropriate orbital configuration then improves the image
quality. With an enhanced signal-to-noise ratio, the timescales of observable
variability decreased.
Results. Our simulations show that the EHI with standard system parameters is
capable of imaging the variability in the M87* environment on event horizon
scales with approximately a month-long temporal resolution. The EHI with more
optimistic noise parameters (enhancing the signal-to-noise ratio about
100-fold) would allow for imaging of the variability on gravitational
timescales. Observations with an EHI setup at lower frequencies are capable of
imaging the variability in extended jets.
Conclusions. Our study shows that the EHI concept can be used to image the
variability in a black hole environment and extended jets, allowing for
stronger tests of gravity theories and models of black hole accretion, plasma
dynamics, and jet launching.
###### Key Words.:
Galaxy: center – Techniques: interferometric – Techniques: high angular
resolution – Methods: data analysis
## 1 Introduction
The supermassive black hole in the nucleus of the elliptical galaxy M87 (M87*)
has been an object of great interest for intensive studies with imaging
observations since its discovery in 1978 (Young et al., 1978; Sargent et al.,
1978). Measurements of the black hole mass $M_{BH}=(6.5\pm
0.2{|}_{\mathrm{stat}}\pm 0.7{|}_{\mathrm{sys}})\times 10^{9}M_{\odot}$
extracted from the direct measurements of the angular diameter of the shadow
(Event Horizon Telescope Collaboration et al., 2019a, b, c, d, e, f) are
consistent with the presence of a central Kerr black hole. The estimated mass
from the Event Horizon Telescope (EHT) observations agrees with stellar-
dynamic observations for which the most recently obtained result is
$M_{BH}=(6.6\pm 0.4)\times 10^{9}M_{\odot}$ (Gebhardt et al., 2011). Falcke et
al. (2000) introduced the black hole ‘shadow’ — the region on the sky plane of
a black hole in which there is a noticeable deficit of the observed intensity
due to gravitational lensing effects. They proposed that this shadow is
resolvable with very long baseline interferometry (VLBI) at sub-millimetre
wavelengths. The shadow is caused by two effects: the strong gravitational
redshift and a shorter total path length of photons escaping to the observer
from geodesics intersecting the horizon, while photons on geodesics missing
the horizon can orbit the black hole near the circular radius several times,
which leads to a higher integrated emissivity (Bronzwaer & Falcke, 2021). This
region is a circle of radius $\sqrt{27}R_{g}$ (where $R_{g}=GM_{BH}/c^{2}$,
$G$ is the gravitational constant, $M_{BH}$ is the mass of the black hole and
$c$ is the velocity of light) in the Schwarzschild case (the dimensionless
black hole spin parameter $a_{*}=0$) and has a more flattened shape of a
similar size for a Kerr black hole. Numerical studies on the observability of
the M87* shadow were either done using semi-analytical models (Broderick &
Loeb, 2009) or with general relativistic magnetohydrodynamics (GRMHD)
simulations (Dexter et al., 2012; Mościbrodzka et al., 2016, 2017; Ryan et
al., 2018; Chael et al., 2019b; Davelaar et al., 2019). The shadow is directly
connected to the time-like component of the space-time metric that is also
probed by gravitational wave measurements (Psaltis et al., 2020, 2021).
M87* has a black hole shadow with an angular size of $\sim 42\,\mu$as (Event
Horizon Telescope Collaboration et al., 2019a). It is the second-largest
black hole shadow on the sky as seen from
Earth, after the Galactic Centre black hole Sagittarius A* (Sgr A*), the
supermassive black hole at the centre of the Milky Way. Sgr A* has a black
hole mass of $M_{BH}=(4.152\pm 0.014)\times 10^{6}{M}_{\odot}$ and an angular
shadow size of $\sim 52\,\mu$as (Doeleman et al., 2008;
Johnson et al., 2015; Lu et al., 2018; Abuter et al., 2019; Event Horizon
Telescope Collaboration et al., 2022a, b, c, d, e, f). M87* has several
advantages for observations in comparison with Sgr A*, namely the absence of
scattering by the interstellar medium between us and the central radio source.
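As a back-of-the-envelope check (ours, not from the paper), the quoted $\sim 42\,\mu$as follows from the shadow diameter $2\sqrt{27}R_{g}$ together with the commonly used M87 distance of roughly 16.8 Mpc (an assumption here):

```python
import numpy as np

G, c = 6.674e-11, 2.998e8                  # SI units
M_sun, pc = 1.989e30, 3.086e16             # kg, m
M = 6.5e9 * M_sun                          # EHT mass estimate
D = 16.8e6 * pc                            # assumed distance to M87
R_g = G * M / c**2                          # gravitational radius, ~9.6e12 m
theta = 2.0 * np.sqrt(27.0) * R_g / D       # shadow angular diameter, radians
print(theta * 180.0 / np.pi * 3.6e9)        # ~40 microarcseconds
```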
The timescales of structural variations of a black hole are characterized by
the gravitational time $t_{\mathrm{G}}=GM_{BH}/c^{3}$ and the innermost stable
circular orbital time $t_{\mathrm{ISCO}}\approx 24.25t_{\mathrm{G}}$ in the
case of the rapidly spinning Kerr black hole ($a_{*}=0.94$). The timescales of
structural variations of M87* are, therefore, $\sim 10^{3}$ times longer than those
of Sgr A*, allowing us to assume a static source during single-day
observations and make use of classical aperture synthesis techniques. Another
striking difference is that M87* hosts a powerful relativistic jet observed in
the radio, optical, and X-ray bands (e.g. Sparks et al., 1996; Marshall et
al., 2002). Although Sgr A* may also host a jet, it has not been clearly
observed yet. Altogether this makes M87* an auspicious source for observations
with VLBI at millimetre and sub-millimetre wavelengths.
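Plugging the EHT mass estimate into these definitions gives concrete numbers (order of magnitude only):

```python
G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30   # SI units

t_G = G * 6.5e9 * M_sun / c**3    # ~3.2e4 s, about 9 hours for M87*
t_ISCO = 24.25 * t_G              # ~9 days for a_* = 0.94
print(t_G / 3600.0, t_ISCO / 86400.0)
```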
Multiple VLBI imaging studies of M87* have been carried out at 3.5 mm (a
frequency of 86 GHz), 7 mm (43 GHz), 1.3 cm (24 GHz), and longer wavelengths
(e.g. Hada et al., 2016; Kravchenko et al., 2020; Chang et al., 2010). The
development of high-bandwidth VLBI systems made it possible to decrease
observational wavelength down to 1.3 mm (230 GHz) for the EHT and reach an
instrument angular resolution of about 25 $\mu$as (Event Horizon Telescope
Collaboration et al., 2019b). Therefore, the EHT array can resolve the shadow
of M87* (Event Horizon Telescope Collaboration et al., 2019a, b, c, d, e, f).
The EHT data provided the first image of the black hole shadow with the
surrounding ring of emission. This allowed for the exclusion of some
categories of black hole models (Event Horizon Telescope Collaboration et al.,
2019e). With the addition of linear- and circular-polarimetric images of the
ring, magnetically arrested disk (MAD) models (Narayan et al., 2003) with
$a_{*}$ roughly around -0.5 and 0.0 were found to be favoured (Event Horizon
Telescope Collaboration et al., 2021a, b; Event Horizon Telescope
Collaboration et al., 2023). Nevertheless, conclusions are still strongly
dependent on the imaging assumptions. A further reduction of acceptable models
requires more accurate spin measurements as well as an improved estimation of
the magnetization and electron temperature distribution. Quantitative spin
measurements require higher-resolution imaging of the source (Van der Gucht et
al., 2020; Roelofs et al., 2021). Measurements of the photon subring
parameters of M87* could offer precise measurements of the black hole mass and
spin (Johnson et al., 2020). In addition, more accurate measurements of the
static photon ring allow for deeper tests of the Kerr metric and consequently
the theory of general relativity (GR) (Vagnozzi et al., 2022). Moreover, the
plasma behaviour in a black hole environment is currently not fully
understood. Imaging the variability of the emitting plasma would provide
constraints on the plasma parameters. Hence imaging variability in the M87*
environment with a higher resolution and fidelity would deliver data for more
accurate reconstructions of the plasma surrounding the black hole, as well as
GR tests. Furthermore, such measurements could be used to test different
theories of gravity (Mizuno et al., 2018) and other non-Kerr alternatives such
as boson stars (Olivares et al., 2020), axion models (Chen et al., 2020), or
fuzzballs (Bacchini et al., 2021). However, this requires improvements to be
made to the EHT.
Earth-based extensions of the EHT array, such as the next-generation EHT
(ngEHT, Doeleman et al., 2019; Raymond et al., 2021; Johnson et al., 2023;
Doeleman et al., 2023), will substantially improve imaging with increased
sensitivity and the additional VLBI baseline coverage from new sites. The
ngEHT will also have an expanded frequency coverage up to 345 GHz.
Nevertheless, angular resolution improvements for ground-based EHT extensions
are limited to a maximum physical baseline length of one Earth diameter.
Besides, frequencies higher than 345 GHz are accessible only for very few
sites with excellent atmospheric conditions, which makes VLBI image
reconstruction at these frequencies difficult, if not impossible.
Further enhancement of the EHT can be achieved with the deployment of EHT
antennas into space. The first observations which included space-ground
baselines were performed in 1986 by the Tracking and Data Relay Satellite
System (TDRSS) and ground-based telescopes in Australia and Japan (Levy et
al., 1986). One of the first space VLBI (SVLBI) missions to include space-
ground baselines was the VLBI Space Observatory Programme (VSOP) carried by
the Highly Advanced Laboratory for Communications and Astronomy (HALCA)
satellite that operated in orbit in the period from 1997 to 2003 (Hirabayashi
et al., 1998, 2000). Another SVLBI mission of this kind was RadioAstron, which
was operational between 2011 and 2019 (Kardashev et al., 2013).
In addition to the EHT array, VLBI stations could be located in a nearly
circular Low Earth Orbit (Palumbo et al., 2019) or a Highly Elliptical Orbit
(Andrianov et al., 2021). Such a setup would provide fast baseline coverage,
allowing for dynamical imaging of rapidly varying sources, such as Sgr A*. VLBI
stations located in an elliptical medium Earth orbit (MEO) would increase the
angular resolution and the imaging capabilities of the array to distinguish
different space-time metrics around Sgr A* (Fromm et al., 2021). Inclusion of
one telescope in a high-inclination MEO or Geosynchronous Orbit would increase
the angular resolution and the number of available sources for which the
shadow could be resolved (Fish et al., 2019). Examples include the central
black holes in IC 1459 and M84 (NGC 4374), with predicted shadow diameters of
$\sim 9.2\,\mu$as and $\sim 9.1\,\mu$as, respectively, the Sombrero Galaxy
(NGC 4594, M104), with an estimated shadow diameter of $\sim 5.7\,\mu$as, and
several other sources (Johannsen et al., 2012). The Black Hole Explorer
(BHEX, https://www.blackholeexplorer.org/) mission concept aims to launch a
telescope into a near-geosynchronous orbit to make precision measurements of a
black hole’s photon ring properties using ground-space baselines (Kurczynski
et al., 2022, Johnson et al., Marrone et al., in prep., SPIE), which would
complement high-frequency space-space baseline imaging with the Event Horizon
Imager concept presented in this work. A station on the Moon or in the Earth-
Sun or Earth-Moon Lagrange points, added to the EHT array, would sharpen the
angular resolution sufficiently to resolve signatures of the substructures in
the photon rings of M87* and Sgr A* (Johnson et al., 2020). It was proposed
that the Origins Space Telescope, equipped with a modified version of its
heterodyne instrument, could be used for these purposes (Pesce et al., 2019).
An alternative approach is to consider only space-space baselines. Such a
system has the advantage that atmospheric data corruptions can be avoided,
thus allowing for observation at higher frequencies in addition to the
possibility of reaching baselines longer than an Earth diameter. Therefore,
the resolution can be increased even further.
The Event Horizon Imager (EHI) is a concept for a new SVLBI system that
consists of two or three satellites in polar or equatorial circular MEOs
observing with only space-space baselines at frequencies up to 690 GHz
(Martin-Neira et al., 2017; Kudriashov et al., 2019; Roelofs et al., 2019;
Kudriashov et al., 2021b). The concept envisages the precise localization of
the interferometer baseline based on the real-time relative positioning of the
satellites using the Global Navigation Satellite System (GNSS). In addition,
the concept suggests the on-board cross-correlation for the reduction of the
difficulties of the data downlink to the ground. The accurate positioning of
the satellites and interchange of local oscillator signals permit the phase
calibration of EHI observations and, thus, the use of complex visibilities
(Kudriashov et al., 2021b). Both the proposed on-the-fly positioning and the
on-board data processing require a working inter-satellite link (ISL) to
perform highly accurate ranging measurements, as well as to exchange observed
signals and local oscillator components for coherent operation (Martin-Neira
et al., 2019; Kudriashov et al., 2021a). This implies a direct line of sight
between the satellites. Hence the maximum baseline length, limiting the
angular resolution of the EHI, is set by the radius of the satellite orbits
and the occultation of the ISL by the Earth. The proposed resolution is $\sim
5$ $\mu$as (Kudriashov et al., 2021b), which is an order of magnitude better
than what can be obtained with the EHT from Earth. Due to higher observation
frequencies, the EHI can detect emission that originates from regions that are
closer to the black hole (Mościbrodzka et al., 2009). Those regions are more
dominated by general relativistic effects, leading to the reduced variability
of the images. Additionally, it causes the emission to trace the photon ring
more closely. Moreover, the proposed system can allow one to avoid a
significant part of the interstellar scattering for the observations of Sgr A*
since the scattering kernel size decreases with the square of the observing
frequency (Roelofs et al., 2019).
Simulated observations of Sgr A* with the EHI demonstrate the excellent
imaging capability with this concept (Roelofs et al., 2019). These simulations
have shown that the EHI could image Sgr A* with an order of magnitude higher
angular resolution and accuracy than the EHT within just a few months of
observation, assuming standard system noise parameters.
The overarching EHI science goal is to test theories of gravity. Precise
imaging of a black hole photon ring brings essential information to
distinguish between GR and alternative theories. The imaging of the plasma
variability would provide constraints on models of plasma dynamics and jet
formation. Therefore, the motivation for further testing of the EHI
capabilities is to study aspects limiting the fidelity of imaging on different
timescales of observations in order to determine how accurately GR, as well as
models of black hole accretion, plasma dynamics and jet launching, can be
tested with the EHI. In addition, understanding EHI imaging constraints can
help to optimize the design of the system.
This paper supplements previous EHI studies by examining several
configurations of the system. We study their influence on the possibility of
resolving structural variations of the M87* environment and extended jets of
other black holes (e.g. NGC 1052) at millimetre and sub-millimetre
wavelengths. The research is focused on M87* since its timescales of
structural variations are significantly longer than those of Sgr A*, as
mentioned above. The considered system setups are described in Section 2.
Section 3 introduces source models and image generation. The simulation
results are presented in Section 4, and conclusions are summarized in Section
5.
Figure 1: Time to reach the longest baseline as a function of orbital
separation for the considered EHI system setup. The blue line corresponds to
the most separated pair of satellites; the red line corresponds to the pair
of the innermost and the middle satellites; the green line corresponds to the
pair of the middle and the outermost satellites.
Figure 2: Parts of the $uv$-coverage of 8.9-hour duration for 30, 400, and
1000 km separations of the orbits at 230 GHz; the first, 81st, and 161st
snapshots are shown (from top to bottom). Blue points correspond to the most
separated pair of satellites; red points correspond to the pair of the
innermost and the middle satellites; green points correspond to the pair of
the middle and the outermost satellites.
## 2 EHI system setup
The EHI concept implies two or three satellites in polar or equatorial circular MEOs, as mentioned above. The particular configuration is still under discussion. The EHI system considered in this paper consisted of three satellites in circular MEOs with slightly different radii. The largest radius of the satellite orbits was 13 913 km. The selected radius provides the longest possible baselines, taking into account the required simultaneous accessibility of at least three GNSS satellites from the EHI orbits. At the same time, this maximal radius ensures that the orbits of all three EHI satellites lie between the Van Allen belts (Kudriashov et al., 2019). The second satellite was placed on an orbit with a radius smaller by a distance referred to hereafter as the orbital separation. The third satellite was added between the first two; its orbital radius exceeded that of the inner satellite by one-third of the orbital separation. The orbital plane was set perpendicular to the line of sight to the observed source, taking into account the declination. In reality, without additional reorientation of the orbital plane, this condition can be fulfilled only for a very limited set of sources. Due to the different orbital radii and velocities of the satellites, the complete $uv$-coverage has the shape of a spiral. We investigated systems with constant orbital separations of 30, 50, 60, 100, 200, 300, 350, 400, 500, and 1000 km.
The imaging of the horizon-scale variability requires satisfactory coverage of the whole $uv$-plane over time intervals comparable to the timescale of the investigated variability. The longest available baseline is limited by the radius of the satellite orbits and the occultation of the ISL by the Earth. Let us assume that all three satellites start at the same orbital phase. When the first pair of satellites reaches the longest baseline, the other two pairs are also at relatively long baselines. If the system continues observation without configuration changes, the $uv$-coverage provided by the remaining pairs of satellites lacks short baselines. This situation continues until the occultation of the ISL between the first pair of satellites ends and they converge again. During this time, the other two pairs of satellites sequentially find themselves in a similar situation, which creates a prolonged period of increased $uv$-coverage sparseness. This sparsity period can in principle be avoided by interchanging the satellite orbits each time the longest baseline is reached. Figure 1 shows that the time to reach the longest baseline depends on the orbital separation and rapidly decreases from approximately a month to several days. Regular changes in the satellite orbits on such short timescales are exceedingly fuel-consuming and, therefore, unrealistic. Avoiding satellite orbit changes saves fuel, prolonging the overall mission duration and increasing both the amount of collected data and the number of objects that can be observed. We therefore assumed that the satellites remain in their orbits as long as required to obtain a sufficiently long series of $uv$-data points. The full data series can be used for the reconstruction of a highly detailed time-averaged image. In this paper, we divided the series of data into parts to obtain the time resolution of interest. Figure 2 illustrates this concept with an example (see Sec. 3.2 for details). Thus, the temporal resolution did not depend on the orbital separation and was selected during data processing.
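To illustrate the scaling behind Figure 1, the following sketch estimates how long a satellite pair starting at the same orbital phase needs to drift to the Earth-grazing (longest) baseline. It assumes idealized circular Keplerian orbits at the quoted outer radius of 13 913 km and approximates the occultation limit by the chord tangent to the Earth's surface; the two-body approximation and the constants are ours, not the mission orbit model.

```python
import numpy as np

GM = 3.986004418e14   # Earth's gravitational parameter, m^3 s^-2
R_EARTH = 6371e3      # Earth radius, m
R_OUTER = 13913e3     # outer EHI orbit radius (this section), m

def time_to_longest_baseline(separation_m):
    """Time for two satellites on circular coplanar orbits, starting at
    the same phase, to drift apart until the chord between them grazes
    the Earth (approximately the longest available baseline)."""
    n1 = np.sqrt(GM / R_OUTER**3)                    # mean motion, outer orbit
    n2 = np.sqrt(GM / (R_OUTER - separation_m)**3)   # mean motion, inner orbit
    # Relative phase at which the inter-satellite chord is tangent to Earth
    dphi_max = 2.0 * np.arccos(R_EARTH / R_OUTER)
    return dphi_max / (n2 - n1)                      # seconds

for sep_km in (30, 100, 400, 1000):
    print(sep_km, "km:", time_to_longest_baseline(sep_km * 1e3) / 86400, "days")
```

The estimate falls from roughly three weeks at 30 km to under a day at 1000 km, in order-of-magnitude agreement with the month-to-days trend shown in Figure 1.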
In this work, we considered the standard EHI system composed of 4.0-metre antennas, fitting in the Ariane 6 spacecraft (Arianespace, 2018), and a hypothetical EHI+ system consisting of 15.0-metre antennas (see also Gurvits et al., 2022). System parameters for the noise addition were taken from Roelofs et al. (2019) for the EHI system. The EHI+ system was assumed to have a system temperature close to the minimum allowable for a coherent heterodyne system and a wider observing bandwidth to increase the signal-to-noise ratio (Sec. 3.3). In addition, EHI+ antennas were considered to observe at higher frequencies (see Sec. 3.1). The technical specifications assumed in this paper for the EHI+ are regarded by the authors as very challenging and not yet demonstrated, but not impossible. Phase corruptions due to uncertainties in the orbital model have not been studied in this work; it was assumed that the baseline vector is known exactly thanks to the GNSS-assisted orbit determination, inter-satellite link ranging measurements, a local oscillator sharing setup, and other measures that are currently under investigation (Kudriashov et al., 2021b). Image reconstructions were therefore produced using complex visibilities; the extent to which closure products will be necessary depends on the eventual feasibility and performance of the connected-interferometer concept laid out in Kudriashov et al. (2021b) (see also Sec. 3.2). An investigation of the influence of the detailed system noise and calibration parameters on the reconstructed images presented here is left for future work.
## 3 Simulated observations
In this section, the generation of simulated observations is described.
Theoretical emission maps were produced from GRMHD simulations and used as
input. These source models were used to calculate complex visibilities for the
simulated observations together with the coverage of the $uv$-plane, produced
by the system setup outlined in Section 2. The calculated complex visibilities
were supplemented by data corruption effects (e.g. thermal noise) and then
reconstructed with a Regularized maximum likelihood (RML) algorithm. The
quality of the image reconstructions was quantified by several image quality
metrics.
Table 1: Parameters of the theoretical emission maps used.

| source | time interval (hours) | movie duration (months) | $\nu$ (GHz) | total flux (Jy) | FoV ($\mu$as) |
| --- | --- | --- | --- | --- | --- |
| M87* | 89 (10 $t_{\mathrm{G}}$) | 10 (800 $t_{\mathrm{G}}$) | 230 | 0.84 - 1.31 | 190.9 |
| M87* | 89 (10 $t_{\mathrm{G}}$) | 10 (800 $t_{\mathrm{G}}$) | 560 | 0.47 - 0.84 | 190.9 |
| M87* | 89 (10 $t_{\mathrm{G}}$) | 10 (800 $t_{\mathrm{G}}$) | 5000 | 0.10 - 0.18 | 190.9 |
| M87* | 89 (10 $t_{\mathrm{G}}$) | 10 (800 $t_{\mathrm{G}}$) | 43 | 0.60 - 0.82 | 763.7 |
| NGC 1052 | 89 | 10 | 43 | 7.0 - 7.6 | $9.6\times 10^{3}$ |
| M87* | 8.9 (1 $t_{\mathrm{G}}$) | 2.5 (200 $t_{\mathrm{G}}$) | 230 | 0.93 - 1.04 | 190.9 |
| M87* | 8.9 (1 $t_{\mathrm{G}}$) | 2.5 (200 $t_{\mathrm{G}}$) | 560 | 0.48 - 0.56 | 190.9 |
| M87* | 8.9 (1 $t_{\mathrm{G}}$) | 2.5 (200 $t_{\mathrm{G}}$) | 5000 | 0.10 - 0.12 | 190.9 |
| M87* | 8.9 (1 $t_{\mathrm{G}}$) | 2.5 (200 $t_{\mathrm{G}}$) | 43 | 0.69 - 0.72 | 763.7 |
| NGC 1052 | 8.9 | 2.5 | 43 | 7.39 - 7.46 | $9.6\times 10^{3}$ |

Note: The duration of the movies was calculated as the time interval between frames multiplied by the number of frames in the set. For the $uv$-coverage calculation, the same time intervals between frames were used for NGC 1052 as for M87*, since the black hole mass was not properly scaled for the NGC 1052 models when changing the angular size of the source.
### 3.1 Theoretical emission maps
Modelling of accretion flows around black holes is typically done by performing GRMHD simulations (see Porth et al., 2019, and references therein). The majority of these simulations only solve for the dynamically important protons, although more recent works also include information on the electron population, either by including electron thermodynamics in GRMHD (Ryan et al., 2018; Chael et al., 2019b) or by using Particle-in-Cell simulations instead (Parfrey et al., 2019; Crinquand et al., 2021; Bransgrove et al., 2021). The GRMHD models used in this work do not contain information on the electron population and therefore use a parametrization for their properties, such as the shape of the distribution function, in the subsequent ray tracing. In this work, we used the $\kappa$-jet model of M87* from Davelaar et al. (2019), first developed for Sgr A* in Davelaar et al. (2018). The $\kappa$-jet model for M87* recovers the observed spectral energy distribution from radio to near-infrared and the jet core-shift relation, and shows consistent image morphologies at 230 GHz. The model assumes that the electron distribution function is a $\kappa$-distribution function, which is a combination of a thermal core and a power law. The slope of the power law is set by a parametrization from Ball et al. (2018), who studied trans-relativistic reconnection. The parametrization effectively adds a power-law population in the jet sheath, while electrons in the disk are in a thermal distribution.
The dynamics of the accretion flow onto the black hole were simulated using
the Black Hole Accretion Code (BHAC, Porth et al., 2017; Olivares et al.,
2019), which solves the GRMHD equations. The performed simulation assumed a standard and normal evolution (SANE) accretion disk; the dimensionless black hole spin parameter was set to $a_{*}=15/16$. The general relativistic ray-tracing code RAPTOR (Bronzwaer et al., 2018, 2020) was used to generate
synthetic synchrotron maps (for further details, see Davelaar et al., 2019).
The resulting collections of synthetic synchrotron maps of the jet-launching
region in M87* were used in this paper as input models to simulate
observations. These maps were calculated at an inclination of $i=160^{\circ}$,
which ensured that the orientation of the emitting region corresponds to the
results of the EHT observations (Event Horizon Telescope Collaboration et al.,
2019a, e). The collection of synthetic synchrotron maps represents a black
hole environment with a certain time interval between frames, defined by the
gravitational timescale of a black hole $t_{\mathrm{G}}$. In this work, the
models had 1 and 10 $t_{\mathrm{G}}$ (8.9 and 89 hours, respectively)
intervals between frames.
The analysis for the EHI system was carried out for three frequencies, namely
43 GHz, which is a standard frequency for many VLBI observations, 230 GHz,
which is the operating frequency of the EHT, and 560 GHz, which is an expected
operating frequency of the EHI. 560 GHz was selected, following one of the
secondary EHI science goals of imaging the water line in protoplanetary disks.
If future technical studies find that ground telescope support will be a
system requirement, this observing frequency may be adjusted to, for example,
690 GHz without large consequences for the observable black hole shadow
features.
In the case of the EHI+ system, the analysis was performed for two
frequencies, which are 560 GHz and 5 THz, to test the imaging limitations of
the EHI concept. The frequency of a few THz was selected since this
additionally increases the angular resolution and, therefore, the number of
sources for which a black hole shadow is resolvable, for instance, one in
Centaurus A (Janssen et al., 2021). It should be noted that 5 THz corresponds
to a wavelength of 60 microns, which translates to very stringent requirements
for the phase calibration and antenna surface accuracies of the EHI+ system.
To get models at 5 THz, synthetic synchrotron maps initially calculated at 560
GHz were scaled to the flux at 5 THz according to the flux density spectrum.
The flux density spectrum in the $\kappa$-jet model is described by a power law $F_{\nu}\propto\nu^{\alpha}$ with index $\alpha\approx-0.7$ at high frequencies ($\nu>230$ GHz), as reported in Davelaar et al. (2019).
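As a quick check of this scaling, the snippet below applies the quoted power law to the 560 GHz maps; with $\alpha=-0.7$ the factor is $\approx 0.22$, which indeed maps the 0.47-0.84 Jy fluxes at 560 GHz to the 0.10-0.18 Jy range listed for 5 THz in Table 1. The function name and array input are illustrative only.

```python
import numpy as np

ALPHA = -0.7  # spectral index of the kappa-jet model above 230 GHz

def scale_map_to_5thz(map_560ghz):
    """Scale a 560 GHz emission map to 5 THz via F_nu ~ nu^alpha."""
    return np.asarray(map_560ghz) * (5000.0 / 560.0) ** ALPHA

print(scale_map_to_5thz(np.array([0.47, 0.84])))  # -> ~[0.10, 0.18] Jy
```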
Another potential secondary EHI science goal is imaging the structural variations of the extended jets of black holes at 43 GHz. Many active galactic nuclei (AGNs) are located further away or have smaller black hole masses than M87*; thus, the shadows of these black holes cannot be resolved even with the EHI. Nevertheless, they are suitable for imaging relativistic jets on larger scales. The considered M87* model provides information about the jet on scales much shorter than in the observations. Hence, additional synthetic synchrotron maps with a source inclination of $i=90^{\circ}$ were scaled so that the field of view and the total flux correspond to the parameters of the jet of another AGN. We note that this scaling is not physical because the black hole mass, and hence the variability timescales, are not properly scaled when changing the angular size of the source. Nevertheless, these coarse maps allow one to gain insight into the ability to resolve AGN jet structures with the EHI. As a generic example of an AGN jet, NGC 1052 was chosen (Kadler et al., 2004; Ros & Kadler, 2008; Baczko et al., 2019; Nakahara et al., 2019; Gurvits et al., 2021). Table 1 summarizes the parameters of the collections of synthetic synchrotron maps used in this work.
### 3.2 Coverage of the $(u,v)$ plane
Calculation of the complex visibilities was performed with the `eht-imaging`
software (Chael et al., 2018, 2019a). This requires, besides the source model,
a $uv$-coverage as input. As described in Section 2, we fixed the satellite
orbits throughout the observations. The obtained $uv$-data was split in the
time domain into snapshots of a duration corresponding to the time interval
between frames in the theoretical emission maps, namely 1 and 10
$t_{\mathrm{G}}$ (8.9 and 89 hours). Therefore, each part of the $uv$-coverage was paired with the corresponding frame of the source model when calculating the complex visibilities.
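A minimal sketch of this splitting step is given below, assuming plain time-ordered arrays of $uv$ samples rather than the actual eht-imaging data structures; the array names are hypothetical.

```python
import numpy as np

def split_into_snapshots(t_obs_s, u, v, snapshot_s=8.9 * 3600.0):
    """Split time-ordered uv samples into consecutive fixed-duration
    snapshots (1 t_G = 8.9 h by default), so each snapshot can be paired
    with the matching frame of the source model."""
    t_obs_s, u, v = map(np.asarray, (t_obs_s, u, v))
    frame = ((t_obs_s - t_obs_s.min()) // snapshot_s).astype(int)
    return [(u[frame == k], v[frame == k]) for k in range(frame.max() + 1)]
```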
Following Roelofs et al. (2019), we set an integration time per measurement
that is $uv$-distance-dependent,
$t_{\mathrm{int}}=\frac{P}{4\pi D_{\lambda}\Theta},$ (1)
where $P$ is the orbital period of the innermost satellite, $D_{\lambda}$ is
the length of the corresponding baseline, and $\Theta$ is the field of view
(FoV) of the EHI system. This keeps $t_{\mathrm{int}}$ within the $uv$-smearing limit, so that corruption of the reconstructed image due to the displacement of the $uv$-vector during an integration time is avoided (Thompson et al., 2017; Palumbo et al., 2019). For the shortest baselines, the integration time was chosen so that the $uv$-arcs are limited to 10 degrees. The FoV of the EHI system was chosen to correspond to the FoV provided by the model.
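The sketch below evaluates Eq. (1) with the additional short-baseline cap. Reading the 10-degree arc limit as a fraction of the innermost orbital period is our interpretation, and the numerical inputs (the orbital period, a longest baseline of roughly two orbital radii) are approximations chosen to reproduce the Table 2 values.

```python
import numpy as np

def integration_time(P_s, D_lambda, theta_rad, max_arc_deg=10.0):
    """uv-distance-dependent integration time t_int = P / (4 pi D_lambda Theta),
    capped so that short-baseline uv-arcs stay below ~10 degrees of orbit."""
    t_smear = P_s / (4.0 * np.pi * D_lambda * theta_rad)
    t_arc = P_s * max_arc_deg / 360.0
    return min(t_smear, t_arc)

P = 16278.0                   # approximate innermost orbital period, s
theta = 190.9e-6 / 206265.0   # 190.9 uas FoV in radians
lam = 299792458.0 / 230e9     # wavelength at 230 GHz, m
D_lam = 2 * 13.913e6 / lam    # ~longest baseline (~two orbit radii), in wavelengths
print(integration_time(P, D_lam, theta))  # ~66 s at the spiral edge
```

These two limits correspond to $t_{\mathrm{int,edge}}\approx 66$ s and $t_{\mathrm{int,centre}}=452.2$ s for M87* at 230 GHz in Table 2.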
The time it takes to reach the longest baselines (Figure 1) and the spiral density depend on the orbital separation. Hence, after splitting into snapshots, the described $uv$-coverage exhibits three distinct types of point distribution on the $uv$-plane for each pair of satellites. Figure 2 illustrates these possibilities with an example of several snapshots for three different orbital separations. The first option is a ‘spiral-like’ distribution, in which the points are distributed sparsely but homogeneously in the shape of a part of a spiral (e.g. Figure 2, middle right panel). The second option is a ‘ring-shaped’ distribution, in which the points are packed densely but grouped into a narrow ring (e.g. Figure 2, middle left panel). The third option is intermediate between the previous two: a part of a spiral forms a ring of medium width (Figure 2, bottom middle panel). With a small orbital separation, a spiral-like distribution is present only in the first snapshot (Figure 2, top left panel). In the other snapshots, all three pairs of satellites demonstrate a ring-shaped distribution of points on the $uv$-plane (Figure 2, middle left and bottom left panels). With an increasing orbital separation, the $uv$-coverage from each pair of satellites expands through an intermediate phase to a spiral-like distribution (Figure 2, bottom panels).
For small orbital separations, the coverage of the $uv$-plane formed by the whole system consists, in most snapshots, of one, two, or three concentric narrow rings, depending on how many pairs of satellites are blocked by the Earth. The resulting $uv$-coverage therefore provides information only on a very limited range of baselines in each snapshot. When the orbital separation increases, the range of baselines represented in each snapshot also grows due to wider rings, providing more homogeneous coverage. As a trade-off, the shortest baselines become longer and the coverage becomes less dense. Hence, optimizing the movie reconstruction quality requires finding a balance between density and isotropy of the $uv$-coverage in each snapshot, taking the source evolution timescales into account. Nevertheless, with a suitable orbital separation, sufficient coverage of the $uv$-plane can be obtained without additional changes in the satellite orbits during the observations.
Similar to the time it takes to reach the longest baselines, the time it takes until the temporarily blocked satellites catch up with the other satellites, restoring the ISL once its path is no longer occulted by the Earth, is inversely proportional to the orbital separation. Therefore, an increase of the orbital separation increases the fraction of time with co-visibility of all three satellites, which is necessary for the calculation of closure phases (i.e. the phase of the bispectrum, the sum of visibility phases on a closed triangle of baselines). For example, with an orbital separation of 50 km, only $35\%$ of the frames of 10 $t_{\mathrm{G}}$ duration are suitable for reconstructions with amplitudes and closure phases instead of complex visibilities over the investigated observational period. Separations of 100 and 200 km provide $45\%$ suitable frames, while 300 and 400 km orbital separations yield $75\%$ and $90\%$, respectively. Since closure phases are immune to station-based phase errors, and the bispectrum imposes less strict requirements on the phase stability of the interferometer, the use of these robust quantities relaxes the technical system requirements for the EHI. Therefore, the dependence of the possibility to calculate closure phases on the orbital separation should be considered in the design of the system.
Table 2: System parameters and resulting noise.

| | EHI system | EHI+ system |
| --- | --- | --- |
| source | M87*, NGC 1052 | M87* |
| $\nu$ (GHz) | 230, 560, 43 | 560, 5000 |
| $D$ (m) | 4.0 | 15.0 |
| $T_{\mathrm{sys}}$ (K) | 150 | 50 (560 GHz), 300 (5 THz) |
| $\eta_{\mathrm{ap}}$ | 0.58 | 0.58 |
| $\eta_{\mathrm{cor}}$ | 0.97 | 0.97 |
| $\eta_{\mathrm{clock}}$ | 0.87 | 0.87 |
| $\Delta\nu$ (GHz) | 5 | 25 (560 GHz), 450 (5 THz) |
| $t_{\mathrm{int,centre}}$ (s) | 452.2 | 452.2 |
| $t_{\mathrm{int,edge}}$ (s) | 65.67 (230 GHz), 27.12 (560 GHz), 87.82 (43 GHz, M87*), 6.99 (43 GHz, NGC 1052) | 27.12 (560 GHz), 3.02 (5 THz) |
| $\mathrm{SEFD}$ (Jy) | $6.7\times 10^{4}$ | $1.6\times 10^{3}$ |
| $\sigma_{\mathrm{centre}}$ (Jy) | 0.036 | 0.00038 |
| $\sigma_{\mathrm{edge}}$ (Jy) | 0.094 (230 GHz), 0.15 (560 GHz), 0.082 (43 GHz, M87*), 0.29 (43 GHz, NGC 1052) | 0.0016 (560 GHz), 0.0047 (5 THz) |

Note: The $\sigma$-values are calculated with Equations 2 and 3 for 30 km orbital separation at the centre (long integration time) and edge (short integration time) of the $uv$-spiral. The integration time at the edge of the $uv$-spiral, $t_{\mathrm{int,edge}}$, differs at 43 GHz between NGC 1052 and M87* due to their different fields of view (FoVs).
### 3.3 System noise calculation
Thermal noise in the system can be described as Gaussian noise in the
complex visibility plane with zero mean and standard deviation $\sigma$. In
radio interferometry, $\sigma$ can be calculated from the integration time
$t_{\mathrm{int}}$, the observing bandwidth $\Delta\nu$ and System Equivalent
Flux Densities of the antennas $\mathrm{SEFD}_{1,2}$ (Thompson et al., 2017).
For the two-bit sampling of the signal with four-level quantization,
$\sigma=\frac{1}{0.88}\sqrt{\frac{\mathrm{SEFD}_{1}\,\mathrm{SEFD}_{2}}{2\Delta\nu\,t_{\mathrm{int}}}}.$ (2)
The SEFD is defined in the standard way as
$\mathrm{SEFD}=\frac{2k_{\mathrm{B}}T_{\mathrm{sys}}}{\eta A},$ (3)
where $k_{\mathrm{B}}$ is Boltzmann’s constant, $T_{\mathrm{sys}}$ is the system temperature, $\eta$ is the efficiency and $A=\pi(D/2)^{2}$ is the area of an antenna with diameter $D$. The efficiency $\eta=\eta_{\mathrm{ap}}\eta_{\mathrm{cor}}\eta_{\mathrm{clock}}$ includes the efficiencies of the aperture, correlator, and clock, respectively. All antennas were assumed to have a constant efficiency, independent of the observing frequency. In practice, this efficiency can be realized at only one frequency, with lower efficiency at higher frequencies. The bandwidth of the ISL for all EHI configurations and for the EHI+ observing at 560 GHz was the sum of the bandwidths in two polarizations, hence equal to $2\Delta\nu$. For the EHI+ observing at 5 THz, the bandwidth was the sum of the bandwidths in two polarizations and two sidebands, so it equals $4\Delta\nu$. The parameters and resulting noise for the considered system setups are shown in Table 2. The noise increases towards the edge of the spiral due to the $uv$-distance-dependent integration times (see Sec. 3.2).
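A short sketch combining Equations 2 and 3 reproduces the EHI entries in Table 2; the physical constants are standard, and the 30 km-separation integration time is taken from the table.

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann's constant, J/K

def sefd(t_sys_k, eta, diameter_m):
    """System Equivalent Flux Density (Eq. 3), converted from SI to Jy."""
    area = np.pi * (diameter_m / 2.0) ** 2
    return 2.0 * K_B * t_sys_k / (eta * area) / 1e-26

def thermal_sigma(sefd1_jy, sefd2_jy, bw_hz, t_int_s):
    """Thermal noise per complex visibility (Eq. 2), two-bit sampling."""
    return np.sqrt(sefd1_jy * sefd2_jy / (2.0 * bw_hz * t_int_s)) / 0.88

eta = 0.58 * 0.97 * 0.87                   # aperture x correlator x clock
s = sefd(150.0, eta, 4.0)                  # ~6.7e4 Jy for the EHI
print(s, thermal_sigma(s, s, 5e9, 452.2))  # ~0.036 Jy at the spiral centre
```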
### 3.4 Image reconstruction
With a finite $uv$-sampling, image reconstruction requires imposing
assumptions due to the non-uniqueness of images reproducing the observed data.
The CLEAN algorithm (Högbom, 1974) represents the sky image in terms of a
finite number of point sources to constrain the image. Regularized maximum
likelihood (RML) algorithms (Gull & Daniell, 1978) minimize the weighted sum
of the goodness-of-fit test statistic $\chi^{2}$ and a set of regularizers
that favour certain image characteristics such as smoothness, sparsity, or
similarity to a prior (e.g. Chael et al., 2016, 2018). In this work, two
methods of applying the RML algorithm were used: snapshot imaging and
dynamical imaging, following Johnson et al. (2017).
Imaging of the source dynamics implies long-lasting observations, which can be
used later for the reconstruction of a movie. This movie should display
changes in the surrounding environment of a black hole. In snapshot imaging, a
set of images, making up the movie, is reconstructed from a corresponding set
of observations, and each reconstruction is performed independently. We
selected a combination of the Gull-Skilling entropy function (Skilling & Gull,
1991) and the Total Squared Variation regularizer (Kuramochi et al., 2018)
from several regularization functions that are implemented into the RML
algorithm. The selection of parameters for snapshot imaging is explained in
Appendix A.1.
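Schematically, snapshot imaging minimizes a weighted sum of the data term and these regularizers for each frame independently. The sketch below spells out one common form of that objective; the particular entropy expression and the placeholder hyperparameters are our assumptions, not the exact settings of Appendix A.1.

```python
import numpy as np

def tsv(img):
    """Total Squared Variation: sum of squared neighbour differences
    (Kuramochi et al. 2018)."""
    return np.sum(np.diff(img, axis=0) ** 2) + np.sum(np.diff(img, axis=1) ** 2)

def gull_skilling_entropy(img, prior):
    """One common form of the Gull-Skilling entropy relative to a prior
    image; zero when img equals the prior, negative otherwise."""
    p = np.clip(img, 1e-12, None)
    q = np.clip(prior, 1e-12, None)
    return np.sum(p - q - p * np.log(p / q))

def rml_objective(img, chi2, prior, alpha=1.0, beta_s=1.0, beta_tsv=1.0):
    """RML objective: data misfit plus regularizers; the entropy enters
    with a minus sign since it is maximized. chi2 is a callable data term."""
    return (alpha * chi2(img)
            - beta_s * gull_skilling_entropy(img, prior)
            + beta_tsv * tsv(img))
```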
Dynamical imaging assumes that the images in the set are temporally connected, each being a perturbation of the previous frame. This allows information that is lacking for a high-quality reconstruction to be obtained from previous and subsequent frames. In dynamical imaging, we used a combination of the Total Squared Variation regularizer, the Second Moment regularizer (Issaoun et al., 2019) and a generic regularizer $\mathcal{R}_{\Delta t}$, enforcing continuity from frame to frame in the reconstructed movie (Johnson et al., 2017). The selection of parameters for dynamical imaging is explained in Appendix A.2.
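In the same spirit, a quadratic sketch of the generic temporal regularizer $\mathcal{R}_{\Delta t}$ penalizes frame-to-frame differences; the published regularizer (Johnson et al. 2017) also supports other norms and optional blurring, which we omit here.

```python
import numpy as np

def r_delta_t(frames):
    """Temporal continuity penalty: sum of squared differences between
    consecutive frames of the movie (frames has shape [n_t, ny, nx])."""
    frames = np.asarray(frames)
    return np.sum((frames[1:] - frames[:-1]) ** 2)
```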
We reconstructed movies with three temporal resolutions: 1, 10, and 100 $t_{\mathrm{G}}$ (8.9 hours, 3.7 days, and 37 days). The complex visibilities were calculated frame by frame from the source model frames and the corresponding parts of the $uv$-coverage. The data files with complex visibilities were then merged according to the assumed temporal resolution of the reconstructed movie. Movies with 10 $t_{\mathrm{G}}$ and 100 $t_{\mathrm{G}}$ temporal resolutions were simulated based on source models with 1 $t_{\mathrm{G}}$ and 10 $t_{\mathrm{G}}$ time intervals between frames, respectively. Thus, the variability of the source throughout the observation was included in the reconstructed movies. For the comparison with the theoretical emission maps, the latter were averaged over the corresponding number of frames.
### 3.5 Image quality metrics
Quantitative comparison of the reconstructed image with the true model image
was performed via two metrics, namely the Normalized Root-Mean-Square Error
(NRMSE) and the Normalized cross-Correlation (NXCORR). The NRMSE evaluates
images based on pixel-to-pixel similarities and is given by
$\mathrm{NRMSE}=\sqrt{\frac{\sum_{i=1}^{n^{2}}(I_{i}-I_{i}^{\prime})^{2}}{\sum_{i=1}^{n^{2}}I_{i}^{2}}},$
(4)
where $I_{i}$ is the intensity of the $i$th pixel of the $n\times n$ pixels
model image and $I_{i}^{\prime}$ is that of the reconstructed image (Chael et
al., 2016, 2018). For the best fit, the NRMSE should be minimized since it is
zero when the reconstructed image is identical to the true image.
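A direct transcription of Eq. (4):

```python
import numpy as np

def nrmse(model_img, recon_img):
    """Normalized Root-Mean-Square Error (Eq. 4); zero for identical images."""
    m = np.asarray(model_img, dtype=float)
    r = np.asarray(recon_img, dtype=float)
    return np.sqrt(np.sum((m - r) ** 2) / np.sum(m ** 2))
```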
The NXCORR (a sliding inner product of two normalized functions) is computed, via Fourier transforms, as the cross-correlation of the normalized intensity patterns of two images at different relative shifts (e.g. Event Horizon Telescope Collaboration et al., 2019a; Issaoun et al., 2019). Since the NXCORR compares the bulk of each intensity pattern, it is less sensitive to individual features in the reconstructed image. For a given relative shift $\boldsymbol{\delta}$,
$\mathrm{NXCORR}(\boldsymbol{\delta})=\left|\mathcal{F}^{-1}\left\{\mathcal{F}\left\{I_{\mathrm{norm}}^{*}(\boldsymbol{x})\right\}\boldsymbol{\cdot}\mathcal{F}\left\{I_{\mathrm{norm}}^{\prime}(\boldsymbol{x}+\boldsymbol{\delta})\right\}\right\}\right|,$ (5)
where $\mathcal{F}$ is the Fourier transform operator, $I_{\mathrm{norm}}$ is the normalized intensity pattern of the true image and $I_{\mathrm{norm}}^{\prime}$ is the same for the reconstructed image. The normalized intensity for each pixel $i$ in the image can be calculated as
$I_{\mathrm{norm},i}=\frac{I_{i}-\mu_{\mathrm{I}}}{\sigma_{\mathrm{I}}},$ (6)
where $\mu_{\mathrm{I}}$ and $\sigma_{\mathrm{I}}$ are the mean and standard deviation of the intensity distribution in the image. The shift, across the extent of the images, that results in the maximum cross-correlation is then used to output the final NXCORR value for the two images. Thus, the NXCORR is maximized for the best fit.
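The FFT-based evaluation of Eqs. (5)-(6) can be sketched as follows; using circular (wrap-around) shifts and a per-pixel normalization so that identical images give a value of 1 is our simplification.

```python
import numpy as np

def nxcorr(model_img, recon_img):
    """Maximum normalized cross-correlation over all relative shifts,
    computed via the cross-correlation theorem (Eqs. 5-6)."""
    def norm(img):
        img = np.asarray(img, dtype=float)
        return (img - img.mean()) / img.std()
    a, b = norm(model_img), norm(recon_img)
    corr = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real
    return corr.max() / a.size  # equals 1 when the images are identical
```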
Despite the differences in comparison methods, the NRMSE and the NXCORR
demonstrate the same qualitative dependencies (see Appendix A). Therefore, for
comparison of movies obtained under different conditions, only NXCORR is
shown. To evaluate the quality of a set of images making up the movie, metrics
were calculated for each frame independently and then averaged over the
duration of the movie.
## 4 Simulation results
In this section, we describe the outcome of the simulations for which the
setup is described in the previous sections. We start with observations of the
M87* shadow with the EHI at 230 and 560 GHz. Further, we describe observations
with the EHI+ at 560 GHz and 5 THz. Finally, observations of AGN jets at 43
GHz are described.
### 4.1 Imaging of the M87* shadow
Simulations of M87* shadow observations were produced from models with 1 and 10 $t_{\mathrm{G}}$ time intervals between frames and then reconstructed with temporal resolutions of 10 and 100 $t_{\mathrm{G}}$, respectively. These simulations were reconstructed using both the snapshot and dynamical imaging methods. Additionally, observations with 1 $t_{\mathrm{G}}$ temporal resolution were reconstructed using dynamical imaging, except for the observations at 5 THz, owing to the computational cost at this frequency. In this subsection, we compare the quality of the movie reconstructions and investigate the optimal orbital separation.
Figure 3: Effect of the reconstruction method on the reconstruction quality. From left to right: the theoretical emission map of M87* at 230 GHz (the model with 8.9 hours between frames), time-averaged over 3.7 days; frames of the movies simulated for the EHI with 30 km orbital separation, each lasting 3.7 days, reconstructed independently (snapshot imaging method) and using dynamical imaging (Johnson et al., 2017). The source is varying during the simulated observation. Colours indicate brightness/pixel in mJy (square root scale).
Figure 4: Effect of the temporal resolution on the reconstruction quality. From left to right: the theoretical emission map of M87* at 230 GHz (the model with 8.9 hours between frames), time-averaged over 3.7 days; frames of the movies simulated for the EHI with 30 km orbital separation, each lasting 3.7 and 37 days, reconstructed using snapshot imaging (to highlight the effect). The source is varying during the simulated observation. Colours indicate brightness/pixel in mJy (square root scale). Figure 5: Effect of the orbital separation on the reconstruction quality. From left to right: the theoretical emission map of M87* at 230 GHz (the model with 8.9 hours between frames), time-averaged over 3.7 days; frames of the movies simulated for the EHI with 30, 400, and 1000 km orbital separations, each lasting 3.7 days, reconstructed using snapshot imaging (to highlight the effect). The source is varying during the simulated observation. Colours indicate brightness/pixel in mJy (square root scale).
Figure 6: Quality of the M87* shadow movies obtained with different orbital
separations of the EHI system setup. The movie quality is shown with the
averaged normalized cross-correlation against the true image, or NXCORR, at
two frequencies: (1) 230 GHz, shown in the top panel; (2) 560 GHz, shown in
the bottom panel. Red, green and blue lines correspond to reconstructed movies
with temporal resolutions of 8.9 hours, 3.7 days and 37 days, respectively.
Dashed lines correspond to snapshot imaging; solid lines correspond to
dynamical imaging. The red semitransparent area indicates orbital separations
with the best quality of reconstructed movies, based on the quality of
individual frames in the movies. Figure 7: Reconstruction of a simulated M87* shadow observation with the EHI at 230 GHz for 400 km orbital separation. From left to right: the theoretical emission map at the middle of the 37-day period (the model with 3.7 days between frames); frames of the simulated movies, each lasting 37 days, reconstructed using the snapshot imaging and dynamical imaging methods. For the EHI observing at 230 GHz, these movies demonstrate the best quality, according to the NXCORR, among all reconstructions that image the source dynamics. The source is varying during the simulated observation. Colours indicate brightness/pixel in mJy (square root scale). The full reconstruction is available as an online movie. Figure 8: Same as Fig. 7, but for an observing frequency of 560 GHz. The full reconstruction is available as an online movie.
#### 4.1.1 The EHI: 230 and 560 GHz
In Figure 3, we illustrate the quality of individual frames of movies reconstructed with the snapshot and dynamical imaging methods. Snapshot imaging reconstructs each frame of the movie independently. Hence, the lack of baseline coverage in some snapshots makes it impossible to reconstruct the corresponding frames with satisfactory quality, especially for the short orbital separation used. At the same time, dynamical imaging draws information for the image reconstruction from other frames and, therefore, provides a visually noticeable enhancement of the image quality throughout the entire movie. Some improvement can be achieved with a lower temporal resolution (Figure 4). The longer duration of the movie frames corresponds to larger parts of the $uv$-spiral in each snapshot. Nevertheless, apart from a stronger violation of the static source assumption, a decrease in temporal resolution implies a loss of information about the source variability. An increase of the orbital separation leads to a more uniform $uv$-coverage and a simultaneous decrease in its density (discussed in Sec. 3.2). As demonstrated in Figure 5, the quality of frames in movies obtained with large orbital separations is significantly higher than with a small separation. In regions of high intensity, the accuracy and fidelity of the movies are similar for large orbital separations. However, some distortions and artefacts corresponding to the $uv$-coverage sparsity appear in regions with low intensity when the separation is too wide. The visually observable difference is confirmed by the image quality metrics. Since the discussed issues of the reconstruction quality depend on the uniformity and density of the $uv$-plane coverage, they are relevant for all considered frequencies.
Figure 6 shows the averaged quality of the movie reconstructions depending on the orbital separation for observations with the EHI system at 230 GHz in the top panel and 560 GHz in the bottom panel. As discussed earlier, small orbital separations provide coverage of the $uv$-plane per frame in a limited range of baselines. When the separation is wide enough to ensure a comparatively uniform $uv$-coverage per frame, the average quality of the movie plateaus. Beyond the best-quality range of separations, the averaged quality of the movies stays on a plateau owing to individual frames with high image quality, while the majority of frames demonstrate slightly reduced quality compared to movies within the best-quality range. The best results for all simulations displayed in Figure 6 are obtained for orbital separations between 200 and 500 km, based on the quality of individual frames in the movies, although the source variability can be reconstructed reasonably well for a wider range of separations. The NXCORR metric indicates that simulations of observations at 230 GHz with 100 $t_{\mathrm{G}}$ temporal resolution provide a better quality of movies than those with higher temporal resolutions reconstructed with the same method. It also shows a slight advantage of the snapshot imaging reconstruction method over dynamical imaging. Since the spatial frequency is proportional to the observing frequency for the same baselines, the $uv$-coverage is less dense at 560 GHz than at 230 GHz with the same system setup. Therefore, dynamical imaging provides a better quality of the reconstructed movies at this frequency for all tested temporal resolutions, with little difference in quality among them.
Figures 7 and 8 show several frames of the movies from the simulated observations at 230 GHz and 560 GHz, respectively, with 400 km orbital separation and 100 $t_{\mathrm{G}}$ temporal resolution. These movies demonstrate the best quality among all reconstructions that image the source dynamics at the given frequencies, according to the NXCORR metric. Although some temporal changes in the model brightness distribution are blurred out owing to the temporal resolution, the characteristic shape of the features matches the model across the corresponding observational period. The right part of the images is dominated by the emission from the jet, while structures in the left part of the images display the movement of the plasma in the accretion disk. Therefore, the demonstrated detailed imaging of the changes around the shadow and at the base of the jet provides high-quality information about the M87* dynamics. The total flux of M87* at 560 GHz is lower than at 230 GHz (Table 1), which makes the signal-to-noise ratio significantly lower and affects the quality of the images. However, the spatial resolution is higher and the environment closer to the event horizon can be probed. An improvement in the quality of the movies may be reached with a different selection of reconstruction parameters, which in this work were chosen based on simulations at 230 GHz (see Appendix A). Moreover, observations at 560 GHz can be improved significantly by increasing the system sensitivity (Sec. 4.1.2).
Figure 9: Same as Fig. 5, but for observations with the EHI+ at 560 GHz.
Figure 10: Same as Fig. 6, but for the EHI+ system setup observing at 560 GHz
(the top panel) and 5 THz (the bottom panel). Figure 11: Same as Fig. 8, but
for the EHI+. Simulated movies with frames lasting 8.9 hours are reconstructed using the dynamical imaging method; those with frames lasting 3.7 days are reconstructed using both the snapshot and dynamical imaging methods. The full
reconstruction is available as an online movie. Figure 12: Same as Fig. 7 and
8, but for the EHI+ with 200 km orbital separation, observing at 5 THz. The
full reconstruction is available as an online movie.
#### 4.1.2 The EHI+: 560 GHz and 5 THz
The main trends in the image quality identified for the EHI system and associated with the $uv$-coverage (Figures 3, 4 and 5) remain relevant for the EHI+ system. Moreover, Figure 9 shows that the reconstruction artefacts and distortions observed at the largest orbital separations become more pronounced with increasing observing frequency due to a general decrease in the $uv$-spiral density. At the same time, the effect of the lower flux density at higher frequencies is compensated by the lower noise in the EHI+ system.
Figure 10 shows the quality of the image reconstruction depending on the orbital separation for observations with the EHI+ system at 560 GHz in the top panel and 5 THz in the bottom panel. All simulations of observations with the EHI+ at 560 GHz yield a better quality of reconstructed movies than observations with the EHI, since the EHI+ data have a higher signal-to-noise ratio. Moreover, all displayed simulations demonstrate a similar quality regardless of the reconstruction method and the temporal resolution. This shows that, despite the issue of the per-frame $uv$-coverage discussed above, the system provides coverage that is dense and uniform enough for a very sharp and accurate source reconstruction. The best results for all displayed simulations of observations with the EHI+ at 560 GHz are obtained for orbital separations between 300 and 500 km, based on the quality of individual frames in the movies. A further increase of the observing frequency to 5 THz exacerbates the $uv$-coverage issue owing to the additional reduction of its density. Additional imaging difficulties are introduced by the further decrease in total flux (see Table 1). As demonstrated in Figure 10, the data received in 10 $t_{\mathrm{G}}$ (3.7 days) become insufficient to ensure a satisfactory quality of movies for imaging the source variability. The temporal resolution of 100 $t_{\mathrm{G}}$ (37 days) improves the image quality to some extent. In the case of observations at 5 THz, the best-quality range of separations lies between 60 and 400 km. The quality of movies obtained with orbital separations outside this range is insufficient for the detailed imaging of the changes in the M87* environment due to the unstable quality of frames in the movies. Nevertheless, a 5 THz observing frequency can provide a resolution of $\sim 0.5$ $\mu$as, which significantly enhances the imaging capabilities of the system.
Figure 11 shows several frames of the movies from the simulated observation at 560 GHz with 400 km orbital separation and temporal resolutions of 1 $t_{\mathrm{G}}$ and 10 $t_{\mathrm{G}}$. Although the variability is hard to notice even in the model on the 1 $t_{\mathrm{G}}$ timescale, the reconstructed movie reproduces the features of the source with high accuracy. For 10 $t_{\mathrm{G}}$ temporal resolution, the snapshot imaging reconstruction method gives slight artefacts, which mostly disappear with the dynamical imaging method. The temporal resolution of 100 $t_{\mathrm{G}}$ provides a lower quality of the movies since the reconstruction artefacts intensify. Among all simulations at 5 THz, observations with 200 km orbital separation and 100 $t_{\mathrm{G}}$ temporal resolution reached the best quality. As seen in Figure 12, both methods are able to reconstruct the main features of the source in regions of high intensity. In low-intensity regions, the movies demonstrate artefacts; however, dynamical imaging significantly reduces them. An additional improvement of the reconstruction quality may be reached with the dynamical imaging regularizer $\mathcal{R}_{\mathrm{flow}}$, which assumes that an image evolves according to a steady flow of the flux density over time (Johnson et al., 2017). In summary, all the reconstructed movies demonstrate that the EHI concept allows for observations of the M87* dynamics from space at frequencies up to 5 THz, with the signal-to-noise ratio as the main limiting parameter. Moreover, the angular resolution at 5 THz can provide exceptionally precise measurements of black hole parameters and remarkably deep tests of different theories of gravity, as well as an additional increase in the number of sources with resolvable shadows.
Figure 13: Quality of black hole jet movies obtained with different orbital
separations of the EHI system setup at 43 GHz. The movie quality is shown with
the averaged NXCORR, for two sources: (1) M87*, shown in the top panel; (2)
NGC 1052, shown in the bottom panel. Green and magenta lines correspond to
reconstructed movies with a 3.7-day temporal resolution; blue and cyan lines
correspond to reconstructed movies with a 37-day temporal resolution. The red
semitransparent area indicates orbital separations with the best quality of
reconstructed movies, based on the quality of individual frames in the movies.
Figure 14: Reconstruction of a simulated M87* jet observation with the EHI at 43 GHz for 60 km orbital separation. From left to right: the theoretical emission map at the middle of the 37-day period (the model with 3.7 days between frames); frames of the simulated movie, each lasting 37 days, reconstructed using the snapshot imaging method. For the EHI observing at 43 GHz, this movie demonstrates the best quality, according to the NXCORR, among all reconstructions that image the source dynamics. The source is varying during the simulated observation. Colours indicate brightness/pixel in mJy (square root scale). The full reconstruction is available as an online movie.
Figure 15: Same as Fig. 14, but for NGC 1052, observed with 200 km orbital separation. The full reconstruction is available as an online movie.
### 4.2 Imaging of an AGN jet
A possible secondary EHI science goal is imaging the variability in the extended jets of various AGNs at 43 GHz. In this subsection, we investigate the EHI imaging capability and the optimal orbital separation for this goal using M87* and NGC 1052 as examples. Simulations of imaging the M87* jet at 43 GHz were produced similarly to the simulations of the M87* shadow imaging at 230 and 560 GHz. Observations of models with 1 and 10 $t_{\mathrm{G}}$ time intervals between frames were reconstructed with temporal resolutions of 10 and 100 $t_{\mathrm{G}}$, respectively, considering the variability of the source during the observation. For the coarse models of NGC 1052 (see Sec. 3.1), simulations were produced analogously. Observations of models with 8.9 and 89 hours between frames were reconstructed with temporal resolutions of 3.7 and 37 days per frame, respectively. All simulations were reconstructed using the snapshot imaging method.
For the imaging of extended jets, the reconstruction quality becomes more sensitive to the $uv$-coverage issue (see Sec. 4.1). Figure 13 shows the quality of the reconstructed movies depending on the orbital separation for M87* in the top panel and NGC 1052 in the bottom panel. As seen from the M87* and NGC 1052 jet imaging simulations, a relatively high temporal resolution (3.7 days in this project) is insufficient for adequate observations. The $uv$-coverage obtained during this period lacks the required number of baselines, and only a few frames in a movie can be reconstructed with satisfactory quality. At the same time, movies with an approximately month-long (37 days) temporal resolution demonstrate significantly better quality. In the case of M87*, the best quality of the reconstructed movies corresponds to orbital separations in the range from 50 to 200 km, based on the quality of individual frames in the movies. For NGC 1052, high-quality movies are produced with orbital separations between 100 and 400 km. Orbital separations outside these ranges produce abundant reconstruction artefacts.
Figure 14 shows several frames of the movie reconstructed from simulated observations of M87* with 60 km orbital separation and 100 $t_{\mathrm{G}}$ temporal resolution. The spatial resolution at 43 GHz is significantly lower, but changes in the brightness distribution at the base of the extended jet are visible and can be used to reconstruct the jet dynamics. The noticeable deficit of observed intensity in the central region of the images could be confused with the shadow, but it corresponds to the spine of the jet directed away from the observer. There are limitations in the modelling of extended jets since actual sources demonstrate much more extended emission than our model. Thus, the signal-to-noise ratio is reduced in our simulations, which decreases the quality of the reconstructed movies. The best result for the simulated observation of the NGC 1052 jet is reached with 200 km orbital separation. Figure 15 shows several frames of the corresponding movie with clear changes in the jet shape and the brightness distribution along it from frame to frame. The changes are most noticeable in the brightest parts of the jet. For a more detailed study, physically scaled models are required in order to properly account for the timescales of the source variability.
Figure 16: Orbital separations and temporal resolutions providing the best
quality imaging of the source. The EHI system consists of three 4.0-metre
antennas with standard system parameters; the EHI+ system consists of three
15.0-metre antennas with more optimistic noise parameters. Simulated
observations at 230, 560 GHz, and 5 THz image the M87* shadow; simulated
observations at 43 GHz image M87* and NGC 1052 jets.
## 5 Conclusions and summary
In this paper, we have presented simulations of the EHI SVLBI system
consisting of three satellites in circular MEOs with slightly different radii
(Martin-Neira et al., 2017; Kudriashov et al., 2019; Roelofs et al., 2019;
Kudriashov et al., 2021b). The proposed accurate relative positioning of the
satellites and interchange of local oscillator signals allow for the use of
complex visibilities. The absence of atmospheric data corruptions allows for
imaging at frequencies significantly higher compared to ground-based
observations. In this work, we investigated the effects of different orbital
separations on the reconstructed image quality. The orbital separation can be
changed during the mission to the optimal separation for the currently
observed source and frequency.
The EHI setup provides a spiral-shaped sampling of the $uv$-plane and can perform high-fidelity imaging of the M87* shadow without additional changes in the satellite motion during observations. Nevertheless, dividing the $uv$-coverage into snapshots, as considered in this paper, requires a balance between density and homogeneity in each snapshot, which influences the selection of the orbital separation. Too small separations provide coverage of the $uv$-plane per frame in an overly narrow range of baselines, while too large ones produce an excessively sparse $uv$-coverage, which results in distortions and artefacts in the reconstructed movies. An increase in the observing frequency also increases the $uv$-coverage sparsity.
Using GRMHD simulations of M87* and model system parameters similar to those discussed in Roelofs et al. (2019), we performed simulated long-duration observations to assess the quality of the movies that can be expected. For the imaging of the structural variations of the M87* environment, an important potential EHI science goal, simulations were performed with temporal resolutions of 1, 10, and 100 $t_{\mathrm{G}}$ (8.9 hours, 3.7 days, and 37 days) at 230 and 560 GHz for the standard EHI system, and at 560 GHz and 5 THz for the EHI+ system with more optimistic noise parameters. In addition, simulations for the EHI were performed with temporal resolutions of 3.7 and 37 days at 43 GHz for another potential goal, imaging the structural variations of extended jets. The simulated reconstructed movies with temporal resolutions of 3.7 and 37 days included the source variability during each frame of the resulting movies.
Figure 16 summarizes the results of our simulations. High-fidelity imaging of the M87* shadow is possible at 230 and 560 GHz with orbital separations in the range from 200 to 500 km for the EHI. The best quality of the reconstructions has been obtained with observations with 400 km orbital separation and 100 $t_{\mathrm{G}}$ (37 days) temporal resolution at both frequencies. Since the resolvable variability of the source has an approximately monthly timescale, a 37-day temporal resolution is optimal for observations with the standard EHI system. Simulations of the M87* shadow imaging at 560 GHz with the EHI+ system demonstrate the highest accuracy and fidelity with orbital separations in the range from about 300 to 500 km and the best performance for 400 km separation. The signal-to-noise ratio of observations with the EHI+ system at 560 GHz allows for imaging the variability of the M87* environment with 1 $t_{\mathrm{G}}$ (8.9 hours) temporal resolution. Changes in the M87* environment are hardly resolvable on the gravitational timescale in the model itself; nevertheless, a decrease in the temporal resolution is not favourable since it produces artefacts in the reconstructed movies. In the case of observations with the EHI+ at 5 THz, only simulations with orbital separations in the range from 60 to 400 km and 100 $t_{\mathrm{G}}$ (37 days) temporal resolution demonstrate sufficient movie quality for the detailed imaging of the changes in the M87* environment. The best result at 5 THz, obtained with 200 km orbital separation, shows accurate imaging of the source's main features in high-intensity regions; however, the $uv$-coverage sparsity at such a high frequency leads to strong reconstruction artefacts in low-intensity regions. Nevertheless, the angular resolution at 5 THz allows for extraordinarily precise measurements.
The imaged changes in the brightness distribution around the shadow and at the base of the jet are expected to provide information sufficient to reconstruct the dynamics of the source. Such deep probes of the surrounding environment of M87* can allow for deeper tests of general relativity and alternative theories of gravity, since these theories make predictions for the appearance of the radio emission generated by material falling into the black hole. Moreover, more accurate models of the black hole environment can be tested. The emission at higher frequencies originates closer to the event horizon; however, the M87* total flux gradually decreases at frequencies above 230 GHz (Davelaar et al., 2019). The simulated observations demonstrate that reducing the system noise level noticeably improves the quality of the reconstructed movies and makes source changes accessible for imaging on shorter timescales and at higher frequencies. Therefore, the signal-to-noise ratio is the main parameter limiting the EHI in observations of the M87* shadow, which needs to be considered to optimize the efficiency and the design of the system in general.
The resolution at 43 GHz is not high enough to resolve the shadow of M87*, but it is sufficient for imaging changes in the brightness distribution at the base of the M87* jet. Orbital separations in the range from 50 to 200 km and 100 $t_{\mathrm{G}}$ (37 days) temporal resolution are expected to be the most favourable for observations, with the best result for 60 km separation. The EHI capability for imaging the jets of other sources was assessed using NGC 1052 as a generic example. The M87* models were scaled to match the field of view and the total flux. This scaling is not physical because the black hole mass, and hence the variability timescales, were not properly scaled when changing the angular size of the source; therefore, additional research is essential. Nevertheless, we have demonstrated that jet observations are demanding in terms of orbital separation and require a lower temporal resolution to obtain sufficient $uv$-coverage. Observations with a separation in the range from 100 to 400 km and a 37-day temporal resolution can capture changes in the jet shape and the brightness distribution along it. The best result has been achieved with 200 km orbital separation. Therefore, EHI observations at 43 GHz can provide sufficient information to reconstruct structural variations of the relativistic jets of M87* and other AGNs, for example, NGC 1052, 3C 273, 3C 84, M81* or Centaurus A. These observations can improve our understanding of the jet launching and collimation processes.
Apart from M87*, the EHI is capable of imaging Sgr A*, as discussed by Roelofs
et al. (2019). Moreover, shadows of some other sources can potentially be
resolved by the EHI, such as the shadows of supermassive black holes at the
centres of the Sombrero Galaxy M104 and the elliptical galaxy M84 (Johannsen et
al., 2012). The EHI+ system observing at 5 THz could potentially resolve
shadows of M81* (Johannsen et al., 2012) and Centaurus A (Janssen et al.,
2021). With all these capabilities, the EHI concept will be of great
astrophysical interest.
The implementation of the concept in an actual mission implies several technical challenges. The maximum possible orbit reconstruction accuracy depends on laser ranging, accelerometers, orbital modelling, fringe fitting, and other measures that are currently under investigation. Sending reduced data to the ground requires the development of on-board correlation and processing. Consequently, realistic observations may yield lower image quality owing to uncertainties not considered in this paper. Nevertheless, we have demonstrated the general behaviour of the temporal structure of the EHI coverage. Moreover, the signal-to-noise ratio is the primary parameter limiting the EHI imaging capabilities, as discussed above; therefore, the possibilities of reducing the system noise should also be investigated.
The technical system requirements for the EHI can be relaxed by using closure phases together with visibility amplitudes instead of complex visibilities. However, the closure phase calculation requires data from all three pairs of satellites for each frame of the reconstructed movies. Depending on the orbital separation, the partial sampling of the $uv$-plane considered in this paper includes information from all three pairs of satellites in different fractions of the frames over the investigated observational period. The fraction of suitable frames increases from $35\%$ for the 50 km orbital separation to $45\%$ for separations of 100 and 200 km, and to $75\%$ and $90\%$ for 300 and 400 km, respectively. Therefore, the dependence of the possibility to calculate closure phases on the orbital separation should be considered in the design of the system.
Additional consideration should also be given to the effect of the angle between the orbital plane and the line of sight on the imaging quality. Moreover, instead of a three-satellite system, a two-satellite system can be considered, as it is also one of the possible configurations. Another possibility is investigating a space-space-ground hybrid system that would provide fast baseline coverage for dynamical imaging of rapidly varying sources such as Sgr A* (Palumbo et al., 2019).
###### Acknowledgements.
This work is supported by the ERC Synergy Grant “BlackHoleCam: Imaging the
Event Horizon of Black Holes” (Grant 610058). The authors thank Manuel Martin-Neira, Volodymyr Kudriashov and Daniel Palumbo for their helpful comments and
discussions on this work. We are grateful to the anonymous referee for useful
and constructive comments. AS personally acknowledges Olesya Kuchay for her
unlimited support during the period of writing the manuscript and Jeremy
Tregloan-Reed for his invaluable advice. FR was supported by NSF grants
AST-1935980 and AST-2034306. This work was supported by the Black Hole
Initiative, which is funded by grants from the John Templeton Foundation
(Grant #62286) and the Gordon and Betty Moore Foundation (Grant GBMF-8273) -
although the opinions expressed in this work are those of the author(s) and do
not necessarily reflect the views of these Foundations. JD is supported by
NASA grant NNX17AL82G and a Joint Columbia/Flatiron Postdoctoral Fellowship.
Research at the Flatiron Institute is supported by the Simons Foundation. The
GRMHD simulations were performed on the Dutch National Supercomputing cluster
Cartesius and are funded by the NWO computing grant 16431.
## References
* Abuter et al. (2019) Abuter, R., Amorim, A., Bauböck, M., et al. 2019, A&A, 625, L10
* Andrianov et al. (2021) Andrianov, A. S., Baryshev, A. M., Falcke, H., et al. 2021, MNRAS, 500, 4866
* Arianespace (2018) Arianespace. 2018, Ariane 6 User’s Manual
* Bacchini et al. (2021) Bacchini, F., Mayerson, D. R., Ripperda, B., et al. 2021, Phys. Rev. Lett., 127
* Baczko et al. (2019) Baczko, A.-K., Schulz, R., Kadler, M., et al. 2019, A&A, 623, A27
* Ball et al. (2018) Ball, D., Sironi, L., & Özel, F. 2018, ApJ, 862, 80
* Bransgrove et al. (2021) Bransgrove, A., Ripperda, B., & Philippov, A. 2021, Phys. Rev. Lett., 127
* Broderick & Loeb (2009) Broderick, A. E. & Loeb, A. 2009, ApJ, 697, 1164–1179
* Bronzwaer et al. (2018) Bronzwaer, T., Davelaar, J., Younsi, Z., et al. 2018, A&A, 613, A2
* Bronzwaer & Falcke (2021) Bronzwaer, T. & Falcke, H. 2021, The Nature of Black Hole Shadows
* Bronzwaer et al. (2020) Bronzwaer, T., Younsi, Z., Davelaar, J., & Falcke, H. 2020, A&A, 641, A126
* Chael et al. (2019a) Chael, A., Bouman, K., Johnson, M., et al. 2019a, eht-imaging: v1.1.0: Imaging interferometric data with regularized maximum likelihood
* Chael et al. (2019b) Chael, A., Narayan, R., & Johnson, M. D. 2019b, MNRAS, 486, 2873–2895
* Chael et al. (2018) Chael, A. A., Johnson, M. D., Bouman, K. L., et al. 2018, ApJ, 857, 23
* Chael et al. (2016) Chael, A. A., Johnson, M. D., Narayan, R., et al. 2016, ApJ, 829, 11
* Chang et al. (2010) Chang, C. S., Ros, E., Kovalev, Y. Y., & Lister, M. L. 2010, A&A, 515, A38
* Chen et al. (2020) Chen, Y., Shu, J., Xue, X., Yuan, Q., & Zhao, Y. 2020, Phys. Rev. Lett., 124, 061102
* Crinquand et al. (2021) Crinquand, B., Cerutti, B., Dubus, G., Parfrey, K., & Philippov, A. 2021, A&A, 650, A163
* Davelaar et al. (2018) Davelaar, J., Mościbrodzka, M., Bronzwaer, T., & Falcke, H. 2018, A&A, 612, A34
* Davelaar et al. (2019) Davelaar, J., Olivares, H., Porth, O., et al. 2019, A&A, 632, A2
* Dexter et al. (2012) Dexter, J., McKinney, J. C., & Agol, E. 2012, MNRAS, 421, 1517–1528
* Doeleman et al. (2019) Doeleman, S., Blackburn, L., Dexter, J., et al. 2019, in Bulletin of the American Astronomical Society, Vol. 51, 256
* Doeleman et al. (2023) Doeleman, S. S., Barrett, J., Blackburn, L., et al. 2023, Galaxies, 11, 107
* Doeleman et al. (2008) Doeleman, S. S., Weintroub, J., Rogers, A. E. E., et al. 2008, Nature, 455, 78–80
* Event Horizon Telescope Collaboration et al. (2022a) Event Horizon Telescope Collaboration, Akiyama, K., Alberdi, A., et al. 2022a, ApJ, 930, L12
* Event Horizon Telescope Collaboration et al. (2022b) Event Horizon Telescope Collaboration, Akiyama, K., Alberdi, A., et al. 2022b, ApJ, 930, L13
* Event Horizon Telescope Collaboration et al. (2022c) Event Horizon Telescope Collaboration, Akiyama, K., Alberdi, A., et al. 2022c, ApJ, 930, L14
* Event Horizon Telescope Collaboration et al. (2022d) Event Horizon Telescope Collaboration, Akiyama, K., Alberdi, A., et al. 2022d, ApJ, 930, L15
* Event Horizon Telescope Collaboration et al. (2022e) Event Horizon Telescope Collaboration, Akiyama, K., Alberdi, A., et al. 2022e, ApJ, 930, L16
* Event Horizon Telescope Collaboration et al. (2022f) Event Horizon Telescope Collaboration, Akiyama, K., Alberdi, A., et al. 2022f, ApJ, 930, L17
* Event Horizon Telescope Collaboration et al. (2023) Event Horizon Telescope Collaboration, Akiyama, K., Alberdi, A., et al. 2023, ApJ, 957, L20
* Event Horizon Telescope Collaboration et al. (2019a) Event Horizon Telescope Collaboration, Akiyama, K., Alberdi, A., et al. 2019a, ApJ, 875, L1
* Event Horizon Telescope Collaboration et al. (2019b) Event Horizon Telescope Collaboration, Akiyama, K., Alberdi, A., et al. 2019b, ApJ, 875, L2
* Event Horizon Telescope Collaboration et al. (2019c) Event Horizon Telescope Collaboration, Akiyama, K., Alberdi, A., et al. 2019c, ApJ, 875, L3
* Event Horizon Telescope Collaboration et al. (2019d) Event Horizon Telescope Collaboration, Akiyama, K., Alberdi, A., et al. 2019d, ApJ, 875, L4
* Event Horizon Telescope Collaboration et al. (2019e) Event Horizon Telescope Collaboration, Akiyama, K., Alberdi, A., et al. 2019e, ApJ, 875, L5
* Event Horizon Telescope Collaboration et al. (2019f) Event Horizon Telescope Collaboration, Akiyama, K., Alberdi, A., et al. 2019f, ApJ, 875, L6
* Event Horizon Telescope Collaboration et al. (2021a) Event Horizon Telescope Collaboration, Akiyama, K., Algaba, J. C., et al. 2021a, ApJ, 910, L12
* Event Horizon Telescope Collaboration et al. (2021b) Event Horizon Telescope Collaboration, Akiyama, K., Algaba, J. C., et al. 2021b, ApJ, 910, L13
* Falcke et al. (2000) Falcke, H., Melia, F., & Agol, E. 2000, ApJ, 528, L13–L16
* Fish et al. (2019) Fish, V. L., Shea, M., & Akiyama, K. 2019, Adv. Space Res.
* Fromm et al. (2021) Fromm, C. M., Mizuno, Y., Younsi, Z., et al. 2021, A&A, 649, A116
* Gebhardt et al. (2011) Gebhardt, K., Adams, J., Richstone, D., et al. 2011, ApJ, 729, 119
* Gull & Daniell (1978) Gull, S. F. & Daniell, G. J. 1978, Nature, 272, 686
* Gurvits et al. (2022) Gurvits, L. I., Paragi, Z., Amils, R. I., et al. 2022, Acta Astronautica, 196, 314
* Gurvits et al. (2021) Gurvits, L. I., Paragi, Z., Casasola, V., et al. 2021, Experimental Astronomy
* Hada et al. (2016) Hada, K., Kino, M., Doi, A., et al. 2016, ApJ, 817, 131
* Hirabayashi et al. (2000) Hirabayashi, H., Hirosawa, H., Kobayashi, H., et al. 2000, PASJ, 52, 955
* Hirabayashi et al. (1998) Hirabayashi, H., Hirosawa, H., Kobayashi, H., et al. 1998, Science, 281, 1825
* Högbom (1974) Högbom, J. A. 1974, A&AS, 15, 417
* Issaoun et al. (2019) Issaoun, S., Johnson, M. D., Blackburn, L., et al. 2019, A&A, 629, A32
* Janssen et al. (2021) Janssen, M., Falcke, H., Kadler, M., et al. 2021, Nat. Astron.
* Johannsen et al. (2012) Johannsen, T., Psaltis, D., Gillessen, S., et al. 2012, ApJ, 758, 30
* Johnson et al. (2023) Johnson, M. D., Akiyama, K., Blackburn, L., et al. 2023, Galaxies, 11, 61
* Johnson et al. (2017) Johnson, M. D., Bouman, K. L., Blackburn, L., et al. 2017, ApJ, 850, 172
* Johnson et al. (2015) Johnson, M. D., Fish, V. L., Doeleman, S. S., et al. 2015, Science, 350, 1242–1245
* Johnson et al. (2020) Johnson, M. D., Lupsasca, A., Strominger, A., et al. 2020, Science Advances, 6, eaaz1310
* Kadler et al. (2004) Kadler, M., Kerp, J., Ros, E., et al. 2004, A&A, 420, 467–474
* Kardashev et al. (2013) Kardashev, N. S., Khartov, V. V., Abramov, V. V., et al. 2013, Astron. Rep., 57, 153–194
* Kravchenko et al. (2020) Kravchenko, E., Giroletti, M., Hada, K., et al. 2020, A&A, 637, L6
* Kudriashov et al. (2019) Kudriashov, V., Martin-Neira, M., Barat, I., et al. 2019, CJSS, 39, 250
* Kudriashov et al. (2021a) Kudriashov, V., Martin-Neira, M., Lia, E., et al. 2021a, Journal of Astronomical Instrumentation, 10, 2150010
* Kudriashov et al. (2021b) Kudriashov, V., Martin-Neira, M., Roelofs, F., et al. 2021b, CJSS, 41, 211
* Kuramochi et al. (2018) Kuramochi, K., Akiyama, K., Ikeda, S., et al. 2018, ApJ, 858, 56
* Kurczynski et al. (2022) Kurczynski, P., Johnson, M. D., Doeleman, S. S., et al. 2022, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 12180, Space Telescopes and Instrumentation 2022: Optical, Infrared, and Millimeter Wave, ed. L. E. Coyle, S. Matsuura, & M. D. Perrin, 121800M
* Levy et al. (1986) Levy, G. S., Linfield, R. P., Ulvestad, J. S., et al. 1986, Science, 234, 187
* Lu et al. (2018) Lu, R.-S., Krichbaum, T. P., Roy, A. L., et al. 2018, ApJ, 859, 60
* Marshall et al. (2002) Marshall, H. L., Miller, B. P., Davis, D. S., et al. 2002, ApJ, 564, 683–687
* Martin-Neira et al. (2017) Martin-Neira, M., Kudriashov, V., Barat, I., Duesmann, B., & Daganzo, E. 2017, in Proceedings of the 5th Workshop on Advanced RF Sensors and Remote Sensing Instruments (ARSI’17), ESTEC (the Netherlands), 12-14 September 2017
* Martin-Neira et al. (2019) Martin-Neira, M., Kudriashov, V., Barat, I., Duesmann, B., & Daganzo, E. 2019, CJSS, 39, 544
* Mizuno et al. (2018) Mizuno, Y., Younsi, Z., Fromm, C. M., et al. 2018, Nature Astronomy, 2, 585–590
* Mościbrodzka et al. (2017) Mościbrodzka, M., Dexter, J., Davelaar, J., & Falcke, H. 2017, MNRAS, 468, 2214–2221
* Mościbrodzka et al. (2016) Mościbrodzka, M., Falcke, H., & Shiokawa, H. 2016, A&A, 586, A38
* Mościbrodzka et al. (2009) Mościbrodzka, M., Gammie, C. F., Dolence, J. C., Shiokawa, H., & Leung, P. K. 2009, ApJ, 706, 497
* Nakahara et al. (2019) Nakahara, S., Doi, A., Murata, Y., et al. 2019, AJ, 159, 14
* Narayan et al. (2003) Narayan, R., Igumenshchev, I. V., & Abramowicz, M. A. 2003, PASJ, 55, L69–L72
* Olivares et al. (2019) Olivares, H., Porth, O., Davelaar, J., et al. 2019, A&A, 629, A61
* Olivares et al. (2020) Olivares, H., Younsi, Z., Fromm, C. M., et al. 2020, MNRAS, 497, 521–535
* Palumbo et al. (2019) Palumbo, D. C. M., Doeleman, S. S., Johnson, M. D., Bouman, K. L., & Chael, A. A. 2019, ApJ, 881, 62
* Parfrey et al. (2019) Parfrey, K., Philippov, A., & Cerutti, B. 2019, Phys. Rev. Lett., 122
* Pesce et al. (2019) Pesce, D. W., Haworth, K., Melnick, G. J., et al. 2019, Extremely long baseline interferometry with Origins Space Telescope
* Porth et al. (2019) Porth, O., Chatterjee, K., Narayan, R., et al. 2019, ApJS, 243, 26
* Porth et al. (2017) Porth, O., Olivares, H., Mizuno, Y., et al. 2017, Computational Astrophysics and Cosmology, 4
* Psaltis et al. (2020) Psaltis, D., Medeiros, L., Christian, P., et al. 2020, Phys. Rev. Lett., 125
* Psaltis et al. (2021) Psaltis, D., Talbot, C., Payne, E., & Mandel, I. 2021, Phys. Rev. D, 103
* Raymond et al. (2021) Raymond, A. W., Palumbo, D., Paine, S. N., et al. 2021, ApJS, 253, 5
* Roelofs et al. (2019) Roelofs, F., Falcke, H., Brinkerink, C., et al. 2019, A&A, 625, A124
* Roelofs et al. (2021) Roelofs, F., Fromm, C. M., Mizuno, Y., et al. 2021, A&A, 650, A56
* Ros & Kadler (2008) Ros, E. & Kadler, M. 2008, J. Phys. Conf. Ser., 131, 012056
* Rudin et al. (1992) Rudin, L. I., Osher, S., & Fatemi, E. 1992, Phys. D: Nonlinear Phenom., 60, 259
* Ryan et al. (2018) Ryan, B. R., Ressler, S. M., Dolence, J. C., Gammie, C., & Quataert, E. 2018, ApJ, 864, 126
* Sargent et al. (1978) Sargent, W. L. W., Young, P. J., Boksenberg, A., et al. 1978, ApJ, 221, 731
* Skilling & Gull (1991) Skilling, J. & Gull, S. F. 1991, Lect. Notes-Monogr. Ser., 20, 341
* Sparks et al. (1996) Sparks, W. B., Biretta, J. A., & Macchetto, F. 1996, ApJ, 473, 254
* Thompson et al. (2017) Thompson, A. R., Moran, J. M., & Swenson, George W., J. 2017, Interferometry and Synthesis in Radio Astronomy, 3rd Edition
* Vagnozzi et al. (2022) Vagnozzi, S., Roy, R., Tsai, Y.-D., et al. 2022, Horizon-scale tests of gravity theories and fundamental physics from the Event Horizon Telescope image of Sagittarius A*
* Van der Gucht et al. (2020) Van der Gucht, J., Davelaar, J., Hendriks, L., et al. 2020, A&A, 636, A94
* Young et al. (1978) Young, P. J., Westphal, J. A., Kristian, J., Wilson, C. P., & Landauer, F. P. 1978, ApJ, 221, 721
## Appendix A Parameters of the image reconstruction
### A.1 Snapshot imaging
An RML algorithm minimizes the weighted sum of $\chi^{2}$ and a regularizer
(Gull & Daniell 1978). $\chi^{2}$ is the goodness-of-fit statistic that
compares the visibilities of the test image to the data. The regularizer
encodes prior information about the image, such as image smoothness or
sparsity. By construction, the RML algorithm therefore gives an image that has
no more structure than required to fit the data. For snapshot imaging, we
chose to use two regularization terms.
One of the regularizers implemented in the `eht-imaging` software is the
Gull-Skilling entropy function (Skilling & Gull 1991), a general form of the
entropy function derived from a Bayesian interpretation of the RML. In
combination with the Total Squared Variation regularizer, it produced the
visually best image reconstructions during the regularizer selection and was
therefore adopted in this work. The Total Squared Variation regularization
(Kuramochi et al. 2018) is a denoising term, obtained as a modification of the
Total Variation regularization (Rudin et al. 1992). This modification favours
images with smaller variations between adjacent pixels, which leads to the
smoothing of edgelike features. Apart from better consistency with
observational data compared to post-processing smoothing, edge-smoothed images
represent the model images better in the case of diffuse astronomical objects.
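To make the snapshot objective concrete, the following sketch (illustrative
only; the actual implementation is the `eht-imaging` software, and sign
conventions for the entropy term vary between codes) assembles the weighted
RML objective from the two regularizers just described:

```python
import numpy as np

def gull_skilling_entropy(img, prior):
    # S = sum_i [I_i - P_i - I_i * log(I_i / P_i)]: the Gull-Skilling form,
    # maximal when the image equals the prior.
    img = np.clip(img, 1e-12, None)
    prior = np.clip(prior, 1e-12, None)
    return np.sum(img - prior - img * np.log(img / prior))

def total_squared_variation(img):
    # Sum of squared differences between adjacent pixels along both axes;
    # penalizing it smooths edgelike features.
    return np.sum(np.diff(img, axis=0) ** 2) + np.sum(np.diff(img, axis=1) ** 2)

def rml_objective(img, prior, chi2, a_data=1000.0, a_gs=1000.0, a_tsv=200.0):
    # chi2 is a caller-supplied goodness-of-fit term comparing the test
    # image's visibilities to the data; default weights follow this appendix.
    return (a_data * chi2(img)
            - a_gs * gull_skilling_entropy(img, prior)
            + a_tsv * total_squared_variation(img))
```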
The weighting parameters for the data term and both regularization terms were
selected by simulating an observation of the time-averaged 10 $t_{\mathrm{G}}$
M87* model. The system setup used had a 50 km orbital separation and an
observing frequency of 230 GHz. First, the weighting parameters were varied
from 10 to $10^{4}$. Figure 17 shows the best-performing combinations of
weights according to our image quality metrics. The NRMSE performs a
pixel-to-pixel comparison, while the NXCORR compares the bulks of intensity
patterns. Nevertheless, both metrics show similar dependencies. The best
performance according to the image quality metrics is achieved when the data
term weighting $\alpha_{data}$ is larger than the weighting for the
Gull-Skilling entropy $\alpha_{GS}$. However, reconstructed images are
noticeably better when these two terms are equally weighted (Figure 18). The
weighting parameter for the Total Squared Variation regularization
$\alpha_{TSV}$ has little effect on the quality of reconstruction. The metric
performance is better when it is significantly smaller than the other two
terms; once this condition is reached, further alteration of the Total Squared
Variation regularization weight provides only a vanishingly small variation in
the image quality metrics (Figure 19). Thus, the weighting parameters for the
data term $\alpha_{data}$ and the Gull-Skilling entropy function term
$\alpha_{GS}$ were both set to 1000, and the weighting parameter for the Total
Squared Variation regularization term $\alpha_{TSV}$ was set to 200. To
facilitate comparisons across different frequencies and orbital separations,
the weighting parameters for the data and regularization terms are identical
for all snapshot imaging reconstructions displayed in this paper.
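In code, the two metrics reduce to a few lines (a minimal NumPy sketch; the
production metrics additionally search over relative image shifts before
comparison):

```python
import numpy as np

def nrmse(img, truth):
    # Normalized root-mean-square error: pixel-to-pixel; lower is better.
    return np.sqrt(np.sum((img - truth) ** 2) / np.sum(truth ** 2))

def nxcorr(img, truth):
    # Normalized cross-correlation at zero shift: compares bulk intensity
    # patterns; values closer to 1 are better.
    a = (img - img.mean()) / img.std()
    b = (truth - truth.mean()) / truth.std()
    return float(np.mean(a * b))
```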
For both snapshot and dynamical imaging, the image reconstruction was
performed in several (typically 3-7) rounds to help the optimizer converge to
a single image structure. In the first round, a circular Gaussian with a FWHM
of 55 $\mu$as (700 $\mu$as for NGC 1052 jet simulations) and a total flux
equal to that of the corresponding model frame was used as the initial and
prior image. In the following imaging rounds, the result of the previous round
was blurred to the nominal array resolution and used as the new initial and
prior image.
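Schematically, the rounds proceed as follows (a sketch; `circular_gaussian`,
`run_rml`, and `blur_to_resolution` are hypothetical stand-ins for the
corresponding `eht-imaging` calls):

```python
def multi_round_imaging(obs, n_rounds=5, fwhm=55e-6, total_flux=1.0):
    # Round 1 starts from a circular Gaussian initial/prior image whose
    # total flux matches the corresponding model frame.
    init = prior = circular_gaussian(fwhm=fwhm, flux=total_flux)
    img = None
    for _ in range(n_rounds):
        img = run_rml(obs, init=init, prior=prior)        # one RML solve
        init = prior = blur_to_resolution(img, obs.beam)  # nominal resolution
    return img
```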
Figure 17: Quality of the reconstructions performed using the snapshot imaging
method with different weighting parameters. The image quality is measured in
two ways: (1) the normalized root-mean-square error against the true image, or
NRMSE, shown in the top panel; (2) the normalized cross-correlation against
the true image, or NXCORR, shown in the bottom panel. $\alpha_{data}$ is the
data term weighting parameter, $\alpha_{GS}$ is the Gull-Skilling entropy
function weighting parameter, $\alpha_{TSV}$ is the Total Squared Variation
regularization weighting parameter.
Figure 18: Reconstructions performed using the snapshot imaging method with
different weighting parameters. Time-averaged M87* observation simulated at
230 GHz, 50 km orbital separation. $a\\_data$ is the data term weighting
parameter $\alpha_{data}$, $a\\_GS$ is the Gull-Skilling entropy function
weighting parameter $\alpha_{GS}$, $a\\_TSV$ is the Total Squared Variation
regularization weighting parameter $\alpha_{TSV}$.
Figure 19: Same as Fig. 17, but with an enlarged scale.
Figure 20: Quality of the M87* movies reconstructed using the dynamical
imaging method with different values of the dynamical regularizer parameter
$\sigma_{\Delta t}$. The image quality is measured in two ways: (1) the NRMSE,
shown in the top panel; (2) the NXCORR, shown in the bottom panel. Green and
blue lines correspond to reconstructed movies with temporal resolutions of 3.7
days and 37 days, respectively. $\alpha_{data}$ is the data term weighting
parameter, $\alpha_{TSV}$ is the Total Squared Variation regularization
weighting parameter, $\alpha_{SM}$ is the Second Moment regularization
weighting parameter, $\alpha_{\Delta t}$ is the dynamical regularizer
$\mathcal{R}_{\Delta t}$ weighting parameter.
Figure 21: Quality of the M87* movies reconstructed using the dynamical
imaging method with different values of the weighting parameter of the
dynamical regularizer $\mathcal{R}_{\Delta t}$. The image quality is measured
in two ways: (1) the NRMSE, shown in the top panel; (2) the NXCORR, shown in
the bottom panel. $\alpha_{data}$ is the data term weighting parameter,
$\alpha_{SM}$ is the Second Moment regularization weighting parameter,
$\alpha_{TSV}$ is the Total Squared Variation regularization weighting
parameter.
Figure 22: Quality of the M87* movies reconstructed using the dynamical
imaging method with different weighting parameters of static regularizers. The
image quality is measured in two ways: (1) the NRMSE, shown in the top panel;
(2) the NXCORR, shown in the bottom panel. $\alpha_{data}$ is the data term
weighting parameter, $\alpha_{\Delta t}$ is the dynamical regularizer
$\mathcal{R}_{\Delta t}$ weighting parameter, $\alpha_{SM}$ is the Second
Moment regularization weighting parameter, $\alpha_{TSV}$ is the Total Squared
Variation regularization weighting parameter.
Figure 23: Same as Fig. 22, but with enlarged scale.
### A.2 Dynamical imaging
Dynamical imaging assumes that the images in the set are connected and draws
information for the reconstruction of each frame from the other frames
(Johnson et al. 2017). This allows missing data to be partially inherited from
the rest of the movie by interpolation. In this work, a combination of the
dynamical regularization with two static regularization terms was used.
A generic regularizer $\mathcal{R}_{\Delta t}$ from Johnson et al. (2017),
used in this paper, assumes that the images in the set are temporally
connected, each being a small perturbation of the previous frame. This
enforces the continuity of features from frame to frame in the reconstructed
movie. For the static regularization terms, we chose the Second Moment
regularizer (Issaoun et al. 2019) and the Total Squared Variation regularizer
(described in Appendix A.1). The Second Moment regularization constrains the
spread of flux density in reconstructed images to a motivated region defined
by the user. In the case of the EHI system, although single frames can have
very limited coverage, dense $uv$-coverage is produced over the whole range of
baselines when the data are time-averaged. Hence, the source parameters
required for the Second Moment regularization can be obtained from the same
observational data by time-averaged image reconstruction. Incorporating the
Second Moment regularizer into the dynamical imaging method can thus
substantially improve the completeness, accuracy, and fidelity of the
reconstructed movies.
The dynamical regularizer $\mathcal{R}_{\Delta t}$ computes the similarity
between frames of the movie by calculating the summed difference of the
reconstructed flux density of pixels among all adjacent frames after blurring
with a circular Gaussian with standard deviation $\sigma_{\Delta t}$ (Johnson
et al. 2017); a sketch of this computation is given below. The parameter
$\sigma_{\Delta t}$ was selected for each temporal resolution independently.
This parameter equals 0 for observations with 1 $t_{\mathrm{G}}$ temporal
resolution. The limit $\sigma_{\Delta t}\to 0$ is appropriate when the
expected motion between consecutive frames is smaller than the finest
resolution of reconstructed features, comparable to the nominal array spatial
resolution (Johnson et al. 2017). To choose $\sigma_{\Delta t}$ for
observations with 10 $t_{\mathrm{G}}$ temporal resolution, half the number of
frames of the M87* model with a 10 $t_{\mathrm{G}}$ interval between frames
was used. For observations with 100 $t_{\mathrm{G}}$ temporal resolution, the
full model with a 10 $t_{\mathrm{G}}$ interval between frames was used,
considering the variability of the source during the observation. The system
setup used had a 350 km orbital separation and an observing frequency of 230
GHz. $\sigma_{\Delta t}$ was varied from 1 to 100 and from 1 to 1000 for
reconstructions with averaging over 10 and 100 $t_{\mathrm{G}}$ per frame,
respectively. According to image quality metrics averaged over the duration of
the reconstructed movies, $\sigma_{\Delta t}$ was set to 10 and 100,
respectively, in these two cases (Figure 20).
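A direct transcription of this frame-to-frame penalty (a sketch assuming
NumPy/SciPy; Johnson et al. 2017 define the distance up to normalization
conventions) reads:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dynamical_regularizer(frames, sigma_dt):
    # Blur every frame with a circular Gaussian of standard deviation
    # sigma_dt (here in pixels), then sum squared pixel differences over
    # all adjacent frame pairs.
    blurred = [gaussian_filter(f, sigma=sigma_dt) for f in frames]
    return sum(np.sum((b - a) ** 2) for a, b in zip(blurred[:-1], blurred[1:]))
```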
The weighting parameters for the dynamical imaging method were selected by
simulating an observation of half the number of frames of the M87* model with
a 1 $t_{\mathrm{G}}$ interval between frames. The system setup used had a
1000 km orbital separation, for the widest representation of baselines in each
of the frames, and the observing frequency was 230 GHz. The data term
weighting $\alpha_{data}$ was left at its default value of 10. The weighting
parameter for the dynamical regularization $\alpha_{\Delta t}$ was varied from
10 to $10^{4}$. As $\alpha_{\Delta t}\to 0$, dynamical imaging becomes
equivalent to snapshot imaging with the corresponding static regularizers,
while taking $\alpha_{\Delta t}\to\infty$ leads to the reconstruction of the
time-averaged image (Johnson et al. 2017). According to image quality metrics
averaged over the duration of the reconstructed movies, $\alpha_{\Delta t}$
was set to 500 (Figure 21). The weighting parameters for the Second Moment
regularizer $\alpha_{SM}$ and for the Total Squared Variation regularizer
$\alpha_{TSV}$ were varied between 10 and $10^{8}$, and between 1 and
$10^{3}$, respectively. Figure 22 shows image quality metrics averaged over
the duration of the reconstructed movies for different combinations of these
parameters. The dependencies demonstrated by the NRMSE and the NXCORR are
identical, similarly to the snapshot imaging method. The best reconstruction
quality corresponds to the smallest weight of the Total Squared Variation
regularization. The weighting of the Second Moment regularization has little
effect on the quality of reconstruction, with a slight improvement for large
values, as shown in Figure 23. Therefore, the weighting parameters for the
Second Moment regularization $\alpha_{SM}$ and the Total Squared Variation
regularization $\alpha_{TSV}$ were set to $10^{7}$ and 1, respectively. The
regularization parameters are the same for all dynamical imaging
reconstructions displayed in this paper.
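Collecting the selected weights, the full dynamical-imaging objective has the
following shape (a sketch; `chi2` and `second_moment_regularizer` are
hypothetical stand-ins, and the static terms reuse the snapshot sketches of
Appendix A.1):

```python
def dynamical_objective(frames, chi2, prior_params, sigma_dt=10.0):
    # Weights as selected in this appendix: alpha_data = 10, alpha_dt = 500,
    # alpha_SM = 1e7, alpha_TSV = 1.
    J = 10.0 * sum(chi2(f) for f in frames)
    J += 500.0 * dynamical_regularizer(frames, sigma_dt)
    J += 1e7 * sum(second_moment_regularizer(f, prior_params) for f in frames)
    J += 1.0 * sum(total_squared_variation(f) for f in frames)
    return J
```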
# On the absolute convergence of automorphic Dirichlet series
Ravi Raghunathan Department of Mathematics
Indian Institute of Technology Bombay
Mumbai, 400076
India<EMAIL_ADDRESS>
###### Abstract.
Let $F(s)=\sum_{n=1}^{\infty}\frac{a_{n}}{n^{s}}$ be a Dirichlet series in the
axiomatically defined class ${{\mathfrak{A}}}^{\\#}$. The class
${{\mathfrak{A}}}^{\\#}$ is known to contain the extended Selberg class
${\BOONDOX{S}}^{\\#}$, as well as all the $L$-functions of automorphic forms
on $GL_{n}/K$, where $K$ is a number field. Let $d$ be the degree of $F(s)$.
We show that $\sum_{n<X}|a_{n}|=\Omega(X^{\frac{1}{2}+\frac{1}{2d}})$, and
hence, that the abscissa of absolute convergence $\sigma_{a}$ of $F(s)$
must satisfy $\sigma_{a}\geq 1/2+1/2d$.
###### Key words and phrases:
Selberg class, automorphic $L$-functions, Omega results for summatory
functions, abscissa of absolute convergence of Dirichlet series
###### 2010 Mathematics Subject Classification:
11F66, 11M41
## 1\. Introduction
In [Rag20] we introduced a class of Dirichlet series ${{\mathfrak{A}}}^{\\#}$
which is known to contain a very large number of $L$-functions attached to
automorphic forms and also (strictly) contains the extended Selberg class
${\BOONDOX{S}}^{\\#}$ of Kaczorowski and Perelli defined in [KP99]. Associated
to each Dirichlet series $F(s)=\sum_{n=1}^{\infty}\frac{a_{n}}{n^{s}}$ in
${{\mathfrak{A}}}^{\\#}$ is a non-negative real number - its degree $d_{F}$.
We denote the subset of Dirichlet series of degree $d$ in
${{\mathfrak{A}}}^{\\#}$ by ${{\mathfrak{A}}}^{\\#}_{d}$. We state the main
results of this paper first, referring the reader to Section 2 for the precise
definitions of the degree $d_{F}$ and other terms appearing below.
###### Theorem 1.1.
Let $F(s)$ be an element of ${{\mathfrak{A}}}^{\\#}_{d}$ with $d\geq 1$. Then,
$\sum_{n<X}|a_{n}|=\Omega(X^{\frac{1}{2}+\frac{1}{2d}}).$ (1.1)
In particular, the abscissa of absolute convergence $\sigma_{a}$ satisfies
$\sigma_{a}\geq 1/2+1/2d$.
The following corollary covers the cases of greatest interest.
###### Corollary 1.2.
Let $L(s,\pi)=\sum_{n=1}^{\infty}\frac{a_{n}}{n^{s}}$ be the standard
$L$-function associated to a unitary automorphic representation $\pi$ of ${\rm
GL}_{n}({\mathbb{A}}_{\mathbb{Q}})$, where ${\mathbb{A}}_{\mathbb{Q}}$ denotes
the adèles over ${\mathbb{Q}}$. Then,
$\sum_{n<X}|a_{n}|=\Omega(X^{\frac{1}{2}+\frac{1}{2n}}).$
Indeed, it is known that $L(s,\pi)\in{{\mathfrak{A}}}^{\\#}$ and that its
degree is $n$, so the corollary follows immediately from the theorem.
The corollary would appear to be new even for the $L$-functions of Maass forms
associated to higher level congruence subgroups (for which $n=2$). Theorem 1.1
was known previously for the extended Selberg class ${\BOONDOX{S}}^{\\#}$ (see
Corollary 2 of [KP05]). Elements in ${\BOONDOX{S}}^{\\#}$ are required to
satisfy the analogue of the Generalised Ramanujan Conjecture (GRC) at infinity
which is equivalent to the Selberg Eigenvalue Conjecture for Maass eigenforms.
Since these conjectures are very far from being established, Theorem 1.1 and
Corollary 1.2 are not subsumed by the earlier results.
In addition, we note that elements of ${{\mathfrak{A}}}^{\\#}$ may have (a
finite number of) poles at arbitrary locations and satisfy a more general
functional equation than those of ${\BOONDOX{S}}^{\\#}$. A priori they may
have an arbitrary abscissa of absolute convergence, in contrast to the
requirement $\sigma_{a}(F)\leq 1$ for elements of ${\BOONDOX{S}}^{\\#}$. Many
other $L$-functions are known to belong to ${{\mathfrak{A}}}^{\\#}$ (but are
not known to belong to ${\BOONDOX{S}}^{\\#}$). These include the exterior
square, symmetric square and tensor product $L$-functions associated to
(unitary) automorphic representations of ${\rm GL}_{n}({\mathbb{A}}_{K})$,
where ${\mathbb{A}}_{K}$ denotes the adèles over a number field $K$. The
$L$-functions of half integral weight forms and Siegel modular forms also
belong to ${{\mathfrak{A}}}^{\\#}$, but in general, do not belong to
${\BOONDOX{S}}^{\\#}$. Theorem 1.1 thus applies to a substantially larger
class of examples.
The proof of Theorem 1.1 uses a transform introduced by Soundararajan in
[Sou05] for the case $d=1$ in the context of the Selberg class $\BOONDOX{S}$,
but improves on the relevant stationary phase techniques following the
arguments in [BR20]. These allow us to prove an asymptotic formula for the
“standard additive twist”
$F(0,\alpha,1/d):=\sum_{T<n<4T}a_{n}e^{-id\alpha n^{1/d}}=
c_{0}T^{1/2+1/2d}+o(T^{1/2+1/2d})$
for some constant $c_{0}$, when $\sigma_{a}<1/2+1/d-\delta$ for any $\delta>0$
(see equation (7.1)), from which Theorem 1.1 follows easily.
The asymptotic formula for the standard additive twist of elements in
${\BOONDOX{S}}^{\\#}$ was proved in [KP05] without any further assumptions.
The proof invokes the properties of Fox hypergeometric functions and other
complex analytic techniques. While our method is unable to recover this more
subtle statement, it does produce a completely different and shorter proof of
Theorem 1.1 even for the class ${\BOONDOX{S}}^{\\#}$.
In [KP15] it is shown that the conclusion of Theorem 1.1 holds for series
which are polynomials in the elements of the Selberg class $\BOONDOX{S}$. It
is likely that the same ideas work for polynomials in the elements of
${\mathfrak{A}}$, the class of series in ${{\mathfrak{A}}}^{\\#}$ which have
an Euler product. However, we do not attempt this here.
## 2\. Some basic definitions
The class ${{\mathfrak{A}}}^{\\#}$ was defined in [Rag20] as follows. For
$s\in{\mathbb{C}}$, we write $s=\sigma+it$, where $\sigma,t\in{\mathbb{R}}$.
Let $F(s)\neq 0$ be a meromorphic function on ${\mathbb{C}}$. We consider the
following conditions on $F(s)$.
1. (P1)
The function $F(s)$ is given by a Dirichlet series
$\sum_{n=1}^{\infty}\frac{a_{n}}{n^{s}}$ with abscissa of absolute convergence
$\sigma_{a}\geq 1/2$.
2. (P2)
There is a polynomial $P(s)$ such that $P(s)F(s)$ extends to an entire
function, and such that given any vertical strip
$\sigma_{1}\leq\sigma\leq\sigma_{2}$, there is some $M\in{\mathbb{R}}$ such
that $P(s)F(s)\ll(1+|t|)^{M}$.
3. (P3)
There exist a real number $Q>0$, a complex number $\omega$ such that
$|\omega|=1$, and a function $G(s)$ of the form
$G(s)=\prod_{j=1}^{r}\Gamma(\lambda_{j}s+\mu_{j})\prod_{j^{\prime}=1}^{r^{\prime}}\Gamma(\lambda_{j^{\prime}}^{\prime}s+\mu_{j^{\prime}}^{\prime})^{-1},$
(2.1)
where $\lambda_{j},\lambda_{j^{\prime}}^{\prime}>0$ are real numbers,
$\mu_{j},\mu_{j^{\prime}}^{\prime}\in{\mathbb{C}}$, and $\Gamma(s)$ denotes
the usual gamma function, such that
$\Phi(s):=Q^{s}G(s)F(s)=\omega\overline{\Phi(1-\bar{s})}.$ (2.2)
We will denote by ${{\mathfrak{A}}}^{\\#}$ the set of (non-zero) meromorphic
functions satisfying (P1)-(P3). We set
$d_{F}=2\sum_{j=1}^{r}\lambda_{j}-2\sum_{j^{\prime}=1}^{r^{\prime}}\lambda_{j^{\prime}}^{\prime}$.
Theorem 2.1 of [Rag20] shows that $d_{F}$ does not depend on the choice of the
functions $G(s)$ that appear in (2.2). The number $d_{F}$ is called the degree
of the function $F(s)$. The set of all functions
$F(s)\in{{\mathfrak{A}}}^{\\#}$ with $d_{F}=d$ will be denoted by
${{\mathfrak{A}}}^{\\#}_{d}$.
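For orientation, we record the standard example $F(s)=\zeta(s)$: the completed
zeta function $\Phi(s)=\pi^{-s/2}\Gamma(s/2)\zeta(s)$ satisfies
$\Phi(s)=\overline{\Phi(1-\bar{s})}$, so we may take $Q=\pi^{-1/2}$,
$G(s)=\Gamma(s/2)$, $r=1$, $r^{\prime}=0$ and $\lambda_{1}=1/2$, whence
$d_{\zeta}=2\lambda_{1}=1.$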
The class ${\BOONDOX{S}}^{\\#}$ is defined as the set of series
$F(s)\in{{\mathfrak{A}}}^{\\#}$ satisfying the conditions $\sigma_{a}\leq 1$,
$P(s)=(s-1)^{m}$ for some $m\geq 0$, $r^{\prime}=0$, and
$\mathrm{Re}(\mu_{j})\geq 0$ for all $1\leq j\leq r$. As we have outlined in
the introduction, even
when we expect a series $F(s)$ to belong to ${\BOONDOX{S}}^{\\#}$, we can only
rarely prove that this is the case, since major conjectures like the GRC at
infinity are involved. In addition, there are a large number of examples that
belong to ${{\mathfrak{A}}}^{\\#}$, but do not belong to
${\BOONDOX{S}}^{\\#}$. Two simple examples to keep in mind are $\zeta(2s-1/2)$
and $\zeta(s+1/2)\zeta(s-1/2)$.
More detailed rationales for working in ${{\mathfrak{A}}}^{\\#}$ rather than
in ${\BOONDOX{S}}^{\\#}$, or in the class $\BOONDOX{L}$ introduced by A.
Booker in [Boo15], may be found in [Rag20] and [BR20].
## 3\. Preliminaries
In this section we record a few facts from [BR20] which we will need for our
proof. We first fix the following notation. For a complex function $f(s)$ we
define $\tilde{f}(s)=\overline{f(\bar{s})}$.
Let $z=x+iy$, and assume that $-\pi+\theta_{0}<\arg(z+it)<\pi-\theta_{0}$ for
some $\theta_{0}>0$. From Section 2.2 of [BR20] (see equations (2.1)-(2.4) of
that paper), we retrieve
$\frac{\tilde{G}(1-x-it)}{G(x+it)}=(Ce^{-d}t^{d})^{(\frac{1}{2}-x)}e^{-itd\log\frac{t}{e}}t^{iA}e^{iB}C^{-it}\cdot(1+O(1/t)),$
(3.1)
where
$A=-i((\bar{\mu}-\mu)-(\bar{\mu^{\prime}}-\mu^{\prime})),\quad
C=\prod_{j,j^{\prime}=1}^{r,r^{\prime}}{\lambda_{j}}^{2\lambda_{j}}{\lambda_{j^{\prime}}^{\prime}}^{-2\lambda_{j^{\prime}}^{\prime}},$
and
$B=-i\left(\sum_{j=1}^{r}(\bar{\mu}_{j}-\mu_{j})\log\lambda_{j}-\sum_{j^{\prime}=1}^{r^{\prime}}(\overline{\mu_{j^{\prime}}^{\prime}}-\mu_{j^{\prime}}^{\prime})\log\lambda_{j^{\prime}}^{\prime}\right)-(\mu-\bar{\mu})+(\mu^{\prime}-\bar{\mu^{\prime}})-((\mu-\bar{\mu})-(\mu^{\prime}-\bar{\mu^{\prime}})+d/2)\frac{\pi}{2},$ (3.2)
with
$\mu=\sum_{j=1}^{r}\mu_{j}\quad\text{and}\quad\mu^{\prime}=\sum_{j^{\prime}=1}^{r^{\prime}}\mu_{j^{\prime}}^{\prime}.$
Note that $A\in{\mathbb{R}}$ and $C>0$. Replacing $x+it$ by $x+it+w$
($w=u+iv$) in (3.1), and taking absolute values, we obtain
$\frac{\tilde{G}(1-x-it-w)}{G(x+it+w)}\ll(1+|t+v|)^{-d(x-1/2+u)}.$ (3.3)
Additionally, we will need the following lemma from [BR20] which allows us to
pass from $F(s)$ to an everywhere convergent Dirichlet series.
###### Lemma 3.1.
Let $w=u+iv$, $z=x+iy$, $p>0$ and $d>0$. If
$F(s)\in{{\mathfrak{A}}}^{\\#}_{d}$ is holomorphic at $s=z+it$ and
$0<\eta<1-x+p-\sigma_{a}$, we have
$F(z+it)=\sum_{n=1}^{\infty}\frac{a_{n}e^{-(n/X)^{p}}}{n^{z+it}}+r_{1}(t,X)+r_{2}(t,X),$
(3.4)
where $r_{1}(t,X):=O(X^{\sigma_{a}-x}e^{-|t|/p})$ is identically zero if
$F(z)$ is entire, and
$r_{2}(t,X):=\frac{1}{2\pi ip}\int_{u=-p+\eta}F(z+it+w)X^{w}\Gamma(w/p)dw\ll
O(t^{d(\frac{1}{2}+p-x-\eta)}X^{-p+\eta}),$ (3.5)
where $u=-p+\eta$ is a line on which $F(z+it+w)$ is holomorphic.
###### Remark 3.2.
We can apply the lemma above to $\tilde{F}(s)$ instead of $F(s)$. This yields
$\tilde{F}(1-z-it)=\sum_{n=1}^{\infty}\frac{\overline{a_{n}}e^{-(n/X)^{p}}}{n^{1-z-it}}+\tilde{r}_{1}(t,X)+\tilde{r}_{2}(t,X),$
(3.6)
where $\tilde{r}_{i}(t,X)$ satisfies the same estimates as $r_{i}(t,X)$ when
$x$ is replaced by $1-x$, for $i=1,2$.
###### Remark 3.3.
It has been pointed out to me by D. Surya Ramana that the lemma above is valid
for $0<\eta<p$ if we use the standard convexity bounds for $F(s)$. In this
paper we will need only the weaker statement made in the lemma.
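As a quick numerical illustration of Lemma 3.1 (a sketch only, assuming the
`mpmath` library; it plays no role in the proofs), take $F=\zeta$,
$z+it=1/2+30i$ and $p=2$: the terms $r_{1}$ and $r_{2}$ are then negligible,
and the smoothed sum already matches $\zeta(z+it)$ to several digits.

```python
import mpmath as mp

def smoothed_dirichlet(s, X, p=2, cutoff=10):
    # Terms with n >> X are exponentially damped, so truncating at
    # cutoff * X changes nothing at working precision.
    N = int(cutoff * X)
    return mp.nsum(lambda n: mp.e**(-(n / X)**p) * n**(-s), [1, N])

s = mp.mpc(0.5, 30.0)
print(smoothed_dirichlet(s, X=50))  # close to the true value below, since
print(mp.zeta(s))                   # r_1 and r_2 are tiny here
```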
## 4\. Soundararajan’s transform
Suppose that $F(s)\in{{\mathfrak{A}}}^{\\#}_{d}$, with $d\geq 1$. For
$\alpha\geq 1$ and $T$ chosen large enough that $F(1/2+it)$ is holomorphic for
$t\geq T$, we define
$H(\alpha,T):=\frac{1}{\sqrt{\alpha}}\int_{K_{T}}F(1/2+it)e^{idt\log\left[\frac{t}{e\alpha}\right]-i\frac{\pi}{4}}dt,$
(4.1)
where $K_{T}=[2\alpha T,3\alpha T]$. Soundararajan introduced (a mild variant
of) this transform for $d=1$ and $F\in\BOONDOX{S}$ in [Sou05], and we used a
similar transform in [BR20] to study ${{\mathfrak{A}}}^{\\#}_{d}$ when
$1<d<2$. In what follows, $\alpha$ will be fixed, so we will study the
behaviour of $H(\alpha,T)$ as a function of $T$.
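Though the proofs are purely analytic, the transform is straightforward to
evaluate numerically in small cases; the following sketch (again assuming
`mpmath`, with $F=\zeta$ so that $d=1$) is illustrative only.

```python
import mpmath as mp

def H(alpha, T, d=1):
    # H(alpha, T) = alpha^{-1/2} * integral over [2 alpha T, 3 alpha T] of
    # F(1/2 + it) exp(i(d t log(t/(e alpha)) - pi/4)) dt, with F = zeta here.
    integrand = lambda t: mp.zeta(0.5 + 1j * t) * mp.e**(
        1j * (d * t * mp.log(t / (mp.e * alpha)) - mp.pi / 4))
    return mp.quad(integrand, [2 * alpha * T, 3 * alpha * T]) / mp.sqrt(alpha)

print(H(alpha=1.0, T=20.0))  # increase mp.mp.dps / quad degree for large T
```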
We use Lemma 3.1 when $z=1/2$. Substituting for $F(1/2+it)$ from equation
(3.4), we obtain (for any $X_{1}>0$),
$H(\alpha,T)=\frac{1}{\sqrt{\alpha}}\sum_{n=1}^{\infty}\frac{a_{n}}{\sqrt{n}}e^{-(n/X_{1})^{p}}I_{n}+R_{1}(\alpha,T,X_{1})+R_{2}(\alpha,T,X_{1}),$
(4.2)
where
$R_{i}(\alpha,T,X_{1})=\frac{1}{\sqrt{\alpha}}\int_{K_{T}}r_{i}(t,X_{1})e^{idt\log\left[\frac{t}{e\alpha}\right]-i\frac{\pi}{4}}dt$,
for $i=1,2$,
$I_{n}=I_{n}(\alpha,T):=\int_{K_{T}}e^{idt\log\left[\frac{t}{e\alpha
x_{n}}\right]-i\frac{\pi}{4}}dt,$ (4.3)
and $x_{n}=n^{1/d}$. Using the estimates for $r_{1}(t,X_{1})$ given above, we
see that $R_{1}(\alpha,T,X_{1})=O(X_{1}^{\sigma_{a}-1/2}e^{-\alpha T})$. We
will be choosing $X_{1}=T^{d+\rho}$ for some $\rho>0$. The term
$R_{1}(\alpha,T,X_{1})$ will thus have exponential decay in $T$ since $\alpha$
is fixed. Thus we can assume $R_{1}(\alpha,T,X_{1})=O(1)$.
We estimate the term $R_{2}(\alpha,T,X_{1})$ trivially. Indeed, integrating
the absolute value of the integrand and using the estimate (3.5) produces
$R_{2}(\alpha,T,X_{1})=O(T^{d(p+1-\eta)}X_{1}^{-p+\eta}).$
Since $\rho>0$, if $p-\eta$ is chosen large enough,
$R_{2}(\alpha,T,X_{1})=O(1)$.
We record this as a proposition.
###### Proposition 4.1.
With notation as above, $X_{1}=T^{d+\rho}$, and for $p-\eta$ chosen large
enough,
$R_{i}(\alpha,T,X_{1})=O(1)$
for $i=1,2$.
It remains to evaluate the sum appearing in (4.2), which we will do in the
next section.
## 5\. Estimating the oscillatory integral $I_{n}$
We will require two lemmas for evaluating the oscillatory integrals $I_{n}$
that appear in equation (4.2). The first is well known, and can be found in
Section 1.2 of Chapter VIII in [Ste93], for instance. It is needed to estimate
$I_{n}$ when $n$ is relatively small or large compared to $T^{d}$.
###### Lemma 5.1.
Suppose that $g(t)$ is a function of bounded variation on an interval
$K=[a,b]$ and $|g(t)|\leq M$ for all $t\in K$. For any
${\mathcal{C}}^{1}$-function $f$ on $K$, if $f^{\prime}(t)$ is monotonic and
$|f^{\prime}(t)|\geq m_{1}$ on $K$,
$\int_{K}g(t)e^{if(t)}dt\ll\frac{1}{m_{1}}\left\\{|M|+\int_{K}|g^{\prime}(t)|dt\right\\}.$
To evaluate the integrals $I_{n}$ when $n$ is roughly of size $T^{d}$, we
need Lemma 3.3 of [BB04].
###### Lemma 5.2.
Suppose that $f$ is a ${\mathscr{C}}^{3}$-function on an interval $K=[a,b]$
and $f^{\prime\prime}(t)\neq 0$ on $K$. If $f^{\prime}(c)=0$ for some $c\in
K$, and $m>0$ is such that $|f^{\prime\prime\prime}(t)|\leq m$ for $t\in
K\cap\left[c-\left|\frac{f^{\prime\prime}(c)}{m}\right|,c+\left|\frac{f^{\prime\prime}(c)}{m}\right|\right]$,
then
$\int_{K}e^{if(t)}dt=e^{\pm
i\frac{\pi}{4}}\frac{e^{if(c)}}{\sqrt{|f^{\prime\prime}(c)|}}+O\left(\frac{m}{|f^{\prime\prime}(c)|^{2}}\right)+O\left(\frac{1}{|f^{\prime}(a)|}+\frac{1}{|f^{\prime}(b)|}\right).$
The $\pm$ in the expression above occurs according to the sign of
$f^{\prime\prime}(c)$.
We recall that $K_{T}=[2\alpha T,3\alpha T]$. In the notation of the lemmas
above,
$I_{n}=\int_{K}g(t)e^{if(t)}dt,$
where $K=K_{T}$, $g(t)\equiv 1$ and
$f(t)=dt\log\left[\frac{t}{e\alpha n^{\frac{1}{d}}}\right]-\frac{\pi}{4}.$
###### Proposition 5.3.
For $n\leq T^{d}$ and $4T^{d}\leq n<T^{d+\rho}$,
$I_{n}=O(1).$ (5.1)
If $T^{d}<n<4T^{d}$,
$I_{n}=\sqrt{\alpha}d^{-\frac{1}{2}}n^{1/2d}e^{-id\alpha n^{1/d}}+O(1).$ (5.2)
###### Proof.
We follow the proof (for $d=1$) in [BR20]. Indeed, we have
$f^{\prime}(t)=d\log\left[\frac{t}{\alpha
n^{\frac{1}{d}}}\right],\,\,f^{\prime\prime}(t)=d/t\,\,\text{and}\,\,f^{\prime\prime\prime}(t)=-d/t^{2}.$
If $n\leq T^{d}$, then $|f^{\prime}(t)|\geq d\log 2$. Similarly, if
$4T^{d}\leq n<T^{d+\rho}$, $|f^{\prime}(t)|\geq d\log 4/3$. Then Lemma 5.1
shows that $I_{n}=O(1)$, and (5.1) follows.
If $T^{d}<n<4T^{d}$, we proceed as follows. Note that $f^{\prime}(c)=0$ means
that $c=\alpha n^{1/d}$. The first term on the right in Lemma 5.2 thus yields
$\sqrt{\alpha}d^{-\frac{1}{2}}n^{1/2d}e^{-id\alpha n^{1/d}}$. Now choose
$m=3d/c^{2}$, so $f^{\prime\prime}(c)/m=c/3$. If $t\in K_{T}\cap[2c/3,4c/3]$,
$|f^{\prime\prime\prime}(t)|=9d/4c^{2}\leq m$. Thus, the hypotheses of Lemma
5.2 are satisfied. The first error term in the lemma yields
$O\left(\frac{m}{|f^{\prime\prime}(c)|^{2}}\right)=O(1),$
while the last two error terms also yield $O(1)$. This proves (5.2). ∎
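For the reader's convenience, here is the routine computation behind (5.2): at
the stationary point $c=\alpha n^{1/d}$,
$f(c)=dc\log\left[\frac{c}{e\alpha n^{1/d}}\right]-\frac{\pi}{4}=-d\alpha n^{1/d}-\frac{\pi}{4}\quad\text{and}\quad f^{\prime\prime}(c)=\frac{d}{c}=\frac{d}{\alpha n^{1/d}},$
so the main term of Lemma 5.2 equals $e^{i\frac{\pi}{4}}e^{if(c)}\sqrt{c/d}=\sqrt{\alpha}d^{-\frac{1}{2}}n^{1/2d}e^{-id\alpha n^{1/d}}$, as claimed.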
Note that when estimating the sum in equation (4.2), it is enough to estimate
the sum for $n<T^{d+\rho}$, since the terms in the sum decay exponentially
when $n$ exceeds this. Using the estimates (5.1) and (5.2) in the sum in
equation (4.2) yields (for $X_{1}\geq T^{d+\rho}$)
$\frac{1}{\sqrt{\alpha}}\sum_{n=1}^{\infty}\frac{a_{n}}{\sqrt{n}}e^{-(n/X_{1})^{p}}I_{n}=\sum_{T^{d}<n<4T^{d}}\frac{a_{n}}{\sqrt{n}}e^{-(n/X_{1})^{p}}d^{-\frac{1}{2}}n^{1/2d}e^{-id\alpha
n^{1/d}}+O(X_{1}^{\sigma_{a}-\frac{1}{2}+\varepsilon}),$ (5.3)
for any $\varepsilon>0$. Combining equation (5.3) with the estimates in
Proposition 4.1, we get the following proposition.
###### Proposition 5.4.
For any $\varepsilon>0$, we have
$H(\alpha,T)=\sum_{T^{d}<n<4T^{d}}\frac{a_{n}}{\sqrt{n}}d^{-\frac{1}{2}}n^{1/2d}e^{-(n/T^{d+\rho})^{p}}e^{-id\alpha
n^{1/d}}+O(T^{(d+\rho)(\sigma_{a}-\frac{1}{2}+\varepsilon)}).$ (5.4)
## 6\. A second estimate for $H(\alpha,T)$
We now evaluate the transform $H(\alpha,T)$ in a second way, hewing to
Soundararajan’s arguments in [Sou05] for $d=1$. Applying the functional
equation to the integrand, and then using equation (3.1) for
$\tilde{F}(1/2-it)$, gives
$H(\alpha,T)=\frac{\omega
e^{iB}}{\sqrt{\alpha}}\int_{K_{T}}\tilde{F}(1/2-it)(CQ^{2}\alpha^{d})^{-it}t^{iA}\left[1+O(1/t)\right]dt.$
Using equation (3.6), we obtain
$\displaystyle H(\alpha,T)=\frac{\omega e^{iB}}{\sqrt{\alpha}}\int_{K_{T}}$
$\displaystyle\left[\sum_{n=1}^{\infty}\frac{\overline{a_{n}}}{\sqrt{n}}e^{-(n/X_{2})^{p}}+\tilde{r}_{1}(t,X_{2})+\tilde{r}_{2}(t,X_{2})\right]$
$\displaystyle\times(n^{-1}CQ^{2}\alpha^{d})^{-it}t^{iA}\left[1+O(1/t)\right]dt.$
(6.1)
for $X_{2}>0$. Imitating the arguments used for majorising
$R_{i}(\alpha,T,X_{1})$ in Proposition 4.1, the terms
$\tilde{R}_{i}(\alpha,T,X_{2})=\int_{K_{T}}\tilde{r}_{i}(t,X_{2})(CQ^{2}\alpha^{d})^{-it}t^{iA}dt$
yield $O(1)$ when estimated trivially, if $X_{2}=T^{d+\rho}$ for some
$\rho>0$, and $p-\eta$ is chosen large enough. We also have
$\int_{K_{T}}\tilde{r}_{i}(t,X_{2})O(1/t)dt\ll\tilde{R}_{i}(\alpha,T,X_{2})=O(1)$
for $i=1,2$.
We switch the order of summation and integration in the first term of (6.1)
to get the expression
$\frac{\omega
e^{iB}}{\sqrt{\alpha}}\sum_{n=1}^{\infty}\frac{\overline{a_{n}}}{\sqrt{n}}e^{-(n/X_{2})^{p}}J_{n},$
(6.2)
where
$J_{n}=\int_{K_{T}}(n^{-1}CQ^{2}\alpha^{d})^{-it}t^{iA}dt.$
As before, it is enough to evaluate or estimate this sum when
$n<X_{2}^{1+\varepsilon}$. Choose $m$ such that $a_{m}\neq 0$, and fix
$\alpha$ so that $m=CQ^{2}\alpha^{d}$. Then $J_{m}$ can be evaluated exactly
to give
$J_{m}=\frac{(3^{1+iA}-2^{1+iA})}{1+iA}\cdot\alpha^{1+iA}T^{1+iA}.$
For $n\neq m$, we use integration by parts to estimate $J_{n}$. We have
$J_{n}=O(1/\log(n^{-1}CQ^{2}\alpha^{d}))=O(1).$
Let
$\kappa=\omega
e^{iB}\sqrt{C}Q\alpha^{\frac{1-d}{2}+iA}\frac{(3^{1+iA}-2^{1+iA})}{1+iA}.$
Substituting for $J_{n}$ in (6.2), we obtain (for any $\varepsilon>0$)
$\frac{\omega
e^{iB}}{\sqrt{\alpha}}\sum_{n=1}^{\infty}\frac{\overline{a_{n}}}{\sqrt{n}}e^{-(n/T^{d+\rho})^{p}}J_{n}=\kappa
a_{m}T^{1+iA}+O(T^{(d+\rho)(\sigma_{a}-\frac{1}{2}+\varepsilon)})$
when $X_{2}=T^{d+\rho}$. The sum involving $\int_{K_{T}}(n^{-1}CQ^{2}\alpha^{d})^{-it}t^{iA}O(1/t)dt$ is dominated by the sum involving $J_{n}$
above. We consolidate the arguments in this section as follows.
###### Proposition 6.1.
Suppose that $a_{m}\neq 0$ and $\alpha$ is chosen so that
$m=CQ^{2}\alpha^{d}$. Then,
$H(\alpha,T)=\kappa
a_{m}T^{1+iA}+O(T^{(d+\rho)(\sigma_{a}-\frac{1}{2}+\varepsilon)}).$ (6.3)
## 7\. The proof of Theorem 1.1
We now have all the estimates necessary to prove Theorem 1.1. Equating (5.4)
and (6.3) gives us
$\sum_{T^{d}<n<4T^{d}}\frac{a_{n}}{\sqrt{n}}n^{1/2d}e^{-(n/T^{d+\rho})^{p}}e^{-id\alpha
n^{1/d}}=\kappa
d^{\frac{1}{2}}a_{m}T^{1+iA}+O(T^{(d+\rho)(\sigma_{a}-\frac{1}{2}+\varepsilon)}).$
This can be rewritten as
$\sum_{T<n<4T}a_{n}e^{-(n/T^{d+\rho})^{p}}e^{-id\alpha n^{1/d}}=\kappa
d^{\frac{1}{2}}a_{m}T^{\frac{1}{2}+\frac{1}{2d}+iA}+O(T^{(1+\frac{\rho}{d})(\sigma_{a}-\frac{1}{2}+\varepsilon)+\frac{1}{2}-\frac{1}{2d}}).$
(7.1)
Suppose that $F(s)$ converges absolutely when $\text{Re}(s)=1/2+1/2d$, so
$\sigma_{a}\leq 1/2+1/2d$. If $\rho$ and $\varepsilon$ are chosen small
enough, we see that the second term on the right hand side of (7.1) is
actually $o(T^{\frac{1}{2}+\frac{1}{2d}})$. But then, for $T$ large enough,
$\sum_{T<n<4T}|a_{n}|\geq\left|\sum_{T<n<4T}a_{n}e^{-(n/T^{(d+\rho)})^{p}}e^{-id\alpha
n^{1/d}}\right|\geq 2^{-1}|\kappa
d^{\frac{1}{2}}a_{m}|T^{\frac{1}{2}+\frac{1}{2d}}.$
This contradicts the assumption that $F(s)$ converges absolutely when
$\text{Re}(s)=1/2+1/2d$, and Theorem 1.1 follows.
## References
* [BB04] E. Bombieri and J. Bourgain. A remark on Bohr’s inequality. Int. Math. Res. Not., (80):4307–4330, 2004.
* [Boo15] Andrew R. Booker. $L$-functions as distributions. Math. Ann., 363(1-2):423–454, 2015.
* [BR20] R Balasubramanian and Ravi Raghunathan. Beyond the extended Selberg class: $1<d<2$. arXiv:2011.07525, 2020. Submitted for publication.
* [KP99] Jerzy Kaczorowski and Alberto Perelli. On the structure of the Selberg class. I. $0\leq d\leq 1$. Acta Math., 182(2):207–241, 1999.
* [KP05] J. Kaczorowski and A. Perelli. On the structure of the Selberg class. VI. Non-linear twists. Acta Arith., 116(4):315–341, 2005.
* [KP15] Jerzy Kaczorowski and Alberto Perelli. General $\Omega$-theorems for coefficients of $L$-functions. Proc. Amer. Math. Soc., 143(12):5139–5145, 2015.
* [Rag20] Ravi Raghunathan. Beyond the extended Selberg class: $d_{F}\leq 1$. arXiv:2005.11381, 2020. Submitted for publication.
* [Sou05] K. Soundararajan. Degree 1 elements of the Selberg class. Expo. Math., 23(1):65–70, 2005.
* [Ste93] Elias M. Stein. Harmonic analysis: real-variable methods, orthogonality, and oscillatory integrals, volume 43 of Princeton Mathematical Series. Princeton University Press, Princeton, NJ, 1993. With the assistance of Timothy S. Murphy, Monographs in Harmonic Analysis, III.
# Optimal control of a 2D diffusion-advection process with a team of mobile
actuators under jointly optimal guidance
Sheng Cheng<EMAIL_ADDRESS>Derek A. Paley<EMAIL_ADDRESS>University of
Maryland, College Park University of Maryland, College Park
(15 June 2021)
###### Abstract
This paper describes an optimization framework to control a distributed
parameter system (DPS) using a team of mobile actuators. The framework
simultaneously seeks optimal control of the DPS and optimal guidance of the
mobile actuators such that a cost function associated with both the DPS and
the mobile actuators is minimized subject to the dynamics of each. The cost
incurred from controlling the DPS is linear-quadratic, which is transformed
into an equivalent form as a quadratic term associated with an operator-valued
Riccati equation. This equivalent form reduces the problem to seeking the
guidance only, because the optimal control can be recovered once the optimal
guidance is obtained. We establish conditions for the existence of a solution
to the proposed problem. Since computing an optimal solution requires
approximation, we also establish conditions for the convergence of the
approximate optimal solution to the exact optimal solution. That is, when
evaluating these two solutions by the original cost function, the difference
becomes arbitrarily small as the approximation gets finer. Two numerical
examples demonstrate the performance of the optimal control and guidance
obtained from the proposed approach.
###### keywords:
Infinite-dimensional systems; Multi-agent systems; Modeling for control
optimization; Guidance navigation and control.
††thanks: This paper was not presented at any IFAC meeting. Corresponding
author Sheng Cheng (Tel. +1 301 335 2995).
## 1 Introduction
Recent development of mobile robots (unmanned aerial vehicles, terrestrial
robots, and underwater vehicles) has greatly extended the types of distributed
parameter systems (DPS) over which mobile actuation and sensing can be
deployed. Such a system is often modeled by a partial differential equation
(PDE), which varies in both time and space. Exemplary applications of mobile
control and estimation of a DPS can be found in thermal manufacturing [13],
monitoring and neutralizing groundwater contamination [12], and wildfire
monitoring [19].
We propose an optimization framework that simultaneously solves for the
guidance of a team of mobile actuators and the control of a DPS. We consider a
2D diffusion-advection process as the DPS for its capability of modeling a
variety of processes governed by continuum mechanics and the convenience of
the state-space representation. The framework minimizes an integrated cost
function, evaluating both the control of the DPS and the guidance of the
actuators, subject to the dynamics of the DPS and the mobile actuators. The
problem addresses the mobile actuator and the DPS as a unified system, instead
of solely controlling the DPS. Furthermore, the additional degree of freedom
endowed by mobility yields improved control performance in comparison to using
stationary actuators.
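As a purely illustrative finite-dimensional analogue (a sketch under stated
assumptions, not the method of this paper: the 1D diffusion model, grid size,
Gaussian influence kernel, weights, and candidate trajectory are all invented
for the example), one can discretize the PDE, fix a candidate actuator
trajectory, and integrate the matrix differential Riccati equation backward to
score that trajectory:

```python
import numpy as np
from scipy.integrate import solve_ivp

# 1D diffusion on (0, 1), Dirichlet boundary, n interior grid points.
n, L, kappa, T = 50, 1.0, 0.05, 2.0
h = L / (n + 1)
x = np.linspace(h, L - h, n)
A = kappa / h**2 * (np.diag(-2.0 * np.ones(n))
                    + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))

def B(theta, width=0.05):
    # Input vector: Gaussian influence kernel centered at the actuator.
    b = np.exp(-0.5 * ((x - theta) / width) ** 2)
    return (b / np.linalg.norm(b)).reshape(n, 1)

Q, R = h * np.eye(n), np.eye(1)
theta_of_t = lambda t: 0.3 + 0.4 * t / T  # a fixed candidate trajectory

def riccati_rhs(t, p):
    # dP/dt = -(A'P + PA - P B R^{-1} B' P + Q), with P(T) = 0.
    P = p.reshape(n, n)
    Bt = B(theta_of_t(t))
    return -(A.T @ P + P @ A - P @ Bt @ np.linalg.solve(R, Bt.T) @ P + Q).ravel()

# Integrate backward in time by the substitution s = T - t.
sol = solve_ivp(lambda s, p: -riccati_rhs(T - s, p), (0.0, T),
                np.zeros(n * n), rtol=1e-6, atol=1e-9)
P0 = sol.y[:, -1].reshape(n, n)
z0 = np.sin(np.pi * x)  # initial PDE state
print("LQ cost for this trajectory:", z0 @ P0 @ z0)
```

The guidance layer would then search over trajectories $\theta(\cdot)$,
trading this LQ cost against a mobility cost.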
The study of the control of a PDE-modeled DPS dates back to the 1960s (see the
survey [29]). For fundamental results, see the textbooks [7, 28, 1]. Although
it is possible to categorize prior work by whether the input operator is
bounded, our literature review is categorized by the location of the
actuation. When actuation acts on the boundary of the spatial domain, it is
called boundary control. The main complexity in boundary control is that the
input operator, which associates with the PDE’s boundary condition, is
unbounded. This is addressed in the tutorial [15]. Recent developments on the
design of boundary control uses the backstepping method [33], where a Volterra
transformation determines a stabilizing control by transforming the system
into a stable target system. When actuation acts in the interior of the
spatial domain, it is called distributed control. For distributed control, the
DPS is actuated by in-domain actuators that are either stationary or mobile.
The problem of determining the location of stationary actuators is called the
actuator placement problem. Actuator placement has been studied for optimality
in the sense of linear-quadratic (LQ) [24], H2 [25], and H∞ [17]. The author
of [24] studies the actuator placement problem with the LQ performance
criterion. The actuators’ locations are chosen to minimize the operator norm
or trace norm of the Riccati operator solved from an algebraic Riccati
equation associated with an infinite-dimensional system. If the input operator
is compact and continuous with respect to actuator location in the operator
norm, then a solution exists for the problem minimizing the operator norm of
the Riccati operator [24, Theorem 2.6], under stabilizability and
detectability assumptions. When computing the optimal actuator locations, if
the approximated Riccati operator converges to the original Riccati operator
at each actuator location, then the approximate optimal actuator locations
converge to the exact optimal locations [24, Theorem 3.5]. For the above
results to hold when minimizing the trace-norm of the Riccati operator, the
input and output spaces have to be finite-dimensional [24, Theorems 2.10 and
3.9] in addition to the assumptions stated above.
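In the same finite-dimensional spirit (a sketch reusing `A`, `B`, `Q`, and `R`
from the snippet above), the LQ placement criterion of [24] can be prototyped
by scoring each candidate location with the trace of the corresponding
algebraic Riccati solution; `solve_continuous_are` is SciPy's ARE solver.

```python
from scipy.linalg import solve_continuous_are

# Evaluate the trace of the algebraic Riccati solution on a grid of
# candidate stationary locations and keep the minimizer.
candidates = np.linspace(0.1, 0.9, 17)
traces = [np.trace(solve_continuous_are(A, B(th), Q, R)) for th in candidates]
theta_star = candidates[int(np.argmin(traces))]
print("best stationary actuator location:", theta_star)
```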
The authors of [25] design optimal actuator placement by minimizing the
H2-control performance criterion, which minimizes the L2-norm of the linear
output of a linear system, subject state disturbances. Roughly speaking,
H2-control performance reduces the response to the disturbances while setting
a zero initial condition, whereas the LQ performance reduces the response to
the initial condition in a disturbance-free setting. For disturbances with
known or unknown spatial distribution, the trace of the Riccati solution
(scaled by the disturbance’s spatial distribution) or operator norm of the
Riccati solution are minimized, respectively, where the existence of a
solution and convergence to the exact optimal solution of the approximate
optimal solution are guaranteed. In [17], the H∞-performance criterion is
minimized for actuator placement. Specifically, the actuators are placed in
the locations that yield infimum of the attenuation bound (upper bound of the
infinity norm of the closed-loop disturbance-to-output transfer function). The
conditions for the existence of a solution and for the convergence of the
approximate optimal placement to the exact optimal placement are provided.
Geometric approaches have also been proposed for actuator placement. For
example, a modified centroidal Voronoi tessellation (mCVT) yields locations of
actuators and sensors that yield least-squares approximate control and
estimation kernels for a parabolic PDE [10]. The input operator is designed by
maximizing the H2-norm of the input-to-state transfer function, whereas the
kernel of the state feedback is obtained using the Riccati solution. Next,
mCVT determines the partition such that the actuator and sensor locations
achieve optimal performance (in the sense of least-squares) to the input
operator and state feedback kernel, respectively. A comparison of various
performance criteria for actuator placement has been conducted for controlling
a simply supported beam [26] and a diffusion process [27]. These analyses show
that maximizing the minimum eigenvalue of the controllability gramian is not a
useful criterion: because the lower bound of the eigenvalue is zero, the
minimum eigenvalue approaches zero as the dimension of the approximation
increases [26, 27].
The guidance of mobile actuators is designed to improve the control
performance in comparison to stationary actuators. Various performance
criteria have been proposed for guidance. In [13], a mobile heat source is
steered to maintain a spatially uniform temperature distribution in 1D using
the optimal control method. The formulation uses a finite-dimensional
approximation for modeling the process and evaluating performance.
Additionally, the admissible locations of the heat source are chosen within a
discrete set that yields approximate controllability requirements. Algorithms
are provided to solve the proposed problem with considerations on real-time
implementation and hardware constraints. Experimental results demonstrate the
performance of the proposed scheme. The authors of [14] propose an
optimization framework that steers mobile actuators to control a reaction-
diffusion process. A specific cost function, consisting of norms of control
input and measurement of the DPS and the steering force, is minimized subject
to dynamics of the actuator’s motion and the PDE, and bounds on the control
input and state of the DPS. The implementation of the framework is emphasized
by discrete mechanics and model predictive control which yield computationally
tractable solutions, in addition to an approximation of the PDE and a discrete
set of admissible actuator locations.
The problem of ultraviolet curing using a mobile radiant actuator is
investigated in [38], where the curing process is modeled by a 1D nonlinear
PDE. Both the radiant power and scanning velocity of the actuator are computed
for reaching a target curing state. A dual extended Kalman filter is applied
to estimate the state and parameters of the curing process for feedback
control, based on the phases of curing. In [11], a navigation problem is
studied in which a mobile agent moves through a diffusion process represented
by a hazardous field with given initial and terminal positions. Such a
formulation may be applied to emergency evacuation guidance from an indoor
environment with carbon monoxide. Both problems with minimum time and minimum
accumulated effects of hazards are formulated, and closed-form solutions are
derived using the Hamiltonian. A Lyapunov-based guidance strategy for
collocated sensors and actuators to estimate and control a diffusion process
is proposed in [8]. The decentralized guidance of the actuators for
controlling a diffusion process to a zero state is derived by constructing
suitable Lyapunov functions. The same methodology is applied to construct a
distributed consensus filter via the network among agents to improve state
estimation. A follow-up work [9] incorporates nonlinear vehicle dynamics in
addition.
The problem formulation in this paper includes a cost function that
simultaneously evaluates controlling the PDE-modeled DPS, referred to as the
PDE cost, and steering the mobile actuators, referred to as the mobility cost.
The PDE cost is a quadratic cost of the PDE state and control, whose optimal
value can be obtained by solving an operator-valued differential Riccati
equation. Our results are based on the related work [3], which establishes
Bochner integrable solutions of finite-horizon Riccati integral equations
(with values in Schatten p-classes) associated with infinite-dimensional
systems. The existence conditions for the solution of exact and approximate
Riccati integral equations are established in [3]. The significance of the
Bochner integrable solution is that it allows the implementation of simple
numerical quadratures for computing the approximated solution of Riccati
integral equations. In [3], the Riccati solution is applied in a sensor
placement problem, which computes optimal sensor locations that minimize the
trace of the covariance operator of the Kalman filter of a diffusion-advection
process. The same cost has been used in an optimization framework for mobile
sensor’s motion planning in [2]. The existence of a solution of the
optimization problem is established under the assumption that the integral
kernel of the output operator is continuous with respect to the location of
the sensor [2, Definition 4.5]. This assumption permits the construction of a
compact set of operators [2, Lemma 4.6] over which the cost function is
continuous, and hence establishes the existence of a solution. The assumption
is also made on the input operator in this paper, which allows the derivation
of a vital result on the Riccati operator’s continuity with respect to the
actuator trajectory (Lemma 3). The continuity property plays a crucial role in
establishing the existence of the proposed problem’s solution and the
convergence of the approximate optimal solution to the exact optimal solution.
The existence of a solution is established using the fact that a weakly
sequentially lower semicontinuous function attains its minimum on a weakly
sequentially compact set over a normed linear space. In addition to the
assumptions made for the existence of a solution, a stringent (yet physically
reasonable) requirement is placed on the admissible
set to yield compactness, which leads to convergence of the approximate
optimal solution. The convergence is in the sense that when evaluating the
exact and approximate optimal solutions by the original cost function, the
difference becomes arbitrarily small as the dimension of approximation
increases.
The contributions of this paper are threefold. First, we propose an
optimization framework for controlling a PDE-modeled system using a team of
mobile actuators. The framework incorporates both controlling the process and
steering the mobile actuators. The former is handled by the linear-quadratic
regulator of the PDE. The latter is taken care of by designing generic cost
functions that address the constraints and limitations of the vehicles
carrying the actuators. Second, existence conditions of a solution of the
proposed problem are established. It turns out that the conditions are
generally satisfied in engineering problems, which allows the results to be
applied to a wide range of applications. Third, conditions are also
established under which the optimal solution computed using approximations
converges to the exact optimal solution. The convergence is in the sense that
the cost function of the exact problem evaluated at these two solutions
becomes arbitrarily close as the dimension of the approximation goes to
infinity. The convergence is verified in numerical studies and confirms the
appropriateness of the optimal solution of the approximation.
The proposed framework is well-suited for the limited onboard resources of
mobile actuators in the following two aspects: (1) it adopts a finite-horizon
optimization scheme that characterizes the resource limitation more precisely
than the alternative approaches that do not specify a terminal time, such as
an infinite-horizon optimization or Lyapunov-based method; and (2) it provides
an intermediate step for the optimization problem that characterizes the
limited resources as inequality constraints, because the constraints can be
used to augment the cost function and turned into the proposed form using the
method of Lagrange multipliers. Potential applications of this work include
forest firefighting using unmanned aerial vehicles and oil spill removal or
harmful algae containment using autonomous skimmer boats. A preliminary
version of this paper [4] considered controlling a 1D diffusion process by a
team of mobile actuators. The results in this paper extend the controlled
process in [4] to a 2D diffusion-advection process and generalizes the
mobility cost therein. Furthermore, the proofs of the existence of a solution
and convergence of the approximated optimal solution are presented for the
first time in this paper. The results for a dual estimation framework can be
found in [5].
The paper adopts the following notation. The symbols $\mathbb{R}$,
$\mathbb{R}^{+}$, and $\mathbb{N}$ denote the set of real numbers, nonnegative
real numbers, and nonnegative integers, respectively. The boundary of a set
$M$ is denoted by $\partial M$. The $n$-nary Cartesian power of a set $M$ is
denoted by $M^{n}$. A continuous embedding is denoted by $\hookrightarrow$. We
use $\left|\cdot\right|$ and $\left\lVert\cdot\right\rVert$ for the norm
defined on a finite- and infinite-dimensional space, respectively, with
subscript indicating type. The superscript ∗ denotes an optimal variable or an
optimal value, whereas ⋆ denotes the adjoint of a linear operator. The
transpose of a matrix $A$ is denoted by $A^{\top}$. An $n\times n$-dimensional
identity matrix is denoted by $I_{n}$. We denote by $0_{n\times m}$ and
$1_{n\times m}$ an $n\times m$-dimensional matrix with all entries being $0$
and $1$, respectively. The term guidance refers to the steering of the mobile
actuators, whereas the term control refers to the control input to the DPS.
For an optimization problem (P0) that minimizes cost function $J(\cdot)$ over
variable $x$ subject to constraints, we use $J_{\text{(P0)}}(x)$ to denote the
cost function of (P0) evaluated at $x$. Specifically, $J^{*}_{\text{(P0)}}(x)$
indicates that the optimal value of (P0) is attained when the cost function is
evaluated at $x$.
Section II introduces relevant mathematical background, including
representation of a diffusion-advection equation by an infinite-dimensional
system, the associated LQ optimal control, and its finite-dimensional
approximation. Section III introduces the proposed optimization problem and
its equivalent problem. Conditions for the existence of a solution are stated.
Section IV details the computation of an optimal solution using finite-
dimensional approximations. Conditions for the convergence of the approximate
optimal solution to the exact optimal solution are stated. A gradient-
based method is applied to find an optimal solution. Section V provides two
numerical examples to illustrate optimal guidance and control solved by the
proposed method. Section VI summarizes the paper and discusses ongoing work.
## 2 Background
This paper is motivated by the problem of controlling the following diffusion-
advection process on a two-dimensional spatial domain
$\Omega=[0,1]\times[0,1]$ with a team of $m_{a}$ mobile actuators:
$\displaystyle\frac{\partial z(x,y,t)}{\partial t}=a\nabla^{2}z(x,y,t)-\mathbf{v}\cdot\nabla z(x,y,t)+\sum_{i=1}^{m_{a}}(\mathcal{B}_{i}u_{i})(x,y,t),$ (1)
$z(\cdot,\cdot,t)|_{\partial\Omega}=0,$ (2)
$z(x,y,0)=z_{0}(x,y),$ (3)
where $z(\cdot,\cdot,t)$ is the state at time $t$,
$\mathbf{v}\in\mathbb{R}^{2}$ is the velocity field for advection, and $u_{i}$
is the control implemented by actuator $i$, with the actuation characterized
spatially by $\mathcal{B}_{i}$. The state $z$ lives in the state space
$L^{2}(\Omega)$. A representative model of the actuation dispensed by each
actuator is Gaussian-shaped and centered at the actuator $i$’s location
$(x_{i},y_{i})$ with a bounded support such that
$\mathcal{B}_{i}(x,y)=\begin{cases}\dfrac{1}{2\pi\sigma_{i}^{2}}\exp\left(-\dfrac{(x-x_{i})^{2}}{\sigma_{i}^{2}}-\dfrac{(y-y_{i})^{2}}{\sigma_{i}^{2}}\right)&\text{if }|x-x_{i}|\leq\sigma_{i}\text{ and }|y-y_{i}|\leq\sigma_{i},\\ 0&\text{otherwise},\end{cases}$ (4)
where the parameter $\sigma_{i}$ determines the spatial influence of the
actuation, which is concentrated mostly at the location of the actuator and
disperses to the surrounding with an exponential decay.
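To make the actuation model concrete, the following minimal sketch evaluates the compactly supported Gaussian shape (4) on a grid over $\Omega$; the grid resolution and the actuator parameters are illustrative choices, not values taken from the experiments.

```python
import numpy as np

def actuation_shape(x, y, xi, yi, sigma):
    """Evaluate the compactly supported Gaussian actuation (4) for an
    actuator at (xi, yi) with spatial-influence parameter sigma."""
    inside = (np.abs(x - xi) <= sigma) & (np.abs(y - yi) <= sigma)
    bump = np.exp(-((x - xi) ** 2 + (y - yi) ** 2) / sigma ** 2) / (2 * np.pi * sigma ** 2)
    return np.where(inside, bump, 0.0)

# Example: one actuator at (0.3, 0.6) with sigma = 0.05 on a 101 x 101 grid.
X, Y = np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 1, 101))
B1 = actuation_shape(X, Y, xi=0.3, yi=0.6, sigma=0.05)
```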
### 2.1 Dynamics of the mobile actuators
Assume the mobile actuators have linear dynamics, so that the dynamics of
actuator $i$ are
$\dot{\xi}_{i}(t)=\alpha_{i}\xi_{i}(t)+\beta_{i}p_{i}(t),\quad\xi_{i}(0)=\xi_{i,0},$
(5)
where $\xi_{i}(t)\in\mathbb{R}^{n}$ $(n\geq 2)$ and $p_{i}(t)\in
P_{i}\subset\mathbb{R}^{m}$ are the state and guidance at $t$, respectively.
Assume that system (5) is controllable. The first two elements of $\xi_{i}(t)$
are the horizontal and vertical position, $x_{i}(t)$ and $y_{i}(t)$, of the
actuator in the 2D domain. One special case would be a single integrator,
where $\xi_{i}(t)\in\mathbb{R}^{2}$ is the position,
$p_{i}(t)\in\mathbb{R}^{2}$ is the velocity command, $\alpha_{i}=0_{2\times
2}$, and $\beta_{i}=I_{2}$.
For conciseness, we concatenate the states and guidance of all actuators,
respectively, and use one dynamical equation to characterize the dynamics of
all agents:
$\dot{\xi}(t)=\alpha\xi(t)+\beta p(t),\quad\xi(0)=\xi_{0},$ (6)
where matrices $\alpha$ and $\beta$ are assembled from $\alpha_{i}$ and
$\beta_{i}$ for $i\in\\{1,2,\dots,m_{a}\\}$, respectively, and are consistent
with the concatenation for $\xi$ and $p$. With a slight abuse of notation, we
use $n$ for the dimension of $\xi(t)$ and $m$ for the dimension of $p(t)$.
Define the admissible set of guidance $P:=P_{1}\times P_{2}\times\dots\times
P_{m_{a}}$ such that $p(t)\in P$ for $t\in[0,t_{f}]$. Let
$M\in\mathbb{R}^{2m_{a}\times n}$ be a matrix such that $M\xi(t)$ is a vector
of locations of the actuators.
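As an illustration, the following sketch assembles the concatenated matrices $\alpha$, $\beta$, and $M$ of (6) for a team of single integrators; the team size is an arbitrary example value.

```python
import numpy as np
from scipy.linalg import block_diag

m_a = 4                                # example team size
alpha_i = np.zeros((2, 2))             # single integrator: alpha_i = 0_{2x2}
beta_i = np.eye(2)                     # guidance p_i is the velocity command

# Block-diagonal concatenation of the per-actuator dynamics into (6).
alpha = block_diag(*[alpha_i] * m_a)
beta = block_diag(*[beta_i] * m_a)

# Each actuator state is exactly its position here, so M is the identity.
M = np.eye(2 * m_a)
```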
### 2.2 Abstract linear system and linear-quadratic regulation
To describe the dynamics of PDE (1)–(3), consider the following abstract
linear system:
$\dot{\mathcal{Z}}(t)=\mathcal{A}\mathcal{Z}(t)+\mathcal{B}(M\xi(t),t)u(t),\qquad\mathcal{Z}(0)=\mathcal{Z}_{0},$
(7)
where $\mathcal{Z}(\cdot)$ is the state within state space
$\mathcal{H}:=L^{2}(\Omega)$ and $u(\cdot)$ is the control within the control
space ${u(t)\in U\subseteq\mathbb{R}^{m_{a}}}$ for $t\in[0,t_{f}]$. In the
case of diffusion-advection process (1), for $\phi\in\mathcal{H}$,
$(\mathcal{A}\phi)(x,y)=a\nabla^{2}\phi(x,y)-\mathbf{v}\cdot\nabla\phi(x,y),$
(8)
where the operator $\mathcal{A}$ has domain
$\text{Dom}(\mathcal{A})=H^{2}(\Omega)\cap H_{0}^{1}(\Omega)$. The input
operator $\mathcal{B}(M\xi(t),t)\in\mathcal{L}(U,\mathcal{H})$ is a function
of the actuator locations such that
$\mathcal{B}(M\xi(t),t)=[\mathcal{B}_{1}(M\xi_{1}(t),t),\dots,\mathcal{B}_{m_{a}}(M\xi_{m_{a}}(t),t)]^{\top}$,
where $\mathcal{B}_{i}(\cdot,t)\in L^{2}(\Omega)$ for all $t\in[0,t_{f}]$ and
$i\in\\{1,2,\dots,m_{a}\\}$. A special case is the time-invariant input
operator in (4). Since the actuator state $\xi(t)$ is a function of time $t$,
we sometimes use $\mathcal{B}(t)$ for brevity.
The operator $\mathcal{A}:\text{Dom}(\mathcal{A})\rightarrow\mathcal{H}$ is an
infinitesimal generator of a strongly continuous semigroup $\mathcal{S}(t)$ on
$\mathcal{H}$. Subsequently, the dynamical system (7) has a unique mild
solution $\mathcal{Z}\in C([0,t_{f}];\mathcal{H})$ for any
$\mathcal{Z}_{0}\in\mathcal{H}$ and any $u\in L^{2}([0,t_{f}];U)$ such that
$\mathcal{Z}(t)=\mathcal{S}(t)\mathcal{Z}_{0}+\int_{0}^{t}\mathcal{S}(t-\tau)\mathcal{B}(\xi(\tau),\tau)u(\tau)\text{d}\tau$.
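For intuition, the following sketch evaluates the mild-solution formula in a small finite-dimensional surrogate space, with the semigroup replaced by a matrix exponential; the matrices and the control signal are toy placeholders, not the paper's operators.

```python
import numpy as np
from scipy.linalg import expm

n = 4
t_grid = np.linspace(0.0, 1.0, 101)
A = -np.eye(n)                                          # toy stable generator
Z0 = np.ones(n)
B = lambda tau: np.cos(np.pi * tau) * np.ones((n, 1))   # toy input operator
u = lambda tau: np.array([np.sin(2 * np.pi * tau)])     # toy control

def mild_solution(t):
    """Z(t) = e^{At} Z0 + int_0^t e^{A(t-tau)} B(tau) u(tau) dtau."""
    ts = t_grid[t_grid <= t]
    integrand = np.array([expm(A * (t - tau)) @ B(tau) @ u(tau) for tau in ts])
    quad = np.trapz(integrand, ts, axis=0) if len(ts) > 1 else 0.0
    return expm(A * t) @ Z0 + quad

Z_half = mild_solution(0.5)
```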
Similar to a finite-dimensional linear system, a linear-quadratic regulator
(LQR) problem can be formulated with respect to (7), which looks for a control
$u(\cdot)\in L^{2}([0,t_{f}];U)$ that minimizes the following quadratic cost:
$\displaystyle J(\mathcal{Z},u):=\int_{0}^{t_{f}}\langle\mathcal{Z}(t),\mathcal{Q}(t)\mathcal{Z}(t)\rangle+u(t)^{\top}Ru(t)\text{d}t+\langle\mathcal{Z}(t_{f}),\mathcal{Q}_{f}\mathcal{Z}(t_{f})\rangle,$ (9)
where $\mathcal{Q}(t)\in\mathcal{L}(\mathcal{H})$ and
$\mathcal{Q}_{f}\in\mathcal{L}(\mathcal{H})$ are self-adjoint and nonnegative,
which evaluate the running cost and the terminal cost of the PDE state,
respectively. The coefficient $R$ is an $m_{a}\times m_{a}$-dimensional
symmetric and positive definite matrix that evaluates the control effort. We
refer to $J(\mathcal{Z},u)$ as the PDE cost.
Analogous to the finite-dimensional LQR, an optimal control $u^{*}$ that
minimizes the quadratic cost (9) is
$u^{*}(t)=-R^{-1}\mathcal{B}^{\star}(t)\Pi(t)\mathcal{Z}(t),$ (10)
where $\Pi$ is an operator that associates with the following backward
differential operator-valued Riccati equation:
$\dot{\Pi}(t)=-\mathcal{A}^{\star}\Pi(t)-\Pi(t)\mathcal{A}-\mathcal{Q}(t)+\Pi(t)\bar{\mathcal{B}}\bar{\mathcal{B}}^{\star}(t)\Pi(t)$ (11)
with terminal condition $\Pi(t_{f})=\mathcal{Q}_{f}$, where
$\bar{\mathcal{B}}\bar{\mathcal{B}}^{\star}(t)$ is short for
$\mathcal{B}(t)R^{-1}\mathcal{B}^{\star}(t)$. Before we proceed to state the
conditions for the existence of a unique solution of (11), we introduce the
$\mathcal{J}_{q}$-class as follows.
Denote the trace of a nonnegative operator $A\in\mathcal{L}(\mathcal{H})$ by
$\text{Tr}(A)$, where
$\text{Tr}(A):=\sum_{k=1}^{\infty}\langle\phi_{k},A\phi_{k}\rangle$ for any
orthonormal basis $\\{\phi_{k}\\}_{k=1}^{\infty}$ of $\mathcal{H}$ (the trace
is independent of the choice of basis functions). For $1\leq q<\infty$, let
$\mathcal{J}_{q}(\mathcal{H})$ denote the set of all bounded operators
$A\in\mathcal{L}(\mathcal{H})$ such that
$\text{Tr}((\sqrt{A^{\star}A})^{q})<\infty$ [3]. If
$A\in\mathcal{J}_{q}(\mathcal{H})$, then the $\mathcal{J}_{q}$-norm of $A$ is
defined as $\left\lVert
A\right\rVert_{\mathcal{J}_{q}(\mathcal{H})}:=(\text{Tr}((\sqrt{A^{\star}A})^{q}))^{1/q}<\infty$.
The classes $\mathcal{J}_{1}(\mathcal{H})$ and $\mathcal{J}_{2}(\mathcal{H})$
are known as the space of trace-class operators and the space of
Hilbert-Schmidt operators, respectively. Note that a continuous embedding
${\mathcal{J}_{q_{1}}(\mathcal{H})\hookrightarrow\mathcal{J}_{q_{2}}(\mathcal{H})}$
holds if $1\leq q_{1}<q_{2}\leq\infty$. In other words, if
$A\in\mathcal{J}_{q_{1}}(\mathcal{H})$, then
$A\in\mathcal{J}_{q_{2}}(\mathcal{H})$ and ${\left\lVert
A\right\rVert_{\mathcal{J}_{q_{2}}(\mathcal{H})}\leq\left\lVert
A\right\rVert_{\mathcal{J}_{q_{1}}(\mathcal{H})}}$.
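In finite dimensions, $\sqrt{A^{\star}A}$ has the singular values of $A$ as its eigenvalues, so the $\mathcal{J}_{q}$-norm is simply the $\ell^{q}$-norm of the singular values. A minimal numerical sketch, which also checks that the embedding $\mathcal{J}_{1}(\mathcal{H})\hookrightarrow\mathcal{J}_{2}(\mathcal{H})$ is norm-decreasing:

```python
import numpy as np

def schatten_norm(A, q):
    """Schatten J_q norm of a matrix: the l^q norm of its singular values."""
    sigma = np.linalg.svd(A, compute_uv=False)
    return np.sum(sigma ** q) ** (1.0 / q)

A = np.random.default_rng(0).standard_normal((5, 5))
tr_norm = schatten_norm(A, 1)    # trace-class norm (J_1)
hs_norm = schatten_norm(A, 2)    # Hilbert-Schmidt norm (J_2)
assert hs_norm <= tr_norm        # ||A||_{J_2} <= ||A||_{J_1}
```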
The existence of a mild solution of (11) is established via Lemma 1. We omit
the proof of this lemma because it is a direct consequence of [3, Theorem
3.6].
Consider the following assumptions with $1\leq q<\infty$:
1. (A1)
$\mathcal{Q}_{f}\in\mathcal{J}_{q}(\mathcal{H})$ and $\mathcal{Q}_{f}$ is
nonnegative.
2. (A2)
$\mathcal{Q}(\cdot)\in L^{1}([0,t_{f}];\mathcal{J}_{q}(\mathcal{H}))$ and
$\mathcal{Q}(t)$ is nonnegative for all $t\in[0,t_{f}]$.
3. (A3)
$\bar{\mathcal{B}}\bar{\mathcal{B}}^{\star}(\cdot)\in
L^{\infty}([0,t_{f}];\mathcal{L}(\mathcal{H}))$ and
$\bar{\mathcal{B}}\bar{\mathcal{B}}^{\star}(t)$ is nonnegative for
$t\in[0,t_{f}]$.
###### Lemma 1.
Let $\mathcal{H}$ be a separable Hilbert space and let $\mathcal{S}(t)$ be a
strongly continuous semigroup on $\mathcal{H}$. Suppose assumptions (A1)–(A3)
hold. Then, the equation
$\Pi(t)=\mathcal{S}^{\star}(t_{f}-t)\mathcal{Q}_{f}\mathcal{S}(t_{f}-t)+\int_{t}^{t_{f}}\mathcal{S}^{\star}(\tau-t)\left(\mathcal{Q}(\tau)-\Pi(\tau)\bar{\mathcal{B}}\bar{\mathcal{B}}^{\star}(\tau)\Pi(\tau)\right)\mathcal{S}(\tau-t)\text{d}\tau$ (12)
provides a unique mild solution to (11) in the space
$L^{2}([0,t_{f}];\mathcal{J}_{2q}(\mathcal{H}))$. The solution also belongs to
$C([0,t_{f}];$ $\mathcal{J}_{q}(\mathcal{H}))$ and is pointwise self-adjoint
and nonnegative. Furthermore, if $\mathcal{Q}(\cdot)\in
C([0,t_{f}];\mathcal{J}_{q}(\mathcal{H}))$ and
$\bar{\mathcal{B}}\bar{\mathcal{B}}^{\star}(\cdot)\in
C([0,t_{f}];\mathcal{L}(\mathcal{H}))$, then $\Pi$ is a weak solution to (11).
The equality introduced next in Lemma 2 allows for turning the optimal
quadratic PDE cost into a quadratic term associated with the initial condition
of the PDE and the Riccati operator. We state it without proof because it can
be established by integrating
$\text{d}\langle\mathcal{Z}(t),\Pi(t)\mathcal{Z}(t)\rangle/\text{d}t$ from $0$
to $t_{f}$; the differentiability of
$\langle\mathcal{Z}(t),\Pi(t)\mathcal{Z}(t)\rangle$ is proven in [7, Theorem
6.1.9].
###### Lemma 2.
Suppose $\Pi(t)$ is a mild solution to (11), given by (12). For every
$\mathcal{Z}_{0}\in\mathcal{H}$, the optimal PDE cost (9) satisfies the
equality
$J(\mathcal{Z}^{*},u^{*})=\langle\mathcal{Z}_{0},\Pi(0)\mathcal{Z}_{0}\rangle$,
where $\mathcal{Z}^{*}$ is the state that follows the dynamics (7) under
optimal control $u^{*}$ of (10), and $\Pi(0)$ is the solution (12) evaluated
at $t=0$.
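For completeness, the formal computation behind Lemma 2 is the following (it is made rigorous in the weak sense in [7]): substituting $u^{*}=-R^{-1}\mathcal{B}^{\star}\Pi\mathcal{Z}$ into the dynamics and using the Riccati equation (11),
$\frac{\text{d}}{\text{d}t}\langle\mathcal{Z},\Pi\mathcal{Z}\rangle=\langle\mathcal{Z},(\mathcal{A}^{\star}\Pi+\dot{\Pi}+\Pi\mathcal{A}-2\Pi\bar{\mathcal{B}}\bar{\mathcal{B}}^{\star}\Pi)\mathcal{Z}\rangle=-\langle\mathcal{Z},\mathcal{Q}\mathcal{Z}\rangle-(u^{*})^{\top}Ru^{*}.$
Integrating from $0$ to $t_{f}$ and using $\Pi(t_{f})=\mathcal{Q}_{f}$ gives $\langle\mathcal{Z}(t_{f}),\mathcal{Q}_{f}\mathcal{Z}(t_{f})\rangle-\langle\mathcal{Z}_{0},\Pi(0)\mathcal{Z}_{0}\rangle=-\int_{0}^{t_{f}}\langle\mathcal{Z},\mathcal{Q}\mathcal{Z}\rangle+(u^{*})^{\top}Ru^{*}\,\text{d}t$, which rearranges to the stated identity.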
The following assumption is vital to the main results in this paper.
1. (A4)
The input operator $\mathcal{B}_{i}(x,t)$ is continuous with respect to
location $x\in\mathbb{R}^{2}$ [2, Definition 4.5], that is, there exists a
continuous function $l:\mathbb{R}^{+}\rightarrow\mathbb{R}^{+}$ such that
$l(0)=0$ and
$\left\lVert\mathcal{B}_{i}(x,t)-\mathcal{B}_{i}(y,t)\right\rVert_{L^{2}(\Omega)}\leq
l(|x-y|_{2})$ for all $t\in[0,t_{f}]$, all $x,y\in\mathbb{R}^{2}$, and all
$i\in\\{1,2,\dots,m_{a}\\}$.
The actuators’ locations determine where the input is actuated and,
furthermore, how $\Pi(\cdot)$ evolves through (12). Since the input operator
$\mathcal{B}(\cdot,t)$ is a mapping of the actuators’ locations at time $t$,
the composite input operator
$\bar{\mathcal{B}}\bar{\mathcal{B}}^{\star}(\cdot)$ is a mapping of the
actuator state in $[0,t_{f}]$ and so is $\Pi(0)$ by (12), although the
actuator state is not explicitly reflected in the notation of
$\bar{\mathcal{B}}\bar{\mathcal{B}}^{\star}(\cdot)$ or $\Pi(0)$. Hence, we can
define the optimal PDE cost
$\langle\mathcal{Z}_{0},\Pi(0)\mathcal{Z}_{0}\rangle$ as a mapping of the
actuator state. Let $K:C([0,t_{f}];\mathbb{R}^{n})\rightarrow\mathbb{R}^{+}$
be such that $K(\xi):=\langle\mathcal{Z}_{0},\Pi(0)\mathcal{Z}_{0}\rangle$,
where $\Pi(0)$ is induced by the actuator trajectory $\xi$.
Assumption (A4) plays an important role in yielding the continuity of the
mapping $K(\cdot)$ stated below in Lemma 3, whose proof is in the
supplementary material.
###### Lemma 3.
Suppose $\mathcal{Z}_{0}\in\mathcal{H}$. Let assumptions (A1)–(A3) hold with
$q=1$ and $\Pi\in C([0,t_{f}];\mathcal{J}_{1}(\mathcal{H}))$ be defined as in
(12). If assumption (A4) holds, then the mapping
$K:C([0,t_{f}];\mathbb{R}^{n})\rightarrow\mathbb{R}^{+}$ such that
$K(\xi):=\langle\mathcal{Z}_{0},\Pi(0)\mathcal{Z}_{0}\rangle$ is continuous.
Approximations to (7) and (12) permit numerical computation. Consider a
finite-dimensional subspace $\mathcal{H}_{N}\subset\mathcal{H}$ with dimension
$N$. The inner product and norm of $\mathcal{H}_{N}$ are inherited from that
of $\mathcal{H}$. Let $P_{N}:\mathcal{H}\to\mathcal{H}_{N}$ denote the
orthogonal projection of $\mathcal{H}$ onto $\mathcal{H}_{N}$. Let
$Z_{N}(t):=P_{N}\mathcal{Z}(t)$ and $S_{N}(t):=P_{N}\mathcal{S}(t)P_{N}$
denote the finite-dimensional approximation of $\mathcal{Z}(t)$ and
$\mathcal{S}(t)$, respectively. A finite-dimensional approximation of (7) is
$\displaystyle\dot{Z}_{N}(t)=$ $\displaystyle\
A_{N}Z_{N}(t)+B_{N}(M\xi(t),t)u(t),$ (13) $\displaystyle Z_{N}(0)=$
$\displaystyle\ Z_{0,N}:=P_{N}\mathcal{Z}_{0},$ (14)
where $A_{N}\in\mathcal{L}(\mathcal{H}_{N})$ and
$B_{N}(M\xi(t),t)\in\mathcal{L}(U,\mathcal{H}_{N})$ are approximations of
$\mathcal{A}$ and $\mathcal{B}(M\xi(t),t)$, respectively. Since the actuator
state $\xi(t)$ is a function of time $t$, we sometimes use $B_{N}(t)$ for
brevity. Correspondingly, the finite-dimensional approximation of (12) is
$\Pi_{N}(t)=S^{\star}_{N}(t_{f}-t)Q_{f,N}S_{N}(t_{f}-t)+\int_{t}^{t_{f}}S^{\star}_{N}(\tau-t)\left(Q_{N}(\tau)-\Pi_{N}(\tau)\bar{B}_{N}\bar{B}_{N}^{\star}(\tau)\Pi_{N}(\tau)\right)S_{N}(\tau-t)\text{d}\tau,$ (15)
where $Q_{N}=P_{N}\mathcal{Q}P_{N}$, $Q_{f,N}=P_{N}\mathcal{Q}_{f}P_{N}$, and
$\bar{B}_{N}\bar{B}_{N}^{\star}(\tau)$ is short for
$B_{N}(\tau)R^{-1}B_{N}^{\star}(\tau)$.
The optimal control $u_{N}^{*}$ that minimizes the approximated PDE cost
$J_{N}(Z_{N},u_{N}):=\langle Z_{N}(t_{f}),Q_{f,N}Z_{N}(t_{f})\rangle+\int_{0}^{t_{f}}\langle Z_{N}(t),Q_{N}(t)Z_{N}(t)\rangle+u_{N}^{\top}(t)Ru_{N}(t)\text{d}t$ (16)
is analogous to (10):
$u_{N}^{*}=-R^{-1}B_{N}^{\star}(t)\Pi_{N}(t)Z_{N}(t),$ (17)
where $\Pi_{N}(t)$ is a solution of (15).
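As a computational illustration, the following sketch backward-integrates the differential form of the Riccati equation on a toy finite-dimensional system and evaluates the feedback (17); the matrices are illustrative placeholders, and the input matrix is taken time-invariant here, unlike the mobile-actuator case where $B_{N}$ depends on $\xi(t)$.

```python
import numpy as np
from scipy.integrate import solve_ivp

N2, m_a, t_f = 9, 4, 1.0                      # toy dimensions
rng = np.random.default_rng(1)
A_N = -np.diag(rng.uniform(1.0, 5.0, N2))     # stable, diffusion-like modes
B_N = rng.standard_normal((N2, m_a))
Q_N, Q_fN, R = np.eye(N2), np.eye(N2), 0.1 * np.eye(m_a)
R_inv = np.linalg.inv(R)

def riccati_rhs(P_flat):
    """Differential Riccati equation, cf. (11), in finite dimensions."""
    P = P_flat.reshape(N2, N2)
    dP = -(A_N.T @ P + P @ A_N + Q_N - P @ B_N @ R_inv @ B_N.T @ P)
    return dP.ravel()

# Substitute s = t_f - t to integrate backward from Pi(t_f) = Q_fN.
sol = solve_ivp(lambda s, P: -riccati_rhs(P), (0.0, t_f),
                Q_fN.ravel(), dense_output=True)
Pi_0 = sol.sol(t_f).reshape(N2, N2)           # Pi_N(0)

def u_star(t, Z_N):
    """Optimal feedback (17): u = -R^{-1} B_N^T Pi_N(t) Z_N."""
    Pi_t = sol.sol(t_f - t).reshape(N2, N2)
    return -R_inv @ B_N.T @ Pi_t @ Z_N
```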
The following assumptions are associated with the approximations:
1. (A5)
Both $\mathcal{Q}_{f}$ and sequence $\\{Q_{f,N}\\}_{N=1}^{\infty}$ are
elements of $\mathcal{J}_{q}(\mathcal{H})$. Both $\mathcal{Q}_{f}$ and
$Q_{f,N}$ are nonnegative for all $N\in\mathbb{N}$ and
$\left\lVert\mathcal{Q}_{f}-Q_{f,N}\right\rVert_{\mathcal{J}_{q}(\mathcal{H})}\rightarrow
0$ as $N\rightarrow\infty$.
2. (A6)
Both $\mathcal{Q}(\cdot)$ and sequence $\\{Q_{N}(\cdot)\\}_{N=1}^{\infty}$ are
elements of $L^{1}([0,t_{f}];\mathcal{J}_{q}(\mathcal{H}))$. Both
$\mathcal{Q}(\tau)$ and $Q_{N}(\tau)$ are nonnegative for all
$\tau\in[0,t_{f}]$ and all $N\in\mathbb{N}$ and satisfy
$\int_{0}^{t}\left\lVert\mathcal{Q}(\tau)-Q_{N}(\tau)\right\rVert_{\mathcal{J}_{q}(\mathcal{H})}\text{d}\tau\rightarrow
0$ for all $t\in[0,t_{f}]$ as $N\rightarrow\infty$.
3. (A7)
Both $\bar{\mathcal{B}}\bar{\mathcal{B}}^{\star}(\cdot)$ and sequence
$\\{\bar{B}_{N}\bar{B}_{N}^{\star}(\cdot)\\}_{N=1}^{\infty}$ are elements of
$L^{\infty}([0,t_{f}];\mathcal{L}(\mathcal{H}))$. Both
$\bar{\mathcal{B}}\bar{\mathcal{B}}^{\star}(t)$ and
$\bar{B}_{N}\bar{B}_{N}^{\star}(t)$ are nonnegative for all $t\in[0,t_{f}]$
and all $N\in\mathbb{N}$ and satisfy
$\underset{t\in[0,t_{f}]}{\operatorname*{ess\,sup}}\left\lVert\bar{\mathcal{B}}\bar{\mathcal{B}}^{\star}(t)-\bar{B}_{N}\bar{B}_{N}^{\star}(t)\right\rVert_{\text{op}}\rightarrow
0$ (18)
as $N\rightarrow\infty$ ($\left\lVert\cdot\right\rVert_{\text{op}}$ denotes
the operator norm).
Note that the assumptions (A1), (A2), and (A3) are contained in (A5), (A6),
and (A7), respectively.
The next theorem states the convergence of an approximate solution of the
Riccati equation, which is reproduced from [3, Theorem 3.5] and hence stated
without a proof.
###### Theorem 4.
Suppose $\mathcal{S}(t)$ is a strongly continuous semigroup of linear
operators over a Hilbert space $\mathcal{H}$ and that $\\{S_{N}(t)\\}$ is a
sequence of uniformly continuous semigroups over the same Hilbert space that
satisfies, for each $\phi\in\mathcal{H}$,
$\left\lVert\mathcal{S}(t)\phi-S_{N}(t)\phi\right\rVert\rightarrow
0,\quad\left\lVert\mathcal{S}^{\star}(t)\phi-
S_{N}^{\star}(t)\phi\right\rVert\rightarrow 0$ (19)
as $N\rightarrow\infty$, uniformly in $[0,t_{f}]$. Suppose assumptions
(A5)–(A7) hold. If $\Pi(\cdot)\in C([0,t_{f}];\mathcal{J}_{q}(\mathcal{H}))$
is a solution of (12) and $\Pi_{N}(\cdot)\in
C([0,t_{f}];\mathcal{J}_{q}(\mathcal{H}))$ is the sequence of solutions of
(15), then
$\underset{t\in[0,t_{f}]}{\sup}\left\lVert\Pi(t)-\Pi_{N}(t)\right\rVert_{\mathcal{J}_{q}(\mathcal{H})}\rightarrow
0$ (20)
as $N\rightarrow\infty$.
The following assumption and lemma are analogous to (A4) and Lemma 3,
respectively:
1. (A8)
The approximated input operator $B_{i,N}(x,t)$ is continuous with respect to
location $x\in\mathbb{R}^{2}$, that is, there exists a continuous function
$l_{N}:\mathbb{R}^{+}\rightarrow\mathbb{R}^{+}$ such that $l_{N}(0)=0$ and
$\left\lVert B_{i,N}(x,t)-B_{i,N}(y,t)\right\rVert_{L^{2}(\Omega)}\leq
l_{N}(|x-y|_{2})$ for all $t\in[0,t_{f}]$, all $x,y\in\mathbb{R}^{2}$, and all
$i\in\\{1,2,\dots,m_{a}\\}$.
Similar to the mapping $K(\cdot)$ in Lemma 3, the optimal approximated PDE
cost can be characterized as a mapping of the actuator state through (15),
where the continuity is established in Lemma 5, whose proof is in the
supplementary material.
###### Lemma 5.
Suppose $Z_{0,N}\in\mathcal{H}_{N}$. Let assumptions (A5)–(A7) hold and
$\Pi_{N}(t)$ be defined as in (15). If assumption (A8) holds, then the mapping
$K_{N}:C([0,t_{f}];\mathbb{R}^{n})\rightarrow\mathbb{R}^{+}$ such that
$K_{N}(\xi):=\langle Z_{0,N},\Pi_{N}(0)Z_{0,N}\rangle$ is continuous.
## 3 Problem formulation
This paper seeks to derive the guidance and control input of each actuator
such that the state $\mathcal{Z}$ of the abstract linear system (7) can be
driven to zero. Specifically, consider the following problem:
$\displaystyle\underset{\begin{subarray}{c}u\in L^{2}([0,t_{f}];U)\\ p\in L^{2}([0,t_{f}];P)\end{subarray}}{\text{minimize}}\ J(\mathcal{Z},u)+J_{\text{m}}(\xi,p)$ (P)
subject to $\dot{\mathcal{Z}}(t)=\mathcal{A}\mathcal{Z}(t)+\mathcal{B}(t)u(t),\ \mathcal{Z}(0)=\mathcal{Z}_{0},$ and $\dot{\xi}(t)=\alpha\xi(t)+\beta p(t),\ \xi(0)=\xi_{0},$
where
$J_{\text{m}}(\xi,p):=\int_{0}^{t_{f}}h(\xi(t),t)+g(p(t),t)\text{d}t+h_{f}(\xi(t_{f}))$
is the cost associated with the motion of the actuators, named the mobility
cost, such that the mappings
$h:\mathbb{R}^{n}\times[0,t_{f}]\rightarrow\mathbb{R}^{+}$ and
$g:\mathbb{R}^{m}\times[0,t_{f}]\rightarrow\mathbb{R}^{+}$ evaluate the
running state cost and running guidance cost, respectively, and the mapping
$h_{f}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{+}$ evaluates the terminal state
cost.
The running state cost $h(\cdot,\cdot)$ may characterize restrictions to
actuator state. For example, a Gaussian-type function with its peak in the
center of the spatial domain, i.e.,
$h(\left[\begin{smallmatrix}x\\ y\end{smallmatrix}\right],t)=\frac{1}{2\pi\sigma_{x}(t)\sigma_{y}(t)}\exp\left(-\frac{(x-0.5)^{2}}{\sigma_{x}^{2}(t)}-\frac{(y-0.5)^{2}}{\sigma_{y}^{2}(t)}\right),$ (21)
where $\sigma_{x}(t),\sigma_{y}(t)>0$ and $x,y\in[0,1]$, can model a hazardous
field that may shorten the life span of an actuator. The integral of this
function in the interval $[0,t_{f}]$ evaluates the accumulated exposure of the
mobile actuator along its trajectory, which may need to be kept as small
as possible (see [11]). Another example is the artificial potential field
[16], cast as a soft constraint, that penalizes the trajectory when it passes
an inaccessible region such as an obstacle. The running guidance cost
$g(\cdot,\cdot)$ may be the absolute value or a quadratic function of the
guidance, which characterizes the total amount (of fuel) or energy for
steering, respectively. The terminal state cost $h_{f}(\cdot)$ may
characterize restrictions of the terminal state of the mobile actuators. For
example, if an application specifies terminal positions, then $h_{f}(\cdot)$
may be a quadratic function that penalizes the deviation of the actual
terminal positions.
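As a concrete instance, the following sketch accumulates the exposure $\int_{0}^{t_{f}}h(\xi(t),t)\,\text{d}t$ of the hazard field (21) along a sampled trajectory; the straight-line trajectory and the time-invariant widths are illustrative simplifications.

```python
import numpy as np

def hazard(x, y, sx=0.2, sy=0.2):
    """Gaussian-type hazard field (21), peaked at the center of Omega."""
    return np.exp(-((x - 0.5) ** 2 / sx ** 2 + (y - 0.5) ** 2 / sy ** 2)) \
        / (2 * np.pi * sx * sy)

# Straight path through the peak, sampled on a uniform time grid.
t = np.linspace(0.0, 1.0, 201)
x_traj, y_traj = t, 0.5 * np.ones_like(t)

# Accumulated exposure via the trapezoidal rule.
exposure = np.trapz(hazard(x_traj, y_traj), t)
```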
The formulation in (P) provides an intermediate step for minimizing the PDE
cost subject to mobility constraints, in addition to the dynamics constraints.
The mobility constraints are characterized by inequalities of $h_{f}(\cdot)$
and the integrals of $h(\cdot,\cdot)$ and $g(\cdot,\cdot)$, because these
constraints can be used to augment the cost function and turned into the form
of (P) using the method of Lagrange multipliers.
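For instance, with hypothetical budgets $c_{h}$, $c_{g}$, and $c_{f}$, the constrained problem of minimizing $J(\mathcal{Z},u)$ subject to $\int_{0}^{t_{f}}h(\xi(t),t)\,\text{d}t\leq c_{h}$, $\int_{0}^{t_{f}}g(p(t),t)\,\text{d}t\leq c_{g}$, and $h_{f}(\xi(t_{f}))\leq c_{f}$ (in addition to the dynamics) is augmented with multipliers $\lambda_{h},\lambda_{g},\lambda_{f}\geq 0$ into
$J(\mathcal{Z},u)+\lambda_{h}\textstyle\int_{0}^{t_{f}}h(\xi(t),t)\,\text{d}t+\lambda_{g}\int_{0}^{t_{f}}g(p(t),t)\,\text{d}t+\lambda_{f}h_{f}(\xi(t_{f})),$
which, up to constant terms, is exactly of the form (P) with the mobility-cost functions rescaled by the multipliers.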
An equivalent problem of (P) can be derived using Lemma 2. For an arbitrary
admissible guidance $p$, the actuator trajectory $\xi$ is determined following
the dynamics (6), which also determines the input operator
$\mathcal{B}(\xi(\cdot),\cdot)$. By Lemma 2, the control $u$ that minimizes
the cost function of (P)—specifically, the PDE cost $J(\mathcal{Z},u)$—is
given by (10), and the minimum PDE cost is
$\langle\mathcal{Z}_{0},\Pi(0)\mathcal{Z}_{0}\rangle$, where $\Pi(0)$ is the
mild solution of (12) with actuator trajectory steered by guidance $p$. Hence,
we derive the following problem equivalent to (P):
$\displaystyle\underset{p\in L^{2}([0,t_{f}];P)}{\text{minimize}}\ \langle\mathcal{Z}_{0},\Pi(0)\mathcal{Z}_{0}\rangle+J_{\text{m}}(\xi,p)$ (P1)
subject to $\dot{\xi}(t)=\alpha\xi(t)+\beta p(t),\ \xi(0)=\xi_{0},$
where $\Pi(0)$ is defined in (12) with $t=0$.
To prove the existence of a solution to (P1), we make the following
assumptions on the admissible set of guidance and the functions composing the
mobility cost:
1. (A9)
The set of admissible guidance $P\subset\mathbb{R}^{m}$ is closed and convex.
2. (A10)
The mappings $h:\mathbb{R}^{n}\times[0,t_{f}]\rightarrow\mathbb{R}^{+}$,
$g:\mathbb{R}^{m}\times[0,t_{f}]\rightarrow\mathbb{R}^{+}$, and
$h_{f}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{+}$ are continuous. For every
$t\in[0,t_{f}]$, the function $g(\cdot,t)$ is convex.
3. (A11)
There exists a constant $d_{1}>0$ with $g(p,t)\geq d_{1}|p|_{2}^{2}$ for all
$(p,t)\in P\times[0,t_{f}]$.
Assumptions (A9)–(A11) are generally satisfied in applications with vehicles
carrying the actuators. Assumption (A9) is a physically reasonable
characterization of the steering of a vehicle, where the admissible steering
is generally a continuum with attainable limits within its range. Assumption
(A10) places a general continuity requirement on the cost functions and a
convexity requirement on the steering cost function. Assumption (A11) requires
the function $g(p,t)$ to be bounded below by a quadratic function of the
guidance $p$ for all $t$, which is generally satisfied, e.g., with $g$ itself
being a quadratic function of $p$. These assumptions are applied in Theorem 6
below regarding the existence of a solution of (P1), whose proof is in
Appendix A. Subsequently, the solution to (P1) can be used to reconstruct the
solutions to (P), which is stated in Theorem 7 with its proof in Appendix B.
###### Theorem 6.
Consider problem (P1) and let assumptions (A1)–(A4) and (A9)–(A11) hold. Then
(P1) has a solution.
###### Theorem 7.
Consider problems (P) and (P1). Let assumptions (A4) and (A9)–(A11) hold. Let
$p^{*}$ be the optimal solution of (P1) and $u^{*}$ be the optimal control
obtained from (10) with actuator trajectory steered by $p^{*}$. Then $u^{*}$
and $p^{*}$ minimize problem (P).
The equivalent problem (P1) allows us to search for an optimal guidance $p$
such that the mobility cost plus the optimal PDE cost is minimized. The
control is no longer an optimization variable, because it is determined by the
LQR of the abstract linear system for arbitrary trajectories of the mobile
actuators.
## 4 Computation of optimal control and guidance
Approximation of the infinite-dimensional terms in problem (P) is necessary
when computing the optimal control and guidance. Hence, we replace the PDE
cost and dynamics of (P) by (16) and (13), respectively, and obtain the
following approximate problem (AP):
$\displaystyle\underset{\begin{subarray}{c}u\in L^{2}([0,t_{f}];U)\\ p\in L^{2}([0,t_{f}];P)\end{subarray}}{\text{minimize}}\ J_{N}(Z_{N},u)+J_{\text{m}}(\xi,p)$ (AP)
subject to $\dot{Z}_{N}(t)=A_{N}Z_{N}(t)+B_{N}(M\xi(t),t)u(t),\ Z_{N}(0)=Z_{0,N},$ and $\dot{\xi}(t)=\alpha\xi(t)+\beta p(t),\ \xi(0)=\xi_{0}.$
Similar to (P), problem (AP) can be turned into an equivalent form using LQR
results for a finite-dimensional system:
$\displaystyle\underset{p\in L^{2}([0,t_{f}];P)}{\text{minimize}}\ \langle Z_{0,N},\Pi_{N}(0)Z_{0,N}\rangle+J_{\text{m}}(\xi,p)$ (AP1)
subject to $\dot{\xi}(t)=\alpha\xi(t)+\beta p(t),\ \xi(0)=\xi_{0},$
where $\Pi_{N}(0)$ is defined in (15) with $t=0$. Analogous to Theorems 6 and
7, the existence of a solution of (AP1) and how to use its solution to
reconstruct a solution for (AP) are stated in Theorem 8 below, whose proof is
presented in Appendix C.
###### Theorem 8.
Consider problem (AP1) and let assumptions (A5)–(A8) and (A9)–(A11) hold. Then
(AP1) has a solution, denoted by $p_{N}^{*}$. Let $u_{N}^{*}$ be the optimal
control obtained from (17) with actuator trajectory steered by $p_{N}^{*}$.
Then $u_{N}^{*}$ and $p_{N}^{*}$ minimize problem (AP).
An extension of Theorem 8 is that an optimal feedback control can be obtained
from (17) whenever the optimal guidance is solved from (AP) or (AP1): once the
trajectory is determined by the optimal guidance, a feedback control can be
implemented.
To establish convergence of (AP1)'s solution to the solution of (P1), we need
to restrict the set of admissible guidance to a smaller set, introduced below
in assumption (A12).
1. (A12)
There exist $p_{\max}>0$ and $a_{\max}>0$ such that the set of admissible
guidance is $\mathcal{P}(p_{\max},a_{\max}):=\\{p\in C([0,t_{f}];P):\ |p(t)|\leq p_{\max}\ \forall t\in[0,t_{f}],\ |p(t_{1})-p(t_{2})|\leq a_{\max}|t_{1}-t_{2}|\ \forall t_{1},t_{2}\in[0,t_{f}]\\}$.
Assumption (A12) can be interpreted from two perspectives.
Mathematically, (A12) requires the admissible guidance to be a continuous
function that is uniformly bounded and uniformly equicontinuous. These two
properties yield the sequential compactness of the set
$\mathcal{P}(p_{\max},a_{\max})$ by the Arzelà-Ascoli Theorem [30].
Practically, (A12) requires the input signal to be continuous and have bounds
$p_{\max}$ and $a_{\max}$ on the magnitude and the rate of change,
respectively. This requirement is reasonable and checkable because a
continuous signal is commonly used for smooth operation, and the bounds on
magnitude and changing rate are due to the physical limits of the motion of a
platform. For example, in the case of single integrator dynamics where $p$ is
the velocity command, $p_{\max}$ and $a_{\max}$ refer to the maximum speed and
maximum acceleration, respectively. Moreover, since time discretization of
the signal is applied when computing the optimal guidance, once the
bound $p_{\max}$ on the magnitude of the signal is determined, the
changing rate is bounded by $a_{\max}:=2p_{\max}/\Delta t_{\min}$ for the
smallest discrete interval length $\Delta t_{\min}$. Theorem 9 below states
the convergence of the approximate optimal solution with its proof in Appendix
D.
###### Theorem 9.
Consider problem (P1) and its finite-dimensional approximation (AP1). Let
assumptions (A4)–(A12) hold and let $p^{*}$ and $p_{N}^{*}$ denote the optimal
guidance of (P1) and (AP1), respectively. Then
$\lim_{N\rightarrow\infty}|J^{*}_{\text{(AP1)}}(p_{N}^{*})-J^{*}_{\text{(P1)}}(p^{*})|=0.$ (22)
Furthermore, the cost function of (P1) evaluated at the guidance $p_{N}^{*}$
converges to the optimal cost of (P1):
$\lim_{N\rightarrow\infty}|J_{\text{(P1)}}(p_{N}^{*})-J^{*}_{\text{(P1)}}(p^{*})|=0.$ (23)
###### Remark 10.
Two implications of Theorem 9 follow. First, (22) implies that the optimal
cost of the approximated problem (AP1) converges to that of the exact problem
(P1), which justifies the approximation in (AP). Second, (23) implies that the
approximate optimal guidance $p_{N}^{*}$, when evaluated by the cost function
of (P1), yields a cost that is arbitrarily close to the exact optimal cost of
(P1). Since $p_{N}^{*}$ is computable and $p^{*}$ is not, the convergence in
(23) qualifies $p_{N}^{*}$ as an appropriate optimal guidance.
The convergence stated in Theorem 9 is established based on several earlier
stated results, including
1. the input operator’s continuity with respect to location (assumption (A4)), which leads to the continuity of the PDE cost with respect to the actuator trajectory (Lemma 3);
2. the existence of the Riccati operator (Lemma 1) and the convergence of its approximation (Theorem 4); and
3. the sequential compactness of the set of admissible guidance (assumption (A12)), which leads to the continuity of the cost function with respect to guidance (Lemma 12).
Notice that these key results, in an analogous manner, are also required in
[24] when establishing the convergence of the approximate optimal actuator
locations to the exact optimal locations [24, Theorem 3.5], i.e.,
1. continuity with respect to location and compactness of the input operator [24, Theorem 2.6], which lead to continuity of the Riccati operator with respect to actuator locations [24, Theorem 2.6];
2. existence of the Riccati operator [24, Theorem 2.3] and convergence of its approximation [24, Theorem 3.1]; and
3. sequential compactness of the set of admissible locations, which is inherited from the setting that the spatial domain is closed and bounded in a finite-dimensional space.
Although the establishment of convergence is similar to the one in [24], the
cost function and type of Riccati equation are different: we have the
quadratic PDE cost plus generic mobility cost and differential Riccati
equation in this paper for control and actuator guidance versus the Riccati
operator’s norm as cost function and algebraic Riccati equation in [24] for
actuator placement. The similarity comes from the infinite-dimensional nature
of PDEs such that approximation is necessary for computation, and convergence
in approximation qualifies the approximate optimal solutions.
### 4.1 Checking assumptions (A4)–(A12)
For the approximated optimal guidance to be a good proxy of the exact optimal
guidance, by Theorem 9, assumptions (A4)–(A12) have to be checked to ensure
convergence. We summarize methods for checking these assumptions here. (A4):
examine the explicit form of $\mathcal{B}$; (A5)–(A7): examine the explicit
forms of the operators $\mathcal{Q}$, $\mathcal{Q}_{f}$, and
$\bar{\mathcal{B}}\bar{\mathcal{B}}^{\star}$ and their approximations; (A8):
examine the explicit form of $B_{N}$; (A9)–(A11): examine the explicit form of
$J_{\text{m}}$; (A12): examine the bounds on the magnitude and changing rate
of the admissible guidance, as in the sketch below.
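A minimal sketch of the (A12) check on a sampled guidance signal, assuming a uniform time grid; the test signal and the bounds are illustrative.

```python
import numpy as np

def satisfies_A12(p, t, p_max, a_max):
    """Check a sampled guidance p (m x K array on time grid t) against the
    magnitude and rate-of-change bounds of assumption (A12)."""
    mag_ok = np.all(np.linalg.norm(p, axis=0) <= p_max)
    rates = np.linalg.norm(np.diff(p, axis=1), axis=0) / np.diff(t)
    return bool(mag_ok and np.all(rates <= a_max))

t = np.linspace(0.0, 1.0, 101)
p = 100.0 * np.vstack([np.sin(np.pi * t), np.cos(np.pi * t)])
print(satisfies_A12(p, t, p_max=100.0, a_max=100.0 * np.pi))   # True
```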
### 4.2 Gradient-descent for solving problem (AP)
Define the costates $\lambda(t)\in\mathcal{H}_{N}$ and
${\mu(t)\in\mathbb{R}^{n}}$ associated with $Z_{N}(t)$ and $\xi(t)$,
respectively, for $t\in[0,t_{f}]$ and define the Hamiltonian:
$\displaystyle H(Z_{N}(t),\xi(t),u(t),p(t),\lambda(t),\mu(t))=\langle Z_{N}(t),Q_{N}(t)Z_{N}(t)\rangle+u^{\top}(t)Ru(t)+h(\xi(t),t)+g(p(t),t)+\lambda^{\top}(t)\left(A_{N}Z_{N}(t)+B_{N}(M\xi(t),t)u(t)\right)+\mu^{\top}(t)\left(\alpha\xi(t)+\beta p(t)\right).$ (24)
By Pontryagin’s minimum principle [22], we can solve a two-point boundary
value problem originated from (24) to find a local minimum of (AP). The
iterative procedure for solving the two-point boundary value problem can be
implemented in a gradient-descent manner [18, 21].
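The following self-contained sketch illustrates the sweep on a scalar analogue of (AP): one surrogate PDE mode with a state-dependent input gain and one single-integrator actuator. All dynamics, weights, and step sizes are illustrative, and the projection onto the admissible set is omitted for brevity.

```python
import numpy as np

# Toy problem: Z' = a Z + b(xi) u with b(xi) = exp(-xi^2), actuator xi' = p.
a, q, qf, r, g = -1.0, 1.0, 1.0, 0.1, 0.1
t = np.linspace(0.0, 1.0, 201)
dt = t[1] - t[0]
u, p = np.zeros_like(t), np.zeros_like(t)
b = lambda xi: np.exp(-xi ** 2)
db = lambda xi: -2.0 * xi * np.exp(-xi ** 2)

for _ in range(300):                                  # gradient-descent sweeps
    # Forward sweep: Euler integration of the state and the actuator.
    Z, xi = np.empty_like(t), np.empty_like(t)
    Z[0], xi[0] = 1.0, 1.0
    for k in range(len(t) - 1):
        Z[k + 1] = Z[k] + dt * (a * Z[k] + b(xi[k]) * u[k])
        xi[k + 1] = xi[k] + dt * p[k]
    # Backward sweep: costates of the Hamiltonian, cf. (24).
    lam, mu = np.empty_like(t), np.empty_like(t)
    lam[-1], mu[-1] = 2.0 * qf * Z[-1], 0.0           # terminal conditions
    for k in range(len(t) - 1, 0, -1):
        lam[k - 1] = lam[k] + dt * (2.0 * q * Z[k] + a * lam[k])
        mu[k - 1] = mu[k] + dt * (lam[k] * db(xi[k]) * u[k])
    # Gradient step on dH/du and dH/dp.
    u = u - 0.5 * (2.0 * r * u + lam * b(xi))
    p = p - 0.5 * (2.0 * g * p + mu)
```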
## 5 Numerical examples
We demonstrate the performance of the optimal guidance and control in two
numerical examples. The first example uses the diffusion-advection process
with zero Dirichlet boundary condition (1)–(3). The second example uses the
same process but with zero Neumann boundary condition.
The examples are motivated by and simplified from practical applications,
e.g., removal of harmful algal blooms (HAB). In this case, the distribution of
the HAB’s concentration on the water surface can be modeled by a 2D diffusion-
advection process. The cases of zero Dirichlet and Neumann boundary conditions
correspond to the scenarios where the surface is surrounded by absorbent and
nonabsorbent materials, respectively. The control to the process is
implemented by the surface vehicles that use physical methods (e.g., emitting
ultrasonic waves or hauling algae filters) or chemical methods (by releasing
algal treatment) [32], whose impact on the process can be characterized by the
input operator (4). The magnitude of the control determines how fast the
concentration is reduced at the location of the actuator. The optimal control
and guidance minimize the cost such that the HAB concentration is reduced
while the vehicles do not exercise too much control nor conduct aggressive
maneuvers. The vehicles’ low-level control can track the optimal trajectories
despite the model mismatch between the dynamics of the vehicles and those
applied in the optimization problem (P).
We apply the following values in the numerical examples:
$\Omega=[0,1]\times[0,1]$, $z_{0}(x,y)=320(x-x^{2})(y-y^{2})$, $N=13$,
$m_{a}=4$, $t_{f}=1$, $\mathbf{v}=[0.1,-0.1]^{\top}$, $a=0.05$,
$U=\mathbb{R}^{4}$, $P_{i}=[-100,100]$, $p_{\max}=a_{\max}=100$, $R=0.1I_{4}$,
$\mathcal{Q}=\mathcal{Q}_{f}=\chi(x,y)$, $h(\xi(t),t)=h_{f}(\xi(t_{f}))=0$,
$g(p(t),t)=0.1p^{\top}(t)p(t)$, $\xi_{1}(0)=[0.1,0.1]^{\top}$,
$\xi_{2}(0)=[0.125,0.1]^{\top}$, $\xi_{3}(0)=[0.125,0.125]^{\top}$,
$\xi_{4}(0)=[0.1,0.125]^{\top}$, $\sigma_{i}=0.05$, $\alpha_{i}=0_{2\times
2}$, and $\beta_{i}=I_{2}$ for $i\in\\{1,2,3,4\\}$, where the indicator
function $\chi(x,y)=1$ if $x=y$, and $\chi(x,y)=0$ if $x\neq y$. We use (4)
for the input operator of each actuator. The Péclet number of the process is
$|\mathbf{v}|_{2}/a\approx 2.83$, which implies that neither the diffusion nor
the advection dominates the process.
### 5.1 Diffusion-advection process with Dirichlet boundary condition
We use the dynamics in (1)–(3) with the Dirichlet boundary condition. We use
the Galerkin scheme to approximate the infinite-dimensional variables. The
orthonormal set of eigenfunctions of the Laplacian operator $\nabla^{2}$ (with
zero Dirichlet boundary condition) over the spatial domain
$\Omega=[0,1]\times[0,1]$ is $\phi_{i,j}(x,y)=2\sin(\pi ix)\sin(\pi jy)$. We
introduce a single index $k:=(i-1)N+j$ such that $\phi_{k}:=\phi_{i,j}$. For
brevity, we use $\mathcal{H}_{N}$ to denote the $N^{2}$-dimensional space
spanned by the basis functions $\\{\phi_{k}\\}_{k=1}^{N^{2}}$. Recall the
orthogonal projection $P_{N}:\mathcal{H}\rightarrow\mathcal{H}_{N}$. It
follows that $P_{N}^{\star}=P_{N}$ and $P_{N}^{\star}P_{N}\rightarrow I$
strongly [3]. Let $\Phi_{N}:=[\phi_{1}\ \phi_{2}\ \dots\
\phi_{N^{2}}]^{\top}$. We choose $N=13$ because it is the smallest dimension
such that the resulting optimal cost is within 1% of the optimal cost
evaluated with the maximum dimension $N=20$ in the numerical studies (see Fig.
7).
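As an illustration of this setup, the following sketch builds the single-index basis and projects the initial condition onto $\mathcal{H}_{N}$ with a tensor Legendre-Gauss rule; the quadrature order is an arbitrary choice.

```python
import numpy as np

N = 13
phi = lambda i, j, x, y: 2.0 * np.sin(np.pi * i * x) * np.sin(np.pi * j * y)

# Tensor Legendre-Gauss rule mapped from [-1, 1] to [0, 1].
nodes, weights = np.polynomial.legendre.leggauss(40)
x, w = 0.5 * (nodes + 1.0), 0.5 * weights
X, Y = np.meshgrid(x, x, indexing="ij")
W = np.outer(w, w)

# Spectral coefficients of z0(x, y) = 320 (x - x^2)(y - y^2).
z0 = 320.0 * (X - X ** 2) * (Y - Y ** 2)
Z0_N = np.empty(N * N)
for i in range(1, N + 1):
    for j in range(1, N + 1):
        k = (i - 1) * N + j                   # single-index mapping
        Z0_N[k - 1] = np.sum(W * z0 * phi(i, j, X, Y))
```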
Assumption (A4) holds for the choice of input operator. With the Galerkin
approximation using the orthonormal eigenfunctions of the Laplacian operator
$\nabla^{2}$ with zero Dirichlet boundary condition, it can be shown that
assumption (A8) holds for $l_{N}(\cdot)=N^{2}l(\cdot)$. Assumptions (A5)–(A7)
hold with $q=1$ under the Galerkin approximation with aforementioned basis
functions $\Phi_{N}$ [3]. Assumptions (A9)–(A11) and (A12) hold for the choice
of functions in the mobility cost and parameters of the set of admissible
guidance, respectively.
We use the forward-backward sweeping method [23] to solve the two-point
boundary value problem originated from the Hamiltonian (24). The forward
propagation of $Z_{N}$ and $\xi$ and backward propagation of $\lambda$ and
$\mu$ are computed using the Runge-Kutta method. The same method is also
applied to propagate the approximate Riccati solution $\Pi(t)$. Spatial
integrals are computed using Legendre-Gauss quadrature. To verify the
convergence of the approximate optimal cost $J^{*}_{\text{(AP1)}}(p_{N}^{*})$
stated in (22), we compute $J^{*}_{\text{(AP1)}}(p_{N}^{*})$ for
$N\in\\{6,7,\dots,20\\}$. Note that the
total number of basis functions is $N^{2}$. The result is shown in Fig. 7,
where exponential convergence can be observed.
Figure 1: Evolution of the diffusion-advection process with Dirichlet boundary condition under the optimal feedback control $\bar{u}^{*}$. The actuators are steered by the optimal guidance $p^{*}$. Snapshots at $t=0.05$ and $0.2$ s show the transient stage, whereas the one at $t=1$ s shows the relatively steady stage. The mobile disturbance is shown by the gray circle.

Figure 2: Optimal feedback control $\bar{u}^{*}$ of each actuator in the case of Dirichlet boundary condition. The circles along the horizontal axis correspond to the snapshots in Fig. 1.

Figure 3: Norm of the PDE state in the case of Dirichlet boundary condition with pairs of control and guidance in Table 1. The circles along the horizontal axis correspond to the snapshots in Fig. 1.
In the simulation, a mobile disturbance $0.5\mathcal{B}(x_{d}(t),t)$, whose
trajectory is $x_{d}(t)=[0.5+0.3\sin(2\pi t),0.5+0.3\cos(2\pi t)]^{\top}$, is
added to the right-hand side of the dynamics (1).
Denote the optimal open-loop control and optimal guidance solved using the
gradient-descent method in Section 4.2 by $u^{*}$ and $p^{*}$, respectively.
The trajectory steered by $p^{*}$ is denoted by $\xi^{*}$. Recall that an
optimal feedback control, denoted by $\bar{u}^{*}$, can be synthesized using
(17) based on the optimal trajectory $\xi^{*}$ of the actuators.
Fig. 1 shows the evolution of the process controlled by the optimal feedback
control and the optimal trajectories of the actuators. The actuation
concentrates in the first $0.2$ s, which is shown in Fig. 2. Meanwhile, the
actuators quickly pass the peak of the initial PDE at the center of the
spatial domain and spread evenly in space. Subsequently, the actuators 2–4
cease active steering and dispensing actuation. The flow field causes the
actuators to drift until the terminal time.
To demonstrate the performance of the optimal feedback control $\bar{u}^{*}$,
we compare it with semi-naive control $u_{\text{sn}}$ and naive control
$u_{\text{n}}$ defined as local feedback controls:
$u_{\text{sn}}(t)=-0.1z_{\text{sn}}(\xi^{*}(t),t)$ and
$u_{\text{n}}(t)=-0.1z_{\text{n}}(\xi_{\text{n}}(t),t)$. The semi-naive
actuators follow the optimal trajectory $\xi^{*}$, whereas the naive actuators
follow the trajectory $\xi_{\text{n}}$, which moves at a constant speed from
$\xi_{0}$ to $1_{n\times 1}-\xi_{0}$. Table 1 compares the cost breakdown of
all the control and guidance strategies. The optimal feedback control yields a
smaller cost than the optimal open-loop control due to the capability of
feedback control in rejecting disturbances. Simulations with a disturbance-
free model (not shown) yield identical total cost for optimal open-loop
control and optimal feedback control, which justifies the correctness of the
synthesis. Fig. 3 compares the norm of the PDE state controlled by pairs of
control and guidance listed in Table 1. As can be seen, the PDE is effectively
regulated using optimal feedback control. As a comparison, the norm associated
with optimal open-loop control grows slowly after $0.3$ s due to the influence of
the disturbance, although its reduction in the beginning is indistinguishable
from that of the optimal feedback control.
Table 1: Cost comparison of control and guidance strategies in the case of Dirichlet boundary condition. All costs are normalized with respect to the total cost of the case with no control.

| Strategy | C | G | $J_{N}$ | $J_{\text{m}}$ | Total |
|---|---|---|---|---|---|
| opt. feedback | $\bar{u}^{*}$ | $\xi^{*}$ | 13.7% | 3.0% | 16.7% |
| opt. open-loop | $u^{*}$ | $\xi^{*}$ | 17.5% | 3.0% | 20.5% |
| semi-naive | $u_{\text{sn}}$ | $\xi^{*}$ | 42.5% | 3.0% | 45.5% |
| naive | $u_{\text{n}}$ | $\xi_{\text{n}}$ | 78.8% | 0.5% | 79.3% |
| no control | - | - | 100.0% | 0.0% | 100.0% |
### 5.2 Diffusion-advection process with Neumann boundary condition
The results derived in this paper also apply to the operator $\mathcal{A}$
defined in (8) with a Neumann boundary condition (BC), because a general
second-order and uniformly elliptic operator with Neumann BC yields a strongly
continuous analytic semigroup on $L^{2}(\Omega)$ [20]. In this example, we
consider the diffusion-advection process (1) with initial condition (3) and
zero Neumann BC: ${\partial z(x,y,t)}/{\partial\mathbf{n}}=0$, where
$\mathbf{n}$ is the normal to the boundary $\partial\Omega$ and
$(x,y)\in\partial\Omega$. Notice that the basis functions applied for Galerkin
approximation in this case are the eigenfunctions of the Laplacian with zero
Neumann BC, $\phi_{i,j}(x,y)=2\cos(\pi ix)\cos(\pi jy)$ for
$i,j\in\\{0,1,\dots\\}$. All the parameters, disturbance, and pairs of control
and guidance for comparison applied in this example are identical to those in
Section 5.1. Exponential convergence in the approximate optimal cost can be
observed in Fig. 7.
Fig. 4 shows the evolution of the process and the optimal trajectory of the
actuators. Similar to the case of Dirichlet BC, the actuators spread out to
cover most of the domain in the initial $0.2$ s, with most of the actuation
implemented during the same interval, seen in Fig. 5. However, the actuators
span a slightly larger area (Fig. 4) and the maximum amplitude of actuation is
bigger (Fig. 5), compared to the case of Dirichlet BC in Fig. 1 and Fig. 2,
respectively. The difference is a consequence of the fact that the zero
Neumann BC does not contribute to the regulation of the process because it
insulates the process from the outside. Contrarily, the zero Dirichlet BC acts
as a passive control that can essentially regulate the process to a zero state
when there is no inhomogeneous term in the dynamics (1). This difference can
be observed when comparing the norm of the PDE state in Fig. 6 with Fig. 3.
The norm of the uncontrolled state reduces slightly in the case of Neumann BC
(Fig. 6) compared to the almost linear reduction in the case of Dirichlet BC
(Fig. 3). Fig. 6 also shows the difference of norm reduction between the
optimal feedback control and optimal open-loop control. Once again, the former
yields a smaller terminal norm than the latter due to the feedback’s
capability of disturbance rejection. The cost breakdown of the pairs of
control and guidance in comparison is shown in Table 2.
Figure 4: Evolution of the process with Neumann boundary condition under the optimal feedback control $\bar{u}^{*}$. The actuators are steered by the optimal guidance $p^{*}$. Snapshots at $t=0.05$ and $0.2$ s show the transient stage, whereas the one at $t=1$ s shows the relatively steady stage. The mobile disturbance is shown by the gray circle.

Figure 5: Optimal feedback control $\bar{u}^{*}$ of each actuator in the case of Neumann boundary condition. The circles along the horizontal axis correspond to the snapshots in Fig. 4.

Figure 6: Norm of the PDE state in the case of Neumann boundary condition with pairs of control and guidance in Table 2. The circles along the horizontal axis correspond to the snapshots in Fig. 4.

Table 2: Cost comparison of control and guidance strategies in the case of Neumann boundary condition. All costs are normalized with respect to the total cost of the case with no control.

| Strategy | C | G | $J_{N}$ | $J_{\text{m}}$ | Total |
|---|---|---|---|---|---|
| opt. feedback | $\bar{u}^{*}$ | $\xi^{*}$ | 6.4% | 1.6% | 8.0% |
| opt. open-loop | $u^{*}$ | $\xi^{*}$ | 7.1% | 1.6% | 8.7% |
| semi-naive | $u_{\text{sn}}$ | $\xi^{*}$ | 63.7% | 1.6% | 65.3% |
| naive | $u_{\text{n}}$ | $\xi_{\text{n}}$ | 65.9% | 0.2% | 66.1% |
| no control | - | - | 100.0% | 0.0% | 100.0% |
Figure 7: Approximate optimal costs $J^{*}_{\text{(AP1)}}(p_{N}^{*})$ normalized with respect to the optimal cost for $N^{2}=400$.
## 6 Conclusion
This paper proposes an optimization framework that steers a team of mobile
actuators to control a DPS modeled by a 2D diffusion-advection process.
Specifically, jointly optimal control of the DPS and guidance of the mobile
actuators are solved such that the sum of a quadratic PDE cost and a generic
mobility cost is minimized subject to the dynamics of the DPS and of the
mobile actuators. We obtain an equivalent problem using LQR of an abstract
linear system, which reduces the problem to searching for the optimal guidance only.
The optimal control can be synthesized once the optimal guidance is obtained.
Conditions on the existence of a solution are established based on the
equivalent problem. We use the Galerkin approximation scheme to reduce the
problem to a finite-dimensional one and apply a gradient-descent method to
compute optimal guidance and control numerically. We prove conditions under
which the approximate optimal guidance converges to the exact optimal guidance
in the sense that, when evaluating these two solutions by the original cost
function, the difference becomes arbitrarily small as the dimension of
approximation increases. The convergence justifies the appropriateness of both
the approximate problem and its solution. The performance of the proposed
optimal control and guidance is illustrated with two numerical examples, where
exponential convergence of the approximate optimal cost is observed.
Ongoing and future work includes establishing the convergence rate of the
approximate optimal cost and studying problems with other types of PDE cost,
such as the operator norm of the Riccati operator [24, 6] to characterize
unknown initial conditions and $H_{2}$- or $H_{\infty}$-performance criteria
for different types of perturbation [25, 17]. On the actuator side,
decentralized guidance
design may be incorporated in future work to enable more autonomy of the team
than a centralized implementation. Actuators that travel along the boundary
may be considered as well, which would result in boundary controller design.
## Appendix A Proof of Theorem 6
The proof uses Theorem 11 (stated below) to establish the existence of an
optimal solution of (P1).
###### Theorem 11.
[36, Theorem 6.1.4] Suppose $(X,\left\lVert\cdot\right\rVert)$ is a normed
linear space, $M_{0}\subset X$ is weakly sequentially compact and
$f:M_{0}\rightarrow\mathbb{R}$ is weakly sequentially lower semicontinuous on
$M_{0}$. Then there exists an $\bar{x}\in M_{0}$ such that
$f(\bar{x})=\inf\\{f(x):x\in M_{0}\\}$.
Proof of Theorem 6. Without loss of generality, we consider the case of one
mobile actuator, i.e., $m_{a}=1$. The case of $m_{a}\geq 2$ follows naturally.
We want to apply Theorem 11 to prove that the minimum of the cost function of
(P1) is achieved on a subset $\mathcal{P}_{0}$ (defined below) of the
admissible set in which the cost of guidance is upper bounded. Consider
problem (P1)’s admissible set of guidance functions $\mathcal{P}:=\\{p\in
L^{2}([0,t_{f}];\mathbb{R}^{m}):p(t)\in P,t\in[0,t_{f}]\\}$. Assume there
exists $p_{0}\in\mathcal{P}$ such that $J_{\text{(P1)}}(p_{0})<\infty$ and let
$\mathcal{P}_{0}:=\\{p\in\mathcal{P}:\ J_{\text{(P1)}}(p)\leq J_{\text{(P1)}}(p_{0})\\}$.
We wish to prove Condition-1, Condition-2, and Condition-3
stated below:
Condition-1: The set $\mathcal{P}_{0}$ is bounded.
Condition-2: The set $\mathcal{P}_{0}$ is weakly sequentially closed.
Condition-3: The mapping $J_{\text{(P1)}}(\cdot):\mathcal{P}\rightarrow\mathbb{R}$
is weakly sequentially lower semicontinuous on $\mathcal{P}_{0}$.
Condition-1 and Condition-2 imply that $\mathcal{P}_{0}$ is weakly
sequentially compact. By Theorem 11, problem (P1) has a solution when
Condition-1–Condition-3 hold.
Before proving these three conditions, we define a mapping
$T:L^{2}([0,t_{f}];\mathbb{R}^{m})\rightarrow C([0,t_{f}];\mathbb{R}^{n})$ by
$(Tp)(t):=\xi(t)=e^{\alpha t}\xi_{0}+\int_{0}^{t}e^{\alpha(t-\tau)}\beta
p(\tau)\text{d}\tau$ for $t\in[0,t_{f}]$. For $p_{1},p_{2}\in
L^{2}([0,t_{f}];\mathbb{R}^{m})$ and $t\in[0,t_{f}]$, we have
$|Tp_{1}(t)-Tp_{2}(t)|_{1}\leq\int_{0}^{t}|e^{\alpha(t-\tau)}\beta|_{1}|p_{1}(\tau)-p_{2}(\tau)|_{1}\text{d}\tau\leq c_{5}\int_{0}^{t}|e^{\alpha(t-\tau)}\beta|_{1}|p_{1}(\tau)-p_{2}(\tau)|_{2}\text{d}\tau\leq c_{5}\left(\int_{0}^{t}|e^{\alpha(t-\tau)}\beta|_{1}^{2}\text{d}\tau\right)^{1/2}\left\lVert p_{1}-p_{2}\right\rVert_{L^{2}([0,t];\mathbb{R}^{m})}\leq c_{5}c_{6}\left\lVert p_{1}-p_{2}\right\rVert_{L^{2}([0,t];\mathbb{R}^{m})}$ (25)
for some constants $c_{5},c_{6}>0$. Hence, $\left\lVert
Tp_{1}-Tp_{2}\right\rVert_{C([0,t_{f}];\mathbb{R}^{n})}=\sup_{t\in[0,t_{f}]}|Tp_{1}(t)-Tp_{2}(t)|_{1}\leq
c_{5}c_{6}\left\lVert
p_{1}-p_{2}\right\rVert_{L^{2}([0,t_{f}];\mathbb{R}^{m})}$, which also shows
that $T$ is a continuous mapping, i.e., $\left\lVert
Tp\right\rVert_{C([0,t_{f}];\mathbb{R}^{n})}\leq c_{5}c_{6}\left\lVert
p\right\rVert_{L^{2}([0,t_{f}];\mathbb{R}^{m})}$ for all $p\in
L^{2}([0,t_{f}];\mathbb{R}^{m})$.
Proof of Condition-1: Suppose $p\in\mathcal{P}_{0}$; then
$J_{\text{(P1)}}(p_{0})\geq J_{\text{(P1)}}(p)=h_{f}(Tp(t_{f}))+\int_{0}^{t_{f}}h(Tp(t),t)+g(p(t),t)\text{d}t+\langle\mathcal{Z}_{0},\Pi(0)\mathcal{Z}_{0}\rangle\geq\int_{0}^{t_{f}}d_{1}|p(t)|^{2}_{2}\text{d}t=d_{1}\left\lVert p\right\rVert_{L^{2}([0,t_{f}];\mathbb{R}^{m})}^{2},$ (26)
where the second inequality follows from the nonnegativity of
$h_{f}(\cdot),h(\cdot,\cdot)$, and
$\langle\mathcal{Z}_{0},\Pi(0)\mathcal{Z}_{0}\rangle$. Since $d_{1}>0$, the
boundedness of $\mathcal{P}_{0}$ follows.
Proof of Condition-2: Suppose $\\{p_{k}\\}\subset\mathcal{P}_{0}$ and
$\\{p_{k}\\}$ converges weakly to $p$ (denoted by $p_{k}\rightharpoonup p$).
We want to show $p\in\mathcal{P}_{0}$. We start with proving that
$\mathcal{P}$ is weakly sequentially closed and, hence, $p\in\mathcal{P}$.
Subsequently, we show $J_{\text{(P1)}}(p)\leq J_{\text{(P1)}}(p_{0})$ to
conclude Condition-2.
To show that the set $\mathcal{P}$ is weakly sequentially closed, by [35,
Theorem 2.11], it suffices to show that $\mathcal{P}$ is closed and convex.
Let $\\{q_{k}\\}\subset\mathcal{P}$ and $q_{k}\rightarrow q$. We want to show
$q\in\mathcal{P}$, i.e., $q\in L^{2}([0,t_{f}],\mathbb{R}^{m})$ and $q(t)\in
P$ for $t\in[0,t_{f}]$. Since $L^{2}([0,t_{f}],\mathbb{R}^{m})$ is complete,
we can choose a subsequence $\\{q_{k_{j}}\\}\subset\mathcal{P}$ that converges
to $q$ pointwise almost everywhere on $[0,t_{f}]$ [37, p. 53]. Since $P$ is
closed (assumption (A9)), $q(t)\in P$ for almost all $t\in[0,t_{f}]$. Hence,
$\mathcal{P}$ is closed. The convexity of $\mathcal{P}$ follows from that of
$P$ (assumption (A9)), i.e., if $p_{1},p_{2}\in\mathcal{P}$, then $\lambda
p_{1}+(1-\lambda)p_{2}\in L^{2}([0,t_{f}];\mathbb{R}^{m})$ and $\lambda
p_{1}(t)+(1-\lambda)p_{2}(t)\in P$ for $t\in[0,t_{f}]$ and $\lambda\in[0,1]$.
What remains to be shown is $J_{(P1)}(p)\leq J_{(P1)}(p_{0})$. Since
$p_{k}\rightharpoonup p$ and, for each fixed $t$, each component of
$p\mapsto(Tp)(t)$ is an affine continuous functional on
$L^{2}([0,t_{f}];\mathbb{R}^{m})$, we have $Tp_{k}(t)\rightarrow Tp(t)$ for
every $t\in[0,t_{f}]$. We now show that the sequence $\{Tp_{k}\}$ contains a
uniformly convergent subsequence in $C([0,t_{f}];\mathbb{R}^{n})$. The
sequence $\{Tp_{k}\}\subset C([0,t_{f}];\mathbb{R}^{n})$ is uniformly bounded
and uniformly equicontinuous for the following reasons. Since $\left\lVert
Tp_{k}\right\rVert_{C([0,t_{f}];\mathbb{R}^{n})}\leq c_{5}c_{6}\left\lVert
p_{k}\right\rVert_{L^{2}([0,t_{f}];\mathbb{R}^{m})}$ and
$\{p_{k}\}\subset\mathcal{P}_{0}$, which is a bounded set, it follows that
$\left\lVert Tp_{k}\right\rVert_{C([0,t_{f}];\mathbb{R}^{n})}$ is uniformly
bounded. For $s,t\in[0,t_{f}]$, we have
$\displaystyle|Tp_{k}(t)-Tp_{k}(s)|_{1}=\left|\textstyle\int_{s}^{t}\alpha Tp_{k}(\tau)+\beta p_{k}(\tau)\,\text{d}\tau\right|_{1}\leq|t-s|\,|\alpha|_{1}\left\lVert Tp_{k}\right\rVert_{C([0,t_{f}];\mathbb{R}^{n})}+|t-s|^{1/2}|\beta|_{2}\left\lVert p_{k}\right\rVert_{L^{2}([0,t_{f}];\mathbb{R}^{m})}.$
Since $\{\left\lVert p_{k}\right\rVert_{L^{2}([0,t_{f}];\mathbb{R}^{m})}\}$
and $\{\left\lVert Tp_{k}\right\rVert_{C([0,t_{f}];\mathbb{R}^{n})}\}$ are both
uniformly bounded for all $p_{k}\in\mathcal{P}_{0}$, $\{Tp_{k}\}$ is
uniformly equicontinuous. By the Arzelà–Ascoli Theorem [30], there is a
uniformly convergent subsequence $\{Tp_{k_{j}}\}\subset\{Tp_{k}\}$.
Without loss of generality, we assume $p_{k}\rightharpoonup p$,
$Tp_{k}\rightarrow Tp$ uniformly on $[0,t_{f}]$, and $J_{(P1)}(p_{k})\leq
J_{(P1)}(p_{0})$. We have
$J_{(P1)}(p_{0})-J_{(P1)}(p)=J_{(P1)}(p_{0})-J_{(P1)}(p_{k})+J_{(P1)}(p_{k})-J_{(P1)}(p)\geq
J_{(P1)}(p_{k})-J_{(P1)}(p)$, by which, to show $J_{(P1)}(p)\leq
J_{(P1)}(p_{0})$, it suffices to show
$J_{(P1)}(p)\leq\liminf_{k\rightarrow\infty}J_{(P1)}(p_{k})$, which is to show
$\displaystyle h_{f}(Tp(t_{f}))+\textstyle\int_{0}^{t_{f}}h(Tp(t),t)+g(p(t),t)\,\text{d}t+\langle\mathcal{Z}_{0},\Pi(0)\mathcal{Z}_{0}\rangle\leq\liminf_{k\rightarrow\infty}\left[h_{f}(Tp_{k}(t_{f}))+\textstyle\int_{0}^{t_{f}}h(Tp_{k}(t),t)+g(p_{k}(t),t)\,\text{d}t+\langle\mathcal{Z}_{0},\Pi^{k}(0)\mathcal{Z}_{0}\rangle\right],$ (27)
where $\Pi^{k}(0)$ is the solution of (12) associated with the actuator state
$Tp_{k}$. Since $\{Tp_{k}\}$ converges to $Tp$ uniformly on $[0,t_{f}]$, the
continuity of $h_{f}(\cdot)$ implies
$h_{f}(Tp(t_{f}))=\liminf_{k\rightarrow\infty}h_{f}(Tp_{k}(t_{f}));$ (28)
Fatou’s lemma [30] implies
$\textstyle\int_{0}^{t_{f}}h(Tp(t),t)\text{d}t\leq\liminf_{k\rightarrow\infty}\textstyle\int_{0}^{t_{f}}h(Tp_{k}(t),t)\text{d}t;$
(29)
and Lemma 3 implies
$\langle\mathcal{Z}_{0},\Pi(0)\mathcal{Z}_{0}\rangle=\liminf_{k\rightarrow\infty}\langle\mathcal{Z}_{0},\Pi^{k}(0)\mathcal{Z}_{0}\rangle.$
(30)
To prove (27), based on (28)–(30), it suffices to show
$\textstyle\int_{0}^{t_{f}}g(p(t),t)\text{d}t\leq\liminf_{k\rightarrow\infty}\textstyle\int_{0}^{t_{f}}g(p_{k}(t),t)\text{d}t$.
By contradiction, assume there is $\lambda>0$ such that
$\liminf_{k\rightarrow\infty}\textstyle\int_{0}^{t_{f}}g(p_{k}(t),t)\text{d}t<\lambda<\textstyle\int_{0}^{t_{f}}g(p(t),t)\text{d}t.$
(31)
Define $O_{\lambda}:=\{q\in
L^{2}([0,t_{f}];\mathbb{R}^{m}):\textstyle\int_{0}^{t_{f}}g(q(t),t)\text{d}t\leq\lambda\}$.
By (31), there exists a subsequence $\{p_{k_{j}}\}\subset\{p_{k}\}$ with
$\{p_{k_{j}}\}\subset O_{\lambda}$. We wish to show that $O_{\lambda}$
is weakly sequentially closed. By [35, Theorem 2.11], it suffices to show that
$O_{\lambda}$ is convex and closed. Since
$g(\cdot,t):\mathbb{R}^{m}\rightarrow\mathbb{R}$ is convex for all
$t\in[0,t_{f}]$, it follows that $O_{\lambda}$ is convex. Let
$\{q_{k}\}\subset O_{\lambda}$ with $\left\lVert
q_{k}-q\right\rVert_{L^{2}([0,t_{f}];\mathbb{R}^{m})}\rightarrow 0$ as
$k\rightarrow\infty$. We can choose a subsequence
$\{q_{k_{j}}\}\subset\{q_{k}\}$ such that $q_{k_{j}}$ converges to $q$
pointwise almost everywhere on $[0,t_{f}]$ [37, p. 53]. Now we have
(1) $g(q_{k_{j}}(t),t)\geq 0$ for all $t\in[0,t_{f}]$ (assumption (A11));
(2) $\lim_{j\rightarrow\infty}g(q_{k_{j}}(t),t)=g(q(t),t)$ almost everywhere
on $[0,t_{f}]$.
By Fatou’s lemma [30],
$\textstyle\int_{0}^{t_{f}}g(q(t),t)\text{d}t\leq\liminf_{j\rightarrow\infty}\textstyle\int_{0}^{t_{f}}g(q_{k_{j}}(t),t)\text{d}t\leq\lambda,$
where the last inequality holds because $\{q_{k_{j}}\}\subset O_{\lambda}$.
Hence, $q\in O_{\lambda}$ and $O_{\lambda}$ is closed.
Since $O_{\lambda}$ is weakly sequentially closed, $p_{k_{j}}\rightharpoonup
p$ implies that $p\in O_{\lambda}$, which contradicts (31). Hence,
$J_{(P1)}(p)\leq J_{(P1)}(p_{0})$ is proved, and we conclude Condition-2.
Proof of Condition-3: We now show that the mapping
$J_{(P1)}(\cdot):\mathcal{P}\rightarrow\mathbb{R}$ is weakly
sequentially lower semicontinuous on $\mathcal{P}_{0}$. Suppose
$\{p_{k}\}\subset\mathcal{P}_{0}$ and $p_{k}\rightharpoonup
p\in\mathcal{P}_{0}$. We wish to show
$J_{(P1)}(p)\leq\liminf_{k\rightarrow\infty}J_{(P1)}(p_{k})$, which has
already been established in the proof of Condition-2 (starting from (27)),
where we showed $J_{(P1)}(p)\leq J_{(P1)}(p_{0})$.
We thus conclude the existence of a solution of problem (P1). ∎
## Appendix B Proof of Theorem 7
Proof. By contradiction, assume there is a pair $(u_{0}^{*},p_{0}^{*})$
minimizing (P), with $p_{0}^{*}\neq p^{*}$ and $u_{0}^{*}\neq u^{*}$, such that
$J^{*}_{(P)}(u_{0}^{*},p_{0}^{*})<J_{(P)}(u^{*},p^{*})=J_{(P1)}(p^{*})$. Denote by
$\bar{u}_{0}^{*}$ the optimal control (10) associated with the actuator trajectory
steered by $p_{0}^{*}$. It follows that
$J^{*}_{(P)}(u_{0}^{*},p_{0}^{*})=J_{(P)}(\bar{u}_{0}^{*},p_{0}^{*})$: indeed,
$J^{*}_{(P)}(u_{0}^{*},p_{0}^{*})>J_{(P)}(\bar{u}_{0}^{*},p_{0}^{*})$ would
violate the optimality of $u_{0}^{*}$, and
$J^{*}_{(P)}(u_{0}^{*},p_{0}^{*})<J_{(P)}(\bar{u}_{0}^{*},p_{0}^{*})$ would
contradict the fact that $\bar{u}_{0}^{*}$ minimizes the quadratic cost
$J(\mathcal{Z},u)$ (see Lemma 2). Since
$J_{(P)}(\bar{u}_{0}^{*},p_{0}^{*})=\langle\mathcal{Z}_{0},\Pi_{0}^{*}(0)\mathcal{Z}_{0}\rangle+J_{\text{m}}(\xi_{0}^{*},p_{0}^{*})=J_{(P1)}(p_{0}^{*})<J^{*}_{(P1)}(p^{*})$,
where $\Pi_{0}^{*}(0)$ is associated with the trajectory $\xi_{0}^{*}$ steered
by $p_{0}^{*}$, it follows that $p^{*}$ is not an optimal solution of (P1),
which contradicts the optimality of $p^{*}$ for (P1).∎
## Appendix C Proof of Theorem 8
Proof. Since $\langle Z_{0,N},\Pi_{N}(0)Z_{0,N}\rangle\geq 0$ and the mapping
$K_{N}:C([0,t_{f}];\mathbb{R}^{n})\rightarrow\mathbb{R}^{+}$ is continuous
(see Lemma 5), the proof is analogous to that of Theorem 6, with
$\langle Z_{0,N},\Pi_{N}(0)Z_{0,N}\rangle$ substituted for
$\langle\mathcal{Z}_{0},\Pi(0)\mathcal{Z}_{0}\rangle$. The proof that
$u_{N}^{*}$ and $p_{N}^{*}$ minimize problem (AP) follows from the same logic
as the proof of Theorem 7. ∎
## Appendix D Proof of Theorem 9
Before we prove Theorem 9, we first establish two intermediate results in
Lemma 12, whose proof is in the supplementary material.
###### Lemma 12.
Consider problem (P1) and its approximation (AP1). If assumptions (A4)–(A7)
and (A9)–(A12) hold, then the following two implications hold:
1. For $p\in C([0,t_{f}];P)$,
$\lim_{N\rightarrow\infty}|J_{(AP1)}(p)-J_{(P1)}(p)|=0$, where $N$ is the
dimension of the approximation applied in (AP).
2. The mapping $J_{(P1)}:C([0,t_{f}];P)\rightarrow\mathbb{R}^{+}$ is
continuous, where
$J_{(P1)}(p)=\langle\mathcal{Z}_{0},\Pi(0)\mathcal{Z}_{0}\rangle+J_{\text{m}}(\xi,p)$.
Here, the actuator state $\xi$ follows the dynamics (6) steered by the
guidance $p$, and $\Pi(0)$ follows (11) with the actuator state $\xi$.
Proof of Theorem 9. In the notation $J_{(AP1)}(p_{N}^{*})$, the dimension of
approximation in (AP1), which is $N$ in this case, is indicated by its
solution $p_{N}^{*}$. We append a subscript to indicate the dimension when it
is not explicitly reflected by the argument, e.g., $J_{(AP1)_{N}}(p)$.
We first show (22), i.e., $|J^{*}_{(AP1)}(p_{N}^{*})-J^{*}_{(P1)}(p^{*})|\rightarrow 0$ as $N\rightarrow\infty$. First,
$\displaystyle J^{*}_{(AP1)}(p_{N}^{*})=\underset{p\in\mathcal{P}(p_{\max},a_{\max})}{\min}J_{(AP1)}(p)\leq J_{(AP1)}(p^{*})\leq|J_{(AP1)}(p^{*})-J^{*}_{(P1)}(p^{*})|+J^{*}_{(P1)}(p^{*}).$
Since $|J_{(AP1)}(p^{*})-J^{*}_{(P1)}(p^{*})|\rightarrow 0$ as
$N\rightarrow\infty$ (see Lemma 12-1), it follows that
$\displaystyle\limsup_{N\rightarrow\infty}J^{*}_{(AP1)}(p_{N}^{*})\leq J^{*}_{(P1)}(p^{*}).$ (32)
To proceed with proving (22), in addition to (32), we shall show
$\liminf_{N\rightarrow\infty}J^{*}_{(AP1)}(p_{N}^{*})\geq J^{*}_{(P1)}(p^{*})$.
Choose a convergent subsequence
$\{J^{*}_{(AP1)}(p_{N_{k}}^{*})\}_{k=1}^{\infty}$ such that
$\lim_{k\rightarrow\infty}J^{*}_{(AP1)}(p_{N_{k}}^{*})=\liminf_{N\rightarrow\infty}J^{*}_{(AP1)}(p_{N}^{*})$.
Since the guidance functions in the set $\mathcal{P}(p_{\max},a_{\max})$ are
uniformly equicontinuous and uniformly bounded, by the Arzelà–Ascoli Theorem
[30] there is a uniformly convergent subsequence of
$\{p_{N_{k}}^{*}\}_{k=1}^{\infty}$, which we index by the same
$\{N_{k}\}_{k=1}^{\infty}$ to simplify notation; denote its limit by
$p_{\inf}^{*}$, i.e.,
$\lim_{k\rightarrow\infty}\left\lVert
p_{N_{k}}^{*}-p_{\inf}^{*}\right\rVert_{C([0,t_{f}];\mathbb{R}^{m})}=0.$ (33)
Now,
$|J^{*}_{(AP1)}(p_{N_{k}}^{*})-J_{(P1)}(p_{\inf}^{*})|\leq|J^{*}_{(AP1)}(p_{N_{k}}^{*})-J_{(P1)}(p_{N_{k}}^{*})|+|J_{(P1)}(p_{N_{k}}^{*})-J_{(P1)}(p_{\inf}^{*})|$,
which implies
$\displaystyle\limsup_{k\rightarrow\infty}|J^{*}_{(AP1)}(p_{N_{k}}^{*})-J_{(P1)}(p_{\inf}^{*})|\leq\lim_{k\rightarrow\infty}|J^{*}_{(AP1)}(p_{N_{k}}^{*})-J_{(P1)}(p_{N_{k}}^{*})|+\lim_{k\rightarrow\infty}|J_{(P1)}(p_{N_{k}}^{*})-J_{(P1)}(p_{\inf}^{*})|.$ (34)
The first limit on the right-hand side of (34) is zero for the following
reason. For all $p\in\mathcal{P}(p_{\max},a_{\max})$, $J_{(AP1)_{N}}(p)$
converges to $J_{(P1)}(p)$ pointwise as the dimension of approximation
$N\rightarrow\infty$ (see Lemma 12-1). Furthermore, since the sequence of
approximated PDE costs $\{\langle
Z_{N}(0),\Pi_{N}(0)Z_{N}(0)\rangle\}_{N=1}^{\infty}$ is monotonically
increasing, the sequence $\{J_{(AP1)_{N}}(p)\}_{N=1}^{\infty}$ is
monotonically increasing for each $p$ on the compact set
$\mathcal{P}(p_{\max},a_{\max})$. By Dini’s Theorem [31, Theorem 7.13],
$|J_{(AP1)_{N}}(p)-J_{(P1)}(p)|\rightarrow 0$ uniformly on
$\mathcal{P}(p_{\max},a_{\max})$ as $N\rightarrow\infty$. By the Moore–Osgood
Theorem [31, Theorem 7.11], this uniform convergence and the convergence
$p_{N_{k}}^{*}\rightarrow p_{\inf}^{*}$ as $k\rightarrow\infty$ (see (33))
imply that
$\lim_{k\rightarrow\infty}J_{(P1)}(p_{N_{k}}^{*})=\lim_{j\rightarrow\infty}\lim_{k\rightarrow\infty}J^{*}_{(AP1)_{j}}(p_{N_{k}}^{*})$,
in which the iterated limit equals the double limit [34, p. 140], i.e.,
$\displaystyle\lim_{j\rightarrow\infty}\lim_{k\rightarrow\infty}J^{*}_{(AP1)_{j}}(p_{N_{k}}^{*})=\lim_{\begin{subarray}{c}j\rightarrow\infty\\ k\rightarrow\infty\end{subarray}}J^{*}_{(AP1)_{j}}(p_{N_{k}}^{*})=\lim_{k\rightarrow\infty}J^{*}_{(AP1)}(p_{N_{k}}^{*}).$
The second limit on the right-hand side of (34) is zero due to Lemma 12-2.
Hence, it follows from (34) that
$\lim_{k\rightarrow\infty}J^{*}_{(AP1)}(p_{N_{k}}^{*})=J_{(P1)}(p_{\inf}^{*})$,
which implies
$\displaystyle\liminf_{N\rightarrow\infty}J^{*}_{(AP1)}(p_{N}^{*})=\lim_{k\rightarrow\infty}J^{*}_{(AP1)}(p_{N_{k}}^{*})=J_{(P1)}(p_{\inf}^{*})\geq J^{*}_{(P1)}(p^{*}).$ (35)
Therefore, we conclude
$\lim_{N\rightarrow\infty}J^{*}_{(AP1)}(p_{N}^{*})=J^{*}_{(P1)}(p^{*})$ from
(32) and (35).
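For clarity, (32) and (35) combine into the sandwich $J^{*}_{(P1)}(p^{*})\leq\liminf_{N\rightarrow\infty}J^{*}_{(AP1)}(p_{N}^{*})\leq\limsup_{N\rightarrow\infty}J^{*}_{(AP1)}(p_{N}^{*})\leq J^{*}_{(P1)}(p^{*})$, which forces all of these quantities to coincide and yields (22).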
Next, we show (23), i.e., $|J_{(P1)}(p_{N}^{*})-J^{*}_{(P1)}(p^{*})|\rightarrow
0$ as $N\rightarrow\infty$. We start with $J^{*}_{(P1)}(p^{*})\leq
J_{(P1)}(p_{N}^{*})$ for all $N$, which implies that
$J^{*}_{(P1)}(p^{*})\leq\liminf_{N\rightarrow\infty}J_{(P1)}(p_{N}^{*}).$ (36)
To prove (23), what remains to be shown is $J^{*}_{(P1)}(p^{*})\geq\limsup_{N\rightarrow\infty}J_{(P1)}(p_{N}^{*})$.
Choose a convergent subsequence $\{J_{(P1)}(p_{N_{j}}^{*})\}_{j=1}^{\infty}$
such that
$\lim_{j\rightarrow\infty}J_{(P1)}(p_{N_{j}}^{*})=\limsup_{N\rightarrow\infty}J_{(P1)}(p_{N}^{*})$.
Since
$\{p_{N_{j}}^{*}\}_{j=1}^{\infty}\subset\mathcal{P}(p_{\max},a_{\max})$ is
uniformly equicontinuous and uniformly bounded, by the Arzelà–Ascoli Theorem
[30] the sequence has a (uniformly) convergent subsequence, which we denote
with the same indices $N_{j}$ to simplify notation. Denote the limit of
$\{p_{N_{j}}^{*}\}_{j=1}^{\infty}$ by $p_{\sup}^{*}$, so that
$\lim_{j\rightarrow\infty}\left\lVert
p_{N_{j}}^{*}-p_{\sup}^{*}\right\rVert_{C([0,t_{f}];\mathbb{R}^{m})}=0.$ (37)
Due to the continuity of $J_{(P1)}(\cdot)$ (see Lemma 12-2), we have
$J_{(P1)}(p_{\sup}^{*})=\lim_{j\rightarrow\infty}J_{(P1)}(p_{N_{j}}^{*})=\limsup_{N\rightarrow\infty}J_{(P1)}(p_{N}^{*}).$
It follows that
$\displaystyle J_{(P1)}(p_{\sup}^{*})\leq|J_{(P1)}(p_{\sup}^{*})-J^{*}_{(P1)}(p^{*})|+J^{*}_{(P1)}(p^{*})=|J_{(P1)}(p_{\sup}^{*})-\lim_{N\rightarrow\infty}J^{*}_{(AP1)}(p_{N}^{*})|+J^{*}_{(P1)}(p^{*})=|J_{(P1)}(p_{\sup}^{*})-\lim_{j\rightarrow\infty}J^{*}_{(AP1)}(p_{N_{j}}^{*})|+J^{*}_{(P1)}(p^{*}).$ (38)
Since the sequence of approximated PDE costs $\{\langle{Z_{N}(0)},{\Pi_{N}(0)Z_{N}(0)}\rangle\}_{N=1}^{\infty}$ is monotonically increasing,
the sequence $\{J_{(AP1)_{N}}(p)\}_{N=1}^{\infty}$ is monotonically increasing
for each $p$ on the compact set $\mathcal{P}(p_{\max},a_{\max})$.
Since $\lim_{N\rightarrow\infty}J_{(AP1)_{N}}(p)=J_{(P1)}(p)$ for all
$p\in\mathcal{P}(p_{\max},a_{\max})$ (see Lemma 12-1), by Dini’s Theorem [31,
Theorem 7.13] the limit holds uniformly on $\mathcal{P}(p_{\max},a_{\max})$ as
$N\rightarrow\infty$. By the Moore–Osgood Theorem [31, Theorem 7.11], this
uniform convergence and the convergence $p_{N_{j}}^{*}\rightarrow p_{\sup}^{*}$
as $j\rightarrow\infty$ (see (37)) imply that
$\displaystyle J_{(P1)}(p_{\sup}^{*})=\lim_{k\rightarrow\infty}\lim_{j\rightarrow\infty}J^{*}_{(AP1)_{k}}(p_{N_{j}}^{*}).$ (39)
Furthermore, the iterated limit equals the double limit [34, p. 140], i.e.,
$\displaystyle\lim_{k\rightarrow\infty}\lim_{j\rightarrow\infty}J^{*}_{(AP1)_{k}}(p_{N_{j}}^{*})=\lim_{\begin{subarray}{c}j\rightarrow\infty\\ k\rightarrow\infty\end{subarray}}J^{*}_{(AP1)_{k}}(p_{N_{j}}^{*})=\lim_{j\rightarrow\infty}J^{*}_{(AP1)}(p_{N_{j}}^{*}).$ (40)
Hence, combining (38)–(40), we have $J^{*}_{(P1)}(p^{*})\geq
J_{(P1)}(p_{\sup}^{*})=\limsup_{N\rightarrow\infty}J_{(P1)}(p_{N}^{*})$, from
which and (36) we conclude the desired convergence
$\lim_{N\rightarrow\infty}J_{(P1)}(p_{N}^{*})=J^{*}_{(P1)}(p^{*})$. ∎
## References
* [1] A. Bensoussan, G. Da Prato, M. C. Delfour, and S. K. Mitter. Representation and control of infinite dimensional systems, volume 1. Birkhäuser Boston, 1992.
* [2] J. A. Burns and C. N. Rautenberg. The infinite-dimensional optimal filtering problem with mobile and stationary sensor networks. Numer. Funct. Anal. Optim., 36(2):181–224, 2015.
* [3] J. A. Burns and C. N. Rautenberg. Solutions and approximations to the Riccati integral equation with values in a space of compact operators. SIAM J. Control Optim., 53(5):2846–2877, 2015.
* [4] S. Cheng and D. A. Paley. Optimal control of a 1D diffusion process with a team of mobile actuators under jointly optimal guidance. In Proc. 2020 American Control Conf., pages 3449–3454, 2020.
* [5] S. Cheng and D. A. Paley. Optimal guidance and estimation of a 2D diffusion-advection process by a team of mobile sensors. Submitted, 2021.
* [6] S. Cheng and D. A. Paley. Optimal guidance of a team of mobile actuators for controlling a 1D diffusion process with unknown initial conditions. In Proc. American Control Conf., pages 1493–1498, 2021.
* [7] R. F. Curtain and H. Zwart. An introduction to infinite-dimensional linear systems theory, volume 21. Springer Science & Business Media, 2012.
* [8] M. A. Demetriou. Guidance of mobile actuator-plus-sensor networks for improved control and estimation of distributed parameter systems. IEEE Trans. Automat. Control, 55(7):1570–1584, 2010.
* [9] M. A. Demetriou. Adaptive control of 2-D PDEs using mobile collocated actuator/sensor pairs with augmented vehicle dynamics. IEEE Trans. Automat. Control, 57(12):2979–2993, 2012.
* [10] M. A. Demetriou. Using modified Centroidal Voronoi Tessellations in kernel partitioning for optimal actuator and sensor selection of parabolic PDEs with static output feedback. In Proc. 56th IEEE Conf. Decision and Control, pages 3119–3124, 2017.
* [11] M. A. Demetriou and E. Bakolas. Navigating over 3D environments while minimizing cumulative exposure to hazardous fields. Automatica, 115:108859, 2020.
* [12] M. A. Demetriou and I. I. Hussein. Estimation of spatially distributed processes using mobile spatially distributed sensor network. SIAM J. Control Optim., 48(1):266–291, 2009.
* [13] M. A. Demetriou, A. Paskaleva, O. Vayena, and H. Doumanidis. Scanning actuator guidance scheme in a 1-D thermal manufacturing process. IEEE Trans. Control Systems Technology, 11(5):757–764, 2003.
* [14] S. Dubljevic, M. Kobilarov, and J. Ng. Discrete mechanics optimal control (DMOC) and model predictive control (MPC) synthesis for reaction-diffusion process system with moving actuator. In Proc. 2010 American Control Conf., pages 5694–5701, 2010.
* [15] Z. Emirsjlow and S. Townley. From PDEs with boundary control to the abstract state equation with an unbounded input operator: a tutorial. Eur. J. Control, 6(1):27–49, 2000.
* [16] M. Hoy, A. S. Matveev, and A. V. Savkin. Algorithms for collision-free navigation of mobile robots in complex cluttered environments: a survey. Robotica, 33(3):463–497, 2015.
* [17] D. Kasinathan and K. Morris. H∞-optimal actuator location. IEEE Trans. Automat. Control, 58(10):2522–2535, 2013.
* [18] D. E. Kirk. Optimal control theory: an introduction. Courier Corporation, 2012.
* [19] M. Kumar, K. Cohen, and B. HomChaudhuri. Cooperative control of multiple uninhabited aerial vehicles for monitoring and fighting wildfires. J. Aerospace Computing, Information, and Communication, 8(1):1–16, 2011.
* [20] I. Lasiecka and R. Triggiani. Control theory for partial differential equations: continuous and approximation theories, volume 1. Cambridge University Press Cambridge, 2000.
* [21] F. L. Lewis, D. Vrabie, and V. L. Syrmos. Optimal control. John Wiley & Sons, 2012.
* [22] D. Liberzon. Calculus of variations and optimal control theory: a concise introduction. Princeton University Press, 2011.
* [23] M. McAsey, L. Mou, and W. Han. Convergence of the forward-backward sweep method in optimal control. Comput. Optim. Appl., 53(1):207–226, 2012.
* [24] K. Morris. Linear-quadratic optimal actuator location. IEEE Trans. Automat. Control, 56(1):113–124, 2010.
* [25] K. Morris, M. A. Demetriou, and S. D. Yang. Using H2-control performance metrics for the optimal actuator location of distributed parameter systems. IEEE Trans. Automat. Control, 60(2):450–462, 2015.
* [26] K. Morris and S. Yang. Comparison of actuator placement criteria for control of structures. J. Sound and Vibration, 353:1–18, 2015.
* [27] K. Morris and S. Yang. A study of optimal actuator placement for control of diffusion. In Proc. 2016 American Control Conf., pages 2566–2571, 2016.
* [28] S. Omatu and J. H. Seinfeld. Distributed parameter systems: theory and applications. Clarendon Press, 1989.
* [29] A. C. Robinson. A survey of optimal control of distributed-parameter systems. Automatica, 7(3):371–388, 1971.
* [30] H. Royden and P. Fitzpatrick. Real analysis (4th edition). New Jersey: Prentice-Hall Inc, 2010.
* [31] W. Rudin. Principles of mathematical analysis, 3rd edition. McGraw-Hill, 1976.
* [32] A. Schroeder. Mitigating harmful algal blooms using a robot swarm. PhD thesis, University of Toledo, 2018.
* [33] A. Smyshlyaev and M. Krstic. Adaptive control of parabolic PDEs. Princeton University Press, 2010.
* [34] A. E. Taylor. General theory of functions and integration. Courier Corporation, 1985.
* [35] F. Tröltzsch. Optimal control of partial differential equations: theory, methods, and applications, volume 112. American Mathematical Society, 2010.
* [36] J. Werner. Optimization theory and applications. Springer-Verlag, 2013.
* [37] K. Yosida. Functional analysis, volume 123. springer, 1988.
* [38] F. Zeng and B. Ayalew. Estimation and coordinated control for distributed parameter processes with a moving radiant actuator. J. Process Control, 20(6):743–753, 2010.
Hamiltonian formulation for the theory of gravity
and canonical transformations in extended phase space
T. P. Shestakova
Department of Theoretical and Computational Physics, Southern Federal
University,
Sorge St. 5, Rostov-on-Don 344090, Russia
E-mail: <EMAIL_ADDRESS>
Abstract
A starting point for the present work was the statement recently discussed in
the literature that two Hamiltonian formulations for the theory of gravity,
the one proposed by Dirac and the other by Arnowitt – Deser – Misner, may not
be related by a canonical transformation. In turn, this raises a question
about the equivalence of these two Hamiltonian formulations and their
equivalence to the original formulation of General Relativity. We argue that,
since the transformation from the components of the metric tensor to the ADM
variables touches gauge degrees of freedom, which are non-canonical from the
point of view of Dirac, the problem cannot be resolved within the limits of
the Dirac approach. The proposed solution requires the extension of phase space by
treating gauge degrees of freedom on an equal footing with other variables and
introducing missing velocities into the Lagrangian by means of gauge
conditions in differential form. We illustrate with a simple cosmological
model the features of Hamiltonian dynamics in extended phase space. Then, we
give a clear proof for the full gravitational theory that the ADM-like
transformation is canonical in extended phase space in a wide enough class of
possible parametrizations.
## 1. Introduction
It is generally accepted that the problem of formulating Hamiltonian dynamics
for systems with constraints has been solved by Dirac in his seminal papers
[1, 2]. It was Dirac who pointed to the importance of Hamiltonian formulation
for any dynamical theory before its quantization [3]. Other approaches, such
as the Batalin – Fradkin – Vilkovisky (BFV) path integral approach [4, 5, 6]
follow the Dirac one in what concerns the rule of constructing a Hamiltonian
and the role of constraints as generators of transformations in phase space.
It is believed that Dirac generalized Hamiltonian dynamics is equivalent to
the Lagrangian dynamics of the original theory. However, even for
electrodynamics the constraints do not generate a correct transformation for
the zero component of the vector potential, $A_{0}$. We face the same
situation in General Relativity, since the gravitational constraints cannot
produce correct transformations for the $g_{00}$, $g_{0\mu}$ components of the
metric tensor. In fact, it means that the group of transformations generated
by the constraints differs from the group of gauge transformations of the
original theory. Some authors have tried to remedy this shortcoming by
modifying the Dirac approach and proposing special prescriptions for how the
generator should be constructed (see, for example, [7, 8]). Until now this
problem has not attracted much attention, mainly because it touches only
transformations of gauge variables which, according to the conventional
viewpoint, are redundant and must not affect the physical content of the
theory. It will be demonstrated in this paper that the role of gauge degrees
of freedom may be more significant than is usually thought, and the difference
in the groups of transformations is the first indication of the inconsistency
of the theory.
Historically, while constructing Hamiltonian dynamics for the gravitational
field, theorists used various parametrizations of gravitational variables.
Dirac dealt with the original variables, which are the components of the metric tensor [3],
whereas the most famous parametrization is probably that of Arnowitt – Deser –
Misner (ADM) [9], who expressed $g_{00}$, $g_{0\mu}$ through the lapse and
shift functions. To give another example, let us mention the work by Faddeev
[10] where quite specific variables $\lambda^{0}=1/h^{00}+1$,
$\lambda^{i}=h^{0i}/h^{00}$, $q^{ij}=h^{0i}h^{0j}-h^{00}h^{ij}$,
$h^{\mu\nu}=\sqrt{-g}g^{\mu\nu}$ were introduced. From the point of view of
the Lagrangian formalism, all the parametrizations are legitimate, and the
corresponding formulations are equivalent. Meanwhile, it has been shown in
[11] that the components of the metric tensor and the ADM variables are not
related by a canonical transformation. In other words, this implies that the
Dirac Hamiltonian formulation for gravitation and the ADM one are not
equivalent, though it is believed that each of them is equivalent to the
Einstein (Lagrangian) formulation. This contradiction again witnesses
the incompleteness of the theoretical foundation.
The purpose of the present paper is to demonstrate that this contradiction can
be resolved if one treats gauge gravitational degrees of freedom on an equal
footing with physical variables in extended phase space. The idea of extended
phase space was put forward by Batalin, Fradkin and Vilkovisky [4, 5, 6] who
included integration over gauge and ghost degrees of freedom in their
definition of path integral. However, in their approach gauge variables were
still considered as non-physical, secondary degrees of freedom playing just an
auxiliary role in the theory. To construct Hamiltonian dynamics for a
constrained system which would be completely equivalent to the Lagrangian
formulation, we need to take yet another step: we should introduce into the
Lagrangian the missing velocities corresponding to gauge variables by means of
special (differential) gauge conditions. This actually extends the phase space
of physical degrees of freedom.
In Section 2 a mathematical formulation of the problem will be given. We shall
see that the non-equivalence of Hamiltonian formulations for different
parametrizations prevents one from constructing a generator of transformations
in phase space which would produce correct transformations for any
parametrization. These ideas will be illustrated in Section 3 for a simple
model with a finite number of degrees of freedom. The above-mentioned
algorithms [7, 8] work correctly only for some parametrizations. One possible
point of view (advocated, in particular, in [11]) is that only these
parametrizations should be allowed, while all others, not related to the first
ones by canonical transformations, should be prohibited, including the ADM
parametrization. However, imposing any limitations on admissible
parametrizations or transformations does not seem to be a true solution to the
problem. In Section 4 the outline of Hamiltonian dynamics in extended phase
space will be presented, and in Section 5 it will be demonstrated for the full
gravitational theory that different parametrizations from a wide enough class
are related by canonical transformations. In particular, this will restore the
legitimate status of the ADM parametrization. We shall discuss the results and
future problems in Section 6.
## 2. Canonical transformations in phase space
It is generally known that for a system without constraints Lagrangian as well
as Hamiltonian equations maintain their form under transformations to a new
set of generalized coordinates
$q^{a}=v^{a}(Q),$ (1)
where $v^{a}(Q)$ are invertible functions of their arguments. It is easy to
see that any transformation (2.1) corresponds to a canonical transformation in
phase space. Indeed, consider a Lagrangian quadratic in the velocities,
$L=\frac{1}{2}\;\Gamma_{ab}(q)\dot{q}^{a}\dot{q}^{b}-U(q).$ (2)
After the transformation (2.1) the Lagrangian (2.2) would read
$L=\frac{1}{2}\;\Gamma_{cd}(Q)\frac{\partial v^{c}}{\partial
Q^{a}}\frac{\partial v^{d}}{\partial Q^{b}}\dot{Q}^{a}\dot{Q}^{b}-U(Q).$ (3)
New momenta $\{P_{a}\}$ are expressed through old momenta $\{p_{a}\}$ by the
relations
$P_{a}=p_{b}\frac{\partial v^{b}}{\partial Q^{a}}.$ (4)
The transformation (2.1), (2.4) is canonical with the generating function
which depends on new coordinates and old momenta,
$\Phi(Q,\,p)=-p_{a}v^{a}(Q).$ (5)
The equations
$q^{a}=-\frac{\partial\Phi}{\partial p_{a}};\qquad
P_{a}=-\frac{\partial\Phi}{\partial Q^{a}}$ (6)
reproduce exactly the transformation (2.1), (2.4). It is also easy to check
that the transformation (2.1), (2.4) maintains the Poisson brackets
$\{Q^{a},\,Q^{b}\}=0,\qquad\{P_{a},\,P_{b}\}=0,\qquad\{Q^{a},\,P_{b}\}=\delta^{a}_{b}.$
(7)
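As a sanity check of (2.5)–(2.7), the following SymPy sketch verifies for one degree of freedom that the point transformation (2.1) with the momenta (2.4) preserves the canonical brackets; the inverse map $w=v^{-1}$ is kept as an arbitrary symbolic function, so nothing beyond (2.1) and (2.4) is assumed.

```python
import sympy as sp

# One degree of freedom: write the inverse of (2.1) as Q = w(q); then
# v'(Q) = 1/w'(q), and the new momentum (2.4) reads P = p / w'(q).
q, p = sp.symbols('q p')
w = sp.Function('w')

Q = w(q)                  # new coordinate
P = p / sp.diff(w(q), q)  # new momentum

def pb(f, g):
    """Poisson bracket with respect to the old canonical pair (q, p)."""
    return sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)

print(sp.simplify(pb(Q, Q)), sp.simplify(pb(P, P)), sp.simplify(pb(Q, P)))
# -> 0 0 1, i.e. the brackets (2.7) are maintained
```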
For a system with constraints, gauge variables (i.e. the variables whose
velocities cannot be expressed in terms of conjugate momenta) do not enter
into the Lagrangian quadratically, and a general transformation like (2.1) may
not be canonical. An example is found in the theory of gravity in the
transformation from the components of the metric tensor to the ADM variables,
$g_{00}=\gamma_{ij}N^{i}N^{j}-N^{2},\qquad g_{0i}=\gamma_{ij}N^{j},\qquad
g_{ij}=\gamma_{ij}.$ (8)
This transformation concerns gauge degrees of freedom which, from the
viewpoint of Dirac, are not canonical variables at all. To pose the question
of whether the transformation (2.8) is canonical, one should formally extend
the original phase space by including in it gauge degrees of freedom and their
momenta. In order to prove the non-canonicity of (2.8), it is enough to check
that some of the relations (2.7) are broken. Using the inverse of the
transformation (2.8), one can see that $\{N,\,\Pi^{ij}\}\neq 0$, where
$\Pi^{ij}$ are the momenta conjugate to $\gamma_{ij}$ (see Equation (152) in
[11]). More generally, let us consider the ADM-like transformation
$N_{\mu}=V_{\mu}(g_{0\nu},\,g_{ij}),\qquad\gamma_{ij}=g_{ij}.$ (9)
Here $V_{\mu}$ are some functions of the components of the metric tensor
(though $N_{\mu}$ need not form a 4-vector). A feature of this transformation
is that the space components of the metric tensor remain unchanged, and so do
their conjugate momenta: $\Pi^{ij}=p^{ij}$. Then
$\left.\{N_{\mu},\,\Pi^{ij}\}\right|_{g_{\nu\lambda},p^{\rho\sigma}}=\frac{\partial
V_{\mu}}{\partial g_{ij}}.$ (10)
It is equal to zero only if the functions $V_{\mu}$ do not depend on $g_{ij}$.
This is quite a trivial case, in which the old gauge variables are expressed
through new gauge variables only, and the ADM transformation (2.8) does not
belong to this class.
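The computation (2.10) is easily reproduced in a toy setting with one gauge variable $g_{0}$ and one "spatial" variable $g_{1}$; the function $V$ below is an arbitrary symbolic stand-in for the maps $V_{\mu}$ of (2.9).

```python
import sympy as sp

# Toy version of (2.9)-(2.10): keep (g1, p1) unchanged and set N = V(g0, g1).
g0, g1, p0, p1 = sp.symbols('g0 g1 p0 p1')
V = sp.Function('V')

N, Pi = V(g0, g1), p1  # new gauge variable and the unchanged spatial momentum

def pb(f, g):
    """Poisson bracket in the original variables (g0, p0) and (g1, p1)."""
    return sum(sp.diff(f, qa) * sp.diff(g, pa) - sp.diff(f, pa) * sp.diff(g, qa)
               for qa, pa in [(g0, p0), (g1, p1)])

print(pb(N, Pi))  # -> Derivative(V(g0, g1), g1): nonzero unless V is independent of g1
```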
One could pose the question: Is it worth considering the equivalence of
different formulations in extended phase space? Would it not be better to
restrict ourselves to transformations in the phase space of original canonical
variables in the sense of Dirac? In the second case, we can prove the
equivalence of the equations of motion in the Lagrangian and Hamiltonian
formalisms; however, we have to fix a form of the gravitational constraints by
forbidding any reparametrizations of gauge variables. Determination of the
constraints’ form is of importance for a subsequent procedure of quantization,
which gives rise to the problem of parametrization noninvariance (see, for
example, [12]). From the viewpoint of subsequent quantization, the ADM
parametrization is preferable, since the constraints do not depend on gauge
variables in this case. I would like to emphasize that there are no solid
grounds for fixing the form of the constraints, and, as we shall see in this
paper, the extension of phase space enables us to solve the problem of
equivalence of the Lagrangian and Hamiltonian formalisms for gravity without
any restriction on parametrizations.
As has already been mentioned, the constraints, considered as generators of
transformations in phase space, do not produce correct transformations for all
gravitational variables. To ensure the full equivalence of the two
formulations one has to modify the Dirac prescription, according to which the
generator must be a linear combination of constraints, and replace it by a
more sophisticated algorithm. The known algorithms, firstly, rely upon the
algebra of constraints and, secondly, require an extension of phase space.
Indeed, a transformation for a variable $q^{a}$ produced by any generator $G$
in phase space reads
$\delta q^{a}=\{q^{a},\,G\}.$ (11)
So, to generate correct transformations for gauge variables, the Poisson
brackets should be defined in extended phase space. Again, the dependence of
the algorithm on the algebra of constraints, together with the non-canonicity
of transformations like (2.9), means that the algorithm works only for a
limited class of parametrizations. Thus, the non-equivalence of Hamiltonian
formulations for different parametrizations, resulting in different algebras
of constraints, prevents one from constructing a generator which would produce
correct transformations for any parametrization. In the next section we shall
illustrate this using the algorithm of [7] for a simple model with a finite
number of degrees of freedom.
## 3. The generator of gauge transformation: a simple example
Now we shall consider a closed isotropic cosmological model with the
Lagrangian
$L_{1}=-\frac{1}{2}\frac{a\dot{a}^{2}}{N}+\frac{1}{2}Na.$ (1)
This model is traditionally described in the ADM variables ($N$ is the lapse
function, $a$ is the scale factor). For our purpose, it is more convenient to
go to a new variable $\mu=N^{2}$ which corresponds to $g_{00}$. So the
Lagrangian is
$L_{2}=-\frac{1}{2}\frac{a\dot{a}^{2}}{\sqrt{\mu}}+\frac{1}{2}\sqrt{\mu}\,a.$
(2)
The canonical Hamiltonian, constructed according to the rule
$H=p_{a}\dot{q}^{a}-L$, where $\{p_{a},\;q^{a}\}$ are pairs of variables
called canonical in the sense that all the velocities $\dot{q}^{a}$ can be
expressed through conjugate momenta, for our model is
$H_{C}=p\dot{a}-L_{2}=-\frac{1}{2}\frac{\sqrt{\mu}}{a}\;p^{2}-\frac{1}{2}\sqrt{\mu}\,a$
(3)
($p$ is the momentum conjugate to the scale factor). However, some authors
include into the form $p_{a}\dot{q}^{a}$ also gauge variables and their
momenta which are non-canonical variables in the above sense. Then we have the
so-called total Hamiltonian which for our model takes the form
$H_{T}=\pi\dot{\mu}+p\dot{a}-L_{2}=\pi\dot{\mu}-\frac{1}{2}\frac{\sqrt{\mu}}{a}\;p^{2}-\frac{1}{2}\sqrt{\mu}\,a$
(4)
($\pi$ is the momentum conjugate to the gauge variable $\mu$). Making use of
the total Hamiltonian implies a mixed formalism in which the Hamiltonian is
written in terms of canonical coordinates and momenta, but also of velocities
that cannot be expressed through the momenta. Nevertheless, this very
Hamiltonian plays the central role in the algorithm suggested in [7], while
the canonical Hamiltonian (3.3) does not lead to the correct result.
In [7] the generator of gauge transformations is sought in the form
$G=\sum\limits_{n}\theta_{\mu}^{(n)}G_{n}^{\mu},$ (5)
where $G_{n}^{\mu}$ are first class constraints, $\theta_{\mu}^{(n)}$ are the
$n$th order time derivatives of the gauge parameters $\theta_{\mu}$. In the
theory of gravity the variations of $g_{\mu\nu}$ involve first order
derivatives of gauge parameters, thus the generator is
$G=\theta_{\mu}G_{0}^{\mu}+\dot{\theta}_{\mu}G_{1}^{\mu}.$ (6)
$G_{n}^{\mu}$ satisfy the following conditions, derived from the requirement
of invariance of the equations of motion under transformations in phase
space:
$G_{1}^{\mu}\quad{\rm are\;primary\;constraints};$ (7)
$G_{0}^{\mu}+\left\{G_{1}^{\mu},\;H\right\}\quad{\rm
are\;primary\;constraints};$ (8) $\left\{G_{0}^{\mu},\;H\right\}\quad{\rm
are\;primary\;constraints}.$ (9)
In our case $\pi=0$ is the only primary constraint of the model, so that
$G_{1}=\pi$. The secondary constraint is
$\dot{\pi}=\left\{\pi,\;H_{T}\right\}=-\frac{\partial
H_{T}}{\partial\mu}=\frac{1}{4}\frac{1}{a\sqrt{\mu}}\;p^{2}+\frac{1}{4}\frac{a}{\sqrt{\mu}}=T.$
(10)
The canonical Hamiltonian (3.3) appears to be proportional to the secondary
constraint $T$, $H_{C}=-2\mu T$.
The condition (3.8) becomes
$G_{0}+\left\{\pi,\;H_{T}\right\}=\alpha\pi;$ (11) $G_{0}=-T+\alpha\pi,$
(12)
$\alpha$ is a coefficient that can be found from the requirement (3.9):
$\left\{G_{0},\;H_{T}\right\}=\beta\pi;$ (13)
$\displaystyle\left\{G_{0},\;H_{T}\right\}$ $\displaystyle=$
$\displaystyle-\left\{T,\;H_{T}\right\}+\alpha\left\{\pi,\;H_{T}\right\}=-\left\{T,\;\pi\dot{\mu}-2\mu
T\right\}+\alpha T$ (14) $\displaystyle=$
$\displaystyle-\left\{T,\;\pi\right\}\dot{\mu}+\alpha
T=\frac{1}{2\mu}\;\dot{\mu}T+\alpha T;$
$\beta=0;\qquad\alpha=-\frac{1}{2\mu}\;\dot{\mu};$ (15)
$G_{0}=-\frac{1}{2\mu}\;\dot{\mu}\pi-T.$ (16)
The full generator $G$ (3.6) can be written as
$G=\left(-\frac{1}{2\mu}\;\dot{\mu}\pi-T\right)\theta+\pi\dot{\theta}.$ (17)
The transformation of the variable $\mu$ is
$\delta\mu=\left\{\mu,\;G\right\}=-\frac{1}{2\mu}\;\dot{\mu}\theta+\dot{\theta}.$
(18)
The same expression (up to a factor of 2) can be obtained
from the general transformations of the metric tensor,
$\delta
g_{\mu\nu}=\theta^{\lambda}\partial_{\lambda}g_{\mu\nu}+g_{\mu\lambda}\partial_{\nu}\theta^{\lambda}+g_{\nu\lambda}\partial_{\mu}\theta^{\lambda};$
(19) $\delta g_{00}=\dot{g}_{00}\theta^{0}+2g_{00}\dot{\theta}^{0},$ (20)
if one keeps in mind that $g_{00}=\mu$ and in the above formulas
$\theta=\theta_{0}=g_{00}\theta^{0}$.
It is easy to see that the correct expression (3.18) is entirely due to the
replacement of the canonical Hamiltonian (3.3) by the total Hamiltonian (3.4);
otherwise one would miss the contribution from the Poisson bracket
$\{T,\;\pi\}$ to the generator (3.17) (see the second line of (3.14)).
On the other hand, making use of the total Hamiltonian may not lead to a
correct result for another parametrization. Let us return to the Lagrangian
(3.1). Now the total Hamiltonian is
$H_{T}=\pi\dot{N}-\frac{1}{2}\frac{N}{a}\;p^{2}-\frac{1}{2}\;N\,a.$ (21)
Again, $\pi$ is the momentum conjugate to the gauge variable $N$, and $\pi=0$
is the only primary constraint. The secondary constraint does not depend on
$N$ in this case:
$\dot{\pi}=\left\{\pi,\;H_{T}\right\}=-\frac{\partial H_{T}}{\partial
N}=\frac{1}{2a}\;p^{2}+\frac{1}{2}\;a=T,$ (22)
therefore, the Poisson bracket $\left\{T,\;\pi\right\}$ in (3.14) is equal
to zero, and one would obtain an incorrect expression for the generator,
$G=-T\theta+\pi\dot{\theta}.$ (23)
It cannot produce the correct variation of $N$, which reads
$\delta N=-\dot{N}\theta-N\dot{\theta}.$ (24)
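The contrast between (3.16) and (3.23) reduces to whether the secondary constraint depends on the gauge variable, since $\{T,\,\pi\}$ is the derivative of $T$ with respect to that variable. A small SymPy sketch, taking the constraints (3.10) and (3.22) as given, makes this explicit:

```python
import sympy as sp

# The bracket {T, pi} feeds the dot-mu term of (3.14): it is nonzero for the
# parametrization mu = N^2 but vanishes identically for the lapse N itself.
a, p, mu, N = sp.symbols('a p mu N', positive=True)

T_mu = p**2 / (4 * a * sp.sqrt(mu)) + a / (4 * sp.sqrt(mu))  # constraint (3.10)
T_N = p**2 / (2 * a) + a / 2                                 # constraint (3.22)

print(sp.simplify(sp.diff(T_mu, mu)))  # nonzero (equals -T/(2*mu)): source of the term in (3.16)
print(sp.diff(T_N, N))                 # -> 0: the term is lost, giving the incorrect (3.23)
```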
As we can see, this algorithm fails to produce correct results for an
arbitrary parametrization. In the next section we shall construct Hamiltonian
dynamics in extended phase space and discuss its features and advantages.
## 4. Extended phase space: the isotropic model
We shall consider the effective action including gauge and ghost sectors as it
appears in the path integral approach to gauge field theories,
$S=\int dt\left(L_{(grav)}+L_{(gauge)}+L_{(ghost)}\right)$ (1)
As was mentioned above, it is not enough just to extend phase space by
formally including gauge degrees of freedom in it. One should also introduce
the missing velocities into the Lagrangian. This can be done by means of
special (differential) gauge conditions, which actually extends the phase
space and enables one to avoid the mixed formalism. For our model (3.1) the equation
$N=f(a)$ determines in a general form a relation between the only gauge
variable $N$ and the scale factor $a$. The differential form of this relation
is
$\dot{N}=\frac{df}{da}\;\dot{a}.$ (2)
The ghost sector of the model reads
$L_{(ghost)}=\dot{\bar{\theta}}N\dot{\theta}+\dot{\bar{\theta}}\left(\dot{N}-\frac{df}{da}\;\dot{a}\right)\theta,$
(3)
so that
$\displaystyle L$ $\displaystyle=$
$\displaystyle-\frac{1}{2}\frac{a\dot{a}^{2}}{N}+\frac{1}{2}Na+\lambda\left(\dot{N}-\frac{df}{da}\;\dot{a}\right)+\dot{\bar{\theta}}\left(\dot{N}-\frac{df}{da}\;\dot{a}\right)\theta+\dot{\bar{\theta}}N\dot{\theta}=$
(4) $\displaystyle=$
$\displaystyle-\frac{1}{2}\frac{a\dot{a}^{2}}{N}+\frac{1}{2}Na+\pi\left(\dot{N}-\frac{df}{da}\;\dot{a}\right)+\dot{\bar{\theta}}N\dot{\theta}.$
The conjugate momenta are:
$\pi=\lambda+\dot{\bar{\theta}}\theta;\quad
p=-\frac{a\dot{a}}{N}-\pi\frac{df}{da};\quad\bar{\cal
P}=N\dot{\bar{\theta}};\quad{\cal P}=N\dot{\theta}.$ (5)
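As a cross-check of (4.5), the bosonic part of these momenta follows mechanically from the Lagrangian (4.4) once the ghost bilinears are dropped (Grassmann-odd variables are not handled in this sketch); `fprime` is a symbolic stand-in for $df/da$.

```python
import sympy as sp

# Bosonic part of the Lagrangian (4.4): ghosts dropped, fprime(a) = df/da.
t = sp.symbols('t')
a, N, lam = sp.Function('a')(t), sp.Function('N')(t), sp.Function('lambda')(t)
fp = sp.Function('fprime')(a)

L_bos = (-sp.Rational(1, 2) * a * sp.diff(a, t)**2 / N
         + sp.Rational(1, 2) * N * a
         + lam * (sp.diff(N, t) - fp * sp.diff(a, t)))

print(sp.diff(L_bos, sp.diff(N, t)))  # -> lambda(t): pi = lambda, ghost term dropped
print(sp.diff(L_bos, sp.diff(a, t)))  # -> -a*adot/N - lambda*fprime(a), matching (4.5)
```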
Let us now go to a new variable
$N=v(\tilde{N},a).$ (6)
At the same time, the rest of the variables are unchanged:
$a=\tilde{a};\quad\theta=\tilde{\theta};\quad\bar{\theta}=\tilde{\bar{\theta}}.$
(7)
This is the analog of the transformation from the original gravitational
variables $g_{\mu\nu}$ to the ADM variables. Indeed, in both cases only gauge
variables are transformed while the rest of the variables remain unchanged.
After the change (4.6) the Lagrangian is written as (below we shall omit the
tilde over $a$ and the ghost variables, which remain unchanged)
$L=-\frac{1}{2}\;\frac{a\dot{a}^{2}}{v(\tilde{N},a)}+\frac{1}{2}\;v(\tilde{N},a)\;a+\pi\left(\frac{\partial
v}{\partial\tilde{N}}\;\dot{\tilde{N}}+\frac{\partial v}{\partial
a}\;\dot{a}-\frac{df}{da}\;\dot{a}\right)+v(\tilde{N},a)\;\dot{\bar{\theta}}\dot{\theta}.$
(8)
The new momenta are:
$\tilde{\pi}=\pi\frac{\partial
v}{\partial\tilde{N}};\qquad\tilde{p}=-\frac{a\dot{a}}{v(\tilde{N},a)}+\pi\frac{\partial
v}{\partial a}-\pi\frac{df}{da}=p+\pi\frac{\partial v}{\partial a};$ (9)
$\tilde{\bar{\cal P}}=v(\tilde{N},a)\;\dot{\bar{\theta}}=\bar{\cal
P};\qquad\tilde{\cal P}=v(\tilde{N},a)\;\dot{\theta}={\cal P}.$ (10)
It is easy to demonstrate that the transformations (4.6), (4.7), (4.9), (4.10)
are canonical in extended phase space. The generating function will depend on
new coordinates and old momenta,
$\Phi\left(\tilde{N},\;\tilde{a},\;\tilde{\bar{\theta}},\;\tilde{\theta},\;\pi,\;p,\;\bar{\cal
P},\;{\cal P}\right)=-\pi\,v(\tilde{N},\tilde{a})-p\,\tilde{a}-\bar{\cal
P}\,\tilde{\theta}-\tilde{\bar{\theta}}\,{\cal P}.$ (11)
One can see that the generating function has the same form as in (2.5). The
relations
$N=-\frac{\partial\Phi}{\partial\pi};\qquad a=-\frac{\partial\Phi}{\partial
p};\qquad\tilde{\pi}=-\frac{\partial\Phi}{\partial\tilde{N}};\qquad\tilde{p}=-\frac{\partial\Phi}{\partial\tilde{a}};$
(12) $\theta=-\frac{\partial\Phi}{\partial\bar{\cal
P}\vphantom{\sqrt{N}}};\qquad\bar{\theta}=-\frac{\partial\Phi}{\partial{\cal
P}};\qquad\tilde{\cal
P}=-\frac{\partial\Phi}{\partial\tilde{\bar{\theta}}};\qquad\tilde{\bar{\cal
P}}=-\frac{\partial\Phi}{\partial\tilde{\theta}}$ (13)
give exactly the transformation (4.6), (4.7), (4.9), (4.10). On the other
hand, one can check that Poisson brackets among all phase variables maintain
their canonical form.
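For the bosonic sector (the Grassmann-odd ghosts are omitted from this sketch), the canonicity claimed for (4.6), (4.9) can be verified mechanically, with $v$ an arbitrary symbolic function:

```python
import sympy as sp

# Express the old variables (N, a, pi, p) through the new ones (Nt, at, pit, pt)
# by inverting (4.9): pi = pit / (dv/dNt), p = pt - pit*(dv/da)/(dv/dNt).
Nt, at, pit, pt = sp.symbols('Nt at pit pt')
v = sp.Function('v')(Nt, at)

N, a = v, at
pi_old = pit / sp.diff(v, Nt)
p_old = pt - pit * sp.diff(v, at) / sp.diff(v, Nt)

def pb(f, g):
    """Poisson bracket w.r.t. the new canonical pairs (Nt, pit) and (at, pt)."""
    return sum(sp.diff(f, Q) * sp.diff(g, P) - sp.diff(f, P) * sp.diff(g, Q)
               for Q, P in [(Nt, pit), (at, pt)])

checks = [pb(N, pi_old) - 1, pb(a, p_old) - 1, pb(N, p_old),
          pb(a, pi_old), pb(N, a), pb(pi_old, p_old)]
print([sp.simplify(c) for c in checks])  # -> [0, 0, 0, 0, 0, 0]
```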
Now we are going to write down equations of motion in extended phase space.
Firstly, we rewrite the Lagrangian (4.8) through the momentum $\tilde{\pi}$.
$\displaystyle L$ $\displaystyle=$
$\displaystyle-\frac{1}{2}\;\frac{a\dot{a}^{2}}{v(\tilde{N},a)}+\frac{1}{2}\;v(\tilde{N},a)\;a$
(14) $\displaystyle+$
$\displaystyle\tilde{\pi}\left[\dot{\tilde{N}}+\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-1}\frac{\partial v}{\partial
a}\;\dot{a}-\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-1}\frac{df}{da}\;\dot{a}\right]+v(\tilde{N},a)\;\dot{\bar{\theta}}\dot{\theta}.$
The variation of (4.14) gives, accordingly, the equation of motion (4.15), the
constraint (4.16), the gauge condition (4.17) and the ghost equations (4.18) –
(4.19):
$\displaystyle\frac{a\ddot{a}}{v(\tilde{N},a)}$ $\displaystyle+$
$\displaystyle\frac{1}{2}\;\frac{\dot{a}^{2}}{v(\tilde{N},a)}-\frac{1}{2}\;\frac{a\dot{a}^{2}}{v^{2}(\tilde{N},a)}\;\frac{\partial
v}{\partial a}-\frac{a\dot{a}}{v^{2}(\tilde{N},a)}\;\frac{\partial
v}{\partial\tilde{N}}\dot{\tilde{N}}$ (15) $\displaystyle+$
$\displaystyle\frac{1}{2}\;\frac{\partial v}{\partial
a}\;a+\frac{1}{2}v(\tilde{N},a)-\dot{\tilde{\pi}}\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-1}\frac{\partial v}{\partial
a}+\dot{\tilde{\pi}}\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-1}\frac{df}{da}$ $\displaystyle+$
$\displaystyle\tilde{\pi}\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-2}\frac{\partial^{2}v}{\partial\tilde{N}^{2}}\;\frac{\partial
v}{\partial a}\;\dot{\tilde{N}}-\tilde{\pi}\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-1}\frac{\partial^{2}v}{\partial\tilde{N}\partial
a}\;\dot{\tilde{N}}$ $\displaystyle-$
$\displaystyle\tilde{\pi}\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-2}\frac{\partial^{2}v}{\partial\tilde{N}^{2}}\;\frac{df}{da}\;\dot{\tilde{N}}+\frac{\partial
v}{\partial a}\;\dot{\bar{\theta}}\dot{\theta}=0;$
$\displaystyle\frac{1}{2}\;\frac{a\dot{a}^{2}}{v^{2}(\tilde{N},a)}\;\frac{\partial
v}{\partial\tilde{N}}$ $\displaystyle+$
$\displaystyle\frac{1}{2}\;\frac{\partial
v}{\partial\tilde{N}}\;a-\dot{\tilde{\pi}}-\tilde{\pi}\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-2}\frac{\partial^{2}v}{\partial\tilde{N}^{2}}\;\frac{\partial
v}{\partial a}\;\dot{a}+\tilde{\pi}\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-1}\frac{\partial^{2}v}{\partial\tilde{N}\partial
a}\;\dot{a}$ (16) $\displaystyle+$
$\displaystyle\tilde{\pi}\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-2}\frac{\partial^{2}v}{\partial\tilde{N}^{2}}\;\frac{df}{da}\;\dot{a}+\frac{\partial
v}{\partial\tilde{N}}\;\dot{\bar{\theta}}\dot{\theta}=0;$ $\frac{\partial
v}{\partial\tilde{N}}\;\dot{\tilde{N}}+\frac{\partial v}{\partial
a}\;\dot{a}-\frac{df}{da}\;\dot{a}=0;$ (17)
$v(\tilde{N},\;a)\;\ddot{\theta}+\frac{\partial
v}{\partial\tilde{N}}\;\dot{\tilde{N}}\dot{\theta}+\frac{\partial v}{\partial
a}\;\dot{a}\dot{\theta}=0;$ (18)
$v(\tilde{N},\;a)\;\ddot{\bar{\theta}}+\frac{\partial
v}{\partial\tilde{N}}\;\dot{\tilde{N}}\dot{\bar{\theta}}+\frac{\partial
v}{\partial a}\;\dot{a}\dot{\bar{\theta}}=0.$ (19)
The Hamiltonian in extended phase space looks like
$\displaystyle H$ $\displaystyle=$
$\displaystyle-\frac{1}{2}\;\frac{v(\tilde{N},a)}{a}\left[\tilde{p}^{2}+2\tilde{p}\tilde{\pi}\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-1}\frac{df}{da}+\tilde{\pi}^{2}\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-2}\left(\frac{df}{da}\right)^{2}\right.$ (20)
$\displaystyle-$ $\displaystyle\left.2\tilde{p}\tilde{\pi}\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-1}\frac{\partial v}{\partial
a}-2\tilde{\pi}^{2}\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-2}\frac{\partial v}{\partial
a}\;\frac{df}{da}+\tilde{\pi}^{2}\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-2}\left(\frac{\partial v}{\partial
a}\right)^{2}\right]$ $\displaystyle-$
$\displaystyle\frac{1}{2}\;v(\tilde{N},a)\;a+\frac{1}{v(\tilde{N},a)}\;\bar{\cal
P}{\cal P}.$
The Hamiltonian equations in extended phase space are:
$\displaystyle\dot{\tilde{p}}$ $\displaystyle=$
$\displaystyle\frac{1}{2}\left[\frac{1}{a}\frac{\partial v}{\partial
a}-\frac{v(\tilde{N},a)}{a^{2}}\right]\left[\tilde{p}+\tilde{\pi}\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-1}\frac{df}{da}-\tilde{\pi}\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-1}\frac{\partial v}{\partial a}\right]^{2}$
(21) $\displaystyle-$
$\displaystyle\frac{v(\tilde{N},a)}{a}\left[\tilde{\pi}\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-2}\frac{\partial^{2}v}{\partial\tilde{N}\partial
a}\;\frac{df}{da}-\tilde{\pi}\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-1}\frac{d^{2}f}{da^{2}}\right.$
$\displaystyle-$ $\displaystyle\left.\tilde{\pi}\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-2}\frac{\partial^{2}v}{\partial\tilde{N}\partial
a}\;\frac{\partial v}{\partial a}+\tilde{\pi}\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-1}\frac{\partial^{2}v}{\partial a^{2}}\right]$
$\displaystyle\times$
$\displaystyle\left[\tilde{p}+\tilde{\pi}\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-1}\frac{df}{da}-\tilde{\pi}\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-1}\frac{\partial v}{\partial a}\right]$
$\displaystyle+$ $\displaystyle\frac{1}{2}\;\frac{\partial v}{\partial
a}\;a+\frac{1}{2}\;v(\tilde{N},a)+\frac{1}{v^{2}(\tilde{N},a)}\;\bar{\cal
P}{\cal P};$ $\displaystyle\dot{a}$ $\displaystyle=$
$\displaystyle-\frac{v(\tilde{N},a)}{a}\left[\tilde{p}+\tilde{\pi}\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-1}\frac{df}{da}-\tilde{\pi}\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-1}\frac{\partial v}{\partial a}\right];$ (22)
$\displaystyle\dot{\tilde{\pi}}$ $\displaystyle=$
$\displaystyle\frac{1}{2a}\;\frac{\partial
v}{\partial\tilde{N}}\left[\tilde{p}+\tilde{\pi}\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-1}\frac{df}{da}-\tilde{\pi}\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-1}\frac{\partial v}{\partial a}\right]^{2}$
(23) $\displaystyle-$
$\displaystyle\frac{v(\tilde{N},a)}{a}\left[\tilde{\pi}\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-2}\frac{\partial^{2}v}{\partial\tilde{N}^{2}}\;\frac{df}{da}-\tilde{\pi}\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-2}\frac{\partial^{2}v}{\partial\tilde{N}^{2}}\;\frac{\partial
v}{\partial a}\right.$ $\displaystyle+$
$\displaystyle\left.\tilde{\pi}\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-1}\frac{\partial^{2}v}{\partial\tilde{N}\partial
a}\right]\left[\tilde{p}+\tilde{\pi}\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-1}\frac{df}{da}-\tilde{\pi}\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-1}\frac{\partial v}{\partial a}\right]$
$\displaystyle+$ $\displaystyle\frac{1}{2}\;\frac{\partial
v}{\partial\tilde{N}}\;a+\frac{1}{v^{2}(\tilde{N},a)}\;\frac{\partial
v}{\partial\tilde{N}}\;\bar{\cal P}{\cal P};$ $\displaystyle\dot{\tilde{N}}$
$\displaystyle=$
$\displaystyle-\frac{v(\tilde{N},a)}{a}\left[\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-1}\frac{df}{da}-\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-1}\frac{\partial v}{\partial a}\right]$ (24)
$\displaystyle\times$
$\displaystyle\left[\tilde{p}+\tilde{\pi}\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-1}\frac{df}{da}-\tilde{\pi}\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-1}\frac{\partial v}{\partial a}\right];$
$\displaystyle\dot{\bar{\cal P}}$ $\displaystyle=$ $\displaystyle 0;$ (25)
$\displaystyle\dot{\theta}$ $\displaystyle=$
$\displaystyle\frac{1}{v(\tilde{N},a)}\;{\cal P};$ (26)
$\displaystyle\dot{\cal P}$ $\displaystyle=$ $\displaystyle 0;$ (27)
$\displaystyle\dot{\bar{\theta}}$ $\displaystyle=$
$\displaystyle\frac{1}{v(\tilde{N},a)}\;\bar{\cal P}.$ (28)
One can check that the Hamiltonian equations (4.21) – (4.28) are completely
equivalent to the Lagrangian equations (4.15) – (4.19), the constraint (4.23)
and the gauge condition (4.24) being true Hamiltonian equations.
The Hamiltonian equations (4.21) – (4.28) in extended phase space, as well as
the equations (4.15) – (4.19), include gauge-dependent terms. In this
connection one can object that the equations are not equivalent to the
original Einstein equations, which are known to be gauge-invariant. However,
we remember that any solution to the gauge-invariant Einstein equations is
determined up to arbitrary functions which have to be fixed by a choice of a
reference frame (a state of the observer). This is usually done at the final
stage of solving the Einstein equations. It is important that one cannot avoid
fixing a reference frame to obtain a final form of the solution. By varying
the gauged action (4.1) we deal, in fact, with a generalized mathematical
problem, whose generalization has come from the development of quantum field
theory.
In the case of the extended set of equations (4.21) – (4.28) (or, (4.15) –
(4.19)) one can keep the function $f(a)$ non-fixed up to the final stage of
their resolution. Further, under the conditions $\bar{\pi}=0$, $\theta=0$,
$\bar{\theta}=0$ all gauge-dependent terms are excluded, and the extended set
of equations is reduced to gauge-invariant equations, therefore, any solution
of the Einstein equations can be found among solutions of the extended set.
Solutions with non-trivial $\bar{\pi}$, $\theta$, $\bar{\theta}$ should be
considered and physically interpreted separately.
One can also show that there exists a quantity which is conserved when the
Hamiltonian (or, equivalently, Lagrangian) equations hold. It plays the role of
the BRST generator for our model:
$\Omega=-H\theta-\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-1}\tilde{\pi}{\cal P}.$ (29)
It generates correct transformations for the variables $a$, $\theta$,
$\bar{\theta}$ and for any gauge variable $\tilde{N}$ given by the relation
(4.6),
$\delta\tilde{N}=-\frac{\partial
H}{\partial\tilde{\pi}}\;\theta-\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-1}{\cal
P}=-\dot{\tilde{N}}\theta-\left(\frac{\partial
v}{\partial\tilde{N}}\right)^{-1}v(\tilde{N},a)\;\dot{\theta}.$ (30)
In particular, for the original variable $N$ one gets the transformation
(3.24).
## 5\. The canonicity of transformations in extended phase space for the full gravitational theory
In this section we shall demonstrate for the full gravitational theory that
different parametrizations from a wide enough class (2.9) are related by
canonical transformations. Again, we shall start from the gauged action
$S=\int d^{4}x\left({\cal L}_{(grav)}+{\cal L}_{(gauge)}+{\cal
L}_{(ghost)}\right)$ (1)
We shall use a gauge condition in a general form, $f^{\mu}(g_{\nu\lambda})=0$.
The differential form of this gauge condition introduces the missing
velocities and actually extends phase space,
$\frac{d}{dt}f^{\mu}(g_{\nu\lambda})=0,\qquad\frac{\partial f^{\mu}}{\partial
g_{00}}\dot{g}_{00}+2\frac{\partial f^{\mu}}{\partial
g_{0i}}\dot{g}_{0i}+\frac{\partial f^{\mu}}{\partial g_{ij}}\dot{g}_{ij}=0.$
(2)
Then,
${\cal L}_{(gauge)}=\lambda_{\mu}\left(\frac{\partial f^{\mu}}{\partial
g_{00}}\dot{g}_{00}+2\frac{\partial f^{\mu}}{\partial
g_{0i}}\dot{g}_{0i}+\frac{\partial f^{\mu}}{\partial
g_{ij}}\dot{g}_{ij}\right).$ (3)
Taking into account the gauge transformations,
$\delta
g_{\mu\nu}=\partial_{\lambda}g_{\mu\nu}\theta^{\lambda}+g_{\mu\lambda}\partial_{\nu}\theta^{\lambda}+g_{\nu\lambda}\partial_{\mu}\theta^{\lambda},$
(4)
one can write the ghost sector:
${\cal L}_{(ghost)}=\bar{\theta}_{\mu}\frac{d}{dt}\left[\frac{\partial
f^{\mu}}{\partial
g_{\nu\lambda}}\left(\partial_{\rho}g_{\nu\lambda}\theta^{\rho}+g_{\lambda\rho}\partial_{\nu}\theta^{\rho}+g_{\nu\rho}\partial_{\lambda}\theta^{\rho}\right)\right].$
(5)
It is convenient to write down the action (5.1), (5.3), (5.5) in the form
$\displaystyle S$ $\displaystyle=$ $\displaystyle\int d^{4}x\left[{\cal
L}_{(grav)}+\Lambda_{\mu}\left(\frac{\partial f^{\mu}}{\partial
g_{00}}\dot{g}_{00}+2\frac{\partial f^{\mu}}{\partial
g_{0i}}\dot{g}_{0i}+\frac{\partial f^{\mu}}{\partial
g_{ij}}\dot{g}_{ij}\right)\right.$ (6) $\displaystyle-$
$\displaystyle\dot{\bar{\theta_{\mu}}}\left(\frac{\partial f^{\mu}}{\partial
g_{00}}\left(\partial_{i}g_{00}\theta^{i}+2g_{0\nu}\dot{\theta}^{\nu}\right)+2\frac{\partial
f^{\mu}}{\partial
g_{0i}}\left(\partial_{j}g_{0i}\theta^{j}+g_{0\nu}\partial_{i}\theta^{\nu}+g_{i\nu}\dot{\theta}^{\nu}\right)\right.$
$\displaystyle+$ $\displaystyle\left.\left.\frac{\partial f^{\mu}}{\partial
g_{ij}}\left(\partial_{k}g_{ij}\theta^{k}+g_{i\nu}\partial_{j}\theta^{\nu}+g_{j\nu}\partial_{i}\theta^{\nu}\right)\right)\right].$
Here $\Lambda_{\mu}=\lambda_{\mu}-\dot{\bar{\theta_{\mu}}}\theta^{0}$. One can
see that the generalized velocities enter into the bracket multiplied by
$\Lambda_{\mu}$, in addition to the gravitational part ${\cal L}_{(grav)}$.
This very circumstance will ensure the canonicity of the transformation to new
variables.
Our goal now is to introduce new variables by
$g_{0\mu}=v_{\mu}\left(N_{\nu},g_{ij}\right);\qquad
g_{ij}=\gamma_{ij};\qquad\theta^{\mu}=\tilde{\theta}^{\mu};\qquad\bar{\theta}_{\mu}=\tilde{\bar{\theta}_{\mu}}.$
(7)
This is the inverse transformation for (2.9) and concerns only $g_{0\mu}$
metric components. After the transformation the action will read
$\displaystyle S$ $\displaystyle=$ $\displaystyle\int d^{4}x\left[{\cal
L^{\prime}}_{(grav)}+\Lambda_{\mu}\left(\frac{\partial f^{\mu}}{\partial
g_{00}}\;\frac{\partial v_{0}}{\partial N_{\nu}}\;\dot{N}_{\nu}+\frac{\partial
f^{\mu}}{\partial g_{00}}\;\frac{\partial v_{0}}{\partial
g_{ij}}\;\dot{g}_{ij}\right.\right.$ (8) $\displaystyle+$
$\displaystyle\left.2\;\frac{\partial f^{\mu}}{\partial
g_{0i}}\;\frac{\partial v_{i}}{\partial
N_{\nu}}\;\dot{N}_{\nu}+2\;\frac{\partial f^{\mu}}{\partial
g_{0k}}\;\frac{\partial v_{k}}{\partial g_{ij}}\;\dot{g}_{ij}+\frac{\partial
f^{\mu}}{\partial g_{ij}}\;\dot{g}_{ij}\right)$ $\displaystyle-$
$\displaystyle\dot{\bar{\theta_{\mu}}}\left(\frac{\partial f^{\mu}}{\partial
g_{00}}\;\frac{\partial v_{0}}{\partial
N_{\nu}}\;\partial_{i}N_{\nu}\theta^{i}+\frac{\partial f^{\mu}}{\partial
g_{00}}\;\frac{\partial v_{0}}{\partial
g_{ij}}\;\partial_{k}g_{ij}\theta^{k}+2\;\frac{\partial f^{\mu}}{\partial
g_{00}}\;v_{\nu}(N_{\lambda},g_{ij})\;\dot{\theta}^{\nu}\right.$
$\displaystyle+$ $\displaystyle 2\;\frac{\partial f^{\mu}}{\partial
g_{0i}}\;\frac{\partial v_{i}}{\partial
N_{\nu}}\;\partial_{j}N_{\nu}\theta^{j}+2\;\frac{\partial f^{\mu}}{\partial
g_{0i}}\;\frac{\partial v_{i}}{\partial g_{jk}}\;\partial_{l}g_{jk}\theta^{l}$
$\displaystyle+$ $\displaystyle 2\;\frac{\partial f^{\mu}}{\partial
g_{0i}}\left[v_{\nu}(N_{\lambda},g_{jk})\;\partial_{i}\theta^{\nu}+v_{i}(N_{\lambda},g_{jk})\;\dot{\theta}^{0}+g_{ij}\dot{\theta}^{j}\right]$
$\displaystyle+$ $\displaystyle\frac{\partial f^{\mu}}{\partial
g_{ij}}\left[\partial_{k}g_{ij}\theta^{k}+v_{i}(N_{\lambda},g_{kl})\;\partial_{j}\theta^{0}+g_{ik}\partial_{j}\theta^{k}\right.$
$\displaystyle+$
$\displaystyle\left.\left.\left.v_{j}(N_{\lambda},g_{kl})\;\partial_{i}\theta^{0}+g_{jk}\partial_{i}\theta^{k}\right]\right)\right]$
We can write down the “old” momenta,
$\displaystyle\pi^{ij}$ $\displaystyle=$ $\displaystyle\frac{\partial{\cal
L}_{(grav)}}{\partial\dot{g}_{ij}}+\Lambda_{\mu}\;\frac{\partial
f^{\mu}}{\partial g_{ij}};$ (9) $\displaystyle\pi^{0}$ $\displaystyle=$
$\displaystyle\frac{\partial{\cal
L}_{(grav)}}{\partial\dot{g}_{00}}+\Lambda_{\mu}\;\frac{\partial
f^{\mu}}{\partial g_{00}};$ (10) $\displaystyle\pi^{i}$ $\displaystyle=$
$\displaystyle\frac{\partial{\cal
L}_{(grav)}}{\partial\dot{g}_{0i}}+2\Lambda_{\mu}\;\frac{\partial
f^{\mu}}{\partial g_{0i}},$ (11)
and the “new” momenta are:
$\displaystyle\Pi^{ij}$ $\displaystyle=$ $\displaystyle\frac{\partial{\cal
L^{\prime}}_{(grav)}}{\partial\dot{g}_{ij}}+\Lambda_{\mu}\left(\frac{\partial
f^{\mu}}{\partial g_{00}}\;\frac{\partial v_{0}}{\partial
g_{ij}}+2\;\frac{\partial f^{\mu}}{\partial g_{0k}}\;\frac{\partial
v_{k}}{\partial g_{ij}}+\frac{\partial f^{\mu}}{\partial g_{ij}}\right);$ (12)
$\displaystyle\Pi^{0}$ $\displaystyle=$ $\displaystyle\frac{\partial{\cal
L^{\prime}}_{(grav)}}{\partial\dot{N}_{0}}+\Lambda_{\mu}\left(\frac{\partial
f^{\mu}}{\partial g_{00}}\;\frac{\partial v_{0}}{\partial
N_{0}}+2\;\frac{\partial f^{\mu}}{\partial g_{0i}}\;\frac{\partial
v_{i}}{\partial N_{0}}\right);$ (13) $\displaystyle\Pi^{i}$ $\displaystyle=$
$\displaystyle\frac{\partial{\cal
L^{\prime}}_{(grav)}}{\partial\dot{N}_{i}}+\Lambda_{\mu}\left(\frac{\partial
f^{\mu}}{\partial g_{00}}\;\frac{\partial v_{0}}{\partial
N_{i}}+2\;\frac{\partial f^{\mu}}{\partial g_{0j}}\;\frac{\partial
v_{j}}{\partial N_{i}}\right).$ (14)
The relations between the “old” and “new” momenta are:
$\displaystyle\Pi^{ij}$ $\displaystyle=$
$\displaystyle\pi^{ij}+\left(\pi^{\mu}-\frac{\partial{\cal
L}_{(grav)}}{\partial\dot{g}_{0\mu}}\right)\frac{\partial v_{\mu}}{\partial
g_{ij}};$ (15) $\displaystyle\Pi^{\mu}$ $\displaystyle=$
$\displaystyle\frac{\partial{\cal
L^{\prime}}_{(grav)}}{\partial\dot{N}_{\mu}}+\left(\pi^{\nu}-\frac{\partial{\cal
L}_{(grav)}}{\partial\dot{g}_{0\nu}}\right)\frac{\partial v_{\nu}}{\partial
N_{\mu}}.$ (16)
It is easy to check that the momenta conjugate to ghosts remain unchanged,
$\tilde{\cal P}^{\mu}={\cal P}^{\mu}$, $\tilde{\bar{\cal P}}_{\mu}=\bar{\cal
P}_{\mu}$.
As any Lagrangian is determined up to total derivatives, the gravitational
Lagrangian density ${\cal L}_{(grav)}$ can be modified in such a way that the
primary constraints take the form $\pi^{\mu}=0$, where $\pi^{\mu}$ are the
momenta conjugate to the gauge variables $g_{0\mu}$. This change of the
Lagrangian density does not affect the equations of motion. It was made by
Dirac [3] to simplify the calculations. A similar change of the Lagrangian
density, by omitting a divergence and a total time derivative, was also made in
the ADM paper [9]. Therefore, one can put
$\frac{\partial{\cal
L}_{(grav)}}{\partial\dot{g}_{0\mu}}=0,\qquad\frac{\partial{\cal
L^{\prime}}_{(grav)}}{\partial\dot{N}_{\mu}}=0.$ (17)
Then, the relations (5.15) – (5.16) would become simpler and take the form
$\Pi^{ij}=\pi^{ij}+\pi^{\mu}\frac{\partial v_{\mu}}{\partial
g_{ij}};\qquad\Pi^{\mu}=\pi^{\nu}\frac{\partial v_{\nu}}{\partial N_{\mu}}.$
(18)
It is easy to demonstrate that the transformations (2.9), (5.18) are canonical
in extended phase space. The generating function again depends on new
coordinates and old momenta and has the same form as for a non-constrained
system (see (2.5), compare also with (4.11)),
$\Phi\left(N_{\mu},\;\gamma_{ij},\;\tilde{\theta}^{\mu},\;\tilde{\bar{\theta}}_{\mu},\;\pi^{\mu},\;\pi^{ij},\;\bar{\cal
P}_{\mu},\;{\cal
P}^{\mu}\right)=-\pi^{\mu}v_{\mu}(N_{\nu},\gamma_{ij})-\pi^{ij}\gamma_{ij}-\bar{\cal
P}_{\mu}\tilde{\theta}^{\mu}-\tilde{\bar{\theta}}_{\mu}{\cal P}^{\mu}.$ (19)
Then the following relations hold:
$g_{0\mu}=-\frac{\partial\Phi}{\partial\pi^{\mu}};\qquad
g_{ij}=-\frac{\partial\Phi}{\partial\pi^{ij}};\qquad\theta^{\mu}=-\frac{\partial\Phi}{\partial\bar{\cal
P}\vphantom{\sqrt{N}}_{\mu}};\qquad\bar{\theta}_{\mu}=-\frac{\partial\Phi}{\partial{\cal
P}^{\mu}};$ (20) $\Pi^{\mu}=-\frac{\partial\Phi}{\partial
N_{\mu}};\qquad\Pi^{ij}=-\frac{\partial\Phi}{\partial\gamma_{ij}};\qquad\tilde{\bar{\cal
P}}_{\mu}=-\frac{\partial\Phi}{\partial\tilde{\theta}^{\mu}};\qquad\tilde{\cal
P}^{\mu}=-\frac{\partial\Phi}{\partial\tilde{\bar{\theta}}\vphantom{\sqrt{N}}_{\mu}},$
(21)
that give exactly the transformations
$g_{0\mu}=v_{\mu}(N_{\nu},\gamma_{ij});\qquad
g_{ij}=\gamma_{ij};\qquad\qquad\qquad\qquad\theta^{\mu}=\tilde{\theta}^{\mu};\qquad\bar{\theta}_{\mu}=\tilde{\bar{\theta}}_{\mu};$
(22) $\Pi^{\mu}=\pi^{\nu}\frac{\partial v_{\nu}}{\partial
N_{\mu}};\qquad\qquad\Pi^{ij}=\pi^{ij}+\pi^{\mu}\frac{\partial
v_{\mu}}{\partial g_{ij}};\qquad\quad\tilde{\bar{\cal P}}_{\mu}=\bar{\cal
P}_{\mu};\qquad\tilde{\cal P}^{\mu}={\cal P}^{\mu}.$ (23)
We can now check if the Poisson brackets maintain their form. Differentiating
the first relation in (2.9) with respect to $g_{ij}$ one gets
$\frac{\partial V_{\mu}}{\partial g_{ij}}+\frac{\partial V_{\mu}}{\partial
g_{0\lambda}}\frac{\partial v_{\lambda}}{\partial g_{ij}}=0.$ (24)
Similarly, differentiating the same relation with respect to $N_{\nu}$ gives
$\delta_{\mu}^{\nu}-\frac{\partial V_{\mu}}{\partial
g_{0\lambda}}\frac{\partial v_{\lambda}}{\partial N_{\nu}}=0.$ (25)
Making use of (5.24), (5.25), it is not difficult to calculate the Poisson
brackets. For example, we can recalculate the bracket (2.10) and see that it
vanishes in our extended phase space formalism,
$\displaystyle\left.\\{N_{\mu},\,\Pi^{ij}\\}\right|_{g_{\nu\lambda},p^{\rho\sigma}}$
$\displaystyle=$ $\displaystyle\frac{\partial N_{\mu}}{\partial
g_{0\rho}}\;\frac{\partial\Pi^{ij}}{\partial\pi^{\rho}}+\frac{\partial
N_{\mu}}{\partial
g_{kl}}\;\frac{\partial\Pi^{ij}}{\partial\pi^{kl}}=\left\\{V_{\mu}(g_{0\nu},g_{kl}),\;\pi^{ij}+\pi^{\lambda}\frac{\partial
v_{\lambda}}{\partial g_{ij}}\right\\}$ (26) $\displaystyle=$
$\displaystyle\frac{\partial V_{\mu}}{\partial g_{0\rho}}\;\frac{\partial
v_{\lambda}}{\partial g_{ij}}\delta_{\rho}^{\lambda}+\frac{\partial
V_{\mu}}{\partial
g_{kl}}\;\frac{1}{2}\left(\delta_{k}^{i}\delta_{l}^{j}+\delta_{k}^{j}\delta_{l}^{i}\right)=\frac{\partial
V_{\mu}}{\partial g_{0\lambda}}\;\frac{\partial v_{\lambda}}{\partial
g_{ij}}+\frac{\partial V_{\mu}}{\partial g_{ij}}=0.$
To give another example, let us check the following bracket:
$\left.\\{N_{\mu},\,\Pi^{\nu}\\}\right|_{g_{\lambda\rho},p^{\sigma\tau}}=\frac{\partial
N_{\mu}}{\partial
g_{0\rho}}\;\frac{\partial\Pi^{\nu}}{\partial\pi^{\rho}}=\left\\{V_{\mu}(g_{0\rho},g_{ij}),\;\pi^{\lambda}\frac{\partial
v_{\lambda}}{\partial N_{\nu}}\right\\}=\frac{\partial V_{\mu}}{\partial
g_{0\rho}}\;\frac{\partial v_{\rho}}{\partial N_{\nu}}=\delta_{\mu}^{\nu}.$
(27)
The rest of the brackets can be checked by analogy. This completes the proof
of canonicity of the transformation (2.9) for the full gravitational theory.
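As a quick sanity check of this result, the canonicity condition can be verified symbolically in a one-variable toy model mirroring the minisuperspace case of Section 4. The sketch below is ours and purely illustrative: the parametrization $v(N)=-1/N^{2}$ and all names are hypothetical choices, not part of the derivation above.

```python
import sympy as sp

# Toy check of canonicity for a single gauge variable: g00 = v(N).
# Old canonical pair: (g00, pi); new variables: N = V(g00), Pi = pi * dv/dN,
# mirroring Pi^mu = pi^nu dv_nu/dN_mu of (5.18).
g00, pi = sp.symbols('g00 pi')

V = sp.sqrt(-1/g00)      # hypothetical parametrization: g00 = v(N) = -1/N**2
dV = sp.diff(V, g00)
N_new = V
Pi_new = pi / dV         # dv/dN = 1/V'(g00) by the inverse function theorem

def poisson(A, B):
    """Poisson bracket with respect to the old pair (g00, pi)."""
    return sp.diff(A, g00)*sp.diff(B, pi) - sp.diff(A, pi)*sp.diff(B, g00)

print(sp.simplify(poisson(N_new, Pi_new)))   # -> 1: the transformation is canonical
```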
## 6\. Discussion
A starting point for the present investigation was the paper [11] and the
statement made by its authors that the components of the metric tensor and the
ADM variables are not related by a canonical transformation. However, it is a
misunderstanding to pose the question of canonicity for the transformation
(2.8), which involves, from the viewpoint of the Dirac approach, non-canonical
variables. Let us recall that Dirac himself considered these variables,
$g_{0\mu}$ (along with the zero component of the vector potential of the
electromagnetic field, $A_{0}$), as playing the role of Lagrange multipliers,
while the phase space in his approach includes pairs of generalized
coordinates and momenta for which the corresponding velocities can be expressed
through the momenta.
We should also remember that the Einstein equations were originally formulated
in the Lagrangian formalism. Dirac’s Hamiltonian formulation for gravity is
equivalent to Einstein’s formulation at the level of equations: the
Hamiltonian equations for canonical variables (in Dirac’s sense) are
equivalent to the ($ij$) Einstein equations, and the gravitational constraints
are equivalent to the ($0\mu$) Einstein equations. On the other hand, this
implies that a group of transformations in the Hamiltonian formalism must
involve the full group of gauge transformations of the original theory.
However, within the limits of the Dirac approach we fail to construct a
generator that would produce correct transformations for all variables. We
inevitably have to modify the Dirac scheme, and attempts to do so were already
presented in [7, 8]. Therefore, we cannot consider the Dirac approach as
fundamental and beyond doubt.
The ADM formulation of Hamiltonian dynamics for gravity is, first of all, a
choice of parametrization, which is preferable because of its geometrical
interpretation. There is no special “ADM procedure”: Arnowitt, Deser and
Misner constructed the Hamiltonian dynamics following exactly the Dirac
scheme, just making use of other variables. The fact that the two Hamiltonian
formulations (both following the Dirac scheme, but one for the original
variables and the other for the ADM variables) are not related by canonical
transformations should not lead to ill-grounded conclusions like the one
made in [11], p. 68, that the gravitational Lagrangian used by Dirac and the
ADM Lagrangian are not equivalent. At the Lagrangian level, the transition to
the ADM variables is nothing more than a change of variables in the Einstein
equations, and there are no mathematical rules that would prohibit such a
change of variables. It is the Lagrangian formulation of General Relativity
which is original and fundamental, while its Hamiltonian formulation still
remains questionable, despite the fifty years that have passed since Dirac’s
paper [3]. The extended phase space approach, treating all degrees of freedom
on an equal footing, may be a real alternative to the Dirac generalization of
Hamiltonian dynamics.
The example considered in Section 4 shows that the BRST charge can play the
role of the sought generator in extended phase space. Nevertheless, the
algorithm suggested by BFV for constructing the BRST charge again relies upon
the algebra of constraints. Even for the model from Section 4 it would not
lead to the correct result (4.30). Another way is to construct the BRST charge
as a conserved quantity, based on the BRST invariance of the action and making
use of the first Noether theorem. This method works satisfactorily for simple
models with a given symmetry. We mentioned above that the gravitational
Lagrangian density can be modified for the primary constraints to take the
simplest form $\pi^{\mu}=0$ without affecting the equations of motion.
However, after this modification the full action may not be BRST-invariant.
Some authors (see, for example, [12, 13]) use boundary conditions to exclude
total derivatives and ensure BRST invariance. The boundary conditions (as a
rule, trivial boundary conditions for ghosts and $\pi^{\mu}$) correspond to
asymptotic states and are well grounded in ordinary quantum field theory. This
approach does not seem general enough, and for the gravitational field the
justification of the boundary conditions, as well as the control of BRST
invariance of the action, requires special study.
In [3] Dirac pointed out that “any dynamical theory must first be put in the
Hamiltonian form before one can quantize it”. Based upon Hamiltonian dynamics
in extended phase space, a new approach to quantum theory of gravity has been
proposed in [14, 15]. It was argued that it is impossible to construct a
mathematically consistent quantum theory of gravity without taking into
account the role of gauge degrees of freedom in the description of quantum
gravitational phenomena from the point of view of different observers. The
present paper shows that even at the classical level gauge degrees of freedom
cannot be excluded from consideration. As we have seen, the extension of phase
space by introducing the missing velocities changes the relations between the
“old” and “new” momenta (see (5.18)). As a consequence, the transformation
(2.9) is canonical. In this way, we consider extended phase space not just as
an auxiliary construction which enables one to compensate residual degrees of
freedom and regularize a path integral, as it was in the Batalin – Fradkin –
Vilkovisky approach [4, 5, 6], but rather as a structure that ensures the
equivalence of Hamiltonian dynamics for a constrained system and the
Lagrangian formulation of the original theory.
## Acknowledgements
I would like to thank Giovanni Montani and Francesco Cianfrani for attracting
my attention to the paper [11] and discussions.
## References
* [1] P. A. M. Dirac, Can. J. Math. 2 (1950), P. 129–148.
* [2] P. A. M. Dirac, Proc. Roy. Soc. A246 (1958), P. 326–332.
* [3] P. A. M. Dirac, Proc. Roy. Soc. A246 (1958), P. 333–343.
* [4] E. S. Fradkin and G. A. Vilkovisky, Phys. Lett B55 (1975), P. 224–226.
* [5] I. A. Batalin and G. A. Vilkovisky, Phys. Lett B69 (1977), P. 309–312.
* [6] E. S. Fradkin and T. E. Fradkina, Phys. Lett B72 (1978), P. 343–348.
* [7] L. Castellani, Ann. Phys. 143 (1982), P. 357–371.
* [8] R. Banerjee, H. J. Rothe and K. D. Rothe, Phys. Lett. B463 (1999), P. 248–251.
* [9] R. Arnowitt, S. Deser and C. W. Misner, “The Dynamics of General Relativity”, in: Gravitation: an Introduction to Current Research, ed. by L. Witten, John Wiley & Sons, New York (1962), P. 227–265.
* [10] L. D. Faddeev, Usp. Fiz. Nauk 136 (1982), P. 435–457 [Sov. Phys. Usp. 25 (1982). P. 130–142].
* [11] N. Kiriushcheva and S. V. Kuzmin, “The Hamiltonian formulation of General Relativity: myth and reality”, E-print arXiv: gr-qc/0809.0097.
* [12] J. J. Halliwell, Phys. Rev. D38 (1988), P. 2468–2481.
* [13] M. Hennaux, Phys. Rep. 126 (1985), P. 1–66.
* [14] V. A. Savchenko, T. P. Shestakova and G. M. Vereshkov, Gravitation & Cosmology 7 (2001), P. 18–28.
* [15] V. A. Savchenko, T. P. Shestakova and G. M. Vereshkov, Gravitation & Cosmology 7 (2001), P. 102–116.
# DiffSRL: Learning Dynamic-aware State Representation for Deformable Object
Control with Differentiable Simulation
Sirui Chen∗, Yunhao Liu∗, Jialong Li, Shang Wen Yao, Tingxiang Fan, Jia Pan†
∗ denotes equal contribution. † denotes the corresponding author.
S. Chen, Y. Liu, J. Li, S. Yao, T. Fan, and J. Pan are with the University of Hong Kong.
###### Abstract
Dynamic state representation learning is an important task in robot learning.
A latent space that captures dynamics-related information has wide application
in areas such as accelerating model-free reinforcement learning, closing the
simulation-to-reality gap, and reducing motion planning complexity. However,
current dynamic state representation learning methods scale poorly on complex
dynamic systems such as deformable objects, and cannot directly embed
well-defined simulation functions into the training pipeline. We propose
DiffSRL, a dynamic state representation learning pipeline that utilizes
differentiable simulation to embed complex dynamics models as part of
end-to-end training. We also integrate differentiable dynamic constraints into
the pipeline, which provide incentives for the latent state to be aware of
dynamical constraints. We further establish a state representation learning
benchmark on a soft-body simulation system, PlasticineLab, on which our model
demonstrates superior performance in terms of capturing long-term dynamics as
well as reward prediction. The source code and more experimental results are
available at https://ericcsr.github.io/DiffSRL/.
## I Introduction
Deep neural networks have become a powerful tool for representing
high-dimensional data with fewer dimensions. Their rapid development and
application have significantly enhanced and accelerated machine processing of
complex data such as images, words, sentences, and graphs. Among the
applications, one that has recently aroused great research interest is
representing real-world objects in dynamical environments from high-dimensional
point clouds or images for control. Low-dimensional latent states can benefit
learning algorithms by reducing the number of trainable parameters,
accelerating training, and achieving better control results. The efficiency of
this paradigm on rigid-body objects has been demonstrated in recent research
[1, 2].
Figure 1: Similar states but expecting distinct action trajectories.
However, representing the more commonly seen deformable objects in low
dimension remains a challenging problem [3] and is essential for deformable
object manipulation. Representing deformable objects is difficult for multiple
reasons: a) The state of such objects tends to be extremely large due to their
high degrees of freedom (DoFs) during deformation [3]. For example, a piece of
cloth can be translated integrally like an ordinary rigid object, but it can
also be rolled, crumpled, and folded with no predetermined pattern. Due to the
complexity resulting from the high DoFs, soft-body objects lack uniform and
convenient representations and instead rely on surface particles or meshes to
record their current states. b) Different state elements of the same deformable
object can have very different correlations when undergoing different
deformations [3]. For instance, surface points on a piece of unfolded cloth,
when translated forward, all undergo the same motion. However, once the cloth
is folded, it divides into upper and lower parts whose surface points move in
opposite directions. c) Visually similar states can generate distinct
outcomes, which can confuse most static state representation learning methods.
Fig. 1 shows one such example, where a robotic arm removes a rod from a piece
of plasticine and then places it at a target position. The initial states of
the two situations are similar but with a subtle difference: on the left, the
rod is completely surrounded by plasticine, while on the right there is a large
enough notch in the plasticine. The difference between them can be overlooked
by static representation learning approaches because the notch occupies merely
a tiny fraction of the volume. As a result, the controller will generate
similar actions, whereas in the first case we prefer the robotic arm to first
drag the rod out (action 1) and then translate it (action 2), while a direct
translation towards the target is more desirable in the second. Similar
challenges are ubiquitous in soft-body object manipulation, calling for the
development of new state representation learning algorithms that are
dynamic-aware, i.e., the representation shall not only examine the current
state but also understand the dynamics so as to reflect the different futures
bifurcating from similar current states.
Existing state representation learning methods, such as AutoEncoder [2] and
Embed to Control (E2C) [1], lack specific designs for dealing with the above
challenges posed by the dynamics of deformable objects. Newly emerged
solutions focusing on deformable objects, such as CFM [3] and G-Doom [4], as
will be elaborated in Sec. II, cannot directly embed simulation into their
training pipelines, leading to a performance bottleneck on soft-body object
manipulation tasks [5]. To improve the performance of state representation
learning on deformable objects, we hereby propose a new pipeline, DiffSRL,
that utilizes a differentiable simulator to better encode dynamic- and
constraint-aware information.
Our main contributions are threefold.
* •
We propose an end-to-end dynamic state representation learning pipeline
utilizing differentiable simulation to capture the multistep dynamics of
deformable objects. To the best of our knowledge, this is the first time that
state representation learning directly uses the ground-truth simulation
function as part of the end-to-end training pipeline.
* •
Our method explicitly incorporates dynamics constraints into the training
pipeline and demonstrates awareness of physical constraints.
* •
We establish a benchmark on PlasticineLab [6], a non-Newtonian deformable
object simulation environment. As baselines, we also implement a set of state-
of-the-art state representation learning algorithms, such as CFM [3], inverse
model learning [7], AutoEncoder [2] and E2C [1].
We compare our proposed pipeline and the baselines according to reward
prediction accuracy and trajectory reconstruction errors, and investigate the
performance of the learned encoder on important downstream tasks such as
model-free reinforcement learning and model-based policy optimization. We
demonstrate that our approach outperforms all baselines on all tasks most of
the time.
## II Related Work
Figure 2: Overview of our DiffSRL model pipeline.
Dynamic state representation learning is an emerging topic, driven by the
acceleration and enhancement it offers to reinforcement learning. AutoEncoder
[2] trains a CNN encoder-decoder pair to extract features from images,
achieving better performance in deep reinforcement learning than training all
parameters from scratch. This method, however, only uses static state
information and fails to distinguish visually similar but dynamically
different states [5], as shown previously. As a solution to this issue, Embed
to Control (E2C) [1] proposes a pipeline including a learned forward model, an
auto-encoder, and a linear dynamic transfer constraint between each current
state and the next state. E2C demonstrates its strength in enhancing deep RL’s
performance in target-reaching problems with image observations, but its
linear model cannot describe the complex dynamics of deformable objects well,
especially over multiple steps. Replacing the linear model in E2C with a
neural network is not trivial, because a neural network with significantly
more parameters may be hard to train to convergence, preventing the E2C
variant from achieving desirable outcomes or rolling out accurate long-horizon
trajectories. Alternatively, inverse-model-based state representation learning
[8] is based on the heuristic that a good state representation should be able
to predict the action leading to the transition between two consecutive
states. Thus, the encoder can be trained together with an action predictor by
minimizing the MSE loss between real and predicted actions. The
inverse-model-based method has been further applied in model-free RL tasks
[9, 10], with more variants and applications elucidated in [5], but existing
designs do not take the complexity of deformable objects into consideration
either. CFM [3] explored state representation learning on deformable objects
such as clothes and ropes. It utilizes contrastive learning to jointly train
both the forward model and the encoder, so as to represent images of
deformable objects with distinguishable latent states. The trained forward
model and latent space have been used in MPC-based manipulation control.
G-Doom [4] applies a paradigm similar to CFM but with a different
architecture, using latent graphs as the representation in the latent space
and graph dynamic models as the forward model. However, despite their
achievements in manipulating deformable objects, both CFM and G-Doom assume no
access to the ground-truth analytical model of the dynamic system and do not
take physical constraints into consideration, which may render their forward
models incapable of accurate long-term planning.
Recent advances in physically-based simulation, such as the Material Point
Method (MPM) [11], allow us to accurately and efficiently simulate deformable
objects such as cloth and plasticine. Moreover, differentiable simulators [12,
13, 14] allow each simulation step to be integrated as a layer in deep neural
networks, making end-to-end training possible. This provides an opportunity to
overcome the difficulty, faced by E2C, CFM, and G-Doom, of capturing complex
dynamics with a neural network. In end-to-end training, gradients from the
upstream neural networks are propagated through the physics engine and affect
network parameters accordingly. For instance, DiffTaichi [14], a newly emerged
differentiable physics engine based on the Taichi programming language [15],
includes many MPM simulation examples, and a soft-body control benchmark with
differentiable simulation has been established in PlasticineLab [6]. We also
perform our experiments on PlasticineLab for a fair comparison among different
methods.
As for the states to be encoded, common methods take 2D images as input and
thus adopt convolutional architectures. But such methods may encounter
difficulties when applied to point clouds, due to the intrinsic permutation
irregularity of point clouds. PointNet [16] is a pioneering work applying deep
learning to point cloud data, where permutation invariance is achieved using
point-wise symmetric functions, including point-wise MLPs and max-pooling
layers. Moreover, being trained on point clouds allows deep learning
algorithms to tolerate a variety of input shapes, regardless of how many
points are contained in the incoming point clouds. Thus, our proposed DiffSRL
method uses point clouds extracted from the PlasticineLab environment as the
training dataset.
## III Problem Formulation
A general dynamic system consists of three components: a state space
$\mathcal{S}$, an action space $\mathcal{A}$, and a transfer function:
$\displaystyle s_{t+1}=f_{\text{sim}}(s_{t},a_{t}),\
s_{t},s_{t+1}\in\mathcal{S};a_{t}\in\mathcal{A}.$ (1)
The state $s_{t}$, obtained from the differentiable simulator, is a mixture of
observable parts (such as the particles’ coordinates) and imperceptible parts
(including the velocity fields and constraints). The observable part of the
state $s_{t}$ is denoted as $s^{\text{obs}}_{t}$. The sensors of the robotic
system may only observe part of the entire observable surroundings (denoted as
$\mathcal{O}$) due to its limited visual angles or occlusion of obstacles.
This can be modeled as an observation function
$o_{t}=g_{\text{obs}}(s^{\text{obs}}_{t})$, which selects the accessible
observation from the observable states. Our goal is to establish a dynamic
state representation learning method capable of computing a low-dimensional
latent state $h_{t}$ mapped from the observation:
$h_{t}=f^{\theta}(o_{t}),o_{t}\in\mathcal{O}$. The mapping function
$f^{\theta}$ is expected to capture sufficient dynamics-related information
from the current state observation so that it can determine whether two latent
states are dynamically equivalent. In particular, two latent states $h_{t}$
and $h_{t}^{\prime}$ are regarded as dynamically equivalent if, given an
arbitrary action sequence $a_{t},a_{t+1},\ldots,a_{t+k}$, their future states
$s_{t+k},s^{\prime}_{t+k}$ will be similar.
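To make the formulation concrete, the objects involved can be sketched as follows. The sketch is only illustrative: the names (`DynamicSystem`, `f_sim`, `g_obs`, `rollout`) are our own and do not correspond to any PlasticineLab API.

```python
from dataclasses import dataclass
from typing import Callable, List
import numpy as np

@dataclass
class DynamicSystem:
    """Hypothetical container for the transfer and observation functions of Eq. (1)."""
    f_sim: Callable[[np.ndarray, np.ndarray], np.ndarray]  # s_{t+1} = f_sim(s_t, a_t)
    g_obs: Callable[[np.ndarray], np.ndarray]              # o_t = g_obs(s_t^obs)

def rollout(sys: DynamicSystem, s0: np.ndarray, actions: List[np.ndarray]) -> List[np.ndarray]:
    """Roll a state forward through an action sequence a_t, ..., a_{t+k}."""
    states = [s0]
    for a in actions:
        states.append(sys.f_sim(states[-1], a))
    return states

# Dynamic equivalence, informally: h_t and h_t' are equivalent if, for any
# action sequence, the rolled-out futures s_{t+k} and s'_{t+k} stay close.
```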
## IV Approach
To learn a state representation that is aware of dynamic equivalence, we
propose the DiffSRL framework, which consists of three main components: 1) a
state observation Autoencoder, which compresses the dynamic state to a latent
state and reconstructs the state from the latent state; 2) a constraint
regulator, which regulates the decoded state against the system's dynamical
constraints; and 3) a differentiable simulator, which rolls out the trajectory
starting from the decoded state. The rollout process is differentiable, so
that gradients can be propagated from the end of the trajectory all the way
back to the encoder. The overall pipeline of DiffSRL is shown in Fig. 2.
In this section, we first introduce some preliminary knowledge of the
Particle-In-Cell paradigm and the differentiability relevant to the simulator
component of our design, and then move on to the design details of the three
major components of our model as well as our loss function.
### IV-A Preliminaries
Particle-In-Cell paradigm: Lagrangian methods and Eulerian methods are the two
major branches of simulation algorithms [11]. The former carries physical
properties with particles and is convenient for numerical integration, while
the latter stores physical properties on fixed grids and can be used for fast
interactive simulation. The Particle-In-Cell (PIC) method combines both and is
commonly applied in fluid and deformable object simulation. The deformable
object simulation in PlasticineLab is based on the Material Point Method [11],
a variant of Particle-In-Cell. Our constraint regulator also utilizes the PIC
paradigm to efficiently detect penetration.
Differentiable Simulation: The physically-based simulation in Eq. 1 maps the
current state $s_{t}$ and action $a_{t}$ to the next state $s_{t+1}$.
Traditional simulators such as Bullet [17], Dart [18], and Mujoco [19] only
provide non-differentiable forward functions. Thanks to the recent advances in
differentiable simulation (Sec. II), the gradient of the simulation function
can now also be acquired:
$\displaystyle\nabla{s}_{t}$
$\displaystyle=\nabla{s}_{t+1}\text{Jac}^{s_{t+1}}_{s_{t}}f(s_{t},a_{t})$ (2)
$\displaystyle\nabla{a}_{t}$
$\displaystyle=\nabla{s}_{t+1}\text{Jac}^{s_{t+1}}_{a_{t}}f(s_{t},a_{t})$
which allows us to embed the simulation function into a deep learning
framework such as PyTorch or TensorFlow as part of an end-to-end training
pipeline. In DiffSRL's architecture, the trajectory rollout module is based on
differentiable simulation.
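The coupling in Eq. 2 is what frameworks such as PyTorch expose through custom autograd functions. The following is a minimal sketch of wrapping one simulation step; the engine calls `sim_forward`/`sim_backward` are hypothetical stand-ins (here with trivial dynamics) for the Jacobian-vector products a real differentiable simulator would supply.

```python
import torch

def sim_forward(s_t, a_t):
    """Stand-in for the engine's forward step (a real engine would run, e.g., one MPM step)."""
    return s_t + a_t                                   # trivial dynamics for illustration

def sim_backward(s_t, a_t, grad_s_next):
    """Stand-in for the Jacobian-vector products of Eq. 2 (identity Jacobians here)."""
    return grad_s_next.clone(), grad_s_next.clone()

class SimStep(torch.autograd.Function):
    """One differentiable simulation step s_{t+1} = f_sim(s_t, a_t) as an autograd op."""

    @staticmethod
    def forward(ctx, s_t, a_t):
        ctx.save_for_backward(s_t, a_t)
        return sim_forward(s_t, a_t)

    @staticmethod
    def backward(ctx, grad_s_next):
        s_t, a_t = ctx.saved_tensors
        return sim_backward(s_t, a_t, grad_s_next)

# Usage: s_next = SimStep.apply(s_t, a_t); gradients then flow to both s_t and a_t.
```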
Distance Metric on Point Clouds: Point clouds are usually expressed as
unordered sets whose distance cannot be directly measured by common metrics
such as MSE or MAE. Instead, we use the Earth Mover's Distance (EMD) [20] to
measure the difference between two point clouds in terms of the minimum total
pairwise distance between all the points. The EMD can be expressed as:
$\displaystyle\text{Dist}_{\text{emd}}(A,B)=\min_{\phi:A\rightarrow B}\sum_{a\in A}d(a,\phi(a))$ (3)
where $A$ and $B$ are two unordered sets, $d(\cdot,\cdot)$ is the distance
between two points from these two sets, and $\phi(\cdot)$ is the
correspondence between the points of $A$ and $B$ to be optimized. The
minimization is implemented using the Iterative Closest Point (ICP) [21]
algorithm. Due to the high computational cost of solving this optimization, in
practice a greedy point-matching variant of EMD, the Chamfer Distance [22], is
also commonly used, which sums the nearest-neighbor distances in both
directions:
$\displaystyle\text{Dist}_{\text{chamfer}}(A,B)=\sum_{a\in A}\min_{b\in B}d(a,b)+\sum_{b\in B}\min_{a\in A}d(a,b).$ (4)
We use losses based on the Chamfer Distance and the ICP algorithm in the
constraint regulator.
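For reference, the Chamfer Distance of Eq. 4 can be implemented in a few lines; a sketch using squared Euclidean point distances, with one point cloud per tensor:

```python
import torch

def chamfer_distance(A: torch.Tensor, B: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance (Eq. 4) between point clouds A (n, 3) and B (m, 3)."""
    d = torch.cdist(A, B, p=2) ** 2              # pairwise squared distances, (n, m)
    return d.min(dim=1).values.sum() + d.min(dim=0).values.sum()

# Example: chamfer_distance(torch.rand(512, 3), torch.rand(256, 3))
```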
### IV-B State Observation Autoencoder
To obtain the latent states of a dynamic system, we use a deep neural network
with learnable parameters $\theta_{\text{encoder}}$ as the encoder, where the
specific architecture depends on the type of system’s observation. In
particular, we will use Multi-layer Perceptron (MLP), Convolution Neural
Network (CNN), and permutation invariant encoders such as PointNet [16] for
observations in form of an ordered vector, image, or an unordered set of
particles such as point clouds, respectively. Meanwhile, a decoder with
parameters $\theta_{\text{decoder}}$ is trained simultaneously for recovering
the observable state from the latent state. Formally, the Autoencoder can be
expressed as:
$\displaystyle h_{t}$ $\displaystyle=f^{\theta_{\text{encoder}}}(o_{t}),$ (5)
$\displaystyle\tilde{s}^{\text{obs}}_{t}$
$\displaystyle=f^{\theta_{\text{decoder}}}(h_{t}),$
where $o_{t},s^{\text{obs}}_{t},h_{t}$ are as defined in Sec. III and
$\tilde{s}^{\text{obs}}_{t}$ is the observation reconstructed from $h_{t}$.
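A compact sketch of the pair in Eq. 5 for point-cloud observations is given below; a max-pooled point-wise MLP stands in for the PCN/PointNet encoder actually used, and all layer sizes are illustrative.

```python
import torch.nn as nn

class PointEncoder(nn.Module):
    """Permutation-invariant encoder h_t = f(o_t): point-wise MLP + max-pooling."""
    def __init__(self, d_latent=1024):
        super().__init__()
        self.pointwise = nn.Sequential(nn.Linear(3, 128), nn.ReLU(),
                                       nn.Linear(128, d_latent))
    def forward(self, pts):                            # pts: (n_points, 3)
        return self.pointwise(pts).max(dim=0).values   # (d_latent,)

class PointDecoder(nn.Module):
    """Decoder reconstructing n_points particle positions from h_t."""
    def __init__(self, n_points=8192, d_latent=1024):
        super().__init__()
        self.n_points = n_points
        self.mlp = nn.Sequential(nn.Linear(d_latent, 2048), nn.ReLU(),
                                 nn.Linear(2048, n_points * 3))
    def forward(self, h):
        return self.mlp(h).view(self.n_points, 3)
```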
### IV-C Differentiable Constraint Regulator
After the reconstructed observable state $\tilde{s}^{\text{obs}}_{t}$ is
obtained from the decoder, the reconstructed state $\tilde{s}_{t}$ is obtained
by replacing the observable part $s^{\text{obs}}_{t}$ of $s_{t}$ with
$\tilde{s}^{\text{obs}}_{t}$. However, the reconstructed state may violate
hard dynamics constraints, such as non-interpenetration constraints, joint
limit constraints, and continuity constraints within continuous deformable
objects. Constraint violations usually cause simulator failures or significant
numerical errors. Hence, it is necessary to maintain the encoder's awareness
of the constraints throughout the training procedure, so that the gradients
remain accurate and meaningful. The constraint regulator is therefore designed
to find the feasible state closest to the generated state and, meanwhile, to
penalize the Autoencoder for reconstructing unrealistic states. Formally, the
reconstructed state, the constraint regulator loss, and the regulated state
are computed as
$\displaystyle\tilde{s}_{t}$ $\displaystyle=s_{t}\ominus
s^{\text{obs}}_{t}\oplus\tilde{s}^{\text{obs}}_{t},$ (6)
$\displaystyle\mathcal{L}_{\text{constraint}}$
$\displaystyle=\min_{s\in\mathcal{S}}\text{Dist}(s,\tilde{s}_{t}),$ (7)
$\displaystyle\hat{s}_{t}$
$\displaystyle=\arg\min_{s\in\mathcal{S}}\text{Dist}(s,\tilde{s}_{t}),$ (8)
where $\ominus$ and $\oplus$ represent removing and adding back the observable
parts from and to the full state, respectively. For ordered state vectors, the
distance Dist can be measured directly as the weighted MSE between the two
vectors; for unordered states such as point clouds, we use EMD as the distance
metric.
### IV-D Differentiable Simulator and Loss Design
By using the reconstructed state, the simulation in Eq. 1 then becomes
$\displaystyle\hat{s}_{t+1}$ $\displaystyle=f_{\text{sim}}(\hat{s}_{t},a_{t})$
(9)
where $\hat{s}_{t}$ is the reconstructed state after regulation, on which
$f_{\text{sim}}$ executes the input action $a_{t}$ in the simulated world,
reaching the corresponding result state $\hat{s}_{t+1}$. When applied to a
trajectory of length $k$, we obtain a sequence:
$\displaystyle\hat{s}_{t:t+k}$ $\displaystyle=\text{DiffRollout}(\hat{s}_{t},a_{t:t+k})$ (10)
where DiffRollout denotes the successive execution of the simulator, starting
from $\hat{s}_{t}$, along the trajectory consisting of the action sequence
$a_{t},a_{t+1},\ldots,a_{t+k}$.
The model is trained with both the constraint violation loss
$\mathcal{L}_{\text{constraint}}$ (Eq. 7) and the multi-step reconstruction
loss $\mathcal{L}_{\text{multi-step}}$ defined as
$\displaystyle\mathcal{L}_{\text{multi-step}}=\sum_{i=1}^{k}\gamma^{i}\,\text{Dist}(s_{t+i},\hat{s}_{t+i}),$ (11)
which penalizes the distance between each state $s_{t+i}$ in the ground-truth
trajectory and its correspondence in the rollout trajectory starting from the
reconstructed state. The decaying factor $\gamma$ mitigates gradient
instability when back-propagating through a long horizon. The total loss to be
optimized is then
$\displaystyle\mathcal{L}_{\text{total}}=(1-\beta)\mathcal{L}_{\text{multi-step}}+\beta\mathcal{L}_{\text{constraint}}$ (12)
where the weight factor $\beta$ trades off between the two terms throughout
the training scheme. Initially, the state reconstructed by the decoder
violates constraints significantly, so $\beta$ is set close to $1$ to
encourage the Autoencoder to respect the constraints of the physics system.
Progressively, $\beta$ decays exponentially at rate $\lambda$ so that the
multi-step reconstruction loss dominates.
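Putting Eqs. 9–12 together, the per-trajectory loss computation can be sketched as below. The callables `regulate` (Eqs. 6–8), `sim_step` (Eq. 9), and `dist` (e.g., the Chamfer sketch above) are passed in, so the function itself makes no assumptions about the engine; the per-epoch $\beta$ decay is noted in a comment.

```python
def diffsrl_loss(encoder, decoder, sim_step, regulate, dist,
                 traj, actions, gamma=0.99, beta=0.99):
    """Total loss of Eq. 12 on one trajectory s_t, ..., s_{t+k} (a sketch).

    regulate: implements Eqs. 6-8, returning (regulated state, constraint loss).
    dist:     point-cloud distance, e.g. the Chamfer distance sketched earlier.
    """
    h_t = encoder(traj[0])                         # Eq. 5: encode first observation
    s_hat, l_constraint = regulate(decoder(h_t))   # Eqs. 6-8

    l_multi = 0.0
    for i, a in enumerate(actions, start=1):       # DiffRollout, Eq. 10
        s_hat = sim_step(s_hat, a)                 # Eq. 9
        l_multi = l_multi + gamma**i * dist(traj[i], s_hat)   # Eq. 11

    return (1 - beta) * l_multi + beta * l_constraint         # Eq. 12

# Per epoch, beta decays as beta <- lambda * beta (lambda = 0.9 in Tab. III),
# so the multi-step reconstruction term progressively dominates.
```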
## V Experiments and Results
To evaluate the effectiveness of DiffSRL, we conducted three experiments: 1)
trajectory reconstruction, 2) reward prediction, and 3) model-free
reinforcement learning, on the Chopsticks and Rope tasks from the PlasticineLab
benchmark [6]. Our proposed DiffSRL achieves state-of-the-art performance in
these experiments most of the time.
### V-A Baselines for Comparison
We compare our approach against the state-of-the-art method CFM [3] and four
commonly used dynamic state representation learning methods: E2C [1]; Forward
[23], which learns a dynamic forward model on latent states; Inverse [7],
which conducts inverse model learning for action prediction; and an
AutoEncoder directly reconstructing the state [2]. In Tab. I, we summarize the
characteristics of the different approaches.
Regarding the details of the baseline implementations, similarly to our method
we use the Chamfer Distance [22] as the loss function to train the autoencoder
in the baselines. The forward model $f^{\theta}_{\text{fwd}}(h_{t},a_{t})$
used in E2C and Forward is a 3-layer MLP on the latent space; the action
predictor $a^{\text{pred}}_{t}=f^{\phi}_{\text{inverse}}(h_{t},h_{t+1})$ in
the inverse model [7] takes the concatenation of two latent states as input
and predicts the action with a 3-layer MLP. The contrastive loss in [3] uses
the same formulation and negative-sample selection method as in the original
paper. All MLPs mentioned here use a hidden layer size of 256.
Method | Predictive dynamic model | Predict action | Observation reconstruction | Multistep information | Physical constraints
---|---|---|---|---|---
DiffSRL (Ours) | $\surd$ | | $\surd$ | $\surd$ | $\surd$
E2C[1] | $\surd$ | | $\surd$ | |
Forward[23] | $\surd$ | | | |
Inverse[7] | | $\surd$ | | |
CFM[3] | $\surd$ | | | |
Autoencoder[2] | | | $\surd$ | |
TABLE I: Comparison of the components used in different state representation learning pipelines.
Figure 3: Collision-free and smooth-velocity-field constraint regulator, where the dots and the capsule represent the point cloud and a rigid-body object that may collide with it, respectively.
### V-B Implementation Details
Since PlasticineLab is a deformable object simulator based on the Material
Point Method (MPM) [11], the particles used for simulation can be naturally
considered an unordered point cloud. For simplicity, we choose each
observation $o_{t}$ to be the positions of all particles at time $t$. An
observation, together with velocity and other particle-wise properties, forms
a full state $s_{t}$. A variant of PCN [24] is then used to implement the
Autoencoder for the point cloud $o_{t}$, with the detailed architecture
presented in Fig. 2.
As for the constraint regulator, the MPM simulation disallows penetration
between particles and rigid bodies, and requires smooth velocity fields within
continuous materials. Hence, our regulator operates in two stages: 1) The
signed distance field (SDF) algorithm with MPM is used to enforce
non-penetration and detect collisions efficiently. For those particles
$p_{\text{collide}}$ residing inside rigid bodies, we resolve the collision by
finding the minimum distance $d_{p_{\text{collide}}}$ to exit the colliding
object. For simplicity, it is assumed that each rigid-body object is a
geometric primitive with an analytical collision detection routine, so that
the minimum exit distance can be found efficiently using a simple linear
programming solver. The loss function for this objective is defined as the sum
of all $d_{p_{\text{collide}}}$. 2) The Iterative Closest Point (ICP)
algorithm is applied to find the best pairwise matches between particles in
the ground-truth observation and those in the reconstructed one, where some
particles have been translated outside rigid-body objects to avoid collisions.
This step is necessary because the decoded observation $\tilde{o}_{t}$ is
expected to satisfy soft-body dynamic properties of $s_{t}$ such as continuous
velocity fields. After the pairwise relationships are established by ICP, the
velocity and smoothness constraints of each ground-truth particle are exerted
on the matched pairs. The details are described in Algorithm 1 and illustrated
in Fig. 3.
Algorithm 1 Constraint Regulator
Input: Rigid bodies $\mathcal{R}$, the ground-truth particle observable state
$s^{\text{obs}}_{t}$ (i.e., particle positions), and the decoded particle
positions $\tilde{s}^{\text{obs}}_{t}$
Initialize: Grids $c_{\text{grid}}^{\text{col}}$ storing interpenetrating
rigid-body information, penetration loss $l_{\text{penetration}}$, and
reconstruction loss $l_{\text{rec}}$.
$\hat{s}_{t}^{\text{obs,no-pen}}\leftarrow\text{ResolvePenetration}(\mathcal{R},\tilde{s}^{\text{obs}}_{t})$, where ResolvePenetration performs:
$\triangleleft$ Compute grid mass using particle-in-cell
$m_{\text{grid}}\leftarrow\text{ComputeGridMass}(\tilde{s}^{\text{obs}}_{t})$
for $r$ in $\mathcal{R}$ do
$\triangleleft$ Check penetration using the signed distance field
$c_{\text{grid}}^{\text{col}}\leftarrow\text{SDF}(r,m_{\text{grid}})$
end for
$\triangleleft$ Compute the interpenetration information of each particle with
respect to all rigid bodies
for $p$ in $\hat{s}_{t}^{\text{obs,no-pen}}$ do
if p.grid is not empty then
$\triangleleft$ Solve a linear program to find the minimum displacement for a
particle to exit the penetrated rigid body
$\text{p.pos}\leftarrow\text{LPSolver}(\mathcal{R},p)$
$\triangleleft$ Sum up the minimum displacement costs as the loss
$l_{\text{penetration}}\mathrel{+}=\text{LPCost}(r,p)$
end if
end for
$\triangleleft$ Resolve smoothness via ICP matching
$\hat{s}^{\text{obs}}_{t}\leftarrow\text{ResolveSmoothness}(\hat{s}_{t}^{\text{obs,no-pen}})$, where ResolveSmoothness performs:
$\text{order}\leftarrow\text{ICPSolver}(s^{\text{obs}}_{t},\hat{s}_{t}^{\text{obs,no-pen}})$
$\hat{s}^{\text{obs}}_{t}\leftarrow\text{perm}(\hat{s}_{t}^{\text{obs,no-pen}},\text{order})$
return $\hat{s}^{\text{obs}}_{t}$
The distance between the regulated observation and the corresponding ground
truth is the sum of pairwise squared distances, i.e., the EMD [20] distance
between $\tilde{o}_{t}$ and $o_{t}$.
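For a geometric primitive, the minimum exit distance indeed has a simple closed form, so no general LP machinery is needed in the simplest cases. The sketch below, our simplification of the penetration-resolution step in Algorithm 1, handles a single spherical primitive:

```python
import numpy as np

def resolve_sphere_penetration(points, center, radius):
    """Minimum exit displacement for particles inside a sphere (a sketch).

    Pushes each penetrating particle radially onto the surface and returns the
    summed exit distances, which play the role of l_penetration in Algorithm 1.
    """
    offsets = points - center                       # (n, 3)
    dists = np.linalg.norm(offsets, axis=1)         # (n,)
    exit_dist = np.clip(radius - dists, 0.0, None)  # positive only inside the sphere
    dirs = offsets / np.maximum(dists, 1e-9)[:, None]
    corrected = points + dirs * exit_dist[:, None]
    return corrected, exit_dist.sum()

# Example: resolve_sphere_penetration(np.random.randn(100, 3) * 0.5, np.zeros(3), 1.0)
```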
### V-C Data Collection
We collect 6,000 trajectories of length 8 from the Chopsticks and Rope
benchmarks. To make the datasets cover as many states as possible, 30% of the
data are sampled from random policies while the rest are collected when
optimizing trajectories for manipulating different targets; random actions
alone frequently cause the manipulator to detach from the deformable object
too early to capture sufficient deformation information. We use a 10:1:1 ratio
to split the collected samples into training, validation, and testing sets.
Datasets | Rope | Chopsticks
---|---|---
Metric | ChamferDist | MSE | ChamferDist | MSE
AutoEncoder | 0.048 | 0.236 | 0.044 | 1.102
Inverse | 0.042 | 0.276 | 0.048 | 1.19
Forward | 0.083 | 5.435 | 0.124 | 6.599
E2C | 0.048 | 0.223 | 0.064 | 1.244
CFM | 0.037 | 0.236 | 0.037 | 1.010
DiffSRL | 0.027 | 0.218 | 0.031 | 0.840
DiffSRL w/o Reg | 0.039 | 0.292 | 0.036 | 1.226
TABLE II: Comparison of the average trajectory rollout Chamfer distance and
the mean squared error of reward prediction. The smaller the MSE and Chamfer
Distance, the better the model. The best result is highlighted in bold and the
second best is underlined. DiffSRL w/o Reg is our proposed method trained
without the non-penetration constraint regulator.
### V-D Comparison in Trajectory Reconstruction
A straightforward evaluation of dynamical equivalence is the similarity
between the real trajectory and the trajectory rolled out from the
reconstructed state given the same action sequence, computed by accumulating
the Chamfer Distance [22], i.e.,
$\sum_{s_{t:t+k}}D_{\text{chamfer}}(s_{t},\hat{s}_{t})$, between each pair of
states in the two trajectories. Some baseline methods, such as CFM, Forward,
and Inverse, only train the encoder without a decoder and thus cannot
reconstruct states. For these, we trained a decoder for each fixed encoder
using the Autoencoder training pipeline until convergence. Tab. II shows that
DiffSRL achieves the best performance among all models. Moreover, to evaluate
the effectiveness of the constraint regulator, we conduct an ablation study by
removing the non-penetration regulation module during the training phase. The
removal of the non-penetration regulator results in higher trajectory
reconstruction error as well as higher reward prediction error. We also tried
removing the smoothness regulator during training, but this results in NaN
gradients from the differentiable simulator, since the MPM requirement that
particles in the same grid have consistent properties can be violated. One
thing worth noting is that the Forward model is the worst among all models,
possibly because it tends to be trapped in a trivial minimum, as mentioned in
[5].
### V-E Comparison in Reward Prediction
Figure 4: The architecture of the policy network used in model-based policy
optimization and model-free reinforcement learning.
For many downstream control tasks, it is essential that the encoded latent
states capture the state's main characteristics for accurate reward
prediction. To predict rewards from latent states, we train a 3-layer MLP on
the latent states obtained from each encoder. We predict only the part of the
reward that is related to the poses of the deformable object. The error is
measured as the mean squared error:
$\frac{1}{|\mathcal{D}|}\sum_{s\in\mathcal{D}}\|\hat{r}_{\text{pred}}-r_{\text{real}}\|^{2}$.
As summarized in Tab. II, our model achieves the best performance on both
datasets. The reward prediction error also increases when the non-penetration
regulator is absent.
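The reward head described above is a small MLP on the frozen latent state; a sketch with illustrative sizes:

```python
import torch.nn as nn

class RewardPredictor(nn.Module):
    """3-layer MLP mapping a latent state h_t to a scalar reward estimate."""
    def __init__(self, d_latent=1024, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_latent, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))
    def forward(self, h):
        return self.net(h)

# Trained with MSE against the pose-related part of the environment reward.
```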
### V-F Model-Based Policy Optimization (MBPO)
Figure 5: TD3 reward in different environments.
Figure 6: Keyframes of different approaches in the Model-Based Policy Optimization (MBPO) task.
To investigate the adaptability and effectiveness of the encoders on more
complex downstream tasks, we first evaluated the encoders trained by the
different state representation learning methods on end-to-end policy
optimization with the differentiable simulator. Our experimental setting is
similar to the end-to-end policy optimization in DiffTaichi [14, 25]. The
trained policy is expected to control manipulators to move the plasticine to
different target positions within a finite number of steps. The detailed
target and reward designs are the same as described in [6]. The architecture
of the policy network is shown in Fig. 4, where the encoder's weights are
fixed during policy optimization. For comparison, we add another baseline: the
default down-sampling based method provided by PlasticineLab, which samples
400 points from each point cloud and keeps track of those points in the
sampling order throughout the trajectory. (The down-sampling based method is
not feasible in practice, since accurately tracking 400 points on a deformable
object all the way is difficult.) The policy network is initialized as a
simple MLP and its inputs are obtained by concatenating features. We evaluate
performance using the per-epoch average reward. Each experiment is repeated 5
times and the means and standard deviations are plotted in Fig. 5. The first
row illustrates the reward of different methods in the Chopsticks and Rope
environments, where most methods using state representation learning improve
their policies faster and achieve better final rewards than the down-sampling
based method. Moreover, our approach achieves the best result among all state
representation learning methods, including the AutoEncoder trained with a
similar loss function. Surprisingly, the Forward method achieves competitive
performance even though it has the worst performance in the previous two
tasks, which may indicate that the performance correlation between MBPO and
reward/trajectory reconstruction is not significant; we leave this study to
future work. Some keyframes from the trajectory are shown in Fig. 6.
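The frozen-encoder policy of Fig. 4 can be sketched as follows; the action dimension and hidden size are illustrative, and the concatenation of target features is omitted for brevity.

```python
import torch
import torch.nn as nn

class LatentPolicy(nn.Module):
    """Policy acting on frozen latent states (a sketch of the Fig. 4 setup)."""
    def __init__(self, encoder, d_latent=1024, d_action=6, hidden=256):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():   # encoder weights fixed during optimization
            p.requires_grad_(False)
        self.head = nn.Sequential(nn.Linear(d_latent, hidden), nn.ReLU(),
                                  nn.Linear(hidden, d_action), nn.Tanh())
    def forward(self, obs):
        with torch.no_grad():
            h = self.encoder(obs)
        return self.head(h)
```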
Figure 7: Model-based policy optimization reward in the Rope environment when
using different numbers of points as encoder input.
### V-G Model-Free Reinforcement Learning (MFRL)
Finally, we tested all models with model-free reinforcement learning on the
same target-reaching tasks as described in MBPO. Since trajectory information
was used in the MBPO experiments, similarly to on-policy reinforcement
learning, to fully evaluate the different state representation learning
methods our experiment uses the state-of-the-art off-policy algorithm TD3
[26], which uses the latent state for both the state value function and the
policy function. The architecture of the policy function is shown in Fig. 4.
We use experiments and evaluation criteria similar to MBPO, but increase the
number of epochs from 200 to 1,000, since the model-free method requires more
data. The results are shown in Fig. 5, from which we can observe that most
state representation learning methods have higher maximum rewards and faster
convergence than the sample-based method, and DiffSRL achieves the best
performance, while CFM's performance differs greatly between the two
environments, implying its limitation in general situations.
### V-H Ablation Study
We conduct an ablation study to show the effectiveness of the constraint
regulator. Since removing the smoothness constraint regulator crashes the
representation learning training procedure, we retrained the encoders using
our pipeline without the non-penetration constraint regulator and tested them
with MFRL and MBPO training. Fig. 8 shows the result: the reward drops
significantly when the non-penetration regulator is removed, which justifies
the effectiveness of our design.
Figure 8: Ablation study: without the non-penetration regulator.
### V-I Robustness Analysis
The number of points observed by a sensor may vary greatly in real
applications. Since the point cloud encoder can be directly applied to a
different number of points without re-training, we use different numbers of
observed points to test the robustness of our model. We trained a DiffSRL
model using point clouds consisting of 8,192 points and then investigated the
deployment performance using 6,000, 4,000, and 2,000 particles by computing
the reward curves when training MBPO from the latent states. As shown in Fig.
7, although using fewer observable particles than in training (from 8,192 to
6,000) decreases the performance, further decreasing the particle number (from
6,000 to 4,000 or 2,000) does not significantly degrade it, which demonstrates
that our method is relatively robust against disturbances in observation.
### V-J Hyperparameters Setting
Figure 9: 10-step Chamfer loss on both the Rope and Chopsticks benchmarks when
using different numbers of training steps.
Parameters | Meaning | Value
---|---|---
N | Number of particles | 8192
I | ICP iterations | 3000
$d_{\text{latent}}$ | Latent space dimension | 1024
$\alpha$ | Learning rate | 1e-5
$\gamma$ | Decay factor in Eq. 11 | 0.99
$\beta$ | Initial weight in Eq. 12 | 0.99
$\lambda$ | Decay rate of $\beta$ per epoch | 0.9
E | Number of epochs | 20
TABLE III: Hyperparameters
Tab. III summarizes the hyperparameters used in our model. One important
hyperparameter of our method is $k$, the number of trajectory rollout steps
used when training the state representation. We choose $k$ by evaluating the
10-step trajectory reconstruction loss on the validation set, with the result
shown in Fig. 9. For both environments, 7 steps achieves the best performance,
and thus we choose $k=7$. Notice that the loss curve is non-monotonic as the
number of training steps increases; a possible reason is that, as the rollout
horizon grows, gradient backpropagation through time becomes noisier due to
numerical issues and the optimization landscape becomes more rugged. We leave
further investigation to future work.
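As a minimal sketch of this validation-based selection, the snippet below
picks $k$ by minimizing the 10-step reconstruction loss over a set of
candidates; the candidate values and loss numbers are illustrative
placeholders, not the measurements in Fig. 9.

```python
# Select the rollout horizon k by validation loss: train one encoder per
# candidate k, evaluate the 10-step Chamfer reconstruction loss on the
# validation set, and keep the best. Losses here are hypothetical.
candidate_ks = [1, 3, 5, 7, 9]
val_loss = {1: 0.42, 3: 0.31, 5: 0.27, 7: 0.24, 9: 0.29}  # illustrative

best_k = min(candidate_ks, key=lambda k: val_loss[k])
print(best_k)  # -> 7, the value chosen in the paper
```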
## VI Conclusion and Future Work
We have presented a novel dynamic state representation learning model and its
sample-efficient training scheme. Moreover, we evaluated our model on multiple
tasks using the soft-body benchmarks in PlasticineLab [6] and set up
benchmarks to compare with other state-of-the-art models. Currently our model
uses part of the state taken directly from the simulator as the observation,
while in the real world, robots may not necessarily have sensors to access
such information. In the future, it will be essential to extract the
observation using physical sensors such as Lidar or cameras in order to deploy
our method on real robots. One possible follow-up is to include a
differentiable rendering method [27] in our pipeline, which would enable using
easily accessible images as observations while maintaining the overall
differentiability.
## References
* [1] M. Watter, J. T. Springenberg, J. Boedecker, and M. A. Riedmiller, “Embed to control: A locally linear latent dynamics model for control from raw images,” in _NeurIPS_ , 2015, pp. 2746–2754.
* [2] C. Finn, X. Y. Tan, Y. Duan, T. Darrell, S. Levine, and P. Abbeel, “Deep spatial autoencoders for visuomotor learning,” in _ICRA_ , 2016, pp. 512–519.
* [3] W. Yan, A. Vangipuram, P. Abbeel, and L. Pinto, “Learning predictive representations for deformable objects using contrastive estimation,” _CoRR_ , vol. abs/2003.05436, 2020.
* [4] X. Ma, D. Hsu, and W. S. Lee, “Learning latent graph dynamics for deformable object manipulation,” _CoRR_ , vol. abs/2104.12149, 2021.
* [5] T. Lesort, N. D. Rodríguez, J. Goudou, and D. Filliat, “State representation learning for control: An overview,” _Neural Networks_ , vol. 108, pp. 379–392, 2018.
* [6] Z. Huang, Y. Hu, T. Du, S. Zhou, H. Su, J. B. Tenenbaum, and C. Gan, “Plasticinelab: A soft-body manipulation benchmark with differentiable physics,” in _ICLR_ , 2021.
* [7] P. Agrawal, A. Nair, P. Abbeel, J. Malik, and S. Levine, “Learning to poke by poking: Experiential learning of intuitive physics,” _CoRR_ , vol. abs/1606.07419, 2016.
* [8] W. Duan and K. Jens, “Learning state representations for robotic control: Information disentangling and multi-modal learning,” Master’s thesis, TU Delft, 2017.
* [9] E. Shelhamer, P. Mahmoudieh, M. Argus, and T. Darrell, “Loss is its own reward: Self-supervision for reinforcement learning,” in _ICLR_ , 2017.
* [10] D. Pathak, P. Agrawal, A. A. Efros, and T. Darrell, “Curiosity-driven exploration by self-supervised prediction,” in _CVPR Workshops_ , 2017, pp. 488–489.
* [11] C. Jiang, C. A. Schroeder, J. Teran, A. Stomakhin, and A. Selle, “The material point method for simulating continuum materials,” in _SIGGRAPH_ , 2016, pp. 24:1–24:52.
* [12] K. Werling, D. Omens, J. Lee, I. Exarchos, and C. K. Liu, “Fast and feature-complete differentiable physics engine for articulated rigid bodies with contact constraints,” in _RSS_ , 2021.
* [13] J. Xu, T. Chen, L. Zlokapa, M. Foshey, W. Matusik, S. Sueda, and P. Agrawal, “An end-to-end differentiable framework for contact-aware robot design,” in _RSS_ , 2021.
* [14] Y. Hu, L. Anderson, T. Li, Q. Sun, N. Carr, J. Ragan-Kelley, and F. Durand, “Difftaichi: Differentiable programming for physical simulation,” in _ICLR_ , 2020.
* [15] Y. Hu, T. Li, L. Anderson, J. Ragan-Kelley, and F. Durand, “Taichi: a language for high-performance computation on spatially sparse data structures,” _ACM Trans. Graph._ , vol. 38, no. 6, pp. 201:1–201:16, 2019.
* [16] C. R. Qi, H. Su, K. Mo, and L. J. Guibas, “Pointnet: Deep learning on point sets for 3d classification and segmentation,” in _CVPR_ , 2017, pp. 77–85.
* [17] E. Coumans _et al._ , “Bullet physics library,” _Open source: bulletphysics. org_ , vol. 15, no. 49, p. 5, 2013.
* [18] J. Lee, M. X. Grey, S. Ha, T. Kunz, S. Jain, Y. Ye, S. S. Srinivasa, M. Stilman, and C. K. Liu, “DART: Dynamic animation and robotics toolkit,” _The Journal of Open Source Software_ , vol. 3, no. 22, p. 500, 2018.
* [19] E. Todorov, T. Erez, and Y. Tassa, “Mujoco: A physics engine for model-based control,” in _IROS_ , 2012, pp. 5026–5033.
* [20] Y. Rubner, C. Tomasi, and L. J. Guibas, “A metric for distributions with applications to image databases,” in _ICCV_ , 1998, pp. 59–66.
* [21] K. S. Arun, T. S. Huang, and S. D. Blostein, “Least-squares fitting of two 3-d point sets,” _IEEE Trans. Pattern Anal. Mach. Intell._ , vol. 9, no. 5, pp. 698–700, 1987.
* [22] M. Liu, O. Tuzel, A. Veeraraghavan, and R. Chellappa, “Fast directional chamfer matching,” in _CVPR_ , 2010, pp. 1696–1703.
* [23] L. Kaiser, M. Babaeizadeh, P. Milos, B. Osinski, R. H. Campbell, K. Czechowski, D. Erhan, C. Finn, P. Kozakowski, S. Levine, A. Mohiuddin, R. Sepassi, G. Tucker, and H. Michalewski, “Model based reinforcement learning for atari,” in _ICLR_ , 2020.
* [24] W. Yuan, T. Khot, D. Held, C. Mertz, and M. Hebert, “PCN: point completion network,” in _3DV_ , 2018, pp. 728–737.
* [25] Y. Qiao, J. Liang, V. Koltun, and M. C. Lin, “Scalable differentiable physics for learning and control,” in _ICML_ , vol. 119, 2020, pp. 7847–7856.
* [26] S. Fujimoto, H. van Hoof, and D. Meger, “Addressing function approximation error in actor-critic methods,” in _ICML_ , vol. 80, 2018, pp. 1582–1591.
* [27] H. Kato, D. Beker, M. Morariu, T. Ando, T. Matsuoka, W. Kehl, and A. Gaidon, “Differentiable rendering: A survey,” _CoRR_ , vol. abs/2006.12057, 2020.
Bahador Saket is with Georgia Tech. Dominik Moritz, Halden Lin, and Jeffrey
Heer are with the University of Washington. Çağatay Demiralp is with Columbia
University. Victor Dibia is with IBM Research.
# Beyond Heuristics: Learning Visualization Design
Bahador Saket∗ Dominik Moritz∗ Halden Lin Victor Dibia Çağatay Demiralp Jeffrey Heer
∗Bahador Saket and Dominik Moritz contributed equally to this work.
###### Abstract
In this paper, we describe a research agenda for deriving design principles
directly from data. We argue that it is time to go beyond manually curated and
applied visualization design guidelines. We propose learning models of
visualization design from data collected using graphical perception studies
and build tools powered by the learned models. To achieve this vision, we need
to 1) develop scalable methods for collecting training data, 2) collect
different forms of training data, 3) advance interpretability of machine
learning models, and 4) develop adaptive models that evolve as more data
becomes available.
###### keywords:
Automated visualization design, machine learning, design guidelines,
visualization recommendation, feature engineering, visualization
recommendation systems
## Introduction
The demand for data visualization has significantly grown in recent years with
the increasing availability and complexity of digitized data across everyday
domains. By visually encoding data properties, visualizations aim to enhance
data understanding by leveraging human visual perception, which has evolved
for fast pattern detection and recognition. Understanding the effectiveness of
a given visualization in achieving this goal is a fundamental pursuit in data
visualization research and has important implications in practice.
Visualization researchers regularly conduct empirical studies to investigate
how people decode and interpret visualizations (e.g., [6, 12, 27, 23, 26]).
Such empirical studies are important for understanding and improving the
effectiveness of visualizations. Indeed, design guidelines and heuristics that
we use today in data visualization are an accumulation of what we have learned
from such empirical studies over decades.
It is not, however, always possible to distill guidelines through analysis of
data collected by empirical studies, due to various confounding factors. Even
when this is possible, the derived guidelines might be inadequate for
conveying the subtleties present in user data or might be biased by the
judgment of those formulating them. Moreover, design guidelines provided by
empirical studies often make their way into visualization tools slowly, for
two main reasons. First, our
design knowledge is continually evolving as visualization researchers
regularly publish results from empirical studies and provide new sets of
guidelines for designing effective visualizations. Second, designers of
visualization tools need to spend a significant amount of time and effort to
manually incorporate these guidelines.
In recent years, there has been an increasing trend to publish data collected
from empirical studies with increased awareness of the importance of
replicability and reproducibility. Researchers have made large datasets of
experimental data of visualizations’ effectiveness publicly available (e.g.,
[10, 26, 12, 23]). We advocate learning models from this data and building
tools that automatically apply the learned model. We believe machine learning
models provide a practical opportunity to implicitly engineer the insights
provided by empirical user performance data into visualization systems. The
main advantage is that new guidelines can be applied in practice faster in an
unbiased and reproducible manner.
In this paper, we discuss how we might automatically derive visualization
design principles from experimental data in a scalable way by using machine
learning. We further categorize and discuss the existing approaches for
learning visualization designs to illustrate feasibility and how future
systems fit into these categories. We argue that visualization design
principles used today are also derived from data. However, these principles
had to be abstracted by a visualization researcher, taught by a teacher, and
applied manually by the designer. We propose that the next step for the
research community is to curate a knowledge base of design principles that can
be applied automatically. Next, we should aim to learn from data both design
principles and how to trade off among them. As more data becomes available, we
may one day be able to learn visualization design end-to-end. In the
following, we describe each of these steps, and future research directions to
achieve this vision.
## 1 Visualization Recommendation Systems
In this section, we discuss existing and previous work on automated
visualization design and recommendation engines that incorporate guidelines
derived from graphical perception studies. We categorize these systems into
knowledge-based, data-driven, and hybrid visualization design tools. These
categories are used in recommender systems [2] and represent where the
knowledge that the system uses to recommend visualizations comes from. Table 1
gives an overview of existing machine learning and knowledge-based systems.
### 1.1 Knowledge-Based Systems
A large body of existing automated visualization design tools focuses on
suggesting visualizations based on user-defined constraints (such as which
data attributes to visualize) and design constraints. Mackinlay’s APT [16]
leverages a compositional algebra to enumerate the space of visualizations. It
then uses _expressiveness_ and _effectiveness_ criteria based on the work by
Bertin [5] and Cleveland & McGill [6] to prune and rank visualizations.
_Expressiveness_ refers to the ability of a visualization to convey all and
only the facts in the data. _Effectiveness_ refers to how readily a
visualization conveys its information relative to other visualizations. The
SAGE project [18] extends APT by taking into
account additional data properties such as cardinality, uniqueness, and
functional dependencies. Tableau’s Show Me [17] introduces a set of heuristic
rules to recommend visualizations. Following this line of work, CompassQL [28]
also uses similar heuristic rules to develop expressiveness and effectiveness
criteria to evaluate visualization options. However, CompassQL extends the
earlier work by using a set of hand-tuned scores to specify criteria such as
space efficiency and legibility based on visualization and data properties.
Many of the automated design tools discussed above prune and rank candidate
visualizations based on a set of manually curated design guidelines derived
from previous empirical studies. Designers of these tools often spend a
considerable amount of time and effort incorporating the findings of past and
new empirical studies.
System | Recommender Type | Modeling Approach | Learning | Model | Input Features | Design Space
---|---|---|---|---|---|---
VizDeck | Data Driven | Basic Features | Yes | Linear | 5 Data properties per field | 9+ Types
Kopol | Data Driven | Basic Features | No | Tree | Data properties, Task | 5 Types
Kim et al. | Data Driven | Basic Features | No | Linear | Data properties, Task | 12 Scatterplots
Data2Vis | Data Driven | Raw Features | Yes | Deep | Raw Dataset | Vega-Lite
APT | Knowledge Based | Hand Tuned | No | Rules | Data Types, Importance ordering | APT
CompassQL | Knowledge Based | Hand Tuned | No | Linear | Partial Specification | Vega-Lite
Draco | Hybrid | Learning Trade-Offs | Yes | Linear | Partial Specification, Task | Vega-Lite
Table 1: Comparison of different approaches to model automated visualization
design by the type of recommender (section 1), the modeling approach based on
the kind of features that are used (section 2), whether the approach uses
machine learning with generalization, the type of model, the input to the
model, and the design space.
### 1.2 Data-Driven Systems
The visualization literature is no stranger to data-driven models elicited
through human-subject experiments. For example, low-level perceptual models
such as Weber-Fechner Law [9], Stevens’ Power Law [25], and perceptual kernels
[7] are all based on fitting parametric and non-parametric models to empirical
user data, informing low-level visual encoding design. While data-driven
models are prevalent, using data-driven models for automated visualization
design is a nascent area and only a handful of papers exist to date.
With advances in machine learning, more researchers in the visualization
community started taking initial steps towards developing machine learning
models to recommend visualizations. A machine learning model tries to best
predict what visualization is most appropriate based on the given inputs
(e.g., tasks, data attributes, etc.). Developers of visualization tools need
to hand-craft informative, discriminating, and independent features for
learning such models. To the best of our knowledge, VizDeck [11] was the first
attempt at learning a recommendation model for high-level visual design.
VizDeck is a web-based visualization recommender tool designed for exploratory
data analysis of unorganized data. Using users’ up and down votes on a gallery
of visualization, VizDeck learns to recommend charts that the user is most
likely going to vote up. It does so by learning a linear scoring function over
statistical properties of the dataset. Visualizations are picked from a corpus
of possible candidate visualizations.
Saket et al. [23] recently conducted a crowdsourced study to evaluate the
effectiveness of five visualization types (Table, Line Chart, Bar Chart,
Scatterplot, and Pie Chart) across 10 different visual analysis tasks and from
two different datasets. Based on their findings, they developed
Kopol (https://kopoljs.github.io/), a mini JavaScript prototype visualization
recommender that uses a decision tree to suggest visualizations based on the
given tasks and data attribute types. Kim et al. [12] also recently developed
a model over 12 scatterplot encodings, using the task type and the cardinality
and entropy of the data fields as features. In their crowdsourced experiment,
Kim et al. had users perform tasks for combinations of these features and used
the results to create a ranking.
Luo et al. [15] conducted a study in which 100 participants annotated 33,412
visualizations as good/bad, and provided 285,236 pairwise comparisons between
visualizations. They used 42 public datasets to create the visualizations for
their experiment. Luo et al. then developed DeepEye [15], a visualization
recommender system that uses a binary classifier to decide if a visualization
is good or bad, and a supervised learning-to-rank model to rank the
visualizations based on users’ preferences data collected in their experiment.
A more recent trend in machine learning is to learn models on all available
features instead of hand-crafting good features—a labor intensive and often
biased process. For example, Data2Vis [8] is a neural translation model that
generates visualization specifications in the Vega-Lite grammar from
descriptions of a dataset. The model was trained on thousands of example pairs
of data and visualizations recommended by CompassQL.
### 1.3 Hybrid Systems
In knowledge-based systems, the system designer provides the knowledge about
visualization design. In data-driven systems the system learns a
recommendation model from data. Hybrid recommender systems are both knowledge-
based and data-driven. The system designer has full control over the
recommendation process but can augment the knowledge base with machine
learning. Recently, machine learning experts argued that learning with
structure must be a top priority for AI to achieve human-like abilities [3].
Draco [19] is a formal model that represents (1) visualizations as sets of
logical facts and (2) design principles as a collection of hard and soft
constraints over these facts. Using constraints, we can take theoretical
design knowledge and express it in a concrete, extensible, and testable form.
Compared to VizDeck and Kopol, Draco’s visualization model covers a larger set
of visualizations supported by Vega-Lite [24]. Similar to CompassQL [28],
Draco’s default model uses rules that are informed by empirical studies but
formalized by experts. To avoid ad-hoc choices and support an evolving
knowledge base, Draco can learn how to trade off among competing design rules
using a simple linear model. Draco’s learning algorithm uses learning to rank,
a machine learning method that enables it to learn a preference model from
ordered pairs of complete visualizations. Using this method, Draco can learn
without the need to normalize scores between perceptual studies with different
methods and conditions.
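As a rough illustration of learning from ordered pairs, the sketch below runs
a generic perceptron-style update over hypothetical violation-count feature
vectors; it is a toy stand-in for this class of learning-to-rank methods, not
Draco’s actual solver.

```python
# Learn rule weights from ordered pairs (preferred, dispreferred) of
# violation-count feature vectors, so that preferred designs get lower
# cost. Generic perceptron-style updates; the pairs are hypothetical.
import numpy as np

pairs = [
    (np.array([0.0, 1.0]), np.array([1.0, 0.0])),  # (better, worse)
    (np.array([0.0, 0.0]), np.array([0.0, 2.0])),
]

w = np.zeros(2)        # one weight per design rule
for _ in range(100):   # a few passes over the training pairs
    for better, worse in pairs:
        # Enforce cost(better) < cost(worse), i.e. w@better < w@worse.
        if w @ better >= w @ worse:
            w += worse - better   # move weights toward the preference
print(w)  # learned trade-offs between the two rules
```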
## 2 Features for Modeling Visualization Design
In this section, we discuss the different approaches to automated
visualization design with respect to the features that they use. In
particular, we discuss building models from basic features (visualization
type, data properties, task), feature engineering, learning design rules, and
learning without feature engineering. Table 1 provides an overview over the
systems discussed here.
In the discussion below, we assume that a machine learning model for
visualization design takes as input a set of features that can include some
specification of the visualization, data, and task and outputs a corresponding
score. This score can represent how preferred a visualization is (based on
effectiveness, how easy it is to read the visualization, etc.). The magnitude of
the score may not be meaningful, but it provides a rank ordering of the
feature vectors (i.e., possible designs). An automated visualization design
system can use any model of this form by enumerating possible designs and
recommending the one with the highest score.
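The toy sketch below illustrates this enumerate-and-score interface; the
candidate space, feature map, and weights are hypothetical placeholders, not
any system’s actual model.

```python
# Enumerate candidate designs, score each with a (placeholder) learned
# model, and recommend the argmax, as described above.
from itertools import product

MARKS = ["bar", "line", "point"]
FIELDS = ["date", "category", "sales"]

def featurize(design):
    # Placeholder features; a real system would also encode data and task.
    return [MARKS.index(design["mark"]), FIELDS.index(design["x"])]

def score(features, weights=(-0.3, 0.2)):
    # Placeholder linear scoring function with made-up weights.
    return sum(w * f for w, f in zip(weights, features))

candidates = [{"mark": m, "x": x, "y": "sales"}
              for m, x in product(MARKS, ["date", "category"])]
best = max(candidates, key=lambda d: score(featurize(d)))
print(best)
```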
### 2.1 Modeling Using Basic Features
The simplest and fastest way to model a score is to use the results of
experimental studies of effectiveness. These models use data such as the type
of visualization, data properties, and task type as features. For example,
Kopol by Saket et al. [23] and Kim et al. [12] are examples of this approach.
They use data such as tasks, visualization types, and data attribute types as
features. VizDeck [11] is another example of a system that uses basic
features. Instead of learning from effectiveness studies, VizDeck [11] uses
users’ up and down votes to change the scoring of each visualization
alternative. In addition to the visualization type, VizDeck’s model uses
statistics of the data as input.
While it is often simple and fast to use the results of experimental studies
to learn models (similar to Kopol [23] and Kim et al. [12]), the design space
of these models is small since they only support a small set of visualization
types and a basic set of features. In practice, visualization designers might
want to consider a larger design space. Going forward, the visualization
community needs to develop models that cover a much broader design space. To
effectively discriminate among designs in a larger design space, models need to use
more expressive features. In the next section, we discuss a method for
learning generalizable models with expressive features for visualization
design.
### 2.2 Using Visualization Design Rules as Features
Another method for automating visualization design is to use the violations of
design rules as features for learning models. A design rule is a predicate
over properties of a visualization, the user’s task, and the data. For
example, “Do not use the same field for the x and the y channel of a
scatterplot” describes the properties scatterplot and the same field on x and
y should not occur together. Assuming a set of design rules, the feature
vector representation of a visualization design records, for each rule, the
number of times the design violates it. For example, a design that does
not violate any design rules can be represented as a feature vector with only
zeros. We can use these feature vectors to learn a model. Since each feature
corresponds to a design rule, the weight of each feature in the learned model
is a measure of the relevance of its corresponding rule. Many of the design
rules that we use today are not always prescriptive. Today’s visualization
designers need to decide what rules they prefer for the specific visualization
they are working on. Machine learning models use statistical models to handle
uncertainty and noise in training data and are thus well equipped to handle
design rules that are not prescriptive. A model that uses design rules as
features learns the trade-offs among competing rules from data and is more
deterministic than human designers when applying them.
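As a minimal sketch of this featurization, the snippet below counts rule
violations and scores a design with a linear model; the rules and weights are
illustrative, not an actual rule base.

```python
# Each design rule is a predicate over a design; the feature vector counts
# violations, and a weighted sum gives the design's cost.
RULES = [
    lambda d: d["mark"] == "point" and d["x"] == d["y"],    # same field on x and y
    lambda d: d["mark"] == "bar" and d["y_zero"] is False,  # bar not starting at zero
]
WEIGHTS = [5.0, 2.0]  # learned trade-offs; higher = costlier to violate

def violation_vector(design):
    return [int(rule(design)) for rule in RULES]

def cost(design):
    return sum(w * v for w, v in zip(WEIGHTS, violation_vector(design)))

design = {"mark": "bar", "x": "month", "y": "sales", "y_zero": False}
print(violation_vector(design), cost(design))  # [0, 1] 2.0
```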
Recently, Moritz et al. proposed Draco [19], a method to derive feature
vectors for a machine learning model from Vega-Lite specifications [24]. This
allows for more sophisticated features to be composed, through feature-
engineering, from the underlying grammar. The main idea in Draco’s learning
approach is to use the violations of _design rules as features_ as outlined
above.
The main advantage of this approach over pure data-driven learning is that the
model can incorporate existing knowledge into the algorithm and thus learn a
model that generalizes from little data. This approach is also a
generalization of learning from basic features (subsection 2.1), as every
feature about the visualization type or the data can be written as a predicate
over the structural representation of the visualization (e.g., in Vega-Lite)
or the data schema. For example, to support tasks, we may just add rules for
each task. However, one of the main challenges with this approach is the
limited availability of design rules encoded in the system. Researchers should
work on systematically enumerating design rules and encode them in a machine
readable form so that they can be used as features in machine learning
systems. In the next section, we discuss how machine learning may support them
in this endeavour.
### 2.3 Learning Features from Data
Systems that use design rules for feature engineering and then learn a linear
scoring function over a vector of violations are limited by these rules.
Design rules relate properties of a visualization, and learning these
relations is known in machine learning as structure learning. For example,
inductive logic programming methods can infer logic programs from specific
instances [21]. However, many algorithms assume no or very little noise.
Design rules, however, are not prescriptive and have exceptions. For example,
the design rule that quantitative scales should start at zero does not make
sense when the data has no meaningful zero such as temperature data. However,
that does not mean that the zero rule should not be used. Markov logic
networks [22] are statistical models that support Bayesian reasoning and thus
can handle this uncertainty well. Their structure can be learned [13]. It
remains an open question whether these learning methods produce reasonable
design rules from the experimental data that is available today. Moreover,
more work is needed to investigate how to design an experiment to collect data
that results in reliable rules. One approach may be to learn rules from data
but have experts confirm them.
The key to learning structured rules is the availability of large high quality
datasets of examples. However, as more data becomes available, we may be able
to skip the feature engineering step and learn an end-to-end model, as we
discuss in the next section.
### 2.4 Learning without Feature Engineering
A recent trend in machine learning is to learn models on all available
features instead of hand-crafting good features. In particular, deep learning
models shift the burden of feature engineering to the system by automatically
learning abstract features and complex relationships that would otherwise be
difficult to capture by humans. This also means models can be more flexible.
Machine learning on the full data has led to impressive results in computer
vision and machine translation among other areas. However, deep learning
models in particular are extremely data hungry often requiring millions of
training examples. The data also needs to cover a large fraction of the design
space. For example, to learn a visualization design system that synthesizes
visualizations not just from templates but a grammar, the model needs to first
learn the grammar. For instance, Data2Vis [8] uses $215\,000$ training
examples to learn a subset of the Vega-Lite grammar and recommendations from
CompassQL. For this reason, this approach is not practical yet and more work
is needed to collect enough high quality training data. A main concern with
the quality of the training data is also whether the data is representative of
the true distribution of visualizations in the wild. Machine learning models
are probabilistic models that rank examples higher if they have seen many
similar examples in their training corpus. Consequently, if the training data
is biased, the model may only recommend a single visualization type.
Understanding how the bias in training data affects neural models is an open
area of research.
## 3 Next Steps in Learning Visualization Models
We view the existing body of work as the first step towards moving beyond
heuristics and learning visualization design from data. Multiple avenues for
future work lie in designing more interpretable machine learning models,
developing scalable methods for collecting different forms of training data,
and designing adaptive and evolving models.
### 3.1 Tooling to Help Designers Understand Models
Machine learning models can remove the manual effort of writing and tweaking
rules by building rules on the fly. Moreover, as more data becomes available,
the machine learning models can be retrained quickly and frequently, thus
improving their accuracy. Despite the advantages of learning
models, potential downsides of incorporating machine learning models include
an extra layer of complexity and diminished transparency [1], both of which
can make it difficult for both developers and end users to understand how the
system works. Unlike cases where designers apply visualization design
guidelines manually, incorporating machine learning models might result in
losing the ability to inspect the applied guidelines and their underlying
rationale.
Going forward, designers incorporating the empirical data to learn models
should provide methods to better investigate the underlying rationale and
convey an understanding of how the trained models will behave in the future.
Such visualization systems should communicate a variety of technical
information, such as the most representative features used in training the
model, the level of correlation among different features, and others. This
would help designers better understand the underlying logic behind rules
extracted by the model. Moreover, we envision visualization systems combining
machine learning models with state-of-the-art human-computer interface
techniques capable of translating models into understandable and useful
explanation dialogues for end users. Such systems should explain to the end
users such things as: why does the learned model recommend a specific set of
visualizations? What factors does the learned model use to prune and rank
visualizations? How do user interactions with the system affect the
recommendations?
### 3.2 Collecting Training Data
In order to learn the design of effective visualizations from user data,
access to high quality data with sufficient variety and context is critical.
Perceptual studies measuring effectiveness as defined in a controlled setting
can provide training data to learn design guidelines. However, current data
from graphical perception studies are stored in different destinations all
over the web (e.g., different repositories, personal webpages, etc.). These
inconsistencies make it harder for researchers and designers to track and
access available datasets. Better organizing data collected by graphical
perception studies can make deriving models easier. Going forward, one
solution might be to create a single data repository where the community can
share data collected from these empirical studies, thus improving the
accessibility of the collected data.
Graphical perception experiments often have specific research questions they
set out to answer. These studies are typically conducted under different
conditions, varying sample sizes, varying datasets, and for a diverse set of
tasks. As such, they may provide useful but inherently partial and incomplete
data to be used as training data for generalizable models. Going forward, we
need large-scale, focused data collection efforts to provide the desired
consistency, size, context, and variation required to train generalizable
models with large learning capacities. Active learning methods can guide the
data collection.
An alternative to collecting new data is to use existing corpora of
visualizations on the web, such as Tableau Public [20], Beagle [4], and the
Chartmaker directory (http://chartmaker.visualisingdata.com/), or figures in
papers from Viziometrics [14]. The design principles learned from this data
would reflect the reality of the kinds of visualizations scientists, data
analysts, and journalists create. However, the data may be confounded by the
tools used in a particular community, as well as network effects.
### 3.3 Adapting and Evolving Models
All systems have only partial knowledge of context and user intent. For
example, the analyst’s goals often depend on seeing a rendered chart with real
data before they realize what needs to be adjusted. As a specific example, an
analyst may decide to use a log scale upon seeing that the spread of data is
very large. Thus, it is crucial that recommendation systems support iterative
refinement of users’ intent. Moreover, individual preferences may mean that
the same model is not optimal for everyone. A core concern in machine learning
for visualization design should be how models can be adapted to the group or
individual using it. As such, we need to develop models that take into account
user feedback during visual data exploration. Ideally, the accuracy of such
models should increase as users interact with the system.
Even a model that adapts to users will never provide perfect recommendations,
as the models are limited. Current models are restricted by the number of
features they use and the data they were trained on. Models need to evolve and
expand as more data becomes available and researchers find new design rules.
However, this requires designers to spend a tremendous amount of time and
effort to combine existing and new data since the data are collected in
different formats. One possible next step towards creating adapting and
evolving models is to find a common language/format and destination to collect
the results of the empirical studies. Ideally, we can develop systems that
incorporate the incoming data and update the model automatically.
## 4 Conclusion
In the past, visualization recommendation systems have incorporated
visualization design guidelines through a manual process of curation and
application. In this paper, we argue it is time to move beyond this laborious
process that is hard to scale. We imagine a future in which these systems are
machine learning models learned from experimental data. To achieve this
future, steps must be taken in engineering robust and adaptable models,
developing tools for interpretability of the created models, and consolidating
data. Once achieved, however, new guidelines and data collected from graphical
perception studies can be applied in practice faster and in an unbiased and
reproducible manner.
### Acknowledgements
We thank the reviewers and Kevin Hu for their feedback.
## References
* [1] A. Abdul, J. Vermeulen, D. Wang, B. Y. Lim, and M. Kankanhalli. Trends and trajectories for explainable, accountable and intelligible systems: An HCI research agenda. CHI ’18, 2018. doi: 10.1145/3173574.3174156
* [2] C. C. Aggarwal et al. Recommender systems. Springer, 2016.
* [3] P. W. Battaglia, J. B. Hamrick, V. Bapst, A. Sanchez-Gonzalez, V. Zambaldi, M. Malinowski, A. Tacchetti, D. Raposo, A. Santoro, R. Faulkner, C. Gulcehre, F. Song, A. Ballard, J. Gilmer, G. Dahl, A. Vaswani, K. Allen, C. Nash, V. Langston, C. Dyer, N. Heess, D. Wierstra, P. Kohli, M. Botvinick, O. Vinyals, Y. Li, and R. Pascanu. Relational inductive biases, deep learning, and graph networks, 2018. arXiv:1806.01261v1.
* [4] L. Battle, P. Duan, Z. Miranda, D. Mukusheva, R. Chang, and M. Stonebraker. Beagle: Automated extraction and interpretation of visualizations from the web. In Proceedings of CHI. ACM, 2018.
* [5] J. Bertin. Semiology of graphics: diagrams, networks, maps. 1983.
* [6] W. S. Cleveland and R. McGill. Graphical perception: Theory, experimentation, and application to the development of graphical methods. Journal of the American Statistical Association, 1984.
* [7] Ç. Demiralp, M. Bernstein, and J. Heer. Perceptual kernels for visualization design. IEEE Vis (Proc. InfoVis), 2014.
* [8] V. Dibia and Ç. Demiralp. Data2Vis: Automatic generation of data visualizations using sequence to sequence recurrent neural networks. CoRR, abs/1804.03126, 2018.
* [9] G. Fechner. Elements of psychophysics. 1966\.
* [10] J. Heer, N. Kong, and M. Agrawala. Sizing the horizon: The effects of chart size and layering on the graphical perception of time series visualizations. In ACM Human Factors in Computing Systems (CHI), 2009.
* [11] A. Key, B. Howe, D. Perry, and C. R. Aragon. Vizdeck: self-organizing dashboards for visual analytics. In Proceedings of the ACM SIGMOD International Conference on Management of Data, SIGMOD 2012, Scottsdale, AZ, USA, May 20-24, 2012. doi: 10.1145/2213836.2213931
* [12] Y. Kim and J. Heer. Assessing effects of task and data distribution on the effectiveness of visual encodings. Proc. EuroVis, 2018.
* [13] S. Kok and P. Domingos. Learning the structure of markov logic networks. In Proceedings of the 22Nd International Conference on Machine Learning, ICML ’05. ACM, New York, NY, USA, 2005.
* [14] P. Lee, J. West, and B. Howe. Viziometrix: A platform for analyzing the visual information in big scholarly data. In BigScholar Workshop (co-located at WWW), 2016.
* [15] Y. Luo, X. Qin, N. Tang, G. Li, and X. Wang. Deepeye: Creating good data visualizations by keyword search. In Proceedings of the 2018 International Conference on Management of Data, SIGMOD ’18, pp. 1733–1736. ACM, New York, NY, USA, 2018.
* [16] J. Mackinlay. Automating the design of graphical presentations of relational information. ACM Transactions on Graphics, 1986.
* [17] J. D. Mackinlay, P. Hanrahan, and C. Stolte. Show Me: Automatic presentation for visual analysis. IEEE Vis (Proc. InfoVis), 13, 2007.
* [18] V. O. Mittal, G. Carenini, J. D. Moore, and S. Roth. Describing complex charts in natural language: A caption generation system. Computational Linguistics, 1998.
* [19] D. Moritz, C. Wang, G. Nelson, H. Lin, A. Smith, B. Howe, and J. Heer. Formalizing visualization design knowledge as constraints: Actionable and extensible models in draco. In IEEE Vis (Proc. InfoVis), 2018.
* [20] K. Morton, M. Balazinska, D. Grossman, R. Kosara, and J. Mackinlay. Public data and visualizations: How are many eyes and tableau public used for collaborative analytics? SIGMOD Rec., 2014.
* [21] J. R. Quinlan. Learning logical definitions from relations. Machine Learning, 1990. doi: 10.1007/BF00117105
* [22] M. Richardson and P. Domingos. Markov logic networks. Machine Learning, 62, 2006. doi: 10.1007/s10994-006-5833-1
* [23] B. Saket, A. Endert, and Ç. Demiralp. Task-based effectiveness of basic visualizations. IEEE Vis (Proc. InfoVis), 2018.
* [24] A. Satyanarayan, D. Moritz, K. Wongsuphasawat, and J. Heer. Vega-Lite: A grammar of interactive graphics. IEEE Vis (Proc. InfoVis), 2017.
* [25] S. S. Stevens. On the psychophysical law. Psychological review, 1957.
* [26] D. A. Szafir. Modeling color difference for visualization design. IEEE Vis (Proc. InfoVis), Jan 2018. doi: 10.1109/TVCG.2017.2744359
* [27] J. Talbot, J. Gerth, and P. Hanrahan. Arc length-based aspect ratio selection. IEEE Trans. Visualization & Comp. Graphics, 2011.
* [28] K. Wongsuphasawat, D. Moritz, A. Anand, J. Mackinlay, B. Howe, and J. Heer. Voyager: Exploratory analysis via faceted browsing of visualization recommendations. IEEE Vis (Proc. InfoVis), 2016.
# Photometric Survey of Neptune’s Trojan Asteroids I: The Color Distribution
Larissa Markwardt Department of Physics, University of Michigan
450 Church Street
Ann Arbor, MI 48109-1107, USA Hsing Wen Lin ( 林省文) Department of Physics,
University of Michigan
450 Church Street
Ann Arbor, MI 48109-1107, USA David Gerdes Department of Astronomy,
University of Michigan
1085 South University Avenue
Ann Arbor, MI 48109-1107, USA Department of Physics, University of Michigan
450 Church Street
Ann Arbor, MI 48109-1107, USA Fred C. Adams Department of Astronomy,
University of Michigan
1085 South University Avenue
Ann Arbor, MI 48109-1107, USA Department of Physics, University of Michigan
450 Church Street
Ann Arbor, MI 48109-1107, USA
###### Abstract
In 2018, Jewitt identified the “Trojan Color Conundrum”, namely that
Neptune’s Trojan asteroids (NTs) had no ultra-red members, unlike the
nearby Kuiper Belt. Since then, numerous ultra-red NTs have been discovered,
seemingly resolving this conundrum (Lin et al., 2019; Bolin et al., 2023).
However, it is still unclear whether or not the Kuiper Belt has a color
distribution consistent with the NT population, as would be expected if it
were the source population. In this work, we present a new photometric survey
of 15 out of 31 NTs. We utilized the Sloan
$g^{\prime}$$r^{\prime}$$i^{\prime}$$z^{\prime}$ filters on the IMACS f/4
instrument which is mounted on the 6.5m Baade telescope. In this survey, we
identify four NTs as being ultra-red using a Principal Component Analysis
(PCA). This result brings the ratio of red to ultra-red NTs to 7.75:1, more
consistent with the corresponding Trans-Neptunian Object (TNO) ratio of
4-11:1. We also identify three targets as being blue (nearly Solar) in color.
Such objects may have C-type surfaces, but we see more of these blue NTs than
has been observed in the Kuiper Belt (Seccull et al., 2018). Finally, we show
that there are hints of a color-absolute magnitude (H) correlation, with
larger H (smaller size, lower albedo) objects tending to be redder, but more
data are needed to confirm this result. The origin of such a correlation remains an
open question which will be addressed by future observations of the surface
composition of these targets and their rotational properties.
Neptune trojans (1097) – Multi-color photometry (1077) – CCD photometry (208)
## 1 Introduction
Trojan asteroids are planetary companions that reside in the asymmetric 1:1
mean-motion resonance of planets; these asteroids librate at the planet-Sun L4
and L5 Lagrange points, meaning that they have the same orbit as the planet
but librate about a point 60∘ ahead of (L4) or behind (L5) the planet.
Numerical simulations show that orbits of Trojan asteroids can be quite
stable, on the order of the age of the Solar System (Ćuk et al., 2012; Gomes &
Nesvorný, 2016; Lykawka et al., 2011). Therefore, the stable members of these
populations are likely relatively undisturbed remnants of our primordial
planetary disk. The physical properties of these populations can thus give us
a window into the early Solar System.
However, Neptune’s Trojan asteroids are not thought to have formed in-situ.
Rather, this population likely grew through capture of planetesimals during
the epoch of planetary migration, during which the outer planets migrated from
the location of their formation to their present day locations (Fernandez &
Ip, 1984; Malhotra, 1993, 1995; Hahn & Malhotra, 1999). Assuming Neptune
migrated significantly in its early evolution, the Lagrange points must have
also migrated with it (Kortenkamp et al., 2004). Therefore, the NT population
can be used to constrain migratory models (Gomes & Nesvorný, 2016; Nesvorný et
al., 2013, 2018; Pike et al., 2017a). Such migration would have occurred in
the first several hundred Myr in the history of the Solar System, so while
these objects may not have formed in-situ, they still are remnants of the very
early Solar System.
Such models show that primordial Jupiter Trojan populations do not survive
this planetary migration, indicating they must have originated from elsewhere
in the Solar System (Roig & Nesvorný, 2015). Similarly, since the dynamics of
planetary migration likely dispersed any primordial NTs as well, from where
did the current population of NTs originate? The most likely source is the
nearby Kuiper Belt. If that were the case, one would expect these two
populations to be similar in size and color (surface composition). Regarding
the color of the KBOs, the bimodality of red ($g-i$ < 1.2) vs. ultra-red
($g-i$ > 1.2) members has been well established (Sheppard, 2010; Schwarz et
al., 2011; Hainaut et al., 2012; Peixinho et al., 2012; Sheppard, 2012;
Lacerda et al., 2014; Peixinho et al., 2015; Pike et al., 2017b; Wong & Brown,
2017; Schwamb et al., 2019). Similarly, the centaur population, small bodies
which orbit between Jupiter and Neptune, are thought to be fed by
planetesimals escaping the NT region (Horner & Lykawka, 2010). These objects
are also red/ultra-red in color (Peixinho et al., 2012, 2015).
Through 2018, no ultra-red NTs had been found, making their color distribution
distinctly different than their expected origins or offshoots. Termed the
“Trojan Color Conundrum”, this tension is not easy to resolve (Jewitt, 2018).
One explanation is that some sort of resurfacing has happened to the NT
population specifically, affecting neither the centaurs nor the KBOs. Jupiter’s
Trojan population is also devoid of ultra-red members which is thought to be
due to thermal resurfacing (Luu & Jewitt, 1996; Jewitt, 2002). However, the
temperatures at the distance of Neptune are too cold for such a scenario to be
valid (Jewitt, 2018). Another potential explanation is collisional
resurfacing, which could strip the ultra-red crust off of the surfaces of
these bodies revealing a bluer surface underneath. One source of such
collisions could be Plutinos, 3:2 resonators with Neptune, which have
significant orbital overlap with the NT population (Almeida et al., 2009).
Such collisions are expected to occur when Plutinos have high libration
amplitudes, high eccentricities, and low inclinations; therefore, we would
expect the color distribution of NTs to be inclination-dependent as well,
where high inclination NTs avoid these collisions and retain their ultra-red
surfaces. Finally, this discrepancy could be due to a primordial boundary
between red/ultra-red bodies that was subsequently mixed by Neptune’s
migration (DeMeo & Carry, 2014; Neveu & Vernazza, 2019). Based on the exact
nature of the epochs of radial mixing, mass removal, and planet migration, the
resulting NT population could be devoid of ultra-red members while the Centaur
population is not (Neveu & Vernazza, 2019), but specific simulations of these
two populations have not been conducted. This hypothesis has been supported by
the discovery of two Trans-Neptunian Object (TNO)-like (red) objects all the
way in the asteroid belt (Hasegawa et al., 2021).
In 2019, the first ultra-red NT, 2013 VX30, was discovered (Lin et al., 2019),
and additional ultra-red NTs have been discovered since then (Bolin et al.,
2023). On the surface, these discoveries seem to resolve the conundrum.
However, the color distribution of NTs still appears distinct from that of
other TNO populations (Bolin et al., 2023). Further observations of the NT
population are needed to determine whether or not these distributions are
truly distinct.
The structure of this paper is as follows: Section 2 describes the design of
our photometric survey. Section 3 outlines our data reduction process. Section
4 presents the results of our survey. Section 5 discusses the meaning of our
results. Section 6 presents the conclusions drawn from these results.
## 2 Survey Design
The goal of this paper is to measure the optical colors of currently known NTs
in order to better understand the physical characteristics of their surfaces.
The targets are listed in Table 1. All of our targets have been previously
observed but not by the same survey. All of our targets, except 2015 VU207,
were already known to be stable for $\sim$Gyr (Lin et al., 2021, 2022).
Following the methods of Lin et al. (2022), we find that 2015 VU207 is also
stable for Gyr in our simulations.
We used the IMACS f/4 instrument on the 6.5m Baade telescope at Las Campanas
Observatory on 4 unique nights to observe this population. IMACS was most
suitable for this task with its optical wavelength coverage ($\sim$400 - 900
nm) and large FOV to account for the positional uncertainty of the targets.
The Sloan $g^{\prime}$$r^{\prime}$$i^{\prime}$$z^{\prime}$ filters were used
for our photometric measurements. In order to account for any variation due to
a target’s rotational period, we observed each target with “bounding”
$r^{\prime}$-band observations (i.e. each observation in a different filter
was preceded and followed by an observation in $r^{\prime}$). We chose
$r^{\prime}$ to be the bounding observations since this filter reaches the
highest SNR in the shortest amount of time. The fast readout mode with 2x2
binning was used.
Table 1: NT targets of this survey. Columns: (1) Object designation;
superscripts indicate previous color measurements from 1: Sheppard (2012), 2:
Parker et al. (2013), 3: Jewitt (2018), 4: Lin et al. (2019), 5: Bolin et al. (2023); (2)
Lagrange Point; (3) Eccentricity; (4) Inclination (∘); (5) Absolute Magnitude;
(6) Dates observed; (7) Measured ave. SDSS r-band magnitude; (8) Measured SDSS
g-r (mag); (9) Measured SDSS r-i (mag); (10) Measured SDSS r-z (mag); (11)
Color classification determined based on the Principal Component Analysis (see
Sec. 4.2).
Name | L4/L5 | e | i | H | Date Observed | Ave. r | $g-r$ | $r-i$ | $r-z$ | Color Class.
---|---|---|---|---|---|---|---|---|---|---
2006 RJ1031,2 | L4 | 0.03 | 8.2 | 7.56 | 113021 | 21.97 | 0.59 $\pm$ 0.045 | 0.16 $\pm$ 0.035 | 0.17 $\pm$ 0.058 | red
| | | | | 120222 | 21.88 | — | — | 0.24 $\pm$ 0.055 | indeterminate
2007 VL3051,2 | L4 | 0.07 | 28.1 | 8.51 | 113021 | 22.60 | 0.60 $\pm$ 0.054 | 0.25 $\pm$ 0.038 | -0.15 $\pm$ 0.109 | red
| | | | | 120222 | 22.60 | — | — | 0.30 $\pm$ 0.047 | indeterminate
2010 TS1913 | L4 | 0.05 | 6.6 | 8.07 | 113021 | 22.39 | 0.61 $\pm$ 0.029 | 0.30 $\pm$ 0.029 | 0.64 $\pm$ 0.078 | red
2011 SO2773 | L4 | 0.01 | 9.6 | 7.76 | 113021 | 22.43 | 0.60 $\pm$ 0.067 | — | —- | indeterminate
| | | | | 120222 | 22.53 | — | 0.57 $\pm$ 0.050 | 0.82 $\pm$ 0.047 | ultra-red
2012 UD1855 | L4 | 0.04 | 28.3 | 7.59 | 113021 | 22.32 | 0.61 $\pm$ 0.033 | 0.37 $\pm$ 0.045 | 0.12 $\pm$ 0.081 | red
2012 UV1775 | L4 | 0.07 | 20.8 | 9.28 | 113021 | 23.76 | 0.71 $\pm$ 0.058 | 0.23 $\pm$ 0.051 | — | red
2013 RL1245 | L4 | 0.03 | 10.1 | 8.83 | 113021 | 23.37 | 0.38 $\pm$ 0.075 | 0.54 $\pm$ 0.086 | 0.67 $\pm$ 0.128 | red
2013 TZ1875 | L4 | 0.07 | 13.1 | 8.19 | 113021 | 23.27 | 0.90 $\pm$ 0.053 | 0.30 $\pm$ 0.057 | — | ultra-red
2013 VX304,5 | L4 | 0.09 | 31.2 | 8.31 | 113021 | 22.60 | 1.01 $\pm$ 0.043 | 0.44 $\pm$ 0.043 | 0.86 $\pm$ 0.049 | ultra-red
| | | | | 091122 | 22.96 | 0.70 $\pm$ 0.104 | 0.47 $\pm$ 0.048 | 0.73 $\pm$ 0.045 | ultra-red
2014 RO745 | L4 | 0.05 | 29.5 | 8.39 | 120222 | 23.34 | 0.65 $\pm$ 0.052 | 0.42 $\pm$ 0.064 | 1.42 $\pm$ 0.069 | ultra-red
2014 SC3745 | L4 | 0.10 | 33.7 | 8.18 | 113021 | 23.24 | 0.43 $\pm$ 0.066 | 0.12 $\pm$ 0.081 | — | blue
2014 YB925 | L4 | 0.10 | 30.8 | 8.62 | 091222 | 23.41 | 0.46 $\pm$ 0.187 | 0.07 $\pm$ 0.100 | 0.36 $\pm$ 0.090 | blue
2015 VU2075 | L4 | 0.03 | 38.9 | 7.28 | 080922 | 22.23 | 0.31 $\pm$ 0.034 | 0.24 $\pm$ 0.031 | 0.40 $\pm$ 0.024 | red
| | | | | 091122 | 22.10 | 0.47 $\pm$ 0.052 | 0.09 $\pm$ 0.068 | 0.35 $\pm$ 0.028 | blue
2015 VV1655 | L4 | 0.09 | 16.8 | 9.02 | 113021 | 23.32 | 0.87 $\pm$ 0.049 | 0.32 $\pm$ 0.055 | — | ultra-red
2015 VW1655 | L4 | 0.05 | 5.0 | 8.39 | 113021 | 22.89 | 0.45 $\pm$ 0.032 | 0.36 $\pm$ 0.048 | — | red
| | | | | 120222 | 22.93 | — | — | 0.61 $\pm$ 0.060 | indeterminate
## 3 Photometric Reduction
### 3.1 Calibration
To calibrate the photometry of our IMACS observations, we cross-matched the
in-frame background stars against PS1 sources (Magnier et al., 2013). We first
converted the PS1 griz photometry to the SDSS system using the transformation
equations in Tonry et al. (2012), and then selected the sources with $g-r$
between 0.25 and 2.0 and $r-i$ between 0.0 and 0.8 as the reference sources. By
solving the equation below using the apparent magnitude of the reference
sources, we determined the photometric zeropoint of each frame:
$m_{sdss}=m_{ins}+2.5\log_{10}(\tau_{exp})+m_{0},$ (1)
where $m_{sdss}$ is the apparent magnitude of a specific band of the cross-
matched reference sources, $m_{ins}$ is the instrumental magnitude of that
specific band measured from the IMACS image, $\tau_{exp}$ is the exposure
time, and $m_{0}$ is the photometric zeropoint of that frame.
After we determined the zeropoints of each frame, we used every cross-matched
star in every frame to evaluate the linear color conversions between the IMACS
and SDSS photometric systems by solving the following equation:
$m_{M}=m_{sdss}+a~{}(g-r)_{sdss}+b,$ (2)
where $m_{M}$ and $m_{sdss}$ are the IMACS and SDSS magnitude, respectively,
and a, b are the coefficients of the linear conversion. The results are:
$\begin{split}g_{M}=g_{sdss}-0.078(g-r)_{sdss}+0.069\\\
r_{M}=r_{sdss}-0.024(g-r)_{sdss}+0.024\\\
r_{M}=r_{sdss}-0.038(r-i)_{sdss}+0.015\\\
i_{M}=i_{sdss}-0.188(r-i)_{sdss}+0.134\\\
z_{M}=z_{sdss}-0.026(g-r)_{sdss}+0.031\end{split}$ (3)
With the photometric zeropoints and the color conversion equations, we are
able to measure the griz colors of targets in the SDSS photometric system.
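A minimal sketch of this calibration is given below, with synthetic
reference-star values standing in for the cross-matched PS1 sources.

```python
# Eq. (1): per-frame zeropoint; Eq. (2): linear color term. All magnitudes
# below are synthetic placeholders, not survey measurements.
import numpy as np

tau_exp = 120.0                           # exposure time in seconds
m_ins  = np.array([-9.8, -10.5, -11.1])   # instrumental magnitudes
m_sdss = np.array([18.2, 17.5, 16.9])     # PS1-derived SDSS magnitudes

# Eq. (1): zeropoint of the frame, estimated here by a robust median.
m0 = np.median(m_sdss - m_ins - 2.5 * np.log10(tau_exp))

# Eq. (2): linear color term, fit by least squares over matched stars.
g_r = np.array([0.4, 0.9, 1.3])           # SDSS g-r of the same stars
m_M = np.array([18.17, 17.44, 16.82])     # IMACS-system magnitudes
A = np.column_stack([g_r, np.ones_like(g_r)])
(a, b), *_ = np.linalg.lstsq(A, m_M - m_sdss, rcond=None)
print(m0, a, b)
```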
### 3.2 PSF Modeling
To accurately measure the flux and apparent magnitude of NTs, we select stars
around the target NT to model the local PSF. Several popular analytical
functions are considered for modeling the PSF, such as the Moffat function
(Moffat, 1969) and the sum of 2D Gaussians (Bendinelli et al., 1990). Both
functions can adequately model the “wing” of the PSF. However, since our PSF
can be asymmetric (not round; see Figure 1), we model the PSF as a
superposition of n asymmetric 2D Gaussians. The flux of the PSF at any point
in the $(x^{\prime},y^{\prime})$ orthogonal coordinate system is:
$PSF(x^{\prime},y^{\prime})=b(x^{\prime},y^{\prime})+\sum_{i=1}^{n}\mathrm{A_{i}}~{}\Big{(}\texttt{exp}\big{[}-(\frac{x^{\prime
2}}{2\sigma^{2}_{x^{\prime}i}}+\frac{y^{\prime
2}}{2\sigma^{2}_{y^{\prime}i}})\big{]}\big{)},$ (4)
where $b(x^{\prime},y^{\prime})$ is the background flux at that point, n is a
small number, Ai is the amplitude of individual Gaussian,
$\sigma_{x^{\prime}i}$ and $\sigma_{y^{\prime}i}$ are the widths on
$x^{\prime}$ and $y^{\prime}$ axes of individual Gaussian, respectively. This
equation can be rotated to the image reference frame $(x,y)$ with a position
angle $\theta$ and translating the centroid to $(x_{0},y_{0})$ such that
$\begin{pmatrix}x^{\prime}\\\
y^{\prime}\end{pmatrix}=\begin{bmatrix}\cos\theta&-\sin\theta\\\
\sin\theta&\cos\theta\end{bmatrix}\begin{pmatrix}x-x_{0}\\\
y-y_{0}\end{pmatrix}.$
Therefore, the Gaussian functions share the same center, position angle, and
ellipticity but have unequal contributions and different widths. To properly
choose n, the number of Gaussians to use, we calculate the Bayesian
information criterion (BIC) for each candidate n. The BIC is defined as:
$\mathrm{BIC}=-2\>\mathrm{ln}(\hat{\mathcal{L}})+k\>\mathrm{ln(m)},$ (5)
where $\hat{\mathcal{L}}$ is the maximum likelihood of the model, $k$ is the
number of parameters estimated by the model, and m is the number of data
points used to fit the model. Models with lower BIC values are generally
preferred, and the criterion automatically penalizes models with larger $k$. Since the
multiple-Gaussian PSF model can be linearized by taking the logarithm, and the
errors can be assumed normally distributed, maximizing $\hat{\mathcal{L}}$ is
equivalent to least-squares estimation. Thus, the BIC can be written as a
function of the error variance $\hat{\sigma_{e}^{2}}$:
$\mathrm{BIC}=m\>\mathrm{ln}(\hat{\sigma_{e}^{2}})+k\>\mathrm{ln(m)},$ (6)
In other words, the model with lower residual and fewer parameters is
preferred. We find that the model with n = 1, a single 2D Gaussian, always has
the highest BIC. On the other hand, the models with n = 2 and n = 3 generally
have similar BICs; we therefore conclude that any model with n $>$ 3 is
redundant.
Finally, we use the PSF model with n = 2 or 3, depending on which one has the
lower BIC. Once all of the parameters are measured by modeling the stars, the
target NT can be modeled by refitting the center and amplitude of the PSF. The
flux is the sum of the final model. Figure 1 demonstrates that both the star
and the NT can be properly subtracted by the PSF model.
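A minimal sketch of the BIC-based model selection in Eq. (6) follows; the
residual variances and parameter counts are illustrative, not fitted values.

```python
# Compare PSF fits with different numbers of Gaussian components using the
# residual-variance form of the BIC, Eq. (6). Inputs are hypothetical.
import numpy as np

m = 900  # number of fitted pixels
residuals = {1: 2.4e-3, 2: 1.1e-3, 3: 1.0e-3}   # error variance per model
n_params  = {1: 6, 2: 9, 3: 12}                  # illustrative parameter counts

def bic(sigma2, k, m):
    return m * np.log(sigma2) + k * np.log(m)

best_n = min(residuals, key=lambda n: bic(residuals[n], n_params[n], m))
print(best_n)  # n = 2 or 3 in practice; n = 1 is always disfavored
```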
Figure 1: PSF modeling and subtraction. Top-left: a star with the PSF model
contour. Bottom-left: the image of the NT. Middle: the model of the star (top)
and the NT (bottom). Right: the images after subtraction of the model.
### 3.3 Rotation Curve Correction
The observed magnitudes, and the resulting colors we are trying to measure,
are subject to rotational variations on the surface of these objects. To
approximately account for this, we use a model that exhibits a linear
variation in source brightness (its $r^{\prime}$-band magnitudes) and constant
$g^{\prime}-r^{\prime}$, $r^{\prime}-i^{\prime}$, $r^{\prime}-z^{\prime}$
colors (to convert each measurement to an $r^{\prime}$-band magnitude). This
model was then fit using a least-squares approach (see Fig. 2). The resulting
colors have been converted to SDSS magnitudes ($griz$; see Eq. 3), which are
reported in Table 1.
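One way to set up this simultaneous fit is as a single linear least-squares problem; the sketch below is ours and assumes the simplest parameterization, a linear-in-time $r^{\prime}$ lightcurve plus one constant color offset per band:

```python
import numpy as np

def fit_rotation_and_colors(t, mag, band):
    """t: times; mag: observed magnitudes; band: integer array, 0=g', 1=r', 2=i', 3=z'.

    Model: m_obs = r0 + v*t + c_band, where c_band is the constant color
    offset converting each band to r' (c_r' = 0 by construction).
    Returns [r0, v, g'-r', i'-r', z'-r'].
    """
    A = np.zeros((t.size, 5))
    A[:, 0] = 1.0          # mean r'-band magnitude r0
    A[:, 1] = t            # linear rotational brightness variation
    A[band == 0, 2] = 1.0  # g' - r'
    A[band == 2, 3] = 1.0  # i' - r'
    A[band == 3, 4] = 1.0  # z' - r'
    params, *_ = np.linalg.lstsq(A, mag, rcond=None)
    return params
```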
### 3.4 Reddening Line
Following Hainaut & Delsanti (2002), the reddening, or spectral index, can be expressed as the percent of reddening per 100 nm:
$S(\lambda_{1},\lambda_{2})=100\,\frac{R(\lambda_{2})-R(\lambda_{1})}{(\lambda_{2}-\lambda_{1})/100},$ (7)
where $R(\lambda)$ is taken from Jewitt & Meech (1986):
$R(\lambda)=10^{-0.4(m(\lambda)-m_{\odot}(\lambda))}$ (8)
such that $m(\lambda)$ and $m_{\odot}(\lambda)$ are the magnitude of the
object and the Sun, respectively, at a particular wavelength, $\lambda$.
Setting the reddening line to pass through the color of the Sun (i.e., $m(\lambda_{1})-m(\lambda_{2})=m_{\odot}(\lambda_{1})-m_{\odot}(\lambda_{2})$ for $S(\lambda_{1},\lambda_{2})=0$), and assuming $m(\lambda_{1})=m_{\odot}(\lambda_{1})$, we can derive the following equation:
$m(\lambda_{2})=-2.5\log[1-10^{-4}S(\lambda_{1},\lambda_{2})(\lambda_{1}-\lambda_{2})]+m_{\odot}(\lambda_{2})$ (9)
Assuming $S(\lambda_{1},\lambda_{2})$ varies from -10% to 80%, we can plot the reddening line for $g-r$ vs $r-i$ and $g-r$ vs $r-z$ in Fig. 3 and Fig. 4, respectively. Note that our targets generally fall along the reddening line, as has previously been observed for small bodies in the outer Solar System (Hainaut & Delsanti, 2002). Objects that fall above/below the reddening line must exhibit emission/absorption features at those particular wavelengths, causing them to deviate from a flat spectral index.
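For reference, the reddening line itself is straightforward to evaluate numerically. The sketch below folds Eq. (9) into a single color; the effective wavelengths used in the usage comment (477 and 623 nm for g and r) are approximate values we assume for illustration:

```python
import numpy as np

def reddening_color(S, lam1, lam2, sun_color):
    """Color m(lam1) - m(lam2) along the reddening line.

    S: spectral slope in % per 100 nm; lam1 < lam2 in nm;
    sun_color: the corresponding solar color m_sun(lam1) - m_sun(lam2).
    """
    # Eq. (9) with m(lam1) = m_sun(lam1), rearranged into a color.
    return sun_color + 2.5 * np.log10(1.0 - 1e-4 * S * (lam1 - lam2))

# Trace the line over the slopes considered in the text:
# S = np.linspace(-10.0, 80.0, 200)
# g_minus_r = reddening_color(S, 477.0, 623.0, gr_sun)  # gr_sun: solar g-r
```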
Figure 2: This figure shows our least-squares approach to fitting $r^{\prime}$-band lightcurves and colors for an example NT target, 2013 VX30. Each observation is shown as a colored point (blue downward triangle for $g^{\prime}$, green square for $r^{\prime}$, yellow diamond for $i^{\prime}$, and orange sideways triangle for $z^{\prime}$). We then used a constant, but free, color term for each band to convert each observation to an $r^{\prime}$-band observation; these points are shown as black circles. The solid line is our least-squares fit to the $r^{\prime}$-band (black) points. The dotted lines show the 1$\sigma$ deviation from this fit.
## 4 Results
### 4.1 Color-Color Results
In Fig. 3, we show the $g-r$ and $r-i$ colors measured for our NT targets. Similar to the scattered TNOs, our targets exhibit a wide range in this color space; while most targets fall within the “red” zone (principal component <1.75; see Sec. 4.3), there are three firm and one potential NT in the “ultra-red” zone (principal component >1.75). Of these objects, we identified two new ultra-red NTs, 2013 TZ187 and 2015 VV165, which were also independently found and reported in Bolin et al. (2023). The potential “ultra-red” NT, 2011 SO277, has varying results from different observations (Jewitt (2018), Lin et al. (2019), and this work); see more discussion of this object in Sec. 4.5.
With the additional ultra-red NTs, the red to ultra-red ratio is 3.75:1 for our sample, or 7.75:1 for the entire known population. This ratio is much more consistent with the dynamically excited KBO ratio of between 4-11 : 1 (Schwamb et al., 2019). However, comparing these ratios alone is not sufficient to determine whether the NT and KBO populations come from the same source distribution (see Sec. 4.3). We also show the kernel density estimates (KDEs) of the g-r and r-i colors in Fig. 3. Unlike the results from previous works, which claimed that the NTs and JTs have very similar color distributions, our new results show that the KDEs of the NTs are closer to the KDEs of the scattered TNOs. Further analysis is presented in Sec. 4.3.
Figure 3: Measured $g-r$ vs $r-i$ of the NT population. Blue points are colors of scattered TNOs and orange triangles are JTs, both taken from the literature (Hainaut et al., 2012). Light blue x’s are previously observed colors of NTs on which the “Trojan Color Conundrum” was based (Sheppard & Trujillo, 2006; Sheppard, 2012; Parker et al., 2013; Jewitt, 2018), while the blue plus signs are more recently observed NT colors which bring this conundrum into question (Lin et al., 2019; Bolin et al., 2023). Targets observed in this paper are shown as green squares. The solar color and the reddening line (see Sec. 3.4) are depicted as a yellow star and an orange dotted line, respectively. Objects that have multiple observations in this paper are connected by a dot-dashed line. NTs that have been previously observed in the literature are connected by a dashed line. The yellow line marks values where the PCA yields values equal to our cutoff of 1.75 (see Fig. 6 and Sec. 4.3); objects in the yellow region are above this cutoff and considered ultra-red in this paper. The blue line marks values where the PCA yields values equal to our cutoff of -1.25 (see Fig. 6 and Sec. 4.3); objects in the blue region are below this cutoff and considered blue in this paper. The top and right inset plots show the kernel density estimates (KDEs) of the g-r and r-i distributions, respectively, of the included sub-populations.
In Fig. 4, we show the $g-r$ and $r-z$ colors measured for our NT targets. All of our targets are consistent with the scattered/hot TNO populations. This result is expected, as NTs are thought to have originated from scattered/unstable TNOs. The physical cause of the $z$-band colorization of the cold TNO population is not currently clear, but, based on the displacement from the reddening line, it must be due to some absorption feature around 900 nm. Spectroscopic information, such as will be obtained with JWST (Markwardt et al., 2021), will shed further light on the chemical links between these populations.
Figure 4: Measured $g-r$ vs $r-z$ of the NT population. Navy upward triangles, green downward triangles, and blue circles are measurements of TNOs (scattered, cold, and hot, respectively) taken from the literature (Schwamb et al., 2019). Teal plus signs are colors of NTs taken from the literature (Lin et al., 2019). Targets observed in this paper are shown as orange squares. The solar color and the reddening line (see Sec. 3.4) are depicted as a yellow star and an orange dotted line, respectively. Objects that have observations taken in this paper and from the literature are connected with a dashed line. Objects that have multiple observations in this paper are connected by a dot-dashed line. The green ellipse demarcates the region of color-color space occupied only by cold TNOs. The top and right inset plots show the kernel density estimates (KDEs) of the g-r and r-z distributions, respectively, of the included sub-populations.
### 4.2 Comparison to Previous Observations
All of the targets in our sample have previous observations (though not all from the same survey). Therefore, we compare the differences between our measurements and those from the literature to our computed errors, shown in Fig. 5, to determine whether there is any systematic offset in our observations. We find that the observed differences in g-r are mostly within our observational errors, meaning our observations are roughly consistent with the previous literature. However, in r-i the previous observations are split between being slightly systematically higher and systematically lower than our measurements. Further investigation indicated that the higher group has a smaller offset, on the order of 0.05, while the lower group has a larger offset, of about -0.15. We also find an instrument dependency between the groups: the smaller-offset samples were mostly measured with Gemini and the Dark Energy Survey, which both have proper photometric transformation equations to the SDSS system. On the other hand, the larger-offset samples were mostly measured using the R and I filters or without proper photometric transformation equations. It is therefore likely that the differing photometric systems contribute most of these systematic offsets. In any case, this did not appreciably change the results of the following Principal Component Analysis (PCA), since the g-r axis is the dominant element of our Principal Component.
Figure 5: The differences in observed color between NTs in this paper and the literature, compared to the average error on our observations. The differences in the g-r, r-i, and r-z observations are shown as blue, orange, and green histograms, respectively. The average g-r, r-i, and r-z errors are shown as blue dotted, green dot-dashed, and orange dashed lines, respectively.
### 4.3 Comparison to Other Populations
The ultimate goal of this work is to determine how similar the NT colors are to those of other populations in the Solar System. A simple statistical test of the likelihood that two samples are drawn from the same underlying distribution is the Kolmogorov-Smirnov (KS) test (Darling, 1957). Although the KS test can be generalized to more than one dimension, the interpretation becomes complicated. For simplicity, we reduce the dimension of our data and use the traditional statistical test. Specifically, we performed a Principal Component Analysis (PCA) of our data using the scikit-learn python package (Pedregosa et al., 2011). Fig. 6 demonstrates that the PCA successfully reduces the g-r vs. r-i color-color plot to a 1-D parameter that still distinguishes between the red and ultra-red populations of TNOs and the whole JT population (which is comprised of only red objects). The principal component value (PC1) which separates these populations is 1.75 (shown as a dotted line in Fig. 6). We use this definition to classify our NT targets as red or ultra-red; the corresponding region in g-r vs r-i space is shown in Fig. 3 as a yellow shaded region. We then applied this PCA model to other populations in the Solar System, including JTs and previous observations of NTs; the results are shown in Fig. 7. By eye, the JT population is clearly unique in that it is nearly devoid of ultra-red members (i.e. targets with PC1 >1.75). Also of note, about 25% of the NT targets presented in this paper occupy a unique region of PC1 $\sim-1$. This region corresponds to blue objects that are not frequently present in the outer Solar System populations (see Sec. 4.5 for a more in-depth discussion of these objects).
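A minimal scikit-learn sketch of this dimensionality reduction is given below; the placeholder arrays stand in for the literature colors and our measurements, so only the mechanics (fit on the reference sample, project the NTs, apply the cutoffs) should be read literally:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Placeholders for the (g-r, r-i) colors of the reference sample (Hainaut
# et al. 2012) and of our NT targets (Table 1).
ref_colors = rng.normal([0.7, 0.3], 0.15, size=(100, 2))
nt_colors = rng.normal([0.6, 0.25], 0.10, size=(15, 2))

pca = PCA(n_components=1).fit(ref_colors)  # define PC1 on the reference sample
pc1_nt = pca.transform(nt_colors).ravel()  # project the NTs onto PC1

# Cutoffs adopted in this paper (Fig. 6); note that the numerical PC1 scale
# depends on the training sample and sign convention.
print((pc1_nt > 1.75).sum(), "ultra-red;", (pc1_nt < -1.25).sum(), "blue")
```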
Figure 6: The results of running a Principal Component Analysis (PCA) with the g-r and r-i colors of certain Solar System populations. The green histogram corresponds to the JTs (taken from Hainaut et al. (2012)). The blue and orange histograms correspond to the red and ultra-red subpopulations of the scattered TNOs, also taken from Hainaut et al. (2012); the classification of red vs ultra-red was determined using a clustering algorithm (DBSCAN; Pedregosa et al. (2011)), which separated the TNOs into two sub-populations.
Figure 7: Cumulative distributions of the Principal Component (see Sec. 4.3) values of populations in the Solar System. The cut-off between red and ultra-red as defined by this PCA is shown as a black dashed line (see Fig. 6). The cut-off between red and blue objects is similarly shown as a dot-dashed line. The JT and scattered TNO results are shown as orange and navy histograms, respectively. The NT observations from previous literature are shown as a blue histogram. The NT observations from this work are shown as a green histogram.
KS Test P-value | NTs (This Work) | NTs (Pre-2019) | NTs (Post-2019) | TNOs | JTs
---|---|---|---|---|---
NTs (This Work) | 1 | 0.020 | 0.61 | 0.56 | 0.003
NTs (Pre-2019) | 0.020 | 1 | 0.15 | 0.03 | 0.27
NTs (Post-2019) | 0.61 | 0.15 | 1 | 0.14 | 0.05
TNOs | 0.56 | 0.03 | 0.14 | 1 | 0.0002
JTs | 0.003 | 0.27 | 0.05 | 0.0002 | 1
Table 2: The resulting p-values of the KS test on each combination of sub-populations considered in this work.
We then ran a KS test on each pair of these Solar System populations to determine the likelihood that they came from the same underlying distribution; the results of these tests are recorded in Table 2. We conclude that two compared populations are drawn from different distributions if they have a p-value of $\leq$ 0.05, corresponding to a 95% confidence level for rejecting the null hypothesis. We therefore find that the population observed in this work is not consistent with being drawn from the same distribution as the JTs, but is instead more consistent with the TNO population. This result is the opposite of what was found pre-2019, when the NTs were more consistent with the JT population. The results from post-2019 data also show that the NT population is more consistent with the TNO population, but this work strengthens that result significantly. Further observations of members of the NT population could increase the statistical significance of this result even further. Nevertheless, based on their optical colors, we can claim with the greatest confidence to date that the NTs and TNOs are consistent with being drawn from the same underlying distribution.
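For completeness, one way to run such a two-sample comparison, e.g. with scipy, is shown below; the arrays are placeholders for the PC1 samples behind Fig. 7:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
pc1_nts = rng.normal(0.5, 1.0, size=15)    # placeholder: NTs (this work)
pc1_jts = rng.normal(-0.5, 0.4, size=100)  # placeholder: Jovian Trojans

stat, p_value = ks_2samp(pc1_nts, pc1_jts)
# As in Table 2: p <= 0.05 rejects a common parent distribution at the
# 95% confidence level.
print(f"KS statistic = {stat:.3f}, p = {p_value:.3g}")
```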
### 4.4 Color-Absolute Magnitude Relations
In Fig. 8, we plot the Principal Component for our targets as a function of
absolute magnitude (H). We look for any significant clustering or correlations
in these plots which would indicate that the color classification of NTs is
dependent on their size.
To search for clustering in our datasets, we run a Mean Shift clustering algorithm (Pedregosa et al., 2011), which does not require the number of clusters as an input parameter (just a bandwidth, which can be initialized with the estimate_bandwidth function). To test the significance of the clustering, we calculate the Cluster Index. The Cluster Index from the SigClust evaluation tool is defined as (Ferland et al., 2013):
$CI=\frac{\sum_{k=1}^{N}\sum_{i\in C_{k}}\parallel\boldsymbol{x_{i}}-\boldsymbol{\bar{x}}^{(k)}\parallel^{2}}{\sum_{i=1}^{n}\parallel\boldsymbol{x_{i}}-\boldsymbol{\bar{x}}\parallel^{2}}$ (10)
where $\boldsymbol{\bar{x}}^{(k)}$ represents the mean of the $k$th cluster, for k = 1, 2, …, N clusters, and $\boldsymbol{\bar{x}}$ represents the overall mean. The CI provides a p-value for the significance of the separation between two clusters. To test whether our data are correlated, we used the Pearson correlation coefficient (Kirch, 2008), which is defined as:
$r=\frac{\sum(x_{i}-\bar{x})(y_{i}-\bar{y})}{\sqrt{\sum(x_{i}-\bar{x})^{2}\sum(y_{i}-\bar{y})^{2}}}$
(11)
where $x_{i}$ and $y_{i}$ are the data points and $\bar{x}$ and $\bar{y}$ are the respective means. We calculated each of these statistics for all of the plots shown in Fig. 8. To determine whether these values could be obtained from random noise, we generated 1000 sets of points, each with the same number of objects as our observations and drawn from the same region of Principal Component vs. H space, and ran the same analysis on those sets. These results are shown in the inset histograms in Fig. 8.
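A sketch of this Monte Carlo procedure, with placeholder data standing in for the (H, PC1) measurements, might look as follows; the uniform-draw region, the sample size, and the reuse of a single bandwidth are illustrative assumptions on our part:

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.cluster import MeanShift, estimate_bandwidth

def cluster_index(X, labels):
    # Eq. (10): within-cluster scatter divided by total scatter.
    total = np.sum((X - X.mean(axis=0)) ** 2)
    within = sum(np.sum((X[labels == k] - X[labels == k].mean(axis=0)) ** 2)
                 for k in np.unique(labels))
    return within / total

rng = np.random.default_rng(2)
def random_set(n=20):
    # Uniform draws over an assumed (H, PC1) region, as described in the text.
    return np.column_stack([rng.uniform(6, 10, n), rng.uniform(-2, 3, n)])

obs = random_set()  # placeholder for the observed (H, PC1) points
bw = estimate_bandwidth(obs)
ci_obs = cluster_index(obs, MeanShift(bandwidth=bw).fit_predict(obs))
r_obs, _ = pearsonr(obs[:, 0], obs[:, 1])

# Null distributions from 1000 random sets of the same size and region.
null_ci, null_r = [], []
for _ in range(1000):
    fake = random_set()
    null_ci.append(cluster_index(fake, MeanShift(bandwidth=bw).fit_predict(fake)))
    null_r.append(pearsonr(fake[:, 0], fake[:, 1])[0])
```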
We find that the cluster is consistent with random noise and should not be considered significant. This result also suggests that the colors of NTs are distributed continuously from blue to ultra-red rather than bimodally. The positive correlation with size is intriguing and may point to primordial differences between objects of different sizes in the outer Solar System. However, H is not a direct proxy for size, as the object’s albedo must be taken into account. Such albedo measurements do not currently exist for the NT population and will be necessary to establish a color-size correlation. Indeed, photometric observations of the rest of the NT population are needed to confirm this slight correlation.
Figure 8: NT colors as a function of absolute magnitude. Grey points are taken from the literature (Sheppard & Trujillo, 2006; Sheppard, 2012; Parker et al., 2013; Jewitt, 2018; Lin et al., 2019; Schwamb et al., 2019). Colored squares were measured in this paper. Duplicate observations of the same object are connected by dashed lines. The inset plots contain histograms of the Cluster Indices and Pearson correlation coefficients of random distributions of colors and absolute magnitudes (see Sec. 4.4). Each grey dashed line in the inset plots shows the corresponding value calculated for the observed distribution.
### 4.5 Unique Targets
While most of our targets are consistent with previous color measurements, one object, 2011 SO277, is classified here as ultra-red, while its previous observations place it firmly within the red zone. Based on our other observations, we consider our results to be roughly consistent with the previous literature (see Fig. 5), so this result is indeed unexpected. One explanation for why this object has such different colors in independent observations is that its surface is not homogeneous. To test this hypothesis, a more in-depth study of the rotational properties of the surface of this object is necessary, which will be forthcoming in our next work on the lightcurves of NTs.
Three of our targets, 2014 SC375, 2014 YB92, and 2015 VU207, are much bluer, nearly solar in color, than the other NTs or KBOs. Bolin et al. (2023) also reported that 2014 YB92 and 2015 VU207 have blue, near-solar colors. In fact, these objects are as blue as the blue B/C-type asteroids, such as 3200 Phaethon (Tabeshian et al., 2019; Lisse & Steckloff, 2022). A similarly blue TNO has been observed, which appears to be covered in ferric oxides and phyllosilicates (Seccull et al., 2018). This TNO has a highly eccentric and inclined orbit, suggesting it may have a common origin with C-type asteroids and has since been implanted into trans-Neptunian space. It is possible that these NTs originated elsewhere in the Solar System, but their current orbits are stable for > Gyrs (see Sec. 2), implying that they were captured just after Neptune’s migration. However, based on these results, the blue ratio for NTs is currently much higher than that of the TNO population. This may suggest that inner Solar System material is more efficiently transferred to NT orbits, which have smaller perihelia than the Kuiper Belt. Future spectral observations will be necessary to reveal any compositional differences these targets may have compared to the rest of the NT population.
## 5 Why were the ultra-red NTs rare before 2019?
Prior to 2019, ultra-red NTs were very rare; none of the 13 NTs in the Jewitt (2018) sample are ultra-red, which led to the claim of a “Trojan Color Conundrum”. Here we propose two possibilities to explain this inconsistency:
1. Small number statistics: Small number statistics alone could generate such a surprising result. If we assume a 7.75:1 apparent red to ultra-red ratio of NTs, the chance of randomly selecting 13 objects without picking up any ultra-red one is about 18%, which is quite plausible. If we instead use a 3.75:1 apparent red to ultra-red ratio, the chance drops to 0.5%. While this is not impossible, we should also consider alternative explanations.
2. Selection effect: Since bigger objects are easier to detect and to obtain color measurements for, the 13 objects in Jewitt (2018) tend to be large; 10 of the 13 have H $\leqslant 8$. Moreover, many NTs have been discovered by deeper (Lin et al., 2021) or wider (Bernardinelli et al., 2022, 2020; Lin et al., 2019) surveys since 2018, which included many high-inclination objects. Thus the Jewitt (2018) sample appears to be biased toward larger and lower-inclination objects. In fact, 8 of the 13 NTs in the Jewitt (2018) sample have orbital inclination $<10^{\circ}$, while 9 of the 31 currently known NTs have inclination $<10^{\circ}$, meaning that 8 of the 9 known low-inclination NTs were included in the Jewitt (2018) sample. Such objects have very similar red colors (see Figure 8). Therefore, a possible color-orbit-size correlation in the NT population could at least partially explain why the “Trojan Color Conundrum” was observed, especially given the selection biases in that sample.
## 6 Conclusions
In this paper, we measure the griz colors for 15 of the 24 known NTs. We used the IMACS f/4 instrument on the 6.5m Baade telescope with Sloan g’r’i’z’ filters to conduct our photometric survey. We confirm that 2013 VX30 is ultra-red in color, and identify a total of three NTs as ultra-red. This result brings the red to ultra-red ratio of NTs to 7.75:1, much more consistent with the corresponding TNO ratio, thereby resolving the “Trojan Color Conundrum”. Moreover, the color distribution of NTs is now indistinguishable from that of the scattered population of TNOs and different from that of the Jovian Trojans. We also find three targets which have solar colors, the origin of which is unclear; the most likely explanation is that these objects originated in the inner Solar System. For the entire NT population, we find that the colors of NTs may be correlated with their absolute magnitude, with larger-H objects tending to have redder colors. The explanation for this correlation remains an open question that is difficult to address with current data. More discoveries of NTs (especially around L5) are clearly needed. The L5 point has historically been difficult to study due to its overlap with the galactic plane, but the NT L5 region is moving away from this high-stellar-density region, making now the perfect time to start studying this population. The true degree of asymmetry between the L4 and L5 clouds will be important for distinguishing between different formation scenarios for the NT population. Moreover, our ongoing work to directly measure the rotational periods and compositions of these small bodies will be vital to understanding the true origin of the NT population.
This paper includes data gathered with the 6.5 meter Magellan Telescopes
located at Las Campanas Observatory, Chile. This material is based upon work
supported by the National Aeronautics and Space Administration under grant No.
NNX17AF21G issued through the SSO Planetary Astronomy Program and by the
National Science Foundation under grant No. AST-2009096. This research was
supported in part through computational resources and services provided by
Advanced Research Computing at the University of Michigan, Ann Arbor.
## References
* Almeida et al. (2009) Almeida, A. J. C., Peixinho, N., & Correia, A. C. M. 2009, A&A, 508, 1021, doi: 10.1051/0004-6361/200911943
* Bendinelli et al. (1990) Bendinelli, O., Parmeggiani, G., Zavatti, F., & Djorgovski, S. 1990, AJ, 99, 774, doi: 10.1086/115373
* Bernardinelli et al. (2020) Bernardinelli, P. H., Bernstein, G. M., Sako, M., et al. 2020, ApJS, 247, 32, doi: 10.3847/1538-4365/ab6bd8
* Bernardinelli et al. (2022) —. 2022, ApJS, 258, 41, doi: 10.3847/1538-4365/ac3914
* Bolin et al. (2023) Bolin, B. T., Fremling, C., Morbidelli, A., et al. 2023, MNRAS, 521, L29, doi: 10.1093/mnrasl/slad018
* Ćuk et al. (2012) Ćuk, M., Hamilton, D. P., & Holman, M. J. 2012, MNRAS, 426, 3051, doi: 10.1111/j.1365-2966.2012.21964.x
* Darling (1957) Darling, D. A. 1957, The Annals of Mathematical Statistics, 28, 823. http://www.jstor.org/stable/2237048
* DeMeo & Carry (2014) DeMeo, F. E., & Carry, B. 2014, Nature, 505, 629, doi: 10.1038/nature12908
* Ferland et al. (2013) Ferland, G. J., Porter, R. L., van Hoof, P. A. M., et al. 2013, Rev. Mexicana Astron. Astrofis., 49, 137. https://arxiv.org/abs/1302.4485
* Fernandez & Ip (1984) Fernandez, J. A., & Ip, W. H. 1984, Icarus, 58, 109, doi: 10.1016/0019-1035(84)90101-5
* Gomes & Nesvorný (2016) Gomes, R., & Nesvorný, D. 2016, A&A, 592, A146, doi: 10.1051/0004-6361/201527757
* Hahn & Malhotra (1999) Hahn, J. M., & Malhotra, R. 1999, AJ, 117, 3041, doi: 10.1086/300891
* Hainaut et al. (2012) Hainaut, O. R., Boehnhardt, H., & Protopapa, S. 2012, A&A, 546, A115, doi: 10.1051/0004-6361/201219566
* Hainaut & Delsanti (2002) Hainaut, O. R., & Delsanti, A. C. 2002, A&A, 389, 641, doi: 10.1051/0004-6361:20020431
* Hasegawa et al. (2021) Hasegawa, S., Marsset, M., DeMeo, F. E., et al. 2021, ApJ, 916, L6, doi: 10.3847/2041-8213/ac0f05
* Horner & Lykawka (2010) Horner, J., & Lykawka, P. S. 2010, MNRAS, 402, 13, doi: 10.1111/j.1365-2966.2009.15702.x
* Jewitt (2018) Jewitt, D. 2018, AJ, 155, 56, doi: 10.3847/1538-3881/aaa1a4
* Jewitt & Meech (1986) Jewitt, D., & Meech, K. J. 1986, ApJ, 310, 937, doi: 10.1086/164745
* Jewitt (2002) Jewitt, D. C. 2002, AJ, 123, 1039, doi: 10.1086/338692
* Kirch (2008) Kirch, W., ed. 2008, Pearson’s Correlation Coefficient (Dordrecht: Springer Netherlands), 1090–1091, doi: 10.1007/978-1-4020-5614-7_2569
* Kortenkamp et al. (2004) Kortenkamp, S. J., Malhotra, R., & Michtchenko, T. 2004, Icarus, 167, 347, doi: 10.1016/j.icarus.2003.09.021
* Lacerda et al. (2014) Lacerda, P., Fornasier, S., Lellouch, E., et al. 2014, ApJ, 793, L2, doi: 10.1088/2041-8205/793/1/L2
* Lin et al. (2022) Lin, H.-W., Markwardt, L., Napier, K. J., Adams, F. C., & Gerdes, D. W. 2022, Research Notes of the American Astronomical Society, 6, 79, doi: 10.3847/2515-5172/ac6752
* Lin et al. (2019) Lin, H. W., Gerdes, D. W., Hamilton, S. J., et al. 2019, Icarus, 321, 426, doi: 10.1016/j.icarus.2018.12.006
* Lin et al. (2021) Lin, H. W., Chen, Y.-T., Volk, K., et al. 2021, Icarus, 361, 114391, doi: 10.1016/j.icarus.2021.114391
* Lisse & Steckloff (2022) Lisse, C., & Steckloff, J. 2022, Icarus, 381, 114995, doi: https://doi.org/10.1016/j.icarus.2022.114995
* Luu & Jewitt (1996) Luu, J., & Jewitt, D. 1996, AJ, 112, 2310, doi: 10.1086/118184
* Lykawka et al. (2011) Lykawka, P. S., Horner, J., Jones, B. W., & Mukai, T. 2011, MNRAS, 412, 537, doi: 10.1111/j.1365-2966.2010.17936.x
* Magnier et al. (2013) Magnier, E. A., Schlafly, E., Finkbeiner, D., et al. 2013, ApJS, 205, 20, doi: 10.1088/0067-0049/205/2/20
* Malhotra (1993) Malhotra, R. 1993, Nature, 365, 819, doi: 10.1038/365819a0
* Malhotra (1995) —. 1995, AJ, 110, 420, doi: 10.1086/117532
* Markwardt et al. (2021) Markwardt, L., Adams, F., Gerdes, D., et al. 2021, The First Near-IR Spectroscopic Survey of Neptune Trojans, JWST Proposal. Cycle 1, ID. #2550
* Moffat (1969) Moffat, A. F. J. 1969, A&A, 3, 455
* Nesvorný et al. (2018) Nesvorný, D., Vokrouhlický, D., Bottke, W. F., & Levison, H. F. 2018, Nature Astronomy, 2, 878, doi: 10.1038/s41550-018-0564-3
* Nesvorný et al. (2013) Nesvorný, D., Vokrouhlický, D., & Morbidelli, A. 2013, ApJ, 768, 45, doi: 10.1088/0004-637X/768/1/45
* Neveu & Vernazza (2019) Neveu, M., & Vernazza, P. 2019, ApJ, 875, 30, doi: 10.3847/1538-4357/ab0d87
* Parker et al. (2013) Parker, A. H., Buie, M. W., Osip, D. J., et al. 2013, AJ, 145, 96, doi: 10.1088/0004-6256/145/4/96
* Pedregosa et al. (2011) Pedregosa, F., Varoquaux, G., Gramfort, A., et al. 2011, Journal of Machine Learning Research, 12, 2825
* Peixinho et al. (2015) Peixinho, N., Delsanti, A., & Doressoundiram, A. 2015, A&A, 577, A35, doi: 10.1051/0004-6361/201425436
* Peixinho et al. (2012) Peixinho, N., Delsanti, A., Guilbert-Lepoutre, A., Gafeira, R., & Lacerda, P. 2012, A&A, 546, A86, doi: 10.1051/0004-6361/201219057
* Pike et al. (2017a) Pike, R. E., Lawler, S., Brasser, R., et al. 2017a, AJ, 153, 127, doi: 10.3847/1538-3881/aa5be9
* Pike et al. (2017b) Pike, R. E., Fraser, W. C., Schwamb, M. E., et al. 2017b, AJ, 154, 101, doi: 10.3847/1538-3881/aa83b1
* Roig & Nesvorný (2015) Roig, F., & Nesvorný, D. 2015, AJ, 150, 186, doi: 10.1088/0004-6256/150/6/186
* Schwamb et al. (2019) Schwamb, M. E., Fraser, W. C., Bannister, M. T., et al. 2019, ApJS, 243, 12, doi: 10.3847/1538-4365/ab2194
* Schwarz et al. (2011) Schwarz, G. J., Ness, J.-U., Osborne, J. P., et al. 2011, ApJS, 197, 31, doi: 10.1088/0067-0049/197/2/31
* Seccull et al. (2018) Seccull, T., Fraser, W. C., Puzia, T. H., Brown, M. E., & Schönebeck, F. 2018, ApJ, 855, L26, doi: 10.3847/2041-8213/aab3dc
* Sheppard (2010) Sheppard, S. S. 2010, AJ, 139, 1394, doi: 10.1088/0004-6256/139/4/1394
* Sheppard (2012) —. 2012, AJ, 144, 169, doi: 10.1088/0004-6256/144/6/169
* Sheppard & Trujillo (2006) Sheppard, S. S., & Trujillo, C. A. 2006, Science, 313, 511, doi: 10.1126/science.1127173
* Tabeshian et al. (2019) Tabeshian, M., Wiegert, P., Ye, Q., et al. 2019, The Astronomical Journal, 158, 30, doi: 10.3847/1538-3881/ab245d
* Tonry et al. (2012) Tonry, J. L., Stubbs, C. W., Lykke, K. R., et al. 2012, ApJ, 750, 99, doi: 10.1088/0004-637X/750/2/99
* Wong & Brown (2017) Wong, I., & Brown, M. E. 2017, AJ, 153, 145, doi: 10.3847/1538-3881/aa60c3
UNIFORM ENTROPY AND ENERGY BOUNDS FOR FULLY NON-LINEAR EQUATIONS
Bin Guo and Duong H. Phong
Work supported in part by the National Science Foundation under grant DMS-22-03273.
###### Abstract
Energy bounds which are uniform in the background metric are obtained from
upper bounds for entropy-like quantities. The argument is based on auxiliary
Monge-Ampère equations involving sublevel sets, and bypasses the Alexandrov-
Bakelman-Pucci maximum principle. In particular, it implies uniform
$L^{\infty}$ bounds for systems coupling a fully non-linear equation to its
linearization, generalizing the cscK equation.
## 1 Introduction
Let $(X,\omega_{X})$ be a compact $n$-dimensional Kähler manifold. If
$\varphi$ is any smooth $\omega_{X}$-plurisubharmonic function, its entropy
${\rm Ent}(\varphi)$ is defined as the entropy of the measure
$\omega_{\varphi}^{n}=(\omega_{X}+i\partial\bar{\partial}\varphi)^{n}$ with
respect to the measure $\omega_{X}^{n}$,
$\displaystyle{\rm Ent}(\varphi)=\int_{X}\,{\rm
log}\,({\omega_{\varphi}^{n}\over\omega_{X}^{n}})\omega_{\varphi}^{n}$ (1.1)
and, if $\varphi$ is normalized so that ${\rm sup}_{X}\varphi=0$, its energy
$E(\varphi)$ is defined as
$\displaystyle
E(\varphi)=\int_{X}(-\varphi)\omega_{\varphi}^{n}=\|\varphi\|_{L^{1}(\omega_{\varphi}^{n})}.$
(1.2)
Both notions are essential for the study of the Monge-Ampère equation and the
problem of constant scalar curvature Kähler metrics. For example, the entropy
is the leading term in the Mabuchi functional, while the energy is closely
related to the well-known Aubin-Yau functionals $I(\varphi)$ and $J(\varphi)$
of Kähler geometry. In a recent major breakthrough, it had actually been shown
by X.X. Chen and J.R. Cheng [5] that, in the constant scalar curvature
equation, an upper bound for the entropy is equivalent to bounds for $\varphi$
to all orders. In another direction, it has been known for some time that
bounds for the entropy imply bounds for the energy [4], and more precise
embeddings of spaces of potentials with finite entropy into
$L^{p}(\omega_{\varphi}^{n})$ have now been established in [10].
Our interest in entropy-like quantities comes from a related but different
source: entropy-like quantities can also be defined for general fully non-
linear equations on Kähler manifolds, and they turn out to be central to the
existence of a priori $L^{\infty}$ estimates [12]. More precisely, let
$\omega$ be a Kähler form on $X$, and consider an equation of the form
$\displaystyle f(\lambda[h_{\varphi}])=c_{\omega}e^{F_{\omega}},\quad{\rm
sup}_{X}\varphi=0,\quad\lambda[h_{\varphi}]\in\Gamma,$ (1.3)
where
$(h_{\varphi})^{j}{}_{k}=\omega_{X}^{j\bar{m}}(\omega_{\varphi})_{\bar{m}k}$
is the relative endomorphism between $\omega_{X}$ and
$\omega_{\varphi}=\omega+i\partial\bar{\partial}\varphi$,
$\lambda[h_{\varphi}]$ the un-ordered vector of eigenvalues of $h_{\varphi}$,
and $f(\lambda)$ a given function defined on a cone $\Gamma\subset{\bf R}^{n}$
satisfying the conditions (1-4) spelled out in §2 below. The function
$F_{\omega}$ is normalized to satisfy
$\int_{X}e^{nF_{\omega}}\omega_{X}^{n}=\int_{X}\omega_{X}^{n}$, and we set
$V_{\omega}=\int_{X}\omega^{n}$ to be the volume of $\omega$. The case
considered in [12] is when $\omega=\chi+t\omega_{X}$, $t\in(0,1]$, where
$\chi$ is a given non-negative closed $(1,1)$-form, and the estimates were
required to be uniform in $t$. This case includes the one corresponding to a
fixed background metric, which can be obtained by setting $\chi=0$ and $t=1$.
It is then proved in Theorem 2, [12] that for such $\omega$ and any $p>n$, we
must have
$\displaystyle{\rm sup}_{X}|\varphi|\leq C$ (1.4)
where $C$ depends only on $\omega_{X},\chi,n,p,\gamma$, and upper bounds for
the following three quantities
$\displaystyle{c_{\omega}^{n}\over V_{\omega}},\quad
E(\omega)={c_{\omega}^{n}\over
V_{\omega}}\int_{X}(-\varphi)f(\lambda[h_{\omega}])^{n}\omega_{X}^{n},\quad{\rm
Ent}_{p}(e^{nF_{\omega}})=\int_{X}e^{nF_{\omega}}|F_{\omega}|^{p}\omega_{X}^{n}.$
(1.5)
In the particular case of the Monge-Ampère equation $f(\lambda)=(\prod_{j=1}^{n}\lambda_{j})^{1\over n}$, $\Gamma=\{\lambda_{j}>0,1\leq j\leq n\}$, the first two quantities in the above list can be bounded by elementary arguments, so we obtain uniform bounds for the complex Monge-Ampère equation depending only on an upper bound for ${\rm Ent}_{p}(e^{nF_{\omega}})$ for any fixed $p>n$, thus recovering the classic estimate of Kolodziej [25], as well as the uniform versions established by Demailly and Pali [8] and Eyssidieux, Guedj, and Zeriahi [11].
The proof of the $L^{\infty}$ bound for fully non-linear PDE’s which we just
described was based on a comparison with an auxiliary Monge-Ampère equation
involving integrals $A_{s}$ on the sublevel sets $\Omega_{s}=\\{z\in
X;\varphi<-s\\}$ of the function $\varphi$. This method turns out to be
particularly effective: it has been extended by various authors to stability
estimates [13], nef classes [14], moduli of continuity [15, 19], lower bounds
for the Green’s function [16, 18], as well as Hermitian manifolds [17] and
parabolic equations [6].
However, a natural question concerning the above general $L^{\infty}$ estimate was whether uniform bounds for the energy $E(\omega)$ can be obtained from bounds for the entropy-like quantity ${\rm Ent}_{p}(e^{nF_{\omega}})$. For a fixed background metric $\omega$, this had been done in Theorem 3, [12], using an argument inspired by Chen-Cheng [5] which relied on the Alexandrov-Bakelman-Pucci (ABP) maximum principle. The ABP maximum principle is a powerful method pioneered by Blocki ([3]; see more applications in e.g. [26, 27]), but its dependence on the background metric can be delicate, so uniform energy bounds have so far been lacking.
The main goal of the present paper is to supply such uniform bounds for the energy $E(\omega)$. It turns out that they can be obtained once again by a modification of the auxiliary Monge-Ampère equation involving sublevel sets of $\varphi$ used in [12]. This auxiliary Monge-Ampère equation bypasses the ABP maximum principle and does yield uniform estimates. As an indirect consequence, it can be used to simplify parts of the arguments in [5], and to generalize the $C^{0}$ estimates there to uniform estimates as well. We give a precise description of the results in the next section.
## 2 Statement of the main results
We begin by stating the precise conditions on the nonlinear operator
$f(\lambda)$. As in [12], we require that $f:\Gamma\to{\bf R}_{+}$ satisfies
(1) $\Gamma\subset{\bf R}^{n}$ is a symmetric cone with
$\Gamma_{n}\subset\Gamma\subset\Gamma_{1};$ (2.1)
Here $\Gamma_{k}$ is the cone of vectors $\lambda$ with
$\sigma_{j}(\lambda)>0$ for $1\leq j\leq k$, where $\sigma_{j}(\lambda)$ is
the $j$-th symmetric polynomial in $\lambda$. In particular, $\Gamma_{1}$ is
the half-space defined by $\lambda_{1}+\cdots+\lambda_{n}>0$, and $\Gamma_{n}$
is the first octant, defined by $\lambda_{j}>0$ for $1\leq j\leq n$.
(2) $f(\lambda)$ is symmetric in
$\lambda=(\lambda_{1},\ldots,\lambda_{n})\in\Gamma$ and it is homogeneous of
degree one;
(3) $\frac{\partial f}{\partial\lambda_{j}}>0$ for each $j=1,\ldots,n$ and
$\lambda\in\Gamma$;
(4) There is a $\gamma>0$ such that
$\prod_{j=1}^{n}\frac{\partial
f(\lambda)}{\partial\lambda_{j}}\geq\gamma,\quad\forall\lambda\in\Gamma.$
(2.2)
It is well-known that equations such as the Monge-Ampère equation, with
$f(\lambda)=(\prod_{j=1}^{n}\lambda_{j})^{1\over n}$, or the Hessian equation
with $f(\lambda)=\sigma_{k}(\lambda)^{1\over k}$, or the $p$-Monge-Ampère
equation of Harvey and Lawson [20, 21] with
$f(\lambda)=\Big{(}\prod_{I}\lambda_{I}\Big{)}^{\frac{n!}{(n-p)!p!}}$
where $I$ runs over all distinct multi-indices
$1\leq{i_{1}}<\cdots<{i_{p}}\leq n$,
$\lambda_{I}=\lambda_{i_{1}}+\cdots+\lambda_{i_{p}}$, and $\Gamma$ is the cone
defined by $\lambda_{I}>0$ for all $p$-indices $I$, all satisfy the structural
condition (4). In a remarkable recent development, Harvey and Lawson [22]
showed that the condition (4) actually holds for very large classes of non-
linear operators, including all invariant Garding-Dirichlet operators. As
noted in [22], the condition (4) also arose independently in [2] in the study
of $W^{2,p}$ interior regularity.
###### Theorem 1
Let $(X,\omega_{X})$ be a compact $n$-dimensional Kähler manifold. Let
$\omega$ be any Kähler form on $X$ with
$\displaystyle\omega\leq\kappa\,\omega_{X}$ (2.3)
for some constant $\kappa>0$. Consider the equation (1.3) with the operator
$f(\lambda)$ satisfying the conditions (1-4). Then for any $p>0$, any $C^{2}$
solution $\varphi$ of (1.3) satisfies the following
(i) Trudinger-like inequalities
$\int_{X}e^{\alpha(-\varphi)^{q}}\omega_{X}^{n}\leq C_{T},$ (2.4)
(ii) and energy-like inequalities
$\int_{X}(-\varphi)^{pq}e^{nF_{\omega}}\omega_{X}^{n}\leq C_{e}.$ (2.5)
Here the exponent $q$ is given by $q=\frac{n}{n-p}$ if $p<n$, and can be any
strictly positive exponent if $p\geq n$. The constants $C_{T}$ and $C_{e}$ are
computable constants depending only on $n,p,q,\omega_{X},\kappa,\gamma$, and
upper bounds for the following two quantities
$\displaystyle{c_{\omega}^{n}\over
V_{\omega}},\quad{\mathrm{Ent}}_{p}(e^{nF_{\omega}})=\int_{X}e^{nF_{\omega}}|F_{\omega}|^{p}\omega_{X}^{n},$
(2.6)
and the $\alpha>0$ in (2.4) is a constant that depends only on
$n,p,\gamma,{c^{n}_{\omega}\over V_{\omega}}$ and $\kappa$.
We observe that, in the case of a fixed background Kähler metric $\omega$,
this theorem was proved as Theorem 3 in [12]. The point of the new theorem is
to have uniform estimates, even as the background metric $\omega$ may
degenerate to the boundary of the Kähler cone. For this same reason, we
include the case $p>n$ in the statement. When $p>n$ and the background Kähler
form $\omega$ is fixed, it follows from Theorem 1, [12], that the solution
$\varphi$ of the equation is actually bounded, and the above Trudinger-like
and energy-like inequalities follow at once. But here again, the existing
results do not give the inequalities uniform in $\omega$ that we seek.
To obtain estimates which are uniform with respect to the background metric
$\omega$, we have to improve on the proof for fixed $\omega$ in [12], which
was modeled on the arguments of [5] for the constant scalar curvature
equation, and made essential use of the ABP maximum principle. What appears
needed for uniform estimates is rather arguments in the spirit of Theorem 2,
[12], and indeed, it turns out that these arguments can be adapted to the case
at hand.
Theorem 1 readily combines with Theorem 2, [12] to give the following
improvement, which we state for easy reference in the future:
###### Theorem 2
Let $(X,\omega_{X})$ be a compact $n$-dimensional Kähler manifold. Let
$\omega$ be a Kähler form satisfying the condition (2.3) for a fixed constant
$\kappa>0$, and consider the equation (1.3) with the operator $f(\lambda)$
satisfying the conditions (1-4). Then for any $p>n$, a $C^{2}$ solution
$\varphi$ of the equation (1.3) must satisfy
$\displaystyle{\rm sup}_{X}|\varphi|\leq C$ (2.7)
where $C$ is a constant depending only on $\omega_{X},n,p,\gamma,\kappa$, and
upper bounds for the following two quantities
$\displaystyle{c_{\omega}^{n}\over V_{\omega}},\quad{\rm
Ent}_{p}(e^{nF_{\omega}}).$ (2.8)
We observe that in [12], Theorem 2 was stated for background Kähler metrics of
the form $\omega=\chi+t\omega_{X}$, $t\in(0,1]$. However, as noted in [18],
the proof applies uniformly for background Kähler forms $\omega$ satisfying
(2.3), as long as we allow a dependence of all relevant constants on the bound
$\kappa$.
We would also like to note that, for the specific case of the Monge-Ampère equation on strongly pseudoconvex domains in ${\bf C}^{n}$, a proof of $L^{\infty}$ estimates using the Monge-Ampère energy and corresponding Sobolev inequalities has been given in [29, 30].
As we have stressed above, the key to the proof of Theorem 1 is an argument bypassing the use of the ABP maximum principle. As such, it can also be applied to simplify several of the parts of the paper of Chen-Cheng [5] which relied on the ABP maximum principle. As an illustration, we state here a $C^{0}$ estimate for a coupled system, generalizing the coupled system corresponding to the constant scalar curvature equation, which is uniform with respect to the background Kähler form $\omega$.
Let $(X,\omega_{X})$ be a compact $n$-dimensional Kähler manifold as before,
$\omega$ be a Kähler form, and let $\theta$ be a given smooth $(1,1)$-form on
$X$. We consider the coupled system
$\displaystyle
f(\lambda[h_{\varphi}])=c_{\omega}\,e^{F_{\omega}},\quad\sup_{X}\varphi=0$
$\displaystyle\Box_{\omega_{\varphi}}F_{\omega}=-c_{\theta}+G^{i\bar{j}}\theta_{i\bar{j}},$
(2.9)
where $G^{i\bar{j}}=\frac{\partial}{\partial h_{i\bar{j}}}\,{\rm
log}\,f(\lambda[h_{\varphi}])$ is the linearized operator of $\,{\rm
log}\,f(\lambda[h])$ which is positive definite by condition (3) in the
definition of $f(\lambda)$, and
$\Box_{\omega_{\varphi}}F_{\omega}=G^{i\bar{j}}(F_{\omega})_{i\bar{j}}$. The
function $F_{\omega}$ is again normalized (for the purpose of determining
$c_{\omega}$) to satisfy
$\int_{X}e^{nF_{\omega}}\omega_{X}^{n}=\int_{X}\omega_{X}^{n}$, and
$c_{\theta}$ is a constant determined by the equation. We have then
###### Theorem 3
Assume that the function $f(\lambda)$ satisfies the conditions (1-4) spelled
out at the beginning of this section, and $\omega$ satisfies the condition
$\omega\leq\kappa\,\omega_{X}$ for some constant $\kappa$. Fix a number
$p\in(0,n]$. We assume that
${\mathrm{Ent}}_{p}(e^{nF_{\omega}})=\int_{X}|F_{\omega}|^{p}e^{nF_{\omega}}\omega_{X}^{n}\leq
K_{1},\quad{\rm and}\quad\theta\geq-K_{2}\omega$ (2.10)
for some constants $K_{1}>0$ and $K_{2}>0$. Then
$\displaystyle{\rm sup}_{X}|\varphi|\leq C,\,\mbox{and
}\sup\nolimits_{X}F_{\omega}\leq C,$
for a constant $C$ depending only on
$\omega_{X},p,n,\gamma,\kappa,c_{\omega}^{n}/V_{\omega},c_{\theta},K_{1}$ and
$K_{2}$. If we assume further that
$\displaystyle\theta\leq K_{3}\omega$ (2.11)
then the function $F_{\omega}$ is bounded from below by another constant
depending further on $K_{3}$.
We remark that when $f(\lambda[h_{\varphi}])=(\frac{\omega_{\varphi}^{n}}{\omega_{X}^{n}})^{1/n}$ and $\theta={\rm Ric}(\omega_{X})$ is the Ricci curvature of $\omega_{X}$, the coupled system (2.9) is the constant scalar curvature Kähler (cscK) equation studied in [5]. In this case, the constant $c_{\omega}=V_{\omega}^{1/n}$, and $c_{\theta}\in{\bf R}$ depends only on the cohomology classes $[\omega]$ and $c_{1}(X)$. The lower bound for $F_{\omega}$ was established in [23]. One of the main results of [5] is that, assuming an upper bound for the entropy ${\rm Ent}_{p=1}(e^{nF})$, one can obtain a priori estimates for $\varphi$ of all orders. What our result shows is that the $C^{0}$ bounds for $\varphi$ and $F_{\omega}$ for this particular coupled system still hold even when $\omega$ degenerates to the boundary of the Kähler cone. We remark that the condition $-K_{2}\omega\leq\theta\leq K_{3}\omega$ in (2.10) and (2.11) is not very restrictive for a degenerating family. For example, it holds for $\omega=\chi+t\omega_{X}$ and $\theta=-\chi$ for some nonnegative $(1,1)$-form $\chi$.
## 3 Proof of Theorem 1
For notational simplicity, we will omit the subscript $\omega$ in $F_{\omega}$
and simply write $F$ in this section. Suppose $\varphi\in C^{2}(X)$ solves the
equation (1.3) with $\sup_{X}\varphi=0$. Since
$\lambda[h_{\varphi}]\in\Gamma\subset\Gamma_{1}$, we have
${\rm tr}_{\omega_{X}}\omega+\Delta_{\omega_{X}}\varphi>0.$
The assumption (2.3) implies that $\Delta_{\omega_{X}}\varphi\geq-{\rm tr}_{\omega_{X}}\omega\geq-n\kappa$. An application of Green's formula then yields the following uniform $L^{1}(X,\omega_{X}^{n})$ estimate for $\varphi$.
###### Lemma 1
There exists a constant $C_{0}=C_{0}(n,\kappa,\omega_{X})$ such that
$\int_{X}|\varphi|\omega^{n}_{X}\leq C_{0}.$
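For completeness, we sketch the standard argument behind Lemma 1, with one convenient normalization of the Green's function (any equivalent normalization works). Let $G(x,y)$ be the Green's function of $\Delta_{\omega_{X}}$, normalized by $\int_{X}G(x,\cdot)\,\omega_{X}^{n}=0$, and let $C_{G}\geq 0$ be a constant with $G\geq-C_{G}$, which exists since $X$ is compact. The Green's representation formula gives
$\varphi(x)=\frac{1}{V_{\omega_{X}}}\int_{X}\varphi\,\omega_{X}^{n}-\int_{X}G(x,y)\,\Delta_{\omega_{X}}\varphi(y)\,\omega_{X}^{n}(y),\qquad V_{\omega_{X}}=\int_{X}\omega_{X}^{n}.$
Evaluating at a point $x_{0}$ where $\varphi(x_{0})=\sup_{X}\varphi=0$, and using $\int_{X}\Delta_{\omega_{X}}\varphi\,\omega_{X}^{n}=0$ together with $-\Delta_{\omega_{X}}\varphi\leq n\kappa$, we find
$\frac{1}{V_{\omega_{X}}}\int_{X}(-\varphi)\,\omega_{X}^{n}=-\int_{X}(G(x_{0},y)+C_{G})\,\Delta_{\omega_{X}}\varphi\,\omega_{X}^{n}(y)\leq n\kappa\,C_{G}\,V_{\omega_{X}},$
so Lemma 1 holds with $C_{0}=n\kappa C_{G}V_{\omega_{X}}^{2}$.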
We write $K>0$ for an upper bound on the $p$-th entropy of $e^{nF}$, for $p\in(0,n]$. As in Theorem 1, let $q=\frac{n}{n-p}$ if $p<n$, and let $q>0$ be any positive number if $p=n$. Let $s>0$ be a positive number and
$\Omega_{s}\subset X$ be the sub-level set
$\Omega_{s}=\\{z\in X|~{}-\varphi(z)-s>0\\}.$ (3.1)
We also define a monotonically decreasing function $\phi(s)$ as in [12]
$\phi(s)=\int_{\Omega_{s}}e^{nF}\omega_{X}^{n}.$ (3.2)
Given these definitions, we have the following lemma about the decay of
$\phi(s)$.
###### Lemma 2
There exists a constant $C_{1}>0$ that depends on $n,p,K,C_{0}>0$ such that
for any $s>1$
$\phi(s)\leq\frac{C_{1}}{(\,{\rm log}\,\,s)^{p}}.$ (3.3)
Proof of Lemma 2. We observe that, by a Hölder-Young type inequality (cf. [12]; see also the remark following this proof), there is a constant $C_{p}>0$ depending only on $p$ such that
$\int_{\Omega_{s}}e^{nF}\omega_{X}^{n}\leq 2^{p}\int_{\Omega_{s}}\Big(\frac{2^{-1}\,{\rm log}\,(-\varphi)}{\,{\rm log}\,s}\Big)^{p}e^{nF}\omega_{X}^{n}$
$\leq\frac{2^{p}}{(\,{\rm log}\,s)^{p}}\int_{\Omega_{s}}\Big(e^{nF}(1+n^{p}|F|^{p})+C_{p}e^{\,{\rm log}\,(-\varphi)}\Big)\omega_{X}^{n}$
$\leq\frac{C}{(\,{\rm log}\,s)^{p}}(1+K+C_{p}C_{0})\leq\frac{C_{1}}{(\,{\rm log}\,s)^{p}}.$
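For the reader's convenience, we record one elementary inequality that suffices for the second step above; it is a version of the Hölder-Young inequality and can be checked by separating cases. For $x\geq 0$ and $y>0$, considering separately $x\leq\,{\rm log}\,y$ and $x>\,{\rm log}\,y$ gives
$y\,x^{p}\leq y\,((\,{\rm log}\,y)_{+})^{p}+C_{p}\,e^{2x},\qquad C_{p}=\sup_{t\geq 0}t^{p}e^{-t}.$
Applying this with $x=\frac{1}{2}\,{\rm log}\,(-\varphi)$, which satisfies $e^{2x}=-\varphi$ and $x\geq 0$ on $\Omega_{s}$ since $s>1$, and with $y=e^{nF}$, so that $((\,{\rm log}\,y)_{+})^{p}\leq n^{p}|F|^{p}$, yields
$e^{nF}\Big(\frac{1}{2}\,{\rm log}\,(-\varphi)\Big)^{p}\leq n^{p}|F|^{p}e^{nF}+C_{p}(-\varphi),$
which implies the bound used in the second inequality above.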
From Lemma 2, we see that $\phi(s)$ can be arbitrarily small if $s>1$ is
sufficiently large. We now prove Theorem 1.
Proof of Theorem 1. We will modify the approach given in [12]. We break the
proof into four steps.
Step 1. Let $\tau_{k}:{\bf R}\to{\bf R}_{+}$ be a sequence of positive smooth functions that decreases monotonically to the function $x\cdot\chi_{{\bf R}_{+}}(x)$. Let $a=pq=\frac{np}{n-p}$ ($a$ is any positive number if $p=n$). We solve the following complex Monge-Ampère equation
number if $p=n$). We solve the following complex Monge-Ampère equation
$(\omega+i\partial\bar{\partial}\psi_{s,k})^{n}=\frac{\tau_{k}(-\varphi-s)^{a}}{A_{s,k}}c_{\omega}^{n}e^{nF}\omega_{X}^{n},\quad\sup_{X}\psi_{s,k}=0.$
(3.4)
Here the constant $A_{s,k}$ is defined by
$A_{s,k}=\frac{c_{\omega}^{n}}{V_{\omega}}\int_{X}\tau_{k}(-\varphi-s)^{a}e^{nF}\omega_{X}^{n}$
(3.5)
to make the equation (3.4) compatible. By assumption $[\omega]$ is a Kähler class, so by Yau’s theorem [31], equation (3.4) admits a unique smooth solution $\psi_{s,k}$. By the dominated convergence theorem, we observe that as $k\to\infty$
$A_{s,k}\to
A_{s}:=\frac{c_{\omega}^{n}}{V_{\omega}}\int_{\Omega_{s}}(-\varphi-s)^{a}e^{nF}\omega_{X}^{n}.$
(3.6)
Step 2. Define a smooth function
$\Phi:=-\varepsilon(-\psi_{s,k}+\Lambda)^{b}-\varphi-s,$
where the constants are given by
$b=\frac{n}{n+a}\in(0,1),\quad\mbox{and
}\varepsilon=\frac{1}{\gamma^{1/(n+a)}(nb)^{n/(n+a)}}A_{s,k}^{\frac{1}{n+a}},$
(3.7)
and $\Lambda$ is chosen so that $\varepsilon b\Lambda^{-(1-b)}=1$, i.e.
$\Lambda=\frac{b^{1/(1-b)}}{(\gamma^{1/(n+a)}(nb)^{n/(n+a)})^{1/(1-b)}}A_{s,k}^{\frac{1}{a}}.$
(3.8)
We claim that $\Phi\leq 0$ on $X$. To see this, we note that $\Phi<0$ on
$X\backslash\Omega_{s}$ by definition. If $\max_{X}\Phi$ is achieved somewhere
on $X\backslash\Omega_{s}$, we are done. So we assume
$\max_{X}\Phi=\Phi(x_{0})$ for some point $x_{0}\in\Omega_{s}$. Let
$G^{i\bar{j}}=\frac{\partial\,{\rm log}\,f(\lambda[h_{\varphi}])}{\partial
h_{i\bar{j}}}=\frac{1}{f}\frac{\partial f(\lambda[h_{\varphi}])}{\partial
h_{i\bar{j}}}$ (3.9)
be the coefficients of the linearized operator of $f(\lambda[h_{\varphi}])$ at
the point $x_{0}$. $G^{i\bar{j}}$ is positive definite by condition (3) of the
function $f$. Hence we have $G^{i\bar{j}}\Phi_{i\bar{j}}\leq 0$ at the point
$x_{0}$. We choose local holomorphic coordinates at $x_{0}$ such that, at this point, $(\omega_{X})_{i\bar{j}}|_{x_{0}}=\delta_{ij}$ and $(\omega_{\varphi})|_{x_{0}}$ is diagonal with eigenvalues $\lambda_{1},\ldots,\lambda_{n}$. Then we have
${\rm det}G^{i\bar{j}}=\frac{1}{f^{n}}\prod_{j=1}^{n}\frac{\partial
f}{\partial\lambda_{j}}\geq\frac{\gamma}{f^{n}},$ (3.10)
by the condition (4) in the definition of the nonlinear operator $f(\lambda)$.
We calculate at $x_{0}$ as follows:
$0\geq G^{i\bar{j}}\Phi_{i\bar{j}}=\varepsilon b(-\psi_{s,k}+\Lambda)^{b-1}G^{i\bar{j}}(\psi_{s,k})_{i\bar{j}}+\varepsilon b(1-b)(-\psi_{s,k}+\Lambda)^{b-2}G^{i\bar{j}}(\psi_{s,k})_{i}(\psi_{s,k})_{\bar{j}}-G^{i\bar{j}}(\omega_{\varphi}-\omega)_{i\bar{j}}$
$\geq\varepsilon b(-\psi_{s,k}+\Lambda)^{b-1}G^{i\bar{j}}(\omega_{\psi_{s,k}})_{i\bar{j}}-1+(1-\varepsilon b(-\psi_{s,k}+\Lambda)^{b-1})G^{i\bar{j}}\omega_{i\bar{j}}$
$\geq n\varepsilon b(-\psi_{s,k}+\Lambda)^{b-1}({\rm det}\,G^{i\bar{j}})^{1/n}({\rm det}\,(\omega_{\psi_{s,k}})_{i\bar{j}})^{1/n}-1+(1-\varepsilon b\Lambda^{-(1-b)})G^{i\bar{j}}\omega_{i\bar{j}}$
$\geq n\varepsilon b(-\psi_{s,k}+\Lambda)^{b-1}\gamma^{\frac{1}{n}}\Big(\frac{(-\varphi-s)^{a}}{A_{s,k}}\Big)^{\frac{1}{n}}-1.$
Here we have applied the arithmetic-geometric inequality, (3.10), and the
equation (3.4) of $\omega_{\psi_{s,k}}$. It follows easily from the above and
the choice of constants in (3.7) that at $x_{0}$
$(-\varphi-s)\leq\frac{A_{s,k}^{1/a}}{(\gamma^{1/n}n\varepsilon
b)^{n/a}}(-\psi_{s,k}+\Lambda)^{(1-b)\frac{n}{a}}=\varepsilon(-\psi_{s,k}+\Lambda)^{b}$
that is, $\Phi(x_{0})\leq 0$.
Step 3. From the previous step, $\Phi\leq 0$. Thus on $\Omega_{s}$ we have
$\frac{(-\varphi-s)}{A_{s,k}^{1/(n+a)}}\leq
C(-\psi_{s,k}+CA_{s,k}^{1/a})^{\frac{n}{n+a}},$ (3.11)
for some uniform constant $C>0$ that depends on $n,p,\gamma$. Taking the $((n+a)p/n)$-th power of both sides of (3.11) and multiplying by $e^{nF}$, we obtain that on $\Omega_{s}$
$\frac{(-\varphi-s)^{\frac{p(n+a)}{n}}}{A_{s,k}^{p/n}}e^{nF}\leq
C(-\psi_{s,k}+A_{s,k}^{1/a})^{p}e^{nF}\leq
C_{2}[(-\psi_{s,k})^{p}e^{nF}+A_{s,k}^{p/a}e^{nF}],$ (3.12)
for some constant $C_{2}>0$ depending only on $n,p,\gamma$. We note that by
Hölder-Young’s inequality, for any $\beta>0$ there is a constant $C_{p}>0$
depending only on $p$ such that
$(-\frac{\beta}{2}\psi_{s,k})^{p}e^{nF}\leq
e^{nF}(1+|nF|^{p})+C_{p}e^{-\beta\psi_{s,k}}.$ (3.13)
Since $\omega+i\partial\bar{\partial}\psi_{s,k}>0$ and by the assumption
(2.3), that is $\omega\leq\kappa\omega_{X}$, we see that $\psi_{s,k}\in
PSH(X,\kappa\omega_{X})$. Hence there exists a
$\beta=\beta(X,\kappa\omega_{X})>0$ such that ([24, 28])
$\int_{X}e^{-\beta\psi_{s,k}}\omega_{X}^{n}\leq C_{X},$ (3.14)
for some uniform constant $C_{X}=C_{X}(\kappa\omega_{X},n)$. We integrate both
sides of (3.12) against $\omega_{X}^{n}$ over $\Omega_{s}$ and apply (3.13)
and (3.14),
$\int_{\Omega_{s}}\frac{(-\varphi-s)^{\frac{p(n+a)}{n}}}{A_{s,k}^{p/n}}e^{nF}\omega_{X}^{n}\leq
C_{3}+C_{2}A_{s,k}^{p/a}\int_{\Omega_{s}}e^{nF}\omega_{X}^{n}=C_{3}+C_{2}A_{s,k}^{p/a}\phi(s).$
(3.15)
Here $C_{3}>0$ is a uniform constant depending on $n,p,\kappa,C_{X}$ and $K$,
the upper bound of ${\mathrm{Ent}}_{p}(e^{nF})$. Letting $k\to\infty$ in
(3.15), we obtain that
$\int_{\Omega_{s}}(-\varphi-s)^{\frac{p(n+a)}{n}}e^{nF}\omega_{X}^{n}\leq
C_{3}A_{s}^{p/n}+C_{2}A_{s}^{\frac{p}{a}+\frac{p}{n}}\phi(s),$ (3.16)
where $A_{s}$ is given by (3.6). On the other hand, by Hölder inequality we
have
$A_{s}=\frac{c_{\omega}^{n}}{V_{\omega}}\int_{\Omega_{s}}(-\varphi-s)^{a}e^{nF}\omega_{X}^{n}$
$\leq\frac{c_{\omega}^{n}}{V_{\omega}}\Big(\int_{\Omega_{s}}(-\varphi-s)^{\frac{p(n+a)}{n}}e^{nF}\omega_{X}^{n}\Big)^{\frac{na}{p(n+a)}}\Big(\int_{\Omega_{s}}e^{nF}\omega_{X}^{n}\Big)^{1-\frac{na}{(n+a)p}}$
$\leq\frac{c_{\omega}^{n}}{V_{\omega}}\Big(C_{3}A_{s}^{p/n}+C_{2}A_{s}^{\frac{p}{a}+\frac{p}{n}}\phi(s)\Big)^{\frac{na}{p(n+a)}}\phi(s)^{1-\frac{na}{(n+a)p}}$
$\leq C_{4}\frac{c_{\omega}^{n}}{V_{\omega}}A_{s}^{\frac{a}{n+a}}\phi(s)^{1-\frac{na}{(n+a)p}}+C_{5}\frac{c_{\omega}^{n}}{V_{\omega}}A_{s}\phi(s).$ (3.17)
Here the constant $C_{5}>0$ depends only on $n,p,\gamma$ and $C_{4}>0$ depends
additionally on $\kappa,K$. Note that the inequality (3.17) holds for any
$s>0$, and the constants $C_{4},C_{5}$ are independent of $s$. We remark that
by the choice of $q$, $\frac{na}{(n+a)p}=1$ when $p\in(0,n)$, and
$\frac{na}{(n+a)p}<1$ when $p=n$, which justifies the Hölder inequality used
above.
We now apply Lemma 2 to conclude that when
$s\geq\bar{s}=\max(1,\,{\rm
exp}\,[(2C_{1}C_{5}c_{\omega}^{n}/V_{\omega})^{1/p}])$ (3.18)
we have
$\phi(s)\leq\frac{C_{1}}{(\,{\rm
log}\,s)^{p}}\leq\frac{1}{2}\frac{1}{C_{5}c_{\omega}^{n}/V_{\omega}}.$ (3.19)
Combining (3.19) and (3.17), we see that when $s\geq\bar{s}$
$A_{s}\leq
2C_{4}\frac{c_{\omega}^{n}}{V_{\omega}}A_{s}^{\frac{a}{n+a}}\phi(s)^{1-\frac{na}{p(n+a)}}.$
Dividing both sides by $A_{s}^{\frac{a}{n+a}}$, we easily obtain that when
$s\geq\bar{s}$
$A_{s}\leq(2C_{4})^{\frac{n+a}{n}}(\frac{c_{\omega}^{n}}{V_{\omega}})^{\frac{n+a}{n}}\phi(s)^{\frac{n+a}{n}(1-\frac{na}{p(n+a)})}\leq
C_{6}(\frac{c_{\omega}^{n}}{V_{\omega}})^{\frac{n+a}{n}}.$ (3.20)
By the definition of $A_{s}$ in (3.6), we see from (3.20) that
$\int_{\Omega_{\bar{s}}}(-\varphi-\bar{s})^{a}e^{nF}\omega_{X}^{n}\leq
C_{6}(\frac{c_{\omega}^{n}}{V_{\omega}})^{\frac{a}{n}}.$ (3.21)
Since $a>0$, by the elementary inequality $(x+y)^{a}\leq 2^{a}(x^{a}+y^{a})$ for any $x,y>0$, we easily obtain from (3.21) that
$\int_{\Omega_{\bar{s}}}(-\varphi)^{a}e^{nF}\omega_{X}^{n}\leq
2^{a}C_{6}(\frac{c_{\omega}^{n}}{V_{\omega}})^{\frac{a}{n}}+C_{7}\bar{s}^{a}.$
(3.22)
Finally note that on $X\backslash\Omega_{\bar{s}}$,
$0\leq-\varphi\leq\bar{s}$, hence from (3.22)
$\int_{X}(-\varphi)^{a}e^{nF}\omega_{X}^{n}\leq\int_{\Omega_{\bar{s}}}(-\varphi)^{a}e^{nF}\omega_{X}^{n}+\int_{X\backslash\Omega_{\bar{s}}}\bar{s}^{a}e^{nF}\omega_{X}^{n}$
$\leq 2^{a}C_{6}(\frac{c_{\omega}^{n}}{V_{\omega}})^{\frac{a}{n}}+C_{8}\bar{s}^{a}=2^{a}C_{6}(\frac{c_{\omega}^{n}}{V_{\omega}})^{\frac{a}{n}}+C_{8}\,{\rm exp}\,[a(2C_{1}C_{5}c^{n}_{\omega}/V_{\omega})^{1/p}]=:C_{e}.$
Here $C_{e}>0$ is the desired constant in (2.5) with an explicit dependence on
the relative volume $c^{n}_{\omega}/V_{\omega}$.
Step 4. We now show the Trudinger-like inequality (2.4). We take the $(\frac{n+a}{n})$-th power of both sides of (3.11) and multiply the resulting inequality by a small constant $\alpha>0$ to be determined. It then follows that on $\Omega_{s}$
$\alpha(-\varphi-s)^{\frac{n+a}{n}}\leq C_{9}\alpha
A_{s,k}^{1/n}(-\psi_{s,k}+A_{s,k}^{1/a}).$ (3.23)
Taking exponential on both sides of (3.23) and integrating it over
$\Omega_{s}$, we then obtain
$\int_{\Omega_{s}}e^{\alpha(-\varphi-s)^{\frac{n+a}{n}}}\omega_{X}^{n}\leq
e^{C_{9}\alpha A_{s,k}^{\frac{n+a}{na}}}\int_{\Omega_{s}}e^{-C_{9}\alpha
A_{s,k}^{1/n}\psi_{s,k}}\omega_{X}^{n}.$ (3.24)
Note that by (3.20), $A_{\bar{s}}\leq
C_{6}(\frac{c_{\omega}^{n}}{V_{\omega}})^{\frac{n+a}{n}}$ and
$A_{\bar{s},k}\to A_{\bar{s}}$, so when $k$ is large enough we have
$A_{\bar{s},k}\leq 2C_{6}(\frac{c^{n}_{\omega}}{V_{\omega}})^{\frac{n+a}{n}}.$
We choose $\alpha>0$ small enough that
$C_{9}A_{\bar{s},k}^{1/n}\alpha\leq(2C_{6})^{1/n}C_{9}(\frac{c_{\omega}^{n}}{V_{\omega}})^{\frac{n+a}{n^{2}}}\alpha<\alpha(X,\kappa\omega_{X}),$
where $\alpha(X,\kappa\omega_{X})>0$ is the $\alpha$-invariant of the Kähler manifold $(X,\kappa\omega_{X})$. Then from (3.24) we get
$\int_{\Omega_{\bar{s}}}e^{\alpha(-\varphi-\bar{s})^{\frac{n+a}{n}}}\omega_{X}^{n}\leq\,{\rm
exp}\,\Big{(}{C_{10}(\frac{c_{\omega}^{n}}{V_{\omega}})^{\frac{(n+a)}{na}}}\Big{)}.$
(3.25)
It is then elementary to see that (3.25) implies
$\int_{\Omega_{\bar{s}}}e^{\alpha(-\varphi)^{\frac{n+a}{n}}}\omega_{X}^{n}\leq\,{\rm
exp}\,\Big{(}{C_{10}(\frac{c_{\omega}^{n}}{V_{\omega}})^{\frac{(n+a)}{na}}}+2\alpha\bar{s}^{\frac{n+a}{n}}\Big{)}.$
(3.26)
Again observing that $-\varphi\leq\bar{s}$ on $X\backslash\Omega_{\bar{s}}$,
we conclude from (3.26) that
$\int_{X}e^{\alpha(-\varphi)^{\frac{n+a}{n}}}\omega_{X}^{n}\leq
V_{\omega_{X}}e^{\alpha\bar{s}^{\frac{n+a}{n}}}+\,{\rm
exp}\,\Big{(}{C_{10}(\frac{c_{\omega}^{n}}{V_{\omega}})^{\frac{(n+a)}{na}}}+2\alpha\bar{s}^{\frac{n+a}{n}}\Big{)}=:C_{T}.$
(3.27)
Since $\bar{s}$ is explicitly given in (3.18), the constant $C_{T}$ has an
explicit dependence on $c^{n}_{\omega}/V_{\omega}$. Finally note that
$\frac{n+a}{n}=1+\frac{p}{n-p}=\frac{n}{n-p}=q$. This completes the proof of
the inequality (2.4). Q.E.D.
## 4 Proof of Theorem 3
Again, we drop the subscript $\omega$ from $F_{\omega}$ for notational simplicity. Let $(\varphi,F)$ solve the coupled system (2.9) stated in §2, and fix the number $p\in(0,n]$.
Let $\delta=\frac{1}{10K_{2}}$. We solve the auxiliary complex Monge-Ampère equation
$(\omega+i\partial\bar{\partial}\psi_{k})^{n}=\frac{\tau_{k}(-\varphi+\delta
F)^{p}}{A_{k}}c_{\omega}^{n}e^{nF}\omega_{X}^{n},\quad\sup_{X}\psi_{k}=0,$
(4.1)
where
$A_{k}=\frac{c_{\omega}^{n}}{V_{\omega}}\int_{X}\tau_{k}(-\varphi+\delta
F)^{p}e^{nF}\omega_{X}^{n}\to\frac{c_{\omega}^{n}}{V_{\omega}}\int_{\Omega}(-\varphi+\delta
F)^{p}e^{nF}\omega_{X}^{n}=:A_{\infty},$ (4.2)
as $k\to\infty$, where $\Omega=\\{-\varphi+\delta F>0\\}$. Note that the
constant $q>1$ in Theorem 1, so by Hölder's inequality and (2.5) in Theorem 1,
we have $\int_{X}(-\varphi)^{p}e^{nF}\omega_{X}^{n}\leq C$ for some constant
$C>0$ depending additionally on $K_{1}$ and $c_{\omega}^{n}/V_{\omega}$.
Moreover, by the assumption (2.10),
$\int_{\Omega}|F|^{p}e^{nF}\omega_{X}^{n}\leq K_{1}$, so we have
$A_{\infty}\leq C$ for some constant depending on $K_{1}$ and
$c_{\omega}^{n}/V_{\omega}$. Thus $A_{k}\leq C$ for $k$ sufficiently
large.
We consider the function
$\Psi=-\varepsilon(-\psi_{k}+\Lambda)^{\frac{n}{n+p}}-\varphi+\delta F$
with
$\varepsilon=\Big{(}\frac{(n+p)(1+\delta
c_{\theta})}{n^{2}}\Big{)}^{\frac{n}{n+p}}A_{k}^{\frac{1}{n+p}},$
and $\Lambda$ is chosen so that $\Lambda^{p/(n+p)}=\frac{2n}{n+p}\varepsilon$,
i.e.
$\Lambda=(\frac{2n}{n+p})^{\frac{n+p}{p}}\Big{(}\frac{(n+p)(1+\delta
c_{\theta})}{n^{2}}\Big{)}^{{n/p}}A_{k}^{1/p}.$
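For the reader's convenience, one can check directly that this choice satisfies the stated relation:
$\Lambda^{\frac{p}{n+p}}=\frac{2n}{n+p}\Big{(}\frac{(n+p)(1+\delta c_{\theta})}{n^{2}}\Big{)}^{\frac{n}{n+p}}A_{k}^{\frac{1}{n+p}}=\frac{2n}{n+p}\,\varepsilon.$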
We claim that $\Psi\leq 0$. By the definition of $\Omega$, it suffices to consider
the case when $\max_{X}\Psi=\Psi(x_{0})$ for some point $x_{0}\in\Omega$. We
calculate at $x_{0}$ as in the previous section.
$\displaystyle 0$ $\displaystyle\geq$ $\displaystyle
G^{i\bar{j}}\Psi_{i\bar{j}}$ $\displaystyle\geq$
$\displaystyle\frac{\varepsilon
n}{n+p}(-\psi_{k}+\Lambda)^{-\frac{p}{n+p}}G^{i\bar{j}}(\omega_{\psi_{k}})_{i\bar{j}}-\frac{\varepsilon
n}{n+p}(-\psi_{k}+\Lambda)^{-\frac{p}{n+p}}G^{i\bar{j}}\omega_{i\bar{j}}$
$\displaystyle-G^{i\bar{j}}(\omega_{\varphi})_{i\bar{j}}+G^{i\bar{j}}\omega_{i\bar{j}}-\delta
c_{\theta}+\delta G^{i\bar{j}}\theta_{i\bar{j}}$ $\displaystyle\geq$
$\displaystyle\frac{\varepsilon
n^{2}}{n+p}(-\psi_{k}+\Lambda)^{-\frac{p}{n+p}}\Big{(}\frac{(-\varphi+\delta
F)^{p}}{A_{k}}\Big{)}^{1/n}-1-\delta c_{\theta}+(1-\frac{\varepsilon
n\Lambda^{-\frac{p}{n+p}}}{n+p}-\frac{1}{10})G^{i\bar{j}}\omega_{i\bar{j}}$
$\displaystyle\geq$ $\displaystyle\frac{\varepsilon
n^{2}}{n+p}(-\psi_{k}+\Lambda)^{-\frac{p}{n+p}}\Big{(}\frac{(-\varphi+\delta
F)^{p}}{A_{k}}\Big{)}^{1/n}-1-\delta c_{\theta}.$
By the choice of $\varepsilon$, it follows that at $x_{0}$,
$(-\varphi+\delta F)^{p}\leq\Big{(}\frac{(n+p)(1+\delta
c_{\theta})}{n^{2}\varepsilon}\Big{)}^{n}A_{k}(-\psi_{k}+\Lambda)^{\frac{np}{n+p}}=\varepsilon^{p}(-\psi_{k}+\Lambda)^{\frac{np}{n+p}},$
that is $\Psi(x_{0})\leq 0$. This proves the claim that $\Psi\leq 0$ on $X$.
Since $A_{k}\leq C$ for $k$ large enough, we derive from $\Psi\leq 0$ that in
$X$
$\delta F\leq-\varphi+\delta F\leq
C(-\psi_{k}+1)^{\frac{n}{n+p}}\leq-\epsilon\psi_{k}+C_{\epsilon},$
where the last inequality follows from the elementary inequality
$(x+1)^{n/(n+p)}\leq\epsilon x+C_{\epsilon}$ for any $\epsilon>0$. In
particular, this shows that for any $r\geq 1$
$\int_{X}e^{r\delta F}\omega_{X}^{n}\leq
C\int_{X}e^{-r\epsilon\psi_{k}}\omega_{X}^{n}\leq C_{r},$ (4.3)
where we choose $\epsilon>0$ small so that $\epsilon r$ is less than the
$\alpha$-invariant of $(X,\kappa\omega_{X})$. In particular, this implies that $e^{nF}$ is
bounded in $L^{p^{\prime}}(X,\omega_{X}^{n})$ for any $p^{\prime}>1$. An
application of [12] then implies that the $L^{\infty}$ norm of $\varphi$ is
bounded by a constant depending only on
$\|e^{nF}\|_{L^{p^{\prime}}(X,\omega_{X}^{n})}$ for some $p^{\prime}>1$
and $c_{\omega}^{n}/V_{\omega}$ (c.f. Theorem 2 in §2), and subsequently by
(4.3), $\|\varphi\|_{L^{\infty}}$ depends only on
$\kappa,K_{1},K_{2},c_{\theta},\gamma$ and $c_{\omega}^{n}/V_{\omega}$.
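For completeness, the elementary inequality $(x+1)^{n/(n+p)}\leq\epsilon x+C_{\epsilon}$ invoked above follows from Young's inequality: for $\theta=\frac{n}{n+p}\in(0,1)$ and any $\lambda>0$,
$(x+1)^{\theta}=\big{(}\lambda(x+1)\big{)}^{\theta}\lambda^{-\theta}\leq\theta\lambda(x+1)+(1-\theta)\lambda^{-\frac{\theta}{1-\theta}},$
so taking $\lambda=\epsilon/\theta$ gives $(x+1)^{\theta}\leq\epsilon x+C_{\epsilon}$ with $C_{\epsilon}=\epsilon+(1-\theta)(\theta/\epsilon)^{\theta/(1-\theta)}$.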
Moreover, under the assumption (2.10), we will show that $\sup_{X}F$ is bounded
above by a uniform constant. To see this, we begin with a mean-value type
inequality which was proved in [16] for complex Monge-Ampère equations; the
arguments there can be easily adapted to the current situation, but for the
convenience of the reader, we include a proof of this inequality in Lemma 3
below.
###### Lemma 3
Let $u\in C^{2}(X)$ satisfy the differential inequality
$\Box_{\omega_{\varphi}}u\geq-a,$ (4.4)
then there is a constant $C>0$ depending on
$n,\kappa,\gamma,p,K_{1},c_{\omega}^{n}/V_{\omega}$ and $a\geq 0$ such
that
$\sup\nolimits_{X}u\leq
C(1+\frac{c_{\omega}^{n}}{V_{\omega}}\int_{X}|u|e^{nF}\omega_{X}^{n}).$
Proof of Lemma 3. As in [16], we may assume that
$N:=\frac{c_{\omega}^{n}}{V_{\omega}}\int_{X}|u|e^{nF}\omega_{X}^{n}\leq 1$,
otherwise, we can consider the rescaled function $\tilde{u}=u/N$, which still
satisfies the differential inequality (4.4) with the same $a$. We also assume
$\\{u>0\\}\neq\emptyset$, otherwise this lemma is trivial.
Let $s>0$ be a positive number such that the super-level set
$U_{s}:=\\{u>s\\}$ is non-empty. We consider the auxiliary Monge-Ampère
equation
$(\omega+i\partial\bar{\partial}\psi_{s,k})^{n}=\frac{\tau_{k}(u-s)}{A_{s,k}}c_{\omega}^{n}e^{nF}\omega_{X}^{n},\quad\sup_{X}\psi_{s,k}=0,$
(4.5)
where as $k\to\infty$
$A_{s,k}=\frac{c_{\omega}^{n}}{V_{\omega}}\int_{X}\tau_{k}(u-s)e^{nF}\omega_{X}^{n}\to
A_{s}=\frac{c_{\omega}^{n}}{V_{\omega}}\int_{U_{s}}(u-s)e^{nF}\omega_{X}^{n}.$
The condition that $N\leq 1$ implies that $A_{s}\leq 1$, so if $k>1$ is
sufficiently large, we have $A_{s,k}\leq 2$.
The following argument is similar to that in Step 2 in the proof of Theorem 1.
We have already shown that $\sup_{X}|\varphi|$ is bounded, so we take
$\Lambda_{0}=\sup_{X}|\varphi|+1$. We consider the function
$\Phi_{u}:=-\varepsilon(-\psi_{s,k}+\varphi+\Lambda_{0})^{\frac{n}{n+1}}+u-s,$
and we claim that for $\varepsilon=\varepsilon(s,k,a)>0$ satisfying the
equation
$\varepsilon^{n+1}=A_{s,k}(a+\frac{n\varepsilon}{n+1})^{n},$ (4.6)
we have $\sup_{X}\Phi_{u}\leq 0$. First we observe that from equation (4.6)
and $A_{s,k}\leq 2$ it holds that $\varepsilon\leq CA_{s,k}^{1/(n+1)}$ for
some $C=C(n,a)>0$. If the maximum of $\Phi_{u}$ is achieved at some point
$x_{0}\in U_{s}$, then at this point by maximum principle
$\displaystyle 0$ $\displaystyle\geq$
$\displaystyle\Box_{\omega_{\varphi}}\Phi_{u}(x_{0})$ $\displaystyle\geq$
$\displaystyle\frac{n\varepsilon}{n+1}(-\psi_{s,k}+\varphi+\Lambda_{0})^{-\frac{1}{n+1}}\Big{(}G^{i\bar{j}}(\omega_{\psi_{s,k}})_{i\bar{j}}-G^{i\bar{j}}(\omega_{\varphi})_{i\bar{j}}\Big{)}-a$
$\displaystyle\geq$
$\displaystyle\frac{n\varepsilon}{n+1}(-\psi_{s,k}+\varphi+\Lambda_{0})^{-\frac{1}{n+1}}\Big{(}n(\frac{u-s}{A_{s,k}})^{1/n}-1\Big{)}-a$
$\displaystyle\geq$
$\displaystyle\frac{n^{2}\varepsilon}{n+1}(-\psi_{s,k}+\varphi+\Lambda_{0})^{-\frac{1}{n+1}}\Big{(}\frac{u-s}{A_{s,k}}\Big{)}^{1/n}-\frac{n\varepsilon}{n+1}-a.$
It is then elementary to see that $\Phi_{u}\leq 0$ by the choice of
$\varepsilon$ in (4.6). This immediately implies
$\int_{U_{s}}\,{\rm
exp}\,\Big{(}\alpha\frac{(u-s)^{\frac{n+1}{n}}}{A_{s,k}^{1/n}}\Big{)}\omega_{X}^{n}\leq
C(n,a)\int_{U_{s}}e^{-\alpha C(n,a)\psi_{s,k}}\omega_{X}^{n}\leq C,$
if we choose $\alpha>0$ small enough so that $\alpha C(n,a)$ is less than the
alpha invariant of $(X,\kappa\omega_{X})$. Letting $k\to\infty$ we obtain
$\int_{U_{s}}\,{\rm
exp}\,\Big{(}\alpha\frac{(u-s)^{\frac{n+1}{n}}}{A_{s}^{1/n}}\Big{)}\omega_{X}^{n}\leq
C.$ (4.7)
This estimate together with the Hölder-Young inequality yields that for any $r>n$
$\int_{U_{s}}(u-s)^{\frac{(n+1)r}{n}}e^{nF}\omega_{X}^{n}\leq CA_{s}^{r/n}.$
(4.8)
From now on we fix an $r>n$. Then we can apply the Hölder inequality to obtain
$\displaystyle A_{s}$ $\displaystyle=$
$\displaystyle\frac{c^{n}_{\omega}}{V_{\omega}}\int_{U_{s}}(u-s)e^{nF}\omega_{X}^{n}$
$\displaystyle\leq$
$\displaystyle\frac{c_{\omega}^{n}}{V_{\omega}}\Big{(}\int_{U_{s}}(u-s)^{\frac{(n+1)r}{n}}e^{nF}\omega_{X}^{n}\Big{)}^{\frac{n}{r(n+1)}}\Big{(}\int_{U_{s}}e^{nF}\omega_{X}^{n}\Big{)}^{1-\frac{n}{r(n+1)}}$
$\displaystyle\leq$
$\displaystyle\frac{c_{\omega}^{n}}{V_{\omega}}CA_{s}^{1/(n+1)}\phi_{u}(s)^{1-\frac{n}{r(n+1)}},$
where $\phi_{u}(s)=\int_{U_{s}}e^{nF}\omega_{X}^{n}$. Then we have
$A_{s}\leq
C(\frac{c_{\omega}^{n}}{V_{\omega}})^{1+\frac{1}{n}}\phi_{u}(s)^{1+\frac{r-n}{rn}}.$
This combined with the definition of $A_{s}$ implies that for any $t>0$, there
exists a constant $\bar{C}>0$ depending on $n,\gamma,\kappa,a,K_{1},K_{2}$
such that
$t\phi_{u}(s+t)\leq\bar{C}(\frac{c_{\omega}^{n}}{V_{\omega}})^{1/n}\phi_{u}(s)^{1+\frac{r-n}{rn}}.$
Let $s_{0}>0$ be a number such that
$\bar{C}(\frac{c_{\omega}^{n}}{V_{\omega}})^{1/n}\phi_{u}(s_{0})^{\frac{r-n}{rn}}\leq
1/2$. This $s_{0}$ can be chosen as
$(2\bar{C})^{\frac{rn}{r-n}}(\frac{c_{\omega}^{n}}{V_{\omega}})^{n/(r-n)}$,
since by the assumption $N\leq 1$
$\phi_{u}(s_{0})\leq\frac{N}{s_{0}}\frac{1}{c_{\omega}^{n}/V_{\omega}}\leq\frac{1}{s_{0}}\frac{1}{c_{\omega}^{n}/V_{\omega}}.$
Then a De Giorgi type iteration argument of Kolodziej [25] (see also [12] and
[9]) implies that $\phi_{u}(s)=0$ for $s\geq S_{\infty}$ for some uniform
constant
$S_{\infty}=s_{0}+\frac{1}{1-2^{-(r-n)/(rn)}},$
and this gives $u\leq S_{\infty}$ as desired. Q.E.D.
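For the reader's convenience, we sketch the iteration behind this bound: set $\delta_{0}=\frac{r-n}{rn}$ and $C_{0}=\bar{C}(\frac{c_{\omega}^{n}}{V_{\omega}})^{1/n}$, and define $s_{j+1}=s_{j}+t_{j}$ with $t_{j}=2C_{0}\phi_{u}(s_{j})^{\delta_{0}}$. Then
$\phi_{u}(s_{j+1})\leq\frac{C_{0}\phi_{u}(s_{j})^{\delta_{0}}}{t_{j}}\,\phi_{u}(s_{j})=\frac{1}{2}\phi_{u}(s_{j})\leq 2^{-j-1}\phi_{u}(s_{0}),\qquad t_{j}\leq t_{0}2^{-j\delta_{0}},$
and $t_{0}=2C_{0}\phi_{u}(s_{0})^{\delta_{0}}\leq 1$ by the choice of $s_{0}$, so $\sum_{j}t_{j}\leq(1-2^{-\delta_{0}})^{-1}$ and hence $\phi_{u}(s)=0$ for all $s\geq S_{\infty}$.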
We now apply Lemma 3 to $u:=F-K_{2}\varphi$, which satisfies
$\Box_{\omega_{\varphi}}u=-c_{\theta}+G^{i\bar{j}}\theta_{i\bar{j}}-K_{2}+K_{2}G^{i\bar{j}}\omega_{i\bar{j}}\geq-
c_{\theta}-K_{2}=:-a.$
Lemma 3 yields the upper bound of $u$ (hence that of $F$) in terms of its
$L^{1}$ integral, while the latter is bounded since $e^{nF}$ is bounded in
$L^{r}(\omega_{X}^{n})$ for any $r>1$ and $\varphi$ is bounded in $L^{\infty}$
by [12].
If in addition we assume (2.11), that is
$\theta\leq K_{3}\omega,$
then applying Lemma 3 to $u:=-F-K_{3}\varphi$, we also obtain a uniform lower
bound of $F$ depending on $K_{3}$. Q.E.D.
## References
* [2] S. Abja and G. Olive, “Local regularity for concave homogeneous complex degenerate elliptic equations comparable to the Monge-Ampère equation”, arXiv:2102.07553v1.
* [3] Z. Blocki, “On the uniform estimate in the Calabi-Yau theorem II”, Science China Math. 54 (2011) 1375-1377.
* [4] R. Berman, S. Boucksom, P. Eyssidieux, V. Guedj, and A. Zeriahi, “Kähler-Einstein metrics and the Kähler-Ricci flow on log Fano varieties”. J. Reine Angew. Math. 751 (2019), 27 - 89.
* [5] X.X. Chen and J.R. Cheng, “On the constant scalar curvature Kähler metrics I - a priori estimates”, J. Amer. Math. Soc. (2021) DOI: https://doi.org/10.1090/jams/967, arXiv: 1712.06697.
* [6] X.X. Chen and J.R. Cheng, “The $L^{\infty}$ estimates for parabolic complex Monge-Ampère and Hessian equations”, arXiv:2201.13339.
* [7] E. De Giorgi, “Sulla differenziabilità e l’analiticità delle estremali degli integrali multipli regolari”. Mem. Accad. Sci. Torino. Cl. Sci. Fis. Mat. Nat. (3) 3 1957 25 - 43.
* [8] J.P. Demailly and N. Pali, “Degenerate complex Monge-Ampère equations over compact Kähler manifolds”, Intern. J. Math. 21 (2010) no. 3, 357-405.
* [9] S. Dinew, “Lectures on pluripotential theory on compact Hermitian manifolds”. Complex non-Kähler geometry, 1 - 56, Lecture Notes in Math., 2246, Fond. CIME/CIME Found. Subser., Springer, Cham, 2019.
* [10] E. Di Nezza, V. Guedj, and C.H. Lu, “Finite energy vs finite entropy”, arXiv: 2006.07061.
* [11] P. Eyssidieux, V. Guedj, and A. Zeriahi, “Singular Kähler-Einstein metrics”, J. Amer. Math. Soc. 22 (2009) 607-639.
* [12] B. Guo, D.H. Phong, and F. Tong, “On $L^{\infty}$ estimates for complex Monge-Ampère equations”, arXiv:2106.02224
* [13] B. Guo, D.H. Phong, and F. Tong, “Stability estimates for the complex Monge-Ampère and Hessian equations”, arXiv:2106.03913
* [14] B. Guo, D.H. Phong, F. Tong, and C. Wang, “On $L^{\infty}$ estimates for Monge-Ampère and Hessian equations on nef classes”, arXiv:2111.14186
* [15] B. Guo, D.H. Phong, F. Tong, and C. Wang, “On the modulus of continuity of solutions to complex Monge-Ampère equations”, arXiv:2112.02354
* [16] B. Guo, D.H. Phong, and J. Sturm, “Green’s functions and complex Monge-Ampère equations”, arXiv:2202.04715.
* [17] B. Guo and D.H. Phong, “On $L^{\infty}$ estimates for fully nonlinear partial differential equations on Hermitian manifolds”, arXiv:2204.12549
* [18] B. Guo, D.H. Phong, J. Song, and J. Sturm, to appear.
* [19] B. Guo and J. Song, “Local noncollapsing for complex Monge-Ampère equations”, arXiv:2201.02930
* [20] F. R. Harvey and H. B. Lawson, “Dirichlet duality and the nonlinear Dirichlet problem”. Comm. Pure Appl. Math. 62 (2009), no. 3, 396 - 443.
* [21] F. R. Harvey and H. B. Lawson, “Dirichlet duality and the nonlinear Dirichlet problem on Riemannian manifolds”, J. Differential Geom. 88 (2011), no. 3, 395 - 482.
* [22] F.R. Harvey and H.B. Lawson, “Determinant majorization and the work of Guo-Phong-Tong and Abja-Olive”, arXiv: 2207.01729.
* [23] W.Y. He. “On the regularity of the complex Monge-Ampère equations”. Proc. Amer. Math. Soc. 140 (2012), no. 5, 1719-1727.
* [24] L. Hörmander, “An introduction to complex analysis in several variables”, Van Nostrand, Princeton, NJ, 1973
* [25] S. Kolodziej, “The complex Monge-Ampère equation”, Acta Math. 180 (1998) 69-117.
* [26] D.H. Phong and D.T. To, “Fully non-linear parabolic equations on Hermitian manifolds”, Ann. Scient. Ecole Normale Sup. 54 (2021), 793-829.
* [27] G. Szekelyhidi, “Fully non-linear elliptic equations on compact Hermitian manifolds”, J. Differential Geometry 109 (2018) no. 2, 337-378.
* [28] G. Tian, “On Kähler-Einstein metrics on certain Kähler manifolds with $C_{1}(M)>0$”, Invent. Math. 89 (1987), no. 2, 225–246
* [29] J.X. Wang, X.J. Wang, and B. Zhou, “A priori estimates for the complex Monge-Ampère equation”, arXiv:2003.06059.
* [30] J.X. Wang, X.J. Wang, and B. Zhou, “Moser-Trudinger inequality for the complex Monge-Ampère equation”, arXiv:2003.06056
* [31] S.T. Yau, “On the Ricci curvature of a compact Kähler manifold and the complex Monge-Ampère equation. I”, Comm. Pure Appl. Math. 31 (1978) 339-411.
Department of Mathematics & Computer Science, Rutgers University, Newark, NJ
07102 USA
<EMAIL_ADDRESS>
Department of Mathematics, Columbia University, New York, NY 10027 USA
<EMAIL_ADDRESS>
# Phase coherence of pairs of Cooper pairs as quasi-long-range order of half-
vortex pairs in a two-dimensional bilayer system
Feng-Feng Song1 and Guang-Ming Zhang1,2<EMAIL_ADDRESS>1State Key
Laboratory of Low-Dimensional Quantum Physics and Department of Physics,
Tsinghua University, Beijing 100084, China.
2Frontier Science Center for Quantum Information, Beijing 100084, China.
###### Abstract
It is known that the loss of phase coherence of Cooper pairs in two-
dimensional (2D) superconductivity corresponds to the unbinding of vortex-
antivortex pairs with the quasi-long-range order (quasi-LRO) in the order-
parameter phase field, described by the Berezinskii-Kosterlitz-Thouless (BKT)
transition of a 2D XY model. Here we show that the second-order Josephson
coupling can induce an exotic superconducting phase in a bilayer system. By
using tensor-network methods, the partition function of the 2D classical model
is expressed as a product of 1D quantum transfer operators, whose eigen-
equation can be solved rigorously by a matrix-product-state algorithm.
From the singularity shown by the entanglement entropy of the 1D quantum
analogue, various phase transitions can be accurately determined. Below the
BKT phase transition, an inter-layer Ising long-range order is established at
$T_{Ising}$, and the phase coherence of both intra-layers and inter-layers is
locked together. For two identical layers, the Ising transition coincides with
the BKT transition at a multi-critical point. For two inequivalent layers,
however, there emerges an intermediate quasi-LRO phase
($T_{Ising}<T<T_{BKT}$), where the vortex-antivortex bindings occur in the
layer with the larger intra-layer coupling, but only half-vortex pairs with
topological strings exist in the other layer, corresponding to the phase
coherence of pairs of Cooper pairs. So our study provides a promising way to
realize the charge-4e superconductivity in a bilayer system.
Introduction. -Superconductivity arises from electron pairing and its phase
coherence. In conventional Bardeen-Cooper-Schrieffer superconductors, the
electron pairing and condensation of Cooper pairs always happen
simultaneously, and the superconducting transition is determined by the
pairing temperature. In two dimensions (2D), however, the true transition can
be substantially below the pairing temperature and is controlled primarily by
thermal fluctuations in the phase field of the order parameterBeasley et al.
(1979); Emery and Kivelson (1995); Carlson et al. (1999); Li et al. (2007);
Ding et al. (2008); Kang et al. (2020); Faeth et al. (2021). In the Ginzburg-
Landau theory, when the magnitude fluctuation of the order parameter is
frozen, the phase field fluctuation can be characterized by the 2D XY spin
model, and the loss of phase coherence among the Cooper pairs corresponds to
the unbinding of vortex-antivortex pairs with the quasi-long-range order
(quasi-LRO), characterized by the Berezinskii-Kosterlitz-Thouless (BKT) phase
transitionBerezinsky (1971); Kosterlitz and Thouless (1973); Kosterlitz
(1974).
In recent years there has been increasing interest in bilayer structures
of coupled 2D superconducting systemsBojesen et al. (2013, 2014); Kobayashi et
al. (2019); Bighin et al. (2019); Zeng et al. (2021). When a direct Josephson
coupling is present, the relative phase of the order parameters is pinned to a
fixed value, so both phase locking and phase coherence of the Cooper pairs are
characterized by a single BKT transitionParga and Van Himbergen (1980).
However, when the direct Josephson coupling is suppressedLi et al. (2007);
Ding et al. (2008), the second-order Josephson coupling is dominant, and an
Ising-like transition for the phase locking occurs at $T_{Ising}$, which is
usually lower than the BKT transition temperature $T_{BKT}$. For the
inequivalent coupled layers, the existence of an intermediate phase
($T_{Ising}<T<T_{BKT}$) with partial order was argued: one layer is in a
disordered phase while the other has vortex-antivortex pairs with quasi-LROGranato
and Kosterlitz (1986). Due to the lack of sharp thermodynamic signatures for
the BKT transition, it could not be unambiguously determined whether the transition
for the identical coupled layers is a single transition or two transitions with an
intervening unlocked phaseGranato and Kosterlitz (1986). Actually, the nature
of the intermediate phase with partial order has not been fully explored, so
it is a great challenge to determine the global phase diagram and calculate
the properties of the intermediate phase accurately.
Recently, tensor network methods have become a powerful tool to characterize
correlated quantum many-body phases and their phase transitions in the
thermodynamic limitVerstraete et al. (2008); Orús (2014). Since the partition
function of a 2D statistical model can be represented as a tensor product of
1D quantum transfer operatorsHaegeman and Verstraete (2017), the corresponding
eigen-equation can be efficiently solved by the algorithm of variational
uniform matrix product statesZauner-Stauber et al. (2018a); Fishman et al.
(2018); Vanderstraeten et al. (2019a, b). In this Letter, we apply this method
to the bilayer system. According to the singularity displayed by the
entanglement entropy of the 1D quantum analogue, various phase transitions can
be precisely determinedLi et al. (2020); Song and Zhang (2021), and various
correlation functions of local observables are calculated rigorously.
Figure 1: (a) The finite-temperature phase diagram of the bilayer system.
Here we choose $K=0.5J_{1}$. In the low temperature phase, there emerges an
inter-layer Ising-like long-range order. The BKT and Ising transitions merge
together at the point $P$. (b) The schematic picture of the quasi-LRO phase-2,
while the quasi-LRO phase-1 is obtained by switching the upper and lower
layers. (c) The schematic picture of the low-temperature ordered phase.
The derived global phase diagram is displayed in Fig.1(a). As the temperature
decreases, the BKT transition first occurs before a local inter-layer Ising
long-range order is established. The Ising transition accompanies with the
vortex-antivortex bindings in both intra-layers and inter-layers, as shown in
Fig.1(c). For two identical layers, the Ising transition coincides with the
BKT transition at the multi-critical point $P$. However, for two inequivalent
layers, we find that the intermediate phase has a quasi-LRO: vortex-antivortex
bindings occur only in the layer with the larger intra-layer coupling while
half-vortex pairs emerge in the other layer, schematically shown in Fig.1(b).
Since the half-vortices are point singularities around which spin directions
rotate through an angle $\pi$ on circumnavigation, each pair of half-vortices
is connected by a topological stringLee and Grinstein (1985); Carpenter and
Chalker (1989); Shi et al. (2011); Serna et al. (2017). More importantly, as
the quasi-LRO of the phase fields can be viewed as the condensation of the
Cooper pairs of 2D superconductivity, the half-vortex pairs with a quasi-LRO
imply the condensate of pairs of the Cooper pairs in the absence of phase
coherence among the Cooper pairs, corresponding to the charge-4e
superconductivityDouçot and Vidal (2002); Babaev (2004); Berg et al. (2009);
Jiang et al. (2017); Fernandes and Fu (2021).
Model and tensor-network methods. -The Hamiltonian of a two-coupled XY spin
model on a square lattice is defined by
$\displaystyle H$ $\displaystyle=$ $\displaystyle-J_{1}\sum_{\langle
i,j\rangle}\cos(\theta_{i}-\theta_{j})-J_{2}\sum_{\langle
i,j\rangle}\cos(\varphi_{i}-\varphi_{j})$ (1)
$\displaystyle+K\sum_{i}\cos(2\theta_{i}-2\varphi_{i}),$
where $\theta_{i}$ and $\varphi_{i}\in[0,2\pi]$ are two $U(1)$ phase fields
describing the pairing order-parameters on the upper and lower layers,
respectively, $J_{1}$ and $J_{2}$ are their respective nearest-neighbour
intra-layer couplings, and $K$ denotes the second-order Josephson inter-layer
coupling. Due to the nature of the low-temperature phase, the inter-layer
coupling is always relevant for any finite value of $K$, and the phase fields
$\theta$ and $\varphi$ are no longer two independent $U(1)$ variables. At low
temperatures, the relative phase $\sigma_{i}\equiv\theta_{i}-\varphi_{i}$ is
reduced to a $\mathbb{Z}_{2}$ variable, which can be displayed explicitly in
the limit $K\rightarrow\infty$: $\varphi_{i}=\theta_{i}+\pi s_{i}/2$ with
$s_{i}=\pm 1$. The reduced coupled Ising-XY model has been intensively studied by
various numerical methodsChoi and Doniach (1985); Granato et al. (1991); Lee
et al. (1991); Li and Cieplak (1994); Nightingale et al. (1995).
Figure 2: (a) Tensor network representation of the partition function. (b)
The construction of the local tensor $O$ in the partition function. (c) Eigen-
equation for the fixed-point uMPS $|\Psi(A)\rangle$ of the 1D quantum transfer
operator $T$. (d) Two-point correlation function represented by contracting a
sequence of channel operators.
In the tensor network framework, the partition function is expressed as a
contraction of local tensors defined on the original square lattice, given by
$\displaystyle Z$ $\displaystyle=$
$\displaystyle\prod_{i}\iint\frac{\mathrm{d}\theta_{i}\mathrm{d}\varphi_{i}}{\left(2\pi\right)^{2}}\prod_{\langle
i,j\rangle}\mathrm{e}^{\beta J_{1}\cos(\theta_{i}-\theta_{j})}$ (2)
$\displaystyle\times\mathrm{e}^{\beta
J_{2}\cos(\varphi_{i}-\varphi_{j})}\mathrm{e}^{-\beta
K\cos(2\theta_{i}-2\varphi_{i})},$
where $\beta=1/T$ is the inverse temperature. To obtain its tensor network
representation, we apply a duality transformation that maps the phase
variables on each lattice site to the number indices on the nearest-neighbor
links. Such a map is achieved by the character decomposition
$\mathrm{e}^{x\cos\theta}=\sum_{n=-\infty}^{\infty}I_{n}(x)\mathrm{e}^{in\theta}$
for each Boltzmann factor, where $I_{n}(x)$ is the modified Bessel function of
the first kind. Then the partition function is represented as
$\displaystyle Z$ $\displaystyle=$
$\displaystyle\prod_{i}\iint\frac{\mathrm{d}\theta_{i}\mathrm{d}\varphi_{i}}{\left(2\pi\right)^{2}}\prod_{l\in\mathcal{L}}\sum_{n_{l},m_{l},k_{l}}I_{n_{l}}(\beta
J_{1})I_{m_{l}}(\beta J_{2})$ (3) $\displaystyle\times I_{k_{l}}(-\beta
K)\mathrm{e}^{in_{l}(\theta_{i}-\theta_{j})}\mathrm{e}^{im_{l}(\varphi_{i}-\varphi_{j})}\mathrm{e}^{i2k_{l}(\theta_{i}-\varphi_{i})},$
where $n_{l}$ ($m_{l}$) runs over every link on the upper (lower) layer, and
$k_{l}$ corresponds to every vertical link between $\theta_{i}$ and
$\varphi_{i}$. By integrating out all the phase degrees of freedom, the
partition function is transformed into a double tensor network as shown in
Fig. 2(a)
$Z=\mathrm{tTr}\prod_{i}O_{n_{1}m_{1},n_{2}m_{2}}^{n_{3}m_{3},n_{4}m_{4}}(i),$
(4)
where “tTr” denotes the tensor contraction over all auxiliary links. The
details are given in Supplemental Materials. As displayed in Fig. 2(b), each
local tensor $O$ is defined by
$\displaystyle O_{n_{1}m_{1},n_{2}m_{2}}^{n_{3}m_{3},n_{4}m_{4}}$
$\displaystyle=$ $\displaystyle\sum_{k}\left(\prod_{l=1}^{4}I_{n_{l}}(\beta
J_{1})I_{m_{l}}(\beta J_{2})\right)^{1/2}$ (5) $\displaystyle\times
I_{k}(\beta
K)\delta_{n_{1}+n_{2}+2k}^{n_{3}+n_{4}}\delta_{m_{1}+m_{2}}^{m_{3}+m_{4}+2k},$
where the inter-layer $k$ indices are summed over and the corresponding intra-
layer $m_{l}$ and $n_{l}$ indices are grouped together. The global $U(1)$
invariance of the bilayer model is encoded in each local tensor:
$O_{n_{1}m_{1},n_{2}m_{2}}^{n_{3}m_{3},n_{4}m_{4}}\neq 0$ only if
$n_{1}+m_{1}+n_{2}+m_{2}=n_{3}+m_{3}+n_{4}+m_{4}$. Since the expansion
coefficients in the Bessel functions $I_{n}(x)$ decrease exponentially with
increasing $n$, an accurate truncation can be performed on the virtual indices
of the local tensors.
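As an illustration of how such a truncated tensor could be assembled numerically, the following Python sketch builds the local tensor of Eq. (5) with all Bessel indices restricted to $\{-N,\dots,N\}$ (the function and variable names and the symmetric truncation are our own choices, not part of the original work):

import numpy as np
from scipy.special import iv  # modified Bessel function of the first kind, I_n(x)

def local_tensor(beta, J1, J2, K, N):
    # Local tensor O of Eq. (5); axis order (n1, m1, n2, m2, n3, m3, n4, m4),
    # with index value n (or m) stored at array position n + N.
    rng = range(-N, N + 1)
    d = 2 * N + 1
    O = np.zeros([d] * 8)
    for n1 in rng:
        for n2 in rng:
            for n3 in rng:
                for n4 in rng:
                    if (n3 + n4 - n1 - n2) % 2:
                        continue  # the first delta requires an integer k
                    k = (n3 + n4 - n1 - n2) // 2
                    wn = np.sqrt(iv(n1, beta * J1) * iv(n2, beta * J1)
                                 * iv(n3, beta * J1) * iv(n4, beta * J1))
                    for m1 in rng:
                        for m2 in rng:
                            for m3 in rng:
                                m4 = m1 + m2 - m3 - 2 * k  # fixed by the second delta
                                if abs(m4) > N:
                                    continue
                                wm = np.sqrt(iv(m1, beta * J2) * iv(m2, beta * J2)
                                             * iv(m3, beta * J2) * iv(m4, beta * J2))
                                O[n1 + N, m1 + N, n2 + N, m2 + N,
                                  n3 + N, m3 + N, n4 + N, m4 + N] = wn * wm * iv(k, beta * K)
    return O

The nested loops scale poorly with $N$, but they make the index constraints enforced by the two Kronecker deltas in Eq. (5) explicit.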
In the tensor-network approach, the row-to-row transfer matrix composed of an
infinite row of $O$ tensors is a 1D quantum transfer operator, whose
logarithmic form gives rise to a 1D quantum model with complex spin-spin
interactions. Under such a correspondence, the finite-temperature properties
of the 2D statistical problem are exactly mapped into a 1D quantum model at
zero temperature. In the thermodynamic limit, the value of the partition
function is determined by the dominant eigenvalues of the transfer operator,
whose eigen-equation sketched in Fig. 2(c) is
$T|\Psi(A)\rangle=\Lambda_{\max}|\Psi(A)\rangle,$ (6)
where $|\Psi(A)\rangle$ is the leading eigenvector represented by uniform
matrix product states (uMPS) consisting of local $A$ tensorsZauner-Stauber et
al. (2018b). This eigen-equation can be accurately solved by the algorithm of
variational uniform matrix product statesZauner-Stauber et al. (2018a);
Fishman et al. (2018); Vanderstraeten et al. (2019a, b), and the largest
eigenvector $|\Psi(A)\rangle$ corresponds to the fixed-point solution. The
precision of this approximation is controlled by the auxiliary bond dimension
$D$ of the local $A$ tensors.
From the fixed-point uMPS for the 1D quantum transfer operator, various
physical quantities can be estimated accurately. As far as the phase
transitions are concerned, the quantum entanglement entropy is the most
efficient measureVidal et al. (2003); Pollmann et al. (2009), which can be
directly determined via the Schmidt decomposition of $|\Psi(A)\rangle$:
$S_{E}=-\sum_{\alpha=1}^{D}s_{\alpha}^{2}\ln s_{\alpha}^{2}$, where
$s_{\alpha}$ are the singular values. The two-point correlation function
of the local observable $h_{i}$ defined by $G(r)=\langle h_{j}h_{j+r}\rangle$
can be evaluated by the trace of an infinite sequence of channel operators
containing two local impurity tensors $M_{j}$ and $M_{j+r}$, as shown in Fig.
2(d). The details can be found in Supplementary Materials.
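As a concrete illustration of the entropy formula above, it amounts to the following short routine (a sketch; the Schmidt coefficients would come from the uMPS fixed point, and the cutoff value is our own choice):

import numpy as np

def entanglement_entropy(schmidt_values):
    # S_E = -sum_a s_a^2 ln s_a^2, computed from the Schmidt coefficients
    s2 = np.asarray(schmidt_values) ** 2
    s2 = s2 / s2.sum()           # normalize the Schmidt spectrum
    s2 = s2[s2 > 1e-15]          # discard numerically zero weights
    return float(-np.sum(s2 * np.log(s2)))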
Phase Diagram. -Since the inter-layer coupling is always relevant, the
structure of the complete phase diagram is independent of its value, so we
simply choose a practical value $K/J_{1}=0.5$. Importantly, we have noticed
that the entanglement entropy $S_{E}$ of the fixed-point uMPS for the 1D
quantum transfer operator exhibits singularities, which provide an accurate
criterion for determining the transition points. To obtain the phase diagram, we
have to numerically calculate the entanglement entropy under a wide range of
intra-layer coupling ratios $J_{2}/J_{1}$. In Fig. 3(a), the entanglement
entropy along $J_{2}/J_{1}=1.5$ develops two sharp peaks at $T_{c1}\simeq
1.21J_{1}$ and $T_{c2}\simeq 1.44J_{1}$, respectively. When $J_{2}$ approaches
$J_{1}$, these two peaks merge together, leading to a single peak at
$T_{\ast}\simeq 1.095J_{1}$, as shown in Fig. 3(b). These peak positions are
nearly unchanged under the bond dimensions $D=90,100,110$. So the phase
boundaries can be determined with high precision and the complete phase
diagram is displayed in Fig. 1(a).
Figure 3: (a) and (b) The entanglement entropy as a function of temperature
for $J_{2}/J_{1}=1.5$ and $J_{1}=J_{2}$ with $K=0.5J_{1}$. (c) and (d) The
specific heat and the local Ising order parameter along $J_{2}/J_{1}=1.5$ and
$J_{1}=J_{2}$, respectively.
In order to gain insight into the essential physics of different phases, we
calculate the specific heat. Within the tensor-network framework, the internal
energy per site is calculated as
$u=-2J_{1}\langle\mathrm{e}^{i(\theta_{j}-\theta_{j+1})}\rangle-2J_{2}\langle\mathrm{e}^{i(\varphi_{j}-\varphi_{j+1})}\rangle+K\langle\mathrm{e}^{i(2\theta_{j}-2\varphi_{j})}\rangle,$
and the specific heat is obtained by $C_{V}=\partial u/\partial T$. As shown
in Fig. 3(c), along the line $J_{2}/J_{1}=1.5$, the specific heat exhibits a
logarithmic divergence at $T_{c1}$ but a small bump around $T_{c2}$. However,
for $J_{2}/J_{1}=1$, a single logarithmic singularity is observed at
$T_{\ast}$ as displayed in Fig. 3(d). The logarithmic specific heat at the
lower temperature reminds us of a 2D Ising phase transition with a
$\mathbb{Z}_{2}$ symmetry breaking, while the small bump at the higher
temperature indicates the nearby BKT transition.
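In practice, the derivative $C_{V}=\partial u/\partial T$ can be taken numerically from internal energies computed on a temperature grid, e.g. (a sketch; T_vals and u_vals are placeholders, to be filled with computed data, not values from this work):

import numpy as np

# T_vals: temperature grid; u_vals: internal energy per site at each temperature
# (placeholders -- in the paper these come from tensor-network expectation values)
T_vals = np.linspace(1.0, 1.8, 81)
u_vals = np.zeros_like(T_vals)       # fill with the computed u(T)
C_V = np.gradient(u_vals, T_vals)    # central finite differences for du/dT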
At low temperatures, since the relative phase field
$\sigma_{i}\equiv\theta_{i}-\varphi_{i}$ is reduced to a $\mathbb{Z}_{2}$
variable, a local inter-layer Ising order parameter can be defined by
$\tau=\langle\sin\sigma_{i}\rangle$. As shown in Fig. 3(c) and (d), $\tau$ is
finite below $T_{c1}$, indicating that phase locking occurs between the upper
and lower layers. When $J_{2}/J_{1}=1$, the Ising transition coincides with
the BKT transition exactly at the multi-critical point $P$, where the
interplay between the Ising and BKT degrees of freedom at the microscopic
level gives rise to a new universality class of critical properties with
emergent supersymmetryHuijse et al. (2015).
Correlation functions and spin stiffness. -To further explore the nature of
the intermediate temperature phase, we calculate the two-point correlation
functions of the XY spins and nematic spins, which represent the integer
vortices and half-integer vortices variables in the bilayer system,
respectively. The results are summarized in Table 1.
Table 1: Properties of correlation functions in the different phases of the phase diagram in Fig. 1.
| | disordered | quasi-LRO-1 | quasi-LRO-2 | ordered |
|---|---|---|---|---|
| $\langle\mathrm{e}^{i(\varphi_{i}-\varphi_{j})}\rangle$ | $\sim\mathrm{e}^{-r/\xi_{\varphi}}$ | $\sim\mathrm{e}^{-r/\xi_{\varphi}}$ | $\sim r^{-\eta_{\varphi}}$ | $\sim r^{-\eta_{\varphi}}$ |
| $\langle\mathrm{e}^{i2(\varphi_{i}-\varphi_{j})}\rangle$ | $\sim\mathrm{e}^{-r/\xi_{2\varphi}}$ | $\sim r^{-\eta_{2\varphi}}$ | $\sim r^{-\eta_{2\varphi}}$ | $\sim r^{-\eta_{2\varphi}}$ |
| $\langle\mathrm{e}^{i(\theta_{i}-\theta_{j})}\rangle$ | $\sim\mathrm{e}^{-r/\xi_{\theta}}$ | $\sim r^{-\eta_{\theta}}$ | $\sim\mathrm{e}^{-r/\xi_{\theta}}$ | $\sim r^{-\eta_{\theta}}$ |
| $\langle\mathrm{e}^{i2(\theta_{i}-\theta_{j})}\rangle$ | $\sim\mathrm{e}^{-r/\xi_{2\theta}}$ | $\sim r^{-\eta_{2\theta}}$ | $\sim r^{-\eta_{2\theta}}$ | $\sim r^{-\eta_{2\theta}}$ |
| $\langle\mathrm{e}^{i(\theta_{i}-\varphi_{j})}\rangle$ | $\sim\mathrm{e}^{-r/\xi_{\theta\varphi}}$ | $\sim\mathrm{e}^{-r/\xi_{\theta\varphi}}$ | $\sim\mathrm{e}^{-r/\xi_{\theta\varphi}}$ | $\sim r^{-\eta_{\theta\varphi}}$ |
| $\langle\mathrm{e}^{i(\sigma_{i}-\sigma_{j})}\rangle$ | $\sim\mathrm{e}^{-r/\xi_{\sigma}}$ | $\sim\mathrm{e}^{-r/\xi_{\sigma}}$ | $\sim\mathrm{e}^{-r/\xi_{\sigma}}$ | $\sim$ const. |
For $J_{2}/J_{1}>1$, the spin-spin correlation function of the lower layer
$G_{\varphi}(r)$ starts to decay algebraically at $T_{c2}$ as the temperature
decreases. When approaching $T_{c2}$ from above, the spin correlation length
$\xi_{\varphi}$ is well-fitted by an exponentially divergent form
$\xi(T)\propto\exp(\frac{b}{\sqrt{T-T_{C}}}),\quad T\rightarrow T_{C}^{+},$
(7)
where $b$ is a non-universal positive constant. This is the characteristic
feature of the BKT transition. Below $T_{c1}$, the spin-spin correlation
functions of both the intra-layer $G_{\theta}(r)$ and the inter-layer
$G_{\theta\varphi}(r)$ exhibit algebraic behavior, implying vortex-antivortex
bindings in both the intra-layers and inter-layers, i.e., a fully phase-coherent
state of the Cooper pairs in the bilayer system.
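As an illustration, the divergence (7) can be fitted to measured correlation lengths with a standard routine such as the following sketch (fitting in log space for numerical stability; the data arrays are placeholders for illustration, not values from this work):

import numpy as np
from scipy.optimize import curve_fit

def log_xi(T, b, Tc, c0):
    # log of the BKT ansatz xi(T) = c0 * exp(b / sqrt(T - Tc)), valid for T > Tc
    return np.log(c0) + b / np.sqrt(T - Tc)

# placeholder data: temperatures above the transition and correlation lengths xi(T)
T_data = np.array([1.50, 1.55, 1.60, 1.70, 1.80])
xi_data = np.array([2.0e3, 3.1e2, 9.0e1, 2.2e1, 9.0e0])
popt, _ = curve_fit(log_xi, T_data, np.log(xi_data), p0=(1.0, 1.44, 1.0),
                    bounds=([0.1, 1.0, 0.01], [10.0, 1.49, 100.0]))
b_fit, Tc_fit, c0_fit = popt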
When we focus on the quasi-LRO-2 phase, the spin-spin correlation function
$G_{\varphi}(r)$ in the lower layer decays algebraically, while in the upper
layer it is the correlation function of the nematic spins $G_{2\theta}(r)$
that exhibits an algebraic behavior, instead of the correlation function of
the XY spins $G_{\theta}(r)$
$\displaystyle G_{\theta}(r)=\langle
e^{i\left(\theta_{j}-\theta_{j+r}\right)}\rangle\sim\mathrm{e}^{-r/\xi_{\theta}},$
$\displaystyle G_{2\theta}(r)=\langle
e^{i\left(2\theta_{j}-2\theta_{j+r}\right)}\rangle\sim r^{-\eta_{2\theta}}.$
(8)
For a given value of $J_{2}/J_{1}=1.5$ and $T/J_{1}=1.3$, the comparison
between the spin-spin correlation function and nematic correlation function is
displayed in Fig. 4(a) and (b). Such behavior indicates that the integer
vortices in the upper layer are fractionalized into half-integer vortex pairs
due to the presence of the inter-layer squared cosine interaction. Since the
half-integer vortices are point-like topological defects about which the phase
angles of spins wind by $\pi$, each pair of half-vortices should be connected
by a topological string across which spins are antiparallel. Because the
integer vortex-antivortex pairs with quasi-LRO are regarded as the phase
condensation of the Cooper pairs in 2D, the half-integer vortex pairs with
quasi-LRO can be regarded as the condensation of pairs of the Cooper pairs in
the absence of the phase coherence among the Cooper pairsDouçot and Vidal
(2002); Babaev (2004). Such a phenomenon is just the characteristics of the
charge-4e superconductivityBerg et al. (2009); Jiang et al. (2017); Fernandes
and Fu (2021).
Figure 4: The properties of the quasi-LRO phase-2 when $J_{2}/J_{1}=1.5$,
$K/J_{1}=0.5$, and $T/J_{1}=1.3$. (a) The correlation function of the XY spins
shows an exponential decay. (b) The correlation function of the nematic spins
exhibits a power law decay.
To access the superfluid response of the bilayer system, we calculate the spin
stiffness or the helicity modulus defined by the second derivative of the free
energy density with respect to a twist $v$ along a given directionFisher et
al. (1973); Nelson and Kosterlitz (1977),
$\rho_{s}=\frac{\partial^{2}f}{\partial v^{2}}\big|_{v=0}$. The twist needs to be
taken in a way that respects the joint $U(1)$ symmetry of the coupled bilayer,
and the spin stiffness is expressed in terms of two-point functions within the
framework of tensor network methodsVanderstraeten et al. (2015, 2016). Since
the process is more technical, the details are given in the Supplementary
Materials. The jump of the spin stiffness should be altered from the BKT
prediction $\rho_{s}/T_{BKT}=2/\pi$ due to the emergence of half
vorticesHübscher and Wessel (2013). In Fig. 5, the numerical spin stiffness as
a function of temperature is shown for $J_{2}/J_{1}=1.0\sim 1.8$ with the
inter-layer coupling $K/J_{1}=0.5$. It can be seen that the spin stiffness
starts to dramatically increase from zero around the BKT transition
temperature $T_{c2}$. As the temperature decreases further, a cusp forms in
the spin stiffness, corresponding precisely to the Ising phase
transition at $T_{c1}$. Surprisingly, the cusp points for given values
of $J_{2}/J_{1}$ sit on a straight line, which is a key experimental feature
of the presence of the Ising phase transition within the superconducting
phase.
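Numerically, the definition above can also be checked by a central finite difference in the twist, distinct from the two-point-function evaluation used here (a sketch; free_energy is a hypothetical callable, not part of the original work):

def spin_stiffness(free_energy, dv=1e-3):
    # rho_s = d^2 f / dv^2 at v = 0, by a central second difference;
    # free_energy(v) is assumed to return the free-energy density at twist v
    return (free_energy(dv) - 2.0 * free_energy(0.0) + free_energy(-dv)) / dv**2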
Figure 5: The spin stiffness as a function of temperature for given values of
$J_{2}/J_{1}$. The inter-layer coupling is chosen as $K=0.5J_{1}$ and the bond
dimension of the local tensor is $D=110$. The red straight line indicates the
temperature of the Ising transition.
Conclusion. -We have used the tensor-network methods to study the bilayer
system of two-coupled 2D XY spin models. The global finite temperature phase
diagram has been accurately determined. It has been found that, as the
temperature decreases, the BKT transition always happens above the phase
locking of the bilayer system, which corresponds to an inter-layer Ising long-
range order. More importantly, for two inequivalent coupled bilayers, there
exists an intervening unlocked phase, where the half-integer vortex pairs form
in one layer with the smaller intra-layer coupling, coexisting with the
integer vortex-antivortex pairs in the other layer. When a weak direct
Josephson coupling is also present, we have further proved that the Ising
phase transition below the BKT transition survives and the main results of
this work are still valid, because two local minima always exist to lock the
phase fields of the upper and lower layers.
Recently a new family of superconductors ACa2Fe4As4F2 (A=K,Rb,Cs) has been
synthesizedWang et al. (2016), and these compounds can be viewed as an
intergrowth of AFe2As2 and CaFeAsF layers. The transport and magnetic
measurements on single crystals of CsCa2Fe4As4F2 showed a large resistivity
anisotropy that tends to increase with decreasing temperature, and the 2D
superconducting fluctuations have been observedWang et al. (2019). The
evolution of the in-plane penetration depth shows an inflection point around
$10$ K, indicating that a potentially “magnetic” phase appears but does not
compete with superconductivityKirschner et al. (2018). These features may be
related to the formation of the inter-layer Ising long-range order and the
manifestation of the phase coherence of pairs of Cooper pairs, revealed by a
cusp in the spin stiffness. Therefore, these compounds are good candidate
systems to explore the charge-4e superconductivity.
Acknowledgments. The authors are indebted to Qi Zhang for his stimulating
discussions. The research is supported by the National Key Research and
Development Program of MOST of China (2017YFA0302902).
## References
* Beasley et al. (1979) M. R. Beasley, J. E. Mooij, and T. P. Orlando, Phys. Rev. Lett. 42, 1165 (1979), URL https://link.aps.org/doi/10.1103/PhysRevLett.42.1165.
* Emery and Kivelson (1995) V. J. Emery and S. A. Kivelson, Nature 374, 434 (1995).
* Carlson et al. (1999) E. W. Carlson, S. A. Kivelson, V. J. Emery, and E. Manousakis, Phys. Rev. Lett. 83, 612 (1999), URL https://link.aps.org/doi/10.1103/PhysRevLett.83.612.
* Li et al. (2007) Q. Li, M. Hücker, G. D. Gu, A. M. Tsvelik, and J. M. Tranquada, Phys. Rev. Lett. 99, 067001 (2007), URL https://link.aps.org/doi/10.1103/PhysRevLett.99.067001.
* Ding et al. (2008) J. F. Ding, X. Q. Xiang, Y. Q. Zhang, H. Liu, and X. G. Li, Phys. Rev. B 77, 214524 (2008), URL https://link.aps.org/doi/10.1103/PhysRevB.77.214524.
* Kang et al. (2020) B. L. Kang, M. Z. Shi, S. J. Li, H. H. Wang, Q. Zhang, D. Zhao, J. Li, D. W. Song, L. X. Zheng, L. P. Nie, et al., Phys. Rev. Lett. 125, 097003 (2020), URL https://link.aps.org/doi/10.1103/PhysRevLett.125.097003.
* Faeth et al. (2021) B. D. Faeth, S.-L. Yang, J. K. Kawasaki, J. N. Nelson, P. Mishra, C. T. Parzyck, C. Li, D. G. Schlom, and K. M. Shen, Phys. Rev. X 11, 021054 (2021), URL https://link.aps.org/doi/10.1103/PhysRevX.11.021054.
* Berezinsky (1971) V. Berezinsky, Sov. Phys. JETP 32, 493 (1971).
* Kosterlitz and Thouless (1973) J. M. Kosterlitz and D. J. Thouless, Journal of Physics C: Solid State Physics 6, 1181 (1973), URL https://doi.org/10.1088.
* Kosterlitz (1974) J. M. Kosterlitz, Journal of Physics C: Solid State Physics 7, 1046 (1974), URL https://doi.org/10.1088.
* Bojesen et al. (2013) T. A. Bojesen, E. Babaev, and A. Sudbø, Phys. Rev. B 88, 220511 (2013), URL https://link.aps.org/doi/10.1103/PhysRevB.88.220511.
* Bojesen et al. (2014) T. A. Bojesen, E. Babaev, and A. Sudbø, Phys. Rev. B 89, 104509 (2014), URL https://link.aps.org/doi/10.1103/PhysRevB.89.104509.
* Kobayashi et al. (2019) M. Kobayashi, M. Eto, and M. Nitta, Phys. Rev. Lett. 123, 075303 (2019), URL https://link.aps.org/doi/10.1103/PhysRevLett.123.075303.
* Bighin et al. (2019) G. Bighin, N. Defenu, I. Nándori, L. Salasnich, and A. Trombettoni, Phys. Rev. Lett. 123, 100601 (2019), URL https://link.aps.org/doi/10.1103/PhysRevLett.123.100601.
* Zeng et al. (2021) M. Zeng, L.-H. Hu, H.-Y. Hu, Y.-Z. You, and C. Wu, arXiv e-prints arXiv:2102.06158 (2021), eprint 2102.06158.
* Parga and Van Himbergen (1980) N. Parga and J. Van Himbergen, Solid State Communications 35, 607 (1980), ISSN 0038-1098, URL https://www.sciencedirect.com/science/article/pii/003810988090592X.
* Granato and Kosterlitz (1986) E. Granato and J. M. Kosterlitz, Phys. Rev. B 33, 4767 (1986), URL https://link.aps.org/doi/10.1103/PhysRevB.33.4767.
* Verstraete et al. (2008) F. Verstraete, V. Murg, and J. Cirac, Advances in Physics 57, 143 (2008), eprint https://doi.org/10.1080/14789940801912366, URL https://doi.org/10.1080/14789940801912366.
* Orús (2014) R. Orús, Annals of Physics 349, 117 (2014), ISSN 0003-4916, URL http://www.sciencedirect.com/science/article/pii/S0003491614001596.
* Haegeman and Verstraete (2017) J. Haegeman and F. Verstraete, Annual Review of Condensed Matter Physics 8, 355 (2017), eprint https://doi.org/10.1146/annurev-conmatphys-031016-025507, URL https://doi.org/10.1146/annurev-conmatphys-031016-025507.
* Zauner-Stauber et al. (2018a) V. Zauner-Stauber, L. Vanderstraeten, M. T. Fishman, F. Verstraete, and J. Haegeman, Phys. Rev. B 97, 045145 (2018a), URL https://link.aps.org/doi/10.1103/PhysRevB.97.045145.
* Fishman et al. (2018) M. T. Fishman, L. Vanderstraeten, V. Zauner-Stauber, J. Haegeman, and F. Verstraete, Phys. Rev. B 98, 235148 (2018), URL https://link.aps.org/doi/10.1103/PhysRevB.98.235148.
* Vanderstraeten et al. (2019a) L. Vanderstraeten, J. Haegeman, and F. Verstraete, SciPost Phys. Lect. Notes p. 7 (2019a), URL https://scipost.org/10.21468/SciPostPhysLectNotes.7.
* Vanderstraeten et al. (2019b) L. Vanderstraeten, B. Vanhecke, A. M. Läuchli, and F. Verstraete, Phys. Rev. E 100, 062136 (2019b), URL https://link.aps.org/doi/10.1103/PhysRevE.100.062136.
* Li et al. (2020) Z.-Q. Li, L.-P. Yang, Z. Y. Xie, H.-H. Tu, H.-J. Liao, and T. Xiang, Phys. Rev. E 101, 060105 (2020), URL https://link.aps.org/doi/10.1103/PhysRevE.101.060105.
* Song and Zhang (2021) F.-F. Song and G.-M. Zhang, Phys. Rev. B 103, 024518 (2021), URL https://link.aps.org/doi/10.1103/PhysRevB.103.024518.
* Lee and Grinstein (1985) D. H. Lee and G. Grinstein, Phys. Rev. Lett. 55, 541 (1985), URL https://link.aps.org/doi/10.1103/PhysRevLett.55.541.
* Carpenter and Chalker (1989) D. B. Carpenter and J. T. Chalker, Journal of Physics: Condensed Matter 1, 4907 (1989), URL https://doi.org/10.1088.
* Shi et al. (2011) Y. Shi, A. Lamacraft, and P. Fendley, Phys. Rev. Lett. 107, 240601 (2011), URL https://link.aps.org/doi/10.1103/PhysRevLett.107.240601.
* Serna et al. (2017) P. Serna, J. T. Chalker, and P. Fendley, Journal of Physics A: Mathematical and Theoretical 50, 424003 (2017), URL https://doi.org/10.1088.
* Douçot and Vidal (2002) B. Douçot and J. Vidal, Phys. Rev. Lett. 88, 227005 (2002), URL https://link.aps.org/doi/10.1103/PhysRevLett.88.227005.
* Babaev (2004) E. Babaev, Nuclear Physics B 686, 397 (2004), eprint cond-mat/0201547, URL https://www.sciencedirect.com/science/article/pii/S0550321304001294.
* Berg et al. (2009) E. Berg, E. Fradkin, and S. A. Kivelson, Nature Physics 5, 830 (2009).
* Jiang et al. (2017) Y.-F. Jiang, Z.-X. Li, S. A. Kivelson, and H. Yao, Phys. Rev. B 95, 241103 (2017), URL https://link.aps.org/doi/10.1103/PhysRevB.95.241103.
* Fernandes and Fu (2021) R. M. Fernandes and L. Fu, arXiv e-prints arXiv:2101.07943 (2021), eprint 2101.07943.
* Choi and Doniach (1985) M. Y. Choi and S. Doniach, Phys. Rev. B 31, 4516 (1985), URL https://link.aps.org/doi/10.1103/PhysRevB.31.4516.
* Granato et al. (1991) E. Granato, J. M. Kosterlitz, J. Lee, and M. P. Nightingale, Phys. Rev. Lett. 66, 1090 (1991), URL https://link.aps.org/doi/10.1103/PhysRevLett.66.1090.
* Lee et al. (1991) J. Lee, E. Granato, and J. M. Kosterlitz, Phys. Rev. B 44, 4819 (1991), URL https://link.aps.org/doi/10.1103/PhysRevB.44.4819.
* Li and Cieplak (1994) M. S. Li and M. Cieplak, Phys. Rev. B 50, 955 (1994), URL https://link.aps.org/doi/10.1103/PhysRevB.50.955.
* Nightingale et al. (1995) M. P. Nightingale, E. Granato, and J. M. Kosterlitz, Phys. Rev. B 52, 7402 (1995), URL https://link.aps.org/doi/10.1103/PhysRevB.52.7402.
* Zauner-Stauber et al. (2018b) V. Zauner-Stauber, L. Vanderstraeten, M. T. Fishman, F. Verstraete, and J. Haegeman, Phys. Rev. B 97, 045145 (2018b), URL https://link.aps.org/doi/10.1103/PhysRevB.97.045145.
* Vidal et al. (2003) G. Vidal, J. I. Latorre, E. Rico, and A. Kitaev, Phys. Rev. Lett. 90, 227902 (2003), URL https://link.aps.org/doi/10.1103/PhysRevLett.90.227902.
* Pollmann et al. (2009) F. Pollmann, S. Mukerjee, A. M. Turner, and J. E. Moore, Phys. Rev. Lett. 102, 255701 (2009), URL https://link.aps.org/doi/10.1103/PhysRevLett.102.255701.
* Huijse et al. (2015) L. Huijse, B. Bauer, and E. Berg, Phys. Rev. Lett. 114, 090404 (2015), URL https://link.aps.org/doi/10.1103/PhysRevLett.114.090404.
* Fisher et al. (1973) M. E. Fisher, M. N. Barber, and D. Jasnow, Phys. Rev. A 8, 1111 (1973), URL https://link.aps.org/doi/10.1103/PhysRevA.8.1111.
* Nelson and Kosterlitz (1977) D. R. Nelson and J. M. Kosterlitz, Phys. Rev. Lett. 39, 1201 (1977), URL https://link.aps.org/doi/10.1103/PhysRevLett.39.1201.
* Vanderstraeten et al. (2015) L. Vanderstraeten, M. Mariën, F. Verstraete, and J. Haegeman, Phys. Rev. B 92, 201111 (2015), URL https://link.aps.org/doi/10.1103/PhysRevB.92.201111.
* Vanderstraeten et al. (2016) L. Vanderstraeten, J. Haegeman, P. Corboz, and F. Verstraete, Phys. Rev. B 94, 155123 (2016), URL https://link.aps.org/doi/10.1103/PhysRevB.94.155123.
* Hübscher and Wessel (2013) D. M. Hübscher and S. Wessel, Phys. Rev. E 87, 062112 (2013), URL https://link.aps.org/doi/10.1103/PhysRevE.87.062112.
* Wang et al. (2016) Z.-C. Wang, C.-Y. He, S.-Q. Wu, Z.-T. Tang, Y. Liu, A. Ablimit, C.-M. Feng, and G.-H. Cao, Journal of the American Chemical Society 138, 7856 (2016), ISSN 1520-5126, URL http://dx.doi.org/10.1021/jacs.6b04538.
* Wang et al. (2019) Z.-C. Wang, Y. Liu, S.-Q. Wu, Y.-T. Shao, Z. Ren, and G.-H. Cao, Phys. Rev. B 99, 144501 (2019), URL https://link.aps.org/doi/10.1103/PhysRevB.99.144501.
* Kirschner et al. (2018) F. K. K. Kirschner, D. T. Adroja, Z.-C. Wang, F. Lang, M. Smidman, P. J. Baker, G.-H. Cao, and S. J. Blundell, Phys. Rev. B 97, 060506 (2018), URL https://link.aps.org/doi/10.1103/PhysRevB.97.060506.
# Materials Fingerprinting Classification
Adam Spannaus1 Oak Ridge National Laboratory, Oak Ridge, TN 37830 , Kody J.
H. Law2 School of Mathematics, University of Manchester, Manchester, UK ,
Piotr Luszczek3 Innovative Computing Laboratory, University of Tennessee,
Knoxville, TN 37996 , Farzana Nasrin4 Department of Mathematics, University
of Hawaii at Manoa, Honolulu, HI 96822 , Cassie Putman Micucci5 Eastman
Chemical Company, Kingsport, TN 37662 , Peter K. Liaw6 Department of
Materials Science and Engineering, University of Tennessee, Knoxville, TN
37996 , Louis J. Santodonato7 Advanced Research Systems, Inc., Macungie, PA
18062 , David J. Keffer6 Department of Materials Science and Engineering,
University of Tennessee, Knoxville, TN 37996<EMAIL_ADDRESS>and Vasileios
Maroulas8,∗ Department of Mathematics, University of Tennessee, Knoxville, TN
37996<EMAIL_ADDRESS>
###### Abstract.
Significant progress in many classes of materials could be made with the
availability of experimentally-derived large datasets composed of atomic
identities and three-dimensional coordinates. Methods for visualizing the
local atomic structure, such as atom probe tomography (APT), which routinely
generate datasets comprising millions of atoms, are an important step in
realizing this goal. However, state-of-the-art APT instruments generate noisy
and sparse datasets that provide information about elemental type, but obscure
atomic structures, thus limiting their subsequent value for materials
discovery. The application of a materials fingerprinting process, a machine
learning algorithm coupled with topological data analysis, provides an avenue
by which heretofore unprecedented structural information can be extracted
from an APT dataset. As a proof of concept, the material fingerprint is
applied to high-entropy alloy APT datasets containing body-centered cubic
(BCC) and face-centered cubic (FCC) crystal structures. A local atomic
configuration centered on an arbitrary atom is assigned a topological
descriptor, with which it can be characterized as a BCC or FCC lattice with
near perfect accuracy, despite the inherent noise in the dataset. This
successful identification of a fingerprint is a crucial first step in the
development of algorithms which can extract more nuanced information, such as
chemical ordering, from existing datasets of complex materials.
###### Key words and phrases:
Atom Probe Tomography, High Entropy Alloy, Machine Learning, Topological Data
Analysis, Materials Discovery
This manuscript has been authored by UT-Battelle, LLC under Contract No. DE-
AC05-00OR22725 with the U.S. Department of Energy. The United States
Government retains and the publisher, by accepting the article for
publication, acknowledges that the United States Government retains a non-
exclusive, paid-up, irrevocable,world-wide license to publish or reproduce the
published form of this manuscript, or allow others to do so, for United States
Government purposes. The Department of Energy will provide public access to
these results of federally sponsored research in accordance with the DOE
Public Access Plan (http://energy.gov/downloads/doe-public-access-plan).
## 1\. Introduction
Recent advancements in computing and contemporary machine-learning
technologies have yielded new paradigms in computational materials science
that are accelerating the pace of materials research and discovery [2, 4, 19,
46, 47]. For example, researchers have used a neural network to predict
materials properties, clustering them into groups consistent with those found
on the periodic table [46] and data-driven materials design is an area now
available to researchers due to advances in machine-learning algorithms and
computational materials science databases [19, 1, 23]. These developments in
computational materials science have led researchers to begin exploring
structure-property relationships for disordered materials, such as entropy-
stabilized oxides and high-entropy alloys (HEAs) [34, 45]. Considering the
number of atomic configurations in a disordered crystal structure, such as
those found in HEAs [44], the number of possible atomic combinations of even a
single unit cell, the smallest collection and ordering of atoms from which an
entire material can be built, quickly becomes computationally intractable for
existing algorithms [2]. In the present work, we propose an automated machine
learning methodology for determining the lattice structure of a noisy and
sparse materials dataset, e.g., the type retrieved from atom probe tomography
(APT) experiments, for materials with disordered lattice structures, such as
HEAs.
One of the fundamental properties of a crystalline material is the structure
of its unit cell. Indeed, knowledge of the chemical ordering and geometric
arrangement of the atoms of any material is essential for developing
predictive structure-property relationships. As materials become more complex
and the ordering of atoms amongst lattice sites becomes increasingly
disordered, such as is the case with HEAs [43], these structure-property
relationships have yet to be developed. Indeed, the high-configurational
entropy of HEAs yields a distribution of lattice parameters and cell
compositions, as opposed to a single unit cell and lattice constant found in
more traditional materials.
For many classes of materials, the lattice structure is either well-known,
e.g., sodium chloride (salt) is face-centered cubic, or it can be discovered
via X-ray diffraction (XRD) or neutron scattering techniques [35]. XRD is a
routine technique for the determination of crystal structures of metals,
ceramics, and other crystalline materials. These techniques do not yield
atomic level elemental distinctions or resolve local lattice distortions on a
scale of less than 10Å [35], which are crucial to researchers working with
highly-disordered materials, such as HEAs. Moreover, XRD cannot provide the
correlation between atom identity and position in a material. This chemical
ordering of atoms is essential to developing predictive relationships between
the composition of an HEA and its properties.
High entropy alloys are a relatively new class of metallic alloys, first
synthesized in the mid-2000s by [43]. As defined by [42], HEAs are composed
of at least five atomic elements, each with an atomic concentration between 5%
and 35%. These novel alloys have remarkable properties, such as: corrosion
resistance [45, 36], increased strength at extreme temperatures, ductility
[11, 26, 27], increased levels of elasticity [41], strong fatigue and fracture
resistance [11, 15, 39], and enhanced electrical conductivity [12, 22]. HEAs
are amenable to APT analysis, as the process is able to recover elemental
type in addition to approximating the lattice sites in a material where the
atoms sit.
An experimental process that unambiguously determines the position and identity
of each atom, as well as the structure of a material, does not currently exist [2, 32].
Indeed, quantification of different lattice parameters and unit-cell
compositions has not previously been reported due to data-quality issues
inherent to APT [21, 31]. While these experiments are able to discern
elemental types at a high resolution, the process has two drawbacks, _(i)
sparsity_ : empirically, approximately 65% of the atoms from a sample are not
registered by the detector [35]; and _(ii) noise_ : the atoms that are
observed have their spatial coordinates corrupted by experimental noise [31].
As noted by [31], the spatial resolution of the APT process is up to 3Å (0.3
nm) in the $xy$-horizontal plane, which is approximately the length of a unit
cell. This experimental noise has a two-fold impact on the data retrieved by a
typical experiment. First, the noise prevents materials science researchers
from extracting elemental atomic distributions, which are essential for
developing the necessary interaction potentials for molecular dynamics
simulations. Secondly, the experimental noise is significant enough to make
atoms that are first neighbors in an atomic neighborhood, i.e., those atoms
which occupy adjacent lattice sites, appear as second or third neighbors and
vice versa [31]. Furthermore, the experimental noise is only one source of
distortion to the lattice structure. HEAs exhibit local lattice deformations
due to the random distribution of atoms throughout the material and atoms of
differing size sitting at adjacent lattice points [44].
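To make the two corruption mechanisms concrete, the following sketch (our own illustration; the lattice constant, noise scale, and detection fraction are adjustable parameters, with the detection fraction reflecting the roughly 65% loss quoted above) generates a noisy, sparse FCC point cloud:

import numpy as np

def synthetic_fcc(n_cells=8, a=3.6, noise=1.0, keep=0.35, seed=0):
    # Illustrative noisy, sparse FCC lattice: cubic cells of edge a (angstroms)
    # with face-center sites, Gaussian positional noise, and random detector loss.
    rng = np.random.default_rng(seed)
    basis = np.array([[0, 0, 0], [0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5]])
    cells = np.array([[i, j, k] for i in range(n_cells)
                                for j in range(n_cells)
                                for k in range(n_cells)])
    pts = (cells[:, None, :] + basis[None, :, :]).reshape(-1, 3) * a
    # positional noise; increase toward the up-to-3-angstrom in-plane
    # resolution quoted above to mimic harsher experimental conditions
    pts += rng.normal(scale=noise, size=pts.shape)
    mask = rng.random(len(pts)) < keep  # ~65% of atoms are not detected
    return pts[mask]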
This deformation of the local crystal structure makes any determination of the
lattice a challenging problem for any symmetry-based algorithm, such as [16,
18, 24]. The field of atom probe crystallography has emerged in recent years
[32, 10] and existing methodologies in this area seek to discover local
structures when the global structure is known _a priori_. In the case of HEAs,
the global lattice structure is unknown and must be discovered. Indeed,
drawing correct conclusions about the material’s crystal structure is
virtually impossible from the APT analysis using current techniques [31].
A recent method relying on a convolutional neural network [47] classified
synthetic crystal structures that are either noisy or sparse by creating a
diffraction image from a lattice structure and using this image as input data
for the neural network. The authors of [47] claim that their methodology could
be applied to data with both experimental noise and significant levels of
sparsity, as is typically retrieved by APT experiments, but do not showcase
any such instances. Briefly, diffraction images are diffraction patterns
generated by simulating the results of an X-ray diffraction experiment. In
particular, they create the interference pattern that is generated when a
series of waves encounter a crystal lattice and either pass through
unobstructed or encounter an atom and bend around the atom.
Figure 1. Examples of the lattice structures that we consider viewed with the
visualization software Ovito [38] which uses empirical atomic radii in its
visualizations. We consider three different crystals: (a) body-centered cubic
(BCC), (b) face-centered cubic (FCC), and (c) hexagonal close packed (HCP)
lattices showing their similarities and differences with complete, noiseless
data. The FCC and HCP structures have only a subtle difference in their
geometry. The HCP structure forms an identifying parallelogram (c), whereas
the FCC forms a square (b) when all atoms within a radius of the center atom
are collected. (d) Example of an FCC structure retrieved from an APT analysis
of the HEA Al0.3CoCrFeNi [5] demonstrating the sparsity and atomic
displacements due to the resolution of the APT process. The noise and sparsity
from the APT process obscure this difference between the FCC and HCP
structures.
Here we propose a machine-learning approach, a materials fingerprint, to
classify the crystal structure of a material by looking at local atomic
neighborhoods through the lens of topological data analysis (TDA). TDA is a
field that uses topological features within data for machine learning tasks
[28, 29, 33]. It has found other applications in materials science, such as
the characterization of amorphous solids [17], equilibrium phase transitions
[6], and similarity of pore-geometry in nanomaterials [25]. Our motivation is
to encode the geometric peculiarities of HEAs by considering atomic positions
within a neighborhood and looking at the neighborhood’s topology. Key
differences between atomic neighborhoods are encoded in the empty space, e.g.,
holes and voids, between atoms, as well as clusters of atoms in the
neighborhood. These identifying topological features of an atomic neighborhood
can be calculated through the concept of homology, which is the mathematical
study of ‘holes’ in different dimensions, and differentiates the shape and
structure of the neighborhoods. Extracting this homological information from
each atomic neighborhood, we can distinguish between the different lattice
structures that we consider; figure 1 shows idealized versions of these
crystal structures. A typical lattice retrieved from an APT experiment is in
figure 1(d).
Using these topologically-derived features, we are able to classify the
crystal structure of HEAs from the APT data with accuracy approaching 100%. To
test the robustness of our proposed method, we combine levels of sparsity and
noise on synthetic data and find our method accurately classifies the crystal
structure. Our novel methodology couples the power of topological data
analysis to extract the intrinsic topology of these crystal lattices with a
machine learning classification scheme to differentiate between lattice
structures and classify them with a high degree of precision.
The outline of this paper is as follows. In Section 2 we describe the APT
experimental process and the details related to the analysis of the HEAs that
we consider. Section 3 provides details of the classification model for
recognizing crystal structures. Numerical results are presented in section 4
and we conclude with discussion in section 5.
## 2\. Atom Probe Tomography
In this section we discuss the APT experimental process and the postprocessing
employed to create the data. Furthermore, we discuss the resulting data and
its characteristics.
### 2.1. APT Process
APT was conducted using a Local Electrode Atom Probe (LEAP) 4000 XHR
instrument at the Center for Nanophase Materials Sciences of the Oak Ridge
National Laboratory [5, 13]. The process systematically evaporates ions from a
specimen’s hemispherical surface using voltage or laser pulses. A position
sensitive detector collects the ions, and the timing between the pulse and
detection events gives the time-of-flight, which identifies each species based
on unique mass-to-charge ratios. A reconstruction algorithm is used to create
a tomographic dataset from the $x$, $y$ detector data and the sequence of
detection gives the $z$-dimension in the reconstruction. Sample specimens for
APT experiments are typically sharp, conical tips with a maximum diameter of
less than 100 nm and a length of several hundred nanometers. Thus, all APT
experiments investigate nanoscale structures; samples that contain
nanoparticles embedded in a matrix can be examined, as can layered
heterostructures.
### 2.2. APT Data
For our problem, the data consists of spatial coordinates of approximately
$10^{8}$ atoms with elemental type [31], constituting a highly-disordered
metallic alloy that is composed of BCC or FCC lattice structures. The sample
[35] was chosen because it has been previously well-characterized. This alloy
consists of three phases, a Cu-rich FCC phase, an Fe-Cr rich BCC phase, and a
remaining phase that incorporates all six elements, though the proportions of
Cu, Fe, and Cr are depleted due to accumulation in the other phases.
Importantly, all three phases are present in the APT sample. When viewing the
entire data set with atoms identified by color, some nanoscale information is
immediately evident. The eye perceives elemental segregation of the Cu-rich
and Fe-Cr rich phases into nanoscale domains. The orange copper-rich area is
especially evident, as seen in figure 2(a). However, one cannot infer any
meaningful structure at a finer scale when viewing the entire dataset from a
typical APT experiment and further analysis requires that individual atomic
neighborhoods be extracted from the larger sample. Viewing each neighborhood
individually, figure 2(b), we can see that they contain a wealth of
information about the shape of the material under investigation, despite the
noise and sparsity present in a typical APT experiment.
## 3\. Methods
In this section we give the mathematical background necessary for our method;
detailed introductions can be found in [7, 8].
### 3.1. Topological Data Analysis
To extract the salient topological information from the atomic neighborhoods,
we turn to topological data analysis, particularly persistent homology.
Persistent homology describes connectedness and void space present within an
object and allows one to infer global properties of space from local
information [20]. Instead of considering only clusters of atoms, homology also
incorporates information about the regions enclosed by the atoms. This
approach yields topological features of the data in different homological
dimensions. In the case of these atomic neighborhoods created by APT
experiments, $0-$dim homological features are connected components, $1-$dim
homological features are holes, and $2-$dim homological features are voids
(2-dim holes), i.e., the space enclosed by a sphere.
Figure 2. Flowchart of the materials fingerprinting methodology. (a) The APT
data is processed as outlined in section 2.1. (b) Individual atomic
neighborhoods are extracted from an APT dataset as described in section 2.2.
(c) We create a collection of persistence diagrams, each diagram associated
with an atomic neighborhood, as explained in section 3.1. (d) Similarity
metrics between these persistence diagrams are computed via the
$d_{p}^{c}$-distance as defined in equation 3.1. (e) We create a feature
matrix composed of the summary statistics of these distances, which is used as
input in algorithm 1 to classify the persistence diagrams. (f) Output from
algorithm 1 classifying the structures under investigation, section 3.
To study the persistent homology of atomic structures extracted from HEAs, such
as the atomic neighborhoods in figure 2(b), we create spheres of increasing
radii around each atom in a neighborhood, detect when homological features
emerge and disappear, and record these radii in a persistence diagram, see
figures 2(c) and 3(e). Taking the atoms’ spatial positions in the
$xyz$-coordinate system recovered by the APT experimental process, we begin by
considering a sphere of radius $\epsilon$ centered at each atom, see figure
3(a). The algorithm starts at radius $\epsilon=0$ and this is the reason why
all points start at 0 in the persistence diagram associated with clusters and
connected components (grey circles in figures 2(c) and 3(e)). Indeed, all
atoms within a structure are initially treated as different clusters.
Increasing the radii, the algorithm clusters atoms together by examining
whether their spheres intersect at a certain radius. If they do, these atoms
form a cluster, which signifies the ‘death’ of its members as separately
considered clusters. Meanwhile, as the spheres grow, holes and voids (2-dim
holes) are created, see figures 3(b) and 3(c). As the radii increase further,
these holes and voids are filled in, and each such feature is represented in a
persistence diagram by the radius at which it disappears, i.e., its death time.
Indeed, such topological features are recorded in a persistence diagram using
a different label (color). Eventually, at some radius, all spheres will
intersect, which means that all atoms belong to the same cluster and any hole
or void has been covered. This marks the end of the algorithm for creating a
persistence diagram. These homological features summarized in a persistence
diagram capture information about the shape of the neighborhood itself. This
type of multiscale analysis is key to bypassing the noise and sparsity present
in the data and to extracting meaningful details about the configuration of each
neighborhood. For example, the corresponding diagram for the atomic
neighborhood in figure 3(a) is shown in figure 3(e). The persistence diagram
encodes information about the structure of each neighborhood by providing
insight about the number of atoms, the size and distance among atoms, possible
configuration of the faces, and $3-$dimensional structure. The persistence
diagram then functions as a proxy for the APT data by reducing an atomic
neighborhood to its most pertinent qualities.
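As an illustrative sketch (not the authors' released code), the persistence diagrams of an atomic neighborhood can be computed with an off-the-shelf library such as ripser.py; the input file name below is hypothetical:

```python
import numpy as np
from ripser import ripser

# points: (n_atoms, 3) array of xyz coordinates for one atomic neighborhood.
points = np.loadtxt("neighborhood.xyz")  # hypothetical input file

# Vietoris-Rips filtration up to homological dimension 2.
dgms = ripser(points, maxdim=2)["dgms"]
H0, H1, H2 = dgms  # connected components, holes, voids

# Each H_k is an (n_features, 2) array of (birth, death) radii; the last
# H0 feature has death = np.inf, since all atoms eventually form one cluster.
```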
Figure 3. Atomic neighborhood from an APT experiment [35] with the alloy
Al1.3CoCrCuFeNi. The atomic type is illustrated by the color, and is
visualized with [38]. (a) Shows each atom in a neighborhood as a point cloud
in $\mathbb{R}^{3}$. We begin by drawing a radius centered at each atom. As
the radius of these spheres increases in (b), a $1-$dim hole forms in the
atomic structure. Increasing the radii further, in (c) the formation of a
$2-$dim hole, a void, is evident. Continuing to increase the radii, in (d) the
radii have increased such that all atoms form one cluster. The persistence
diagram for this structure is shown in (e). In the persistence diagram, the
birth and death axes denote the emergence or disappearance of topological
features as the radii of the spheres centered on each atom increase and start
to intersect.
As the extracted persistence diagrams generated by APT experiments summarize
the shape peculiarities of each atomic neighborhood, different types of
lattice structures yield persistence diagrams with various identifying
features [30]. Indeed, examining the homological features, we see the
essential structural differences between crystal lattices in different
dimensions. Consider figure 4, which displays the difference between
persistence diagrams for BCC and FCC structures. From the viewpoint of
topology, the inside of an FCC cell contains a void, whereas the BCC cell does
not, thus yielding an important contrast. In the case of noiseless and
complete data, the presence of a void separates the BCC and FCC cells when
juxtaposing their crystal structures, as we see in the insets of figure 4
(a,b). The persistence diagrams capture differences in (i) the number of
neighbors (8 for BCC and 12 for FCC), (ii) the spacing between neighbors,
i.e., density, and (iii) the arrangement of neighbors.
Figure 4. Sample persistence diagrams of a material from the APT analysis of
the alloys Al1.3CoCrCuFeNi and Al0.3CoCrFeNi for two of the lattice types
considered here: BCC (a) and FCC (b), respectively [35, 5]. Notice the
distinguishing $2-$dim feature, the blue square, in the diagram derived from
an FCC lattice. Additionally, the diagram generated from the BCC structure has
fewer $0-$dim features. (c) The $d_{p}^{c}$ metric computes the distance
between two persistence diagrams generated by atomic neighborhoods, both
containing 1-dim features, denoted by the red triangles. The $d_{p}^{c}$
metric measures the distance between the diagrams by first finding the best
matching between points, given by the lines between the triangles. Any
unmatched points, e.g., the remaining triangle, are then penalized by the
constant term $c$. The birth and death axes denote the emergence or
disappearance of topological features, as a function of distance between atoms
in a neighborhood.
### 3.2. Persistence Diagram Similarity Metric
Different crystal structures produce different size point clouds [30]. To
properly account for differences in the number of points when comparing two
persistence diagrams, we employ the $d_{p}^{c}$ distance, introduced in [28].
For a given configuration, this metric allows the persistence diagram to be
compared against reference persistence diagrams, e.g., those of BCC and FCC
structures. Suppose $D_{1}=\\{d^{1}_{1},\ldots,d^{1}_{n}\\}$ and
$D_{2}=\\{d^{2}_{1},\ldots,d^{2}_{m}\\}$ are two persistence diagrams
associated with two local atomic neighborhoods such that $n\leq m$. Let $c>0$
and $1\leq p<\infty$ be fixed parameters. Then the $d_{p}^{c}$ distance
between $D_{1}$ and $D_{2}$ is
(3.1)
$d_{p}^{c}(D_{1},D_{2})=\left(\frac{1}{m}\left(\min_{\pi\in\Pi_{m}}\sum_{i=1}^{n}\min(c,\|d^{1}_{i}-d^{2}_{\pi(i)}\|_{\infty})^{p}+c^{p}|m-n|\right)\right)^{\frac{1}{p}}$
where $\Pi_{m}$ is the set of permutations of $(1,\dots,m)$. If $n>m$, define
$d_{p}^{c}(D_{1},D_{2}):=d_{p}^{c}(D_{2},D_{1})$.
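A minimal sketch of equation (3.1), assuming NumPy and SciPy (the function name is ours; the published code may differ):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def d_p_c(D1, D2, c=1.0, p=2):
    """d_p^c distance of equation (3.1) between two persistence diagrams,
    given as (n, 2) and (m, 2) arrays of finite (birth, death) pairs
    (truncate or drop infinite-death features beforehand)."""
    if len(D1) > len(D2):  # enforce n <= m using the symmetric definition
        D1, D2 = D2, D1
    n, m = len(D1), len(D2)
    if m == 0:
        return 0.0
    # Pairwise sup-norm distances, capped at c and raised to the p-th power.
    cost = np.abs(D1[:, None, :] - D2[None, :, :]).max(axis=2)
    cost = np.minimum(cost, c) ** p
    # Optimal partial matching of the n points of D1 into D2 (Hungarian method).
    rows, cols = linear_sum_assignment(cost)
    matched = cost[rows, cols].sum()
    # Each of the |m - n| unmatched points of D2 incurs the penalty c^p.
    return ((matched + c ** p * (m - n)) / m) ** (1.0 / p)
```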
This distance matches points between the persistence diagrams being compared,
and those that are unmatched are penalized by a regularization term $c$.
Figure 4(c) shows an example of how the distance between two persistence
diagrams is computed. We first find the optimal matching, denoted by the red
lines between triangles. This matching between points corresponds to the
summation term in the distance: if the distance between a matched pair is
greater than $c$, we add $c$; otherwise, we add the distance between the
matched points. The unmatched 1-dim feature, denoted by the red
triangle, is penalized by the regularization term $c$ in the second part of
the definition. In developing the materials fingerprint, we compare
persistence diagrams with respect to 0, 1, and $2-$dim homological features,
i.e., connected components, holes, and voids, employing this distance. We then
compute summary statistics (mean, variance) from these distances to create
features for the classification algorithm.
### 3.3. Classification Model
Let $D_{i}$ denote the persistence diagram generated by the atom positions in an
atomic neighborhood retrieved by the APT experiment, as seen in figure 2. Note
that the number of atoms in a neighborhood is not constant, but varies between
atomic neighborhoods in a sample. For the multiclass classification problem,
we are interested in modeling the conditional probability $\pi(Y=\ell\mid X)$
for a given input $X$, which encapsulates features of persistence diagrams and
a class label $Y=\ell$. We write the classification model as a generalized
additive regression model [9, 14]. Choosing this type of model gives us the
flexibility to let our data determine the correct functional form, as opposed
to imposing a linear model as in traditional logistic regression. Accordingly,
an $L$-class model is written
$\displaystyle\log\left(\frac{\pi(Y=1\,|\,X)}{\pi(Y=L\,|\,X)}\right)$
$\displaystyle=\alpha_{1}+F_{1}(X),$
$\displaystyle\log\left(\frac{\pi(Y=2\,|\,X)}{\pi(Y=L\,|\,X)}\right)$
$\displaystyle=\alpha_{2}+F_{2}(X),$
$\displaystyle\mathrel{\makebox[0.0pt]{\vdots}}$
$\displaystyle\log\left(\frac{\pi(Y=L-1\,|\,X)}{\pi(Y=L\,|\,X)}\right)$
$\displaystyle=\alpha_{L-1}+F_{L-1}(X),$
where $F_{i}(X)=\sum_{j=1}^{P}\,\alpha_{j}f_{j}(X)$ is a linear combination of
smooth functions $f_{j}$. Here $\mathbf{X}\in\mathbb{R}^{N\times P}$ and
$N=\sum_{i=1}^{L}\,N_{i}$ is such that for $1\leq i\leq N$ an arbitrary row of
$\mathbf{X}$ is
(3.2)
$\mathbf{X}_{i}={}(\mathbb{E}_{i,\lambda_{1}}^{0},\mathbb{E}_{i,\lambda_{1}}^{1},\mathbb{E}_{i,\lambda_{1}}^{2},\operatorname{\mathrm{Var}}_{i,\lambda_{1}}^{0},\operatorname{\mathrm{Var}}_{i,\lambda_{1}}^{1},\operatorname{\mathrm{Var}}_{i,\lambda_{1}}^{2},\dots,\mathbb{E}_{i,\lambda_{L}}^{0},\mathbb{E}_{i,\lambda_{L}}^{1},\mathbb{E}_{i,\lambda_{L}}^{2},\operatorname{\mathrm{Var}}_{i,\lambda_{L}}^{0},\operatorname{\mathrm{Var}}_{i,\lambda_{L}}^{1},\operatorname{\mathrm{Var}}_{i,\lambda_{L}}^{2}),$
where
$\mathbb{E}_{i,\lambda_{\ell}}^{k}=\frac{1}{N_{\ell}}\sum_{j=1}^{N_{\ell}}d_{p}^{c}(D_{i}^{k},D_{j}^{k})$
and
$\operatorname{\mathrm{Var}}_{i,\lambda_{\ell}}^{k}=\frac{1}{N_{\ell}-1}\sum_{j=1}^{N_{\ell}}\,(d_{p}^{c}(D_{i}^{k},D_{j}^{k})-\mathbb{E}_{i,\lambda_{\ell}}^{k})^{2}$
are the mean and variance respectively of the $d_{p}^{c}$ distance, equation
3.1, between any diagram $D_{i}^{k}$ and the collection of all persistence
diagrams in the class $\lambda_{\ell}\in\Lambda,1\leq\ell\leq L$ and
homological dimension $k=0,1,2$. The pseudocode for our algorithm is presented
in algorithm 1 and is visually represented in figure 2.
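For completeness, the class probabilities implied by these $L-1$ baseline-category log-odds are recovered through the standard inversion (this step is implicit in the model above):

$\pi(Y=\ell\,|\,X)=\frac{\exp(\alpha_{\ell}+F_{\ell}(X))}{1+\sum_{j=1}^{L-1}\exp(\alpha_{j}+F_{j}(X))},\quad 1\leq\ell\leq L-1,\qquad\pi(Y=L\,|\,X)=\frac{1}{1+\sum_{j=1}^{L-1}\exp(\alpha_{j}+F_{j}(X))}.$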
Algorithm 1 Materials Fingerprinting
1:Training Step
2:Read in labeled data (training set) with $L$ classes and compute persistence
diagrams in the training set $\mathcal{D}_{train}$, which has $N_{\ell}$
diagrams from the $\ell$th class, and set $N=\sum_{\ell=1}^{L}\,N_{\ell}$.
3:Read in response vector
$Y=(1\cdot\mathbf{1},\dots,\ell\cdot\mathbf{1},\dots,L\cdot\mathbf{1})^{T}$
where $\mathbf{1}$ is a vector of 1’s in $\mathbb{R}^{N_{\ell}}$.
4:for $i=1,\dots,N$ do
5: Compute feature matrix $\mathbf{X}$ according to equation (3.2)
$\mathbf{X}_{i}={}(\mathbb{E}_{i,\lambda_{1}}^{0},\mathbb{E}_{i,\lambda_{1}}^{1},\mathbb{E}_{i,\lambda_{1}}^{2},\operatorname{\mathrm{Var}}_{i,\lambda_{1}}^{0},\operatorname{\mathrm{Var}}_{i,\lambda_{1}}^{1},\operatorname{\mathrm{Var}}_{i,\lambda_{1}}^{2},\dots,\mathbb{E}_{i,\lambda_{L}}^{0},\mathbb{E}_{i,\lambda_{L}}^{1},\mathbb{E}_{i,\lambda_{L}}^{2},\operatorname{\mathrm{Var}}_{i,\lambda_{L}}^{0},\operatorname{\mathrm{Var}}_{i,\lambda_{L}}^{1},\operatorname{\mathrm{Var}}_{i,\lambda_{L}}^{2})$
where
$\mathbb{E}^{k}_{i,\lambda_{\ell}}=\frac{1}{N_{\ell}}\sum_{j=1}^{N_{\ell}}d_{p}^{c}(D_{i}^{k},D_{j}^{k}),\,\,\,\,\operatorname{\mathrm{Var}}_{i,\lambda_{\ell}}^{k}=\frac{1}{N_{\ell}-1}\sum_{j=1}^{N_{\ell}}\,(d_{p}^{c}(D_{i}^{k},D_{j}^{k})-\mathbb{E}_{i,\lambda_{\ell}}^{k})^{2},$
for $\lambda_{\ell}\in\Lambda,k\in\\{0,1,2\\}$.
6:end for
7:$\mathcal{C}(\mathbf{X})=\texttt{ADABOOST}(\mathbf{X},Y)$$\triangleright$
Obtain a classification rule $\mathcal{C}$ from the AdaBoost ensemble
classification algorithm
8:Testing Step
9:Read in unlabeled APT point cloud data and compute persistence diagrams
$\mathcal{D}_{test}=\\{\widehat{D}_{j}\\}_{j=1}^{J}$.
10:for $j=1,\dots,J$ do
11: Compute
$\widehat{\mathbf{X}}_{j}=(\widehat{\mathbb{E}}_{j,\lambda_{1}}^{0},\widehat{\mathbb{E}}_{j,\lambda_{1}}^{1},\widehat{\mathbb{E}}_{j,\lambda_{1}}^{2},\widehat{\operatorname{\mathrm{Var}}}_{j,\lambda_{1}}^{0},\widehat{\operatorname{\mathrm{Var}}}_{j,\lambda_{1}}^{1},\widehat{\operatorname{\mathrm{Var}}}_{j,\lambda_{1}}^{2},\dots,\widehat{\mathbb{E}}_{j,\lambda_{L}}^{0},\widehat{\mathbb{E}}_{j,\lambda_{L}}^{1},\widehat{\mathbb{E}}_{j,\lambda_{L}}^{2},\widehat{\operatorname{\mathrm{Var}}}_{j,\lambda_{L}}^{0},\widehat{\operatorname{\mathrm{Var}}}_{j,\lambda_{L}}^{1},\widehat{\operatorname{\mathrm{Var}}}_{j,\lambda_{L}}^{2})$
where
$\widehat{\mathbb{E}}^{k}_{j,\lambda_{\ell}}=\frac{1}{N_{\ell}}\sum_{n=1}^{N_{\ell}}d_{p}^{c}(\widehat{D}_{j}^{k},D_{n}^{k}),\,\,\widehat{\operatorname{\mathrm{Var}}}_{j,\lambda_{\ell}}^{k}=\frac{1}{N_{\ell}-1}\sum_{n=1}^{N_{\ell}}\,(d_{p}^{c}(\widehat{D}_{j}^{k},D_{n}^{k})-\widehat{\mathbb{E}}_{j,\lambda_{\ell}}^{k})^{2},$
for $\lambda_{\ell}\in\Lambda,k\in\\{0,1,2\\}$.
12:end for
13:Classify unlabeled APT data
14:$\widehat{Y}=\mathcal{C}(\widehat{\mathbf{X}})$ $\triangleright$ Yields
class labels for ${\mathcal{D}_{test}}$ as
$\hat{Y}\in\\{1,\dots,\ell,\dots,L\\}^{J}$.
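A condensed sketch of the training step of algorithm 1, assuming the d_p_c function sketched in section 3.2, scikit-learn, and illustrative variable names (train_diagrams, class_diagrams, y) and hyperparameters:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def feature_row(D_i, class_diagrams, c=1.0, p=2):
    """One row of X (equation 3.2): per class, the means then variances of
    d_p^c distances from D_i to that class, over dimensions k = 0, 1, 2.
    D_i and each training diagram are (D^0, D^1, D^2) triples."""
    row = []
    for diagrams in class_diagrams:  # one list of diagram triples per class
        means, variances = [], []
        for k in range(3):
            d = np.array([d_p_c(D_i[k], D_j[k], c, p) for D_j in diagrams])
            means.append(d.mean())
            variances.append(d.var(ddof=1))  # unbiased sample variance
        row.extend(means + variances)
    return row

X = np.array([feature_row(D, class_diagrams) for D in train_diagrams])
clf = AdaBoostClassifier(n_estimators=100).fit(X, y)  # training step, line 7
```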
### 3.4. Computational and Storage Considerations
Computing entries of the feature matrix $\mathbf{X}$, equation (3.2), requires
computing the mean and variance of $d_{p}^{c}$ distances with $k-$dim
persistence homology, $k=0,1,2$. For example, in the case of binary
classification between BCC and FCC lattice types, with $N_{1}$ and $N_{2}$
neighborhoods respectively, each $\mathbb{E}_{i,\lambda_{1}}^{k}$ computation
requires $N_{1}$ steps and each $\mathbb{E}_{i,\lambda_{2}}^{k}$ computation
requires $N_{2}$ steps. Similarly, computing the variance accurately and in a
numerically stable fashion, e.g., when the dataset is large and the variance
is small, takes $2\times N_{1}$ steps per BCC-class term using the two-pass
algorithm [3]. In total, each row of $\mathbf{X}$ has complexity
$\mathcal{O}_{i}(N_{1},N_{2})=9\times\left(N_{1}+N_{2}\right)$ and the entire
feature matrix ends up with quadratic complexity:
$\mathcal{O}(N_{1},N_{2})=9\times\left(N_{1}+N_{2}\right)^{2}$. With the
atomic counts on the order of hundreds of thousands:
$N_{1},N_{2}\approx\mathcal{O}(10^{5})$, the quadratic component clearly
dominates with $10^{10}$ computational steps. Each of these steps requires the
$d_{p}^{c}$ distance computation given by equation 3.1, which is
computationally non-trivial for the majority of the diagrams due to the
identification of the optimal permutation between the diagrams being compared.
To reduce the total elapsed time of the computation, we used over $1000$ x86
cores, spanning Intel Westmere to Intel Skylake generations, with 8 to 36
cores per socket and up to 72 cores per node. An additional speedup of about
$20\%$ came from porting the code for computing the feature matrix from Python
to C. The Python code is publicly available at
https://github.com/maroulaslab/Materials-Fingerprinting.
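A sketch of how the quadratic pairwise computation can be fanned out across cores with the Python standard library (the actual HPC driver code may differ); it assumes the d_p_c function sketched in section 3.2 and a module-level list train_diagrams:

```python
from concurrent.futures import ProcessPoolExecutor

# train_diagrams is a module-level list so worker processes inherit it.
def distances_from(i):
    """One row of the pairwise d_p^c distance matrix (the dominant cost)."""
    return [d_p_c(train_diagrams[i], D_j) for D_j in train_diagrams]

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        rows = list(pool.map(distances_from, range(len(train_diagrams))))
```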
## 4\. Numerical Experiments
We present here the outcome of algorithm 1 on both synthetic and real
experimental data, as well as a sensitivity analysis. We first present
results of our fingerprinting process in different scenarios with synthetic
data to test the robustness of our method. We consider synthetic APT data with
various levels of sparsity and additive Gaussian noise,
$\mathcal{N}(0,\sigma^{2})$, as in real APT experimental data. In each of the
experiments presented, we perform 10-fold cross validation on the entire
dataset to control for overfitting of the model, randomly splitting the
dataset into 10 partitions. For each partition, we create a classification
rule from the other 9 partitions, and use the remaining one as a test set. Our
accuracy, defined here as (1 - Misclassification rate), is recorded for each
partition as it is used as the test set. The reported accuracy rate is the
mean accuracy over all 10 partitions. The hyperparameters $c,p$ were set to
the same values across all experiments, $c=(1,0.05,1.05)$ and $p=2$, to
provide a fair basis for comparison, and were selected by a grid search to
provide the highest accuracy score in the binary classification problem with
67% missing data and $\mathcal{N}(0,1)$ additive noise. A previous work [30]
discusses the role of $c$ and how to choose this parameter.
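The evaluation protocol corresponds to standard 10-fold cross validation; a minimal sketch with scikit-learn, assuming the feature matrix X and labels y have been precomputed as above:

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

# Accuracy (1 - misclassification rate) averaged over the 10 partitions.
scores = cross_val_score(AdaBoostClassifier(n_estimators=100), X, y, cv=10)
print(f"mean 10-fold accuracy: {scores.mean():.4f}")
```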
### 4.1. Synthetic APT Data
We begin by testing the robustness of our fingerprinting process on synthetic
data, using combinations of noise and sparsity that we expect to see in real APT data.
Next, we examine the effect of class imbalances on the accuracy of our
methodology in the binary classification case of BCC and FCC materials. As a
final experiment with synthetic data, we repeat the scenario of varying the
concentration between BCC and FCC structures, but augment the data set with a
constant number of HCP lattice types. We observe the methodology is robust
against different levels of noise and sparsity in the case of the binary
classification problem. When the HCP structures are introduced into the
dataset, the accuracy decreases, due to the similarity of the FCC and HCP
structures, especially in the presence of additive noise and sparsity that we
consider. These results are presented in tables 1 to 3.
### 4.2. Sensitivity Analysis
To understand the effect of different levels of noise and sparsity in the
data, the materials fingerprint was applied to synthetic data having different
levels of sparsity and noise, similar to those values found in real APT data.
For each combination presented, the dataset was composed of 400 structures,
split evenly between BCC and FCC types. We observe perfect accuracy in the
case of complete, noiseless data, as these lattice types differ in both their
geometry and atomic density. As the data becomes increasingly degraded, the
accuracy correspondingly decreases, but does not fall below 90% in this
analysis. Table 1 summarizes these results. We do observe a relative decrease
in accuracy with 50% sparsity and $\mathcal{N}(0,0.75^{2})$ added noise. We
attribute this decrease to the choice of $c$ and $p$ for the distance
computations. Indeed, for all the experiments presented herein, we used the
same values of $c$ and $p$. We may further optimize these parameters to
produce higher accuracy for each combination of noise and missing data
considered, at the risk of over-fitting for a specific dataset.
Table 1. Mean 10-fold cross validation accuracy for synthetic APT data with different percentages of atoms removed and $\mathcal{N}(0,\sigma^{2})$ added noise.
Sparsity \ Std. Dev. | $\sigma=0$ | $\sigma=0.25$ | $\sigma=0.75$ | $\sigma=1$
---|---|---|---|---
0% | 100% | 99.67% | 98% | 97.67%
33% | 100% | 99.32% | 96.67% | 94.67%
50% | 97.33% | 100% | 92.67% | 94%
67% | 98.67% | 100% | 99.33% | 92%
#### 4.2.1. Imbalanced Classification
Continuing our study of the binary classification problem, we investigated the
effect that varying the proportion of BCC vs. FCC lattice structures had on
the resulting classification accuracy.
sparsity and additive noise as in section 4.2, but we varied the proportion of
BCC structures in the entire dataset between 10% and 90%. The remaining
percentage was composed of FCC structures so that the total number of
structures was 5,000. We observe a level of accuracy in this setting similar
to that observed in the previous experiment; these accuracy scores are
presented in table 2. We observe that the classification scheme is robust
against not only the perturbations and missing data expected from an APT
experiment, but class imbalance as well.
Table 2. Mean 10-fold cross validation accuracy for synthetic APT data with $\mathcal{N}(0,\sigma^{2})$ added noise and 50% of atoms missing; the dataset comprises 5,000 configurations in each experiment. The proportion of BCC structures is given and varied between experiments; the proportion of FCC configurations is 1 - BCC%.
BCC proportion | 10% | 25% | 40% | 60% | 75% | 90%
---|---|---|---|---|---|---
Std. Dev | Accuracy
$\sigma=0.25$ | 96.72% | 92.32% | 88.56% | 88.48% | 90.24% | 94.24%
$\sigma=0.5$ | 99.96% | 99.84% | 100% | 100% | 100% | 100%
$\sigma=0.75$ | 95.76% | 89.86% | 82.88% | 82.24% | 85.84% | 95.04%
$\sigma=1$ | 94.72% | 85.76% | 81.44% | 83.2% | 84.08% | 94%
#### 4.2.2. Multi-class Classification
As a final experiment, we augment the previous setting of varying the
proportion of BCC vs. FCC structures by adding a constant number of HCP
structures to the data set.
with a standard deviation of 0.25, as the noise was found in a previous study
to follow a narrowly peaked distribution, as opposed to a wide Gaussian
distribution [10]. From each of these datasets, we removed $\gamma\%$ of the
atoms. The results of this experiment are in table 3. In this scenario, the
primary challenge is to correctly identify the FCC and HCP lattices. While
these two structures are distinct, they have the same density, i.e., the same
number of atoms per unit volume, and only have a subtle variation in their
identifying geometry. Indeed, there is a non-trivial decrease in accuracy when
the HCP lattices are introduced into the dataset. Specifically, the accuracy
declines as the proportion of FCC structures increases relative to the number
of HCP lattices and FCC becomes the dominant class represented in the dataset.
When the BCC proportion comprises 10% of the dataset, the proportion of FCC to
HCP lattices is approximately 2:1, and the classifier’s accuracy is decreased
as compared to settings with less class imbalance in the dataset.
Table 3. Mean 10-fold cross validation accuracy classifying synthetic APT data with $\mathcal{N}(0,0.25^{2})$ added noise and proportion $\gamma\in(0,1)$ of atoms missing. We consider three classes, BCC, FCC, and HCP structures, in this synthetic APT dataset. We varied the proportion of 5,000 configurations between BCC and FCC lattices; the BCC proportion of these structures is given and the fraction of FCC configurations is 1 - BCC%. To these 5,000 structures we added a constant 2,500 HCP lattice structures in each instance.
BCC proportion | 10% | 25% | 40% | 60% | 75% | 90%
---|---|---|---|---|---|---
Proportion Missing | Accuracy
$\gamma=0.33$ | 60.67% | 69.84% | 84.84% | 86.51% | 78.88% | 88.39%
$\gamma=0.50$ | 68.33% | 74.76% | 85.16% | 88.13% | 82.40% | 89.45%
### 4.3. APT Experimental Data
We now turn to our original problem of determining the local lattice structure
of an HEA from the experimental APT data. We apply our materials
fingerprinting method to the APT experimental data from two HEAs,
Al1.3CoCrCuFeNi and Al0.3CoCrFeNi. Recalling section 2.2, the former has
both BCC and FCC phases, while the latter was determined to be FCC through XRD
experiments [5]. The challenge is to uncover the true atomic-level structure
amid the noise and missing data. Using our materials fingerprinting
methodology, we are able to classify the lattice structure of 200,000 atomic
neighborhoods, split evenly between BCC and FCC lattice types, from these APT
datasets at 99.97% accuracy with 10-fold cross validation.
## 5\. Discussion
We have described materials fingerprinting, a topologically-based methodology
for classifying the crystal structure of the HEA APT data with near-perfect
accuracy, especially in the binary case. Starting from a collection of atomic
neighborhoods generated by an APT experiment, we extract the fundamental
topology of the structure and record the information in a persistence diagram.
These diagrams succinctly encode the essential topology of an atomic
neighborhood over different length scales in various dimensions. It is by
computing the persistent homology of the data that we are able to see through
the noise and fill in the sparsity, revealing where these lattice structures
are connected and where they are not. Our materials fingerprinting methodology
uses the mean and variance of the $d_{p}^{c}$ distance between persistence
diagrams to create input for a machine learning algorithm. This distance not
only measures differences in the diagrams but accounts for different numbers
of points between diagrams being compared. This latter point is salient, as
BCC and FCC unit cells each contain a different number of atoms, and this
distinction must be taken into account. Basing our materials fingerprint on
topological features in conjunction with the number of atoms in each
neighborhood, we represent the necessary topological and numeric information
required to differentiate between the lattice structures considered here, with
the appropriate choice of metric. Indeed, by adopting this point of view, we
are able to qualitatively retain the essential geometric information of these
crystal structures and use this information to predict with greater than 99%
accuracy the crystal structure of real APT data.
The impact of the present work is two-fold. First, the input data to our
algorithm consists of point clouds of HEAs generated by APT experiments.
The process can be generalized to other lattice types by incorporating
additional crystal structures into the materials fingerprint training set.
Indeed, the methodology described herein does not depend on the labels of the
data. It takes in the materials data and creates the information-rich
persistence diagrams, from which we examine homological differences between
the diagrams in various dimensions. The data analysis can be performed on
multiphase samples, although the characterization of individual configurations
may first need to be preceded by a classification of domains based on
compositional differences, for example. An alternative for comparisons between
a multitude of structures is outlined in [40], in which different topological
descriptors are invoked that consider the electronegativity of the atoms as a
feature when creating the persistence diagrams. Such a methodology may be used
in conjunction with a previous work [37] that identifies a mapping between the
APT data and a known crystal structure, to aid researchers in understanding
the local structure of materials characterized through the APT process.
## Acknowledgments
The authors are grateful to two anonymous referees for helpful comments and
suggestions that substantially improved the manuscript. The APT experiments
were conducted at the Oak Ridge National Laboratory’s Center for Nanophase
Materials Sciences (CNMS), which is a U.S. DOE Office of Science User
Facility. The authors would like to thank Jonathan Poplawsky for insightful
discussions about the APT method. V. M. is grateful for support from ARO Grant
# W911NF-17-1-0313 and the NSF DMS-1821241. D.K. and V.M. are grateful for
support from a UTK Seed grant. A.S., C.M., and F.N. acknowledge the
Mathematics Department of the University of Tennessee, where A.S. and C.M.
conducted this research as part of their Ph.D. studies and F.N. was a Post-
Doctoral research associate. This research used resources of the Compute and
Data Environment for Science (CADES) at the Oak Ridge National Laboratory,
which is supported by the Office of Science of the U.S. Department of Energy
under Contract No. DE-AC05-00OR22725.
## References
* [1] Agrawal, A., and Choudhary, A. Perspective: Materials informatics and big data: Realization of the “fourth paradigm” of science in materials science. Apl Materials 4, 5 (2016), 053208.
* [2] Butler, K. T., Davies, D. W., Cartwright, H., Isayev, O., and Walsh, A. Machine learning for molecular and materials science. Nature 559, 7715 (2018), 547.
* [3] Chan, T. F., Golub, G. H., and LeVeque, R. J. Algorithms for computing the sample variance: Analysis and recommendations. The American Statistician 37, 3 (1983), 242–247.
* [4] Curtarolo, S., Hart, G. L., Nardelli, M. B., Mingo, N., Sanvito, S., and Levy, O. The high-throughput highway to computational materials design. Nature materials 12, 3 (2013), 191.
* [5] Diao, H., Ma, D., Feng, R., Liu, T., Pu, C., Zhang, C., Guo, W., Poplawsky, J. D., Gao, Y., and Liaw, P. K. Novel nial-strengthened high entropy alloys with balanced tensile strength and ductility. Materials Science and Engineering: A 742 (2019), 636–647.
* [6] Donato, I., Gori, M., Pettini, M., Petri, G., De Nigris, S., Franzosi, R., and Vaccarino, F. Persistent homology analysis of phase transitions. Physical Review E 93, 5 (2016), 052138.
* [7] Edelsbrunner, H., and Harer, J. Persistent homology-a survey. Contemporary Mathematics 453 (2008), 257–282.
* [8] Edelsbrunner, H., and Harer, J. Computational Topology: An Introduction. American Mathematical Society, Providence, RI, 2010.
* [9] Friedman, J., Hastie, T., Tibshirani, R., et al. Additive logistic regression: a statistical view of boosting (with discussion and a rejoinder by the authors). The Annals of Statistics 28, 2 (2000), 337–407.
* [10] Gault, B., Moody, M. P., Cairney, J. M., and Ringer, S. P. Atom probe crystallography. Materials Today 15, 9 (2012), 378–386.
* [11] Gludovatz, B., Hohenwarter, A., Catoor, D., Chang, E. H., George, E. P., and Ritchie, R. O. A fracture-resistant high-entropy alloy for cryogenic applications. Science 345, 6201 (2014), 1153–1158.
* [12] Guo, J., Wang, H., von Rohr, F., Wang, Z., Cai, S., Zhou, Y., Yang, K., Li, A., Jiang, S., and Wu, Q. Robust zero resistance in a superconducting high-entropy alloy at pressures up to 190 gpa. Proceedings of the National Academy of Sciences 114, 50 (2017), 13144–13147.
* [13] Guo, W., Garfinkel, D. A., Tucker, J. D., Haley, D., Young, G. A., and Poplawsky, J. D. An atom probe perspective on phase separation and precipitation in duplex stainless steels. Nanotechnology 27, 25 (2016), 254004.
* [14] Hastie, T. Generalized additive models, 1990.
* [15] Hemphill, M. A., Yuan, T., Wang, G., Yeh, J., Tsai, C., Chuang, A., and Liaw, P. K. Fatigue behavior of Al0.5CoCrCuFeNi high entropy alloys. Acta Materialia 60, 16 (2012), 5723–5734.
* [16] Hicks, D., Oses, C., Gossett, E., Gomez, G., Taylor, R. H., Toher, C., Mehl, M. J., Levy, O., and Curtarolo, S. Aflow-sym: platform for the complete, automatic and self-consistent symmetry analysis of crystals. Acta Crystallographica Section A: Foundations and Advances 74, 3 (2018), 184–203.
* [17] Hiraoka, Y., Nakamura, T., Hirata, A., Escolar, E. G., Matsue, K., and Nishiura, Y. Hierarchical structures of amorphous solids characterized by persistent homology. Proceedings of the National Academy of Sciences (2016), 201520877.
* [18] Honeycutt, J. D., and Andersen, H. C. Molecular dynamics study of melting and freezing of small lennard-jones clusters. Journal of Physical Chemistry 91, 19 (1987), 4950–4963.
* [19] Islam, N., Huang, W., and Zhuang, H. L. Machine learning for phase selection in multi-principal element alloys. Computational Materials Science 150 (2018), 230–235.
* [20] Kaczynski, T., Mischaikow, K., and Mrozek, M. Computational homology, vol. 157. Springer Science & Business Media, 2006.
* [21] Kelly, T. F., Miller, M. K., Rajan, K., and Ringer, S. P. Atomic-scale tomography: A 2020 vision. Microscopy and Microanalysis 19, 3 (2013), 652–664.
* [22] Koželj, P., Vrtnik, S., Jelen, A., Jazbec, S., Jagličić, Z., Maiti, S., Feuerbacher, M., Steurer, W., and Dolinšek, J. Discovery of a superconducting high-entropy alloy. Physical Review Letters 113, 10 (2014), 107001.
* [23] Laboratory, N. Nomad, 2015.
* [24] Larsen, P. M., Schmidt, S., and Schiøtz, J. Robust structural identification via polyhedral template matching. Modelling and Simulation in Materials Science and Engineering 24, 5 (2016), 055007.
* [25] Lee, Y., Barthel, S. D., Dłotko, P., Moosavi, S. M., Hess, K., and Smit, B. Quantifying similarity of pore-geometry in nanoporous materials. Nature Communications 8 (2017), 15396.
* [26] Lei, Z., Liu, X., Wu, Y., Wang, H., Jiang, S., Wang, S., Hui, X., Wu, Y., Gault, B., and Kontis, P. Enhanced strength and ductility in a high-entropy alloy via ordered oxygen complexes. Nature 563, 7732 (2018), 546.
* [27] Li, Z., Pradeep, K. G., Deng, Y., Raabe, D., and Tasan, C. C. Metastable high-entropy dual-phase alloys overcome the strength–ductility trade-off. Nature 534, 7606 (2016), 227.
* [28] Marchese, A., and Maroulas, V. Signal classification with a point process distance on the space of persistence diagrams. Advances in Data Analysis and Classification 12, 3 (2018), 657–682.
* [29] Marchese, A., Maroulas, V., and Mike, J. K-means clustering on the space of persistence diagrams. In Wavelets and Sparsity XVII (2017), vol. 10394, International Society for Optics and Photonics, p. 103940W.
* [30] Maroulas, V., Micucci, C. P., and Spannaus, A. A stable cardinality distance for topological classification. Advances in Data Analysis and Classification (2019), 1–18.
* [31] Miller, M. K., Kelly, T. F., Rajan, K., and Ringer, S. P. The future of atom probe tomography. Materials Today 15, 4 (2012), 158–165.
* [32] Moody, M. P., Gault, B., Stephenson, L. T., Marceau, R. K., Powles, R. C., Ceguerra, A. V., Breen, A. J., and Ringer, S. P. Lattice rectification in atom probe tomography: Toward true three-dimensional atomic microscopy. Microscopy and Microanalysis 17, 2 (2011), 226–239.
* [33] Nasrin, F., Oballe, C., Boothe, D., and Maroulas, V. Bayesian topological learning for brain state classification. In 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA) (2019), IEEE, pp. 1247–1252.
* [34] Rost, C. M., Sachet, E., Borman, T., Moballegh, A., Dickey, E. C., Hou, D., Jones, J. L., Curtarolo, S., and Maria, J.-P. Entropy-stabilized oxides. Nature communications 6 (2015), 8485.
* [35] Santodonato, L. J., Zhang, Y., Feygenson, M., Parish, C. M., Gao, M. C., Weber, R. J., Neuefeind, J. C., Tang, Z., and Liaw, P. K. Deviation from high-entropy configurations in the atomic distributions of a multi-principal-element alloy. Nature communications 6 (2015), 5964.
* [36] Shi, Y., Yang, B., and Liaw, P. K. Corrosion-resistant high-entropy alloys: A review. Metals 7, 2 (2017), 43.
* [37] Spannaus, A., Maroulas, V., Keffer, D. J., and Law, K. J. H. Bayesian point set registration. In 2017 MATRIX Annals. Springer, 2019, pp. 99–120.
* [38] Stukowski, A. Visualization and analysis of atomistic simulation data with OVITO-the Open Visualization Tool. Modelling and Simulation in Materials Science and Engineering 18, 1 (2010).
* [39] Tang, Z., Yuan, T., Tsai, C., Yeh, J., Lundin, C. D., and Liaw, P. K. Fatigue behavior of a wrought Al0.5CoCrCuFeNi two-phase high-entropy alloy. Acta Materialia 99 (2015), 247–258.
* [40] Townsend, J., Micucci, C. P., Hymel, J. H., Maroulas, V., and Vogiatzis, K. D. Representation of molecular structures with persistent homology for machine learning applications in chemistry. Nature communications 11, 1 (2020), 1–9.
* [41] Tsai, M.-H., and Yeh, J.-W. High-entropy alloys: a critical review. Materials Research Letters 2, 3 (2014), 107–123.
* [42] Yeh, J.-W. Physical metallurgy of high-entropy alloys. Jom 67, 10 (2015), 2254–2261.
* [43] Yeh, J.-W., Chen, S.-K., Lin, S.-J., Gan, J.-Y., Chin, T.-S., Shun, T.-T., Tsau, C.-H., and Chang, S.-Y. Nanostructured high-entropy alloys with multiple principal elements: novel alloy design concepts and outcomes. Advanced Engineering Materials 6, 5 (2004), 299–303.
* [44] Zhang, Y., Zhou, Y. J., Lin, J. P., Chen, G. L., and Liaw, P. K. Solid-solution phase formation rules for multi-component alloys. Advanced Engineering Materials 10, 6 (2008), 534–538.
* [45] Zhang, Y., Zuo, T. T., Tang, Z., Gao, M. C., Dahmen, K. A., Liaw, P. K., and Lu, Z. P. Microstructures and properties of high-entropy alloys. Progress in Materials Science 61 (2014), 1–93.
* [46] Zhou, Q., Tang, P., Liu, S., Pan, J., Yan, Q., and Zhang, S. Learning atoms for materials discovery. Proceedings of the National Academy of Sciences 115, 28 (2018), E6411–E6417.
* [47] Ziletti, A., Kumar, D., Scheffler, M., and Ghiringhelli, L. M. Insightful classification of crystal structures using deep learning. Nature communications 9, 1 (2018), 2775.
# Discovering an Aid Policy to Minimize Student Evasion Using Offline
Reinforcement Learning
Leandro M. de Lima Graduate Program in Computer Science, PPGI
Federal University of Espirito Santo, UFES
Vitória, Brazil
<EMAIL_ADDRESS>Renato A. Krohling Graduate Program in Computer
Science, PPGI
Production Engineering Department, LABCIN
Federal University of Espirito Santo, UFES
Vitória, Brazil
<EMAIL_ADDRESS>
###### Abstract
High dropout rates in tertiary education expose a lack of efficiency that
causes frustration of expectations and financial waste. Predicting students at
risk is not enough to avoid student dropout. Usually, an appropriate aid
action must be discovered and applied at the proper time for each student. To
tackle this sequential decision-making problem, we propose a decision support
method for the selection of aid actions for students, using offline
reinforcement learning to help decision-makers effectively avoid student
dropout. Additionally, a discretization of the student state space applying two
different clustering methods is evaluated. Our experiments using logged data
of real students show, through off-policy evaluation, that the method should
achieve roughly 1.0 to 1.5 times as much cumulative reward as the logged
policy. Thus, it is feasible to help decision-makers apply appropriate aid
actions and, possibly, reduce student dropout.
©2021 IEEE. Personal use of this material is permitted. Permission from IEEE
must be obtained for all other uses, in any current or future media, including
reprinting/republishing this material for advertising or promotional purposes,
creating new collective works, for resale or redistribution to servers or
lists, or reuse of any copyrighted component of this work in other works.
## I Introduction
Globally, the gross enrollment ratio in tertiary education increased from
$19\%$ in 2000 to $38\%$ in 2018 according to Unesco [1], but the expansion of
the higher education system may not result in an increase in graduated
professionals. In 2017, on average $33\%$ of students failed to successfully
complete the programs they undertook [2]. Especially in developing countries
with fragile economies, as in Latin America, effectiveness in transforming
increased tertiary enrollment rates into a supply of high-skilled workers is
key to development. Evasion rates in tertiary education are important
indicators of that efficiency, and they must be improved.
Student evasion incurs wasted resources, frustrations of expectations, and
loss of personal, professional, and social potential. In public education
institutions, it also represents an onus for society, especially for the
financial waste it entails. Institutional policies and actions are extremely
important to prevent that outcome. Identifying which students are at risk of
evasion and what actions to take for each of them to minimize it is a complex
problem that affects educational institutions. There are several reasons for
students to abandon their undergraduate studies and those reasons, for the
most cases, remain undetected by educational institutions until the moment a
student initiates a transfer request, leave of absence request, or drops out.
Institutions that can identify students with high evasion risk, and manage to
successfully overcome student complaints early on, may increase their
graduation rates and contribute, ultimately, to the progress of society as a
whole [3].
### I-A Related Works
Research in the educational data mining field mainly focuses on
classification, clustering, and visual data mining [4] techniques to solve
educational issues, such as predicting students at risk. Most of the previous
work in the student evasion field focuses on predictive tasks [5, 6, 7]. Those
approaches are an important step towards effectively monitoring and helping
those students, but simply predicting who is at risk is not enough to solve
the problem and minimize evasion. For such cases, it is necessary to identify
a policy that defines the sequence of effective aid actions for each of the
students at risk. However, applying adequate aid actions at the right moment
still mostly depends on the specialist’s correct decision due to a lack of
appropriate decision support tools. Our approach is similar to [8], but their
work is focused on identifying critical decisions in interactive learning
environments, proposing a reinforcement-learning-based framework.
Many deep reinforcement learning (RL) algorithms have shown great results in
sequential decision-making problems and this work is based on those methods to
help decision-makers minimize student evasion. Deep reinforcement learning
algorithms, as Deep Q-Network (DQN)[9], Advantage Actor-Critic (A2C)[9], Trust
Region Policy Optimization (TRPO)[10], Stochastic Lower Bounds Optimization
(SLBO) [11], have shown advantages in learning human behaviors and strategies
in several areas, providing good results in problems with high-dimensional
action and state space. Usually, (online) reinforcement learning is employed
in problems where the agent can directly apply actions and observe the
environment to acquire more data. However, data from previous interactions may
already have been collected. Besides that, in some situations there are cost,
safety, or ethical issues in using artificial intelligence (AI) decisions of
unknown quality to gather more data, as in the health or educational fields.
Offline (or batch) reinforcement learning [12, 13] seeks to overcome these
limitations in exploration.
Off-policy reinforcement learning methods, such as DQN [9], use data generated
by an exploratory policy to learn an optimal policy [14]. Usually, those
methods use online sample collection to mitigate distributional shift issues;
despite that, off-policy RL can learn reasonably well in offline problems [15].
As the suggested actions often cannot be evaluated in the environment in
offline problems, off-policy policy evaluation (OPE) can evaluate newly
learned policies without interacting with the environment. OPE is an active
field of research and multiple estimators can be applied, e.g. Sequential
Weighted Doubly-Robust (SWDR) or Model Guided Importance Sampling Combining
(MAGIC) [16].
This work is also related to imitation learning and inverse reinforcement
learning, as they also try to learn how to mimic other policies. In imitation
learning, the goal consists of finding a policy that directly mimics the
observed policy, but this approach also suffers from distributional shift [17].
Inverse reinforcement learning tries to find a reward function that explains
the policy behavior that generated the data [18]. However, both are
inappropriate approaches here, as they require either access to an oracle,
further online data collection, or an explicit distinction between expert and
non-expert data [13]. Distributional shift also affects model-based
algorithms, which suffer from it in the same way as the value function, since
out-of-distribution state-action tuples can result in inaccurate predictions [15].
The proposed decision support method cannot act directly in the environment.
So, its recommendations can be reviewed or changed by the decision-maker
before they are applied. Similar methodologies were applied to sepsis
treatment [19, 20]. Offline reinforcement learning can also be applied in many
other sequential decision-making problems as in healthcare, spoken dialogue
systems, self-driving cars, and robotics[15].
The goal of this work is to propose a decision support method for the
selection of aid actions for students, based on a deep reinforcement learning
approach, aiming to help decision-makers of an educational institution: a
method that recommends actions to aid students at risk of evasion in order to
reduce the institutional evasion rate. Also, a novel case study is presented
to illustrate the offline RL approach to automatically suggesting actions to
aid those students, with an analysis comparing the performance of the
application under different evaluation methods and the impact of different
clustering methods on application performance.
The remainder of this paper is organized as follows. In Section II, key
concepts of student dropout and offline RL are defined. Additionally, we
provide an explanation of how to model the problem as a Markov decision
process (MDP) and define a discrete state space for student dropout data. In
Section III, we propose a method to properly tackle the aforementioned
problem. Next, the experimental results are presented in Section IV. Finally,
Section V summarizes key findings of the experimental results and their
implications. It also presents new directions for further investigation and
future work.
## II Problem Statement
Student dropout or evasion refers here to any student who does not complete
their initiated studies and will not be awarded an academic degree. Our
approach is based on a program perspective, and any reason that causes a
student not to graduate in a program is considered dropout, e.g., transfer,
withdrawal, or poor performance.
This research aims to reduce the dropout rate by providing the most
appropriate help for each student, according to their profile. It is also
intended to reduce the decision-maker’s workload, complexity, and pressure in
choosing who needs help, what action to take, and when to apply that action to
minimize student evasion.
### II-A Markov decision process
A sequential decision problem over a time-varying state space can be described
as a Markov decision process (MDP). In the student evasion problem, it is not
possible to directly observe the underlying state; therefore, the problem
would more precisely be defined as a partially observable Markov decision
process (POMDP). For simplicity, we consider the state observation as a full
representation of the state, so the student evasion problem is defined here as
a fully-observed MDP.
A MDP[21] is characterized by $M=\\{S,A,P,R\\}$:
* •
State space $S$. At each step $t$ (e.g. academic term), $s_{t}\in S$ defines a
student state.
* •
Action space $A$. At each step $t$, an agent takes action $a_{t}\in A$ to
modify the environment. In the student evasion problem, it consists of all
combinations of the existence of a study plan and the types of student aid.
* •
Transition function $P(s_{t+1}|s_{t},a_{t})$. The probability $P:S\times
A\times S\to\mathbb{R}$ of seeing state $s_{t+1}$ after taking action $a_{t}$
at state $s_{t}$.
* •
Reward function $R(s_{t},a_{t})\in\mathbb{R}$. A function $R:S\times
A\to\mathbb{R}$ that returns the observed immediate feedback $r_{t}$ after
transition $(s_{t},a_{t})$ at time $t$. Transitions to desired states generate
positive rewards, transitions to undesired states negative ones.
In the student evasion context, each student is represented as a trajectory
$H=\\{(s_{1},a_{1},r_{1}),(s_{2},a_{2},r_{2}),\dots,(s_{T},a_{T},r_{T})\\}$,
that is a sequence of state, action and reward tuples $(s_{t},a_{t},r_{t})$
over $T$ academic terms. The agent seeks to maximize the expected discounted
return $R_{t}=\sum^{T}_{\tau=t}\gamma^{\tau-t}r_{\tau}$, where
$\gamma\in[0,1]$ is a discount factor that trades-off the importance of
immediate and future rewards.
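A minimal sketch of how a logged student trajectory and its discounted return can be represented (the field names are illustrative, not the paper's schema):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Transition:
    state: int      # student state s_t at academic term t
    action: int     # aid action a_t (study plan / aid type combination)
    reward: float   # immediate feedback r_t

def discounted_return(trajectory: List[Transition], gamma: float = 0.99,
                      t: int = 0) -> float:
    """R_t = sum_{tau = t}^{T} gamma^(tau - t) * r_tau."""
    return sum(gamma ** (tau - t) * tr.reward
               for tau, tr in enumerate(trajectory[t:], start=t))
```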
### II-B Offline (batch) reinforcement learning
The goal in general reinforcement learning is to learn an optimal policy $\pi$
that observes the current state $s$ and selects an action $a$ which maximizes
the sum of expected rewards. Yet, offline reinforcement learning can be
defined as the task of learning the best possible policy from a fixed set
$\mathcal{D}$ of a priori-known transition samples, i.e., as data-driven
reinforcement learning.
The deployment efficiency measure [22] in RL counts the number of changes in the
data-collection policy during learning, i.e., an offline RL setup corresponds
to a single deployment allowed for learning. Reinforcement learning methods
can be classified through data and interaction perspectives [23]. What is
named here as offline RL can be defined as pure batch algorithms, being
classified as batch (data perspective) and offline (interaction perspective)
methods. We refer to it as offline RL, since the use of a “batch” in an
iterative learning algorithm can also refer to a method that consumes a batch
of data, updates a model, and then obtains online a different batch, as
opposed to a traditional online learning algorithm, which consumes one sample
at a time [15]. The interaction differences are shown in Fig. 1.
Figure 1: Online and Offline RL interaction difference. An online agent can
perform a selected action and observe its impact in the environment through
state and reward signals. An offline agent passively receives a batch of
logged interactions that consist of actions, states, and rewards signals.
Given that only a fixed (finite) set of samples is available, note that the
objective of offline RL is to learn the best possible policy from the given
data, and not an optimal policy as in general reinforcement learning, since
there is no possibility of improving exploration, which is the main difficulty
in offline RL. Sample distribution is another challenge that the offline RL
approach needs to tackle. Generally, RL assumes that experience samples come
from a representative distribution of the environment, but the agent in
offline RL has no control over or knowledge of how they are sampled. While we
cannot expect offline RL to discover actions that are better than any action
in logged data, we can expect it to effectively utilize the compositional
structure inherent in any temporal process [15].
Offline RL needs to learn a policy that does something different, presumably
better, from the behavior pattern observed in the experience samples [15].
Therefore, changes in visited states and actions mean that the learned policy
is evaluated on a distribution different from the training one, which forces
offline RL to not assume that the data are independent and identically
distributed (i.i.d.).
Recently, fully offline reinforcement learning methods such as Random Ensemble
Mixture (REM) [12], Deep Q-learning from Demonstrations (DQfD) [24],
Bootstrapping Error Accumulation Reduction (BEAR) [25], Batch-Constrained deep
Q-learning (BCQ) [13, 26], and Behavior Regularized Actor Critic (BRAC) [27]
have adopted different approaches and techniques to overcome these
limitations. Behavior-Regularized Model-ENsemble (BREMEN) [22] is a
model-based algorithm that can be used in a fully offline setup, but its main
goal is to be efficient in the number of changes in the data-collection policy
needed during learning (deployment efficiency) and sample-efficient by using a
mixed (online and offline) approach. Similar to BREMEN, Advantage Weighted
Actor Critic (AWAC) [28] focuses on effective fine-tuning with online
experiences after an offline pre-training period.
In deep reinforcement learning, for large or continuous state and action
spaces, we can represent the various components of agents, such as policies
$\pi(s,a)$ or state-action values $Q(s,a)$, as neural-network approximations,
e.g., the Dueling Double Deep Q-Network (Dueling DDQN) [29, 30]. In Dueling
DDQN, the classical DQN [9] is extended using the Dueling Networks and Double
Q-learning [31] techniques.
Each policy $\pi$ has a corresponding state-action value function
$Q_{\pi}(s,a)=\mathbb{E}_{\pi}[R_{t}|s,a]$, the expected return when following
the policy after taking action $a$ in state $s$, and a state value function
$V_{\pi}(s)=\mathbb{E}_{a\sim\pi(s)}[Q_{\pi}(s,a)]$, which measures how good
it is to be in a particular state $s$. In Double Q-learning, an approximation
of the deep Q-function $Q_{\pi}(s,a;\theta)=Q_{\pi}^{\theta}(s,a)$ with
parameters $\theta$ is used. The value function $Q_{\pi}^{\theta}$ is updated
toward the target $Q^{\bar{\theta}}_{\pi}(s^{\prime},a^{\prime})$ as
$y^{\text{DDQN}}_{i}=R_{t+1}+\gamma_{t+1}Q_{\pi}^{\bar{\theta}}(s^{\prime},\underset{a^{\prime}}{\mathrm{arg\,max\,}}Q_{\pi}^{\theta_{i}}(s^{\prime},a^{\prime})),$
(1)
where $\bar{\theta}$ represents the parameters of a fixed and separate target
network, and $s^{\prime}$ and $a^{\prime}$ are the state and action at $t+1$,
respectively.
The parameters of the neural network are optimized by using stochastic
gradient descent to minimize the loss
$\mathcal{L}(\theta)=\|y^{\text{DDQN}}_{i}-Q_{\pi}^{\theta}(s,a)\|^{2}.$ (2)
The gradient of the loss is back-propagated only into the parameters $\theta$
of the online network, which is also used to select actions. Periodically, the
target network parameters $\bar{\theta}$, which are not directly optimized,
are updated as $\bar{\theta}\leftarrow\tau\theta+(1-\tau)\bar{\theta}$, where
$\tau$ is the target update rate.
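As a concrete illustration, the sketch below (assuming PyTorch; `q_online` and `q_target` are hypothetical networks mapping a state batch to per-action Q-values) computes the Double DQN target of Eq. (1) and the soft target update just described.

```python
import torch

@torch.no_grad()
def ddqn_target(q_online, q_target, r, s_next, done, gamma=0.99):
    # Select the next action with the online network (the arg max in Eq. (1)) ...
    a_star = q_online(s_next).argmax(dim=1, keepdim=True)
    # ... but evaluate that action with the separate target network.
    q_next = q_target(s_next).gather(1, a_star).squeeze(1)
    return r + gamma * (1.0 - done) * q_next  # `done` masks terminal transitions

def soft_update(q_target, q_online, tau=0.1):
    # Target update: theta_bar <- tau * theta + (1 - tau) * theta_bar.
    for p_t, p_o in zip(q_target.parameters(), q_online.parameters()):
        p_t.data.mul_(1.0 - tau).add_(tau * p_o.data)
```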
Dueling Networks represent the value function $V_{\pi}(s)$ and the advantage
function $A_{\pi}(s,a)=Q_{\pi}(s,a)-V_{\pi}(s)$ as two separate streams within
a single deep model whose output combines the two to produce a state-action
value $Q_{\pi}(s,a)$ [29]. The streams are constructed such that they can
provide separate estimates of the value and advantage functions.
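A minimal sketch of such a dueling head (assuming PyTorch; layer sizes are illustrative) combines the two streams as $Q(s,a)=V(s)+A(s,a)-\frac{1}{|A|}\sum_{a^{\prime}}A(s,a^{\prime})$, the standard identifiability correction from [29]:

```python
import torch
import torch.nn as nn

class DuelingHead(nn.Module):
    """Two streams (value and advantage) combined into Q-values."""
    def __init__(self, in_dim: int, num_actions: int, hidden: int = 32):
        super().__init__()
        self.value = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, 1))
        self.advantage = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                       nn.Linear(hidden, num_actions))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        v, a = self.value(x), self.advantage(x)
        # Subtracting the mean advantage makes V and A separately identifiable.
        return v + a - a.mean(dim=1, keepdim=True)
```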
The learning algorithm is provided with a static dataset
$\mathcal{D}=\\{(s^{i}_{t},a^{i}_{t},s^{i}_{t+1},r^{i}_{t})\\}_{i=1}^{m}$ as a
set of $m$ transitions and must learn the best policy it can using this
dataset. When training the Q-network, instead of using only the current
experience as prescribed by standard temporal difference learning, the network
is trained by sampling mini-batches of experiences from $\mathcal{D}$
uniformly at random. The sequence of losses thus takes the form
$\mathcal{L}_{i}(\theta_{i})=\mathbb{E}_{(s,a,r,s^{\prime})\sim\mathcal{U}(\mathcal{D})}[(y^{\text{DDQN}}_{i}-Q_{\pi}^{\theta_{i}}(s,a))^{2}].$
(3)
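A sketch of the uniform minibatch sampling $\mathcal{U}(\mathcal{D})$ used by this loss; `dataset` is a hypothetical in-memory list of $(s,a,r,s^{\prime})$ tuples.

```python
import numpy as np

def sample_minibatch(dataset, batch_size=512, rng=None):
    # Uniform sampling with replacement from the fixed offline dataset D.
    rng = rng or np.random.default_rng()
    idx = rng.integers(len(dataset), size=batch_size)
    return [dataset[i] for i in idx]
```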
### II-C Discrete state space
As some algorithms are not suitable for continuous state spaces, and for the
sake of simplification, the state space is discretized through clustering. The
K-means clustering algorithm [32] can discretize the state space, and the
number of clusters can be chosen using the Bayesian information criterion
(BIC) or the Akaike information criterion (AIC) [19].
Clustering with the K-means method and its variants can only produce convex
clusters [33]. Alternatives such as the OPTICS method [34], a density-based
clustering algorithm suitable for clusters of arbitrary shape [35], can be
applied.
In this work, a discrete state space is created using X-means [36], a K-means
variant, with BIC, so there is no need to manually define a specific number of
clusters. Another discrete state space is created using OPTICS for comparison.
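The sketch below approximates this pipeline with scikit-learn: since X-means is not available there, the BIC-based selection of the number of clusters is emulated with Gaussian mixtures, which is a stand-in for, not a reproduction of, the X-means implementation used here; OPTICS is used directly. All parameter values are illustrative.

```python
import numpy as np
from sklearn.cluster import OPTICS
from sklearn.mixture import GaussianMixture

def discretize_states(X: np.ndarray, k_max: int = 30):
    # Emulate X-means + BIC: fit mixtures for k = 2..k_max, keep the BIC optimum.
    models = [GaussianMixture(n_components=k, random_state=0).fit(X)
              for k in range(2, k_max + 1)]
    best = min(models, key=lambda m: m.bic(X))
    xmeans_like_labels = best.predict(X)   # one discrete state id per semester
    # Density-based alternative that allows arbitrarily shaped clusters.
    optics_labels = OPTICS(min_samples=10).fit_predict(X)
    return xmeans_like_labels, optics_labels
```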
### II-D Off-policy policy evaluation (OPE)
Offline reinforcement learning algorithms are trained on offline data, yet the
agent must perform in real situations. Although, in some problems, evaluation
can be performed through simulation, the same restrictions that led to the use
of offline RL often apply to evaluation as well. It is therefore important to
predict how well the new policy will perform before deploying it. Off-policy
policy evaluation tackles this performance-prediction problem by producing an
estimate that minimizes some notion of error [16]. Alternatives to OPE include
crowd-sourced human labeling of agent actions [37], expert qualitative
analysis [20], and policy ranking [27].
Recently, supervised learning has benefited from the large and diverse
training datasets available [38]. There are a few initiatives to establish
offline RL benchmarks, such as [39, 27, 40], but the field still lacks
realistic evaluation protocols [39].
Off-policy policy evaluation is an active field of research, and multiple
estimators can be applied. Two of them, used here to evaluate and compare
offline RL methodologies, are the Sequential Weighted Doubly-Robust (SWDR) and
the Model and Guided Importance Sampling Combining (MAGIC) estimators [16].
Both are Doubly-Robust (DR) based methods specifically designed for evaluating
policies on RL problems where the horizon of episodes is longer than one.
Besides the low-bias advantage inherited from the DR method [41] when either
the action propensities or the reward function is accurate, these estimators
balance the bias-variance trade-off while maintaining asymptotic consistency
[16].
To describe the MAGIC and SWDR methods, let $\widehat{r}^{\pi}(s,a,t)$ denote
an approximate MDP model's prediction of the reward $t$ steps later, $R_{t}$,
if $S_{0}=s$, $A_{0}=a$, and the policy $\pi$ is used to generate the
subsequent actions. Let
$\widehat{r}^{\pi}(s,t)=\displaystyle\sum_{a\in\mathcal{A}}\pi(a|s)\widehat{r}^{\pi}(s,a,t)$
be a prediction of $R_{t}$ if $S_{0}=s$ and the policy $\pi$ is used to
generate actions $A_{0},A_{1},\dots$, for all
$(s,t,\pi)\in\mathcal{S}\times\mathbb{N}_{\geq 0}\times\Pi$. We can also
define the estimated state value $\widehat{v}^{\pi}(s)$ and the estimated
state-action value $\widehat{q}^{\pi}(s,a)$ as, respectively,
$\widehat{v}^{\pi}(s)=\sum^{\infty}_{t=0}\gamma^{t}\widehat{r}^{\pi}(s,t),$
$\widehat{q}^{\pi}(s,a)=\sum^{\infty}_{t=0}\gamma^{t}\widehat{r}^{\pi}(s,a,t).$
Consider a historical dataset $D=\\{H_{i}|\pi_{i}\\}^{n}_{i=1}$, a set of $n$
trajectories together with the known policies, called behavior policies, that
generated them. Ref. [16] proposes a partial importance sampling estimator
called the off-policy $j$-step return, $g^{(j)}(D)$, which uses an
importance-sampling-based method under the evaluation policy $\pi_{e}$ to
predict the outcome up to $R_{j}$ and the approximate model estimator to
predict the outcomes thereafter. That is, for all $j\in\mathbb{N}_{\geq-1}$
and using the weighted doubly-robust (WDR) method, let
$\begin{split}g^{(j)}(D)=\displaystyle\sum^{n}_{i=1}\displaystyle\sum^{j}_{t=0}\gamma^{t}w^{i}_{t}R^{H_{i}}_{t}+\displaystyle\sum^{n}_{i=1}\gamma^{j+1}w^{i}_{j}\widehat{v}^{\pi_{e}}(S^{H_{i}}_{j+1})\\\
-\displaystyle\sum^{n}_{i=1}\displaystyle\sum^{j}_{t=0}\gamma^{t}(w^{i}_{t}\widehat{q}^{\pi_{e}}(S^{H_{i}}_{t},A^{H_{i}}_{t})-w^{i}_{t-1}\widehat{v}^{\pi_{e}}(S^{H_{i}}_{t})),\end{split}$
(4)
where $n=|D|$, $S^{H}_{t}$ denotes the state at time $t$ in trajectory $H$,
and $w^{i}_{t}$ is the weighted importance sampling weight
$w^{i}_{t}=\displaystyle\frac{\rho^{i}_{t}}{\displaystyle\sum^{n}_{j=1}\rho^{j}_{t}},$
where
$\rho_{t}(H,\pi_{e},\pi_{b})=\prod^{t}_{i=0}\frac{\pi_{e}(A^{H}_{i}|S^{H}_{i})}{\pi_{b}(A^{H}_{i}|S^{H}_{i})}$
is an importance weight: the probability of the first $t$ steps of $H$ under
the evaluation policy $\pi_{e}$ divided by its probability under the behavior
policy $\pi_{b}$. For brevity, $\rho_{t}(H_{i},\pi_{e},\pi_{i})$ is written
here as $\rho^{i}_{t}$.
Notice that $g^{(-1)}(D)$ is a purely model-based estimator, $g^{(\infty)}(D)$
is the WDR estimator, and the other off-policy $j$-step returns are partial
WDR estimators that blend between these two extremes. So, the WDR estimator
can be defined as
$\text{WDR}(D)=g^{(\infty)}(D)=\displaystyle\lim_{j\to\infty}g^{(j)}(D).$
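A small sketch (illustrative, not the authors' code) of the cumulative importance ratios $\rho^{i}_{t}$ and the normalized weights $w^{i}_{t}$: `pi_e` and `pi_b` are hypothetical arrays holding, for each trajectory $i$ and step $t$, the probability that the evaluation and behavior policies assign to the action actually taken.

```python
import numpy as np

def weighted_is_weights(pi_e: np.ndarray, pi_b: np.ndarray) -> np.ndarray:
    # rho[i, t] = prod_{k <= t} pi_e(a_k | s_k) / pi_b(a_k | s_k)
    rho = np.cumprod(pi_e / pi_b, axis=1)
    # w[i, t] normalizes rho over the n trajectories at each step t.
    return rho / rho.sum(axis=0, keepdims=True)
```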
A Blending Importance Sampling and Model (BIM) estimator is defined as
$\text{BIM}(D)=\mathbf{x}^{\top}\mathbf{g}(D)$, where
$\mathbf{x}=(x_{-1},x_{0},x_{1},\dots)^{\top}$ is a weight vector and
$\mathbf{g}(D)=(g^{(-1)}(D),g^{(0)}(D),\dots)^{\top}$. We estimate
$\mathbf{x}^{*}$ by minimizing an approximation of
$\text{MSE}(\mathbf{x}^{\top}\mathbf{g}(D),v(\pi_{e}))$. For the
approximation, we use a subset of the returns, $\\{g^{(j)}(D)\\}$, for
$j\in\mathcal{J}$, where $|\mathcal{J}|<\infty$. For all $j\notin\mathcal{J}$,
we assign $\mathbf{x}_{j}=0$, and we always include $-1$ and $\infty$ in
$\mathcal{J}$. Let $\mathbf{g}_{\mathcal{J}}(D)\in\mathbb{R}^{|\mathcal{J}|}$
be the elements of $\mathbf{g}(D)$ whose indexes are in $\mathcal{J}$, i.e.,
the returns that will not necessarily be given weights of zero, and let
$\mathcal{J}_{j}$ denote the $j$th element of $\mathcal{J}$. Before redefining
the BIM estimator, we introduce the bias approximation
$\widehat{\mathbf{b}}_{n}$ and the covariance approximation
$\widehat{\Omega}_{n}$ for $n$ trajectories in $D$.
After computing the percentile bootstrap $10\%$ confidence interval, $[l,u]$,
for the mean of $g^{(\infty)}(D)$, which we ensure includes $\text{WDR}(D)$,
the bias approximation is defined as
$\widehat{\mathbf{b}}_{n}(j)\leftarrow\begin{cases}g^{(\mathcal{J}_{j})}(D)-u,&\quad\text{if
}g^{(\mathcal{J}_{j})}(D)>u\\\ g^{(\mathcal{J}_{j})}(D)-l,&\quad\text{if
}g^{(\mathcal{J}_{j})}(D)<l\\\ 0,&\quad\text{otherwise.}\end{cases}$
The covariance approximation $\widehat{\Omega}_{n}$ is defined as
$\begin{split}\widehat{\Omega}_{n}(i,j)=\displaystyle\frac{n}{n-1}\displaystyle\sum^{n}_{k=1}(g^{(\mathcal{J}_{i})}_{k}(D)-\overline{g}^{(\mathcal{J}_{i})}(D))\\\
\times(g^{(\mathcal{J}_{j})}_{k}(D)-\overline{g}^{(\mathcal{J}_{j})}(D)),\end{split}$
(5)
where
$\overline{g}^{(\mathcal{J}_{i})}(D)=\displaystyle\frac{1}{n}\displaystyle\sum^{n}_{k=1}g^{(\mathcal{J}_{i})}_{k}(D).$
(6)
With these approximations, the BIM estimator is redefined as
$\text{BIM}(D,\widehat{\Omega}_{n},\widehat{\mathbf{b}}_{n})=(\widehat{\mathbf{x}}^{*})^{\top}\mathbf{g}_{\mathcal{J}}(D),$
where
$\widehat{\mathbf{x}}^{*}\in\underset{\mathbf{x}\in\Delta^{|\mathcal{J}|}}{\mathrm{arg\,min\,}}\mathbf{x}^{\top}[\widehat{\Omega}_{n}+\widehat{\mathbf{b}}_{n}\widehat{\mathbf{b}}^{\top}_{n}]\mathbf{x}.$
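The weight selection is a small quadratic program over the probability simplex; a sketch (assuming SciPy, with `Omega_hat` and `b_hat` the approximations defined above) is:

```python
import numpy as np
from scipy.optimize import minimize

def magic_weights(Omega_hat: np.ndarray, b_hat: np.ndarray) -> np.ndarray:
    # Minimize x^T [Omega + b b^T] x subject to x >= 0 and sum(x) = 1.
    A = Omega_hat + np.outer(b_hat, b_hat)
    m = len(b_hat)
    x0 = np.full(m, 1.0 / m)  # start from uniform weights on the simplex
    res = minimize(lambda x: x @ A @ x, x0,
                   bounds=[(0.0, 1.0)] * m,
                   constraints=({"type": "eq", "fun": lambda x: x.sum() - 1.0},),
                   method="SLSQP")
    return res.x
```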
Both estimators are designed to evaluate policies acting sequentially and are
step-wise estimators. SWDR achieves low bias if either the action propensities
or the reward function is accurate; it uses weighted importance sampling and
balances the bias-variance trade-off while maintaining asymptotic consistency
[42]. MAGIC also balances the bias-variance trade-off by combining SWDR with a
purely model-based estimator (Blending Importance Sampling and Model, BIM)
[16].
## III Methodology
This work uses student evasion data from undergraduate students who started
and completed their courses between $2008$ and $2018$ at the Federal
University of Espírito Santo (Ufes). The data consist of pseudo-anonymized
academic, social, and demographic information covering $13150$ observed
semesters from a total of $1342$ students. Each observed semester is
represented by the subjects taken in the period and has a total of $37$
features. Each student is represented by a sequence of academic terms; at the
end of this sequence, the form of evasion is known and determines the
student's outcome, i.e., success or not.
An initial state space is defined as a $10$-dimensional continuous state,
consisting only of academic information obtained after a manual aggregation of
all courses taken in each term, as listed in Table I. A discretization of the
state features is then used to represent the state space, where the discrete
state is the identifier of the cluster to which it belongs. The result is a
simplified one-dimensional discrete state space defined through the clustering
of those $10$ initial features. Two clustering algorithms are evaluated in
this work by creating different discrete state spaces using X-means and
OPTICS.
TABLE I: State features
Feature | Description
---|---
CH CURSO | Total major’s course hours
NUM PERIODOS SUGERIDO | Major’s suggested terms
NUM MAX PERIODOS | Major’s maximum allowed terms
MEDIA FINAL mean | Mean of aggregated grades of all courses in the term
MEDIA FINAL std | Standard deviation of aggregated grades of all courses in the term
CH DISCIPLINA mean | Mean of aggregated hours per course of all courses in the term
CH DISCIPLINA std | Standard deviation of aggregated hours per course of all courses in the term
NUM FALTAS mean | Mean of aggregated student’s absences over all courses in the term
NUM FALTAS std | Standard deviation of aggregated student’s absences over all courses in the term
COD DISCIPLINA count | Number of completed courses
The discretized states are represented in Fig. 2, where the cluster centroids
are shown after dimensionality reduction to 3D using PCA. The frequency with
which a given state occurs in the dataset is represented by the size of the
circle: the larger the circle, the more frequent the state. The occurrence of
trajectories that end in evasion is highlighted by the color of each circle,
according to the percentile scale in the sidebar of the figure. A spontaneous
gradient in the dropout rate supports the validity of this state
representation, as it indicates that there is, in fact, an association between
state membership and dropout rate. The OPTICS algorithm does not produce a
clustering explicitly [34]; it does not define a center for each cluster, so
it is not possible to present a similar representation of its states.
Figure 2: State clusterization using X-means. Circle size represents state
frequency in the dataset. Circle color represents the dropout rate in that
state.
A one-dimensional discrete action space describes the actions deployed to the
student in a given term. The action space consists of all possible
combinations of two action features: the existence of a mandatory supervised
study plan for that student ($2$ options) and the type of available student
aid ($5$ options). In that manner, each action is represented by an index over
all $10$ possibilities. In the Ufes dataset, only one action per action type
is possible in each semester.
The reward function is sparse, returning a non-zero signal only if the current
state is the last state in the student trajectory and the student has a
successful outcome. It is defined as:
$R(s_{t},a_{t})=\begin{cases}1,&\text{if }s_{t}\text{ is a terminal
academic}\\\ &\quad\quad\text{successful state}\\\ 0,&\text{otherwise.}\\\
\end{cases}$ (7)
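Eq. (7) translates directly into code; `is_terminal_success` is a hypothetical predicate marking a terminal, academically successful state:

```python
def reward(s_t, a_t, is_terminal_success) -> float:
    # Sparse reward: 1 only at a terminal successful state, 0 everywhere else.
    return 1.0 if is_terminal_success(s_t) else 0.0
```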
To create a new policy from the logged policy in the dataset, we used the
Dueling Double DQN [30] algorithm. This off-policy reinforcement learning
algorithm can be set up to work online or offline; we deployed it in the
offline setup required by the aforementioned problem, according to Algorithm
1. The performance of the newly learned policies is evaluated through
off-policy policy evaluation of the reward, using the MAGIC and SWDR methods
described in Algorithm 2 and Algorithm 3, respectively.
Algorithm 1 Dueling Double DQN[30]
0:
* •
$\mathcal{D}$ \- dataset of transitions;
* •
$\theta$ \- initial network parameters,
* •
$\tau$ \- target update rate;
* •
$N_{b}$ \- training batch size;
* •
$\bar{N}$ \- target network replacement frequency.
1: $\bar{\theta}\leftarrow\theta$
2: for $t\in\\{0,1,\dots\\}$ do
3: Sample a minibatch of $N_{b}$ tuples
$(s,a,r,s^{\prime})\sim\mathcal{U}(\mathcal{D})$.
4: Construct target values, one for each of the $N_{b}$ tuples.
5:
$a^{max}(s^{\prime};{\theta})=\underset{a^{\prime}}{\mathrm{arg\,max\,}}Q_{\pi}^{\theta}(s^{\prime},a^{\prime})$
6: if $s^{\prime}$ is terminal then
7: $y_{j}=r$
8: else
9:
$y_{j}=r+\gamma Q_{\pi}^{\bar{\theta}}(s^{\prime},a^{max}(s^{\prime};{\theta}))$
10: end if
11: Do a gradient descent step on the loss in Eq. 3.
12: Update the target parameters
$\bar{\theta}\leftarrow\tau\theta+(1-\tau)\bar{\theta}$ every $\bar{N}$ steps.
13: Calculate OPEs according to Algorithm 2 and Algorithm 3.
14: end for
15: return Newly learned policy $\pi$
Algorithm 2 Model and Guided Importance Sampling Combining (MAGIC) [16]
0:
* •
$D$: Historical data;
* •
$\pi_{e}$: Evaluation policy;
* •
Approximate model that allows for computation of $\hat{r}^{\pi_{e}}(s,a,t)$;
* •
$\mathcal{J}$: The set of return lengths to consider;
* •
$k$: The number of bootstrap resamples.
1: Calculate $\widehat{\Omega}_{n}$ according to Eq. 5.
2: Allocate $D_{(.)}$ so that for all $i\in\\{1,\dots,k\\},D_{i}$ can hold $n$
trajectories.
3: for $i\in\\{1,\dots k\\}$ do
4: Load $D_{i}$ with $n$ uniform random samples drawn from $D$ with
replacement.
5: end for
6: $\mathbf{v}=\mathrm{sort}(g^{(\infty)}(D_{(.)}))$
7: $l\leftarrow\min\\{\text{WDR}(D),\mathbf{v}(\lfloor 0.05k\rfloor)\\}$
8: $u\leftarrow\max\\{\text{WDR}(D),\mathbf{v}(\lceil 0.95k\rceil)\\}$
9: for $j\in\\{1,\dots|\mathcal{J}|\\}$ do
10:
$\widehat{\mathbf{b}}_{n}(j)\leftarrow\begin{cases}g^{(\mathcal{J}_{j})}(D)-u,&\quad\text{if
}g^{(\mathcal{J}_{j})}(D)>u\\\ g^{(\mathcal{J}_{j})}(D)-l,&\quad\text{if
}g^{(\mathcal{J}_{j})}(D)<l\\\ 0,&\quad\text{otherwise.}\end{cases}$
11: end for
12:
$\mathbf{x}\leftarrow\underset{\mathbf{x}\in\Delta^{|\mathcal{J}|}}{\mathrm{arg\,min\,}}\mathbf{x}^{\top}[\widehat{\Omega}_{n}+\widehat{\mathbf{b}}_{n}\widehat{\mathbf{b}}^{\top}_{n}]\mathbf{x}$
13: return $\mathbf{x}^{\top}\mathbf{g}_{\mathcal{J}}(D)$
Algorithm 3 Sequential weighted doubly-robust (SWDR) [16]
0:
* •
$D$: Historical data;
* •
$\pi_{e}$: Evaluation policy;
* •
Approximate model that allows for computation of $\hat{r}^{\pi_{e}}(s,a,t)$;
* •
$k$: The number of bootstrap resamples.
1: $\mathcal{J}=\\{\infty\\}$
2: return $\text{MAGIC}(D,\pi_{e},\hat{r}^{\pi_{e}}(s,a,t),\mathcal{J},k)$
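In code, SWDR reduces to a one-line call into a MAGIC implementation; `magic` below is a hypothetical function mirroring Algorithm 2.

```python
import math

def swdr(D, pi_e, r_hat, k):
    # SWDR is MAGIC restricted to the single return length J = {infinity}.
    return magic(D, pi_e, r_hat, J={math.inf}, k=k)
```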
## IV Experimental Results
In our tests, for each clustering method, we defined $3$ different discrete
datasets and performed one trial for each. We also performed limited tuning,
starting from the values used in [30]. The offline setup of the Dueling Double
DQN [30] algorithm was used to create a new policy from the logged policy. The
Q-network is a fully-connected layer of size $128$ followed by two parallel
fully-connected layers of size $32$, creating a dueling architecture. The
network is trained for $25$ epochs with a minibatch size of $512$. The
optimizer used was ADAM with a learning rate of $0.01$ and a learning rate
decay of $0.999$. The target update rate is $0.1$ and the discount factor is
$0.99$. All experiments were run on a computer with the following
specifications: i5-8265U processor, 8 GB DDR4 RAM system memory, $1$ TB HD
(SATA 3.1, $5400$ RPM) and $240$ GB SSD (PCIe NVMe Gen 3.0 x2) storage. Each
discretization took $127.945$ seconds on average and each trial policy
training took $38.117$ seconds on average.
Fig. 3a and Fig. 3b show the OPE cumulative reward score of the policies using
X-means and OPTICS, respectively, to discretize the state space. Both show a
solid line as the average performance along the training steps and translucent
error bands as the $95\%$ confidence range. The OPE cumulative reward score
(value axis) represents the ratio of the new policy's performance to that of
the logged policy. Therefore, the dashed line ($value=1$) marks the point
where the learned policy is equivalent to the logged policy; values above it
mean that the new policy performs better, and values below it represent worse
performance.
(a)
(b)
Figure 3: OPE cumulative reward score (value axis), mean (solid line) and 95%
confidence (translucent error bands) over all trials, using X-means
LABEL:sub@fig_CPE_reward_xmeans and OPTICS LABEL:sub@fig_CPE_reward_optics to
discretize the state space. Notice that the RL model should achieve roughly
$1.0$ to $1.5$ times as much cumulative reward as the logged policy in both
scenarios.
In both figures, the SWDR estimator shows a value close to $1.5$, i.e., a
$50\%$ improvement over the logged policy, with the OPTICS method having a
narrower $95\%$ confidence band. The MAGIC estimator, however, shows that the
learned policies performed approximately equivalently to the logged policy.
This also means that it was possible to learn a policy using either clustering
method to discretize the state space. Despite that, there is no guarantee of
the effectiveness of the learned policies due to overestimation: a
distributional shift can induce overestimation if the learned policy does not
stay close to the behavior policy [43]. The difficulty of properly estimating
policy performance before deployment is a major issue in applying offline RL
to real-world scenarios [15], which can explain the pronounced difference
between the estimators.
A comparison between the actions taken by the new policies and the logged
policy is shown in Fig. 4a and Fig. 4b, using X-means and OPTICS for
clustering, respectively. Logged policy action frequencies are expressed as
the mean of the logged action occurrences evaluated over all seeds. Note that
all learned policies concentrate on almost only two actions, which may
indicate either that the other actions have no impact on improving the reward
or that, due to their low occurrence (low exploration), it was not possible to
learn their proper use.
(a)
(b)
Figure 4: Count of occurrences of each possible action in the evaluation set
after training, using X-means LABEL:sub@fig_actions_bar_xmeans and OPTICS
LABEL:sub@fig_actions_bar_optics clustering. Logged policy action frequency is
expressed as the mean of the logged action occurrences evaluated over all
seeds.
## V Conclusion
Recent developments in the machine learning field provide new ways to address
persistent problems such as student dropout. In this paper, a methodology is
proposed to reduce student dropout through a decision support method that
selects appropriate aid actions using offline reinforcement learning.
Our analyses were performed using real data from students at a Brazilian
university and show promising results in the ability to produce an AI policy
at least equivalent to the logged one, achieving roughly $1.0$ to $1.5$ times
as much cumulative reward as the logged policy. However, due to overestimation
in offline RL when the learned policy loses similarity to the policy that
generated the dataset, there is no guarantee of the learned policy's
effectiveness. Selecting algorithms that encourage policies to stay close to
the behavior policy can reduce that problem.
The proposed discretization of the state space seems suitable for learning a
policy in the offline RL context. This work also compares the impact of two
different clustering methods for that discretization, showing similar
performance.
For future work, identifying the most informative features using PCA,
determining which features are most suitable for the state space, and
exploring a continuous state space approach might improve performance.
Providing interpretability for the agent's decisions through explainable
reinforcement learning and understanding their impacts also seems promising. A
further investigation of the cases in which students were able to avoid
dropout, clarifying which actions and time spans are appropriate, can be
insightful for decision-makers.
## References
* [1] Unesco, “Education: gross enrolment ratio by level of education.” [Online]. Available: http://data.uis.unesco.org/index.aspx?queryid=142
* [2] OECD, _Education at a glance 2019_. Paris: OECD Publishing, 2019.
* [3] R. Balaniuk, H. A. do Prado, R. da Veiga Guadagnin, E. Ferneda, and P. R. Cobbe, “Predicting evasion candidates in higher education institutions,” in _International Conference on Model and Data Engineering_. Springer, 2011, pp. 143–151.
* [4] H. Aldowah, H. Al-Samarraie, and W. M. Fauzy, “Educational data mining and learning analytics for 21st century higher education: A review and synthesis,” _Telematics and Informatics_ , vol. 37, pp. 13–49, 2019.
* [5] A. Sales, L. Balby, and A. Cajueiro, “Exploiting academic records for predicting student drop out: a case study in Brazilian higher education,” _Journal of Information and Data Management_ , vol. 7, no. 2, pp. 166–166, 2016.
* [6] L. Kemper, G. Vorhoff, and B. U. Wigger, “Predicting student dropout: a machine learning approach,” _European Journal of Higher Education_ , vol. 10, no. 1, pp. 28–47, 2020.
* [7] M. Alban and D. Mauricio, “Predicting university dropout through data mining: a systematic literature,” _Indian Journal of Science and Technology_ , vol. 12, no. 4, pp. 1–12, 2019.
* [8] S. Ju, G. Zhou, T. Barnes, and M. Chi, “Pick the moment: identifying critical pedagogical decisions using long-short term rewards,” in _Proceedings of International Conference on Educational Data Mining_ , 2020.
* [9] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski _et al._ , “Human-level control through deep reinforcement learning,” _Nature_ , vol. 518, no. 7540, pp. 529–533, 2015.
* [10] J. Schulman, S. Levine, P. Abbeel, M. Jordan, and P. Moritz, “Trust region policy optimization,” in _International Conference on Machine Learning_ , 2015, pp. 1889–1897.
* [11] Y. Luo, H. Xu, Y. Li, Y. Tian, T. Darrell, and T. Ma, “Algorithmic framework for model-based deep reinforcement learning with theoretical guarantees,” _arXiv preprint arXiv:1807.03858_ , 2018.
* [12] R. Agarwal, D. Schuurmans, and M. Norouzi, “An optimistic perspective on offline reinforcement learning,” _arXiv preprint arXiv:1907.04543_ , 2019.
* [13] S. Fujimoto, E. Conti, M. Ghavamzadeh, and J. Pineau, “Benchmarking batch deep reinforcement learning algorithms,” _arXiv preprint arXiv:1910.01708v1_ , 2019.
* [14] R. S. Sutton and A. G. Barto, _Reinforcement learning: an introduction_. MIT press, 2018.
* [15] S. Levine, A. Kumar, G. Tucker, and J. Fu, “Offline reinforcement learning: tutorial, review, and perspectives on open problems,” _arXiv preprint arXiv:2005.01643_ , 2020.
* [16] P. Thomas and E. Brunskill, “Data-efficient off-policy policy evaluation for reinforcement learning,” in _International Conference on Machine Learning_ , 2016, pp. 2139–2148.
* [17] S. Reddy, A. D. Dragan, and S. Levine, “SQIL: imitation learning via reinforcement learning with sparse rewards,” _arXiv preprint arXiv:1905.11108_ , 2019.
* [18] L. Luceri, S. Giordano, and E. Ferrara, “Detecting troll behavior via inverse reinforcement learning: a case study of Russian trolls in the 2016 US election,” in _Proceedings of the International AAAI Conference on Web and Social Media_ , vol. 14, 2020, pp. 417–427.
* [19] M. Komorowski, L. A. Celi, O. Badawi, A. C. Gordon, and A. A. Faisal, “The artificial intelligence clinician learns optimal treatment strategies for sepsis in intensive care,” _Nature medicine_ , vol. 24, no. 11, pp. 1716–1720, 2018.
* [20] A. Raghu, M. Komorowski, I. Ahmed, L. Celi, P. Szolovits, and M. Ghassemi, “Deep reinforcement learning for sepsis treatment,” _arXiv preprint arXiv:1711.09602_ , 2017.
* [21] M. L. Puterman, “Markov decision processes,” _Handbooks in operations research and management science_ , vol. 2, pp. 331–434, 1990.
* [22] T. Matsushima, H. Furuta, Y. Matsuo, O. Nachum, and S. Gu, “Deployment-efficient reinforcement learning via model-based offline optimization,” _arXiv preprint arXiv:2006.03647_ , 2020.
* [23] S. Lange, T. Gabel, and M. Riedmiller, “Batch reinforcement learning,” in _Reinforcement learning_. Springer, 2012, pp. 45–73.
* [24] T. Hester, M. Vecerik, O. Pietquin, M. Lanctot, T. Schaul, B. Piot, D. Horgan, J. Quan, A. Sendonaris, I. Osband _et al._ , “Deep Q-learning from demonstrations,” in _AAAI Conference on Artificial Intelligence_ , 2018.
* [25] A. Kumar, J. Fu, M. Soh, G. Tucker, and S. Levine, “Stabilizing off-policy Q-learning via bootstrapping error reduction,” in _Advances in Neural Information Processing Systems_ , 2019, pp. 11 784–11 794.
* [26] S. Fujimoto, D. Meger, and D. Precup, “Off-policy deep reinforcement learning without exploration,” _arXiv preprint arXiv:1812.02900_ , 2018.
* [27] Y. Wu, G. Tucker, and O. Nachum, “Behavior regularized offline reinforcement learning,” _arXiv preprint arXiv:1911.11361_ , 2019.
* [28] A. Nair, M. Dalal, A. Gupta, and S. Levine, “Accelerating online reinforcement learning with offline datasets,” _arXiv preprint arXiv:2006.09359_ , 2020.
* [29] Z. Wang, T. Schaul, M. Hessel, H. Hasselt, M. Lanctot, and N. Freitas, “Dueling network architectures for deep reinforcement learning,” in _International Conference on Machine Learning_ , 2016, pp. 1995–2003.
* [30] M. Hessel, J. Modayil, H. Van Hasselt, T. Schaul, G. Ostrovski, W. Dabney, D. Horgan, B. Piot, M. Azar, and D. Silver, “Rainbow: combining improvements in deep reinforcement learning,” in _AAAI Conference on Artificial Intelligence_ , 2018.
* [31] H. Van Hasselt, A. Guez, and D. Silver, “Deep reinforcement learning with double Q-learning,” _arXiv preprint arXiv:1509.06461_ , 2015.
* [32] D. Arthur and S. Vassilvitskii, “K-means++: the advantages of careful seeding,” in _Proceedings of ACM-SIAM Symposium on Discrete Algorithms_ , USA, 2007, p. 1027–1035.
* [33] P. Mitra, S. K. Pal, and M. A. Siddiqi, “Non-convex clustering using expectation maximization algorithm with rough set initialization,” _Pattern Recognition Letters_ , vol. 24, no. 6, pp. 863–873, 2003.
* [34] M. Ankerst, M. M. Breunig, H.-P. Kriegel, and J. Sander, “OPTICS: ordering points to identify the clustering structure,” _ACM SIGMOD Record_ , vol. 28, no. 2, pp. 49–60, 1999.
* [35] A. Nagpal, A. Jatain, and D. Gaur, “Review based on data clustering algorithms,” in _IEEE Conference on Information & Communication Technologies_, 2013, pp. 298–303.
* [36] D. Pelleg, A. W. Moore _et al._ , “X-means: extending k-means with efficient estimation of the number of clusters.” in _International Conference on Machine Learning_ , vol. 1, 2000, pp. 727–734.
* [37] N. Jaques, A. Ghandeharioun, J. H. Shen, C. Ferguson, A. Lapedriza, N. Jones, S. Gu, and R. Picard, “Way off-policy batch deep reinforcement learning of implicit human preferences in dialog,” _arXiv preprint arXiv:1907.00456_ , 2019.
* [38] I. Goodfellow, Y. Bengio, and A. Courville, _Deep learning_. MIT press, 2016.
* [39] J. Fu, A. Kumar, O. Nachum, G. Tucker, and S. Levine, “D4RL: datasets for deep data-driven reinforcement learning,” _arXiv preprint arXiv:2004.07219_ , 2020.
* [40] C. Gulcehre, Z. Wang, A. Novikov, T. L. Paine, S. G. Colmenarejo, K. Zolna, R. Agarwal, J. Merel, D. Mankowitz, C. Paduraru _et al._ , “RL unplugged: benchmarks for offline reinforcement learning,” _arXiv preprint arXiv:2006.13888_ , 2020.
* [41] M. Dudík, J. Langford, and L. Li, “Doubly robust policy evaluation and learning,” in _International Conference on Machine Learning_ , 2011, p. 1097–1104.
* [42] J. Gauci, E. Conti, Y. Liang, K. Virochsiri, Y. He, Z. Kaden, V. Narayanan, X. Ye, Z. Chen, and S. Fujimoto, “Horizon: Facebook’s open source applied reinforcement learning platform,” _arXiv preprint arXiv:1811.00260_ , 2018.
* [43] T. L. Paine, C. Paduraru, A. Michi, C. Gulcehre, K. Zolna, A. Novikov, Z. Wang, and N. de Freitas, “Hyperparameter selection for offline reinforcement learning,” _arXiv preprint arXiv:2007.09055_ , 2020.
COSINE-100 Collaboration
# Search for inelastic WIMP-iodine scattering with COSINE-100
G. Adhikari Department of Physics and Wright Laboratory, Yale University, New
Haven, CT 06520, USA N. Carlin Physics Institute, University of São Paulo,
05508-090, São Paulo, Brazil J. J. Choi Department of Physics and Astronomy,
Seoul National University, Seoul 08826, Republic of Korea Center for
Underground Physics, Institute for Basic Science (IBS), Daejeon 34126,
Republic of Korea S. Choi Department of Physics and Astronomy, Seoul
National University, Seoul 08826, Republic of Korea A. C. Ezeribe Department
of Physics and Astronomy, University of Sheffield, Sheffield S3 7RH, United
Kingdom L. E. França Physics Institute, University of São Paulo, 05508-090,
São Paulo, Brazil C. Ha Department of Physics, Chung-Ang University, Seoul
06973, Republic of Korea I. S. Hahn Department of Science Education, Ewha
Womans University, Seoul 03760, Republic of Korea Center for Exotic Nuclear
Studies, Institute for Basic Science (IBS), Daejeon 34126, Republic of Korea
IBS School, University of Science and Technology (UST), Daejeon 34113,
Republic of Korea S. J. Hollick Department of Physics and Wright Laboratory,
Yale University, New Haven, CT 06520, USA E. J. Jeon Center for Underground
Physics, Institute for Basic Science (IBS), Daejeon 34126, Republic of Korea
J. H. Jo Department of Physics and Wright Laboratory, Yale University, New
Haven, CT 06520, USA H. W. Joo Department of Physics and Astronomy, Seoul
National University, Seoul 08826, Republic of Korea W. G. Kang Center for
Underground Physics, Institute for Basic Science (IBS), Daejeon 34126,
Republic of Korea M. Kauer Department of Physics and Wisconsin IceCube
Particle Astrophysics Center, University of Wisconsin-Madison, Madison, WI
53706, USA B. H. Kim Center for Underground Physics, Institute for Basic
Science (IBS), Daejeon 34126, Republic of Korea H. J. Kim Department of
Physics, Kyungpook National University, Daegu 41566, Republic of Korea J. Kim
Department of Physics, Chung-Ang University, Seoul 06973, Republic of Korea
K. W. Kim Center for Underground Physics, Institute for Basic Science (IBS),
Daejeon 34126, Republic of Korea S. H. Kim Center for Underground Physics,
Institute for Basic Science (IBS), Daejeon 34126, Republic of Korea S. K. Kim
Department of Physics and Astronomy, Seoul National University, Seoul 08826,
Republic of Korea W. K. Kim IBS School, University of Science and Technology
(UST), Daejeon 34113, Republic of Korea Center for Underground Physics,
Institute for Basic Science (IBS), Daejeon 34126, Republic of Korea Y. D. Kim
Center for Underground Physics, Institute for Basic Science (IBS), Daejeon
34126, Republic of Korea Department of Physics, Sejong University, Seoul
05006, Republic of Korea IBS School, University of Science and Technology
(UST), Daejeon 34113, Republic of Korea Y. H. Kim Center for Underground
Physics, Institute for Basic Science (IBS), Daejeon 34126, Republic of Korea
Korea Research Institute of Standards and Science, Daejeon 34113, Republic of
Korea IBS School, University of Science and Technology (UST), Daejeon 34113,
Republic of Korea Y. J. Ko Center for Underground Physics, Institute for
Basic Science (IBS), Daejeon 34126, Republic of Korea D. H. Lee Department
of Physics, Kyungpook National University, Daegu 41566, Republic of Korea E.
K. Lee Center for Underground Physics, Institute for Basic Science (IBS),
Daejeon 34126, Republic of Korea H. Lee IBS School, University of Science
and Technology (UST), Daejeon 34113, Republic of Korea Center for Underground
Physics, Institute for Basic Science (IBS), Daejeon 34126, Republic of Korea
H. S. Lee Center for Underground Physics, Institute for Basic Science (IBS),
Daejeon 34126, Republic of Korea IBS School, University of Science and
Technology (UST), Daejeon 34113, Republic of Korea H. Y. Lee Center for
Underground Physics, Institute for Basic Science (IBS), Daejeon 34126,
Republic of Korea I. S. Lee Center for Underground Physics, Institute for
Basic Science (IBS), Daejeon 34126, Republic of Korea J. Lee Center for
Underground Physics, Institute for Basic Science (IBS), Daejeon 34126,
Republic of Korea J. Y. Lee Department of Physics, Kyungpook National
University, Daegu 41566, Republic of Korea M. H. Lee Center for Underground
Physics, Institute for Basic Science (IBS), Daejeon 34126, Republic of Korea
IBS School, University of Science and Technology (UST), Daejeon 34113,
Republic of Korea S. H. Lee IBS School, University of Science and Technology
(UST), Daejeon 34113, Republic of Korea Center for Underground Physics,
Institute for Basic Science (IBS), Daejeon 34126, Republic of Korea S. M. Lee
Department of Physics and Astronomy, Seoul National University, Seoul 08826,
Republic of Korea Y. J. Lee Department of Physics, Chung-Ang University,
Seoul 06973, Republic of Korea D. S. Leonard Center for Underground Physics,
Institute for Basic Science (IBS), Daejeon 34126, Republic of Korea N. T.
Luan Department of Physics, Kyungpook National University, Daegu 41566,
Republic of Korea B. B. Manzato Physics Institute, University of São Paulo,
05508-090, São Paulo, Brazil R. H. Maruyama Department of Physics and Wright
Laboratory, Yale University, New Haven, CT 06520, USA R. J. Neal Department
of Physics and Astronomy, University of Sheffield, Sheffield S3 7RH, United
Kingdom J. A. Nikkel Department of Physics and Wright Laboratory, Yale
University, New Haven, CT 06520, USA S. L. Olsen Center for Underground
Physics, Institute for Basic Science (IBS), Daejeon 34126, Republic of Korea
B. J. Park IBS School, University of Science and Technology (UST), Daejeon
34113, Republic of Korea Center for Underground Physics, Institute for Basic
Science (IBS), Daejeon 34126, Republic of Korea H. K. Park Department of
Accelerator Science, Korea University, Sejong 30019, Republic of Korea H. S.
Park Korea Research Institute of Standards and Science, Daejeon 34113,
Republic of Korea K. S. Park Center for Underground Physics, Institute for
Basic Science (IBS), Daejeon 34126, Republic of Korea S. D. Park Department
of Physics, Kyungpook National University, Daegu 41566, Republic of Korea R.
L. C. Pitta Physics Institute, University of São Paulo, 05508-090, São Paulo,
Brazil H. Prihtiadi Center for Underground Physics, Institute for Basic
Science (IBS), Daejeon 34126, Republic of Korea S. J. Ra Center for
Underground Physics, Institute for Basic Science (IBS), Daejeon 34126,
Republic of Korea C. Rott Department of Physics, Sungkyunkwan University,
Suwon 16419, Republic of Korea Department of Physics and Astronomy,
University of Utah, Salt Lake City, UT 84112, USA K. A. Shin Center for
Underground Physics, Institute for Basic Science (IBS), Daejeon 34126,
Republic of Korea D. F. F. S. Cavalcante Physics Institute, University of
São Paulo, 05508-090, São Paulo, Brazil A. Scarff Department of Physics and
Astronomy, University of Sheffield, Sheffield S3 7RH, United Kingdom N. J. C.
Spooner Department of Physics and Astronomy, University of Sheffield,
Sheffield S3 7RH, United Kingdom W. G. Thompson Department of Physics and
Wright Laboratory, Yale University, New Haven, CT 06520, USA L. Yang
Department of Physics, University of California San Diego, La Jolla, CA 92093,
USA G. H. Yu Department of Physics, Sungkyunkwan University, Suwon 16419,
Republic of Korea Center for Underground Physics, Institute for Basic Science
(IBS), Daejeon 34126, Republic of Korea
###### Abstract
We report the results of a search for inelastic scattering of weakly
interacting massive particles (WIMPs) off 127I nuclei using NaI(Tl) crystals
with a data exposure of 97.7 kg$\cdot$years from the COSINE-100 experiment.
The signature of inelastic WIMP-127I scattering is a nuclear recoil
accompanied by a 57.6 keV $\gamma$-ray from the prompt deexcitation, producing
an energetic signature compared to the typical WIMP nuclear recoil signal. We
found no evidence for this inelastic scattering signature and set a 90 $\%$
confidence level upper limit on the WIMP-proton spin-dependent, inelastic
scattering cross section of $1.2\times 10^{-37}{\rm cm^{2}}$ for a WIMP mass
of 500 ${\rm GeV/c^{2}}$.
## I Introduction
An abundance of astronomical observations indicates the existence of invisible
dark matter in the Universe Clowe:2006eq ; Ade:2015xua . Among the various
candidates for the particle dark matter, the weakly interacting massive
particle (WIMP) PhysRevLett.39.165 ; Goodman:1984dc is a prominent candidate
that is well motivated by theoretical models beyond the standard model of
particle physics Jungman:1995df . Several direct Billard:2021uyg and indirect
Conrad:2017pms detection experiments, as well as collider production
experiments dmcol , have tested this hypothesis, but no clear evidence has yet
been observed Workman:2022ynf .
Most direct detection experiments have focused on WIMP-nucleus elastic
scattering reactions, which result in nuclear recoils with energies of about
10 keVnr (kilo-electron-volt nuclear recoil energy) that are quenched to about
1 keVee (kilo-electron-volt electron-equivalent energy) due to the detector
response Joo:2018hom . This low energy is close to the threshold of many dark
matter detectors PhysRevLett.118.021303 ; Agnese:2017njq ; Aprile:2017iyp ;
DarkSide:2018bpj ; XENON:2018voc ; CRESST:2019jnq ; Zyla:2020zbs .
On the other hand, when a WIMP scatters from a target nucleus in a detector,
it may excite the nucleus to a higher energy state. In this so-called
WIMP-nucleus inelastic scattering scenario ELLIS1988375 ; Vergados:2013raa ;
PhysRevC.102.035501 , the subsequent deexcitation of the nucleus emits a
characteristic $\gamma$-ray that is detected in addition to the nuclear recoil
energy transferred by the WIMP-nucleus interaction. In this scenario, the
inelastic interaction produces a relatively high-energy signature that is well
above the detector’s threshold. Since the nuclear excitation in inelastic
scattering is a spin-dependent interaction, the observation of WIMP inelastic
scattering would provide insight into the spin properties of the WIMP. In the
model of Ref. Arcadi:2019hrw , inelastic scattering channels can have better
sensitivity in some ranges of the WIMP model parameters.
Previous searches for WIMP inelastic scattering have focused on excited states
of 127I FUSHIMI1994400 or 129Xe xmass:20191 ; PhysRevD.96.022008 ;
PhysRevD.103.063028 . In the case of 129Xe, the inelastic nuclear excitation
of the 39.6 keV state relies on the WIMP-neutron spin-dependent interaction.
With a 26 % natural abundance of 129Xe, the exclusion limits on the WIMP-
neutron inelastic scattering channels have been obtained xmass:20191 ;
PhysRevD.103.063028 .
The 127I nucleus has complementary advantages over the 129Xe nucleus: it has
almost 100 % natural abundance and an odd proton number, making it sensitive
to the WIMP-proton spin-dependent interaction. The first excited level of 127I
is a 7/2+ state 57.6 keV above the 5/2+ ground state, with a half-life of 1.9
ns Vergados:2013raa . Therefore, the WIMP-127I inelastic interaction signal
would be a combination of the nuclear recoil and the 57.6 keV deexcitation
energies. The ELEGANTS-V NaI(Tl) experiment FUSHIMI1994400 reported limits on
the rate of events with energies in this region. In this paper, we report a
search for WIMP-127I inelastic scattering events with the COSINE-100 NaI(Tl)
crystal detectors in the signal region of 35–85 keVee.
## II Experiment
The COSINE-100 experiment Adhikari:2017esn is located at the Yangyang
underground laboratory in South Korea, with an overburden of approximately 700
m Kims:2005dol ; Kim:2012rza . The COSINE-100 detector, shown in Fig. 1,
consists of an array of eight ultra-pure NaI(Tl) crystals with a total weight
of 106 kg cosinebg ; cosinebg2 . The NaI(Tl) crystal assemblies are immersed
in an active veto detector comprised of 2,200 L of linear alkylbenzene
(LAB)-based liquid scintillator (LS) that attenuates or tags the radioactive
backgrounds observed by the crystals Park:2017jvs ; Adhikari:2020asl . The
LAB-LS is surrounded by a 3 cm-thick layer of oxygen-free copper, a 20
cm-thick lead shield, and plastic scintillator panels that tag and veto
cosmic-ray muons Prihtiadi:2017inr ; Prihtiadi:2020yhz . Each crystal is
optically coupled to two photomultiplier tubes (PMTs).
Figure 1: Schematic of the COSINE-100 detector. The NaI(Tl) detectors are
immersed in the 2,200 L LAB-LS that is surrounded by layers of copper and lead
shielding.
An event is triggered when signals corresponding to one or more
photoelectrons occur in both PMTs of a crystal within a 200 ns time
window. If at least one crystal satisfies the trigger condition, signals from
all crystals and the LAB-LS are recorded. The signals from the crystal PMTs
are 8 $\mu$s long waveforms that start 2.4 $\mu$s earlier than the trigger
position, and are processed by 500 MHz flash analog-to-digital (FADC)
converters. In addition to the 5${}^{\textrm{th}}$-stage dynode readouts for
high energies (50 keV–3 MeV), the low-energy (0–100 keV) anode readouts are stored
by the FADCs. The anode readouts provide sufficient energy information for
events between 35 keVee and 85 keVee. The LAB-LS and plastic scintillator
signals are processed by charge-sensitive flash analog-to-digital converters.
Muon events are triggered by coincident signals from at least two plastic
scintillator panels. A trigger and clock board reads the trigger information
from individual boards and generates a global trigger and time
synchronizations for all of the modules. Details of the COSINE-100 data
acquisition system are described in Ref. Adhikari:2018fpo .
The analysis presented here utilizes data from October 2016 to July 2018,
corresponding to 1.7 years of exposure, which was used for our first annual
modulation search Adhikari:2019off and the model-dependent WIMP dark matter
search that was based on the shape of the energy spectra COSINE-100:2021xqn .
During the 1.7-year data-taking period, no significant environmental anomalies
or unstable detector performance were observed. Three of the eight crystals
are excluded from this analysis due to their high background, high noise, and
low light yield, resulting in a total effective mass of 61.3 kg
Adhikari:2017esn ; Adhikari:2018ljm .
## III WIMP-127I inelastic scattering signals
An inelastic scattering event that occurs in 127I can result in the nuclear
recoil together with the emission of the 57.6 keV $\gamma$-ray from the
deexcitation. The contribution to the scintillation signal from the nuclear
recoil depends on the velocity distribution of the WIMPs in the galatic dark
matter halo and the nuclear form factor for the spin dependent interaction.
The differential nuclear recoil rate per unit energy of the nuclear components
is described as follows Zyla:2020zbs ,
$\displaystyle\frac{d{R}}{dE_{\rm
nr}}=\frac{\rho_{\chi}}{2m_{\chi}\mu^{2}}\sigma\int^{v_{\rm max}}_{v_{\rm
min}}d^{3}f({\bf v},t),$ (1)
where ${R}$ is the event rate per unit target mass and unit time, $E_{\rm nr}$
is the nuclear recoil energy, $\rho_{\chi}$ is the local dark matter density,
$m_{\chi}$ and $\mu$ are the WIMP mass and the reduced mass of WIMP and 127I
nuclei, respectively, and $\sigma$ is the WIMP-nucleus scattering cross
section for the inelastic interaction. The integral is performed from the
minimum velocity $v_{\rm min}$ of the inelastic scattering to the maximum
velocity $v_{\rm max}$, which is the same as the galactic escape velocity
$v_{\rm esc}$. Because of the excited energy $E_{\rm ex}$, $v_{\rm min}$ is
increased from the minimum velocity of the typical nuclear recoil ($v_{0}$) as
follows,
$\displaystyle v_{\rm min}=v_{0}+\frac{v^{2}_{\rm thr}}{4v_{0}},$ (2)
where $v_{0}=\sqrt{\frac{m_{\rm target}E_{\rm nr}}{2\mu^{2}}}$, $m_{\rm
target}$ $=$ the mass of 127I, and $v^{2}_{\rm thr}=\frac{2\ E_{\rm
ex}}{\mu}$.
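As an illustration of Eq. (2), the sketch below (not part of the analysis code) evaluates $v_{\rm min}$ numerically, with masses and energies in GeV so that velocities come out in units of $c$; the 127I mass value is an approximation.

```python
import numpy as np

C_KM_S = 2.998e5     # speed of light [km/s]
M_I127 = 118.21      # approximate 127I nuclear mass [GeV/c^2] (~127 x 0.9315)
E_EX = 57.6e-6       # 57.6 keV excitation energy expressed in GeV

def v_min_inelastic(e_nr_kev: float, m_chi_gev: float) -> float:
    e_nr = e_nr_kev * 1e-6                          # keV -> GeV
    mu = m_chi_gev * M_I127 / (m_chi_gev + M_I127)  # reduced mass [GeV/c^2]
    v0 = np.sqrt(M_I127 * e_nr / (2.0 * mu**2))     # elastic v_min [units of c]
    v_thr_sq = 2.0 * E_EX / mu                      # threshold velocity squared
    return (v0 + v_thr_sq / (4.0 * v0)) * C_KM_S    # Eq. (2), in km/s
```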
This cross section can be expressed in terms of the cross section of the WIMP-
proton spin-dependent interaction $\sigma_{\rm p}$ by
$\displaystyle\sigma=\frac{4}{3}\frac{\pi}{2J+1}\left(\frac{\mu}{\mu_{\rm
p}}\right)^{2}S(E_{\rm nr})\sigma_{\rm p},$ (3)
where $J=5/2$ is the spin of the ground-state 127I nucleus, $\mu_{\rm p}$ is
the reduced mass of the WIMP and proton, and $S(E_{\rm nr})$ is the nuclear
form factor of inelastic WIMP-127I scattering. We use a recent calculation of
$S(E_{\rm nr})$ for the 127I inelastic interaction in Ref. PhysRevC.102.035501
. We assume the standard halo model of the WIMP velocity distribution, $f({\bf
v},t)$ Lewin:1995rx ; Freese:2012xd ,
$\displaystyle f({\bf
v},t)=\begin{cases}\frac{1}{N_{\mathrm{esc}}}\left(\frac{3}{2\pi\sigma_{v}^{2}}\right)^{3/2}e^{-3[{\bf
v}+{\bf v}_{\mathrm{E}}(t)]^{2}/2\sigma_{v}^{2}},&\mbox{for }\left|{\bf
v}+{\bf v}_{\mathrm{E}}(t)\right|<v_{\rm esc}\\\
0,&\mbox{otherwise,}\end{cases}$ (4)
where $N_{\mathrm{esc}}$ is a normalization constant, ${\bf v}_{\mathrm{E}}$
is the Earth velocity relative to the WIMP dark matter and $\sigma_{v}$ is the
velocity dispersion. The standard halo model parameterization is used with the
local dark matter density $\rho_{\chi}=0.3$ GeV/cm3, $v_{\mathrm{E}}$ = 232
km/s, $\sqrt{2/3}\sigma_{v}$ = 220 km/s, and $v_{\rm esc}$ = 544 km/s
Smith:2006ym .
Because of the short half-life of 1.9 ns of the 57.6 keV excited 127I state,
the energy deposited in the detector will be the sum of the nuclear recoil
energy and the deexcitation energy of 57.6 keV. Since the nuclear recoil
energy is quenched to the visible electron-equivalent energy at approximately
the 5–10 % level Joo:2018hom , the visible energy is expressed as follows,
$\displaystyle E_{\rm vis}=f(E_{\rm nr})\times E_{\rm nr}+E_{\rm ex}.$ (5)
Here, $f(E_{\rm nr})$ is the energy-dependent quenching factor for nuclear
recoils SMITH1990203 . In this search, we use the measured quenching factor of
iodine from Ref. Joo:2018hom with an empirical model described in Ref.
Ko:2019enb . Figure 2 shows the simulated energy spectra for the WIMP masses
of 50 GeV/c2, 500 GeV/c2, and 5,000 GeV/c2 based on Eq. 1. In these energy
spectra, energy resolutions of individual crystal detectors cosinebg2 are
taken into account. The nuclear recoil energy is more relevant for high-mass
WIMPs, and long tails of the energy spectra at high energy are evident.
Figure 2: The expected energy spectra for inelastic scattering of WIMP-127I in
the COSINE-100 detector are shown for WIMP masses of 50 ${\rm GeV/c^{2}}$, 500
${\rm GeV/c^{2}}$, and 5,000 ${\rm GeV/c^{2}}$. The spectra include the
nuclear recoil energy and the deexcitation energy.
## IV Data analysis
### IV.1 Event selection
In order to suppress cosmic-ray muon-induced events, crystal hit events that
are coincident within 30 ms with muon candidate events in the muon detector
Prihtiadi:2017inr ; Prihtiadi:2020yhz are rejected. Additionally, we require
that the leading edges of the trigger pulses start later than 2.0 $\mu$s after
the start of the recording, that the waveforms from the hit crystal contain
more than two single photoelectrons, and that the integral waveform area below
the baseline does not exceed a certain limit. These criteria reject
muon-induced phosphorescence events and electronic interference noise events.
A multiple-hit event is one in which more than one crystal has a signal with
more than four photoelectrons in an 8 $\mu$s time window or has an LS signal
above an 80 keVee threshold within 4 $\mu$s of the hit crystal
Adhikari:2020asl . A single-hit event is classified as one where only one
crystal has a hit, and none of the other detectors meets the above criteria.
In this analysis, only single-hit events are used.
In the signal region around 57.6 keV energy, there is a few-percent-level
contamination from the high energy tail of PMT-induced noise events. However,
these events are efficiently rejected with the boosted decision tree-based
discriminant, as described in Refs. Adhikari:2020xxj ; COSINE-100:2021poy .
The selection efficiencies for scintillation events, estimated from
calibration data using a 60Co source, are more than 99 % in the 35–85 keVee
signal region.
### IV.2 Backgrounds
Geant4 Agostinelli:2002hh -based simulations are used to understand the
contribution of each background component Adhikari:2017gbj ; cosinebg ;
cosinebg2 . The fraction of each of these is determined from a simultaneous
fit to the single-hit and multiple-hit events cosinebg2 . For the single-hit
data, we consider the fit energy range between 35 and 85 keVee that covers the
dominant inelastic signal range of 50–70 keVee. Figure 3 presents the crystal
4 energy spectrum in the results from the Ref. cosinebg2 background model
superimposed. In this ROI, the dominant backgrounds are the internal 210Pb,
which produces a 46.5 keV $\gamma$-ray with a few keV Auger electrons, and
129I, which emits a 39.6 keV $\gamma$-ray and a $\beta$ particle. Table 1
presents the expected background composition and the observed data in the
50–70 keVee energy region of the single-hit events for the five crystals. As
shown in this table, the observed data agrees well with the sum of the
expected backgrounds.
Figure 3: Measured energy spectrum in the ROI of crystal 4 (black points) and its background model (blue solid line) with the 68% (yellow band) and 95% (green band) confidence intervals are presented. The expected contributions to the background from 210Pb (red dashed line), 129I (green dotted-dashed line), and other components (black dotted line) are indicated. In the 50–70 keVee energy region, 210Pb and 129I are the dominant background components.
Table 1: The background expectations and observed data in the 50–70 keVee energy range for single hits of the 1.7 years COSINE-100 exposure are summarized, with only statistical uncertainties being considered.
| crystal 2 | crystal 3 | crystal 4 | crystal 6 | crystal 7
---|---|---|---|---|---
Internal | 210Pb | 424500 $\pm$ 2400 | 155800 $\pm$ 4200 | 290000 $\pm$ 9100 | 486300 $\pm$ 8200 | 420900 $\pm$ 5100
40K | 8400 $\pm$ 200 | 3700 $\pm$ 100 | 8800 $\pm$ 100 | 2100 $\pm$ 200 | 2300 $\pm$ 200
Others | 100 $\pm$ 10 | 100 $\pm$ 10 | 100 $\pm$ 10 | 100 $\pm$ 11 | 100 $\pm$ 10
External | 238U | 87200 $\pm$ 1900 | 68600 $\pm$ 1600 | 24900 $\pm$ 2100 | 71100 $\pm$ 1600 | 67300 $\pm$ 1700
228Th | 11600 $\pm$ 2500 | 17500 $\pm$ 1900 | 38700 $\pm$ 2800 | 34300 $\pm$ 2400 | 29300 $\pm$ 2100
Others | 6400 $\pm$ 600 | 3400 $\pm$ 600 | 13400 $\pm$ 1000 | 12200 $\pm$ 900 | 8100 $\pm$ 1200
Cosmogenic | 129I | 157700 $\pm$ 3200 | 148200 $\pm$ 3300 | 331000 $\pm$ 5300 | 230300 $\pm$ 4200 | 313100 $\pm$ 5100
127mTe | 1900 $\pm$ 300 | 2900 $\pm$ 100 | 2100 $\pm$ 100 | 800 $\pm$ 100 | 800 $\pm$ 100
Others | 100 $\pm$ 8 | 2500 $\pm$ 280 | 32500 $\pm$ 900 | 18100 $\pm$ 1600 | 13600 $\pm$ 1600
Total (expected) | 707900 $\pm$ 5100 | 402700 $\pm$ 5900 | 742800 $\pm$ 9800 | 855300 $\pm$ 9800 | 855500 $\pm$ 8000
Data | 716352 | 410655 | 746285 | 856789 | 864034
We consider various sources of systematic uncertainties in the background and
signal models. Errors associated with the energy resolution, the energy scale,
and the background modeling technique are accounted for in the shapes of the
signal and background probability density functions, as well as in rate
changes as described in Ref. COSINE-100:2021xqn . These quantities are allowed
to vary within their uncertainties as nuisance parameters in the data fit used
to extract the signal. The largest systematic uncertainty comes from the error
associated with the 210Pb background modeling, which is due to its dominant
contribution and its fast shape change in the ROI as shown in Fig. 3. This
error includes uncertainties in the energy resolution, energy scale, and depth
profiles of 210Pb on the surface of the NaI(Tl) crystals, which were studied
with a 222Rn contaminated crystal Yu:2020ntl and varied within their
uncertainties.
### IV.3 Signal fit
To search for evidence of the WIMP-127I inelastic scattering signals, a
Bayesian approach with a likelihood function based on Poisson probability,
described in Ref. COSINE-100:2021xqn , is used. The likelihood fit is applied
to the measured single-hit energy spectra between 35 and 85 keVee for several
WIMP masses. Each crystal is fitted with a crystal-specific background model
and a WIMP signal that is correlated across crystals; the combined fit is
obtained by multiplying the five crystals’ likelihoods. The means and
uncertainties for background
components, which are determined from modeling Adhikari:2017gbj ; cosinebg ;
cosinebg2 , are used to set Gaussian priors for the background. The systematic
uncertainties are included in the fit as nuisance parameters with Gaussian
priors.
Prior to the data fit, the fitter was tested with simulated event samples.
Each simulated dataset is prepared by Poisson random extraction of the modeled
background spectrum, assuming a background-only hypothesis. Marginalization to
obtain the posterior probability density function (PDF) for each simulated
sample is performed to set the 90 % confidence level upper limits. The 1,000
simulated experiments result in $\pm$1$\sigma$ and $\pm$2$\sigma$ bands of the
expected median sensitivity.
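To make the procedure concrete, the following minimal Python sketch illustrates how such sensitivity bands can be built from background-only pseudo-experiments; the binned background expectation `mu_bkg` and the limit-setting routine `upper_limit_90cl` are hypothetical placeholders for the analysis described in the text, not the collaboration's actual code.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def toy_sensitivity_bands(mu_bkg, upper_limit_90cl, n_toys=1000):
    """Collect 90% CL limits from background-only pseudo-experiments."""
    limits = []
    for _ in range(n_toys):
        pseudo_data = rng.poisson(mu_bkg)      # Poisson draw per energy bin
        limits.append(upper_limit_90cl(pseudo_data))
    # Median sensitivity and the +-1 sigma / +-2 sigma quantile bands.
    qs = np.percentile(limits, [2.3, 15.9, 50.0, 84.1, 97.7])
    return dict(zip(["-2sig", "-1sig", "median", "+1sig", "+2sig"], qs))
```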
Figure 4: (a) The black-filled circles represent the data points showing the
summed energy spectra from the five crystals. The solid blue line indicates
the result of the fit for a 500 GeV/c2 WIMP mass signal. The expected signal
shape for a 500 GeV/c2 WIMP mass is shown as a red solid line, assuming a
WIMP-proton spin-dependent cross section of $3.6\times 10^{-36}$ cm2, which is
30 times higher than the 90 % confidence level upper limit. (b) An example of
the posterior probability density function (PDF) and cumulative distribution
function (CDF) for the 1.7 years of COSINE-100 data and a WIMP mass of
500 GeV/c2. The posterior PDF is scaled so that its maximum value is unity. In
this PDF, the best fit (i.e. the most probable cross section) points to a null
signal. Therefore, we set a 90 % confidence level upper limit. The exclusion
limit at a 90 % confidence level is obtained from the cross section at which
the CDF reaches 0.9. The yellow and green areas represent the 1 $\sigma$ and
2 $\sigma$ confidence intervals, respectively.
Fits to the data are performed for each of the 12 WIMP masses considered in
the same way as the simulated data. As an example, the data fit with a WIMP
mass of 500 ${\rm GeV/c^{2}}$ is presented in Fig. 4. The summed event
spectrum for the five crystals is shown in Fig. 4(a), which corresponds to
the null observation. For comparison, the expected signal for a WIMP mass of
500 GeV/c2 with a spin-dependent cross section of $3.6\times 10^{-36}$ cm2 is
overlaid. This cross section is 30 times higher than the measured 90 %
confidence level upper limit. Figure 4(b) shows the posterior PDF and its
cumulative distribution function for this example.
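The extraction of the 90 % confidence level upper limit from the marginalised posterior, as illustrated in Fig. 4(b), amounts to reading off the cross section at which the posterior CDF reaches 0.9. A minimal sketch, assuming the posterior has already been evaluated on a grid of cross sections:

```python
import numpy as np

def upper_limit_from_posterior(sigma_grid, posterior):
    """Cross section at which the posterior CDF reaches 0.9."""
    posterior = posterior / np.trapz(posterior, sigma_grid)   # normalise
    mid = 0.5 * (posterior[1:] + posterior[:-1])              # trapezoid rule
    cdf = np.concatenate([[0.0], np.cumsum(np.diff(sigma_grid) * mid)])
    return np.interp(0.9, cdf, sigma_grid)
```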
Figure 5: The observed 90 % confidence level exclusion limits on the WIMP-
proton spin-dependent cross section from the 1.7 years of COSINE-100 data are
shown, along with the $\pm$1 $\sigma$ and $\pm$2 $\sigma$ bands for the
expected sensitivity under the background-only hypothesis. The limits are
compared with a WIMP interpretation of the ELEGANTS-V upper limit on the
57.6 keV event rate under the WIMP-127I inelastic scattering hypothesis.
No excess of events that could be attributed to WIMP-127I inelastic
interactions is found for any of the 12 WIMP masses considered. The
posterior probabilities of the signal were consistent with zero in all cases,
and 90% confidence level limits are determined, as shown for the example in
Fig. 4(b). Figure 5 shows the 90% confidence level exclusion limits from the
COSINE-100 data with $\pm$1$\sigma$ and $\pm$2$\sigma$ expected bands of
exclusion limits from the simulated experiments.
For comparison, we interpret the 90 % confidence level upper limit of ELEGANTS-V
on the inelastic scattering event rate, 9.8$\times$10$^{-2}$ counts/kg/day
FUSHIMI1994400 , as a WIMP-127I inelastic scattering cross section, as shown
in Fig. 5. Although the extracted event rates depend on the WIMP mass, we
assume the same event rates in the ELEGANTS-V interpretation, because the
signal spectra have similar shapes owing to the dominant deexcitation energy
of 57.6 keV. Our results improve the exclusion limit by an order of magnitude
over the previous search in the same channel and are the most stringent to
date for the spin-dependent WIMP-proton cross section via the inelastic
scattering channel.
Because the dominant 210Pb background, a 46.5 keV $\gamma$-ray accompanied by
Auger electrons, falls in the ROI, the inelastic WIMP-127I scattering signal
for low-mass WIMPs is engulfed by the 210Pb background, as can be seen in
Fig. 2. The result is also affected by the systematic uncertainty of the 210Pb
modeling, especially from the energy scale, which increases the uncertainties
of the event rates near the signal region and the fluctuation of the expected
limit bands, as can be seen in Fig. 5. Compared with the ELEGANTS-V
interpretation, which assumes a flat background, the low-mass WIMP limit from
this work reflects the influence of the 210Pb background.
Our R&D program for the development of low-background NaI(Tl) crystal
detectors has resulted in NaI(Tl) detectors with significantly reduced 210Pb
background COSINE:2020egt ; 10.3389/fphy.2023.1142765 , which will be applied
to the COSINE-200 experiment Ko:2022pmu . With the realization of the
COSINE-200 experiment, the sensitivities to search for the WIMP-127I inelastic
interaction will be enhanced.
## V Conclusion
We performed a search for WIMP-127I inelastic scattering events, tagged by the
57.6 keV deexcitation $\gamma$-ray accompanying the nuclear recoil, in 1.7
years of COSINE-100 data. The single-hit energy spectrum was fitted with
signal and background models in the energy range of 35–85 keVee. We found no
evidence of WIMP-127I inelastic interaction signals, allowing us to set 90 %
confidence level exclusion limits on the WIMP-proton spin-dependent
interaction cross section. The best limit is 1.2$\times$10$^{-37}$ cm2 at a
WIMP mass of 500 GeV/c2. It is the most stringent limit on the spin-dependent
WIMP-proton interaction obtained via the WIMP-nucleus inelastic scattering
process.
###### Acknowledgements.
We thank the Korea Hydro and Nuclear Power (KHNP) Company for providing
underground laboratory space at Yangyang and the IBS Research Solution Center
(RSC) for providing high performance computing resources. This work is
supported by: the Institute for Basic Science (IBS) under project code
IBS-R016-A1, NFEC-2019R1A6C1010027, NRF-2021R1A2C3010989,
NRF-2021R1I1A3041453, and NRF-2021R1A2C1013761, Republic of Korea; NSF Grants
No. PHY-1913742, DGE-1122492, WIPAC, the Wisconsin Alumni Research Foundation,
United States; STFC Grant ST/N000277/1 and ST/K001337/1, United Kingdom; Grant
No. 2021/06743-1 and 2022/12002-7 FAPESP, CAPES Finance Code 001, CNPq
131152/2020-3, Brazil.
## References
* (1) D. Clowe et al., A direct empirical proof of the existence of dark matter, Astrophys. J. 648, L109 (2006).
* (2) P. A. R. Ade et al., (Planck Collaboration), Planck 2015 results. XIII. Cosmological parameters, Astron. Astrophys. 594, A13 (2016).
* (3) B. W. Lee and S. Weinberg, Cosmological lower bound on heavy-neutrino masses, Phys. Rev. Lett. 39, 165 (1977).
* (4) M. W. Goodman and E. Witten, Detectability of Certain Dark Matter Candidates, Phys. Rev. D 31, 3059 (1985).
* (5) G. Jungman, M. Kamionkowski, and K. Griest, Supersymmetric dark matter, Phys. Rept. 267, 195–373 (1996).
* (6) J. Billard et al., Direct detection of dark matter—APPEC committee report*, Rept. Prog. Phys. 85, 056201 (2022).
* (7) J. Conrad and O. Reimer, Indirect dark matter searches in gamma and cosmic rays, Nature Phys. 13, 224–231 (2017).
* (8) O. Buchmueller, C. Doglioni, and L.-T. Wang, Search for dark matter at colliders, Nature Phy. 13, 217 (2017).
* (9) R. L. Workman and Others, (Particle Data Group Collaboration), Review of Particle Physics, PTEP 2022, 083C01 (2022).
* (10) H. W. Joo, H. S. Park, J. H. Kim, S. K. Kim, Y. D. Kim, H. S. Lee, and S. H. Kim, Quenching factor measurement for NaI(Tl) scintillation crystal, Astropart. Phys. 108, 50–56 (2019).
* (11) D. S. Akerib et al., (LUX Collaboration), Results from a Search for Dark Matter in the Complete LUX Exposure, Phys. Rev. Lett. 118, 021303 (2017).
* (12) R. Agnese et al., (SuperCDMS Collaboration), Results from the Super Cryogenic Dark Matter Search Experiment at Soudan, Phys. Rev. Lett. 120, 061802 (2018).
* (13) E. Aprile et al., (XENON Collaboration), First Dark Matter Search Results from the XENON1T Experiment, Phys. Rev. Lett. 119, 181301 (2017).
* (14) P. Agnes et al., (DarkSide Collaboration), Low-Mass Dark Matter Search with the DarkSide-50 Experiment, Phys. Rev. Lett. 121, 081307 (2018).
* (15) E. Aprile et al., (XENON Collaboration), Dark Matter Search Results from a One Ton-Year Exposure of XENON1T, Phys. Rev. Lett. 121, 111302 (2018).
* (16) A. H. Abdelhameed et al., (CRESST Collaboration), First results from the CRESST-III low-mass dark matter program, Phys. Rev. D 100, 102002 (2019).
* (17) P. Zyla et al., (Particle Data Group Collaboration), Review of Particle Physics, PTEP 2020, 083C01 (2020).
* (18) J. Ellis, R. Flores, and J. Lewin, Rates for inelastic nuclear excitation by dark matter particles, Physics Letters B 212, 375–380 (1988).
* (19) J. D. Vergados, H. Ejiri, and K. G. Savvidy, Theoretical direct WIMP detection rates for inelastic scattering to excited states, Nucl. Phys. B 877, 36–50 (2013).
* (20) R. Sahu, D. K. Papoulias, V. K. B. Kota, and T. S. Kosmas, Elastic and inelastic scattering of neutrinos and weakly interacting massive particles on nuclei, Phys. Rev. C 102, 035501 (2020).
* (21) G. Arcadi, C. Döring, C. Hasterok, and S. Vogl, Inelastic dark matter nucleus scattering, JCAP 12, 053 (2019).
* (22) K. Fushimi, H. Ejiri, R. Hazama, N. Kudomi, K. Nagata, H. Ohsumi, K. Okada, and J. Tanaka, Search for exotic nuclear transition by using the large volume NaI detector of ELEGANTS V., Nucl. Phys. B (Proc. Suppl.) 35, 400–402 (1994).
* (23) T. Suzuki et al., (XMASS Collaboration), Search for WIMP-129Xe inelastic scattering with particle identification in XMASS-I, Astropart. Phys. 110, 1–7 (2019).
* (24) E. Aprile et al., (XENON Collaboration), Search for WIMP inelastic scattering off xenon nuclei with XENON100, Phys. Rev. D 96, 022008 (2017).
* (25) E. Aprile et al., (XENON Collaboration), Search for inelastic scattering of WIMP dark matter in XENON1T, Phys. Rev. D 103, 063028 (2021).
* (26) G. Adhikari et al., (COSINE-100 Collaboration), Initial Performance of the COSINE-100 Experiment, Eur. Phys. J. C 78, 107 (2018).
* (27) H. S. Lee et al., (KIMS Collaboration), First limit on WIMP cross section with low background CsI(Tl) crystal detector, Phys. Lett. B 633, 201–208 (2006).
* (28) S. C. Kim et al., New Limits on Interactions between Weakly Interacting Massive Particles and Nucleons Obtained with CsI(Tl) Crystal Detectors, Phys. Rev. Lett. 108, 181301 (2012).
* (29) P. Adhikari et al., (COSINE-100 Collaboration), Background model for the NaI(Tl) crystals in COSINE-100, Eur. Phys. J. C 78, 490 (2018).
* (30) G. Adhikari et al., (COSINE-100 Collaboration), Background modeling for dark matter search with 1.7 years of COSINE-100 data, Eur. Phys. J. C 81, 837 (2021).
* (31) J. S. Park et al., (KIMS Collaboration), Performance of a prototype active veto system using liquid scintillator for a dark matter search experiment, Nucl. Instrum. Meth. A 851, 103 (2017).
* (32) G. Adhikari et al., The COSINE-100 liquid scintillator veto system, Nucl. Instrum. Meth. A 1006, 165431 (2021).
* (33) H. Prihtiadi et al., (COSINE-100 Collaboration), Muon detector for the COSINE-100 experiment, JINST 13, T02007 (2018).
* (34) H. Prihtiadi et al., (COSINE-100 Collaboration), Measurement of the cosmic muon annual and diurnal flux variation with the COSINE-100 detector, JCAP 02, 013 (2021).
* (35) G. Adhikari et al., (COSINE-100 Collaboration), The COSINE-100 Data Acquisition System, JINST 13, P09006 (2018).
* (36) G. Adhikari et al., (COSINE-100 Collaboration), Search for a dark matter-induced annual modulation signal in NaI(Tl) with the COSINE-100 experiment, Phys. Rev. Lett. 123, 031302 (2019).
* (37) G. Adhikari et al., (COSINE-100 Collaboration), Strong constraints from COSINE-100 on the DAMA dark matter results using the same sodium iodide target, Sci. Adv. 7, abk2699 (2021).
* (38) G. Adhikari et al., (COSINE-100 Collaboration), An experiment to search for dark-matter interactions using sodium iodide detectors, Nature 564, 83 (2018).
* (39) J. Lewin and P. Smith, Review of mathematics, numerical factors, and corrections for dark matter experiments based on elastic nuclear recoil, Astropart. Phys. 6, 87 (1996).
* (40) K. Freese, M. Lisanti, and C. Savage, Colloquium: Annual modulation of dark matter, Rev. Mod. Phys. 85, 1561 (2013).
* (41) M. C. Smith et al., The RAVE Survey: Constraining the Local Galactic Escape Speed, Mon. Not. Roy. Astron. Soc. 379, 755–772 (2007).
* (42) P. Smith and J. Lewin, Dark matter detection, Physics Reports 187, 203–280 (1990).
* (43) Y. J. Ko et al., (COSINE-100 Collaboration), Comparison between DAMA/LIBRA and COSINE-100 in the light of Quenching Factors, JCAP 1911, 008 (2019).
* (44) G. Adhikari et al., (COSINE-100 Collaboration), Lowering the energy threshold in COSINE-100 dark matter searches, Astropart. Phys. 130, 102581 (2021).
* (45) G. Adhikari et al., (COSINE-100 Collaboration), Searching for low-mass dark matter via the Migdal effect in COSINE-100, Phys. Rev. D 105, 042006 (2022).
* (46) S. Agostinelli et al., (GEANT4 Collaboration), GEANT4: A Simulation toolkit, Nucl. Instrum. Meth. A 506, 250 (2003).
* (47) G. Adhikari et al., (KIMS Collaboration), Understanding NaI(Tl) crystal background for dark matter searches, Eur. Phys. J. C 77, 437 (2017).
* (48) G. H. Yu, C. Ha, E. J. Jeon, K. W. Kim, N. Y. Kim, Y. D. Kim, H. S. Lee, H. K. Park, and C. Rott, Depth profile study of 210Pb in the surface of an NaI(Tl) crystal, Astropart. Phys. 126, 102518 (2021).
* (49) B. J. Park et al., (COSINE Collaboration), Development of ultra-pure NaI(Tl) detectors for the COSINE-200 experiment, Eur. Phys. J. C 80, 814 (2020).
* (50) H. Lee, B. J. Park, J. J. Choi, O. Gileva, C. Ha, A. Iltis, E. J. Jeon, D. Y. Kim, K. W. Kim, S. H. Kim, S. K. Kim, Y. D. Kim, Y. J. Ko, C. H. Lee, H. S. Lee, I. S. Lee, M. H. Lee, S. J. Ra, J. K. Son, and K. A. Shin, Performance of an ultra-pure NaI(Tl) detector produced by an indigenously-developed purification method and crystal growth for the COSINE-200 experiment, Frontiers in Physics 11, (2023).
* (51) Y. J. Ko and H. S. Lee, Solar and supernova neutrino physics with future NaI(Tl) dark matter search detectors, arXiv:2210.01386.
# Polarization in Low Frequency Radio Astronomy
Baptiste Cecconi
(2016)
## Editorial Notice
The content of this document was initially written in 2016 as a chapter
of a book entitled “The Universe in polarised light”, following the “Ecole
Internationale de Polarimétrie en Astrophysique” (International School on
Polarimetry for Astrophysics) organized in June 2013 at Centre Paul-Langevin,
Aussois, France. This summer school was funded by the EU COST action
MP1104 (http://www.polarisation.eu) and OSUG (Observatoire des Sciences
de l’Univers de Grenoble). This book has not been published yet due to delays
and lack of funding.
## Foreword
This chapter introduces the concepts of polarimetry in the case of low
frequency radio astronomy. In this regime radio waves are usually not the
signature of atomic or molecular transitions lines, but rather that of
unstable particle distribution functions releasing their free energy through
electromagnetic radiation. As the radio source region is usually magnetized,
the propagation medium (at least close to the source) is anisotropic, and the
polarization depends on the local magnetic field direction, the propagation
mode and the direction of propagation.
## 1 Introduction
Low frequency radio astronomy is defined by a frequency span ranging from
$\sim$1 kHz to $\sim$100 MHz (or $\sim$3 m to $\sim$300 km in terms of
wavelengths). The energy of a single photon at 100 kHz is 7$\times$10$^{-29}$ J (or
4$\times$10$^{-10}$ eV). The wave properties of light are used instead of its
corpuscular properties: the electric (and/or magnetic) field fluctuations of
the wave are sampled, so the polarization of the radio waves is directly
measured by the sensors. Depending on the physical shape of the sensor,
different polarization components are intrinsically measured: linear electric
dipoles measure linear polarization, whereas loop or helicoidal antennas
measure circular polarization.
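As a quick numerical check of the quoted photon energy, one can evaluate $E = hf$ at 100 kHz (a minimal Python sketch):

```python
h = 6.626e-34                  # Planck constant [J s]
f = 100e3                      # frequency [Hz]
E_J = h * f                    # ~6.6e-29 J, i.e. ~7e-29 J as quoted
E_eV = E_J / 1.602e-19         # ~4.1e-10 eV
print(f"E = {E_J:.1e} J = {E_eV:.1e} eV")
```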
The electromagnetic radiations that can be detected in the low frequency range
have various origins. In the vicinity of the Earth, the most intense sources
are the Sun and the magnetized planets, as shown in Figure (1). Human
technologies are strong sources of Radio Frequency Interferences (RFI).
Atmospheric electricity (i.e., Earth and planetary lightning) emits wide band
electromagnetic pulses. Our Galaxy radiates a continuum of emission
(called the Galactic Background), resulting from free-free interactions of
electrons in the interstellar medium. The Galactic Background is almost
isotropic and its brightness temperature exceeds 10$^{7}$ K at 1 MHz [6, 12, 37].
The brightness temperatures of Solar radio bursts and of Jovian radio emissions
are respectively of the order of $\sim$10$^{12}$ K and $\sim$10$^{18}$ K. These
brightness temperatures usually do not correspond to the actual black body
temperature of the radio source medium. Hence, these radio emissions are
called “non-thermal” radio emissions.
Figure 1: Compared planetary radio emission spectral flux densities. The radio
emissions from Jupiter are in black (_nKOM_ : narrowband Kilometric Radiation;
_bKOM_ : broadband Kilometric Radiation; _QP_ : Quasi-Periodic Bursts; _HOM_ :
Hectometric Radiation; _S-bursts_ : Short (or Millisecond) Bursts; _Io-DAM_ :
Io-controlled Decametric Radiation; _non-Io-DAM_ : non-Io-controlled
Decametric Radiation; _DIM_ : Decimetric Radiation; _Thermal_ : Black body
thermal radiation), from Saturn in green (_n-SMR_ : narrowband Saturn
Myriametric Radiation; _n-SKR_ : narrowband Saturn Kilometric Radiation; _SKR_
: Saturn Kilometric Radiation), from Earth in red (_LFb_ : Low-Frequency
Bursts; _TKR_ (or _AKR_): Terrestrial (or Auroral) Kilometric Radiation), and
from Uranus and Neptune in blue (_UKR_ /_NKR_ : Uranus/Neptune Kilometric
Radiation). The vertical grey dashed line represents the Earth ionospheric
cut-off. Figure adapted from [55], updated with [53, 28, 30].
Solar and Solar Wind radio sources are produced by populations of relativistic
electrons in the Solar Corona or escaping the Sun in the interplanetary
medium. The two main low frequency Solar radio sources are called type II and
type III solar radio bursts. Type II bursts are emitted by electrons
accelerated in front of coronal mass ejections and interplanetary shocks (see
[17] and references therein). Type III bursts are produced by beams of
electrons ejected from the Sun and travelling outward along magnetic field
lines. The radio waves are believed to be produced by wave conversion from
strong Langmuir waves excited along the beam path (see [20, 45] and references
therein). Radio waves are scattered in the inner heliosphere by solar wind
inhomogeneities, resulting in an apparent spatially extended source (up to 90∘
at $\sim$100 kHz [49]).
In planetary magnetospheres, the two main low frequency radio emissions
families are the auroral radio emissions and the radiation belts. Planetary
aurorae result from particle precipitations towards the magnetic poles of the
planet. When those particles reach the planetary ionosphere, they transfer
their kinetic energy through collisions with the atmospheric populations of
atoms and molecules, which are releasing this energy by emitting photons in
the infrared to ultraviolet range, depending on the chemical species present
in the medium. A fraction of the particles does not reach the planetary
ionosphere thanks to the magnetic mirror effect. The up-going particles are
the source of the auroral radio emissions. The typical energy of particles
responsible for auroral radio emissions is of the order of 10 keV. The
electrons are the main driver for electromagnetic auroral phenomena and the
emission mechanism is the Cyclotron MASER Instability (CMI), see [35, 55, 51]
for more details. This non-thermal process has been observed in-situ at Earth
by various instruments onboard the Viking [24] and FAST [13] space missions.
Remote observations at Jupiter and Saturn are consistent with the CMI
phenomenology [55, 23, 9]. Furthermore, radio observations by Cassini near a
SKR radio source are showing results that are fully consistent with the cold
plasma theory and the CMI emission process [30, 31]. The apparent brightness
temperature of such radio sources can be higher than 10$^{15}$ K. The radio waves
are emitted at a frequency close to the local cyclotron frequency. The radio
source is strongly anisotropic: its theoretical beaming pattern is a hollow
cone, aligned with the ambient magnetic field, with an opening half angle
between 45∘ and 90∘, and a thickness of $\sim$1∘.
The planetary radiation belts radio emissions are produced by synchrotron
emission of highly relativistic electrons trapped in the planetary magnetic
field near the magnetic equator. The radio source is the radiation belts
themselves, spanning up to several planetary radii in the equatorial plane. The
Jovian radiation belts are the most intense of their kind in the solar system,
and span from about 40 MHz up to $\sim$10 GHz [55, 48, 16]. The terrestrial
radiation belts emit radio emission around 1 MHz, but have so far
been poorly studied.
Astrophysical objects such as pulsars are also emitting in this frequency
range. Magnetized exoplanets are believed to host low frequency radio sources,
similarly to the Solar System planets [56, 22, 38, 54]. Finally, the Dark Ages
and Cosmic Dawn radiation signatures are also predicted to appear in this
frequency range (see [5] and references therein).
## 2 Magneto-ionic theory
This section presents the theoretical background required to understand the
emission and propagation of low frequency radio emissions, and thus their
polarization.
### 2.1 Presentation of the assumptions and conditions
A series of preliminary assumptions are made:
* •
The propagation medium is a plasma, which is globally neutral.
* •
The plasma is cold, which implies: (i) without perturbations, the particles
are not moving; (ii) particles have no thermal velocity; (iii) the pressure
gradient is negligible.
* •
The effect of the ions is negligible, as the radio frequency range is much
higher than that of the ion scales.
* •
The medium is magnetized and thus anisotropic.
### 2.2 Basic Equations
Using the Fourier notation, the Maxwell-Faraday and Maxwell-Ampère laws
respectively write:
$\displaystyle i\bm{k}\times\bm{E}$ $\displaystyle=i\omega\bm{B}$ (1)
$\displaystyle i\bm{k}\times\bm{B}$
$\displaystyle=-i\frac{\omega}{c^{2}}\overline{\overline{\bm{\kappa}}}.\bm{E}$
(2)
The equation driving the electric field can thus be derived as:
$\displaystyle\bm{n}\times(\bm{n}\times\bm{E})+\overline{\overline{\bm{\kappa}}}.\bm{E}=0$
(3)
using the refractive index $\bm{n}$ vector defined as:
$\displaystyle\bm{n}=\frac{n}{k}\bm{k}=\frac{c}{\omega}\bm{k}$ (4)
The permittivity tensor is computed from the equation of motion of the
particles in the plasma. After the assumptions presented in the previous
paragraph, it takes the following form:
$\displaystyle\bm{\kappa}=\left(\begin{array}[]{ccc}S&-iD&0\\\ iD&S&0\\\
0&0&P\end{array}\right)$ (8)
with $S=1-X/(1-Y^{2})$, $D=XY/(1-Y^{2})$, $P=1-X$, and defining
$X=(\omega_{p}/\omega)^{2}$ and $Y=\omega_{c}/\omega$. The characteristic
frequencies $\omega_{p}$ and $\omega_{c}$ are respectively the plasma
frequency and the electron cyclotron frequency.
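A minimal Python sketch of the tensor elements, directly transcribing the definitions above (all frequencies are angular):

```python
import numpy as np

def kappa_tensor(w, w_p, w_c):
    """Cold-plasma dielectric tensor of Eq. (8)."""
    X = (w_p / w) ** 2
    Y = w_c / w
    S = 1.0 - X / (1.0 - Y**2)
    D = X * Y / (1.0 - Y**2)
    P = 1.0 - X
    return np.array([[S, -1j * D, 0.0],
                     [1j * D, S, 0.0],
                     [0.0, 0.0, P]])
```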
### 2.3 Refraction index: Appleton-Hartree equation
As the medium is magnetized, we choose to represent the various vectors in the
reference frame defined as: $\bm{z}$ is along the main ambient magnetic field
direction $\bm{B}_{0}$, and the wave vector $\bm{k}$ is in the
$(\bm{x},\bm{z})$ plane. Hence the refractive index vector is
$\bm{n}=(n\sin\theta,0,n\cos\theta)$, with $\theta$ the angular separation
between $\bm{B}_{0}$ and $\bm{k}$.
The dispersion equation derives from the equation of the electric field of the
wave, by searching for the eigenmodes of propagation, i.e., solving the
following linear system:
$\displaystyle\left(\begin{array}[]{ccc}S-n^{2}\cos^{2}\theta&-iD&n^{2}\sin\theta\cos\theta\\\
iD&S-n^{2}&0\\\
n^{2}\sin\theta\cos\theta&0&P-n^{2}\sin^{2}\theta\end{array}\right)\left(\begin{array}[]{c}E_{x}\\\
E_{y}\\\ E_{z}\end{array}\right)=0$ (15)
The roots of the determinant of the previous matrix provide the eigenmodes of
propagation. The following equation has to be solved:
$\displaystyle\left|\begin{array}[]{ccc}S-n^{2}\cos^{2}\theta&-iD&n^{2}\sin\theta\cos\theta\\\
iD&S-n^{2}&0\\\
n^{2}\sin\theta\cos\theta&0&P-n^{2}\sin^{2}\theta\end{array}\right|=0$ (19)
The general solution of this equation is known as the Appleton-Hartree
equation:
$\displaystyle
n^{2}=1-\frac{2X(1-X)}{2(1-X)-Y^{2}\sin^{2}\theta\pm\sqrt{Y^{4}\sin^{4}\theta+4(1-X)^{2}Y^{2}\cos^{2}\theta}}$
(20)
This equation links the scalar refractive index $n$ with $X(\omega)$,
$Y(\omega)$ and $\theta$.
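The following sketch evaluates Eq. (20) for both signs; it is a direct transcription of the formula, with a negative $n^{2}$ indicating an evanescent mode:

```python
import numpy as np

def appleton_hartree_n2(w, w_p, w_c, theta):
    """n^2 for the two signs of Eq. (20); n^2 < 0 means evanescence."""
    X = (w_p / w) ** 2
    Y = w_c / w
    s2 = np.sin(theta) ** 2
    c2 = np.cos(theta) ** 2
    root = np.sqrt(Y**4 * s2**2 + 4.0 * (1.0 - X) ** 2 * Y**2 * c2)
    base = 2.0 * (1.0 - X) - Y**2 * s2
    return (1.0 - 2.0 * X * (1.0 - X) / (base + root),
            1.0 - 2.0 * X * (1.0 - X) / (base - root))

# Example with the Fig. 2 parameters (f_p = 500 kHz, f_c = 1500 kHz),
# evaluated at f = 2 MHz and a 30 degree propagation angle:
w0 = 2 * np.pi
print(appleton_hartree_n2(w0 * 2e6, w0 * 500e3, w0 * 1500e3, np.radians(30.0)))
```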
Figure 2: Dispersion diagram with following plasma and cyclotron frequency
values: $f_{p}=500$ kHz and $f_{c}=1500$ kHz. Vertical lines, from left to
right are: $f_{L}$, L-mode cutoff frequency; $f_{p}$, plasma cutoff frequency;
$f_{c}$, cyclotron resonance frequency; $f_{UH}$ upper hybrid resonance
frequency; and $f_{R}$, R-mode cutoff frequency.
### 2.4 Propagation Modes, propagation angle
The $\pm$ signs in the Appleton-Hartree equation correspond to two eigenmodes
and thus two propagation modes. We hereafter refer to them as the $\oplus$ and
$\ominus$ modes, respectively. Figure (2) shows the various propagations
modes. Each curve on the figure corresponds to a propagation angle with respect
to the ambient magnetic field $\bm{B}_{0}$: from parallel propagation (bold
solid line) to perpendicular propagation (bold dashed line). Each region of
the diagram is a specific propagation mode and is named as shown on the
figure. The $\oplus$ modes are the whistler and LO modes, whereas the
$\ominus$ modes are the Z, LX and RX modes. The vertical lines are the
resonance (where $n\rightarrow\infty$) and cutoff (where $n\rightarrow 0$)
frequencies.
### 2.5 Polarization
When studying wave propagation in a cold plasma, the sense of polarization is
always given considering the wave propagating in the direction of the magnetic
field.
In case of parallel propagation, the polarization of the wave is always
circular. The $\oplus$ and $\ominus$ modes are respectively fully LH and RH
polarized.
In case of perpendicular propagation, the $\oplus$ mode is linearly polarized
along the ambient magnetic field direction, and the $\ominus$ mode is
elliptically polarized perpendicular to the ambient magnetic field direction
(this means that the wave is not transverse).
In case of oblique propagation, the $\oplus$ and $\ominus$ modes are
elliptically polarized.
## 3 Radio Waves Propagation
The propagation of radio waves is usually studied with ray tracing methods.
They can include scattering effects. The main ray tracing algorithm for radio-
astronomy is the Haselgrove algorithm [18, 19]. Other algorithms are available
(such as Poeverlein [43]), but are mainly adapted to horizontally stratified
propagation media, such as the Earth’s ionosphere.
Figure 3: Ray tracing in the inner Saturn magnetosphere. a) Dynamic spectrum
of circular polarization degree of SKR. b Ray tracing performed by ARTEMIS-P
through the Kronian magnetosphere at $f=50\,\textrm{kHz}$. Green lines are ray
trajectories, red arrows are wave vectors, black arrows are directions of the
magnetic field, blue line are electron density contours at
$[0.001,0.01,0.5,1,10,50]\textrm{cm}^{-3}$, dotted black line is the iso-
surface where $f_{ce}=50\,\textrm{kHz}$, long dashed black lines are dipolar
magnetic field lines. The magnetospheric plasma density is taken from the
MDISC model [1]. Figure adapted from [15]
The two main propagation effects are the refraction and the scattering of
radio waves. While the former is directly related to the evolution of the
refractive index along the ray path, the latter includes random changes in ray
propagation, due to interaction is particle and inhomogeneities in the
propagation medium. Refraction is the core of the ray tracing techniques.
Scattering can be included [50]. Further non-linear effects, such as caustics
can not be handled by such simulation codes. Full electromagnetic code (such
as Finite Difference Time Domain (FDTD) methods) should then be used.
Figure (3) is showing an examples the effect of radio wave propagation. It
displays the trace of several radio rays in the inner magnetosphere of Saturn,
at a frequency of $50\,\textrm{kHz}$. The apparent location of the radio
source as seen from a remote location highly depends on the initial ray
inclination with the source ambient magnetic field vector.
### 3.1 Propagation of polarization
The propagation of polarization can be treated in ray tracing codes. Two kinds
of propagation regions must be distinguished: (1) the medium imposes
its polarization on the wave, or (2) the wave propagates freely with its
own polarization. These regions are called “weak” and “strong” mode coupling,
respectively. The “coupling” here refers to coupling between the wave modes,
not between the waves and the medium. The limit between the two regimes is
given by the following relation [3, 4]:
$\displaystyle|n_{\oplus}-n_{\ominus}|\sim\frac{1}{k}\frac{\mathrm{d}}{\mathrm{d}k}n_{(\oplus\textrm{
or }\ominus)}$ (21)
In the strong mode coupling region, the polarization is frozen and is
projected onto the two modes of the medium, as in a birefringent medium in
classical optics. If the two modes have different phase velocities, Faraday
rotation occurs (see [41] and references therein).
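For illustration, the Faraday rotation accumulated along a ray path can be estimated from the refractive index difference of the two modes. The sketch below uses the standard expression $\chi = (\omega/2c)\int(n_{\oplus}-n_{\ominus})\,\mathrm{d}s$; this formula is an assumption here, since the chapter does not spell it out:

```python
import numpy as np

C = 2.998e8   # speed of light [m/s]

def faraday_rotation(w, s, n_plus, n_minus):
    """Rotation angle (rad) of the polarization plane along the path s [m].

    n_plus, n_minus: mode refractive indices sampled along the path."""
    return (w / (2.0 * C)) * np.trapz(n_plus - n_minus, s)
```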
## 4 Detection
The Earth’s ionosphere reflects radio waves below 10 MHz, whether the wave
comes from space or from the ground. This is used for telecommunication on
the ground in the so-called long wavelength radio bands. Space based observations
are required below 10 MHz. Between 10 MHz and $\sim$80 MHz (i.e., between the
ionospheric cutoff and the FM broadcasting band), several ground based
instruments are available. Figure (4) shows several types of radio
antennas used for ground based observations.
The angular resolution of a telescope is defined by the ratio between the
observed wavelength and the telescope aperture diameter. Hence, in order to
get a resolution of a few arcseconds at 30 MHz (i.e., a wavelength of 10 m), an
aperture of about 2500 km is needed. As single-piece focussing devices of this
size cannot be built on the ground, phased arrays are used together with
interferometric techniques.
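A one-line check of the quoted aperture, using the diffraction limit $\theta \approx \lambda/D$ for a $\sim$1 arcsecond beam at a 10 m wavelength:

```python
import numpy as np

wavelength = 3e8 / 30e6               # 10 m at 30 MHz
theta = np.radians(1.0 / 3600.0)      # 1 arcsecond in radians
print(f"aperture ~ {wavelength / theta / 1e3:.0f} km")  # ~2000 km
```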
Figure 4: Various types of ground based radio antennas used in the low
frequency range (10 MHz to 80 MHz): (a) Array of helicoidal antennas, Nançay
Decameter Array (Nançay, France); (b) Array of thick linear dipoles, UTR-2
(Kharkov, Ukraine); (c) Array of thick folded crossed dipoles, LWA (New
Mexico, USA) and Nenufar (Nançay, France); (d) Array of thin folded crossed
dipoles, LOFAR-LBA (EU); (e) Log Periodic Yagi antenna, Iitate HF Radio
Monitor (Iitate, Japan); (f) thin dipole, RadioJOVE (world wide).
### 4.1 Space based sensors
In the vicinity of Earth, the accessible spectral range includes frequencies
as low as a few kHz for observations of radio waves above the local
propagation cut-off. The constraints on space based instrumentation, such as
reliability and cost, imply that only simple sensors can be used on a
spacecraft. Space-based electric and magnetic sensors are thus very simple.
Electric sensors are either wire boom monopoles or dipoles, or a pair of probe
antennas. Magnetic sensors are either magnetic loops (single loop) or search
coils (many loops). In the following, we will focus on wire dipole electric
sensors.
Space based low frequency radio interferometry has not been implemented yet,
for obvious reasons of cost for building and operating a fleet of several tens
or hundreds of spacecraft. Nanosatellite concepts may help, and several
studies are ongoing [40, 44, 5].
### 4.2 Presentation of Goniopolarimetry
At frequencies much lower than the resonance frequency of wire dipole electric
sensors (i.e., at wavelengths longer than 20 times the antenna length, so-
called short antenna or quasistatic range), the gain properties of the antenna
are very simple:
$\displaystyle G(\theta)\propto\sin^{2}\theta,$ (22)
where $\theta$ is the angle between the antenna axis and the wave vector
direction $\bm{k}$. Figure (5) shows the antenna beaming pattern at
different wavelengths. In the quasistatic range (upper left corner), the
beaming pattern is single lobed, and its gain varies as presented in
Eq. (22). As the wavelength gets shorter, the beaming pattern becomes
narrower, up to the resonance frequency ($L/\lambda=1/2$), and then
becomes multi-lobed.
Figure 5: Linear dipole antenna beaming pattern for various wavelengths to
antenna lengths ratio.
Goniopolarimetric (i.e., simultaneous derivation of direction of arrival and
polarization) inversions make use of the simple form of the beaming
pattern in the quasistatic range. Figure (6) shows a simplified case of a
linearly polarized wave observed with a set of orthogonal crossed dipoles, in
a fully planar configuration. The direction of arrival $\theta$ can simply be
determined from the ratio of the measured powers $P_{1}$ and $P_{2}$. The
solution is not unique in this case.
Figure 6: Principle of goniopolarimetry illustrated in a simple planar case.
The wave arrives with an angle $\theta$ from the blue antenna. The ratio of
the radio power measured on each antenna is directly linked to $\theta$.
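Under the quasistatic gain of Eq. (22), the two measured powers in the planar configuration of Figure (6) scale as $P_{1}\propto\sin^{2}\theta$ and $P_{2}\propto\cos^{2}\theta$, so their ratio inverts to $\theta$; a minimal sketch, which leaves the quadrant ambiguity mentioned above unresolved:

```python
import numpy as np

def direction_of_arrival(P1, P2):
    """Angle (rad) between the wave vector and antenna 1, in [0, pi/2]."""
    return np.arctan(np.sqrt(P1 / P2))
```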
For a three-dimensional analysis, either three quasi-orthogonal dipoles, or
dipoles on a spinning spacecraft (one aligned with the spin axis, the other(s)
in the spin plane) must be used. In the quasistatic range, the electric
signal induced by the wave on an antenna is:
$\displaystyle V_{a}=\bm{h}_{a}.\bm{E}_{\omega}(t),$ (23)
where $\bm{h}_{a}$ is the antenna vector and $\bm{E}_{\omega}(t)$ the wave
instantaneous electric field. The radio wave is assumed to be in a transverse
propagation regime. Hence, the instantaneous electric field is:
$\displaystyle\bm{E}_{\omega}(t)=e^{i(\omega
t+\phi_{0})}\left|\begin{array}[]{l}a_{x}\\\ a_{y}e^{i\delta}\\\
0\end{array}\right.$ (27)
in the wave polarization plane (i.e., perpendicular to $\bm{k}$).
Goniopolarimetric radio receivers measure the power on each sensor, as
well as the cross-correlation of the signals measured on each pair of
antennas. This provides us with the full spectral matrix $P_{ij}$ of the wave
[27, 8, 7]:
$\displaystyle P_{ij}=\frac{Z_{0}Gh_{i}h_{j}S}{2}$
$\displaystyle\left[(1+Q)A_{i}A_{j}+(U-iV)A_{i}B_{j}\right.$
$\displaystyle\left.+(U+iV)A_{j}B_{i}+(1-Q)B_{i}B_{j}\right]$ (28)
where: $S$, $Q$, $U$ and $V$ are the four Stokes parameters of the wave;
$h_{i}$ and $h_{j}$ the antenna electrical lengths; $G$, the gain of the
receiving chain; $Z_{0}$ the impedance of free space; and $A_{n}$ and $B_{n}$
(with $n=i$ or $n=j$) defined as:
$\displaystyle A_{n}$
$\displaystyle=-\sin\theta_{n}\cos\theta\cos(\phi-\phi_{n})+\cos\theta_{n}\cos\theta$
(29) $\displaystyle B_{n}$ $\displaystyle=-\sin\theta_{n}\sin(\phi-\phi_{n})$
(30)
where: $\theta$ and $\phi$ defines the direction of the wave vector $\bm{k}$
in a spherical frame; and $\theta_{n}$ and $\phi_{n}$ defines the direction of
the $n$-th antenna $\bm{h}_{n}$ in the same frame. Not all spectral matrix
terms $P_{ij}$ are always measured, depending on the receiver
capabilities. Goniopolarimetric techniques invert the set of
measurements $P_{ij}$ into the wave parameters. The Stokes parameters can be
related to the wave parameters as follows:
$\displaystyle S$
$\displaystyle=\frac{<E_{x}.E_{x}^{*}>+<E_{y}.E_{y}^{*}>}{2Z_{0}}=\frac{<a^{2}_{x}>+<a^{2}_{y}>}{2Z_{0}}$
(31) $\displaystyle Q$
$\displaystyle=\frac{<E_{x}.E_{x}^{*}>-<E_{y}.E_{y}^{*}>}{<E_{x}.E_{x}^{*}>+<E_{y}.E_{y}^{*}>}=\frac{<a^{2}_{x}>-<a^{2}_{y}>}{2SZ_{0}}$
(32) $\displaystyle U$
$\displaystyle=\frac{<E_{x}.E_{y}^{*}>+<E_{y}.E_{x}^{*}>}{<E_{x}.E_{x}^{*}>+<E_{y}.E_{y}^{*}>}=\frac{<a_{x}a_{y}\cos\delta>}{SZ_{0}}$
(33) $\displaystyle V$
$\displaystyle=-\frac{<E_{x}.E_{y}^{*}>-<E_{y}.E_{x}^{*}>}{i(<E_{x}.E_{x}^{*}>+<E_{y}.E_{y}^{*}>)}=\frac{<a_{x}a_{y}\sin\delta>}{SZ_{0}}$
(34)
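A direct transcription of Eqs. (31)–(34) into code, assuming complex field samples in the wave polarization plane (the averages $<.>$ become sample means):

```python
import numpy as np

Z0 = 376.73   # impedance of free space [ohm]

def stokes(Ex, Ey):
    """Stokes S, Q, U, V from complex field samples Ex, Ey (Eqs. 31-34)."""
    Ixx = np.mean(Ex * np.conj(Ex)).real
    Iyy = np.mean(Ey * np.conj(Ey)).real
    Ixy = np.mean(Ex * np.conj(Ey))
    S = (Ixx + Iyy) / (2.0 * Z0)
    Q = (Ixx - Iyy) / (Ixx + Iyy)
    U = 2.0 * Ixy.real / (Ixx + Iyy)
    V = -2.0 * Ixy.imag / (Ixx + Iyy)
    return S, Q, U, V
```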
The radio convention [25] for the sign of $U$ and $V$ is the opposite of the
optical convention: radio astronomers consider that they measure the
polarization looking along the direction of propagation of the radio wave,
instead of looking towards the source.
The goniopolarimetric inversions are used to derive the wave polarization
state. The plane of polarization is also determined. The direction of
propagation is then derived as the normal direction to the polarization plane,
with the transverse propagation assumption. This assumption is valid most of
the time, except close to the wave propagation resonances and cutoffs. In that
case, determining the polarization of the magnetic component directly
provides the direction of $\bm{k}$. However, in such a case, the direction of
propagation (the Poynting vector) is not that of the wave vector. Hence,
simultaneous electric and magnetic component sensing is required.
In case of planetary radio emissions, the radio source location (or at least
the magnetic hemisphere containing its location) can be derived from the
direction of arrival, see Figure (8), and the sense of polarization indicates
the emission mode. In weak mode coupling conditions (the medium sets the
polarization), the sign of the Stokes parameter $Q$ provides the
propagation mode:
* •
$Q>0$ means $\oplus$ mode (L-O branch of the dispersion diagram)
* •
$Q<0$ means $\ominus$ mode (R-X branch of the dispersion diagram)
### 4.3 Various goniopolarimetric inversions
Several goniopolarimetric inversions are available in the literature. The
following list groups them by their assumptions:
* •
Point sources. The output parameters are $S$, $Q$, $U$, $V$, $\theta$ and
$\phi$. They are used for auroral radio emissions (Earth, Jupiter, Saturn).
They have been used on Cassini/RPWS (with 2 or 3 antenna modes) and
INTERBAL/Polrad (with 3 antennas) [8, 27, 32, 42].
* •
Spatially extended sources. The output parameters are $S$, $Q$, $U$, $V$,
$\theta$, $\phi$ and $\gamma$, where $\gamma$ is the angular size of the
source (assuming a circular shaped source region). They are used for solar
radio emissions. They have been used on STEREO/Waves (with 3 antennas) and
WIND/Waves (spinning spacecraft) [36, 7, 26].
* •
Radio sources along a spatial profile. The output parameters are $S(a)$,
$Q(a)$, $U(a)$, $V(a)$, with $a$ the curvilinear coordinate along the spatial
profile. Such inversion can be used for auroral radio sources [21].
* •
All sky source. The output parameters are the Stokes parameters distribution
on the sky $S(\theta,\phi)$, $Q(\theta,\phi)$, $U(\theta,\phi)$,
$V(\theta,\phi)$. Such inversions can be used for mapping the galactic
background emission, but are still to be developed. The only Galactic emission
mapping below $10\,\textrm{MHz}$ was obtained with Lunar occultations measured
by Radio Astronomy Explorer 2 (RAE 2) [39].
### 4.4 Limitations
Although very powerful compared to the use of a single dipole,
goniopolarimetric systems have several intrinsic limitations:
* •
Electric versus magnetic sensors. Magnetic component intensity is $c$ times
fainter than the electric component. Furthermore, magnetic sensors have an
intrinsic band pass input filter that forbids wide band applications.
* •
Sense of propagation. Simple goniopolarimetry provides the wave direction of
propagation, but not its sense of propagation. Coupled electric and magnetic
component sensing is required to do so.
* •
Wave vector direction. Goniopolarimetric inversions with electric sensors
assume transverse propagation. This is not true when $n\neq 1$. As discussed
earlier, observing the magnetic components provides directly the wave vector
direction. However, the direction of propagation is not that of the wave
vector.
* •
Accuracy. The goniopolarimetric accuracy depends on two factors: the accuracy
of the receiver system calibration (effective antenna directions and lengths,
phase mismatch between sensing channels); and the receiver and onboard
processing noise. The main source of noise is usually the digitization noise,
which quantizes the output signal. This results in the following typical
accuracies: $\sim$2∘ on direction of arrival, $\sim$10% on polarization
degrees and $\sim$3 dB on flux densities.
* •
Effective sensor directions. Electric and magnetic sensors must be accurately
calibrated to obtain their effective parameters (length and direction). This
calibration process is more critical for electric sensors, as the conductive
spacecraft body can strongly alter the antenna beaming pattern.
* •
Propagation mode. The propagation mode cannot be derived unambiguously
without simultaneous and coupled measurements on electric and magnetic
sensors.
* •
Direction of propagation. The wave vector is aligned with the Poynting vector,
except when $n\neq 1$.
* •
Radio source location. Goniopolarimetric systems measure the wave
parameters at the location of the spacecraft. Hence the radio source location
cannot be retrieved without specific assumptions, such as straight line
propagation or a given emission process.
## 5 Example of results and open questions
According to the magneto-ionic theory, there is a direct link between the
propagation angle and the polarization. As seen in the previous section,
perpendicular emission leads to linear or elliptical polarization, parallel
emission to circular polarization, and oblique emission to elliptical
polarization. The sense of polarization depends on the direction of the
magnetic field in the source (hence on the hemisphere of emission), as well as
on the propagation mode.
Solar radio emissions show a limited degree of polarization. In the
lower frequency range (below $\sim$10 MHz), radio observations show no
evidence of a significant degree of polarization (except for a limited series
of events [47]), whereas at higher frequencies, up to 35% polarization can be
observed [11]. In contrast, planetary radio emissions are fully polarized,
either circularly or elliptically [55]. The planetary radiation belts are
elliptically or linearly polarized [10, 52, 33, 34].
### 5.1 Latitudinal variability of polarization
All magnetized planets produce radio emissions in their auroral regions.
The main emission is produced in the R-X mode (i.e., RH polarized in the
Northern magnetic hemisphere). The radio waves are 100% polarized. At Earth,
the observations show circularly polarized waves. At Jupiter and Saturn,
auroral radio waves are circularly polarized when observed from near
equatorial regions, while they are elliptically polarized at high latitudes.
The limit latitude is about $\pm$30∘ as shown in Figure (7) [46, 14].
Figure 7: Observed polarization state at Jupiter and Saturn, with Ulysses/URAP
and Cassini/RPWS respectively. The left-hand panel shows a meridian plane
projection of the trajectory of the Ulysses spacecraft during its Jovian flyby
[46]. Axes are numbered in units of Jovian radii. Bold segments of the
trajectory correspond to the locations where elliptical polarization was
observed. The right-hand panel shows a similar figure measured at Saturn [14].
Axes are numbered in units of Kronian radii. The displayed fractions of the
orbit correspond to elliptically polarized waves. In both cases, circularly
polarized waves are observed elsewhere.
### 5.2 Saturn Kilometric Radiation
Cassini/RPWS/HFR (Radio and Plasma Waves Science/High Frequency receiver) is a
goniopolarimetric radio receiver. Cecconi et al. [9] proposed a simple way to
derive the radio source locations assuming straight line propagation and CMI
emission, as shown in Figure (8). Figure (9) shows a comparison of active
radio source magnetic footprints with ultraviolet (UV) aurora observed with
the Hubble Space Telescope (HST) at Saturn [29]. Both datasets were observed
at the same time, and with a very similar observing geometry. The figure shows
a very good conjugacy between the two phenomena, which supports the idea that
the same electron populations produce both radio and UV emissions.
Figure 8: Derivation of the radio source location at Saturn, from the
goniopolarimetric products provided by Cassini/RPWS/HFR. Straight line
propagation is assumed, together with the CMI emission process, which implies
an emission at the local electron cyclotron frequency ($f_{ce}$) in the source.
The three-dimensional location of the radio source is derived, as well as the
location of the magnetic footprint of the source (for comparison with
atmospheric aurora). The error on the wave propagation direction is also used
to derive the radio source footprint error ellipse. Figure extracted from [9].
Figure 9: Comparison of radio source location with UV aurora at Saturn. The
top panel shows radio source locations derived from Cassini/RPWS/HFR
measurements. The bottom panel shows the UV aurora as observed from HST. The
left-hand column is displaying the radio and UV sources as seen from the
observer, whereas the right-hand column shows the same data projected on the
southern polar ionosphere. Figure extracted from [29].
On Oct. 17th 2008, the Cassini spacecraft flew through the auroral radio
source region for a few minutes [31]. Figure (10) shows the polarization
parameters with respect to the distance to the radio source. It confirms that
the wave has a quasi-perpendicular propagation, with elliptical polarization,
consistent with the magneto-ionic theory.
Figure 10: Polarization parameters of Saturn Kilometric Radiation in the
vicinity of a source region: (a) Circular polarization degree plotted versus
the reduced frequency. Figure extracted from [31].
### 5.3 Future prospects
Goniopolarimetric methods use measurements acquired at a single
location. Although they imply several assumptions, they are very efficient for
auroral planetary radio emissions, which are point sources with a high degree
of intrinsic polarization. Solar radio bursts can also be measured, but only a
limited view of their spatial structure can be derived.
The two main limitations of goniopolarimetric inversions are the assumptions
of (i) a plane transverse wave propagating on a straight line from the source
to the observer and (ii) a point or circularly-shaped single source observed
at a time. The first limitation can be fixed using sensors measuring the full
electric and magnetic components of the wave. This set up is proposed for the
Alfvén space mission [2], which is dedicated to the study of the Terrestrial
auroral cavities. Solving the second limitation implies space-based
radioastronomy interferometric instrumentation [40, 5].
## References
* [1] N. Achilleos, et al. 2010. MNRAS. doi:10.1111/j.1365-2966.2009.15865.x
* [2] M. Berthomier, et al. 2011. Exp. Astron. doi:10.1007/s10686-011-9273-y
* [3] H.G. Booker. 1936. Proc. Ray. Soc. A, 155, 235–257
* [4] K.G. Budden. 1952. Proc. Roy. Soc. A, 215, 215–233
* [5] J.O. Burns, et al. 2011. J. Adv. Space Res. doi:10.1016/j.asr.2011.10.014
* [6] H.V. Cane. 1979. Mon. Not. R. Astr. Soc. 189, 465–478
* [7] B. Cecconi. 2007. Radio Sci. doi:10.1029/2006RS003458
* [8] B. Cecconi, P. Zarka. 2005. Radio Sci. doi:10.1029/2004RS003070
* [9] B. Cecconi, et al. 2009. J. Geophys. Res. doi:10.1029/2008JA013830
* [10] D.B. Chang, L. Davis. 1962. Astrophys. J. 126, 567–
* [11] G.A. Dulk, S. Suzuki. 1980. Astron. Astrophys. 88, 203–217
* [12] G.A. Dulk, et al. 2001. Astrophys. J. 365, 294–300
* [13] R.E. Ergun, et al. 1998. Geophys. Res. Lett. 25, 2061–2064
* [14] G. Fischer, et al. 2009. J. Geophys. Res. doi:10.1029/2009JA014176
* [15] A.-L. Gautier, et al. 2013. Proc. Intern. Symp. Electromag. Theory
* [16] J.N. Girard, et al. 2016. Astron. Astrophys. doi:10.1051/0004-6361/201527518
* [17] N. Gopalswamy, et al. 2009. Sol. Phys. doi:10.1007/s11207-009-9382-1
* [18] J. Haselgrove. 1955. Physics of the Ionosphere. 335–
* [19] J. Haselgrove. 1963. J. Atmosph. Terrestr. Phys. 25, 397–399
* [20] P. Henri, et al. 2009. J. Geophys. Res. doi:10.1029/2008JA013738
* [21] S.L.G. Hess. 2010. Radio Sci. doi:10.1029/2009RS004208
* [22] S.L.G. Hess, P. Zarka. 2011. Astr. Astrophys. doi:10.1051/0004-6361/201116510
* [23] S.L.G. Hess, et al. 2008. Geophys. Res. Lett. doi:10.1029/2008GL033656
* [24] A. Hilgers. 1992. Geophys. Res. Lett. 19, 237–240
* [25] J.D. Kraus. 1966. Mc-Graw Hill
* [26] V. Krupar, et al. 2012. J. Geophys. Res. doi:10.1029/2011JA017333
* [27] H.-P. Ladreiter, et al. 1995. Radio Sci. 30, 1699–1712
* [28] L. Lamy, et al. 2008. J. Geophys. Res. doi:10.1029/2007JA012900
* [29] L. Lamy, et al. 2009. J. Geophys. Res. doi:10.1029/2009JA014401
* [30] L. Lamy, et al. 2010. J. Geophys. Res. doi:10.1029/2010JA043415
* [31] L. Lamy, et al. 2011. J. Geophys. Res. doi:10.1029/2010JA016195
* [32] A. Lecacheux. 1978. Astron. Astrophys. 70, 701–706
* [33] M.P.C. Legg, K.C. Westfold. 1968. Astrophys. J. 154, 499–
* [34] S.M. Levin, et al. 2001. Geophys. Res. Lett. doi:10.1029/2000GL012087
* [35] P. Louarn. 1992. Adv. Space. Res. doi:10.1016/0273-1177(92)90385-B
* [36] R.M. Manning, J. Fainberg. 1980. Space Science Instr. 5, 161–181
* [37] R.M. Manning, G.A. Dulk. 2001. Astrophys. doi:10.1051/0004-6361:20010516
* [38] J.D. Nichols. 2012. MNRAS Lett. doi:10.1111/j.1745-3933.2012.01348.x
* [39] J.C. Novaco, L.W. Brown. 1978. Astrophys. J. 221, 114–123
* [40] D. Oberoi, J.L. Pinçon. 2005. Radio Sci. doi:10.1029/2004RS003211
* [41] D. Oberoi, C.J. Lonsdale. 2012. Radio Sci. doi:10.1029/2012RS004992
* [42] M. Panchenko. 2003. Radio Sci. doi:10.1029/2003RS002929
* [43] H. Poeverlein. 1948. Bayerische Akad. Wissenschaften München
* [44] R.T. Rajan et al. 2011. IEEE Aerospace Conf.
* [45] H.A.S Reid, H. Ratcliffe. 2014. Res. Astron. Astrophys. doi:10.1088/1674-4527/14/7/003
* [46] M.J. Reiner, et al. 1995. Geophys. Res. Lett. 22:4, 345–348
* [47] M.J. Reiner, et al. 2007. Sol. Phys. doi:10.1007/s11207-007-0277-8
* [48] D. Santos-Costa, S.J. Bolton. 2008. Planet. Space Sci. 56, 326–345
* [49] J.-L. Steinberg, et al. 1985. Astron. Astrophys. 150 205–116
* [50] J.-L. Steinberg, et al. 2004. Planet. Space Sci. doi:10.1016/j.pss.2003.12.005
* [51] R.A. Treumann. 2006. Astron. Astrophys. Rev. doi:10.1007/s00159-006-0001-y
* [52] J.F. Vesecky, A.M. Peterson. 1967. J. Geophys. res. 72, 1647–1650
* [53] P. Zarka, et al. 2004. J. Geophys. Res. doi:10.1029/2003JA010260
* [54] P. Zarka, et al. 2012. Planet. Space Sci. doi:10.1016/j.pss.2012.08.004
* [55] P. Zarka. 1992. Adv. Space. Res. 12, 99–115
* [56] P. Zarka. 2007. Planet. Space Sci. doi:10.1016/j.pss.2006.05.045
# Hybrid Multiparty Session Types - Full Version
Compositionality for Protocol Specification through Endpoint Projection
Lorenzo Gheri 0000-0002-3191-7722 University of Oxford, United Kingdom
<EMAIL_ADDRESS>and Nobuko Yoshida 0000-0002-3925-8557 University
of Oxford, United Kingdom <EMAIL_ADDRESS>
###### Abstract.
Multiparty session types (MPST) are a specification and verification framework
for distributed message-passing systems. The communication protocol of the
system is specified as a _global type_ , from which a collection of _local
types_ (local process implementations) is obtained by _endpoint projection_. A
global type is a single disciplining entity for the whole system, specified by
_one designer_ that has full knowledge of the communication protocol. On the
other hand, distributed systems are often described in terms of their
_components_ : a different designer is in charge of providing a subprotocol
for each component. The problem of modular specification of global protocols
has been addressed in the literature, but the state of the art focuses only on
dual input/output compatibility. Our work overcomes this limitation. We
propose the first MPST theory of _multiparty compositionality for distributed
protocol specification_ that is semantics-preserving, allows the composition
of two or more components, and retains full MPST expressiveness. We introduce
_hybrid types_ for describing subprotocols interacting with each other, define
a novel _compatibility relation_ , explicitly describe an algorithm for
composing multiple subprotocols into a _well-formed global type_ , and prove
that compositionality preserves projection, thus retaining semantic
guarantees, such as liveness and deadlock freedom. Finally, we test our work
against real-world case studies and we smoothly extend our novel compatibility
to MPST with delegation and explicit connections.
multiparty session types, compositionality, protocol design, concurrency
CCS Concepts: Theory of computation → Distributed computing models; Theory of computation → Type theory
## 1\. Introduction
With the current growth in scale and complexity of systems, their _design_ has
become of central importance for industry and society in general.
Choreographies for interactions among multiple participants, or
_(communication) protocols_ , arise naturally in numerous fields:
authorisation standards (Hardt, 2012; MIT, 2022), the BPMN graphical
specification for business processes (OMG, 2022), or smart contracts for
financial transactions (Ethereum, 2022).
The literature on programming languages offers a variety of formal frameworks
for protocol description (Honda et al., 2016; Barbanera et al., 2020a;
Montesi, 2013), aimed at the verification of behavioural properties of
distributed implementations that comply with the communication discipline
prescribed by the protocol. Such theories focus on distributed implementations
of participants, but rarely feature modularity in the design of protocols,
which are instead seen as standalone, monolithic entities. Mostly, when
modularity is considered, it is either conceived in terms of nesting
(Demangeon and Honda, 2012; Tabareau et al., 2014) or it substantially
modifies protocol description, by adding additional structure (Carbone et al.,
2018; Savanovic et al., 2020; Montesi and Yoshida, 2013). To the best of our
knowledge, only in (Barbanera et al., 2021) and (Stolze et al., 2021) is the
result of composition a well-formed protocol.
_This paper presents hybrid multiparty session types: a novel, general theory
that offers compositionality for distributed protocol specification, improves
on the state of the art, and is immediately compatible with existing
multiparty session types systems._
Multiparty session types (MPST) (Honda et al., 2016; Coppo et al., 2015;
Yoshida and Gheri, 2020) provide a typing discipline for message-passing
concurrency, ensuring deadlock freedom for two or more distributed processes.
A _global type_ or _protocol_ , which describes an entire interaction
scenario, is projected into a collection of _local types_ onto the respective
participants (endpoint projection). MPST cater for the safe implementation of
distributed processes: as long as the process for each participant is
independently type-checked against its local type, its communication behaviour
is disciplined by the semantics of the global type, and its execution does not
get stuck.
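As an informal illustration of endpoint projection, and not the hybrid-type machinery introduced later in this paper, the following toy Python sketch projects a branching global type onto a role; the handling of uninvolved roles deliberately glosses over the mergeability side-conditions of the formal definitions:

```python
from dataclasses import dataclass, field

@dataclass
class End:
    pass

@dataclass
class Msg:
    sender: str
    receiver: str
    branches: dict = field(default_factory=dict)   # label -> continuation

def project(g, role):
    """Naive projection of a global type onto one role."""
    if isinstance(g, End):
        return ("end",)
    conts = {lbl: project(cont, role) for lbl, cont in g.branches.items()}
    if role == g.sender:
        return ("send", g.receiver, conts)
    if role == g.receiver:
        return ("recv", g.sender, conts)
    # Uninvolved role: all branch projections should coincide (mergeability);
    # here we naively pick one branch for illustration only.
    return next(iter(conts.values()))

# Example: p -> q : {ok: q -> r : {done: end}, ko: q -> r : {done: end}}
g = Msg("p", "q", {"ok": Msg("q", "r", {"done": End()}),
                   "ko": Msg("q", "r", {"done": End()})})
print(project(g, "r"))   # ('recv', 'q', {'done': ('end',)})
```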
Although alternatives to the top-down approach (endpoint projection from a
global type) have been proposed (Scalas and Yoshida, 2019; Deniélou and
Yoshida, 2013; Lange et al., 2015; Lange and Yoshida, 2019), the benefits of
an explicit, concise design of the communication protocol for the whole system
have been recognised by the research community, from the first appearance of
MPST (Honda et al., 2008) to more recent times, e.g., see (Glabbeek et
al., 2021; Cledou et al., 2022). Furthermore, the top-down approach has been
extended, e.g., to fault tolerance (Viering et al., 2021), timed specification
(Bocchi et al., 2014), refinements (Zhou et al., 2020; Gheri et al., 2022),
cost awareness (Castro-Perez and Yoshida, 2020), exception handling
(Lagaillardie et al., 2022), or explicit connections and delegation (Hu and
Yoshida, 2017; Castellani et al., 2020).
Concretely, the underlying assumption to top-down MPST systems is that a
single designer has _full knowledge_ of the communication protocol and can
give its formal specification in terms of a global type. Distributed systems,
however, are designed modularly, by multiple designers. Recently, the
literature has addressed the problem of obtaining a single coherent global
type from independently specified subprotocols (components of a protocol) and
some solutions have been offered: Barbanera et al. (2021) achieve direct
composition of _two_ global types, through a dual compatibility relation that
matches inputs and outputs, based on gateways (Barbanera et al., 2018, 2019,
2020b). Stolze et al. (2021) describe a dual methodology beyond gateways, but
severely restrict the syntax for global types. In contrast, our theory
replaces dual compatibility, based only on input/output
matching, with the notion of _compatibility through projection_. Thus, we
improve on the state of the art: (1) we can compose more than two
subprotocols into a well-formed global type and (2) we retain the full
expressiveness of MPST (including recursion and parallel composition). See §6
for a broader, in-detail discussion. Moreover, metatheoretical results about
the semantics of traditional MPST systems (Deniélou and Yoshida, 2013; Honda
et al., 2016) immediately translate to ours (_semantics preservation_): from
distributed specifications in terms of subprotocols, our theory synthesises a
global protocol for the whole system; we prove once and for all, as a
metatheoretical result, that such a global protocol is a traditionally
well-formed global type.
_Contributions._ This paper develops _a theory of compositionality for
distributed protocol description in MPST systems_ and introduces the following
novel MPST concepts:
* •
_hybrid types_ , a generalisation of both global and local types, for the
specification of communicating subprotocols (Definition 3.3);
* •
_generalised projection_ onto sets of roles (Definition 3.6), which behaves
well with respect to set inclusion (Theorem 4.9);
* •
_localiser_ (Definition 3.8), a novel operator that isolates, in a
subprotocol, the inter-component communication from the intra-component one;
* •
_compatibility_ based on projection and localiser (Equation C, §4.1);
* •
_build-back_ , an explicit algorithm to compose _two or more_ subprotocols
into a more general one (Definitions 4.1 and 4.6 and Theorems 4.4 and 4.7).
To the best of our knowledge, our approach is the first that:
* •
enables the correct composition of _two or more_ subprotocols into a global
type, while capturing _full MPST expressiveness_ : branching, parallel
composition, and recursion (Corollary 4.10);
* •
operates at a purely syntactic level, thus retaining previously developed MPST
semantics results (_semantics preservation_); correctness is guaranteed by
compositionality resulting in a traditionally well-formed global type and
preserving endpoint projection (Corollary 4.10);
* •
provides a notion of compatibility that is _more expressive than dual
input/output matching_ and hence suitable for extension to more sophisticated
MPST systems (Example 5.7).
We discuss the applicability and generality of our work through _case
studies_. (1) We give a distributed specification of the real-world protocol
OAuth 2.0 (Hardt, 2012), which showcases modularity features of our theory
(§5.2) and leads to an optimisation (§5.3, Corollary 5.1). (2) We extend our
theory beyond traditional MPST, to delegation and explicit connections (§5.4).
_Outline._ §2 gives an overview of our development, with a simple, but
realistic, application scenario. §3 and §4 are dedicated to our technical
contributions. §5 tests the strengths of our theory with case studies. §6
discusses in detail, with examples, related work. §8 concludes with future
work. Further detail for definitions and proofs can be found in Appendix B.
## 2\. Overview of Our Development
This work achieves _distributed protocol specification_ for MPST: different
_(protocol) designers_ specify protocols (naively as global types, Figure 1)
for different components of the communicating system; then, these compose into
a single global type for the whole system. Composition must _preserve
(endpoint) projection_ (indicated with $\upharpoonright$): local types, for
the distributed implementation of _roles_ (or _participants_), need to be
obtained by projection of each separate component, but also be projections of
the same global type (obtained by composition), if we want semantic guarantees
(e.g., deadlock freedom) to hold. In other words, our
protocol-compositionality theory relies on multiparty compatibility,
guaranteed by a well-formed global type, and on semantics proofs from previous
work (e.g., (Deniélou and Yoshida, 2013)). This approach makes our development
_semantics-preserving_ : it endows existing MPST systems with distributed
protocol specification.
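In symbols, the property sketched in Figure 1 reads as follows (a schematic
paraphrase on our part; the precise statement is Corollary 4.10): if $G$ is
the composition of the component protocols $G_{1},\dots,G_{n}$, and role
$\mathbf{p}$ is internal to component $i$, then
$G\upharpoonright\mathbf{p} \;=\; G_{i}\upharpoonright\mathbf{p},$
so each designer can project, and each implementer can type-check, against
their own component alone.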
[Figure 1 diagram: global types $G_{1},\dots,G_{n}$, one per component, are
each projected ($\upharpoonright$) onto their local types
$L_{11},\dots,L_{1k_{1}},\dots,L_{n1},\dots,L_{nk_{n}}$; composition yields a
single global type $G$ whose projections are those same local types.]
Figure 1. Distributed Protocol Specification, Naively
Traditionally, a global type is a “closed” standalone entity that describes a
one-component communication protocol: all interactions among participants are
_internal_ to such component. We consider instead the distributed
specification of a system, in terms of multiple components (disjoint sets of
participants). Each participant can send both internal messages, within its
component, and _external_ ones, to other components. Therefore, we “open” the
syntax of global types, so that it allows not only for intra-component
communication, but also for inter-component communication. By extending the
syntax of global types with an interface for inter-component communication, we
obtain _hybrid types_. The communication protocol of each component of the
system is specified as a hybrid type; multiple components can be composed into
a well-formed global type thanks to a novel notion of _compatibility, based on
projection._
In what follows, we consider a three-component system: a company with three
departments, for each of which a different _(protocol) designer_ is in charge
of describing the communication protocol. The departments, with respective
(internal) roles, are the following: (a) the _strategy team_ , the roles of
which are the director d of the company and the advertisement team ad; (b)
the _sales department_ , with a salesman s and the website administrator w;
(c) the _finance department_ , with two employees,
$\mathbf{f}_{1}$ and $\mathbf{f}_{2}$.
We assume that internal roles of different components are _distinct_.
_Global Types for Intra-Component Communication._ When no inter-component
communication happens, each protocol designer gives a global type for the
_internal_ communication of their department (Figure 2(a)). In $G_{\sf str}$,
the global type for the strategy department, the director d sends the product
ID to the advertisement team ad; then, d gives an ok or asks ad to stop. For
the sales department ($G_{\sf sales}$), s decides whether w can publish some
content on the company website. In the finance department ($G_{\sf fin}$),
$\mathbf{f}_{1}$ sends the product ID to $\mathbf{f}_{2}$ and gets back either
a price or a stop.
$\begin{array}{l}
G_{\sf str} \;=\; \mathbf{d}\to\mathbf{ad}:\mathit{prod}(\mathtt{nat}).\;\mathbf{d}\to\mathbf{ad}:\{\mathit{ok}.\mathtt{end},\;\mathit{stop}.\mathtt{end}\}\\
G_{\sf sales} \;=\; \mathbf{s}\to\mathbf{w}:\{\mathit{publish}.\mathtt{end},\;\mathit{stop}.\mathtt{end}\}\\
G_{\sf fin} \;=\; \mathbf{f}_{1}\to\mathbf{f}_{2}:\mathit{prod}(\mathtt{nat}).\;\mathbf{f}_{2}\to\mathbf{f}_{1}:\{\mathit{price}(\mathtt{nat}).\mathtt{end},\;\mathit{stop}.\mathtt{end}\}
\end{array}$
(a) Global Types for Internal Communication
$\begin{array}{l}
H^{\prime}_{\sf str} \;=\; \mathbf{d}\to\mathbf{ad}:\mathit{prod}(\mathtt{nat}).\;\underline{{\mathbf{d}!\mathbf{s}};\mathit{prod}(\mathtt{nat})}.\;\underline{{\mathbf{d}!\mathbf{f}_{1}};\mathit{prod}(\mathtt{nat})}.\;\mathbf{d}\to\mathbf{ad}:\{\mathit{ok}.\mathtt{end},\;\mathit{stop}.\mathtt{end}\}\\
H^{\prime}_{\sf sales} \;=\; \underline{{\mathbf{d}?\mathbf{s}};\mathit{prod}(\mathtt{nat})}.\;\mathbf{s}\to\mathbf{w}:\{\mathit{publish}.\mathtt{end},\;\mathit{stop}.\mathtt{end}\}\\
H^{\prime}_{\sf fin} \;=\; \underline{{\mathbf{d}?\mathbf{f}_{1}};\mathit{prod}(\mathtt{nat})}.\;\mathbf{f}_{1}\to\mathbf{f}_{2}:\mathit{prod}(\mathtt{nat}).\;\mathbf{f}_{2}\to\mathbf{f}_{1}:\{\mathit{price}(\mathtt{nat}).\mathtt{end},\;\mathit{stop}.\mathtt{end}\}
\end{array}$
(b) Hybrid Types for Basic Inter-Department Interactions
$\begin{array}{l}
H_{\sf str} \;=\; \mathbf{d}\to\mathbf{ad}:\mathit{prod}(\mathtt{nat}).\;{\mathbf{d}!\mathbf{s}};\mathit{prod}(\mathtt{nat}).\;{\mathbf{d}!\mathbf{f}_{1}};\mathit{prod}(\mathtt{nat}).\;\mu X.\;{\mathbf{f}_{1}?\mathbf{d}};\{\mathit{ok}.\;\mathbf{d}\to\mathbf{ad}:\mathit{go}.\mathtt{end},\;\mathit{wait}.\;\mathbf{d}\to\mathbf{ad}:\mathit{wait}.X\}\\
H_{\sf sales} \;=\; {\mathbf{d}?\mathbf{s}};\mathit{prod}(\mathtt{nat}).\;\mu X.\;{\mathbf{f}_{1}?\mathbf{s}};\{\mathit{price}(\mathtt{nat}).\;\mathbf{s}\to\mathbf{w}:\mathit{publish}.\mathtt{end},\;\mathit{wait}.\;\mathbf{s}\to\mathbf{w}:\mathit{wait}.X\}\\
H_{\sf fin} \;=\; {\mathbf{d}?\mathbf{f}_{1}};\mathit{prod}(\mathtt{nat}).\;\mathbf{f}_{1}\to\mathbf{f}_{2}:\mathit{prod}(\mathtt{nat}).\;\mu X.\;\mathbf{f}_{2}\to\mathbf{f}_{1}:\left\{\begin{array}{l}\mathit{price}(\mathtt{nat}).\;{\mathbf{f}_{1}!\mathbf{d}};\mathit{ok}.\;{\mathbf{f}_{1}!\mathbf{s}};\mathit{price}(\mathtt{nat}).\mathtt{end},\\ \mathit{wait}.\;{\mathbf{f}_{1}!\mathbf{d}};\mathit{wait}.\;{\mathbf{f}_{1}!\mathbf{s}};\mathit{wait}.X\end{array}\right\}
\end{array}$
(c) More Expressive Hybrid Types
Figure 2. Types for Interactions in the Company
_Hybrid Types for Inter-Component Interactions._ The components of a
distributed system are expected to communicate with each other. Therefore, we
introduce a _hybrid syntax of global and local constructs_ (and we call
_hybrid types_ the terms of this syntax): to the global-type syntax (e.g.,
$\mathbf{d}\to\mathbf{ad}$), for intra-component communication, we add local
send and receive constructs (e.g., ${\mathbf{d}!\mathbf{s}}$ and
${\mathbf{d}?\mathbf{f}_{1}}$), as the interface for inter-component
communication. In our example, a first message is sent by d, with a product ID
prod, externally, to the other two departments (Figure 2(b)):
${\mathbf{d}!\mathbf{s}}$ and ${\mathbf{d}!\mathbf{f}_{1}}$. These are
_dually_ received by the sales team (${\mathbf{d}?\mathbf{s}}$) and by the
finance team (${\mathbf{d}?\mathbf{f}_{1}}$), respectively (as underlined in
Figure 2(b)).
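Composition matches each external send with its dual external receive and
replaces the pair with an ordinary global interaction; for instance (our own
paraphrase of the build-back intuition of §4, consistent with the
compatibility type $G^{\dagger}$ at the end of this section):
$\underbrace{{\mathbf{d}!\mathbf{s}};\mathit{prod}(\mathtt{nat})}_{\text{in }H^{\prime}_{\sf str}} \;\text{ and }\; \underbrace{{\mathbf{d}?\mathbf{s}};\mathit{prod}(\mathtt{nat})}_{\text{in }H^{\prime}_{\sf sales}} \quad\text{compose into}\quad \mathbf{d}\to\mathbf{s}:\mathit{prod}(\mathtt{nat}).$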
###### Remark 2.1 (Generalising Global and Local Types).
We observe that hybrid types are a generalisation of both global and local
types. A global type is a “closed” hybrid type, where only internal messages
are exchanged. The intuition for local types is more subtle: a local type can
be interpreted as a basic, _one-participant component_ of a communicating
system, which communicates only externally, with participants of other
components. E.g., the local type
${?\mathbf{p}};\ell_{1}.\;{!\mathbf{r}};\ell_{2}.\mathtt{end}$,
for the participant q, can be written as the hybrid type
${\mathbf{p}?\mathbf{q}};\ell_{1}.\;{\mathbf{q}!\mathbf{r}};\ell_{2}.\mathtt{end}$:
q is the only internal participant that first receives from p and then sends
to r (both p and r are of other components). Being able to express global and
local types as hybrid types is fundamental: it makes our results correct and
compatible with existing MPST theories (see Remark 2.2 below, and Corollary
4.10 in §4).
_Expressiveness and Compatibility._ We describe a more expressive version of
the protocols (Figure 2(c)) that combines inter-component messages with
branching and recursion. Figure 3 shows the communication for each component
of the system, as described by the protocol designer of each department. We
imagine that the price of the product prod is decided within the finance
department: the finance expert $\mathbf{f}_{2}$ either gives a price or asks
all processes to wait in a recursive loop; then, the decision is communicated
to the other departments. Figure 3(c) shows the execution of the protocol for
the finance department, where $\mathbf{f}_{2}$ makes such a choice. Figure
2(c) shows the formal specifications, as hybrid types, of the three protocols.
We observe that, to compose $H_{\sf str}$, $H_{\sf sales}$, and $H_{\sf fin}$
(and, in general, to compose more than two communicating protocols), dual
relations are not sufficient for compatibility (for a broader discussion see
§6). Our
proposal is to give separately the specification of a _communication
discipline for inter-component interactions only_ : intra-component
interactions are left to the designer of each respective component and some
_chief designer_ gives the description of one more protocol, for global
guidance of inter-component communication. For our example, we collect all the
interactions between any two different departments in the protocol in Figure
4(a), and we formalise it with a _compatibility (global) type_ :
$G^{\dagger}:=\textbf{d}\to\textbf{s}:\textit{prod}({\texttt{nat}}).\;\textbf{d}\to\textbf{f}_{1}:\textit{prod}({\texttt{nat}}).\;\mu X.\;\textbf{f}_{1}\to\textbf{d}:\left\{\begin{array}{l}\textit{ok}.\;\textbf{f}_{1}\to\textbf{s}:\textit{price}({\texttt{nat}}).\mathtt{end},\\ \textit{wait}.\;\textbf{f}_{1}\to\textbf{s}:\textit{wait}.X\end{array}\right\}$
_Compatibility_ of subprotocols $H_{\sf str}$, $H_{\sf sales}$, and $H_{\sf fin}$
with $G^{\dagger}$ is achieved by asking that the _(generalised) projection_
${\upharpoonright}$ of $G^{\dagger}$ with respect to the internal participants
of each subprotocol is equal to the _localisation_ ${\sf{loc}}$ of that
subprotocol, where “localising a protocol” means isolating its inter-component
communication (by retaining only its _local_ constructs). E.g., we consider
$H_{\sf str}$ and its internal participants $\{\textbf{d},\textbf{ad}\}$:
$G^{\dagger}{\upharpoonright}_{\{\textbf{d},\textbf{ad}\}}={\sf{loc}}\;H_{\sf str}=\textbf{d}!\textbf{s};\textit{prod}({\texttt{nat}}).\;\textbf{d}!\textbf{f}_{1};\textit{prod}({\texttt{nat}}).\;\mu X.\;\textbf{f}_{1}?\textbf{d};\left\{\textit{ok}.\mathtt{end},\;\textit{wait}.X\right\}$
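To make the localiser concrete, here is a minimal Haskell sketch. It is not the paper's formal definition (that comes in §3 and §4): the datatype `Hybrid`, the constructor names, the omission of payload sorts, and the conservative treatment of erased choices are all our own simplifying assumptions.

```haskell
import Data.List (nub)

type Part  = String   -- participants: d, ad, s, f1, ...
type Label = String   -- message labels: prod, ok, wait, ...

-- Hybrid types: global messages (intra-component) mixed with local
-- sends/receives (inter-component). `Send p q` reads "p sends to q",
-- `Recv p q` reads "q receives from p", matching the d!s / f1?d notation.
-- Payload sorts are elided for brevity.
data Hybrid
  = End
  | Var String                        -- recursion variable X
  | Mu  String Hybrid                 -- mu X. H
  | Msg  Part Part [(Label, Hybrid)]  -- p -> q : {l_i . H_i}   (global)
  | Send Part Part [(Label, Hybrid)]  -- p ! q ; {l_i . H_i}    (local)
  | Recv Part Part [(Label, Hybrid)]  -- p ? q ; {l_i . H_i}    (local)
  deriving (Eq, Show)

-- loc: keep the local constructs, erase the global (intra-component)
-- messages. When an erased choice has several branches, this sketch
-- conservatively demands that all branches localise identically; the
-- paper's localiser is the authoritative definition.
loc :: Hybrid -> Maybe Hybrid
loc End           = Just End
loc (Var x)       = Just (Var x)
loc (Mu x h)      = Mu x <$> loc h
loc (Send p q bs) = Send p q <$> traverse (traverse loc) bs
loc (Recv p q bs) = Recv p q <$> traverse (traverse loc) bs
loc (Msg _ _ bs)  = do
  hs <- traverse (loc . snd) bs
  case nub hs of
    [h] -> Just h
    _   -> Nothing

-- The localisation of H_str computed in the text, as a value:
--   d!s; prod. d!f1; prod. mu X. f1?d; {ok.end, wait.X}
locHstr :: Hybrid
locHstr =
  Send "d" "s" [("prod",
    Send "d" "f1" [("prod",
      Mu "X" (Recv "f1" "d" [("ok", End), ("wait", Var "X")]))])]
```

On any encoding of $H_{\sf str}$ whose intra-component messages localise away, `loc` returns `Just locHstr`, mirroring the equation above.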
Analogously we require that
$G^{\dagger}{\upharpoonright}_{\{\textbf{s},\textbf{w}\}}={\sf{loc}}\;H_{\sf sales}$ and
$G^{\dagger}{\upharpoonright}_{\{\textbf{f}_{1},\textbf{f}_{2}\}}={\sf{loc}}\;H_{\sf fin}$.
We observe that not only have we enriched the syntax of global types with
local constructs to get hybrid types, but we have also _generalised
projection_ to _sets_ of participants, introduced a new operator (the _localiser_)
to isolate external communication, and, based on these, defined compatibility.
Figure 3. Communication for Each Department in the Company: (a) Strategy Department; (b) Sales Department; (c) Finance Department. (Legend: internal/external participants, internal/external interactions, choice, loop/recursion.)
_Compositionality and Correctness._ Our theory (§3 and §4) provides an
explicit function $\mathcal{B}$
that _builds back_ a single global type for the communication in the company,
from the distributed specification above:
$G=\mathcal{B}\;(G^{\dagger})\;([H_{\sf str},H_{\sf sales},H_{\sf fin}])$.
It holds that
$G{\upharpoonright}_{\textbf{d}}=H_{\sf str}{\upharpoonright}_{\textbf{d}}$,
$G{\upharpoonright}_{\textbf{f}_{1}}=H_{\sf fin}{\upharpoonright}_{\textbf{f}_{1}}$,
and analogously for all participants: _projection is preserved_. A figure
representing such a $G$ for our example can be found in Appendix A.
More generally (see Figure 4(b)), from a compatibility type $G^{\dagger}$
and hybrid types $H_{i}$
for each component (with set of internal participants $E_{i}$), such that they
are compatible,
$G^{\dagger}{\upharpoonright}_{E_{i}}={\sf{loc}}\;H_{i}$,
our theory synthesises a global type
$G=\mathcal{B}\;(G^{\dagger})\;([H_{1},\dots,H_{n}])$
(Definition 4.6 and Theorem 4.7). Correctness of our theory is given by
Corollary 4.10. Formally, this result guarantees that _the local types,
projections of $G$
on each participant, are the same as those obtained from the respective subprotocol
$H_{i}$_:
$G{\upharpoonright}_{\textbf{p}}=H_{i}{\upharpoonright}_{\textbf{p}}$
if p is a participant of the $i$-th subprotocol $H_{i}$.
We have achieved _distributed protocol specification_: we can both obtain
local types for implementation in a distributed fashion, by projection of the
respective component $H_{i}$
(no designer or programmer needs the full knowledge of $G$),
and rest assured that all local types (for all participants, in all
components) are projections of a single, well-formed global type. This makes
our development compatible with existing MPST theories, with no need for
developing new semantics (_semantics preservation_): a well-formed global type
projecting on all participants gives traditional _multiparty compatibility_,
which, thanks to the semantics results in the literature (Deniélou and
Yoshida, 2013; Coppo et al., 2015; Honda et al., 2016), leads to guarantees
such as liveness and deadlock freedom.
###### Remark 2.2 (Hybrid Types and Generalised Projection).
With reference to Figure 4(b), let us consider the set of participants $E_{i}$
of the generic $H_{i}$,
and $\textbf{p}\in E_{i}$. Generalised projection takes a hybrid type and
returns a hybrid type;
since _global and local types are hybrid types_ (Remark 2.1), e.g., we can
project $G^{\dagger}$
onto $E_{i}$ for compatibility
($G^{\dagger}{\upharpoonright}_{E_{i}}={\sf{loc}}\;H_{i}$),
or $G$
onto $E_{i}$ and verify that it is equal to $H_{i}$
(see Theorem 4.7, §4). Most importantly, Theorem 4.9 in §4 guarantees that
projection composes over set inclusion,
$G{\upharpoonright}_{\textbf{p}}=(G{\upharpoonright}_{E_{i}}){\upharpoonright}_{\textbf{p}}=H_{i}{\upharpoonright}_{\textbf{p}}$:
by projecting $H_{i}$
and $G$
onto the participant p, we obtain the same local type for p. Namely, we can
obtain local types from the specific component $H_{i}$
and then _implement them in a distributed fashion_, but they are also all
projections of a well-formed global type $G$.
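The composition property of Theorem 4.9 is easy to prototype on the running example. The sketch below extends the previous one with a generalised projection `proj` over sets of participants (the datatype is repeated so the snippet stands alone); the encoding, the elided payload sorts, and the conservative erasure of branches are again our own assumptions, not the paper's definitions, and we demonstrate the property on $G^{\dagger}$ simply because the text spells that type out in full.

```haskell
import           Data.List (nub)
import           Data.Set  (Set)
import qualified Data.Set  as Set

type Part  = String
type Label = String

data Hybrid
  = End | Var String | Mu String Hybrid
  | Msg  Part Part [(Label, Hybrid)]  -- p -> q : {l_i . H_i}
  | Send Part Part [(Label, Hybrid)]  -- p ! q ; {l_i . H_i}
  | Recv Part Part [(Label, Hybrid)]  -- p ? q ; {l_i . H_i}
  deriving (Eq, Show)

-- Generalised projection onto a set e of participants: a global message
-- stays global when both endpoints are in e, becomes a send (receive)
-- when only the sender (receiver) is, and is erased otherwise. Erased
-- choices must collapse to a single branch, as in the localiser sketch.
proj :: Set Part -> Hybrid -> Maybe Hybrid
proj e = go
 where
  go End      = Just End
  go (Var x)  = Just (Var x)
  go (Mu x h) = Mu x <$> go h
  go (Msg p q bs)
    | p `Set.member` e && q `Set.member` e = Msg  p q <$> branches bs
    | p `Set.member` e                     = Send p q <$> branches bs
    | q `Set.member` e                     = Recv p q <$> branches bs
    | otherwise                            = erase bs
  go (Send p q bs)
    | p `Set.member` e = Send p q <$> branches bs
    | otherwise        = erase bs
  go (Recv p q bs)
    | q `Set.member` e = Recv p q <$> branches bs
    | otherwise        = erase bs
  branches = traverse (traverse go)
  erase bs = do
    hs <- traverse (go . snd) bs
    case nub hs of
      [h] -> Just h
      _   -> Nothing

-- The compatibility type G† of the running example (payload sorts elided).
gDagger :: Hybrid
gDagger =
  Msg "d" "s"  [("prod",
  Msg "d" "f1" [("prod",
  Mu "X" (Msg "f1" "d"
    [ ("ok",   Msg "f1" "s" [("price", End)])
    , ("wait", Msg "f1" "s" [("wait",  Var "X")]) ]))])]

-- Composition over set inclusion, on this instance: projecting onto
-- {d,ad} and then onto d agrees with projecting directly onto d.
composes :: Bool
composes =
  (proj (Set.fromList ["d", "ad"]) gDagger >>= proj (Set.fromList ["d"]))
    == proj (Set.fromList ["d"]) gDagger
```

Here `composes` evaluates to `True`: both sides yield the localisation of $H_{\sf str}$ further projected onto d.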
To summarise (see Figure 4(b)), our proposal for distributed protocol
specification is the following:
1. (1)
a different designer specifies, for each component of the system, a hybrid
type $H_{i}$;
2. (2)
a chief designer gives the compatibility type $G^{\dagger}$
to discipline inter-component interactions;
3. (3)
compatibility is a simple equality check:
$G^{\dagger}{\upharpoonright}_{E_{i}}={\sf{loc}}\;H_{i}$
($E_{i}$ is the set of participants for $H_{i}$).
In return, as a metatheoretical result proved once and for all by our theory,
the designers obtain
$\mathcal{B}\;(G^{\dagger})\;([H_{1},\dots,H_{n}])$,
_a global type for the whole communication_, for which _projections are
preserved_ (and, hence, MPST semantic guarantees hold).
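Operationally, points (1)-(3) form a short pipeline: run the equality check of point (3) for every component and, on success, apply $\mathcal{B}$. The Haskell sketch below is parameterised over the three operators, so it fixes none of their definitions (and, unlike the earlier snippets, takes them to be total functions for simplicity); the function name `composeSpec` and the `Maybe` interface are our own.

```haskell
import Data.Set (Set)

-- A hedged sketch of the distributed-specification workflow: check
-- G† |>_{E_i} == loc H_i for every component, then build back G.
composeSpec
  :: Eq h
  => (Set p -> h -> h)  -- generalised projection (|>)
  -> (h -> h)           -- localiser (loc)
  -> (h -> [h] -> g)    -- build-back (B, Definition 4.6)
  -> h                  -- compatibility type G†
  -> [(Set p, h)]       -- components (E_i, H_i), one per designer
  -> Maybe g            -- the synthesised global type G, if compatible
composeSpec proj localise buildBack gDagger comps
  | all compatible comps = Just (buildBack gDagger (map snd comps))
  | otherwise            = Nothing
  where
    compatible (eI, hI) = proj eI gDagger == localise hI
```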
In §3 and §4 we detail our compositionality theory, including generalised
projection, localiser, compatibility, build-back, and correctness results.
Figure 4. Inter-Component Communication and Compatibility via Projection: (a) Inter-Component Communication; (b) Distributed Protocol Specification, Compatibility through Projection.
## 3\. Hybrid Types for Protocol Specification
### 3.1. Background: Preliminaries of Multiparty Session Types
We give a short summary of _multiparty session types_ (Honda et al., 2016;
Scalas et al., 2019; Coppo et al., 2015; Yoshida and Gheri, 2020);
specifically, our theory is based on the formulation in (Deniélou and Yoshida,
2013), extended to parallel composition of global types. The notation for our
MPST system is standard (directly adapted from (Castro-Perez et al., 2021)).
Atoms of our syntax are: a set of _roles_ (or _participants_), ranged over by
$\textbf{p},\textbf{q},\dots$;
a set of _(type) variables_, ranged over by $X,Y,\dots$;
and a set of labels, ranged over by
$\ell_{0},\ell_{1},\dots,\ell_{i},\ell_{j},\dots$.
###### Definition 3.1 (Sorts, Global Types, and Local Types).
Sorts, global types, and local types, ranged over by ${\texttt{S}}$, $G$, and $L$
respectively, are inductive datatypes generated by:
$\begin{array}[]{l}{\texttt{S}}::={\texttt{unit}}~~\mathbf{|\!\!|}~~{\texttt{nat}}~~\mathbf{|\!\!|}~~{\texttt{int}}~~\mathbf{|\!\!|}~~{\texttt{bool}}~~\mathbf{|\!\!|}~~{\texttt{S}}\texttt{+}{\texttt{S}}~~\mathbf{|\!\!|}~~{\texttt{S}}\texttt{*}{\texttt{S}}\qquad G::=\mathtt{end}~~\mathbf{|\!\!|}~~X~~\mathbf{|\!\!|}~~\mu X.G~~\mathbf{|\!\!|}~~\textbf{p}\to\textbf{q}:\{\ell_{i}({\texttt{S}}_{i}).G_{i}\}_{i\in I}~~\mathbf{|\!\!|}~~G_{1}\,|\,G_{2}\\ L::=\mathtt{end}~~\mathbf{|\!\!|}~~X~~\mathbf{|\!\!|}~~\mu X.L~~\mathbf{|\!\!|}~~!\textbf{q};\{\ell_{i}({\texttt{S}}_{i}).L_{i}\}_{i\in I}~~\mathbf{|\!\!|}~~?\textbf{p};\{\ell_{i}({\texttt{S}}_{i}).L_{i}\}_{i\in I}\end{array}$
where $\textbf{p}\neq\textbf{q}$, $I\neq\emptyset$, and $\ell_{i}\neq\ell_{j}$
when $i\neq j$, for all $i,j\in I$, in
$\textbf{p}\to\textbf{q}:\{\ell_{i}({\texttt{S}}_{i}).G_{i}\}_{i\in I}$,
$!\textbf{q};\{\ell_{i}({\texttt{S}}_{i}).L_{i}\}_{i\in I}$, and
$?\textbf{p};\{\ell_{i}({\texttt{S}}_{i}).L_{i}\}_{i\in I}$.
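The grammar transcribes directly into algebraic datatypes. In the hedged sketch below (our own encoding, consistent with the earlier snippets, not part of the paper), the side conditions of the definition, together with the disjointness requirement on parallel composition discussed below, become an executable well-formedness check.

```haskell
import Data.List (nub, intersect)

type Part  = String
type Label = String

-- Sorts S of Definition 3.1.
data Sort = SUnit | SNat | SInt | SBool | SSum Sort Sort | SPair Sort Sort
  deriving (Eq, Show)

-- Global types G.
data G = GEnd | GVar String | GMu String G
       | GMsg Part Part [(Label, Sort, G)]  -- p -> q : {l_i(S_i).G_i}
       | GPar G G                           -- G_1 | G_2
  deriving (Eq, Show)

-- Local types L.
data L = LEnd | LVar String | LMu String L
       | LSend Part [(Label, Sort, L)]      -- !q ; {l_i(S_i).L_i}
       | LRecv Part [(Label, Sort, L)]      -- ?p ; {l_i(S_i).L_i}
  deriving (Eq, Show)

-- Choices must be nonempty with pairwise-distinct labels.
distinctLabels :: [(Label, s, t)] -> Bool
distinctLabels bs = not (null bs) && length ls == length (nub ls)
  where ls = [l | (l, _, _) <- bs]

partsG :: G -> [Part]
partsG GEnd          = []
partsG (GVar _)      = []
partsG (GMu _ g)     = partsG g
partsG (GPar g1 g2)  = partsG g1 ++ partsG g2
partsG (GMsg p q bs) = p : q : concat [partsG g | (_, _, g) <- bs]

-- Side conditions: p /= q, nonempty distinct-labelled choices, and
-- disjoint participants across parallel composition.
wellFormedG :: G -> Bool
wellFormedG GEnd          = True
wellFormedG (GVar _)      = True
wellFormedG (GMu _ g)     = wellFormedG g
wellFormedG (GPar g1 g2)  =
  null (partsG g1 `intersect` partsG g2) && wellFormedG g1 && wellFormedG g2
wellFormedG (GMsg p q bs) =
  p /= q && distinctLabels bs && all (\(_, _, g) -> wellFormedG g) bs

wellFormedL :: L -> Bool
wellFormedL LEnd         = True
wellFormedL (LVar _)     = True
wellFormedL (LMu _ l)    = wellFormedL l
wellFormedL (LSend _ bs) = distinctLabels bs && all (\(_, _, l) -> wellFormedL l) bs
wellFormedL (LRecv _ bs) = distinctLabels bs && all (\(_, _, l) -> wellFormedL l) bs
```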
The _global message_
$\textbf{p}\to\textbf{q}:\{\ell_{i}({\texttt{S}}_{i}).G_{i}\}_{i\in I}$
describes a protocol where participant p sends to q one message with label
$\ell_{i}$ and a value of sort ${\texttt{S}}_{i}$ as payload, for some $i\in I$; then,
depending on which $\ell_{i}$ was sent by p, the protocol continues as $G_{i}$.
The type $\mathtt{end}$ represents a _terminated protocol_. A _recursive
protocol_ is modelled as $\mu X.G$, where the recursion _variable_ $X$ is
bound. The _parallel_ construct $G_{1}\,|\,G_{2}$ describes a protocol
composed of two independent ones. The participants of $G_{1}$ and $G_{2}$ are
required to be disjoint: no communication happens between $G_{1}$ and
$G_{2}$, but only internally in each one of them (for a broader discussion,
see §5.1).
The intuition for local types
$\definecolor{currbkp}{rgb}{0,0,0}\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}{\mathtt{end}}\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}$,
$\definecolor{currbkp}{rgb}{0,0,0}\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}{X}\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}$
and
$\definecolor{currbkp}{rgb}{0,0,0}\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}{\mu\definecolor{currbkp}{rgb}{0,0,1}\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}{X}\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}.\definecolor{currbkp}{rgb}{0,0,1}\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}{L}\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}}\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}$
is the same as for global types. The _send type_
$\definecolor{currbkp}{rgb}{0,0,0}\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}{!{{{\color[rgb]{0.00390625,0.47265625,0.43359375}\definecolor[named]{pgfstrokecolor}{rgb}{0.00390625,0.47265625,0.43359375}\textbf{{q}}}}};\\{{{\color[rgb]{0.28125,0.23828125,0.55078125}\definecolor[named]{pgfstrokecolor}{rgb}{0.28125,0.23828125,0.55078125}{{{\color[rgb]{0.28125,0.23828125,0.55078125}\definecolor[named]{pgfstrokecolor}{rgb}{0.28125,0.23828125,0.55078125}{\ell}}}_{i}}}}({\color[rgb]{0.28125,0.23828125,0.546875}\definecolor[named]{pgfstrokecolor}{rgb}{0.28125,0.23828125,0.546875}{{\color[rgb]{0.28125,0.23828125,0.546875}\definecolor[named]{pgfstrokecolor}{rgb}{0.28125,0.23828125,0.546875}{\texttt{\small
S}}}_{i}}}).\definecolor{currbkp}{rgb}{0,0,1}\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}{L}\color[rgb]{0,0,1}{}_{i}\\}_{i\in
I}}\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}$ says
that the participant implementing the type must choose a labelled message to
send to q; if the participant chooses the label
${{\color[rgb]{0.28125,0.23828125,0.55078125}\definecolor[named]{pgfstrokecolor}{rgb}{0.28125,0.23828125,0.55078125}{\ell_{i}}}}$,
it must include in the message to q a payload value of sort
${\color[rgb]{0.28125,0.23828125,0.546875}\definecolor[named]{pgfstrokecolor}{rgb}{0.28125,0.23828125,0.546875}{\texttt{\small
S}}}_{i}$, and continue as prescribed by
$\definecolor{currbkp}{rgb}{0,0,0}\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}{\definecolor{currbkp}{rgb}{0,0,1}\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}{L}\color[rgb]{0,0,1}{}_{i}}\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}$.
The _receive type_
$\definecolor{currbkp}{rgb}{0,0,0}\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}{?{{{\color[rgb]{0.00390625,0.47265625,0.43359375}\definecolor[named]{pgfstrokecolor}{rgb}{0.00390625,0.47265625,0.43359375}\textbf{{p}}}}};\\{{{\color[rgb]{0.28125,0.23828125,0.55078125}\definecolor[named]{pgfstrokecolor}{rgb}{0.28125,0.23828125,0.55078125}{{{\color[rgb]{0.28125,0.23828125,0.55078125}\definecolor[named]{pgfstrokecolor}{rgb}{0.28125,0.23828125,0.55078125}{\ell}}}_{i}}}}({\color[rgb]{0.28125,0.23828125,0.546875}\definecolor[named]{pgfstrokecolor}{rgb}{0.28125,0.23828125,0.546875}{{\color[rgb]{0.28125,0.23828125,0.546875}\definecolor[named]{pgfstrokecolor}{rgb}{0.28125,0.23828125,0.546875}{\texttt{\small
S}}}_{i}}}).\definecolor{currbkp}{rgb}{0,0,1}\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}{L}\color[rgb]{0,0,1}{}_{i}\\}_{i\in
I}}\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}$ requires
to wait to receive a value of sort
${\color[rgb]{0.28125,0.23828125,0.546875}\definecolor[named]{pgfstrokecolor}{rgb}{0.28125,0.23828125,0.546875}{\texttt{\small
S}}}_{i}$ (for some $i\in I$) from the participant p, via a message with label
${{\color[rgb]{0.28125,0.23828125,0.55078125}\definecolor[named]{pgfstrokecolor}{rgb}{0.28125,0.23828125,0.55078125}{\ell_{i}}}}$;
then the process continues as prescribed by
$\definecolor{currbkp}{rgb}{0,0,0}\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}{\definecolor{currbkp}{rgb}{0,0,1}\color[rgb]{0,0,1}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,1}{L}\color[rgb]{0,0,1}{}_{i}}\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}$.
We are interested in types that are (1) _guarded_ —e.g.,
$\definecolor{currbkp}{rgb}{0,0,0}\color[rgb]{.5,0,.5}\definecolor[named]{pgfstrokecolor}{rgb}{.5,0,.5}{\mu}\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\definecolor{currbkp}{rgb}{0,0,0}\color[rgb]{.5,0,.5}\definecolor[named]{pgfstrokecolor}{rgb}{.5,0,.5}{X}\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}.\allowbreak\definecolor{currbkp}{rgb}{0,0,0}\color[rgb]{.5,0,.5}\definecolor[named]{pgfstrokecolor}{rgb}{.5,0,.5}{{{\color[rgb]{0.00390625,0.47265625,0.43359375}\definecolor[named]{pgfstrokecolor}{rgb}{0.00390625,0.47265625,0.43359375}\textbf{{p}}}}\to{{\color[rgb]{0.00390625,0.47265625,0.43359375}\definecolor[named]{pgfstrokecolor}{rgb}{0.00390625,0.47265625,0.43359375}\textbf{{q}}}}:}\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\definecolor{currbkp}{rgb}{0,0,0}\color[rgb]{.5,0,.5}\definecolor[named]{pgfstrokecolor}{rgb}{.5,0,.5}{{{\color[rgb]{0.28125,0.23828125,0.55078125}\definecolor[named]{pgfstrokecolor}{rgb}{0.28125,0.23828125,0.55078125}{\ell}}}({\color[rgb]{0.28125,0.23828125,0.546875}\definecolor[named]{pgfstrokecolor}{rgb}{0.28125,0.23828125,0.546875}{\texttt{\small
nat}}}).\definecolor{currbkp}{rgb}{.5,0,.5}\color[rgb]{.5,0,.5}\definecolor[named]{pgfstrokecolor}{rgb}{.5,0,.5}{X}\color[rgb]{.5,0,.5}\definecolor[named]{pgfstrokecolor}{rgb}{.5,0,.5}}\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}$
is a valid global type, whereas
$\definecolor{currbkp}{rgb}{0,0,0}\color[rgb]{.5,0,.5}\definecolor[named]{pgfstrokecolor}{rgb}{.5,0,.5}{\mu}\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\definecolor{currbkp}{rgb}{0,0,0}\color[rgb]{.5,0,.5}\definecolor[named]{pgfstrokecolor}{rgb}{.5,0,.5}{X}\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\definecolor{currbkp}{rgb}{0,0,0}\color[rgb]{.5,0,.5}\definecolor[named]{pgfstrokecolor}{rgb}{.5,0,.5}{.}\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\definecolor{currbkp}{rgb}{0,0,0}\color[rgb]{.5,0,.5}\definecolor[named]{pgfstrokecolor}{rgb}{.5,0,.5}{X}\color[rgb]{0,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}$
is not—(detail in Appendix B.1, Definition B.1) and (2) _closed_ , i.e., all
variables are bound by
|
# On the boundedness of solutions of the difference equation $x_{n+1}=ax^{\alpha}_{n}+bx^{\alpha}_{n-1}$, $0<\alpha\leq 2$, and its application in medicine
Zeraoulia Rafik
University of Batna 2, Algeria
Department of Mathematics
Yabous, Khenchela
<EMAIL_ADDRESS>
&Alvaro Humberto Salas
Universidad Nacional de Colombia
Department of Physics
Bogota, Colombia
<EMAIL_ADDRESS>
&Lorenzo Martinez
Universidad Nacional de Colombia
Department of Mathematics
Manizales, Caldas
<EMAIL_ADDRESS>
###### Abstract
Recently, mathematicians have become interested in the theory of discrete dynamical systems, and in difference equations in particular: considerable work discussing the behavior of their solutions (boundedness and unboundedness) has been published across many areas of mathematics, with interesting results and applications in applied mathematics and physics. One of the discrete dynamics that has attracted most interest in the field is the rational dynamical system. In this paper we discuss the qualitative behavior and properties of the difference equation $x_{n+1}=ax^{2}_{n}+bx^{2}_{n-1}$, where $a$ and $b$ are two parameters, and we show its application to medicine.
_Keywords_ difference equation $\cdot$ boundedness $\cdot$ number theory $\cdot$ dynamical system
## 1 Introduction
The theory of difference equations finds applications in almost all areas of natural science [Daniel J. Duffy(2006)]. The fundamental role played by difference equations with discrete and continuous argument in the understanding of nonlinear dynamics and phenomena emerges ever more clearly; they are also used in combinatorics and in the approximation of solutions of partial differential equations [Y. Ordokhan1 ,S. Davaei far(2017)]. The increased interest in difference equations is partly due to their ease of handling: minimal computing and graphical tools are enough to see how the solutions of difference equations trace their bifurcations as the parameters change [Josef Diblik ,Miroslava Ruzickova , Barbora Vaclavikova (2008)]. This opens the way to a deeper understanding of invariant manifolds for linear and nonlinear dynamical systems. Nonlinear difference equations and systems are of wide interest due to their applications in real life; such equations appear naturally as mathematical models describing biological, physical and economical phenomena.
Although difference equations have very simple forms, it is extremely difficult to understand completely the global behavior of their solutions; one can refer to [Charlie y Routh(1992)], [Camouzis and G.Ladas(2008)], [E.A.Grove and G. Ladas(2005)] and the references therein. Difference equations have always played an important role in the construction and analysis of mathematical models in biology, ecology, physics, economics, etc. The study of nonlinear rational difference equations of higher order is of paramount importance, since we still know so little about such equations. In [A. Q. Khan,S. M. Qureshi (2020)], A. Q. Khan and S. M. Qureshi discussed dynamical properties of some rational systems of difference equations: they explored the equilibrium points, local and global dynamics, rate of convergence, instability and boundedness of positive solutions of such systems. As an application to modern science, namely to mathematical biology, they also explored the local dynamics about the equilibrium points of the discrete-time Levin's model. Meanwhile, A. Q. Khan studied the global dynamics of a $3\times 6$ exponential system of difference equations defined as:
$\begin{cases}x_{n+1}=\frac{\alpha_{1}+\beta_{1}\exp(-x_{n})}{\gamma_{1}+y_{n-1}}\\\
y_{n+1}=\frac{\alpha_{2}+\beta_{2}\exp(-y_{n})}{\gamma_{2}+z_{n-1}}\\\
z_{n+1}=\frac{\alpha_{3}+\beta_{3}\exp(-z_{n})}{\gamma_{3}+y_{n-1}}\end{cases}$
$n=0,1,....$
where parameters $\alpha_{i},\beta_{i},\gamma_{i}(i=1,2,3)$ and initial
conditions $x_{i},y_{i},z_{i}(i=0,-1)$ are nonnegative real numbers.
In [R. Abo-Zeid (2017)], R. Abo-Zeid discussed the global behavior of all solutions of the difference equation:
$x_{n+1}=\frac{x_{n}x_{n-1}}{ax_{n}+bx_{n-1}},\quad n\in\mathbb{N}_{0}$ (1)
where $a,b$ are real numbers and the initial conditions $x_{-1},x_{0}$ are real numbers. In this paper, we discuss the global behavior of the difference equation:
$x_{n+1}=Ax_{n}^{\alpha}+Bx_{n-1}^{\alpha}$ (2)
where $A,B$ are two real parameters and $\alpha$ is a real number such that $0<\alpha\leq 2$. For $\alpha=1$ the dynamics defined in (2) is obtained from (1) by the substitution $y_{n}=1/x_{n}$; the global behavior of all solutions of (1) is discussed in [R. Abo-Zeid (2017)] by R. Abo-Zeid. The difference equation (2) for $\alpha=1$, namely,
$y_{n+1}=by_{n}+ay_{n-1},n\in\mathbb{N}$ (3)
The characteristic equation of equation (3) is :
$\lambda^{2}-b\lambda-a=0$ (4)
Equation (4) has the two roots:
$\lambda_{1}=\frac{b-\sqrt{b^{2}+4a}}{2},\qquad\lambda_{2}=\frac{b+\sqrt{b^{2}+4a}}{2}$
When the roots are distinct, every solution of (3) has the form $y_{n}=c_{1}\lambda_{1}^{n}+c_{2}\lambda_{2}^{n}$, so the form of the solution depends on the sign of the discriminant $b^{2}+4a$. The following theorem [S.Elaydi(2005)] is useful in studying the solutions of the difference equation (3).
###### Theorem 1.1
The following statements hold:
* •
1) All solutions of (3) oscillate (about zero) if and only if the characteristic equation has no positive roots.
* •
2) All solutions of (3) converge to zero if and only if $\max\{|\lambda_{1}|,|\lambda_{2}|\}<1$.
For the boundedness of solutions of (3) one can refer to [R. Abo-Zeid (2017)]. Now, for $\alpha=2$, which is the aim of this paper, we are ready to do some analysis and discussion of the global behavior of the solutions of the difference equation:
$y_{n+1}=by_{n}^{2}+ay_{n-1}^{2},n\in\mathbb{N}$ (5)
## 2 Analysis and discussion:
Case 1: $|a|,|b|<1$. For this case we may use a nice trick: we assume $a,b$ are two bounded functions such that $a=\sin(\theta)$, $b=\cos(\theta)$, $\theta\in\mathbb{R}$; then the dynamics defined in (5) becomes:
$y_{n+1}=\cos(\theta)y_{n}^{2}+\sin(\theta)y_{n-1}^{2},\quad n\in\mathbb{N},\ \theta\in\mathbb{R}$ (6)
Now we may ask: for which values of $\theta$ does the equation $x_{n+1}=\cos(\theta)x^{2}_{n}+\sin(\theta)x^{2}_{n-1}$ have bounded solutions?
We ran a small computation. The plot below (see figure 1) is created as follows: for each point $r(\cos\theta,\sin\theta)$, we use $x_{-1}=0$, $x_{0}=r$ as initial values and $\theta$ as the parameter. The white area is where the iterates stay below $2$ (bounded) over the first 30 iterations. As we see, a small disc around the origin is white, meaning that there are small initial values that remain bounded for every $\theta$. This is not a proof, but the picture suggests it. Changing the cut-off to 40 iterations does not change the picture much.
Figure 1: Bounded solution for $x_{-1}=0,r=x_{0}$
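A minimal Mathematica sketch of this scan (the function name boundedQ, the grid step and the plotting choices are ours, not the original computation):
(* scan the initial value x0 = r over all directions theta;
   0 (white) = bounded, 1 (black) = escaping, as in figure 1 *)
boundedQ[r_, theta_, nMax_: 30, cutoff_: 2.] :=
 Module[{xPrev = 0., x = r, xNew},
  Catch[
   Do[xNew = Cos[theta] x^2 + Sin[theta] xPrev^2;
    If[Abs[xNew] > cutoff, Throw[False]];
    {xPrev, x} = {x, xNew}, {nMax}];
   True]]
ArrayPlot[
 Table[If[u == 0. && v == 0., 0,
   If[boundedQ[Sqrt[u^2 + v^2], ArcTan[u, v]], 0, 1]],
  {v, -1.5, 1.5, 0.02}, {u, -1.5, 1.5, 0.02}]]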
For an analytical proof we have: let $r=1/\sqrt{2}$. If $|x_{-1}|,|x_{0}|\leq r$, then the sequence is bounded for all $\theta$. Indeed, if $|x_{n-1}|,|x_{n}|\leq r$, then $|\cos(\theta)x_{n}^{2}+\sin(\theta)x_{n-1}^{2}|\leq r^{2}(|\cos\theta|+|\sin\theta|)\leq r^{2}\sqrt{2}=1/\sqrt{2}=r$, and the statement follows by induction.
Case 2: $|a|=|b|=1$. For this case we note, after running the same scan with the same initial conditions for each point $r(\cos\theta,\sin\theta)$ (20 iterations per point), that the obtained picture changes (it is rotated) and the white region becomes larger than in the preceding case, namely Case 1; this means that there are new initial values that stay bounded for every $\theta$ (the set of such initial values grows). See figure 2.
Figure 2: new initial values giving bounded solutions for every $\theta$, $x_{-1}=0$, $r=x_{0}$
Case 3: In this case we may assume $a^{\prime}=|a|k$, $b^{\prime}=|b|k$, $k\in\mathbb{R}$, with $|a^{\prime}|,|b^{\prime}|>1$; then our dynamics becomes:
$y_{n+1}=k|\cos(\theta)|y_{n}^{2}+k|\sin(\theta)|y_{n-1}^{2},\quad n\in\mathbb{N},\ \theta\in\mathbb{R}$ (7)
With the same data and conditions used in Cases 1 and 2, we noticed, after running the scan in Mathematica, that the white region becomes smaller as $|k|$ grows; this indicates that the dynamics defined in (7) tends to become unbounded. As numerical evidence we take $k=5,15,-150$, as shown in the plots below (figures 3, 4 and 5).
Figure 3: boundedness of solutions for the dynamics (7), $k=5$
Figure 4: boundedness of solutions for the dynamics (7), $k=15$
Figure 5: boundedness of solutions for the dynamics (7), $k=-150$
For the remaining case, $0<\alpha\leq 1$, iterating the dynamics (2) shows many more initial values that stay bounded for every $\theta$; we took $\alpha=\frac{1}{2},\frac{1}{3}$ and $\alpha=1$ as comparative examples, as shown in the figures below:
Figure 6: bounded solutions for $\alpha=1$ (white disc around the origin)
Figure 7: bounded solutions for $\alpha=\frac{1}{2},\frac{1}{3},\cdots$ (the disc becomes entirely white, indicating that all solutions of our dynamics are bounded for arbitrary initial values)
## 3 Stability and analysis
Dynamical modeling is the study of change, and changes take place everywhere in life. As a result, dynamical systems have a wide range of application areas in applied science and engineering. With these systems, real-life situations can be turned into the language of mathematics: if one can create a good model for a real-life situation, one can predict the future states of the system simply by iterating the model.
Stability, one of the most important concepts in discrete dynamical theory, tells much about the behavior of a dynamical system. In this section a symbolic Mathematica package for the analysis and control of chaos in discrete two-dimensional nonlinear dynamical systems is used. It provides routines that find the stability types of the fixed points, also covering one-dimensional nonlinear dynamical systems; applications are taken from chemical kinetics and population dynamics (the logistic model). Since our dynamics is a two-dimensional discrete dynamical system, it is enough to run the code below to obtain the stability and oscillation type of its fixed points.
Example 1. Let us apply that code to our dynamics: let $a=2$, $b=9$.
twoDimStab[2 x^2 + y + 1, 9 x^2 - 1]
{Null, {1/11, -(112/121)} -> oscillatory source}
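The package defining twoDimStab is not reproduced in this text. As a rough, minimal sketch of what such a routine computes (our own assumption, limited to the sink/source/saddle cases), one can classify the fixed points of the map $(x,y)\mapsto(f,g)$ through the eigenvalues of its Jacobian:
(* classify the fixed points of the two-dimensional map (x, y) -> (f, g) *)
twoDimStab[f_, g_] := Module[{jac, fps},
  jac = D[{f, g}, {{x, y}}];
  fps = Solve[{f == x, g == y}, {x, y}, Reals];
  Table[
   With[{ev = Eigenvalues[N[jac /. p]]},
    ({x, y} /. p) -> Which[
      Max[Abs[ev]] < 1, "sink",
      Min[Abs[ev]] > 1,
       If[FreeQ[ev, Complex] && Min[ev] < 0, "oscillatory source", "source"],
      True, "saddle"]],
   {p, fps}]]
On the example above this sketch also reports the second fixed point $(0,-1)$ of the system (a superstable sink), which the quoted output omits.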
One can work out more examples simply by applying the above code with different values of $a$ and $b$.
The periodicity of our dynamics for $\alpha=2$ is already discussed in [R. Abo-Zeid (2017)]: the dynamics has a period-two solution.
## 4 PHASE PLANE DIAGRAMS
Continuous systems are often approximated as discrete processes, meaning that we look only at the solutions for positive integer inputs. Difference equations are recurrence relations, and first-order difference equations depend only on the previous value. Using difference equations, we can model discrete dynamical systems. By analyzing phase plane diagrams of difference equations, we can determine whether we are modeling decay or growth, as well as convergence, stability, and equilibrium points. In this section we give a phase plane diagram of our dynamics for some values of $a$ and $b$ with $\alpha=2$. The dynamics defined in (2) can be written as a system of two difference equations:
$\begin{cases}x(t+1)=ax_{t}^{2}+y(t)+1\\ y(t+1)=bx_{t}^{2}-1\end{cases}$ (8)
(indeed, substituting $y(t)=bx_{t-1}^{2}-1$ into the first equation gives back $x(t+1)=ax_{t}^{2}+bx_{t-1}^{2}$).
The phase portrait of the dynamics (8) is shown in the figure below, together with a Mathematica sketch; one can change the values of $a,b$ to analyse the slope field.
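The original notebook code is not reproduced here; the following minimal orbit plot is our reconstruction of the kind of code meant (the initial conditions, chosen near the fixed point $(0,-1)$ of (8) for $a=0.5$, $b=2$, are our own choice):
(* orbits of the system (8) *)
orbit[a_, b_, {x0_, y0_}, n_] :=
 NestList[{a #[[1]]^2 + #[[2]] + 1, b #[[1]]^2 - 1} &, {x0, y0}, n]
ListPlot[Table[orbit[0.5, 2, {x0, -0.9}, 40], {x0, -0.2, 0.2, 0.1}],
 AspectRatio -> 1]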
Figure 8: phase diagram of the nonlinear dynamics (8), for $a=0.5$, $b=2$
## 5 Time series analysis and modeling (application to medicine)
Nowadays mathematics is being successfully applied to a number of important
fields in medicine including biofluids, cardiovascular diseases, clinical
schedules and tests, data analysis, drug design and discovery, epidemiology,
genetics, image processing, immunology, instrumentation, microbiology [José M.
Amigó , Michael Small (2017)], neuroscience, oncology, virology and more. The
list of tools includes virtually the whole of applied mathematics. To cite the
most familiar ones: difference equations and discrete-time dynamical systems,
information and coding theory, graph and network theory, integral transforms,
numerical and computational mathematics, ordinary differential equations and
continuous-time dynamical systems, partial differential equations, stochastic
and time-delay differential equations, statistics, probability and time-series
analysis. All this research has contributed, and increasingly continues to contribute, both to a better understanding of medical phenomena and to finding practical ways of action.
Time series modeling for predictive purposes has been an active research area of machine learning for many years [Fatoumata Dama and Christine Sinoquet (2019)]; for a long time, however, no sufficiently comprehensive and substantive survey was available, a need the cited survey strives to meet with a unified presentation. Time series data are amongst the most ubiquitous data types that capture information and record activity in most areas. In any domain involving temporal measurements via sensors, censuses, or transaction records, the capture of a sequence of observations indexed by time stamps first allows one to gain insights into the past evolution of some measurable quantity. Beyond this goal, the pervasiveness of time series has generated an increasing demand for performing various tasks on time series data (visualization, discovery of recurrent patterns, correlation discovery, classification, clustering, outlier detection, segmentation, forecasting, data simulation).
Time series classification (TSC) is one of data mining's persistent challenges. Applications of TSC abound in fields including agriculture, medicine, and engine prognostics [Abdoli, A., Murillo, A. C., Yeh, C. C. M., Gerry, A. C., and Keogh, E. J. (2018, December)], [Yu, W., Kim, I. Y., Mechefske, C. (2021)], a common goal being to detect instances of sub-optimal behavior or decreased health (biological or mechanical), to give just one real-world example. Dozens of new TSC algorithms were introduced in the last four years alone [Bagnall, A., Lines, J., Bostrom, A., Large, J., Keogh, E.(2017)]. This trend has been intensified by the increasing availability of real-world datasets. In fact, the classification of any inherently ordered data (temporal or otherwise) can be treated as a TSC problem [Gamboa, J. C. B. (2017)], making for a vast breadth of real-world applications. Deep learning methods have shown suitability for time series classification in the health and medical domain, with promising results for electrocardiogram data classification. Successful identification of myocardial infarction holds life-saving potential, and any meaningful improvement upon deep learning models in this area is of great interest. In this section we show that the dynamics discussed above, namely (8), presents new classes of heartbeat: a new dynamical model for generating synthetic electrocardiogram signals. The electrocardiogram (ECG) is a time-varying signal reflecting the ionic current flow which causes the cardiac fibers to contract and subsequently relax. The surface ECG is obtained by recording the potential difference between two electrodes placed on the surface of the skin. A single normal cycle of the ECG represents the successive atrial depolarization/repolarization and ventricular depolarization/repolarization which occur with every heartbeat. These can be approximately associated with the peaks and troughs of the ECG waveform labeled $P,Q,R,S$, and $T$ as shown in figure 9.
Figure 9: Morphology of a mean PQRST-complex of an ECG recorded from a normal
human
Extracting useful clinical information from the real (noisy) ECG requires
reliable signal processing techniques [AL Goldberger,E Goldberger (1977)].
These include R-peak detection [Jiapu Pan; Willis J. Tompkins (1985)], [C. Li,
C. Zheng, and C. Tai(1995)], QT-interval detection and the derivation of heart
rate and respiration rate from the ECG [P.E. McSharry , G.D. Clifford, L.
Tarassenko, L.A. Smith (2003)]. The RR-interval is the time between successive R-peaks; the inverse of this time interval gives the instantaneous heart rate. A series of RR-intervals is known as an RR tachogram, and the variability of these RR-intervals reveals important information about the physiological state of the ECG signal. The ECG may be divided into the following sections.
* •
P-wave: A small low-voltage deflection away from the baseline caused by the depolarization of the atria prior to atrial contraction as the activation (depolarization) wavefront propagates from the SA node through the atria.
* •
PQ-interval: The time between the beginning of atrial depolarization and the
beginning of ventricular depolarization.
* •
QRS-complex: The largest-amplitude portion of the ECG, caused by currents
generated when the ventricles depolarize prior to their contraction. Although
atrial repolarization occurs before ventricular depolarization, the latter
waveform (i.e. the QRS-complex) is of much greater amplitude and atrial
repolarization is therefore not seen on the ECG.
* •
QT-interval: The time between the onset of ventricular depolarization and the
end of ventricular repolarization. Clinical studies have demonstrated that the QT-interval increases linearly as the RR-interval increases. A prolonged QT-interval may be associated with delayed ventricular repolarization, which may cause ventricular tachyarrhythmias leading to sudden cardiac death.
* •
ST-interval: The time between the end of S-wave and the beginning of T-wave.
Significantly elevated or depressed amplitudes away from the baseline are
often associated with cardiac illness.
* •
T-wave: Ventricular repolarization, whereby the cardiac muscle is prepared for
the next cycle of the ECG.
Analysis of variations in the instantaneous heart rate time series using the
beat-to-beat RR-intervals (the RR tachogram) is known as HRV analysis [M Malik
, AJ camm(1995)], [Task force of the european society of cardiology(1996)].
HRV analysis has been shown to provide an assessment of cardiovascular disease
[M H Crawford,S Bernstein and P Deedwania(1999)]. A dynamical model based on three coupled ordinary differential equations, capable of generating realistic synthetic electrocardiogram (ECG) signals, was introduced in [P.E. McSharry , G.D. Clifford, L. Tarassenko, L.A. Smith (2003)]. The dynamical equations of motion are:
$\begin{cases}\dot{x}=\alpha x-\omega y\\ \dot{y}=\alpha y+\omega x\\ \dot{z}=-\sum_{i\in\{P,Q,R,S,T\}}a_{i}\,\Delta\theta_{i}\exp\big(-\frac{\Delta\theta_{i}^{2}}{2b_{i}^{2}}\big)-(z-z_{0})\end{cases}$ (9)
where $\alpha=1-\sqrt{x^{2}+y^{2}}$, $\Delta\theta_{i}=(\theta-\theta_{i})\bmod 2\pi$, and $\theta=\arctan 2(y,x)$ (the four-quadrant arctangent of $x$ and $y$, with $-\pi\leq\arctan 2(y,x)\leq\pi$); $\omega$ is the angular velocity of the trajectory as it moves around the limit cycle. Baseline wander is introduced by coupling the baseline value $z_{0}$ in (9) to the respiratory frequency $f_{2}$ via $z_{0}(t)=A\sin(2\pi f_{2}t)$, where $A=0.15\,\mathrm{mV}$.
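A runnable Mathematica sketch of (9) follows; the PQRST parameters $\theta_{i},a_{i},b_{i}$ are the values reported in [P.E. McSharry , G.D. Clifford, L. Tarassenko, L.A. Smith (2003)], while the heart rate ($\omega=2\pi$, i.e. 60 bpm) and the respiratory frequency $f_{2}=0.25$ Hz are illustrative assumptions:
(* PQRST event angles, amplitudes and widths *)
thetai = {-Pi/3, -Pi/12, 0, Pi/12, Pi/2};
ai = {1.2, -5.0, 30.0, -7.5, 0.75};
bi = {0.25, 0.1, 0.1, 0.1, 0.4};
omega = 2 Pi; A = 0.15; f2 = 0.25;
alpha = 1 - Sqrt[x[t]^2 + y[t]^2];
dtheta = Mod[ArcTan[x[t], y[t]] - thetai, 2 Pi, -Pi];  (* wrap to [-Pi, Pi) *)
z0 = A Sin[2 Pi f2 t];
sol = NDSolve[{x'[t] == alpha x[t] - omega y[t],
    y'[t] == alpha y[t] + omega x[t],
    z'[t] == -Total[ai dtheta Exp[-dtheta^2/(2 bi^2)]] - (z[t] - z0),
    x[0] == 1, y[0] == 0, z[0] == 0}, {x, y, z}, {t, 0, 10}];
Plot[Evaluate[z[t] /. sol], {t, 0, 10}]  (* synthetic ECG trace *)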
Figure 10: ECG generated by dynamical model: (a) 10 s and (b) 50 s Figure 11:
Comparison between (a) synthetic ECG with additive normally distributed
measurement errors and (b) real ECG signal from a normal human.
## 6 Introduction to Electrocardiography
An electrocardiogram is a recording of the electrical activity of the heart. The heart can be viewed as a three-dimensional vector, so its electrical activity can, in theory, be recorded by three orthogonal leads. In practice, however, a standard clinical EKG is recorded with 12 leads: 6 limb leads and 6 precordial leads. A normal EKG reflects the electrical activity of each of the four heart chambers: left and right ventricles and left and right atria (figure 12).
Figure 12: A normal EKG reflects the electrical activity of each of the four
heart chambers: left and right ventricles and left and right atria
The P wave marks the beginning of atrial depolarization. The onset of the Q
wave is produced by the stimulus from the Purkinje system. The depolarization
of the ventricle produces the R wave. The S wave is initiated when the
excitation reaches the base of the ventricle. The T wave is produced by
ventricular repolarization (Tompkins 1993; Goldman 1992). The synthetic ECG
(Figure 10) illustrates the modulation of the QRS-complex due to RSA and Mayer
waves. Observational uncertainty is incorporated by adding normally
distributed measurement errors with mean zero and standard deviation $0.025mV$
[Figure 11], yielding a similar signal to a segment of real ECG from a normal
human [Figure 11]. Electrocardiogram (ECG) detection is currently the most effective and direct way to examine the electrical activity of the heart [Z. Tang, G. Zhao, and T. Ouyang(2021)]. At present, the diagnosis of cardiac diseases is mainly carried out by medical doctors and clinicians through manual detection and ECG analysis. ECG is a diagnostic technology that records the electrocardiographic activity of the heart over a given time unit through the chest of biological subjects. It collects and records the signals from electrodes connected to the skin at specific locations and preserves the relevant content in a standard form [D. A. Winter, P. M. Rautaharju, and H. K. Wolf(1997)].
The pre-ejection-period (PEP) is the time span between the depolarization of
the left ventricle (R onset) and opening of the aortic valve (B point). The R
onset is the beginning of the Q-wave signal; it indicates the beginning of the
depolarization and can be picked up from the ECG signal. As signature for the
beginning of the Q wave, we take the minimum of the ECG’s second derivative.
Its peak indicates the maximum curvature at the transition of the ECG signal
into the Q wave. However, as the Q wave is relatively small, other signatures
in the ECG signal can be misinterpreted as the R onset in an automated
evaluation of the data. In general, first and higher-order derivatives of
noisy signals suffer from containing spurious peaks. We therefore restrict the
possible occurrences of R onset to a time window after the, easily
identifiable, R peak (peak of the QRS complex). Within that window, the R
onset is typically seen as a clear negative peak of the ECG signal’s second
derivative, which can be located reliably and with high precision and thus
allows a reliable identification of the Q-wave onset. Heart rate (HR) is then calculated from the time difference of subsequent R points [M. Thomas, M. K. Das, and S. Ari(2015)].
The time point for the opening of the aortic valve (B point) is derived from
the impedance cardiogram (ICG). The impedance Z, and thus the ICG, is
sensitive to a variation of blood volume in the thorax. The first derivative,
$\frac{dZ}{dt}$, corresponds to blood flow. The second derivative
$\frac{d^{2}Z}{dt^{2}}$, in turn, corresponds to a change of the blood flow
and is thus indicative for the opening of the heart valves. The B point is the
onset of the aortic valve’s opening, indicated by a negative peak in the third
derivative, $\frac{d^{3}Z}{dt^{3}}$. While, compared to ECG, the ICG signal is
smooth and devoid of characteristic spikes, its first, second, and third
derivative show distinct features. As selection criterion for picking the
correct peak of the third derivative, we use the, easily identifiable, peak of
the first derivative, $\frac{dZ}{dt}$. The B point is obtained as the minimum
of $\frac{d^{3}Z}{dt^{3}}$ that occurs just before the maximum in
$\frac{dZ}{dt}$. This strategy allows for an automated evaluation of the PEP
interval for the large data sets, with few outliers and the required
precision. To calculate the derivatives of the measured signals, we use the
Savitzky-Golay filter [Jianwen Luo, Kui Ying, Lijing Bai (2005)] (a minimal sketch of the kernel construction is given after figure 13). This method allows data smoothing while keeping signatures like peaks intact, together with the simultaneous determination of derivatives. Similar to a moving average, a
moving section of the data is selected. However, instead of a simple
averaging, the algorithm fits a polynomial of given degree to the selected
sequence. Then, one point of the fitted polynomial (usually the central point)
is taken as value for the smoothed curve. Higher derivatives are taken from
the corresponding derivatives of the fitted polynomial at the respective
point. The Savitzky-Golay filter is implemented numerically by a list
convolution with a kernel. That kernel is calculated in advance for the number
of points for the moving fitting, the order of the polynomial, and the order
of the derivative. We use a kernel length of 100 points, corresponding to a
time interval of 50 ms, and a 3rd-order polynomial for all kernels and for the
ICG and ECG signals. The third derivative of the ICG signal is calculated from
the first derivative of the ICG signal, which, together with Z, is provided by
the Biopac MP36 system (i.e., the system we used to measure ICG/ECG). We
ensure that no time lag gets introduced between the ICG and ECG signals and
their derivatives by the Savitzky-Golay filter. Thus, PEP and LVET data get
extracted from the ICG and ECG measurements in a semi-automated way and with a
by-heartbeat resolution. The Mathematica Notebook output is stored in a text
file, with every row containing a timestamp and the corresponding cardiac PEP, LVET, and HR for that heartbeat. The graphical display of $Z$, $\frac{dZ}{dt}$ and the EKG is shown below.
Figure 13: Graphical display of $Z$, $\frac{dZ}{dt}$, EKG; Istart = 1000, Iend = 16000
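A minimal Mathematica sketch of the Savitzky-Golay kernels described above (the least-squares kernel construction is standard; the window of about 100 points and the 3rd-order polynomial follow the text, while the test signal is an illustrative assumption):
(* kernel for the k-th derivative: fit a degree-d polynomial on 2r+1 samples
   and return the central-point derivative as a correlation kernel *)
sgKernel[r_, d_, k_] := Module[{A},
  A = N[Table[i^j, {i, -r, r}, {j, 0, d}]];
  k! PseudoInverse[A][[k + 1]]]
data = Table[Sin[0.02 t] + RandomReal[{-0.05, 0.05}], {t, 1, 2000}];
smooth = ListCorrelate[sgKernel[50, 3, 0], data];  (* smoothed signal *)
deriv2 = ListCorrelate[sgKernel[50, 3, 2], data];  (* second derivative *)
ListCorrelate introduces no time reversal, so the smoothed signal and its derivatives stay aligned, consistent with the no-lag requirement mentioned above.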
## 7 Medical interpretation of our discrete dynamics
To analyze an ECG signal, the most necessary step is to extract its QRS wave group. The QRS complex reflects the changes in depolarization potential and timing of the left and right ventricles. Concerning robustness and stability, the analysis of ECG signals using delay differential equations (DDEs) is computationally fast (one can compare with the system of ODEs in (9)) and could be the basis for a real-time diagnostic system. DDEs reveal nonlinear as well as spectral properties of the data, as shown in [Claudia Lainscsek and Terrence J. Sejnowski(2013)], where Claudia Lainscsek and Terrence J. Sejnowski analyzed ECG signals using delay differential equations, leading to a good classification of electrocardiograms; such a classification might not work as well using a discrete map. Some authors discussed electrocardiogram signal classification in the diagnosis of heart disease based on RBF neural networks [Yan Fang , Jianshe Shi, Yifeng Huang, Taisheng Zeng,Yuguang Ye , Lianta Su, Daxin Zhu,and Jianlong Huang(2022)], extracting the QRS wave using a discrete map (they used difference equations to analyze the ECG). Recall that in this paper we are interested in the behavior of the dynamics:
$x_{n+1}=ax^{2}_{n}-bx^{2}_{n-1},\quad n=0,1,\ldots$
with $a,b$ two real parameters. It is known that the ECG signal has a very marked periodicity; taking $T$ as the sampling period we may rewrite our dynamics in terms of $T$ as:
$x((n+1)T)=ax^{2}(nT)-bx^{2}((n-1)T),\quad n=0,1,\ldots$ (10)
The high-frequency characteristics can be enhanced by a nonlinear squaring function [Yan Fang , Jianshe Shi, Yifeng Huang, Taisheng Zeng,Yuguang Ye , Lianta Su, Daxin Zhu,and Jianlong Huang(2022)] (see page 3, equation 4 therein), expressed as:
$y(nT)=x^{2}(nT)$ (11)
This strongly suggests that our discrete dynamics defined in (10) can be interpreted, from the medical point of view, as enhanced high frequencies in the short-time behavior of the heartbeat. We may regard the discrete-time dynamics (10) as the discretization of a boundary value problem (a delay differential equation) via a standard numerical method such as the one-dimensional finite difference method, which allows a short analysis of the ECG signal produced by (10). Assume that the corresponding nonlinear boundary value problem to be solved satisfies $x_{1}=0$ and $x_{n+1}=1$, and apply the finite difference technique [R. P. AGARWAL and Y. M. CHOW(1985)]. The first step is to partition the domain $[0,1]$ into sub-domains, or intervals, of length $h$; so, if the number of intervals is $n$, then $nh=1$. We denote by $x_{i}$ the interval end points, or nodes; in general, $x_{i}=(i-1)h$, $i=1,2,\cdots,n+1$, and we denote the concentration at the $i$th node by $C_{i}$. For short enough times (say between $t_{i}$ and $t_{i+1}$) the dynamics (10) behaves like a simple linear differential equation, and for $h\to 0$ the condition $\dot{x}\leq 0$ implies decreasing frequencies approaching a constant signal $x(t)$; in that case the dynamics (10) can be interpreted as a heart attack or heart failure, and in general it indicates cardiac disease (see the figure in Example 1). Enhanced high frequencies appear whenever $\dot{x}>0$, i.e., the derivative of the analyzed signal is always positive (one can refer to the figures in Examples 2, 3 and 4). It is hard to analyze the ECG signal using the coupled system defined in (12).
We have plotted many figures for several values of $a$ and $b$. For the medical interpretation we regard $a$ as a life factor (electrocardiogram, EKG), whose identification represents an attempt to improve patient survival, while $b$ may represent blood loss, which causes cardiac arrest. We also introduce a new parameter $\sigma$ as a toxin factor, for a better modelization of the phenomena under discussion. Now let us exhibit the heartbeat classes. The dynamics (8) for $\alpha=2$ can be written as:
$\begin{cases}x(t+1)=ax_{t}^{2}+\sigma y(t)+1\\ y(t+1)=bx_{t}^{2}-1\end{cases}$ (12)
We have noticed that for heartbeat-class plots we should have $a\leq 0.15$; one may then play with $b$ and with $\sigma<0$ (it should be negative), starting from the initial values $x_{0}=0.07$, $y_{0}=0.08$. Increasing $\sigma$ means attempting to reduce the toxin factor, and increasing $b$ means raising the EKG factor, hence a good attempt to improve patient survival; under the reverse assumption we would record cardiac arrest.
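A minimal Mathematica sketch for generating these heartbeat-class plots from (12) (the function name and plot settings are ours; the parameter values follow Example 1 below):
(* iterate (12) and plot the x-component as a synthetic heartbeat trace *)
heartbeat[a_, b_, s_, {x0_, y0_}, n_] :=
 NestList[{a #[[1]]^2 + s #[[2]] + 1, b #[[1]]^2 - 1} &, {x0, y0}, n]
ListLinePlot[heartbeat[0.15, 0.45, -0.45, {0.07, 0.08}, 300][[All, 1]]]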
Example 1. We start with the first heartbeat plot, $a=0.15,\sigma=-0.45,b=0.45$.
Figure 14: heartbeat class for $a=0.15,\sigma=-0.45,b=0.45,x_{0}=0.07,y_{0}=0.08$
Example 2. Let $a=0.15,\sigma=-0.6,b=0.5$.
Figure 15: heartbeat class for $a=0.15,\sigma=-0.6,b=0.5,x_{0}=0.07,y_{0}=0.08$
Example 3. Let $a=0.15,\sigma=-0.65,b=0.58$.
Figure 16: heartbeat class for $a=0.15,\sigma=-0.65,b=0.58,x_{0}=0.07,y_{0}=0.08$
Example 4. Let $a=0.15,\sigma=-0.75,b=0.6$.
Figure 17: heartbeat class for $a=0.15,\sigma=-0.75,b=0.6,x_{0}=0.07,y_{0}=0.08$
## 8 Conclusion:
This study of difference equations with public health applications develops the methodology for the solution of the general $k$th-order linear difference equation using the generating function approach and computer tools like Mathematica. It includes an examination of the dynamics of disease spread and containment in populations using illness-death models. Cardiovascular disease is one of the major hazards to human health today. ECG stands for electrocardiogram, an important tool for the clinical diagnosis of cardiovascular disease. The ECG refers to the small voltages (about 1 mV) found on the skin as a result of the electrical activity of the heart. These electrical actions trigger various electrical and muscular activities in the heart, and the health and function of the heart can be assessed from the shape of the ECG waveform; typical heart problems are leaking valves and blocked coronary arteries. In this paper we have discussed a new discrete dynamics which leads to a new illness-death model (an enhanced-high-frequencies model) using time series diagrams and ECG analysis, with data defined by the iterated discrete dynamics, such as the heartbeat classes; in particular it models attempts to improve patient survival, so we may call it a model for prolonging life.
## 9 Data Availability
The data that support the findings of this study can be obtained from the
corresponding author upon reasonable request.
## 10 Conflicts of Interest
The authors declare that they have no conflicts of interest.
## 11 Authors’ Contributions
All authors contributed to the conception and design of this study. Material preparation, data collection, analysis and medical interpretation were performed by Zeraoulia Rafik. The first draft of the manuscript was written by Zeraoulia Rafik, and all authors commented on previous versions of the manuscript. All authors read and approved the revised manuscript.
## References
* [Daniel J. Duffy(2006)] Daniel J. Duffy, Finite Difference Methods in Financial Engineering: A Partial Differential Equation Approach, Wiley, 2006. ISBN 9780470858820.
* [Y. Ordokhan1 ,S. Davaei far(2017)] Y. Ordokhani, S. Davaei far, Approximate Solutions of Differential Equations by Using the Bernstein Polynomials. DOI: 10.5402/2011/787694.
* [Josef Diblik ,Miroslava Ruzickova , Barbora Vaclavikova (2008)] Josef Diblik, Miroslava Ruzickova, Barbora Vaclavikova, Bounded Solutions of Dynamic Equations on Time Scales, International Journal of Difference Equations (IJDE), ISSN 0973-6069, Volume 3, Number 1 (2008), pp. 61-69.
* [Charlie y Routh(1992)] R. P. Agarwal, Difference Equations and Inequalities, first edition, Marcel Dekker, 1992.
* [Camouzis and G.Ladas(2008)] E. Camouzis and G. Ladas, Dynamics of Third-Order Rational Difference Equations; With Open Problems and Conjectures, Chapman and Hall/CRC, Boca Raton, 2008.
* [E.A.Grove and G. Ladas(2005)] E. A. Grove and G. Ladas, Periodicities in Nonlinear Difference Equations, Chapman and Hall/CRC, 2005.
* [R. Abo-Zeid (2017)] R. Abo-Zeid, On the solutions of a second order difference equation, Mathematica Moravica, Vol. 21, No. 2 (2017), pp. 61-73.
* [S.Elaydi(2005)] S. Elaydi, An Introduction to Difference Equations, third edition, Springer, New York, 2005.
* [Fatoumata Dama and Christine Sinoquet (2019)] Fatoumata Dama and Christine Sinoquet, Time Series Analysis and Modeling to Forecast: a Survey. LS2N / UMR CNRS 6004, Nantes University, France, 2019.
* [Abdoli, A., Murillo, A. C., Yeh, C. C. M., Gerry, A. C., and Keogh, E. J. (2018, December)] Abdoli, A., Murillo, A. C., Yeh, C. C. M., Gerry, A. C., Keogh, E. J., Time series classification to improve poultry welfare. In 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA), pp. 635-642, IEEE, 2018.
* [Yu, W., Kim, I. Y., Mechefske, C. (2021)] Yu, W., Kim, I. Y., Mechefske, C., Analysis of different RNN autoencoder variants for time series classification and machine prognostics. Mechanical Systems and Signal Processing, 149, 107322, 2021.
* [Bagnall, A., Lines, J., Bostrom, A., Large, J., Keogh, E.(2017)] Bagnall, A., Lines, J., Bostrom, A., Large, J., Keogh, E., The great time series classification bake off: a review and experimental evaluation of recent algorithmic advances. Data Mining and Knowledge Discovery, 31(3), 606-660, 2017.
* [Gamboa, J. C. B. (2017)] Gamboa, J. C. B., Deep learning for time series analysis, 2017.
* [A. Q. Khan,S. M. Qureshi (2020)] A. Q. Khan, S. M. Qureshi, Dynamical properties of some rational systems of difference equations. Mathematical Methods in the Applied Sciences, 2020.
* [José M. Amigó , Michael Small (2017)] José M. Amigó, Michael Small, Mathematical methods in medicine: neuroscience, cardiology and pathology, 2017. National Library of Medicine.
* [AL Goldberger,E Goldberger (1977)] A. L. Goldberger, E. Goldberger, Clinical Electrocardiography, St. Louis, MO, 1977.
* [Jiapu Pan; Willis J. Tompkins (1985)] Jiapu Pan, Willis J. Tompkins, A Real-Time QRS Detection Algorithm, 1985.
* [P.E. McSharry , G.D. Clifford, L. Tarassenko, L.A. Smith (2003)] P. E. McSharry, G. D. Clifford, L. Tarassenko, L. A. Smith, A dynamical model for generating synthetic electrocardiogram signals. IEEE Transactions on Biomedical Engineering, Volume 50, Issue 3, 2003.
* [M Malik , AJ camm(1995)] M. Malik, A. J. Camm, Heart Rate Variability, Armonk, NY: Futura, 1995.
* [Task force of the european society of cardiology(1996)] Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology, Heart rate variability: standards of measurement, physiological interpretation and clinical use, Sophia Antipolis, France, 1996.
* [M H Crawford,S Bernstein and P Deedwania(1999)] M. H. Crawford, S. Bernstein, P. Deedwania, ACC/AHA guidelines for ambulatory electrocardiography. Circulation, vol. 100, pp. 886-893, 1999.
* [Jianwen Luo, Kui Ying, Lijing Bai (2005)] Jianwen Luo, Kui Ying, Lijing Bai, Savitzky-Golay smoothing and differentiation filter for even number data, 2005.
* [C. Li, C. Zheng, and C. Tai(1995)] C. Li, C. Zheng, C. Tai, Detection of ECG characteristic points using wavelet transforms. IEEE Transactions on Biomedical Engineering, vol. 42, no. 1, pp. 21-28, 1995.
* [Z. Tang, G. Zhao, and T. Ouyang(2021)] Z. Tang, G. Zhao, T. Ouyang, Two-phase deep learning model for short-term wind direction forecasting. Renewable Energy, vol. 173, pp. 1005-1016, 2021.
* [D. A. Winter, P. M. Rautaharju, and H. K. Wolf(1997)] D. A. Winter, P. M. Rautaharju, H. K. Wolf, Measurement and characteristics of over-all noise content in exercise electrocardiograms. American Heart Journal, vol. 74, no. 3, pp. 324-331, 1967.
* [M. Thomas, M. K. Das, and S. Ari(2015)] M. Thomas, M. K. Das, S. Ari, Automatic ECG arrhythmia classification using dual tree complex wavelet based features. AEU - International Journal of Electronics and Communications, vol. 69, no. 4, pp. 715-721, 2015.
* [Yan Fang , Jianshe Shi, Yifeng Huang, Taisheng Zeng,Yuguang Ye , Lianta Su, Daxin Zhu,and Jianlong Huang(2022)] Yan Fang, Jianshe Shi, Yifeng Huang, Taisheng Zeng, Yuguang Ye, Lianta Su, Daxin Zhu, Jianlong Huang, Electrocardiogram Signal Classification in the Diagnosis of Heart Disease Based on RBF Neural Network. Computational and Mathematical Methods in Medicine, Volume 2022, Article ID 9251225, 9 pages, Hindawi, 2022.
* [Claudia Lainscsek and Terrence J. Sejnowski(2013)] Claudia Lainscsek and Terrence J. Sejnowski, Electrocardiogram classification using delay differential equations. DOI: 10.1063/1.4811544, 2013.
* [R. P. AGARWAL and Y. M. CHOW(1985)] R. P. Agarwal and Y. M. Chow, Finite-difference methods for boundary-value problems of differential equations with deviating arguments, 1985.
|
# Augmenting momentum resolution with well tuned probability distributions
Gregorio Landi$^{a}$ (corresponding author), Giovanni E. Landi$^{b}$
$^{a}$ Dipartimento di Fisica e Astronomia, Universita' di Firenze, Largo E. Fermi 2, 50125 Firenze, Italy, and INFN, Sezione di Firenze, Firenze, Italy
E-mail:
$^{b}$ E.T.T. S.a.g.l.<EMAIL_ADDRESS>Via Balestra 33, 6900 Lugano, Switzerland.
###### Abstract
The realistic probability distributions of a previous article are applied to
the reconstruction of tracks in constant magnetic field. The complete forms
and their schematic approximations produce excellent momentum estimations,
drastically better than standard fits. A simplified derivation of one of our
probability distributions is illustrated. The momentum reconstructions are
compared with standard fits (least squares) with two different position
algorithms: the $\eta_{2}$-algorithm and the two-strip center of gravity. The
quality of our results is expressed as the increase in magnetic field and signal-to-noise ratio required for the standard fit reconstructions to overlap those of our best distributions. The data and the simulations are tuned on the tracker of a running experiment and its double-sided microstrip detectors; here each detector side is simulated to measure the magnetic bending.
our best distributions, the magnetic field must be increased by a factor 1.5
for the least squares based on the $\eta_{2}$-algorithm and 1.8 for the two-
strip center of gravity for the low noise side, and 1.8 and 2.0 for the high
noise side. The signal-to-noise ratio must be increased by 1.6 for the low
noise side and 2.2 for the high noise side ($\eta_{2}$-algorithms). The fits,
built on the positioning with the center of gravity, are not modified by a
reduction of the signal-to-noise ratio.
###### keywords:
Particle tracking detectors, Performance of High Energy Physics Detectors, Si
microstrip and pad detectors, Analysis and statistical methods
## 1 Introduction
Some properties of our well tuned probability density functions (PDFs) were
described in ref. [1] and were tested on fits of simulated straight tracks.
The limitation to straight tracks was mainly due to the complexity of the
method we applied for the first time. Hence, working with two parameters, the
debugging and testing of the application can be followed on a surface.
Furthermore, the data, elaborated in our approach, were collected in a CERN
test beam [2] in the absence of a magnetic field. The detectors we used were a few samples of double-sided silicon microstrip detectors [3, 4], like those composing the PAMELA tracker [5]. The results of ref. [1] showed a drastic improvement of the fitted track parameters with respect to the results of the least squares methods. We observed excellent reconstructions even in the presence of very noisy hits, often called outliers, that generally produce worse fits.
This achievement is almost natural for our non-gaussian PDFs. In fact, an outlier hit is an event that is incompatible with the gaussian distribution that is always assumed as the error distribution of the hit positions. Thus, since least squares is a method strictly optimal for a gaussian PDF, it must be modified in a somewhat arbitrary way to handle these pathological hits. On the contrary, our PDFs are essentially different from a gaussian, and those non-standard cases are allowed (with low probability) by the non-gaussian tails of the PDFs.
We have to recall that the perception of a rough handling of the hit properties is well present in the literature, and Kalman filter modifications are often studied to accept extended deviations from a pure gaussian model. These extensions imply the use of linear combinations of different gaussians [6], but in a small number to avoid the intractability of the equations; the unknown parameters are optimized from the improvement of the fits. In a loose sense, our schematic approximations, used to initialize the maximum likelihood search, could be reconnected to those extensions: in fact, to speed up the convergence, we calculate a different gaussian PDF (or, more precisely, an effective variance) for each hit. Our comparisons take as references the least squares methods; the Gauss-Markov theorem states their optimality among the linear methods, thus no other linear method can be better.
The confidence gained in ref. [1] with straight tracks allows us to deal with
more complex tasks, i.e. to reconstruct the tracks in a magnetic field and
determine their momenta. To face this extension with the minimum modifications
of our previous work, we will utilize the simulated data and parameters used
in ref. [1], adapting them to the geometry and average magnetic field of the
PAMELA tracker. In fact, the relatively low magnetic field introduces small
modifications of the parameters obtained in its absence. The average signal
distribution of a minimum ionizing particle (MIP) is slightly distorted by the Lorentz angle, which for the average magnetic field of the PAMELA tracker ($0.43\,T$) is about $0.7^{\circ}$. Around this incidence angle, the average
signal distribution of a MIP is practically identical to that at orthogonal
incidence in the absence of a magnetic field, thus without further notice we
will assume a rotation of the detectors of their Lorentz angle. With these
assumptions, we simulate high momentum tracks for each side of the double
sided detectors. In the PAMELA tracker, the low noise side of the detector
(the junction side) is used for momentum measurements. This side has an
excellent resolution: every second strip is connected to the readout system, and the unconnected strips distribute the incident signal on the nearby ones, optimizing the detector resolution. The other side (the ohmic side) has the
strips oriented perpendicularly to the junction side. Each strip is connected
to the readout system and has, owing to the complexity of its construction, a higher noise and a small, if any, signal spread to nearby strips. For these characteristics, this side responds in a way similar to the types of microstrip arrays used in the large trackers of the CERN LHC [7, 8, 9]. Thus, the simulations on this side, as the bending side, can give a glimpse of our approach for other types of trackers, even for the (small-angle stereo) double-sided detectors of the ALICE [7] experiment.
## 2 A compact derivation of the PDF
As always, the MIP incidence of our geometry imposes the use of the minimum
number of signal strips to reduce the noise, hence only two signal strips will
be used. In ref. [1] we indicated the principal steps required to obtain the PDF for the center of gravity (COG) with two strips; those steps followed the standard method described in probability textbooks. That procedure uses the essential tools of calculus: integrals and derivatives. In our case, many pieces of integrals over geometrical domains must be calculated to build the cumulative probability, whose derivative gives our final PDF; the length and complexity of this development are evident. Here we follow a very different approach (Quantum Mechanics style) that reaches identical results in a few steps. The two-strip COG (COG2 in the following) is calculated
with the origin in the center of the maximum-signal strip. The strip signals
are indicated with: $x_{1}$, $x_{2}$, and $x_{3}$, respectively the signal of
the right strip, central strip (with the maximum signal) and left strip. If
$x_{1}>x_{3}$ the COG2 is $x=x_{1}/(x_{1}+x_{2})$, if $x_{3}>x_{1}$ it is
$x=-x_{3}/(x_{3}+x_{2})$, other minor details will not be discussed here.
Thus:
$\displaystyle P_{x_{g2}}(x)=$
$\displaystyle\int_{-\infty}^{+\infty}\mathrm{d}\,x_{1}\int_{-\infty}^{+\infty}\mathrm{d}\,x_{2}\int_{-\infty}^{+\infty}\mathrm{d}\,x_{3}\,P(x_{1},x_{2},x_{3})$
(1)
$\displaystyle\,\Big{[}\theta(x_{1}-x_{3})\delta(x-\frac{x_{1}}{x_{1}+x_{2}})+\theta(x_{3}-x_{1})\delta(x+\frac{x_{3}}{x_{3}+x_{2}})\Big{]}$
where $P(x_{1},x_{2},x_{3})$ is the probability to have the signals
$x_{1},x_{2},x_{3}$ from the strips $1,2,3$. The signals $x_{i}$ are at their
final elaboration stage and ready to be used for position reconstruction of
the hit. The function $\theta(x_{j})$ is the Heaviside $\theta$-function:
$\theta(x_{j})=1$ for $x_{j}>0$, $\theta(x_{j})=0$ for $x_{j}\leq 0$, and
$\delta(x)$ is the Dirac $\delta$-function. It is immediate to verify the
normalization of $P_{x_{g2}}(x)$ by direct integration. Splitting the sum of
eq. 1 in two independent integrals and transforming the variables $x_{1}=\xi$,
$x_{1}+x_{2}=z_{1}$ and $x_{3}=\beta$, $x_{3}+x_{2}=z_{2}$, the jacobian of
the transformation is one and the integrals in $z_{1}$ and $z_{2}$ can be
performed with the rule:
$\int_{-\infty}^{+\infty}\mathrm{d}\,z\,F(z-\nu)\,\delta(x\mp\frac{\nu}{z})=F(\frac{\pm\nu}{x}-\nu)\,\frac{|\nu|}{x^{2}}\,.$
(2)
Applying eq. 2 to eq. 1 and using the limitations of the two
$\theta$-functions, eq. 1 becomes:
$\displaystyle P_{x_{g2}}(x)=\frac{1}{x^{2}}\Big{[}$
$\displaystyle\int_{-\infty}^{+\infty}\mathrm{d}\,\xi\int_{-\infty}^{\xi}\,\mathrm{d}\beta\,P\big{(}\xi,\,\xi\frac{1-x}{x},\beta\big{)}\,|\xi|\,+$
(3)
$\displaystyle\int_{-\infty}^{+\infty}\mathrm{d}\,\beta\,\int_{-\infty}^{\beta}\,\mathrm{d}\xi\,P\big{(}\xi,\,\beta\frac{-1-x}{x},\beta\big{)}\,|\beta|\Big{]}\,.$
This form underlines very well the similarity with the Cauchy PDF: in the limit $x\rightarrow\infty$ the $x$-dependent parts of the $P$ arguments tend to $-1$, and $P_{x_{g2}}(x)\propto 1/x^{2}$ for large $x$.
### 2.1 The probability $P_{x_{g2}}(x)$ for small $x$
The probability $P(x_{1},x_{2},x_{3})$ can handle a strict correlation among
the arguments; we will relax this strict correlation to a weaker one: the
mean values of the strip signals are correlated, but the fluctuations around
the mean values are independent. Thus the probability $P(x_{1},x_{2},x_{3})$
becomes the product of three functions $\\{P_{i}(x_{i}),i=1,2,3\\}$. Each
$P_{i}(x_{i})$ is assumed to be a gaussian PDF with mean value $a_{i}$ and
standard deviation $\sigma_{i}$. To simplify, the constants $a_{i},i=1,2,3$
are the noiseless signals released by a MIP with impact point $\varepsilon$:
$P_{i}(x_{i})=\exp\big{[}-\frac{(x_{i}-a_{i})^{2}}{2\sigma_{i}^{2}}\big{]}\frac{1}{\sqrt{2\pi}\sigma_{i}}.$
(4)
Even with the gaussian functions, the integrals of eq. 3 have no analytical
expressions and effective approximations must be constructed. We will not
report our final forms, which are very long; instead we illustrate a limiting
case that gives a simple approximation and eliminates a disturbing
singularity in the numerical integrations. It is easy to show that for
$|x|\rightarrow 0$, $x^{-1}P_{2}\big{(}\xi(1-x)/x\big{)}$ and
$x^{-1}P_{2}\big{(}\beta(-1-x)/x\big{)}$ converge to two Dirac
$\delta$-functions. Hence, for small $|x|$, the integrals of eq. 3 can be
expressed as:
$\displaystyle P_{x_{g2}}(x)=$
$\displaystyle\frac{|a_{2}|}{\sqrt{2\pi}}\Big{\\{}\frac{\exp\big{[}-(x-\frac{a_{1}}{a_{1}+a_{2}})^{2}\frac{(a_{1}+a_{2})^{2}}{2\sigma_{1}^{2}(1-x)^{2}}\big{]}\big{[}1-\mathrm{erf}\big{(}(\frac{a_{3}}{a_{2}+a_{3}}-x)\frac{a_{2}+a_{3}}{\sqrt{2}\sigma_{3}(1-x)}\big{)}\big{]}}{2\sigma_{1}(1-x)^{2}}+$
(5)
$\displaystyle\frac{\exp\big{[}-(x+\frac{a_{3}}{a_{3}+a_{2}})^{2}\frac{(a_{3}+a_{2})^{2}}{2\sigma_{3}^{2}(1+x)^{2}}\big{]}\big{[}1-\mathrm{erf}\big{(}(\frac{a_{1}}{a_{2}+a_{1}}+x)\frac{a_{2}+a_{1}}{\sqrt{2}\sigma_{1}(1+x)}\big{)}\big{]}}{2\sigma_{3}(1+x)^{2}}\Big{\\}}\,.$
Equation 5 is correct only for $|x|\rightarrow 0$, but it is useful beyond
this limit and contains many ingredients of the more complex expressions. An
example can be seen in fig. 2 of ref. [1]; the essential elements are the two
gaussian-like bumps centered at the possible noiseless two-strip COG values
$a_{1}/(a_{1}+a_{2})$ and $-a_{3}/(a_{3}+a_{2})$. Their effective standard
deviations are modulated by the ratios $\sigma_{1}/(a_{2}+a_{1})$ and
$\sigma_{3}/(a_{2}+a_{3})$ and by the $1\pm x$ factors. More complete
expressions contain terms very similar to Cauchy PDFs that are
$\propto 1/x^{2}$ for large $x$. The dimensions of the constants $a_{j}$ must
be those of the $\sigma_{j}$; for both of them we directly use ADC counts.
The $x$-variable (the COG2) is a pure number expressed as a fraction of the
strip size or, more precisely, the strip size is the scale of lengths.
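To make eq. 5 concrete, the following minimal sketch evaluates it numerically; the noiseless signals $a_{j}$ and the common noise $\sigma_{j}$ are illustrative ADC values, not taken from the test-beam data.

```python
import numpy as np
from scipy.special import erf

def pdf_cog2_small_x(x, a, sigma):
    """Small-|x| approximation of the two-strip COG PDF, eq. 5."""
    a1, a2, a3 = a
    s1, s2, s3 = sigma
    t1 = np.exp(-(x - a1/(a1 + a2))**2 * (a1 + a2)**2 / (2*s1**2*(1 - x)**2)) \
        * (1 - erf((a3/(a2 + a3) - x) * (a2 + a3) / (np.sqrt(2)*s3*(1 - x)))) \
        / (2*s1*(1 - x)**2)
    t2 = np.exp(-(x + a3/(a3 + a2))**2 * (a3 + a2)**2 / (2*s3**2*(1 + x)**2)) \
        * (1 - erf((a1/(a2 + a1) + x) * (a2 + a1) / (np.sqrt(2)*s1*(1 + x)))) \
        / (2*s3*(1 + x)**2)
    return abs(a2)/np.sqrt(2*np.pi) * (t1 + t2)

# Illustrative noiseless signals (ADC counts) and a common noise of 4 counts.
x = np.linspace(-0.45, 0.45, 901)
p = pdf_cog2_small_x(x, a=(30.0, 100.0, 12.0), sigma=(4.0, 4.0, 4.0))
print("approximate normalization:", np.sum(p) * (x[1] - x[0]))
```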
The form of the COG2 inserted in eq. 1 synthesizes our positioning algorithm
well: each strip signal around the strip with the maximum signal is used for
position reconstruction. The strategy of cluster detection is supposed to be
optimized to suppress false hits, and its results are used only to select the
maximum-signal strip; the signals of the two lateral strips are used in any
case, even for small positive or not too negative values. It must be
underlined that even this tiny amount of information is relevant for
positioning and for maintaining the noise distribution. In fact, the track
reconstruction algorithms reach their optimum with realistic probability
distributions, and any unaccounted distortion induces a loss of resolution.
By construction, $P_{x_{g2}}(x)$ gives the probability of an $x$ value with
constant $\\{a_{j}\\}$. For the track fitting this simple probability is
useless; we need a PDF that, for any $x$-value, gives the probabilities of the
impact points $\\{\varepsilon\\}$. To insert this functional dependence into
$P_{x_{g2}}(x)$, we must know the average variations of the energies
$\\{a_{1},a_{2},a_{3}\\}$ with $\varepsilon$. For this task, a theorem
illustrated in ref. [1], together with the details of refs. [10, 11, 12], is
essential: this theorem allows the extraction of these functional dependencies
from the data. Once extended in $\varepsilon$, the PDF can be rewritten as:
$P_{x_{g2}}(x,E_{t},\varepsilon)=\frac{F(a_{1}(\varepsilon),a_{2}(\varepsilon),a_{3}(\varepsilon),E_{t},\sigma_{1},\sigma_{2},\sigma_{3},x)}{x^{2}}\,,$
(6)
where
$F(a_{1}(\varepsilon),a_{2}(\varepsilon),a_{3}(\varepsilon),E_{t},\sigma_{1},\sigma_{2},\sigma_{3},x)$
is the term in square brackets of eq. 3. The functions
$a_{1}(\varepsilon),a_{2}(\varepsilon),a_{3}(\varepsilon)$ are the average
noiseless fractions of signal collected by each strip, and $E_{t}$ is the
total signal of the three strips: the central one with the maximum signal and
the two lateral ones. We would need the noiseless $E_{t}$, but the measured
one is the only possibility. The extraction of the functions
$\\{a_{i}(\varepsilon)\\}$ from the real data of ref. [1] necessitates further
refinements and numerical recipes. But, once their results are defined and
checked, the procedure can be standardized. In any case, slight variations of
the $\\{a_{i}(\varepsilon)\\}$ around the best one give almost identical track
parameter distributions; thus their selection is very important but not
critical. The functions $\\{a_{i}(\varepsilon)\\}$ have a definition similar
to the "templates" of ref. [13]. The parameters
$\sigma_{1},\sigma_{2},\sigma_{3}$ are the noise (eq. 4) of the three strips
considered. Our PDF can easily accommodate strips with different $\sigma$,
but, in the simulations, we will use a single $\sigma$ for all the strips of
the same detector side, and these parameters will not be reported in the
following expressions. Equation 6 is normalized as a function of $x$ but is
not normalized in $\varepsilon$; since this normalization is irrelevant for
the fit, it will be neglected here. A set of normalized PDFs from eq. 6 is
illustrated in ref. [1] as functions of $\varepsilon$.
### 2.2 Track definition
Given our exploration of a new approach, our track definition must be the
simplest possible. The tracks are circles with a large radius, to simulate
high-momentum MIPs for which the multiple scattering is negligible. The
relation of the track parameters to the momenta is
$p=0.3\,\mathrm{B}\,R$, a form adapted to our needs from ref. [14], where $p$
is in $\mathrm{GeV/c}$, $\mathrm{B}$ in tesla and $R$, the track radius, in
meters.
The tracker model is formed by six parallel equidistant ($89\,mm$) detector
layers, as for the PAMELA tracker, with a constant magnetic field of $0.43\,T$
perpendicular to the track plane ($\xi\,,z$). The center of the tracker has
coordinates $\xi=0\,,z=0$; the $\xi$ axis is parallel to the layer planes and
the $z$ axis is perpendicular. The tracks are circles with center in $\xi=-R$
and $z=0$, and the magnetic field is parallel to the analyzing strips. To
simplify the geometry, the overall small rotation of the Lorentz angle
($0.7^{\circ}$) is neglected for now, but it will be introduced in the
following.
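As a quick check of the scales involved, the sketch below computes the track radius from $p=0.3\,\mathrm{B}\,R$ and the sagitta across the six layers, using the $350\,GeV/c$ momentum adopted later for the floating strip side; it confirms that the sagitta stays well below the $51\,\mu m$ strip width.

```python
# Track radius from p = 0.3*B*R and sagitta over the six layers
# (p = 350 GeV/c and B = 0.43 T as in the simulations of section 3).
p_gev, B_tesla = 350.0, 0.43
R = p_gev / (0.3 * B_tesla)      # track radius in meters (~2700 m)
L = 5 * 0.089                    # lever arm across six layers spaced 89 mm
sagitta = L**2 / (8 * R)         # circular-arc sagitta (m)
print(f"R = {R:.0f} m, sagitta = {sagitta*1e6:.1f} um")  # ~9 um < 51 um
```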
The simulated hits are generated (ref. [1]) with a uniform random distribution
on a strip; they are collected in groups of six to produce the tracks. For the
least squares, the exact impact position $\varepsilon$ of each hit is
subtracted from its reconstructed $\eta_{2}(x_{g2})$ position (as defined in
refs. [1, 11, 15]) and the value of the fiducial track for the corresponding
detector layer is added. In this way each group of six hits defines a track
with our geometry and the error distribution of the $\eta_{2}$-algorithm. The
same is done for the COG2 positioning algorithm. This hit collection simulates
a set of tracks populating a large portion of the tracker system with slightly
non-parallel strips on different layers (as is always the case in real
detectors). All these tracks are possible realizations of our model track. At
our high momenta the track bending is very small and the track sagitta is
smaller than the strip width. Thus, the bunch of tracks has a transversal
section of about one strip width; over this width the Lorentz angle should be
considered, but its effect is clearly negligible. Our preferred position
reconstruction is the $\eta_{2}$ algorithm as in ref. [1], because it gives
better parameter distributions than those obtained with the simpler COG2
positions; however, the results for the COG2 will also be reported in the
following.
In the $\\{{\xi},{z}\\}$-plane, the circular tracks are approximated, as
usual, with parabolas linear in the track parameters:
$\displaystyle\xi=\beta+\gamma{z}-\alpha{z}^{2}=\varphi(z)\,$ (7)
$\displaystyle\xi_{n}=\beta_{n}+\gamma_{n}{z}-\alpha_{n}{z}^{2}=\varphi_{n}(z)\,.$
The first line of eq. 7 is the model track, the second line is the fitted one.
The circular track is the osculating circle of the parabola $\varphi(z)$; at
our high momenta and tracker size the differences are negligible. Our model
track has $\gamma=0,\,\beta=0$, and $1/\alpha$ is proportional to the track
momentum. Due to the noise, the reconstructed track has equation
$\varphi_{n}(z)$ and the fitted parameters
$\\{\alpha_{n},\beta_{n},\gamma_{n}\\}$ are distributed around the model
values $\alpha,\beta,\gamma$. Given their non-gaussian forms, our PDFs must be
used with a non-linear search for the likelihood maxima but, as always, the
search is transformed into a minimization. The momentum and the other
parameters of track $n$ are obtained by minimizing, with respect to
$\alpha_{n}$, $\beta_{n}$ and $\gamma_{n}$, the function
$L(\alpha_{n},\beta_{n},\gamma_{n})$ defined as the negative logarithm of the
likelihood with the PDFs of eq. 6:
$\displaystyle
L(\alpha_{n},\beta_{n},\gamma_{n})=-\sum_{j=6n+1}^{6n+6}\,\ln[P_{x_{g2}}(x(j),E_{t}(j),\psi_{j}(\alpha_{n},\beta_{n},\gamma_{n}))]$
(8)
$\displaystyle\psi_{j}(\alpha_{n},\beta_{n},\gamma_{n})=\varepsilon(j)-\varphi(z_{j})+\varphi_{n}(z_{j})\,.$
The parameters $x(j)$ and $E_{t}(j)$ (introduced in eq. 6) are, respectively,
the COG${}_{2}$ position and the sum of the signals in the three strips for
hit $j$ of track $n$; ${z}_{j}$ is the position of the detector plane $j$ of
the $n$th track. The $\varepsilon$-dependence in the
$\\{a_{i}(\varepsilon)\\}$ is modified to
$\psi_{j}(\alpha_{n},\beta_{n},\gamma_{n})$ to place the impact points on the
track. In real data, $\varepsilon(j)-\varphi(\alpha,\beta,\gamma,z_{j})$ is
absent (and unknown), but the data are supposed to lie on a track. We could
easily use a non-linear form for the function
$\psi_{j}(\alpha_{n},\beta_{n},\gamma_{n})$, but in this case it would be of
little use. In more complex cases, non-linearities of various origins can be
easily implemented.
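A minimal sketch of this minimization for a single track follows; the hit values are synthetic and the log-PDF is a crude gaussian stand-in for the true $P_{x_{g2}}$ of eq. 6 (which is strongly non-gaussian), while Nelder-Mead plays the role of the MATLAB fminsearch routine used below.

```python
import numpy as np
from scipy.optimize import minimize

# Layer z positions (mm) for six equidistant planes, and an illustrative
# model curvature; all hit values below are synthetic, not real data.
z = np.array([-222.5, -133.5, -44.5, 44.5, 133.5, 222.5])
alpha_true = 1.0e-7

rng = np.random.default_rng(0)
eps = rng.uniform(-0.5, 0.5, 6)          # impact points (strip-pitch units)
x_cog2 = eps + rng.normal(0.0, 0.1, 6)   # hypothetical measured positions
E_t = np.full(6, 142.0)                  # three-strip signal sums (ADC)

def log_pdf(x, Et, eps_j):
    # Stand-in for ln P_{xg2}(x, E_t, eps): a plain gaussian. The real PDF
    # of eq. 6 is strongly non-gaussian, which is the point of the MLE.
    return -0.5 * ((x - eps_j) / 0.1) ** 2

def phi(zz, a, b, g):                    # parabolic track of eq. 7
    return b + g * zz - a * zz ** 2

def L(params):                           # negative log-likelihood, eq. 8
    a_n, b_n, g_n = params
    psi = eps - phi(z, alpha_true, 0.0, 0.0) + phi(z, a_n, b_n, g_n)
    return -np.sum(log_pdf(x_cog2, E_t, psi))

# Nelder-Mead plays the role of the MATLAB fminsearch routine.
res = minimize(L, x0=np.zeros(3), method="Nelder-Mead")
print("fitted (alpha, beta, gamma):", res.x)
```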
We reserve the term maximum likelihood evaluation (MLE) for the results of
eq. 8, even if the least squares method can itself be derived from a maximum
likelihood. The minimum search routine for the MLE is initialized as in ref.
[1]. The initial track parameters are given by a weighted least squares with
weights given by an effective variance ($\sigma_{eff}(i)^{2}$) for each hit
$(i)$ and $\eta_{2}(i)$ as the hit position. The $\sigma_{eff}(i)^{2}$ is
obtained from our PDF but, for the form of eq. 6, the variance is an
ill-defined parameter even in $\varepsilon$, and this ill-definiteness must be
eliminated with cuts on the integration ranges that suppress the PDF tails. We
use two sizes of cuts, one for the low-noise side and one for the high-noise
side of our double-sided detector. The cuts are optimized to obtain gaussian
distributions, with standard deviation $\sigma_{eff}(i)$, reproducing our PDF
for a good hit. For a set of hits (good hits), eq. 6 has the form of a narrow
high peak, and a gaussian (centered in $\eta_{2}(i)$) with standard deviation
$\sigma_{eff}(i)$ is built to reproduce well the
$P_{x_{g2}}(x(i),E_{t}(i),\varepsilon)$ on a few of them. These gaussian
approximations look good in linear plots; the logarithmic plots show marked
differences even in these happy cases, the tails being non-gaussian. The cuts,
so defined, are used for the $\sigma_{eff}(i)$ extraction in all the other
hits, even where $P_{x_{g2}}(x(i),E_{t}(i),\varepsilon)$ is poorly reproduced
by a gaussian.
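For completeness, the following sketch shows this weighted least squares initialization in closed form (normal equations) for the parabola of eq. 7; the hit positions and $\sigma_{eff}$ values are illustrative.

```python
import numpy as np

# Weighted least squares initialization of the track parameters for the
# parabola of eq. 7, with weights 1/sigma_eff(i)^2; all values illustrative.
z = np.array([-222.5, -133.5, -44.5, 44.5, 133.5, 222.5])   # layer z (mm)
xi = np.array([0.012, 0.004, -0.003, 0.001, 0.006, 0.015])  # eta2 positions
sigma_eff = np.array([0.05, 0.12, 0.04, 0.04, 0.10, 0.05])

W = np.diag(1.0 / sigma_eff**2)
A = np.column_stack([np.ones_like(z), z, -z**2])  # columns: beta, gamma, alpha
# Normal equations of the weighted least squares: (A^T W A) p = A^T W xi
beta, gamma, alpha = np.linalg.solve(A.T @ W @ A, A.T @ W @ xi)
print(f"beta = {beta:.3g}, gamma = {gamma:.3g}, alpha = {alpha:.3g}")
```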
The $\\{\alpha_{n},\beta_{n},\gamma_{n}\\}$ given by the weighted least
squares are almost always near those given by the minimization of eq. 8, thus
lightening the minimum search performed by the MATLAB [17] $fminsearch$
routine. The closeness of these approximations to the MLE supports the
non-criticality of the extraction of the functions $\\{a_{i}(\varepsilon)\\}$.
In fact, the approximate gaussian distributions are often very different from
the $P_{x_{g2}}(x(i),E_{t}(i),\varepsilon)$, but the realistic pieces of
information about the hits are sufficient to produce near-optimal results.
When the tails of the PDFs are important, the MLE results are better. The
strong variations of the set $\\{\sigma_{eff}\\}$ along the strip are
illustrated in figs. 6 and 11 of ref. [1], and the patterns of those
variations support a complex interplay between the physical properties of the
strips and the reconstruction algorithm. The time-consuming extraction of
$\sigma_{eff}(i)$ (as indicated in ref. [1]) can be reduced by building the
surface $\sigma_{eff}(x,E_{t})$ and calculating $\sigma_{eff}(i)$ by
interpolation.
## 3 Low noise, high resolution, floating strip side
The floating strip side is the best of the two sides of this type of strip
detector. It is precisely this side that measures the track bending in the
PAMELA magnetic spectrometer. In the test beam [2], the noise of the "average"
strips is well reproduced by a gaussian with a standard deviation of 4 ADC
counts, and the PDF for the sum of three strip signals has its maximum at 142
ADC counts, with a most probable signal-to-noise ratio of $35.5$ for a
three-strip cluster ($SNR(n)=\sum_{i=1}^{3}x_{i}(n)/\sigma_{i}$). The
functions $a_{j}(\varepsilon)$ are those of ref. [1] for this strip type. In
the simulations we will use a high momentum of $350\,GeV/c$. For this momentum
and a similar geometry, we have a report with some histograms from a CERN test
beam of the PAMELA tracker before its installation in the satellite. The
tracks were reconstructed with the $\eta_{2}$ algorithm, and this allows a
positive check of our simulations.
Figure 1: Left plot. Blue line: distribution of the differences of hit
$\\#\,3$ minus hit $\\#\,1$ of a track for the $\eta_{2}$-algorithm; magenta
line: the same for the COG2. Right plot. True residuals of the reconstructed
tracks: from the MLE (red), from $\sigma_{eff}(i)$ (black), from $\eta_{2}$
(blue), from COG2 (magenta), and the position errors of the $\eta_{2}$ (cyan)
and COG2 (green).
At $350\,GeV/c$, the hits of the tracks are largely contained in a strip width
($51\,\mu m$). The left side of fig. 1 shows the distributions of the
differences between two hits on a track (the $\\#\,3$ minus the $\\#\,1$) for
two different hit reconstruction algorithms. They peak around 8 $\mu m$, but,
as for any COG reconstruction algorithm, the COG2 algorithm produces a wider
distribution due to the systematic error of ref. [10]. The
$\eta_{2}$-algorithm is built to be free of this systematic error; here we
corrected even the asymmetry errors discussed in refs. [11, 12]. Even if they
are now very small, the clear symmetry of the plots is the product of this
accuracy. Owing to the absence of the COG systematic error, the PDFs of the
track parameters given by the $\eta_{2}$ algorithm are better than those given
by the COG2. The results of our MLE are compared with those obtained by three
different least squares. Having to plot the results of four types of
reconstruction, we will use the following color convention:
* •
red lines refer to our MLE (eq. 8),
* •
black lines are the weighted least squares with weight $1/\sigma_{eff}(i)^{2}$
and $\eta_{2}$ position,
* •
blue lines for the least squares with the $\eta_{2}$ position algorithm,
* •
magenta lines for the least squares with the COG2 position algorithm.
In the right plot of fig. 1, the distributions (or, more precisely, the
histogram values divided by the number of entries and the step size) of the
differences of the fitted positions with respect to the exact ones are
reported with the above color convention. In the following we will call these
differences true residuals (available only in simulations) to distinguish them
from the residuals, generally defined as the differences of the fitted
positions from the reconstructed positions. For further tests, two other lines
are added: the cyan and green lines are the error distributions of the
$\eta_{2}$ and COG2 hit reconstruction algorithms. The PDFs of the residuals
are not reported but, for the COG2 and $\eta_{2}$, they are almost identical
to the reported PDFs of the true residuals. The residuals for the other two
approaches are very different from the true residuals of fig. 1: they have
very high peaks around zero. Our PDF of eq. 6 and $\sigma_{eff}(i)$ allow the
recognition of the good hits, and the fit optimization chooses to pass near
them, giving a high frequency of small residuals.
The general expectation of an improvement of the position reconstruction by
redundant data is evidently disappointed for the $\eta_{2}$ least squares: the
cyan distribution is higher than the blue one. This mismatch is very similar
to the case of a least squares on data from a Cauchy distribution [16].
Instead, the true residuals of the COG2 least squares look much better than
their error distribution, which is very large. The least squares looks able to
reshape the error distribution. But this effort consumes all its power: in
fact, if the statistical noise is suppressed, the COG systematic error does
not allow any modification of the true residuals. On the contrary, the true
residual PDF of the $\eta_{2}$ least squares tends toward a Dirac $\delta$
function in the absence of the statistical noise. We have to signal a slight
inconsistency between the histograms reported here and those of ref. [1].
There we always worked with the strip pitch as the scale of lengths (as
defined for the probability of eq. 5), and in the plots the horizontal scale
was converted to the appropriate units ($\mu m$). Here, the conversion to the
correct dimensions was performed before building the histograms, thus the
vertical scales turn out different from those of ref. [1].
Figure 2: Left plot. Distributions of the curvature in $(GeV/c)^{-1}$ (the
$\alpha$ parameters) for the four types of fit. Right plot: distributions of
the reconstructed momenta.
As illustrated in fig. 2, the MLEs give the best results for the momentum
reconstruction, and the weighted least squares are very near to them. The fits
of the standard least squares with the $\eta_{2}$ or COG2 positioning
algorithms show a drastic decrease in resolution; the use of the simple COG2
algorithm is the worst one. Often the distributions of the left side of fig. 2
are reported as the resolution of the momentum reconstruction, the k-value of
ref. [14]. For the $\eta_{2}$ and COG2 least squares, the plots of the
momentum distributions have appreciable shifts of the maxima (most probable
values) with respect to the fiducial value of 350 $GeV/c$; the shifts are
negligible for the other two fits. These shifts are mathematical consequences
of the change of variable from curvature to momentum.
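This effect is easy to reproduce numerically: the sketch below draws gaussian-distributed curvatures centered on the fiducial value, with illustrative widths (narrow, MLE-like; wide, standard-least-squares-like), and shows that the mode of $p=c/\alpha$ falls increasingly below $350\,GeV/c$ as the curvature distribution widens.

```python
import numpy as np

# The mode of the momentum distribution shifts under p = c/alpha even for an
# unbiased gaussian curvature. Widths are illustrative: narrow ~ MLE-like,
# wide ~ standard-least-squares-like.
rng = np.random.default_rng(1)
c = 0.3 * 0.43                  # p = c / alpha  (p in GeV/c, alpha in 1/m)
alpha0 = c / 350.0              # curvature of a 350 GeV/c track

for rel_width in (0.05, 0.25):
    alpha = rng.normal(alpha0, rel_width * alpha0, 2_000_000)
    p = c / alpha[alpha > 0]    # change of variable from curvature to momentum
    hist, edges = np.histogram(p, bins=400, range=(200.0, 600.0))
    i = hist.argmax()
    print(f"width {rel_width:.0%}: mode ~ {0.5*(edges[i]+edges[i+1]):.0f} GeV/c")
```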
### 3.1 Other track parameters
The complete track reconstruction must also consider the other two parameters
of a track, $\beta_{n}$ and $\gamma_{n}$. Their fits give distributions very
similar to those plotted in ref. [1]. The maxima of the distributions are now
a little lower, in particular for the $\beta$ parameter. This is not
unexpected: in ref. [1] we had three degrees of freedom for two parameters,
while here we have three degrees of freedom for three parameters, an effective
reduction of the redundancy that has a slight effect on the results.
## 4 High noise, low resolution, normal strip side
The other side of the double-sided silicon microstrip detector has very
different properties with respect to the floating strip side (the junction
side). We will not recall the special treatments required to transform an
ohmic side into a strip detector and all the other particular setups necessary
for its functioning. From the point of view of the data, this side behaves
very much like a normal strip detector with a gaussian noise of 8 ADC counts,
twice that of the other side (most probable signal-to-noise ratio
$SNR(n)=18.2$). The absence of the floating strips gives the histograms of the
COG2 the normal aspect, with a high central density and a drop around
$x_{g2}=0$. No additional rises around $x_{g2}=\pm 1/2$ are present; these are
typical of the charge spread given by the floating strips. This absence
reduces the efficiency of the positioning algorithms, which gain substantially
from the charge spread. Now, the functions $\\{a_{i}(\varepsilon)\\}$ of ref.
[1] are similar to those of an interval function with a weak rounding at the
borders; this rounding is mainly due to the convolution of the strip response
with the charge spread produced by the drift in the collecting field. If a
residual capacitive coupling is present, it is very small. In any case, it is
just this rounding that makes the resolution of the hits near the strip
borders very good, with a small $\sigma_{eff}(i)$. Due to the differences with
respect to the other side, we implement the simulations with a lower momentum
of $150\,GeV/c$.
Figure 3: Left plot. Blue line: distribution of the differences of hit
$\\#\,3$ minus hit $\\#\,1$ of a track for the $\eta_{2}$-algorithm; magenta
line: the same for the COG2. Right plot. True residuals of the reconstructed
tracks. Color codes as defined above for the fits; the cyan and green PDFs are
the positioning errors of the $\eta_{2}$ and COG2 algorithms.
The left side of fig. 3 shows the distributions of the hit differences
($\\#\,3$ minus $\\#\,1$) to test the containment of the track within a strip
width; the larger noise has evident effects on their forms. As usual, the COG2
distribution is wider than the $\eta_{2}$ distribution. The true residuals of
the fits are reported in the right side of fig. 3, with the cyan and green
lines for the error distributions of the $\eta_{2}$ and COG2 hit
reconstruction algorithms. The other color codes are the standard ones. Even
here the data redundancy for the $\eta_{2}$-fit does not improve the position
reconstruction: the $\eta_{2}$ hit error distribution is drastically better
than that of the true residuals of the fit. The COG2 position error
distribution has two maxima due to the systematic error, which is positive in
the first half of the strip and negative in the second half; this difference
of sign combines with the random noise error to round the two maxima. The fit,
based on the COG2 positions, has a good probability of partially averaging
these sign differences and producing a single wide maximum. The momentum
distributions given by the four fits are reported in fig. 4.
Figure 4: Left plot. Distributions of the curvature in $(GeV/c)^{-1}$ (the
$\alpha$ parameters) for the four different fits. Right plot: distributions of
the reconstructed momenta.
The results of our MLE and the weighted least squares are drastically better
than those of the standard least squares. Now the shifts of the maxima of the
momentum distributions of the two least squares are more evident than in
fig. 2.
## 5 Discussion
A meaningful comparison among the different fits is somewhat difficult. Often
the standard deviation of the PDF is extracted by fitting a gaussian function
to non-gaussian distributions, or two (or more) gaussian PDFs are fitted in
the hope of describing two (or more) independent gaussian processes. In ref.
[1] we used the full width at half maximum but, apart from its complex
extraction, it does not characterize the distributions very well. In any case,
those differences are not easy to interpret. Here we will use two very
precious resources contributing to the momentum measurement: the magnetic
field and the signal-to-noise ratio. The simulated magnetic field intensity
and the signal-to-noise ratio are increased in the $\eta_{2}$ and COG2 least
squares reconstructions until they reach our best distributions (the red
lines).
### 5.1 Increasing the magnetic field
Figure 5: Top plots. Low noise case: curvature in $(GeV/c)^{-1}$ and momentum
distributions for our MLE (red lines) and the $\eta_{2}$ least squares with a
magnetic field increased by a factor 1.5. Bottom plots. Higher noise case:
here the $\eta_{2}$ least squares has a magnetic field 1.8 times greater than
the MLE.
With fixed momentum, the magnetic field is increased in the fit for the
$\eta_{2}$ least squares. The upper sector of fig. 5 illustrates these results
for the low-noise side and the overlaps with our MLE. The red lines are
identical to those of fig. 2; the blue lines are the $\eta_{2}$ least squares
for a magnetic field 1.5 times higher. The plots for the COG2 least squares
are not reported, to keep the figures easily legible; in this case the
magnetic field must be increased by a factor 1.8 to overlap our red lines. The
lower sector of fig. 5 reports the higher-noise side: the red lines are
identical to those of fig. 4, and the blue lines ($\eta_{2}$ least squares)
overlap the red lines with a magnetic field increased by a factor 1.8. Even
here, the overlap with the COG2 least squares is not reported; now, to obtain
a reasonable overlap, the magnetic field must be doubled. Clearly, the
increase of the magnetic field produces other slight differences in the tails
of the distributions, but the main results are well evidenced.
### 5.2 Increase of the signal-to-noise ratio
Let us now see the effects of the increase of the signal-to-noise ratio. These
modifications reproduce in part the results of the magnetic field increase,
but differ in other respects. In fact, higher values of the magnetic field do
not modify the form of the distributions of the curvature $\alpha$ (with
dimension ${length}^{-1}$); the curvature distributions translate toward
higher values for a reduction of the track radius. The dimensional
transformations to $(\mathrm{GeV/c})^{-1}$ and $\mathrm{GeV/c}$ have the
magnetic field intensity as a scaling factor, which can raise the
distributions to the forms of fig. 5 even in the COG2 case. But, as for the
$\alpha$ parameters (with dimension ${length}^{-1}$), the distributions of the
true residuals of fig. 1 and fig. 3 are not influenced by the increase of the
magnetic field, apart from small effects on the Lorentz angle.
Keeping the magnetic field at $0.43\,T$, the increase of the signal-to-noise
ratio modifies the distributions of the $\alpha$ parameters and those of the
true residuals, and can bring them to overlap our red distributions. In our
simulations, this is accomplished by scaling the amplitude of the random noise
added to the strip signals and rerunning the least squares fits. The plots of
these results for the momentum (in $\mathrm{GeV/c}$) and curvature (in
$(\mathrm{GeV/c})^{-1}$) are not reported because they are practically
identical to those of fig. 5. For the low-noise side, the gaussian strip noise
of $\sigma=4$ ADC counts must be reduced to 2.5 ADC counts, raising the
signal-to-noise ratio by a factor 1.6 to a huge most probable value of
$SNR(n)=56.8$. Now even the blue line of the true residuals in fig. 1 reaches
the red line, but the redundancy continues to be unable to move the true
residual distribution beyond that of the new $\eta_{2}$ errors. A special
mention must be devoted to the COG2 distributions, the magenta lines: they
remain essentially unchanged in every plot for any reduction of the strip
noise. The COG2 error distribution of fig. 1 (the green line) shows a slight
modification, becoming less rounded owing to the persistent presence of the
systematic error, which is unaffected by the random noise.
For the higher-noise side of the detector, a reduction to less than one half
of the noise (from $8$ to $3.6\ \mathrm{ADC}$ counts) is required to reproduce
the lower parts of fig. 5. With this increase (a factor 2.2) of the
signal-to-noise ratio, the blue line (the $\eta_{2}$ best fit) overlaps the
red line in fig. 3, raising the most probable signal-to-noise ratio to 40.5.
As for the other side, no noise reduction moves the distributions given by the
COG2 least squares: the magenta lines remain practically unchanged, apart from
a reduction of the fluctuations and a sharpening of the green line of fig. 3.
As just said above, this insensitivity to the noise reduction is due to the
COG systematic error of ref. [10]. This fact makes efforts to improve the
signal-to-noise ratio in the detectors of little relevance if the positioning
algorithm remains the COG. On the other hand, this method is stable with
respect to a decrease of the signal-to-noise ratio due to ageing or radiation
damage, as long as the COG systematic error dominates.
A last point remains to be analyzed, i.e., the huge difference between the
curvature PDF for the $\eta_{2}$ least squares in the lower part of fig. 5 and
the blue line of fig. 2. Now the two detector types have very similar noise
levels (3.6 ADC counts in the first and 4 ADC counts in the second) but, to
reach the overlap of the two PDFs, another factor of around 2.4 is needed.
This factor is too large to be due to the $20\,\%$ difference between the
sizes of their strips. The main difference must be due to the beneficial
charge sharing of the floating strip and the closeness of its detector
architecture to the ideal detector defined in ref. [10].
## 6 Conclusions
We extended the simulated track reconstructions of ref. [1] with the study of
curved tracks in a constant magnetic field of $0.43\,T$. All the other
effects, $\delta$-rays, multiple scattering (negligible at our high momenta),
energy loss, etc., are explicitly excluded from the simulation, focusing on
the differences among the various fit methods. Each side of our double-sided
detector is examined as a momentum analyzer. The higher-noise side has signal
properties very similar to the single-sided detectors widely used in running
trackers. We use our well-tuned PDFs in two forms: complete, in the MLE, or
schematic, with weight parameters $1/\sigma_{eff}(i)^{2}$. Each of these two
forms is very effective for this task compared with the results of the two
other types of track fitting explored, i.e., the least squares with $\eta_{2}$
as the position algorithm and the least squares with the COG2 positioning. To
establish a comparison among these fitting methods, we skip the usual
approaches of comparing standard deviations or full widths at half maximum.
Instead, we modify two different tracker properties to reach an overlap among
the fit outputs: the magnetic field and the signal-to-noise ratio. For the
overlap of the two standard fits with our best distributions, the magnetic
field must be increased by a factor 1.5 for the $\eta_{2}$ fit and 1.8 for the
COG2 on the low-noise side, and by 1.8 and 2 on the higher-noise side. The
increase of the signal-to-noise ratio is effective only for the
$\eta_{2}$-based least squares; the overlaps are obtained with factors 1.6 and
2.2 for the two detector sides. No increase of the signal-to-noise ratio is
able to improve the curvature and momentum distributions in the case of the
COG2 positioning: the intrinsic COG systematic error survives untouched by any
reduction of the detector random noise. In any case, the drastic improvement
of our well-tuned PDFs in the momentum reconstructions is evident. We must
remember that these are simulations and come with all their uncertainties.
Assuming an optimistic view, this increase in resolution can be spent in
different ways: either for better results in running experiments or for
reducing the complexity of future experiments, if the baseline fits (almost
always based on COG positioning) are deemed sufficient. With eq. 5, we start
to give a glimpse of the analytical forms of our PDFs. Even if this expression
is of limited validity, its combination with the appropriate
$\\{a_{j}(\varepsilon)\\}$ allows a faithful reproduction of the green and
cyan distributions of figs. 1 and 3 from the data. Those distributions are
exclusive products of simulations but, with slight redefinitions, the
simplified PDF is able to reproduce them. The complex task of the MLE requires
more advanced expressions; future papers will be devoted to their discussion.
## References
* [1] G. Landi and G.E. Landi, _Improvement of track reconstruction with well tuned probability distributions_, 2014 JINST 9 P10006.
* [2] O. Adriani et al., _In-flight performance of the PAMELA magnetic spectrometer_, $16^{th}$ International Workshop on Vertex Detectors, September 2007, NY, USA, PoS(Vertex 2007)048.
* [3] G. Batignani et al., _Double sided read out silicon strip detectors for the ALEPH minivertex_, Nucl. Instrum. Meth. A 277 (1989) 147.
* [4] O. Adriani et al. [L3 SMD Collaboration], _The new double sided silicon microvertex detector for the L3 experiment_, Nucl. Instrum. Meth. A 348 (1994) 431.
* [5] P. Picozza et al., _PAMELA - A payload for antimatter matter exploration and light-nuclei astrophysics_, Astropart. Phys. 27 (2007) 296 [astro-ph/0608697]; O. Adriani et al., _An anomalous positron abundance in cosmic rays with energies 1.5-100 GeV_, Nature 458 (2009) 607.
* [6] R. Frühwirth, _Track fitting with non-Gaussian noise_, Computer Physics Communications 100 (1997) 1; R. Frühwirth and T. Speer, _A Gaussian-sum filter for vertex reconstruction_, Nucl. Instrum. Meth. A 534 (2004) 217.
* [7] ALICE Collaboration, _The ALICE experiment at the CERN LHC_, 2008 JINST 3 S08002.
* [8] ATLAS Collaboration, _The ATLAS experiment at the CERN Large Hadron Collider_, 2008 JINST 3 S08003.
* [9] CMS Collaboration, _The CMS experiment at the CERN Large Hadron Collider_, 2008 JINST 3 S08004.
* [10] G. Landi, _Properties of the center of gravity as an algorithm for position measurements_, Nucl. Instrum. Meth. A 485 (2002) 698; G. Landi, _Properties of the center of gravity as an algorithm for position measurements: two-dimensional geometry_, Nucl. Instrum. Meth. A 497 (2003) 511.
* [11] G. Landi, _Problems of position reconstruction in silicon microstrip detectors_, Nucl. Instrum. Meth. A 554 (2005) 226.
* [12] G. Landi and G.E. Landi, _Asymmetries in silicon microstrip response function and Lorentz angle_, arXiv:1403.4273 [physics.ins-det].
* [13] CMS Collaboration, _Description and performance of track and primary-vertex reconstruction with the CMS tracker_, 2014 JINST 9 P10009.
* [14] K.A. Olive et al. (Particle Data Group), _Review of Particle Physics_, Chin. Phys. C 38 (2014) 090001.
* [15] E. Belau et al., _Charge collection in silicon strip detectors_, Nucl. Instrum. Meth. 214 (1983) 253.
* [16] G. Landi, _Tracks of detector theory_, slides for a seminar at the Firenze Theory Group, October 2014.
* [17] MATLAB 8, The MathWorks Inc.
# Multi-Robot Trajectory Planning with Feasibility Guarantee and Deadlock
Resolution: An Obstacle-Dense Environment
Yuda Chen, Chenghan Wang, and Zhongkui Li The authors are with the State Key
Laboratory for Turbulence and Complex Systems, Department of Mechanics and
Engineering Science, College of Engineering, Peking University, Beijing
100871, China (e-mail: [email protected]).
###### Abstract
This article presents a multi-robot trajectory planning method that
guarantees optimization feasibility and resolves deadlocks in an
obstacle-dense environment. The method is built upon an optimization problem
in which a modified buffered Voronoi cell with warning band is utilized to
avoid inter-robot collisions and deadlocks are resolved by an adaptive
right-hand rule. Meanwhile, a novel safe corridor derived from the historical
planned trajectory is proposed to provide a proper space for obstacle
avoidance in trajectory planning. Comparisons with state-of-the-art works are
conducted to illustrate the safety and deadlock resolution in cluttered
scenarios. Additionally, hardware experiments are carried out to verify the
performance of the proposed method, in which eight nano-quadrotors fly through
a $0.6$ m cubic framework.
###### Index Terms:
Trajectory generation, motion planning, multi-robot system, collision
avoidance, deadlock resolution.
## I Introduction
Collision-free trajectory planning plays an essential role in missions
performed by a swarm of robots in a shared environment, such as cooperative
inspection and transportation [1, 2]. Currently, optimization-based methods,
such as model predictive control [3] and sequential convex programming [4],
are widely employed to handle collision avoidance by introducing different
kinds of constraints. However, the constrained optimization problem may suffer
from infeasibility, leading to the failure of replanning, and such a
phenomenon occurs more frequently in a crowded environment. Furthermore, in
obstacle-dense situations, the robots in the swarm are more likely to get
stuck with each other, a phenomenon also known as deadlock [5].
Concerning the multi-robot trajectory planning problem in an obstacle-dense
environment, we propose a novel method to ensure optimization feasibility and
handle the deadlock problem simultaneously. In this work, the modified
buffered Voronoi cell with warning band (MBVC-WB) [6] is utilized to deal with
inter-robot collision avoidance, and the accompanying adaptive right-hand rule
is introduced for deadlock resolution. Furthermore, in order to avoid
obstacles in the environment, a safe corridor is generated to provide a
feasible space for trajectory replanning. Specifically, we first adopt a
sampling-based path planning method, ABIT∗ [7], to determine an approximate
path in the complex environment. Then, a separating hyperplane is constructed
between the imminent obstacles and the existing planned trajectory based on a
quadratic program.
The main contributions of this work are summarized as follows.
* $\bullet$
A novel safe corridor constituted by a sequence of polyhedra is proposed for
obstacle avoidance, and it is formed via an online method, different from the
offline ones in [8, 9]. In contrast to the rectangular safe corridors in [10,
11, 12], the presented corridor provides a more reasonable planning space by
considering the motion tendency of the robots and the distribution of
obstacles in the environment.
* $\bullet$
Different from [6], where the deadlock resolution is performed in free space,
this work considers an obstacle-dense environment, which is evidently more
challenging. In addition, the penalty term related to the warning band is
replaced by a quadratic one, which considerably reduces the computation time.
* $\bullet$
Comparisons with state-of-the-art results [11, 13, 14] are made in several
cluttered scenarios, illustrating that the proposed method has a better
performance in terms of guaranteeing collision avoidance as well as handling
the deadlock problem.
* $\bullet$
Hardware experiments are executed to verify the method in real-world cases,
including eight Crazyflies passing through a 3D framework (Fig. 1), four
Crazyflies transiting in a polygonal environment, and six and eight robots
going through "H"- and "n"-shaped narrow passages.
Figure 1: Eight nano-quadrotors fly through a cubic framework.
## II Related Work
### II-A Optimization-based Trajectory Planning
In optimization-based multi-robot planners, trajectory planning is formulated
as a numerical optimization problem, where inter-robot collision avoidance is
handled by adding convex [3, 4] or non-convex [14, 15] constraints.
Nonetheless, most of the existing methods encounter the challenge that the
optimization may be infeasible under these constraints. To overcome this
drawback, the method in [13] adopts soft constraints instead of hard ones,
which however may mean that the safety of the planned trajectory cannot be
ensured. Other methods [11] solve the feasibility problem by using a relative
safe corridor to guarantee feasibility. Unfortunately, they compute the
trajectories sequentially instead of concurrently as in, e.g., [3, 16], which
indicates that a robot would waste a large amount of time waiting for the
others to replan. Besides optimization feasibility, another problem in
trajectory planning is the deadlock, which refers to the fact that robots can
become trapped by each other during collision avoidance [5, 11, 17]. A common
solution is the right-hand rule [16] based on an artificial perturbation [8],
but then inter-robot collision avoidance cannot be ensured. Our previous work
[6] performs well in deadlock resolution via an adaptive right-hand rule,
which however is only applicable in obstacle-free space. In conclusion,
feasibility guarantee and deadlock resolution in an obstacle-dense environment
remain an open problem for multi-robot trajectory planning.
### II-B Safe Corridor
An early work related to the safe corridor is [18], which generates the
corridor through semi-definite programming. The work [19] produces the safe
corridor in two steps, namely, sampling-based path planning and geometry-based
corridor construction, and achieves high-speed replanning for quadrotors.
Unfortunately, the constraints introduced by the corridor cannot always be
satisfied, which implies the optimization may be infeasible; the same problem
can also be observed in [8, 20]. In addition, the authors in [10, 11, 12]
propose a construction method that expands rectangular corridors in a grid
map. Although this method is computationally efficient, the constructed
corridors are restricted by their rectangular shape, which may have a lower
space efficiency. The method in [21] generates a safe corridor by using
support vector machines, which achieves a higher space utilization but is
centralized and offline. Another way to construct a safe corridor is voxel
expansion [8, 9], where the corridor is constituted based on a grid map.
However, the construction there is performed offline as well, and cannot
handle dynamic missions such as changing targets.
## III Problem Statement
This section formulates the optimization-based trajectory planning problem in
a cluttered environment with dimension $d\in\\{2,3\\}$. The goal is to drive
$N$ robots from their initial positions to their respective destinations in an
environment with obstacles. During this period, a robot cannot collide with
_any_ other robot or _any_ obstacle. Although every robot can only determine
its own control input, the information of the others can be obtained via
wireless communication. The trajectory is replanned and executed every
sampling time step, and the replanning is reformulated as a numerical
optimization with finitely many variables.
### III-A Trajectory Representation
Let $h$ denote the sampling time step. At the time step $t$ of replanning, the
planned trajectory of robot $i$ is defined as
$\mathcal{P}^{i}(t)=[p^{i}_{1}(t),p^{i}_{2}(t),\ldots,p^{i}_{K}(t)]$, where
$p^{i}_{k}(t)$, $k\in\mathcal{K}:=\\{1,2,\cdots,K\\}$, is the planned position
at time $t+kh$ and $K$ is the length of the horizon. Similarly, let
$v^{i}_{k}(t)$, $k\in\mathcal{K}$, denote the velocity at time $t+kh$ and let
$u^{i}_{k}(t)$, $k\in\\{0,1,\cdots,K-1\\}$, denote the control input. The
dynamics of the robot are formulated as
$x_{k}^{i}(t)=\mathbf{A}x_{k-1}^{i}(t)+\mathbf{B}u_{k-1}^{i}(t),\;k\in\mathcal{K},$
(1)
where $x^{i}_{k}(t)=[p^{i}_{k}(t),v^{i}_{k}(t)]$ is the planned state at time
$t+kh$, $x^{i}_{0}(t)=x^{i}(t)$ and
$\mathbf{A}=\left[\begin{array}[]{ccc}\mathbf{I}_{d}&h\mathbf{I}_{d}\\\
\mathbf{0}_{d}&\mathbf{I}_{d}\\\
\end{array}\right],\quad\mathbf{B}=\left[\begin{array}[]{ccc}\frac{h^{2}}{2}\mathbf{I}_{d}\\\
h\mathbf{I}_{d}\end{array}\right].$ (2)
Additionally, the input and velocity constraints are given as
$\|\Theta_{a}u^{i}_{k-1}(t)\|_{2}\leq a_{\text{max}},\ k\in\mathcal{K},$ (3)
$\|\Theta_{v}v_{k}^{i}\|_{2}\leq v_{\text{max}},\ k\in\mathcal{K},$ (4)
where $\Theta_{v},\Theta_{a}$ are positive-definite matrices, and
$v_{\text{max}},a_{\text{max}}$ denote the maximum velocity and acceleration,
respectively.
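A minimal sketch of the model of eqs. (1)-(2) follows, with an illustrative sampling step $h$ and dimension $d=2$.

```python
import numpy as np

# Double-integrator model of eqs. (1)-(2) for d = 2 and an illustrative
# sampling step h = 0.1 s.
d, h = 2, 0.1
I, Z = np.eye(d), np.zeros((d, d))
A = np.block([[I, h * I], [Z, I]])
B = np.vstack([0.5 * h**2 * I, h * I])

# One prediction step: state x = [position; velocity], input u = acceleration.
x = np.array([0.0, 0.0, 1.0, 0.0])   # at the origin, moving along +x
u = np.array([0.0, 0.5])             # accelerate along +y
x_next = A @ x + B @ u
print(x_next)                        # -> [0.1, 0.0025, 1.0, 0.05]
```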
Assume that once the planned trajectory is updated, the lower-level feedback
controller can perfectly track it in the time interval $\left[t,t+h\right]$.
As a result, when replanning at time $t+h$, the current state of the robot
satisfies $x^{i}(t+h)=x^{i}_{1}(t)$. Fortunately, existing tracking
controllers, e.g., [22], are qualified to fulfill this assumption, and one is
adopted in our hardware experiments.
In cooperative navigation, the information of different robots is exchanged by
wireless communication. Moreover, it is assumed that the modified historical
planned trajectory is communicated; it is called the predetermined trajectory
and defined as
$\overline{\mathcal{P}}^{i}(t)=[\overline{p}_{1}^{i}(t),\overline{p}_{2}^{i}(t),\ldots,\overline{p}_{K}^{i}(t)]$,
where
$\overline{p}_{k}^{i}(t)=p^{i}_{k+1}(t-h),k\in\tilde{\mathcal{K}}:=\\{1,2,\ldots,K-1\\}$
and $\overline{p}^{i}_{K}(t)=p^{i}_{K}(t-h)$.
### III-B Collision Avoidance
#### III-B1 Inter-robot Collision Avoidance
To avoid collisions among robots, the minimum distance allowed between any
pair of robots is set to $r_{\text{min}}>0$, implying that a collision happens
when $\|p^{i}-p^{j}\|_{2}\leq r_{\text{min}}$ (for simplicity, the time index
$t$ is omitted whenever no ambiguity is caused; for example, $p^{i}(t)$ is
rewritten as $p^{i}$). Moreover, the replanned trajectories of robots $i$ and
$j$ are collision-free if the distances between the positions of different
pairs of robots are larger than $r_{\text{min}}$ not only at the sampling
times $t+kh$, $k\in\mathcal{K}$, but also during the intervals between them.
#### III-B2 Obstacle Avoidance
Let $\mathcal{O}$ denote the collection of obstacles. Obstacle avoidance then
requires that the circle or sphere occupied by a robot in the configuration
space does not have contact with any obstacle, i.e.,
$\left(p^{i}\oplus r_{a}\mathbf{I}_{d}\right)\cap\mathcal{O}=\emptyset$, where
$\oplus$ is the Minkowski sum and $r_{a}$ is the radius of the agents.
Furthermore, a planned trajectory $\mathcal{P}$ is collision-free if its
position is collision-free not only at the sampling times but also during the
intervals between them. Similar to [20, 23], we assume that the obstacles are
convex-shaped.
## IV Trajectory Planning Method
The trajectory planning method is provided in this section. We deal with
inter-robot collision avoidance, deadlock resolution and then obstacle
avoidance. Subsequently, the complete trajectory planning method is
summarized, followed by the proof of the feasibility guarantee of the proposed
method.
### IV-A Inter-robot Collision Avoidance
For inter-robot collision avoidance, we introduce the modified buffered
Voronoi cell with warning band (MBVC-WB) [6] as depicted in Fig. 2. Define the
following parameters:
$a_{k}^{ij}=\frac{\overline{p}_{k}^{i}-\overline{p}_{k}^{j}}{\|\overline{p}_{k}^{i}-\overline{p}_{k}^{j}\|_{2}},\quad
b_{k}^{ij}={a_{k}^{ij}}^{T}\frac{\overline{p}_{k}^{i}+\overline{p}_{k}^{j}}{2}+\frac{r_{\min}^{\prime}}{2},$
(5)
where
$r^{\prime}_{\text{min}}=\sqrt{r_{\text{min}}^{2}+h^{2}v_{\text{max}}^{2}}$
denotes the extended minimum distance. Then, the MBVC-WB can be formulated by
$\displaystyle{a_{k}^{ij}}^{T}p_{k}^{i}\geq b_{k}^{ij},\ \forall j\neq
i,k\in\tilde{\mathcal{K}},$ (6a) $\displaystyle{a_{K}^{ij}}^{T}p_{K}^{i}\geq
b_{K}^{ij}+w^{ij},\forall j\neq i,$ (6b)
where $w^{ij}$ is an additional variable added to the optimization and
satisfying
$0\leq w^{ij}\leq\epsilon.$ (7)
In (7), $\epsilon$ is referred to as the maximum width of the warning band.
The additional penalty term added to the cost function is given by
$C_{w}^{i}=\sum_{j\neq i}\frac{1}{\epsilon\gamma^{ij}}\rho_{ij}{w^{ij}}^{2},$
(8)
where $\gamma^{ij}(t)=(1-\beta)\gamma^{ij}(t-h)+\beta w^{ij}(t-h)$,
$\beta\in(0,1)$ and $\gamma^{ij}(t_{0})=\epsilon$. Notably, the modified
function (8) is a quadratic term and is computationally efficient, in the
sense that $\gamma^{ij}(t)$, which depends on the value of $w^{ij}$ at the
last time step $t-h$, is utilized to adjust the weight of this penalty.
Additionally, $\rho_{ij}$ is an important coefficient designed to execute the
adaptive right-hand rule and will be presented later.
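As an illustration, the sketch below builds the half-space of eqs. (5)-(6a) for a single pair of robots at one horizon step; the positions and the parameters $r_{\text{min}}$, $h$, $v_{\text{max}}$ are illustrative.

```python
import numpy as np

# Half-space of eq. (5) for one pair (i, j) at one horizon step; the
# positions and the parameters r_min, h, v_max are illustrative.
r_min, h, v_max = 0.3, 0.1, 2.0
r_min_ext = np.sqrt(r_min**2 + (h * v_max)**2)   # extended minimum distance

p_i = np.array([0.0, 0.0])   # predetermined position of robot i
p_j = np.array([1.0, 0.5])   # predetermined position of robot j

a_ij = (p_i - p_j) / np.linalg.norm(p_i - p_j)
b_ij = a_ij @ (p_i + p_j) / 2 + r_min_ext / 2

# Constraint (6a): the planned position p of robot i must satisfy
# a_ij @ p >= b_ij; i's own predetermined point already does.
print(a_ij @ p_i >= b_ij)   # True
```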
### IV-B Deadlock Resolution
To deal with the possible deadlock problem in inter-robot collision avoidance,
we propose a detection-resolution mechanism. For deadlock detection, the
notion of terminal overlap is introduced as follows: if $p^{i}_{K}(t)\neq
p^{i}_{\text{target}}$, $p^{i}_{K}(t)=p^{i}_{K}(t-h)$, and
$p^{i}_{K}(t)=p^{i}_{K-1}(t)$, we say that a terminal overlap happens, denoted
by $b^{i}_{\text{TO}}=True$. Regarding deadlock resolution, the adaptive
right-hand rule is carried out by adjusting $\rho^{ij}$ as follows:
$\rho^{ij}=\rho_{0}\,e^{\eta^{i}(t)\,\sin\theta^{ij}},$ (9)
where
$\eta^{i}(t)=\left\\{\begin{array}[]{rll}&\eta^{i}(t-h)+\Delta\eta&\mbox{if}\;b^{i}_{\text{TO}}=True,\\\
&0&\mbox{if}\;\ w^{ij}=0,\forall j\neq i,\\\
&\eta^{i}(t-h)&\text{else}.\end{array}\right.$ (10)
In (9) and (10), $\rho_{0}>0$ and $\Delta\eta>0$ are coefficients; the initial
condition of $\eta^{i}$ is $\eta^{i}(t_{0})=0$; the parameter $\theta^{ij}$ is
defined as the angle in the $x$-$y$ plane between the projections of the
vectors from $p^{i}_{K}$ to $p^{i}_{\text{target}}$ and from $p^{i}_{K}$ to
$p^{j}_{K}$.
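A minimal sketch of this update rule follows; the values of $\rho_{0}$ and $\Delta\eta$, the geometry, and the signed-angle convention for $\theta^{ij}$ are illustrative assumptions.

```python
import numpy as np

# Adaptive right-hand-rule weight of eqs. (9)-(10); rho_0, Delta_eta and
# the geometry are illustrative, as is the signed-angle convention.
rho_0, delta_eta = 1.0, 0.5

def update_eta(eta_prev, terminal_overlap, all_w_zero):
    if terminal_overlap:      # deadlock detected: strengthen the bias, eq. (10)
        return eta_prev + delta_eta
    if all_w_zero:            # no warning-band activity: reset the bias
        return 0.0
    return eta_prev

def rho(eta, p_K, p_target, p_K_j):
    # theta^{ij}: angle in the x-y plane between (target - p_K) and (p_K_j - p_K)
    u, v = (p_target - p_K)[:2], (p_K_j - p_K)[:2]
    theta = np.arctan2(u[0]*v[1] - u[1]*v[0], u @ v)   # signed angle
    return rho_0 * np.exp(eta * np.sin(theta))         # eq. (9)

eta = update_eta(0.0, terminal_overlap=True, all_w_zero=False)
print(rho(eta, np.zeros(2), np.array([2.0, 0.0]), np.array([1.0, 0.3])))
```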
Figure 2: Illustration of the MBVC-WB, where the green area is the feasible
space and the orange one is the warning band. The shared space is split at
each horizon step (left). In particular, for the terminal horizon, the warning
band is added (right).
### IV-C Obstacle Avoidance
Obstacle avoidance is realized by restricting the planned trajectory to a safe
corridor. The corridor is constituted by a sequence of convex polyhedra whose
edges separate the planned positions $p^{i}_{k}$ from the inflated obstacles
$\tilde{\mathcal{O}}$, i.e., the obstacles inflated by $r_{\text{a}}$ (in
Fig. 3, the blue polygons obtained from the green obstacles). In our method,
the corridor is formed upon the predetermined trajectory. As an example, in
Fig. 3(d) three intersecting convex polyhedra make up a corridor. Based on the
safe corridor, the obstacle avoidance constraint can be written as
${a_{k}^{i,o}}^{T}p_{k}^{i}\geq b_{k}^{i,o},$ (11)
where $a_{k}^{i,o}$ and $b_{k}^{i,o}$ constitute an edge of the corridor for
robot $i$ at horizon step $k\in\mathcal{K}$. In the following subsections, the
method of constructing these planes is clarified in detail.
#### IV-C1 Path Planning
To begin with, a path is required to indicate an approximate direction for a
robot toward its destination. This path is a collision-free polyline that
connects the robot and its target, i.e., it connects the terminal-horizon
position of the predetermined trajectory, $\overline{p}^{i}_{K}$, and the
target, $p^{i}_{\text{target}}$, as shown in Fig. 3(a). RRT∗-based methods are
qualified to find such a feasible path and, among these methods, Advanced
Batch Informed Trees (ABIT∗) [7] achieves a higher path quality. Thus, ABIT∗
is utilized to find the path, and the length of this path is chosen as the
objective to be minimized.
Once the path is found, a point $p^{i}_{\text{tractive}}$, called the tractive
point, is chosen on this path. It is determined as the closest point on the
path to the target such that the line segment between this point and
$\overline{p}^{i}_{K}$ is collision-free. An example is presented in
Fig. 3(b). Then, if a terminal overlap does not happen, i.e.,
$b^{i}_{\text{TO}}=False$, we add the tractive point to the end of the
predetermined trajectory to form the extended predetermined trajectory (EPT).
In practice, we find that, in most situations, a tractive point can be
obtained from the previously planned path. In other words, the path does not
need to be updated in every replanning. Thus, path planning is triggered only
when needed, for example when the target changes or no tractive point can be
found.
#### IV-C2 Segment Division
After obtaining the EPT, we need to divide its points into several segments so
as to decrease the computational requirements, as shown in Fig. 3(b). First,
we choose the end point of the EPT as the start of the first segment. Next,
moving from the second-to-last point of the EPT toward its start, the points
of the EPT are added to the current segment one by one. We stop adding points
to the current segment once the convex hull of the contained points is no
longer collision-free. Then, a new segment begins from the last point of the
previous one. The above process is repeated until the beginning point
$\overline{p}^{i}_{1}$ is added to a segment; a minimal sketch of this greedy
division is given below.
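The sketch below implements this greedy division under an assumed stand-in collision check (point obstacles tested against the convex hull via a Delaunay triangulation); the EPT and the obstacle are illustrative.

```python
import numpy as np
from scipy.spatial import Delaunay

# Stand-in collision check: a point set is "free" if no point obstacle
# lies inside its convex hull (tested via a Delaunay triangulation).
def hull_is_free(points, obstacles):
    if len(points) < 3:
        return True      # degenerate hull; the EPT itself is assumed free
    tri = Delaunay(np.array(points))
    return all(tri.find_simplex(o) < 0 for o in obstacles)

def segment_division(ept, obstacles):
    segments, current = [], [ept[-1]]    # start from the end of the EPT
    for p in reversed(ept[:-1]):         # walk back toward \bar{p}_1
        if hull_is_free(current + [p], obstacles):
            current.append(p)
        else:
            segments.append(current)
            current = [current[-1], p]   # new segment shares the last point
    segments.append(current)
    return segments

# Illustrative EPT on a parabolic arc and one point obstacle.
ept = [np.array([x, 0.1 * x**2]) for x in np.arange(0.0, 5.5, 0.5)]
obstacles = [np.array([2.5, 0.8])]
print([len(s) for s in segment_division(ept, obstacles)])   # -> [7, 5]
```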
(a) Path planning.
(b) The process of choosing a tractive point and splitting the segments $S1$,
$S2$, $S3$.
(c) Forming the separating hyperplane via a quadratic program.
(d) For a given predetermined trajectory, forming a safe corridor to provide a
feasible space for replanning.
Figure 3: Process of forming the safe corridor.
input : $\mathcal{O}$, $\overline{\mathcal{P}}^{i}(t)$
output : $ObCons$
if _Need Path Planning_ then
$Path^{i}(t)\leftarrow ABIT^{*}(\overline{p}^{i}_{K},p^{i}_{\text{target}},\mathcal{O})$;
else
$Path^{i}(t)\leftarrow Path^{i}(t-h)$;
end if
$ObCons\leftarrow\emptyset$;
$Segmentlist\leftarrow\text{SegmentDivision}(\mathcal{O},\overline{\mathcal{P}}^{i}(t),Path^{i}(t))$;
for _$Segment\ \in\ Segmentlist$_ do
$Corridor\leftarrow\text{GetCorridor}(Segment,\mathcal{O})$;
$ObCons\leftarrow ObCons\,\cup\,\text{AddConstraints}(Corridor)$;
end for
Algorithm 1 GetCorridor()
#### IV-C3 Separating Plane
After dividing the points of the EPT into several segments, we construct the
separating planes between the segments and the obstacles. Since the convex
hull formed by the points in each segment is obstacle-free and the obstacles
are convex-shaped, a separating plane exists according to the separating
hyperplane theorem [24]. Then, as shown in Fig. 3(c), an optimization-based
method can be formulated as follows:
$\displaystyle\max_{a,b,\gamma}\;\gamma,$ (12) $\displaystyle{\rm s.t.},$
$\displaystyle{a}^{T}p^{\text{free}}\geq\gamma+b,$
$\displaystyle{a}^{T}p^{\text{obstacle}}\leq b,$
$\displaystyle{\left\|a\right\|}_{2}=1,$ $\displaystyle\gamma\geq 0,$
where $a$ and $b$ determine the separating plane; $p^{\text{free}}$ and
$p^{\text{obstacle}}$ denote the points of segments and obstacles
respectively; $\gamma$ is the margin variable. Such an optimization can be
further transformed to the following quadratic program (QP):
$\displaystyle\min_{a^{\prime},b^{\prime}}\ \left\|a^{\prime}\right\|_{2}^{2}$
(13) $\displaystyle{\rm s.t.},$
$\displaystyle{a^{\prime}}^{T}p^{\text{free}}\geq 1+b^{\prime},$
$\displaystyle{a^{\prime}}^{T}p^{\text{obstacle}}\leq b^{\prime},$
where $a^{\prime}=\frac{a}{\gamma}$ and $b^{\prime}=\frac{b}{\gamma}$. By
solving the QP (13), the separating plane can be obtained as
$a=\frac{a^{\prime}}{\|a^{\prime}\|_{2}}$ and
$b=\frac{b^{\prime}}{\|a^{\prime}\|_{2}}$, which forms the edge of a convex
polyhedron. Moreover, $a_{k}^{i,o}$ and $b_{k}^{i,o}$ are chosen as $a$ and
$b$, respectively, to formulate the constraints (11).
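A minimal sketch of the QP (13) for one segment and one convex obstacle, written with CVXPY and illustrative point sets, is the following; the maximum-margin plane of (12) is recovered by the final normalization.

```python
import numpy as np
import cvxpy as cp

# QP (13) between one segment and one convex obstacle; point sets are
# illustrative. The plane (a, b) of (12) is recovered by normalization.
P_free = np.array([[0.0, 0.0], [1.0, 0.2], [2.0, 0.1]])   # segment points
P_obs = np.array([[0.5, 1.5], [1.5, 1.5], [1.0, 2.5]])    # obstacle vertices

a_p, b_p = cp.Variable(2), cp.Variable()
prob = cp.Problem(cp.Minimize(cp.sum_squares(a_p)),
                  [P_free @ a_p >= 1 + b_p,   # a'^T p_free >= 1 + b'
                   P_obs @ a_p <= b_p])       # a'^T p_obs  <= b'
prob.solve()

a = a_p.value / np.linalg.norm(a_p.value)     # a = a'/||a'||
b = b_p.value / np.linalg.norm(a_p.value)     # b = b'/||a'||
print("separating plane: a =", a, ", b =", b)
```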
In the implementation, the separating plane between a segment and an obstacle
need not be constructed when the obstacle lies outside the planning region.
Therefore, we adopt the following manipulation to simplify the planning
process. Specifically, the separating planes are computed from the nearest
obstacle to the farthest one. For each obstacle, a separating plane is built
only if the obstacle has contact with the currently formed convex polyhedron;
otherwise, the obstacle is omitted. In addition, if the distance between an
obstacle and the robot is larger than $Khv_{\text{max}}$, the obstacle is not
considered either.
#### IV-C4 Safe Corridor Construction
Summing up, the algorithm for forming a safe corridor is given in Alg. 1,
where the detailed process has been illustrated sequentially in the previous
sub-subsections. This safe corridor generation method has some critical
properties, which are summarized as follows.
###### Lemma 1.
The safe corridor formulation method provided in Alg. 1 has the following
three properties:
1. 1.
If the predetermined trajectory $\overline{\mathcal{P}}$ is obstacle-free, a
safe corridor can be generated.
2. 2.
The underlying predetermined trajectory $\overline{\mathcal{P}}$ satisfies the
formulated constraints (11), i.e., ${a_{k}^{i,o}}^{T}\overline{p}_{k}^{i}\geq
b_{k}^{i,o}$, $k\in\mathcal{K}$.
3. 3.
If the constraints (11) formulated by the safe corridor are satisfied, the
planned trajectory $\mathcal{P}$ is obstacle-free.
###### Proof.
1) Based on the above-mentioned method, a segment division can be found
whenever the predetermined trajectory is obstacle-free. This is because the
line between the tractive point and $\overline{p}^{i}_{K}$ is collision-free,
which leads to a collision-free EPT. In the worst and most conservative case,
every segment is formed by just two adjacent points of the EPT. Additionally,
since separating planes exist between the points of each segment and any
obstacle, the convex polyhedra can be formed. Consequently, the safe corridor
can be constituted by a sequence of polyhedra.
2) Since the constraints (11) are obtained from the optimization (12) with
$p^{\text{free}}$ chosen as the positions of the predetermined trajectory, it
is clear that the predetermined trajectory $\overline{\mathcal{P}}$ satisfies
these constraints.
3) As aforementioned, for a collision-free predetermined trajectory, a segment
division can be found. Then, by the segment division rule, adjacent points of
the predetermined trajectory, e.g., $\overline{p}^{i}_{k}$ and
$\overline{p}^{i}_{k+1}$, must be contained in a common segment. Thus, if the
constraints are enforced, the corresponding planned points $p^{i}_{k}$ and
$p^{i}_{k+1}$ must be restricted to a common convex polyhedron, and
consequently the line segment between them must be obstacle-free. Moreover,
for a completed segment division, all points of the predetermined trajectory
are included, because the segment division is completed only after the points
have been counted sequentially up to the last one. Thus, all line segments of
the trajectory are covered by a collision-free safe corridor. ∎
### IV-D Trajectory Planning Algorithm
In the previous subsections, the methods dealing with inter-robot and robot-
obstacle collision avoidance were formulated as constraints of the
optimization problem. Furthermore, another constraint is introduced to ensure
the feasibility of the underlying optimization, namely,
$v^{i}_{K}=\mathbf{0}_{d}.$ (14)
Moreover, we enforce that $x^{i}_{k}=x^{i}_{K}$ and
$u^{i}_{k}=\mathbf{0}_{d}$ for $k>K$.
In addition to the constraints, the proposed cost function for robot $i$ is
given by
$C^{i}=C^{i}_{p}+C^{i}_{w},$ (15)
where $C^{i}_{w}$ is provided in (8) and $C^{i}_{p}$ is given by
$C^{i}_{p}=\frac{1}{2}Q_{K}\|p_{K}^{i}-p_{\text{tractive}}^{i}\|_{2}^{2}+\frac{1}{2}\sum_{k=1}^{K-1}Q_{k}\|p_{k+1}^{i}-p_{k}^{i}\|_{2}^{2}.$
(16)
Note that $C^{i}_{p}$ is employed to drive the robots toward the current
tractive point, and $Q_{k}$, $k\in\mathcal{K}$, in (16) are the weight
parameters. A numpy sketch of (16) is given below.
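For illustration, the cost (16) can be assembled in a few lines; the array layout here is an assumption for the sketch, not the authors' code.

```python
import numpy as np

def tracking_cost(P, p_tractive, Q):
    """C_p^i of (16): P is a (K, d) array of planned positions p_1..p_K,
    Q a length-K list of weights, p_tractive the current tractive point."""
    K = P.shape[0]
    cost = 0.5 * Q[K - 1] * np.sum((P[K - 1] - p_tractive) ** 2)  # terminal term
    for k in range(K - 1):  # smoothness terms, k = 1..K-1 in the paper's indexing
        cost += 0.5 * Q[k] * np.sum((P[k + 1] - P[k]) ** 2)
    return cost
```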
Therefore, the optimization can be reformulated as follows.
$\displaystyle\min_{\mathrm{u}^{i},x^{i},w^{ij}}\;C^{i}$ (17)
$\displaystyle\text{s.t. }(1),\,(3),\,(4),\,(6),\,(7),\,(11),\,(14).$
Based on this optimization, the proposed trajectory planning method is
summarized in Algorithm 2. The inputs are the initial position $p^{i}(t_{0})$,
the target position $p^{i}_{\text{target}}$ and the obstacles $\mathcal{O}$.
To begin with, the predetermined trajectory is initialized as
$\overline{\mathcal{P}}^{i}(t_{0})=[p^{i}(t_{0}),\ldots,p^{i}(t_{0})]$ and
$b^{i}_{\text{TO}}$ is set to False. After this initialization, in the main
loop, each robot runs its respective algorithm in parallel (Line 4). At the
beginning of each iteration, the predetermined trajectories are exchanged
among the robots (Line 5), after which the inter-robot and robot-obstacle
collision avoidance constraints are constructed (Lines 6-7). After obtaining
the current state (Line 8), the optimization (17) is formulated and solved
(Line 9). Afterwards, deadlock detection is performed based on the solution of
the optimization (Line 10) and the predetermined trajectory for the next step
is derived (Line 11). Finally, the planned trajectory is executed (Line 12). A
compact Python sketch of this loop follows Algorithm 2.
Input : $p^{i}(t_{0})$, $p^{i}_{\text{target}}$, $\mathcal{O}$
1: $\overline{\mathcal{P}}^{i}(t_{0})\leftarrow\text{InitialPredTraj}(p^{i}(t_{0}))$
2: $b^{i}_{\text{TO}}\leftarrow\textbf{False}$
3: while _not all robots at target_ do
4:   for _$i\in\mathcal{N}$ concurrently_ do
5:     $\overline{\mathcal{P}}^{j}(t)\leftarrow\text{Communicate}(\overline{\mathcal{P}}^{i}(t))$
6:     $cons^{i}\leftarrow\text{GetInterCons}(\overline{\mathcal{P}}^{i}(t),\overline{\mathcal{P}}^{j}(t))$
7:     $cons^{i}\leftarrow cons^{i}\cup\text{GetCorridor}(\mathcal{O},\overline{\mathcal{P}}^{i}(t))$
8:     $x^{i}(t)\leftarrow\text{GetCurrentState}()$
9:     $\mathcal{P}^{i}(t),w^{ij}\leftarrow\text{Optimization}(cons^{i},x^{i}(t))$
10:    $b^{i}_{\text{TO}}\leftarrow\text{DeadlockDetection}(\mathcal{P}^{i}(t),w^{ij})$
11:    $\overline{\mathcal{P}}^{i}(t+h)\leftarrow\text{GetPredTraj}(\mathcal{P}^{i}(t))$
12:    $\text{ExecuteTrajectory}(\mathcal{P}^{i}(t))$
13:  end for
14:  $t\leftarrow t+h$
15: end while
Algorithm 2 The Complete Algorithm
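As referenced above, a compact Python rendering of the per-robot loop of Algorithm 2 is given below. Every helper is a hypothetical stand-in for the routine of the same name in the pseudocode; this is a sketch of the control flow, not the released implementation.

```python
def plan_robot(i, p0, p_target, obstacles, h):
    pred_traj = initial_pred_traj(p0)  # \bar{P}^i(t_0), Line 1
    deadlock = False                   # b^i_TO, Line 2
    while not at_target(i, p_target):
        neighbor_trajs = communicate(pred_traj)           # Line 5
        cons = get_inter_cons(pred_traj, neighbor_trajs)  # Line 6
        cons |= get_corridor(obstacles, pred_traj)        # Line 7
        x = get_current_state(i)                          # Line 8
        traj, w = optimization(cons, x)                   # Line 9: solve (17)
        deadlock = deadlock_detection(traj, w)            # Line 10
        pred_traj = get_pred_traj(traj)                   # Line 11
        execute_trajectory(traj)                          # Line 12
```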
### IV-E Feasibility Guarantee
Different from most existing optimization-based methods, the proposed planner
guarantees the feasibility of the optimization problem, as established by the
following theorem.
###### Theorem 1.
If the initial positions of all robots are collision-free, the robots will
never have collisions with each other or any obstacles.
###### Proof.
At the initial time $t_{0}$, the predetermined trajectory is initialized as
$\overline{\mathcal{P}}^{i}(t_{0})=[p^{i}(t_{0}),\ldots,p^{i}(t_{0})]$.
Choosing it as the planned trajectory, i.e.,
$p^{i}_{k}(t_{0})=\overline{p}^{i}_{k}(t_{0})$, this planned trajectory
naturally satisfies all constraints of optimization (17). Thus, at time
$t_{0}$, the optimization is feasible for all robots.
Thereafter, we prove that the algorithm is recursively feasible, i.e., once
the optimization at time step $t-h$ is feasible, a feasible solution can be
found at time $t$. From Lemma 1, we know that if the planned trajectory at
$t-h$ is feasible, it must be obstacle-free, and a safe corridor can be found
in the current replanning. Afterwards, the final optimization (17) can be
formulated. Given the feasible solution at the last time step,
$u^{i}_{k-1}(t-h)$ and $x^{i}_{k}(t-h)$ for $k\in\mathcal{K}$, we can provide
a feasible solution $x^{i}_{k}(t)=x^{i}_{k+1}(t-h)$,
$u^{i}_{k}(t)=u^{i}_{k+1}(t-h)$ and
$w^{ij}(t)=\min\\{\epsilon,\;{a_{K}^{ij}}^{T}(t)p_{K}^{i}(t)-b_{K}^{ij}(t)\\}$,
where we enforce that $x^{i}_{K}(t)=x^{i}_{K+1}(t-h)=x^{i}_{K}(t-h)$ and
$u^{i}_{K}(t)=u_{e}$.
First, as the result of the optimization at time step $t-h$,
$x^{i}_{k+1}(t-h)$ and $u^{i}_{k}(t-h)$ with $k\in\tilde{\mathcal{K}}$
naturally satisfy the constraints (1), (3)-(4). In addition, since
$x^{i}_{K}(t)=x^{i}_{K+1}(t-h)=x^{i}_{K}(t-h)$ and
$u^{i}_{K-1}(t)=u^{i}_{K}(t-h)=u_{e}$ hold, $x^{i}_{K}(t)$ and
$u^{i}_{K-1}(t)$ also satisfy these constraints. In the meantime, as
$x^{i}_{K}(t)=x^{i}_{K+1}(t-h)=x^{i}_{K}(t-h)=x^{i}_{K-1}(t)$ holds, the
constraint (14) holds as well. Then, for the constraints related to MBVC-WB,
i.e., (6) and (7), the feasibility of the provided solution has been proved in
our previous work [6]. Lastly, regarding constraint (11): by properties 1) and
3) of Lemma 1, for the given feasible solution at time $t-h$, the previously
planned trajectory is obstacle-free, and hence a safe corridor can be
constructed at the current time. Then, according to property 2), the provided
solution, i.e., the predetermined trajectory, satisfies these constraints.
Thus, the constraint (11) is feasible and the optimization is feasible
recursively.
Since the initial optimization as well as the successive ones are feasible,
the included constraints are satisfied. According to property 3) of Lemma 1
and the property of MBVC-WB in [6], obstacle and inter-robot collisions are
avoided. ∎
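The shifted candidate used in the proof is easy to state in code. A numpy sketch follows, with illustrative array shapes and the $w^{ij}(t)$ update omitted.

```python
import numpy as np

def shifted_candidate(X_prev, U_prev, u_e):
    """X_prev: (K+1, n) states, U_prev: (K, m) inputs solved at time t-h.
    Returns (X, U) with x_k(t) = x_{k+1}(t-h) and the terminal state held."""
    X = np.vstack([X_prev[1:], X_prev[-1]])    # hold x_K(t) = x_K(t-h)
    U = np.vstack([U_prev[1:], u_e[None, :]])  # u_K(t) = u_e
    return X, U
```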
## V Numerical Simulations and Experiments
In this section, we validate and analyze the performance of the proposed
algorithm via numerical simulations and hardware experiments. The algorithm is
implemented on an Intel Core i9 @3.2GHz computer in Python3 and is publicly
available at https://github.com/PKU-MACDLab/IMPC-OB. We use CVXOPT [25] for
quadratic programming and trajectory optimization, and OMPL [26] for ABIT∗
path planning. Furthermore, comparisons with Ego-swarm [13], MADER [14] and
LSC [11] are carried out.
### V-A Numerical Simulations and Comparison
The main parameters of the robots are chosen as follows: the minimum
inter-agent distance $r_{\text{min}}=0.6{\rm m}$; the radius of the agents
$r_{a}=0.3{\rm m}$; the maximum acceleration $a_{\text{max}}=2{\rm m/s^{2}}$;
the maximum velocity $v_{\text{max}}=3{\rm m/s}$.
(a) LSC [11]
(b) Ours
Figure 4: The “L”-passage scenario for LSC [11] and ours.
Figure 5: The generated corridors for a given pre-planned trajectory (top:
LSC [11]; bottom: ours). Note that the corridor shown is the feasible space
for the centroid of the robot.
(a) Ego-swarm [13]
(b) MADER [14]
(c) LSC [11]
(d) Ours
Figure 6: Four planners performed in the forest scenario.
(a) Ego-swarm [13]
(b) MADER [14]
(c) LSC [11]
(d) Ours
Figure 7: Four planners performed in the “H” scenario.

TABLE I: Comparison with the state of the art. (Safety: no collision occurs.
$T_{t}[{\rm s}]$: transition time. $L_{t}[{\rm m}]$: length of transition.
$T_{c}[{\rm ms}]$: mean computation time per replanning.)

| Scenario | Method | Safety | $T_{t}$ | $L_{t}$ | $T_{c}$ |
|---|---|---|---|---|---|
| forest | Ego-swarm [13] | No | 9.2 | 105.8 | 9.6 |
| forest | MADER [14] | No | 22.3 | 111.1 | 104.0 |
| forest | LSC [11] | Yes | 22.3 | 114.34 | 53.2 |
| forest | Ours | Yes | 8.1 | 102.4 | 93.0 |
| “H” | Ego-swarm [13] | No | 7.5 | 66.6 | 10.2 |
| “H” | MADER [14] | No | 14.2 | 71.5 | 116.7 |
| “H” | LSC [11] | Yes | - | - | 62.3 |
| “H” | Ours | Yes | 9.3 | 67.7 | 86.8 |
To begin with, we provide a single-robot simulation to illustrate the proposed
safe corridor, where the environment is an “L”-passage. Fig. 4 shows the
comparative simulation results of the method proposed in [11] and our method.
Notably, our method yields a faster and smoother trajectory, in the sense that
our planner takes $4.1{\rm s}$ compared with $7.5{\rm s}$ for LSC. The
difference between the safe corridors is shown in Fig. 5. Compared with the
rectangular corridor of LSC, our corridor is a trapezoid, which provides more
feasible space for the upcoming turn.
Furthermore, comparisons with Ego-swarm, MADER and LSC are made in the
forest-transition and “H”-transition scenarios. The trajectories for these
scenarios are illustrated in Fig. 6 and Fig. 7, respectively, and the results
are shown in Table I. Regarding computation time, the proposed method is
relatively slow since it is implemented in Python3, which cannot take full
advantage of multi-core computation. In a prospective deployment, however, the
computation can be carried out on multiple processors concurrently, and the
time will decrease considerably. Ego-swarm achieves an impressive computation
time in addition to relatively smooth and fast trajectories. Unfortunately, it
cannot guarantee collision avoidance, since it adopts an unconstrained
optimization in planning. For MADER, the optimization under strict constraints
guarantees avoidance between robots. However, regarding obstacle avoidance,
MADER appears maladaptive in obstacle-dense environments, as several
collisions occur. LSC has a feasibility guarantee which ensures the safety of
the robots, but a deadlock occurs in the “H”-transition. Although a heuristic
deadlock resolution method is adopted in LSC, it is ineffective in this
scenario. Though we similarly utilize linear constraints to handle inter-agent
collisions, the extra warning band introduces an elastic interaction instead
of a hard one, based on which the adaptive right-hand rule is leveraged,
resulting in right-hand rotations in the bottleneck of the “H”. Moreover, in
terms of transition time and length, the proposed planner has a considerable
advantage, which means faster and smoother trajectories.
(a) Left: the trajectories of the swarm, where the ellipsoids indicate the
minimum inter-robot distance. Right: the distances between different pairs of
robots.
(b) Left: four Crazyflies are deployed in a polyhedron-shaped environment.
Right: when a Crazyflie goes through the narrow passage, another quadrotor
makes way to let it pass.
(c) Left: the hardware experiment of the “H”-transition. Right: the
“n”-transition.
Figure 8: The real-world experiments.
### V-B Hardware Experiments
Hardware experiments are executed on the Crazyswarm platform [27], where
multiple nano-quadrotors fly under an OptiTrack motion capture system. The
computation for all robots is done on a central computer at a frequency of
$5$ Hz to comply with the sampling time step $h=0.2$ s. For each Crazyflie, a
feedback controller [22] is adopted to track the planned trajectory.

The first experiment is shown in Fig. 8(a), where $8$ Crazyflies fly through a
$0.6{\rm m}$ cubic framework. Considering air turbulence, each Crazyflie in
the inter-robot avoidance is represented as an ellipsoid with diameter
$0.24$ m in the $x$-$y$ plane and $0.6$ m along the $z$ axis. Owing to this
ellipsoidal robot model, the inter-robot constraints are adjusted by modifying
$a^{ij}_{k}$ and $b^{ij}_{k}$ as
$a_{k}^{ij}=E\frac{E(\overline{p}_{k}^{i}-\overline{p}_{k}^{j})}{\|E(\overline{p}_{k}^{i}-\overline{p}_{k}^{j})\|_{2}},\quad
b_{k}^{ij}={a_{k}^{ij}}^{T}\frac{E(\overline{p}_{k}^{i}+\overline{p}_{k}^{j})}{2}+\frac{r_{\min}^{\prime}}{2},$
where $E={\rm diag}(1.0,1.0,\frac{0.24}{0.6})$,
$r_{\min}^{\prime}=\sqrt{r_{\text{min}}^{2}+v_{\text{max}}^{2}}$ and
$r_{\text{min}}=0.24$ m. The radius of a Crazyflie is set as $r_{a}=0.12$ m.
From the result given in Fig. 8(a), it is apparent that the Crazyflies
achieve this transition.
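The ellipsoid-adjusted plane parameters above are straightforward to compute; a numpy sketch with illustrative names follows.

```python
import numpy as np

E = np.diag([1.0, 1.0, 0.24 / 0.6])  # scaling matrix from the text

def adjusted_plane(p_i, p_j, r_min_prime):
    """a_k^{ij}, b_k^{ij} for predicted positions p_i, p_j of robots i, j."""
    d = E @ (p_i - p_j)
    a = E @ (d / np.linalg.norm(d))                       # a_k^{ij}
    b = a @ (E @ (p_i + p_j)) / 2.0 + r_min_prime / 2.0   # b_k^{ij}
    return a, b
```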
In addition, four Crazyflies moving in a cluttered environment are shown in
Fig. 8(b). Given initial positions, the targets are chosen randomly. After a
robot arrives at its target, a new one is published immediately, and this
process is repeated $5$ times. In this scenario, the feasible space consists
of the irregularly shaped passages between the polygon-shaped obstacles, whose
widths range from $0.4$ m to $0.7$ m. With the help of MBVC-WB, where the
warning band introduces an elastic interaction between the robots, a robot can
squeeze out a way between the wall and other robots, as shown in Fig. 8(b).
Such inter-robot coordination resolves the deadlock problem in this scenario.
Finally, two other experiments, illustrated in Fig. 8(c), are carried out in
the “H”-transition and “n”-transition scenarios. In these navigations, safety
is guaranteed and coordination in the narrow passages is properly achieved. In
the “H”-transition, the right-hand rotation appears as in the previous
simulation. Regarding the “n”-transition, the intersection of the two groups
of quadrotors at the top passage is the main challenge of the mission. The
proposed method resolves it: the quadrotors fly through the passage without
sacrificing speed. When a quadrotor encounters an oncoming agent, it rapidly
finds a side to avoid collision by utilizing MBVC-WB.
## VI Conclusion
This work has proposed a novel multi-robot trajectory planning method for
obstacle-dense environments, in which collision avoidance is guaranteed and
deadlocks among robots are resolved. In contrast to state-of-the-art works,
the safety and deadlock-resolution performance of the proposed method in
cluttered scenarios is ensured by theoretical proof and validated by
comprehensive simulations and hardware experiments.
## References
* [1] S.-J. Chung, A. A. Paranjape, P. Dames, S. Shen, and V. Kumar, “A survey on aerial swarm robotics,” _IEEE Transactions on Robotics_ , vol. 34, no. 4, pp. 837–855, 2018.
* [2] X. Zhou, X. Wen, Z. Wang, Y. Gao, H. Li, Q. Wang, T. Yang, H. Lu, Y. Cao, C. Xu, and F. Gao, “Swarm of micro flying robots in the wild,” _Science Robotics_ , vol. 7, no. 66, p. eabm5954, 2022.
* [3] C. E. Luis and A. P. Schoellig, “Trajectory generation for multiagent point-to-point transitions via distributed model predictive control,” _IEEE Robotics and Automation Letters_ , vol. 4, no. 2, pp. 375–382, 2019.
* [4] F. Augugliaro, A. P. Schoellig, and R. D’Andrea, “Generation of collision-free trajectories for a quadrocopter fleet: A sequential convex programming approach,” in _2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_ , 2012, pp. 1917–1922.
* [5] J. Alonso-Mora, P. Beardsley, and R. Siegwart, “Cooperative collision avoidance for nonholonomic robots,” _IEEE Transactions on Robotics_ , vol. 34, no. 2, pp. 404–420, 2018.
* [6] Y. Chen, M. Guo, and Z. Li, “Deadlock resolution and recursive feasibility in mpc-based multi-robot trajectory generation,” _arXiv preprint arXiv:2202.06071_ , 2022.
* [7] M. P. Strub and J. D. Gammell, “Advanced BIT (ABIT): Sampling-based planning with advanced graph-search techniques,” in _2020 IEEE International Conference on Robotics and Automation (ICRA)_ , 2020, pp. 130–136.
* [8] C. Toumieh and A. Lambert, “Decentralized multi-agent planning using model predictive control and time-aware safe corridors,” _IEEE Robotics and Automation Letters_ , vol. 7, no. 4, pp. 11 110–11 117, 2022.
* [9] F. Gao, L. Wang, B. Zhou, X. Zhou, J. Pan, and S. Shen, “Teach-repeat-replan: A complete and robust system for aggressive flight in complex environments,” _IEEE Transactions on Robotics_ , vol. 36, no. 5, pp. 1526–1545, 2020.
* [10] J. Park, J. Kim, I. Jang, and H. J. Kim, “Efficient multi-agent trajectory planning with feasibility guarantee using relative bernstein polynomial,” in _2020 IEEE International Conference on Robotics and Automation (ICRA)_ , 2020, pp. 434–440.
* [11] J. Park, D. Kim, G. C. Kim, D. Oh, and H. J. Kim, “Online distributed trajectory planning for quadrotor swarm with feasibility guarantee using linear safe corridor,” _IEEE Robotics and Automation Letters_ , vol. 7, no. 2, pp. 4869–4876, 2022.
* [12] J. Li, M. Ran, and L. Xie, “Efficient trajectory planning for multiple non-holonomic mobile robots via prioritized trajectory optimization,” _IEEE Robotics and Automation Letters_ , vol. 6, no. 2, pp. 405–412, 2021.
* [13] X. Zhou, J. Zhu, H. Zhou, C. Xu, and F. Gao, “Ego-swarm: A fully autonomous and decentralized quadrotor swarm system in cluttered environments,” in _2021 IEEE International Conference on Robotics and Automation (ICRA)_ , 2021, pp. 4101–4107.
* [14] J. Tordesillas and J. P. How, “Mader: Trajectory planner in multiagent and dynamic environments,” _IEEE Transactions on Robotics_ , pp. 1–14, 2021.
* [15] M. Kamel, J. Alonso-Mora, R. Siegwart, and J. Nieto, “Robust collision avoidance for multiple micro aerial vehicles using nonlinear model predictive control,” in _2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_ , 2017, pp. 236–243.
* [16] D. Zhou, Z. Wang, S. Bandyopadhyay, and M. Schwager, “Fast, on-line collision avoidance for dynamic vehicles using buffered Voronoi cells,” _IEEE Robotics and Automation Letters_ , vol. 2, no. 2, pp. 1047–1054, 2017.
* [17] L. Wang, A. D. Ames, and M. Egerstedt, “Safety barrier certificates for collisions-free multirobot systems,” _IEEE Transactions on Robotics_ , vol. 33, no. 3, pp. 661–674, 2017.
* [18] R. Deits and R. Tedrake, _Computing Large Convex Regions of Obstacle-Free Space Through Semidefinite Programming_. Springer International Publishing, 2015, pp. 109–124.
* [19] S. Liu, M. Watterson, K. Mohta, K. Sun, S. Bhattacharya, C. J. Taylor, and V. Kumar, “Planning dynamically feasible trajectories for quadrotors using safe flight corridors in 3-D complex environments,” _IEEE Robotics and Automation Letters_ , vol. 2, no. 3, pp. 1688–1695, 2017.
* [20] B. Şenbaşlar, W. Hönig, and N. Ayanian, “Rlss: Real-time multi-robot trajectory replanning using linear spatial separations,” _arXiv preprint arXiv:2103.07588_ , 2021.
* [21] W. Hönig, J. A. Preiss, T. K. S. Kumar, G. S. Sukhatme, and N. Ayanian, “Trajectory planning for quadrotor swarms,” _IEEE Transactions on Robotics_ , vol. 34, no. 4, pp. 856–869, 2018.
* [22] D. Mellinger and V. Kumar, “Minimum snap trajectory generation and control for quadrotors,” in _2011 IEEE International Conference on Robotics and Automation (ICRA)_ , 2011, pp. 2520–2525.
* [23] X. Ma, Z. Jiao, Z. Wang, and D. Panagou, “Decentralized prioritized motion planning for multiple autonomous uavs in 3d polygonal obstacle environments,” in _2016 International Conference on Unmanned Aircraft Systems (ICUAS)_ , 2016, pp. 292–300.
* [24] S. Boyd, S. P. Boyd, and L. Vandenberghe, _Convex optimization_. Cambridge university press, 2004.
* [25] M. S. Andersen, J. Dahl, and L. Vandenberghe, “CVXOPT,” http://cvxopt.org/.
* [26] I. A. Şucan, M. Moll, and L. E. Kavraki, “The Open Motion Planning Library,” _IEEE Robotics & Automation Magazine_, vol. 19, no. 4, pp. 72–82, December 2012, https://ompl.kavrakilab.org.
* [27] J. A. Preiss, W. Hönig, G. S. Sukhatme, and N. Ayanian, “Crazyswarm: A large nano-quadcopter swarm,” in _2017 IEEE International Conference on Robotics and Automation (ICRA)_ , 2017, pp. 3299–3304.
# Integration of vector fields on cell complexes and Morse theory
Takeo Nishinou Department of Mathematics, Rikkyo University, Toshima, Tokyo,
Japan<EMAIL_ADDRESS>
###### Abstract.
In this paper, we investigate vector fields on polyhedral complexes and their
associated trajectories. We study vector fields which are analogues of the
gradient vector field of a function in the smooth case. Our goal is to define
a nice theory of trajectories of such vector fields, so that the set of
trajectories captures the topology of the polyhedral complex, as in classical
Morse theory.
Since we do not assume the polyhedral complex to be a manifold, the definition
of vector fields on it is very different from the smooth case. Nevertheless,
we will show that there exist nice classes of functions and metrics which give
gradient vector fields with desired properties. Our construction relies on
Forman’s discrete Morse theory. In particular, the class of functions we use
is an improvement of Forman’s discrete Morse functions. A notable feature of
our theory is that our gradient vector fields are defined purely from
functions and metrics as in the smooth case, contrary to the case of discrete
Morse theory where we need the data of dimension of cells. This allows us to
implement several useful constructions which were not available in the
discrete case.
## 1\. Introduction
In this paper, we consider vector fields on cell complexes. Our goal is to
give a reasonable theory of integration of such vector fields. In other words,
we would like to give a definition of trajectories of these vector fields with
nice properties. As a measure of such ‘nice properties’, we refer to Morse
theory [4]. Namely, if we can calculate the homology of cell complexes in a
simple way from the data of trajectories of a vector field associated with a
suitable function, it may be reasonable to claim that the theory of
integrating vector fields is nice.
Since we deal with cell complexes which are not necessarily manifolds, the
definition of vector fields must differ from the usual one. In particular, to
capture the topological information of a cell complex, it is necessary to
allow multivaluedness at points which have no regular neighborhood. On the
other hand, if we allow arbitrary objects with such multivaluedness, we will
have to deal with extremely complicated objects exhibiting pathological
behavior. Therefore, it is desirable that vector fields be as simple as
possible on each cell.
From this point of view, we consider gradient vector fields of piecewise
linear functions. However, it is easy to see that if one chooses the functions
arbitrarily, it would be impossible to deduce any meaningful result, even if
one chooses them generically. Different choices of functions will give
different results, and there would be no reasonable control. Therefore, for
the construction of a theory of integration of gradient vector fields, it is
essential to introduce a nice class of functions.
There are discrete, or piecewise linear, analogues of Morse theory, see for
example [1, 2, 6]. We adopt Forman’s _discrete Morse functions_ [2], see
Definition 2.4, to solve the problem. Forman used such functions to develop
_discrete Morse theory_ , a nice combinatorial analogue of usual Morse theory
on smooth manifolds. However, if we allow arbitrary discrete Morse functions,
we will still have difficulties. We need to choose a nice class of discrete
Morse functions, which we call _tame_ , to have a good control of gradient
trajectories, see Definition 3.1.
On the other hand, to consider the gradient vector field of a function, we
also need a metric. It turns out that the choice of a suitable class of
metrics is also essential. We choose _sharp piecewise affine metrics_ on the
base space, see Definition 4.3. This is also necessary to guarantee nice
behavior of trajectories of vector fields. On the other hand, on any finite
cell complex, plenty of tame discrete Morse functions and sharp piecewise
affine metrics exist, so these choices do not give a restriction to the
theory.
Using a tame discrete Morse function and a sharp piecewise affine metric, we
can define the gradient vector field and its trajectories, see Definitions 5.5
and 7.3. They are the analogues of the corresponding objects in the smooth case.
However, the nature of gradient vector fields defined here is quite different
from those in the smooth case. For example, even when the base space is a
manifold, several flows can go into (or emanate from) a point, which is not
necessarily a critical point, see, for example, Figure 1. Nevertheless, the
total behavior of the flows is good enough and we can naturally define a
complex from these flows which computes the homology of the base space, see
Section 8. We will prove this by comparing it to discrete Morse theory, see
Theorem 9.1.
As we mentioned at the beginning, our study is an attempt to construct a
theory of vector fields on cell complexes in a natural and tractable way. This
is achieved by geometrically realizing discrete Morse theory. One of the nice
features of our construction is that the gradient vector field of a function
is defined purely from the information of the function and the metric, as in
the smooth case. This is different from the case of discrete Morse theory,
where the definition of gradient paths requires information other than the
function and the metric, namely the dimension of the cell.
This allows us to give another useful construction which was not available in
discrete Morse theory. Namely, when the base space is a closed manifold, we
can consider the dual theory. In discrete Morse theory, the dual theory is
defined on the dual cell complex, and one cannot directly compare the gradient
vector fields of the original theory with those of the dual theory. On the
other hand, in our case, the gradient vector field of the dual theory is
obtained simply by reversing the directions, as in the smooth case. This
allows us to construct stable and unstable complexes. In fact, we can
introduce stable and unstable complexes even when the base space is not a
manifold. In such a case, the reversed vector field is not necessarily related
to a discrete Morse theory, since the dual complex of a polyhedral complex may
not make sense when the base space is not a manifold. Nevertheless, the vector
field still has nice properties, see Section 10.
## 2\. Review of discrete Morse theory
Our purpose is to study the behavior of gradient vector fields associated with
piecewise linear functions. To make sense of such vector fields, the base
space needs to have a piecewise linear structure. Simplicial complexes or
triangulated manifolds naturally have such structures. We will work with
affine complexes which we now introduce.
###### Definition 2.1.
A _convex polytope_ $P$ is the convex hull of finitely many points in the
vector space $\mathbb{R}^{n}$ for some $n$. The boundary $\partial P$ is
naturally a union $\cup P_{i}$ of convex polytopes such that for any $i\neq
j$, if $P_{i}\cap P_{j}$ is nonempty, then $P_{i}\cap P_{j}=P_{k}$ for some
$k$. Each $P_{i}$ is called a _face_ of $P$. A face of codimension one in $P$
is called a _facet_. Note that any polytope has a natural affine linear
structure; that is, the notion of affine linear functions on a polytope makes
sense.
###### Definition 2.2.
An _affine complex_ $X$ is a finite CW complex such that each cell has a
structure of a convex polytope in the sense of Definition 2.1 compatible with
the gluing. Here, a cell always means a closed cell in this paper.
Compatibility means that if $X_{j}$ is a face of some cell $X_{i}$ of $X$, the
affine linear structure on $X_{j}$ induced by its structure as a convex
polytope is isomorphic to the affine linear structure induced by that of
$X_{i}$. Note that no self-intersection of a cell is allowed.
In particular, an affine complex need not be a topological manifold, and it
can have boundary components.
###### Definition 2.3.
The _dimension_ of an affine complex $X$ is the maximum of the dimension of
the cells in $X$.
Let $X$ be an affine complex. If $\alpha$ is an $i$-dimensional cell of $X$,
we write it as $\alpha^{(i)}$ when we want to emphasize its dimension. We
often call it an $i$-cell. If $\alpha^{(i)}$ is a face of $\beta^{(j)}$, we
write $\alpha^{(i)}<\beta^{(j)}$. In this case, we say that $\alpha^{(i)}$ is
adjacent to $\beta^{(j)}$, and also that $\beta^{(j)}$ is adjacent to
$\alpha^{(i)}$.
As we mentioned in the introduction, we study vector fields modeled on the
gradient vector fields of piecewise linear functions. To introduce a nice
class of functions which are compatible with the piecewise linearity, we use
discrete Morse functions in the sense of Forman [2], instead of smooth
functions in usual Morse theory. Let us recall the definition.
###### Definition 2.4.
([2, Definition 2.1]) A real valued function
$F\colon\\{\rm{cells\;of\;X}\\}\to\mathbb{R}$
is a _discrete Morse function_ if for every cell $\alpha^{(i)}$ of $X$, the
inequalities
(1) $\\#\\{\beta^{(i+1)}>\alpha^{(i)}\;|\;F(\beta^{(i+1)})\leq
F(\alpha^{(i)})\\}\leq 1$
and
(2) $\\#\\{\gamma^{(i-1)}<\alpha^{(i)}\;|\;F(\gamma^{(i-1)})\geq
F(\alpha^{(i)})\\}\leq 1$
hold.
Roughly speaking, this definition says that a discrete Morse function tends to
have larger values on higher-dimensional cells. For example, the function
$F_{triv}$ defined by
$F_{triv}(\alpha^{(i)})=i$
for any $i$-dimensional cell $\alpha^{(i)}$ is a discrete Morse function, for
which the left hand sides of (1) and (2) are both zero. The condition of the
definition allows at most one exception for each cell. The cells without such
an exception are called critical, as in the following definition.
###### Definition 2.5.
([2, Definition 2.2]) Consider an affine complex and a discrete Morse function
$F$ on it. A cell $\alpha^{(i)}$ is _critical_ if
(3) $\\#\\{\beta^{(i+1)}>\alpha^{(i)}\;|\;F(\beta^{(i+1)})\leq
F(\alpha^{(i)})\\}=0$
and
(4) $\\#\\{\gamma^{(i-1)}<\alpha^{(i)}\;|\;F(\gamma^{(i-1)})\geq
F(\alpha^{(i)})\\}=0$
hold.
For example, when we take the discrete Morse function $F_{triv}$ above, all
the cells are critical.
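For a simplicial complex, conditions (1)-(4) are mechanical to check. The following Python sketch, which is only an illustration and not part of [2], encodes each cell as a frozenset of vertices and tests whether a function is a discrete Morse function and which cells are critical.

```python
def up_down(cells, a):
    """Cofaces of dimension one more and faces of dimension one less."""
    up = [b for b in cells if len(b) == len(a) + 1 and a < b]
    down = [g for g in cells if len(g) == len(a) - 1 and g < a]
    return up, down

def is_discrete_morse(cells, F):
    """Check conditions (1) and (2) for F: dict mapping cells to reals."""
    for a in cells:
        up, down = up_down(cells, a)
        if sum(F[b] <= F[a] for b in up) > 1:
            return False
        if sum(F[g] >= F[a] for g in down) > 1:
            return False
    return True

def critical_cells(cells, F):
    """Cells satisfying (3) and (4)."""
    return [a for a in cells
            if not any(F[b] <= F[a] for b in up_down(cells, a)[0])
            and not any(F[g] >= F[a] for g in up_down(cells, a)[1])]
```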
We also need the definition of gradient flows in discrete Morse theory, which
is used to define the discrete Morse complex. We need it because later we will
compare the discrete Morse complex with a complex defined from piecewise
linear gradient flows, see Section 9.
Recall that there is a chain complex naturally associated with a CW complex.
Let $C_{i}(X,\mathbb{Z})$ denote the free abelian group generated by
$i$-dimensional cells of $X$, where on each cell an orientation is chosen. In
particular, $-\sigma\in C_{i}(X,\mathbb{Z})$ represents the same cell $\sigma$
with the opposite orientation. Let $\partial$ denote the boundary operator
$\partial:C_{i+1}(X,\mathbb{Z})\to C_{i}(X,\mathbb{Z}).$
For example, in the simplicial case, it is given by
$\partial\langle a_{0},\dots,a_{i+1}\rangle=\sum_{l=0}^{i+1}(-1)^{l}\langle
a_{0},\dots,\check{a}_{l},\dots,a_{i+1}\rangle,$
where $\check{a}_{l}$ means that $a_{l}$ is skipped. In general, we have
$\partial\beta=\sum_{l}sgn(\beta,\alpha_{l})\alpha_{l},$
where the sum runs over the set of facets of $\beta$, and
$sgn(\beta,\alpha_{l})\in\\{\pm 1\\}$ is determined as follows. Namely, given
an oriented polytope $\beta^{(i+1)}$, the orientation is represented by a non-
zero element $\tau_{i+1,x}\in\wedge^{i+1}T_{x}\beta^{(i+1)}$ at any point $x$
in the interior of $\beta^{(i+1)}$. This determines a constant section
$\tau_{i+1}$ of $\wedge^{i+1}T\beta^{(i+1)}$. On a facet $\alpha^{(i)}$ of
$\beta^{(i+1)}$, there is a natural orientation represented by an element
$\eta_{i}\in\wedge^{i}T_{y}\alpha^{(i)}$ at any point $y$ in the interior of
$\alpha^{(i)}$, determined by the condition
$n_{y}\wedge\eta_{i}=\tau_{i+1,y}$, where $n_{y}$ is an outward normal vector
of $\beta^{(i+1)}$ at $y$. Then,
$sgn(\beta,\alpha_{l})=\begin{cases}1\;\;\text{if the orientation on
$\alpha_{l}$ induced from $\beta$ equals to the fixed orientation on
$\alpha_{l}$},\\\ -1\;\;{\rm otherwise}.\end{cases}$
Now, introduce an inner product $\langle\,,\,\rangle$ on $C_{*}(X,\mathbb{Z})$
by requiring the set of cells of $X$ to be an orthonormal basis.
###### Definition 2.6.
([2, Definition 6.1],) A _gradient vector field_ $V$ on an affine complex
equipped with a discrete Morse function $F$ and an orientation for each cell
is defined as follows. Let $\alpha^{(i)}$ be an oriented $i$-cell. If there is
an $(i+1)$-cell $\beta^{(i+1)}$ satisfying $\beta^{(i+1)}>\alpha^{(i)}$ and
$F(\beta^{(i+1)})\leq F(\alpha^{(i)})$, we set
$V(\alpha^{(i)})=-\langle\partial\beta^{(i+1)},\alpha^{(i)}\rangle\beta^{(i+1)}.$
If there is no such $\beta^{(i+1)}$, set
$V(\alpha^{(i)})=0.$
For each $i$, we extend $V$ linearly to a map
$V:C_{i}(X,\mathbb{Z})\to C_{i+1}(X,\mathbb{Z}).$
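Continuing the simplicial sketch above, the discrete gradient of Definition 2.6 can be computed as follows; `bdry_sign(beta, alpha)` is an assumed helper returning $\langle\partial\beta,\alpha\rangle\in\\{\pm 1\\}$ for a facet $\alpha$ of $\beta$.

```python
def discrete_gradient(cells, F, bdry_sign):
    """V(alpha) = (-<d beta, alpha>, beta) for the unique coface beta with
    F(beta) <= F(alpha), or None if no such beta exists (Definition 2.6)."""
    V = {}
    for a in cells:
        V[a] = None
        for b in cells:
            if len(b) == len(a) + 1 and a < b and F[b] <= F[a]:
                V[a] = (-bdry_sign(b, a), b)
                break  # at most one such beta, by Definition 2.4
    return V
```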
The map $V$ is a discrete analogue of the gradient vector field of a Morse
function on a smooth manifold. The next is an analogue of the integral of such
a vector field.
###### Definition 2.7.
([2, Definition 8.4]) Let $X$ be an affine complex equipped with a discrete
Morse function $F$. A $gradient\,\,path$ of dimension $i$ is a sequence
$\gamma$ of $i$-cells of $X$
$\gamma=\alpha_{0},\alpha_{1},\alpha_{2},\dots,\alpha_{r}$
such that for every $l=0,\dots,r-1$,
(i) if $V(\alpha_{l})=0$, $\alpha_{l+1}=\alpha_{l}$.
(ii) if $V(\alpha_{l})\neq 0$, $\alpha_{l+1}<V(\alpha_{l})$ and
$\alpha_{l+1}\neq\alpha_{l}$.
We call $\gamma$ a gradient path of $length$ $r$.
### 2.1. Sign of gradient paths and Morse differential
Let us introduce the complex of Forman
$(\ast)\;\;\;\;\mathcal{M}:0\to\mathcal{M}_{n}\xrightarrow{\tilde{\partial}}\mathcal{M}_{n-1}\xrightarrow{\tilde{\partial}}\cdots\xrightarrow{\tilde{\partial}}\mathcal{M}_{0}\to
0,$
in terms of gradient paths, as in [2, Section 8]. Here, $\mathcal{M}_{i}$ is
the free abelian group generated by critical $i$-dimensional cells.
For this purpose, we need a remark on the orientation of the cells contained
in a gradient path, as in [2, page 125]. Suppose $\alpha$ and $\tilde{\alpha}$
are distinct $i$-cells and $\beta$ is an $(i+1)$-cell with $\alpha<\beta$ and
$\tilde{\alpha}<\beta$. Then, an orientation on $\alpha$ induces an
orientation on $\beta$ by the condition
$\langle\partial\beta,\alpha\rangle=-1$. Given this orientation on $\beta$, we
choose the orientation on $\tilde{\alpha}$ so that
$\langle\partial\beta,\tilde{\alpha}\rangle=1$. Equivalently, fixing an
orientation on $\alpha$ and $\beta$, an orientation is induced on
$\tilde{\alpha}$ by the condition
$\langle\partial\beta,\alpha\rangle\langle\partial\beta,\tilde{\alpha}\rangle=-1.$
On the other hand, if $\alpha=\tilde{\alpha}$, we just take the same
orientation on $\tilde{\alpha}$ as $\alpha$. Thus, if
$\gamma=\alpha_{0},\alpha_{1},\dots,\alpha_{r}$ is a gradient path, an
orientation on $\alpha_{0}$ induces an orientation on each $\alpha_{i}$, and,
in particular, on $\alpha_{r}$. Recall that we have fixed an orientation for
each cell of $X$. Write $m(\gamma)=1$ if the fixed orientation on $\alpha_{0}$
induces the fixed orientation on $\alpha_{r}$, and $m(\gamma)=-1$ otherwise.
For $i$-cells $\alpha$ and $\tilde{\alpha}$, let
$\Gamma_{r}(\alpha,\tilde{\alpha})$ denote the set of all gradient paths from
$\alpha$ to $\tilde{\alpha}$ of length $r$. If $\beta^{(i+1)}$ and
$\alpha^{(i)}$ are critical, the differential $\tilde{\partial}$ of the
sequence ($\ast$) is given by
(5)
$\langle\tilde{\partial}\beta^{(i+1)},\alpha^{(i)}\rangle=\sum_{\tilde{\alpha}^{(i)}<\beta^{(i+1)}}\langle\partial\beta^{(i+1)},\tilde{\alpha}^{(i)}\rangle\sum_{\gamma\in\Gamma_{N}(\tilde{\alpha}^{(i)},\alpha^{(i)})}m(\gamma)$
for any $N$ large enough. In the paper [2], Forman proved the next theorem
(for any finite CW complex which is not necessarily an affine complex).
###### Theorem 2.8.
(Forman [2], Theorems 8.2 and 8.10) $\tilde{\partial}$ is a differential, i.e.,
$\tilde{\partial}^{2}=0$. The homology of the complex $(\ast)$ is precisely
the singular homology of $X$. ∎
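As a concrete illustration of (5), the coefficient $\langle\tilde{\partial}\beta,\alpha\rangle$ can be computed by enumerating gradient paths. The sketch below continues the conventions of the earlier snippets; it assumes $F$ is a genuine discrete Morse function, so gradient paths strictly decrease $F$ and the recursion terminates.

```python
def morse_boundary(beta, cells, V, bdry_sign, critical):
    """Coefficients of the Morse differential of beta over critical cells,
    following (5); `critical` is the set of critical cells."""
    coeffs = {}

    def walk(alpha, sign):
        if V[alpha] is None:          # path can no longer move
            if alpha in critical:     # record the accumulated sign m(gamma)
                coeffs[alpha] = coeffs.get(alpha, 0) + sign
            return
        s, beta2 = V[alpha]           # s = -<d beta2, alpha>
        for alpha2 in (c for c in cells
                       if len(c) == len(alpha) and c < beta2 and c != alpha):
            # step sign -<d beta2, alpha><d beta2, alpha2> = s * <d beta2, alpha2>
            walk(alpha2, sign * s * bdry_sign(beta2, alpha2))

    for tilde in (c for c in cells if len(c) == len(beta) - 1 and c < beta):
        walk(tilde, bdry_sign(beta, tilde))   # initial factor <d beta, tilde>
    return coeffs
```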
## 3\. Tame discrete Morse functions
Our purpose is to define and study integration of vector fields on cell
complexes using discrete Morse theory as a model. However, if one uses
arbitrary discrete Morse functions, one will soon realize that the
corresponding piecewise linear theory may be ill-behaved. To remedy this, we
need several ideas, as we explained in the introduction. The first of these
ideas is to choose a nice class of discrete Morse functions, which we call
tame.
###### Definition 3.1.
We call a discrete Morse function $F$ _generic_ if for any pair of cells
$\alpha,\beta$ satisfying $\alpha<\beta$, we have $F(\alpha)\neq F(\beta)$. We
call $F$ _tame_ if, whenever $\alpha^{(i)}<\beta^{(j)}$ and $j\geq i+2$, the
inequality $F(\beta^{(j)})>F(\alpha^{(i)})$ holds.
The tameness of discrete Morse functions does not impose a strong restriction.
In fact, given a generic discrete Morse function, we can modify it to a tame
one without changing discrete Morse theoretic properties.
###### Lemma 3.2.
Let $F$ be a generic discrete Morse function on an affine complex $X$. Then,
there is a tame generic discrete Morse function $\tilde{F}$ such that there is
a natural identification between the associated complexes $\mathcal{M}$. That
is, the set of critical cells and the differential on them are the same.
###### Proof.
We prove this by induction on the number of pairs of cells
$(\alpha^{(i)},\beta^{(j)})$ in $X$ which violate the condition for the
tameness. That is, we consider those pairs of cells which satisfy
$\alpha^{(i)}<\beta^{(j)},\;\;j\geq i+2,\;\;F(\alpha^{(i)})>F(\beta^{(j)}).$
If this number is zero, $F$ itself is tame. Let $k$ be a positive integer.
Assume that for any $F$ with $k^{\prime}$ pairs of cells $(\alpha,\beta)$,
$k^{\prime}\in\\{1,2,\dots,k\\}$, violating the tameness condition, there is a
tame $\tilde{F}$ which gives the same complex as the one associated with $F$.
We will prove the assertion for $F$ with $(k+1)$ pairs of cells
$(\alpha,\beta)$ violating the tameness condition.
Let $\mathcal{P}$ be the set of such pairs of cells. We introduce a partial
order to $\mathcal{P}$ by the rule that
$(\alpha^{(i)},\beta^{(j)})>((\alpha^{\prime})^{(i^{\prime})},(\beta^{\prime})^{(j^{\prime})})$
if and only if $j>j^{\prime}$. Let $(\alpha^{(i)},\beta^{(j)})$ be a maximal
element with respect to this order. Note that if $\alpha^{(i)}<\beta^{(j)}$
and $j\geq i+2$, there are at least two $(i+1)$-cells adjacent to both
$\alpha^{(i)}$ and $\beta^{(j)}$. Then, by definition of discrete Morse
functions, there is at least one $(i+1)$-cell $\gamma^{(i+1)}$ such that
$\alpha^{(i)}<\gamma^{(i+1)}<\beta^{(j)}$
and
$F(\gamma^{(i+1)})>F(\alpha^{(i)})$
hold. Similarly, if we have $j\geq i+3$, there is at least one $(i+2)$-cell
$\delta^{(i+2)}$ such that
$\gamma^{(i+1)}<\delta^{(i+2)}<\beta^{(j)}$
and
$F(\delta^{(i+2)})>F(\gamma^{(i+1)})$
hold. In particular, we have $\alpha^{(i)}<\delta^{(i+2)}$ and
$F(\delta^{(i+2)})>F(\alpha^{(i)})$. Repeating this, we see that there is at
least one $(j-1)$-cell $\varepsilon^{(j-1)}$ which satisfies
$\alpha^{(i)}<\varepsilon^{(j-1)}<\beta^{(j)}$
and
$F(\varepsilon^{(j-1)})>F(\alpha^{(i)})>F(\beta^{(j)}).$
In particular, every $(j-1)$-cell other than $\varepsilon^{(j-1)}$ adjacent to
$\beta^{(j)}$ has a smaller value of $F$ than $\beta^{(j)}$, by the definition
of discrete Morse functions.
On the other hand, if $\eta^{(k)}>\beta^{(j)}$ (so that we also have
$\eta^{(k)}>\varepsilon^{(j-1)}$), by the maximality of the pair
$(\alpha^{(i)},\beta^{(j)})$ with respect to the given order on $\mathcal{P}$,
we have
(6) $F(\eta^{(k)})>F(\varepsilon^{(j-1)}).$
Then, modify the value $F(\beta^{(j)})$ to $\bar{F}(\beta^{(j)})$, where
$\bar{F}(\beta^{(j)})$ is a generic real number in the open interval
$(F(\alpha^{(i)}),F(\varepsilon^{(j-1)})).$ The values of $\bar{F}$ at the
other cells are the same as those of $F$. Clearly, $\bar{F}$ is still a
generic discrete Morse function. By the relation (6), this modification does
not produce a new pair of cells which violates the tameness condition. Also,
it does not change the associated complex. Then, by applying the induction
hypothesis to $\bar{F}$, we obtain a required function $\tilde{F}$.∎
## 4\. Piecewise linear functions associated with discrete Morse functions
and metrics on affine complexes
Now, we construct a piecewise linear function $f$ from a given discrete Morse
function $F$.
###### Definition 4.1.
Let $X$ be an affine complex and $F$ be a discrete Morse function on it.
1\. Take the barycentric subdivision $X_{1}$ of $X$. By definition, the
vertices of it correspond to the cells of $X$.
2\. To a vertex of $X_{1}$, give the value of $F$ at the corresponding cell.
3\. Extend it affine linearly to each maximal cell of $X_{1}$. This is
possible because $X_{1}$ is a simplicial complex.
We call a vertex of $X_{1}$ corresponding to an $i$-dimensional cell of $X$ an
$i$-dimensional vertex. Also, we call an edge of $X_{1}$ connecting an
$i$-dimensional vertex and a $j$-dimensional vertex an $(i,j)$-edge.
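Concretely, evaluating the resulting piecewise linear $f$ on a simplex of $X_{1}$ is just an affine combination of the $F$-values at its vertices; a small numpy sketch under these conventions (names illustrative):

```python
import numpy as np

def pl_value(simplex_vertices, vertex_values, lam):
    """simplex_vertices: (k+1, n) coordinates of a simplex of X_1;
    vertex_values: F at the cells of X corresponding to those vertices;
    lam: barycentric coordinates (nonnegative, summing to 1)."""
    point = lam @ np.asarray(simplex_vertices, dtype=float)
    value = lam @ np.asarray(vertex_values, dtype=float)  # affine extension of F
    return point, value
```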
One of the features of discrete Morse theory is that although it studies an
analogue of gradient flows, it does not require a metric on the base space.
However, since we consider a geometric flow on $X$ (contrary to the
combinatorial one, see Definition 2.7), we need a metric as in usual Morse
theory.
###### Definition 4.2.
Let $Y$ be a simplicial complex. A _piecewise affine metric_ on $Y$ is defined
by an affine metric on each maximal cell (i.e, a cell which is not contained
in the boundary of another cell) which is compatible in the following sense.
Namely, on any lower dimensional cell, the metrics induced by those on the
adjacent maximal cells are the same. Here, an affine metric on a polytope in
$\mathbb{R}^{n}$ is the metric obtained by the restriction of a constant
valued positive definite symmetric 2-tensor field
$g\in\Gamma(\mathbb{R}^{n},Sym^{2}T^{*}\mathbb{R}^{n})$.
We cannot use every such metric, since otherwise the gradient flow may exhibit
undesirable behavior. This problem can be resolved by imposing the following
simple condition on the metric.
###### Definition 4.3.
We call a piecewise affine metric on a simplicial complex _sharp_ if on each
simplex $\sigma$, the following condition is satisfied. Namely, take any
vertex $a$ of $\sigma$, and a two dimensional plane $L$ containing $a$. Assume
the intersection $L\cap\sigma$ is a triangle, which we write by $\triangle
abc$. Then, it is acute-angled, that is, the interior angle at each vertex is
less than $\frac{\pi}{2}$ with respect to the induced metric.
A sharp piecewise affine metric always exists on a simplicial complex, as the
following claim shows.
###### Proposition 4.4.
Given a simplicial complex, a piecewise affine metric for which every simplex
is equilateral with a given edge length is sharp.
###### Proof.
It suffices to prove that the metric on the standard simplex with vertices at
$a_{0}=(1,0,\dots,0),a_{1}=(0,1,0,\dots,0),\dots,a_{n}=(0,\dots,0,1)$ in
$\mathbb{R}^{n+1}$ induced from the Euclidean metric on $\mathbb{R}^{n+1}$ is
sharp. By symmetry, we can assume that the vertex $a$ in Definition 4.3 is
$a_{0}$. Then, using the notation in Definition 4.3, we can assume that the
vertices $b$ and $c$ have the coordinates of the form
$(0,x_{1},\dots,x_{k},0,\dots,0),\;\;x_{1},\dots,x_{k}\geq 0,$
and
$(0,\dots,0,y_{k+1},\dots,y_{n}),\;\;y_{k+1},\dots,y_{n}\geq 0,$
respectively. Here, the coordinates satisfy $\sum_{i=1}^{k}x_{i}=1$ and
$\sum_{i=k+1}^{n}y_{i}=1$.
By the cosine formula, we have
$\cos\angle
acb=\frac{\overline{ac}^{2}+\overline{bc}^{2}-\overline{ab}^{2}}{2\overline{ac}\overline{bc}}.$
Since we have
$\overline{ac}^{2}+\overline{bc}^{2}-\overline{ab}^{2}=2\sum_{i=k+1}^{n}y_{i}^{2}>0,$
we have $\cos\angle acb>0$ and $\angle acb<\frac{\pi}{2}$. We can apply the
same argument to the other angles. This finishes the proof.∎
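A quick numerical spot-check of this computation (a sketch; the particular points are arbitrary choices of the form used in the proof):

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0, 0.0])   # a_0 in the standard simplex
b = np.array([0.0, 0.3, 0.7, 0.0])   # x-coordinates sum to 1
c = np.array([0.0, 0.0, 0.0, 1.0])   # y-coordinates sum to 1
u, v = a - c, b - c
cos_acb = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
assert cos_acb > 0  # hence the angle at c is less than pi/2
```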
## 5\. Piecewise linear gradient vector fields
In Sections 5 to 8, we will define piecewise linear Morse theory on affine
complexes. First, we will define the piecewise linear gradient flow. Let
$X,X_{1},F$ and $f$ be as in the previous section. We write $n=\dim X$. We fix
a piecewise affine metric on $X_{1}$. From now on, we assume the following
unless otherwise noted.
###### Assumption 5.1.
The metric is sharp and the discrete Morse function $F$ is generic and tame in
the sense of Definition 3.1.
By Lemma 3.2 and Proposition 4.4, this assumption does not give a restriction
to the applicability of the theory.
Let $\sigma$ be a cell in $X_{1}$. If $\sigma$ is a polytope in
$\mathbb{R}^{n}$ and $L$ is the minimal affine subspace of $\mathbb{R}^{n}$
containing $\sigma$, we define
$T\sigma=TL|_{\sigma}.$
The restriction of the function $f$ and the metric to $\sigma$ induces a
gradient vector field on it. By the piecewise linearity, this is a constant
vector field. Under the above assumption, we have the following.
###### Lemma 5.2.
Under Assumption 5.1, the direction of the gradient vector on each $\sigma$ is
not contained in any hyperplane parallel to a facet of $\sigma$.∎
###### Definition 5.3.
Let $\sigma$ be a cell in $X_{1}$. The constant vector field above gives the
_gradient flow_ on $\sigma$, which is the family of maps
$\phi_{t}\colon\sigma\to\sigma,\;\;t\in\mathbb{R}_{\geq 0}.$
Note that since $\sigma$ has boundary, $\phi_{t}$ is not a diffeomorphism.
Namely, once a point reaches the boundary by the flow at some $t>0$, it stays
there afterwards.
###### Definition 5.4.
Let $\sigma$ be a cell in $X_{1}$ and $\tau$ be its facet. Let $x$ be a point
in the interior of $\tau$. By Lemma 5.2, for sufficiently small $t$, either
1. (i)
$\phi_{t}(x)$ is in the interior of $\sigma$, or
2. (ii)
$\phi_{t}(x)=x$,
holds. In the case (i), we say that the facet $\tau$ has an _out-flow_ , and
in the case (ii), we say that the facet $\tau$ has an _in-flow_.
Let $x$ be a point in $X_{1}$. Let $\sigma_{0}$ be the minimal cell of $X_{1}$
which contains $x$ in its interior. We write its dimension by
$d_{0}=\dim\sigma_{0}$. Let $\\{\sigma_{j,1},\dots,\sigma_{j,a_{j}}\\}$ be the
set of $(d_{0}+j)$-dimensional cells which contain $\sigma_{0}$, here $1\leq
j\leq n-d_{0}$. For each $\sigma_{j,l}$, there is a gradient vector field
determined by the restriction of $f$ and the metric. In this paper, we adopt
the convention that the gradient vector points in the direction in which the
function decreases.
We write the value of it at $x$ as $\tau_{x;j,l}\in T_{x}\sigma_{j,l}$. Here,
we write $\sigma_{0}=\sigma_{0,1}$. The inclusions of the cells induce natural
inclusions of tangent spaces. Note that under the above assumption, the images
of gradient vectors under these inclusions are all different.
Figure 1. Examples of gradient vector fields on cell complexes. Each point in
the interior of the edge $PQ$ has three directions.
###### Definition 5.5.
The set of _gradient vectors_ of $f$ at $x$ is the set of tangent vectors
$\\{\tau_{x;j,l}\\}$. The _gradient vector field_ of $f$ is the assignment of
the set of gradient vectors to each point of $X_{1}$.
This definition of gradient vector fields is very different from the usual
gradient vector field in the smooth case, see Figure 1. There are other
definitions of vector fields in piecewise linear theory; see [3, 5]
for example. In these references, vector fields on manifolds with piecewise
linear structures are studied. The definition here is different from these
definitions as well, in that it allows multivaluedness. This makes it possible
to develop a meaningful theory of gradient flows on spaces which are not
necessarily manifolds. On the other hand, the definition here is very simple
compared to the other definitions in that it is constant valued on cells. Of
course, such a trivial choice will not give a meaningful theory in general. It
makes sense only if it is combined with the tameness of the function and the
sharpness of the metric.
A nice feature of this definition of gradient vector fields is that, unlike
the case of discrete gradient paths (Definition 2.7), it is purely determined
by the function and the metric, without referring to other information such as
the dimension of the vertices. From this, one can immediately see the
following duality. Assume that the affine complex $X$ is a polyhedral
decomposition of a manifold without boundary. Consider the dual of our
piecewise linear theory on the dual complex $\bar{X}$ of $X$. Namely, since
there is a natural inclusion-reversing one-to-one correspondence between the
set of cells of $X$ and $\bar{X}$, we have a natural function $\bar{F}$ on the
set of cells of $\bar{X}$ induced from the discrete Morse function $F$ on $X$.
Then, $-\bar{F}$ is a discrete Morse function on $\bar{X}$. Note that if $F$
is tame and generic, so is $-\bar{F}$. Also, there is a natural identification
between $X_{1}$ and the barycentric subdivision of $\bar{X}$. Thus, we have a
piecewise linear theory on $X_{1}$ using the function induced from $-\bar{F}$
and the given sharp piecewise affine metric.
###### Proposition 5.6.
The associated gradient vector for the dual theory at a point $x\in X_{1}$ is
given by $\\{-\tau_{x;j,l}\\}$.
###### Proof.
This follows from the definition. ∎
This point is one of the advantages of considering piecewise linear theory,
compared to the discrete theory. That is, in the discrete case, when one
considers Poincaré duality for a polyhedral decomposition of a manifold
without boundary, the dual theory is defined on the dual complex of the
original, so that one cannot directly relate the original discrete gradient
path to the dual. Here, we can consider Poincaré duality on the same complex
just by reversing the direction of the gradient vector field.
## 6\. Properties of piecewise linear flows
Proposition 6.2 below will be a basic ingredient in the argument in the
following sections. First, we prove the following.
###### Lemma 6.1.
Consider a triangle $P$ with vertices $a,b,c$ and put an affine metric on it
which is not necessarily sharp. Take an affine linear function $f$ on $P$
which is generic so that the values at the vertices are different, say,
$f(a)<f(b)<f(c)$, and the gradient flow with respect to the metric is not
parallel to any of the edges. Then, if the edge $\overline{bc}$ of $P$ has an
in-flow, the angle $\angle abc$ is larger than $\frac{\pi}{2}$. Further, the
edge $\overline{ab}$ also has an in-flow.
###### Proof.
There is a point $b^{\prime}$ on the edge $\overline{ac}$ with the property
$f(b)=f(b^{\prime})$. Since $f$ is affine linear, $f$ is constant on the line
connecting $b$ and $b^{\prime}$. This line divides $P$ into two triangles
$P_{1}=\triangle{cbb^{\prime}}$ and $P_{2}=\triangle{bab^{\prime}}$. The
gradient flow is perpendicular to $\overline{bb^{\prime}}$, and as an edge of
$P_{1}$, clearly $\overline{bb^{\prime}}$ has an in-flow. By the assumption,
the edge $\overline{bc}$ also has an in-flow. It follows that there is a flow
into the vertex $b$ whose trajectory is perpendicular to
$\overline{bb^{\prime}}$. Then, we have
$\angle abc=\angle abb^{\prime}+\angle cbb^{\prime}>\angle
abb^{\prime}+\frac{\pi}{2}>\frac{\pi}{2}.$
The final assertion is obvious.∎
###### Proposition 6.2.
Let $X,X_{1},F$ and $f$ be as above. Put a sharp piecewise affine metric on
$X_{1}$. Let $\tau$ be a cell in $X_{1}$. If $\tau$ has an in-flow from a cell
$\sigma$, the cell $\tau$ contains the lowest value vertex of the Morse
function on $\sigma$.
###### Proof.
Note that $\tau$ is a facet of $\sigma$ by definition, see Definition 5.4. If
$\dim\sigma=1$, the claim is obvious. If $\dim\sigma=2$, the claim follows
from Lemma 6.1. Namely, using the notation in Lemma 6.1, if the cell $\tau$
does not contain the lowest value vertex, we have $\tau=\overline{bc}$. Thus,
if $\tau$ has an in-flow, the angle $\angle abc$ is larger then
$\frac{\pi}{2}$. However, this contradicts to the sharpness of the metric.
Assume $\dim\sigma$ is larger than two. Suppose the lowest value vertex $p$ of
$\sigma$ is not contained in $\tau$. Take a generic point $q$ on $\tau$. Let
$L$ be a two dimensional plane containing the edge $\overline{pq}$ and the
flow line from the interior of $\sigma$ to $q$. Then, the intersection
$\tau\cap L$ is a segment, which we write as $\overline{rs}$. We can assume
the inequalities
$f(p)<f(r)<f(s)$
hold. Then, the triangle $\triangle prs$ satisfies the condition of Lemma 6.1,
since there is an in-flow on $\overline{rs}$ by the assumption. It follows
that the angle $\angle prs$ is larger than $\frac{\pi}{2}$. However, this
contradicts the sharpness of the metric.∎
Dually, we have the following.
###### Corollary 6.3.
Using the same notation as in Proposition 6.2, if $\tau$ has an out-flow to a
cell $\sigma$, the cell $\tau$ contains the largest value vertex of the Morse
function on $\sigma$.∎
## 7\. Gradient trajectories between critical vertices
Let $X,X_{1},F$ and $f$ be as before. Fix a sharp piecewise affine metric on
$X_{1}$.
###### Definition 7.1.
Critical points of an affine complex $X$ equipped with a discrete Morse
function are the vertices of $X_{1}$ corresponding to the critical cells of
the discrete Morse function in the sense of Forman. We call a critical point
$i$-dimensional if it is a vertex corresponding to an $i$-dimensional cell of
the original complex.
As in discrete theory, let $\mathcal{M}_{i}^{PL}$ be the free abelian group
generated by critical $i$-dimensional vertices. It is canonically isomorphic
to $\mathcal{M}_{i}$. We want to construct a complex
$0\to\mathcal{M}_{n}^{PL}\to\cdots\to\mathcal{M}_{0}^{PL}\to 0$
by counting gradient trajectories of the piecewise linear flow. First, let us
define gradient trajectories.
###### Definition 7.2.
Let $\sigma$ be a cell of $X_{1}$. Let $x,y$ be points on $\sigma$. A
_gradient segment_ from $x$ to $y$ is an affine linear map
$\gamma\colon[t_{0},t_{1}]\to\sigma,$
where $t_{0},t_{1}\in\mathbb{R}$, such that $\gamma(t_{0})=x,\gamma(t_{1})=y$
and for each $s\in[t_{0},t_{1}]$, $\gamma^{\prime}(s)\in\\{\tau_{\gamma(s);j,l}\\}$
(see Definition 5.5). A _gradient trajectory_ from $x$ to $y$ is a piecewise
affine linear map
$\gamma\colon[t_{0},t_{k}]\to\sigma,$
where $k$ is a positive integer, such that there is a refinement
$t_{0}<t_{1}<\cdots<t_{k}$
of $[t_{0},t_{k}]$ where $\gamma|_{[t_{l-1},t_{l}]}$ is a gradient segment for
each $l=1,\dots,k$.
###### Definition 7.3.
A gradient trajectory of the gradient vector field on $X_{1}$ is an ordered
sequence
$\gamma_{1},\gamma_{2},\dots,\gamma_{k},$
of gradient trajectories $\gamma_{l}:[t_{l-1},t_{l}]\to\sigma_{l}$ on some
cell $\sigma_{l}$ such that
$\gamma_{l}(t_{l})=\gamma_{l+1}(t_{l}),\;\;l=1,\dots,k-1.$
We identify two gradient trajectories if their images give the same subset of
$X$. Note that for a point $x\in X_{1}$, there can be more than one gradient
trajectory through $x$, contrary to the case of smooth manifolds.
## 8\. Piecewise linear Morse complex
###### Definition 8.1.
Let $i$ be a non-negative integer. Let $X^{(i)}$ be the $i$-th skeleton of
$X$. Let $X_{1}^{(i)}$ be the barycentric subdivision of $X^{(i)}$. We call
$X_{1}^{(i)}$ the _$i$ -th rib_ of $X_{1}$.
Note that $X_{1}^{(i)}$ is not the $i$-th skeleton of $X_{1}$, which consists
of all the simplices of $X_{1}$ of dimension at most $i$.
###### Proposition 8.2.
Assume $F$ is generic and tame. Let $p$ and $q$ be $i$\- and
$(i-1)$-dimensional critical vertices, respectively. Then, a gradient
trajectory $\gamma:[0,t]\to X_{1}$ connecting $p$ and $q$ is contained in a
subcomplex composed of edges whose ends are $i$\- and $(i-1)$-dimensional
vertices.
###### Proof.
First, we observe the following.
###### Claim 8.3.
Let $x\in X_{1}^{(i)}$ be a point. Let $\delta$ be a gradient trajectory
starting from $x$. Then, the inclusion $\delta\subset X_{1}^{(i+1)}$ holds.
###### Proof.
This is an immediate consequence of Corollary 6.3 and the tameness of $F$.∎
Since $p$ is an $i$-dimensional critical vertex, any of its adjacent
$(i+1)$-dimensional vertices has a larger value of $f$ than $f(p)$. Then, again
by Corollary 6.3, for sufficiently small $\varepsilon$,
$\gamma([0,\varepsilon))$ is contained in $X^{(i)}$. Let $\sigma$ be the cell
of $X_{1}$ with the property that $\gamma((0,\varepsilon))\subset int(\sigma)$
for sufficiently small $\varepsilon$, where $int(\sigma)$ is the interior of
$\sigma$. Note that $\sigma$ is uniquely determined by this property. Let
$0<t_{1}$ be the minimal real number with the property
$\gamma(t_{1})\in\partial\sigma$. Let $\tau\subset\partial\sigma$ be the
unique cell with the property that $\tau$ contains $\gamma(t_{1})$ in its
interior. Then, since $p$ is a critical vertex, the inclusion $\tau\subset
X_{1}^{(i-1)}$ holds.
Thus, by Claim 8.3, the image of the restriction $\gamma|_{[t_{1},t]}$ is
contained in $X_{1}^{(i)}$. It follows that we have $\gamma([0,t])\subset
X_{1}^{(i)}$. Since $q$ is a critical vertex, a positive dimensional cell in
$X_{1}^{(i)}$ which has $q$ as the lowest value vertex with respect to $f$
must be an edge connecting $q$ and an adjacent $i$-dimensional vertex. Thus,
by Proposition 6.2, the intersection of a small neighborhood of $q$ with
$\gamma$ must be a subset of an edge connecting $q$ and an $i$-dimensional
vertex $q_{1}$. Similarly, by the tameness of $F$, a positive dimensional cell
in $X_{1}^{(i)}$ which has $q_{1}$ as the lowest value vertex with respect to
$f$ must be an edge connecting $q_{1}$ and an $(i-1)$-dimensional vertex
$q_{2}$ (which is unique, if it exists). In this case, we have
$f(q_{1})<f(q_{2}).$
Note that any $(i-2)$-dimensional vertex adjacent to $q_{2}$ is also adjacent
to $q_{1}$. If $r$ is an $(i-2)$-dimensional vertex adjacent to $q_{1}$, we
have $f(r)<f(q_{1})$ by the tameness of $F$. In particular, for any
$(i-2)$-dimensional vertex $r$ adjacent to $q_{2}$, we have $f(r)<f(q_{2})$.
Thus, again, a positive dimensional cell in $X_{1}^{(i)}$ which has $q_{2}$ as
the lowest value vertex with respect to $f$ must be an edge connecting $i$\-
and $(i-1)$-dimensional vertices. Repeating this, we see that $\gamma$ is
contained in the union of edges connecting $i$\- and $(i-1)$-dimensional
vertices.∎
From this, the following is obvious.
###### Corollary 8.4.
In the notation of Proposition 8.2, the number of gradient trajectories
connecting $p$ and $q$ is finite.∎
### 8.1. Sign of gradient trajectory and Morse differential
We define the linear map
$d_{PL}:\mathcal{M}_{i}^{PL}\to\mathcal{M}_{i-1}^{PL}$
by linearly extending the map
$p\mapsto\sum_{q\in\mathcal{M}_{i-1}^{PL}}\sum_{\gamma\in\Gamma_{PL}(p,q)}\bar{m}(\gamma)q,$
where $\Gamma_{PL}(p,q)$ is the set of gradient trajectories connecting $p$
and $q$, and $\bar{m}(\gamma)$ is the sign of the gradient trajectory $\gamma$
determined as follows. Namely, to each $k$-dimensional critical vertex
$a^{(k)}$, $0\leq k\leq n$, we attach an orientation of the $k$-dimensional
cell corresponding to it. It is represented by a constant element of
$\Gamma(\sigma_{a^{(k)}},\wedge^{k}T\sigma_{a^{(k)}})$. As an element of
$\mathcal{M}_{k}^{PL}$, $-a^{(k)}$ represents the same vertex with the
opposite orientation of $\sigma_{a^{(k)}}$ attached.
By Proposition 8.2, the trajectory connecting $p^{(i)}\in\mathcal{M}_{i}^{PL}$
and $q^{(i-1)}\in\mathcal{M}_{i-1}^{PL}$ is contained in the union of edges
connecting $i$\- and $(i-1)$-dimensional vertices. It follows that if the
trajectory passes through the vertices as
$p^{(i)}\to s^{(i-1)}\to r^{(i)}\to\cdots\to q^{(i-1)},$
the orientation at $p^{(i)}$ naturally induces an orientation at each
$s^{(i-1)},r^{(i)},\dots,q^{(i-1)}$. Namely, if
$o_{p^{(i)}}\in\Gamma(\sigma_{p^{(i)}},\wedge^{i}T\sigma_{p^{(i)}})$ is the
orientation at $p^{(i)}$, it induces an orientation at $s^{(i-1)}$ by
$o_{s^{(i-1)}}=n_{p^{(i)}}\lfloor
o_{p^{(i)}}\in\Gamma(\sigma_{s^{(i-1)}},\wedge^{(i-1)}T\sigma_{s^{(i-1)}}),$
where $n_{p^{(i)}}$ is the unit outward normal vector of $\sigma_{p^{(i)}}$ on
the facet $\sigma_{s^{(i-1)}}$, and $\lfloor$ denotes the interior product
defined using the affine metric. Then, the orientation at $r^{(i)}$ is given
by
$o_{r^{(i)}}=-n_{r^{(i)}}\wedge o_{s^{(i-1)}},$
where $n_{r^{(i)}}$ is the unit outward normal vector of $\sigma_{r^{(i)}}$ on
the facet $\sigma_{s^{(i-1)}}$. Repeating this, an orientation is induced at
$q^{(i-1)}$. If we compare this with the given orientation, we set
$\bar{m}(\gamma)=1$ when they coincide, and $-1$ if not.
With this definition, we have a sequence of maps
(7)
$0\to\mathcal{M}_{n}^{PL}\xrightarrow{d_{PL}}\mathcal{M}_{n-1}^{PL}\xrightarrow{d_{PL}}\cdots\xrightarrow{d_{PL}}\mathcal{M}_{0}^{PL}\to
0.$
A priori, we do not know whether this gives a chain complex. In the next
section, we see this is indeed the case, and its homology is the same as the
singular homology of $X$.
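For example (a standard illustration), let $X$ be the boundary of a triangle, so that $X$ is a circle, and take a generic tame discrete Morse function with exactly one critical vertex $v$ and one critical edge $e$. Then $\mathcal{M}_{1}^{PL}$ and $\mathcal{M}_{0}^{PL}$ are each generated by a single critical vertex of $X_{1}$, and there are exactly two gradient trajectories from the vertex corresponding to $e$ to the vertex corresponding to $v$, one along each arc of the circle. The orientations induced along the two arcs are opposite, so the two trajectories contribute with opposite signs, $d_{PL}=0$, and the homology of the sequence is $\mathbb{Z}$ in degrees $0$ and $1$, as it should be for a circle.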
## 9\. The main theorem
Let $X,X_{1},F$ and $f$ be as above. Namely, $X$ is an affine complex of
dimension $n$, $X_{1}$ is the barycentric subdivision of $X$, $F$ is a tame
generic discrete Morse function on $X$, and $f$ is the affine linear function
on $X_{1}$ constructed from $F$. Put a sharp piecewise affine metric on
$X_{1}$. Our purpose is to show the following result.
###### Theorem 9.1.
The sequence $({\mathcal{M}}_{\bullet}^{PL},d_{PL})$ in the previous section
is a complex, and its homology is isomorphic to the singular homology of $X$.
###### Proof.
We examine the relation between discrete and piecewise linear Morse theories.
Recall that a gradient path in discrete Morse theory (Definition 2.7) is an
appropriate sequence $\gamma$ of $i$-cells of $X$
$\gamma=\alpha_{0},\alpha_{1},\alpha_{2},\dots,\alpha_{r}.$
Let $\Gamma_{r}(\alpha_{0},\alpha_{r})$ be the set of such gradient paths.
Consecutive $i$-cells $\alpha_{l-1}$ and $\alpha_{l}$ are in the boundary of a
uniquely determined $(i+1)$-cell, if $\alpha_{l-1}\neq\alpha_{l}$. Let us
write this $(i+1)$-dimensional cell by $\beta_{l}$. If
$\alpha_{l-1}=\alpha_{l}$, take $\beta_{l}=\emptyset$. Let $\beta_{0}$ be an
$(i+1)$-dimensional cell adjacent to $\alpha_{0}$ with the property
$F(\beta_{0})>F(\alpha_{0})$, and consider the sequence
$\beta_{0},\alpha_{0},\beta_{1},\alpha_{1},\beta_{2},\cdots,\alpha_{r-1},\beta_{r},\alpha_{r}.$
Let $\Gamma_{r}(\beta_{0},\alpha_{r})$ denote the set of such sequences. An
element of $\Gamma_{r}(\beta_{0},\alpha_{r})$ is determined by an element of
$\Gamma_{r}(\alpha_{0},\alpha_{r})$ and the choice of $\beta_{0}$. Then, if
$\beta_{0}$ is an $(i+1)$-dimensional critical cell, the differential of
discrete Morse theory (5) can be written in the form
(8)
$\tilde{\partial}\beta_{0}=\sum_{\alpha\in\mathcal{M}_{i}}\sum_{\tilde{\gamma}\in\Gamma_{N}(\beta_{0},\alpha)}\tilde{m}(\tilde{\gamma})\alpha,$
where $N$ is a sufficiently large number and $\mathcal{M}_{i}$ is the set of
critical $i$-cells. The sign $\tilde{m}(\gamma)\in\\{\pm 1\\}$ is equal to
$m(\gamma)$, where $\gamma$ is the element of $\Gamma_{N}(\alpha_{0},\alpha)$
associated with $\tilde{\gamma}$. Here, $\alpha_{0}$ is a facet of $\beta_{0}$
with an induced orientation, that is,
$\langle\partial\beta_{0},\alpha_{0}\rangle=1$.
To adjust the terminology with those in piecewise linear theory, we redefine
the sequence
$\beta_{0},\alpha_{0},\beta_{1},\alpha_{1},\beta_{2},\cdots,\alpha_{r-1},\beta_{r},\alpha_{r}$
above as a gradient path in discrete Morse theory. We identify two sequences
$\\{\beta_{i},\alpha_{i}\\}$ and
$\\{\beta_{i}^{\prime},\alpha_{i}^{\prime}\\}$ which are the same up to
stabilization, that is, if the union
$\cup_{l}int({\beta}_{l})\bigcup\cup_{l}\alpha_{l}$ and
$\cup_{l}int({\beta}^{\prime}_{l})\bigcup\cup_{l}\alpha^{\prime}_{l}$ are the
same as a subset of $X$. Here, $int({\beta}_{l})$ and
$int({\beta}^{\prime}_{l})$ are the interior of $\beta_{l}$ and
$\beta_{l}^{\prime}$, respectively.
Clearly, an element in $\Gamma_{N}(\beta_{0},\alpha_{r})$ determines a
sequence
$p_{0}\to q_{0}\to p_{1}\to\cdots\to p_{r}\to q_{r},$
where $p_{i}$ is the $(i+1)$-dimensional vertex in $X_{1}$ corresponding to
$\beta_{i}$, $q_{i}$ is the $i$-dimensional vertex in $X_{1}$ corresponding to
$\alpha_{i}$, and $r$ is the smallest number at which the sequence
$\alpha_{0},\alpha_{1},\alpha_{2},\dots,\alpha_{r}$ stabilizes (that is, the
smallest number such that $\alpha_{r}=\alpha_{r+1}$ holds). This sequence has
the following properties:
* •
Consecutive vertices in this sequence are connected by a unique edge in
$X_{1}$.
* •
The following inequalities hold:
$f(p_{0})>f(q_{0})>f(p_{1})>\cdots>f(p_{r})>f(q_{r}).$
Conversely, a sequence of $i$\- and $(i+1)$-dimensional adjacent vertices with
the properties above determines a unique (up to stabilization) gradient path
in discrete Morse theory. By Proposition 8.2, we have the following.
###### Proposition 9.2.
Let $\beta$ be a critical $(i+1)$-dimensional cell and $\alpha$ be a critical
$i$-dimensional cell of discrete Morse theory. Let $p,q$ be corresponding
$(i+1)$\- and $i$-dimensional critical vertices of piecewise linear Morse
theory. There is a canonical one-to-one correspondence between the following
objects:
* •
Gradient paths in discrete Morse theory connecting $\beta$ and $\alpha$ up to
stabilization.
* •
Gradient trajectories in piecewise linear Morse theory connecting $p$ and $q$.
Recall that the differential
$d_{PL}\colon\mathcal{M}_{i+1}^{PL}\to\mathcal{M}_{i}^{PL}$ in piecewise linear Morse
theory is given by
$d_{PL}p=\sum_{q\in\mathcal{M}_{i}^{PL}}\sum_{\gamma\in\Gamma_{PL}(p,q)}\bar{m}(\gamma)q,$
and the differential
$\tilde{\partial}\colon\mathcal{M}_{i+1}\to\mathcal{M}_{i}$ in discrete Morse
theory is given by
$\tilde{\partial}\beta=\sum_{\alpha\in\mathcal{M}_{i}}\sum_{\tilde{\gamma}\in\Gamma_{N}(\beta,\alpha)}\tilde{m}(\tilde{\gamma})\alpha.$
Let $\tilde{\gamma}$ be the element of $\Gamma_{N}(\beta,\alpha)$
corresponding to $\gamma\in\Gamma_{PL}(p,q)$ by Proposition 9.2. Then, Theorem
9.1 is a consequence of the following claim and Theorem 2.8.
###### Proposition 9.3.
The equality $\bar{m}(\gamma)=\tilde{m}(\tilde{\gamma})$ holds.
###### Proof.
This follows from a straightforward comparison between the argument in
Subsections 2.1 and 8.1.∎
## 10\. Stable and unstable complexes
Let $X,X_{1},F$ and $f$ be as before. Let $i$ be a positive integer. In this
section, we will show that in piecewise linear Morse theory, we have nice
stable and unstable complexes associated with critical points. This is another
benefit from the fact that we can consider the original Morse theory and its
dual in the same complex, see Proposition 5.6. We use the same notation as in
the previous sections. First, we observe the following.
###### Lemma 10.1.
Let $C$ be a closed subset of $X_{1}$. Then, the union of gradient
trajectories starting from a point in $C$ is a closed subset of $X_{1}$.
###### Proof.
It suffices to prove the claim on a simplex $\sigma$. If $x_{1}$ is a point in
the interior of $\sigma$, there is a unique gradient vector at $x_{1}$ tangent
to $\sigma$. This does not depend on the choice of $x_{1}$, and gradient
vectors at the points on the boundary of $\sigma$ also contain this direction.
It follows that the subset of $\sigma$ swept by the flow along this direction
starting from points in $C$ is closed in $\sigma$. Namely, embed $\sigma$ in
some $\mathbb{R}^{d}$ isometrically and let $v\in\mathbb{R}^{d}$ be the
direction of the gradient flow on $\sigma$. If we define the set $C_{\geq}$ by
$C_{\geq}=\\{x+tv\;|\;x\in C,\;t\geq 0\\}\subset\mathbb{R}^{d},$
the subset of $\sigma$ swept by the flow starting from points in $C$ is
$C_{\geq}\cap\sigma$. Since $C_{\geq}$ is closed, the claim follows.
The same argument applies to the intersection of a face of $\sigma$ and $C$.
Since the union of gradient trajectories starting from a point in $C$ is the
union of these closed subsets on each face of $\sigma$, the claim follows.∎
###### Lemma 10.2.
Let $\sigma$ be a simplex in $X_{1}$. Let $p$ be the vertex of $\sigma$ at
which the function $f$ takes the highest value in $\sigma$. Then, $\sigma$ is
swept by the gradient trajectories starting from $p$.
###### Proof.
By the previous lemma, it suffices to prove that the interior of $\sigma$ is
swept by the gradient trajectories starting from $p$. We use induction. If
$\dim\sigma=1$, the claim is obvious. Assume that the claim is proved for
$\sigma$ with $\dim\sigma\leq k$ for some positive integer $k$. Let us prove
the case where $\dim\sigma=k+1$.
Let $x$ be a point in the interior of $\sigma$. As in the proof of the
previous lemma, there is a unique gradient vector at $x$ tangent to $\sigma$.
Tracing back the gradient trajectory along this direction, we will hit a
boundary of $\sigma$. Let $q$ be the point in $\partial\sigma$ at which this
occurs for the first time. Let $\\{A_{1},\dots,A_{m}\\}$ be the set of facets
of $\sigma$ containing $q$. By Corollary 6.3, each $A_{l}$ contains $p$. Then,
by applying the induction hypothesis to any of the cells
$\\{A_{1},\dots,A_{m}\\}$, the claim follows.∎
###### Lemma 10.3.
Let $q$ be an $(i-1)$-dimensional vertex and $p$ be an adjacent
$i$-dimensional vertex such that $f(q)>f(p)$ holds. Let $\sigma_{p}$ be the
cell of $X$ corresponding to $p$. Then, $\sigma_{p}$ is swept by the gradient
trajectories starting from $q$.
###### Proof.
By the tameness, if $\alpha$ is a maximal dimensional simplex in $\sigma_{p}$,
the vertex taking the highest value among the set of vertices of $\alpha$ is
$p$ or $q$. Since $p$ is contained in a gradient trajectory starting from $q$,
the claim follows from the previous lemma.∎
We also prove the following claim.
###### Lemma 10.4.
Let $p$ be an $i$-dimensional critical vertex. Let $\gamma$ be a gradient
trajectory starting from $p$. Then, $\gamma$ is contained in $X_{1}^{(i)}$,
the $i$-th rib of $X_{1}$ (see Definition 8.1).
###### Proof.
We assume $i>0$ since otherwise the claim is trivial. By Corollary 6.3, for
sufficiently small $\varepsilon$, the image $\gamma((0,\varepsilon))$ is
contained in $X_{1}^{(i)}$. Let $\tau$ be the unique cell of $X_{1}$
satisfying $\gamma((0,\varepsilon))\subset int(\tau)$. Let $t_{1}>0$ be the
minimal real number satisfying $\gamma(t_{1})\in\partial\tau$. Then, we have
$\gamma(t_{1})\in X_{1}^{(i-1)}$. The lemma follows from this and Claim 8.3.∎
###### Proposition 10.5.
Let $p$ be an $i$-dimensional critical vertex. Then, the subset of $X_{1}$
swept by the trajectories starting from $p$ is a subcomplex of $X^{(i)}$.
###### Proof.
By Corollary 6.3, for sufficiently small $\varepsilon$, the image
$\gamma((0,\varepsilon))$ is contained in $\sigma_{p}$, for any trajectory
$\gamma$ starting from $p$. By Lemma 10.2, the cell $\sigma_{p}$ is swept by
the trajectories starting from $p$. Thus, to prove that the union of the
trajectories is a subcomplex of $X$, it suffices to show that for any facet
$A$ of $\sigma_{p}$, the union of trajectories starting from points on $A$ is
a subcomplex of $X$. If this is proved, by Claim 8.3, it must be a subcomplex
of $X^{(i)}$.
We prove this from the following more general statement.
###### Claim 10.6.
Let $B$ be a cell in $X$. Then, the union of trajectories starting from points
on $B$ is a subcomplex of $X$.
###### Proof.
We prove this by induction on the dimension of $B$. First, let us consider the
case when $\dim B=0$. If $B$ is a critical cell, the claim is obvious since
there is no trajectory emanating from $B$. If $B$ is not critical, there is a
unique one dimensional cell $C$ adjacent to $B$ such that
$F(B)>F(C)$
holds. Higher dimensional cells adjacent to $B$ have a larger value of $F$ than
$F(B)$ by the tameness.
By Corollary 6.3, any trajectory starting from $B$ is given by an out-flow
into $C$. On the other hand, again by the tameness, there is only one cell
$B^{\prime}$ adjacent to $C$ satisfying
$F(C)>F(B^{\prime}),$
and this $B^{\prime}$ is a zero dimensional cell. Repeating this argument, the
claim follows.
Assume that we have proved the claim when $\dim B\leq k-1$ for some positive
integer $k$. Let us prove the claim when $\dim B=k$.
We use further induction on the value of $F$. Let $B_{1}$ be a $k$-dimensional
cell of $X$ such that there is no other $k$-dimensional cell whose value of
$F$ is smaller than $F(B_{1})$. Then, there is no adjacent $(k+1)$-cell which
has a smaller value of $F$ than $F(B_{1})$, by definition of discrete Morse
functions. Then, $B_{1}$ is critical, or there is a facet $C$ of $B_{1}$
satisfying $F(C)>F(B_{1})$.
If $B_{1}$ is critical, as in the beginning part of the proof of Proposition
10.5, any trajectory starting from a point on $B_{1}$ touches $\partial
B_{1}$. Thus, the claim follows by the induction hypothesis on the dimension
of $B$. If there is a facet $C$ of $B_{1}$ satisfying $F(C)>F(B_{1})$, $B_{1}$
is swept by trajectories starting from $C$, by Lemma 10.3. Thus, it suffices
to show that the union of trajectories starting from $C$ is a subcomplex of
$X$, since it coincides with the union of trajectories starting from $B_{1}$.
However, this follows from the induction hypothesis on dimension again.
Now, let
$c_{1}<c_{2}<\cdots<c_{l}$
be the values of $F$ taken by $k$-dimensional cells of $X$. Assume that the
claim is proved for a $k$-dimensional cell whose value of $F$ is at most
$c_{i}$, $1\leq i<l$. We will prove the claim for a $k$-dimensional cell $B$
satisfying $F(B)=c_{i+1}$.
If $B$ is critical or there is a facet $C^{\prime}$ of $B$ satisfying
$F(C^{\prime})>F(B)$, the claim is proved as in the case of $B_{1}$ above.
Assume that there is a $(k+1)$-dimensional cell $D$ adjacent to $B$ satisfying
$F(B)>F(D)$. By Lemma 10.3, $D$ is swept by trajectories starting from $B$.
Moreover, if $x\in int(B)$, for sufficiently small $\varepsilon$, a trajectory
$\gamma$ starting from $x$ satisfies $\gamma((0,\varepsilon))\subset D$, by
definition of discrete Morse functions and Corollary 6.3. It follows that the
trajectory $\gamma$ hits some facet of $D$ other than $B$ itself. Therefore,
it suffices to prove that the union of trajectories starting from a facet $E$
of $D$ other than $B$ is a subcomplex of $X$. However, we have $F(E)<F(B)$ by
definition of discrete Morse functions. Therefore, we can apply the induction
hypothesis on the values of $F$ to $E$. This proves the claim and the
proposition.∎
Using this proposition, we can define stable and unstable complexes of
critical points in piecewise linear Morse theory.
###### Definition 10.7.
Assume $X$ is a manifold without boundary. We call the subcomplex of $X^{(i)}$
constructed in Proposition 10.5 the _unstable complex_ of the critical point
$p$. Consider the dual theory, see Section 5. Then, $p$ is an
$(n-i)$-dimensional critical point. We call its unstable complex the _stable
complex_ of $p$ of the original theory.
Note that the stable complex is not a subcomplex of $X$, but of its dual
complex. In any case, it is a subcomplex of $X_{1}$.
Since the gradient trajectory is defined purely in terms of a function and a
metric as we mentioned in Section 5, this definition can be extended to the
case when $X$ has boundary, or is not necessarily a manifold. Namely, if $p$
is a critical point, we define the unstable complex as the subcomplex of
$X^{(i)}$ as in Proposition 10.5, and define the stable complex as the
subcomplex with respect to the reversed gradient flow. Note that the reversed
gradient flow is not necessarily induced by a discrete Morse function, since
if $X$ is an affine complex which is not a manifold, a dual complex does not
make sense in general. In other words, it is not a gradient flow associated
with a discrete Morse function, although it is the gradient flow associated
with a piecewise linear function on $X_{1}$. Nevertheless, the set of gradient
trajectories gives a complex, which is isomorphic to the dual of the original
complex $\mathcal{M}$.
## Acknowledgement
The author was supported by JSPS KAKENHI Grant Number 18K03313.
## References
* [1] M. Bestvina and N. Brady, Morse theory and finiteness properties of groups. Invent. Math. 129 (1997), 445–470.
* [2] R. Forman, Morse theory for cell complexes. Adv. Math. 134 (1998), no. 1, 90–145.
* [3] J. Llibre and E. Ponce, Bifurcation of a periodic orbit from infinity in planar piecewise linear vector fields. Nonlinear Anal. 36 (1999), 623–653.
* [4] J. Milnor, Morse theory. Annals of Mathematics Studies, No. 51 Princeton University Press, Princeton, N.J., 1963.
* [5] R. Stern, On topological and piecewise linear vector fields. Topology 14 (1975), 257–269.
* [6] M. Zaremsky, Bestvina-Brady discrete Morse theory and Vietoris-Rips complexes. To appear, Amer. J. Math.
|
FKN, first proof, rewritten
Ehud Friedgut and Gil Kalai and Assaf Naor
###### Abstract
About twenty years ago we wrote a paper, “Boolean Functions whose Fourier
Transform is Concentrated on the First Two Levels” [1]. In it we offered
several proofs of the statement that Boolean functions
$f(x_{1},x_{2},\dots,x_{n})$ whose Fourier coefficients are concentrated on
the lowest two levels are close to a constant function or to a function of the
form $f=x_{k}$ or $f=1-x_{k}$.
Returning to the paper recently, we noticed that the presentation of the first
proof is rather cumbersome, and includes several typos. In this note we
rewrite that proof, as a service to the public.
Here is the main theorem of FKN [1].
###### Theorem 0.1.
Let $f:\\{0,1\\}^{n}\rightarrow\\{0,1\\}$, $\|f\|_{2}^{2}=p$. If
$\sum_{|S|>1}\widehat{f}^{2}(S)=\delta$ then either $p<K^{\prime}\delta$ or
$p>1-K^{\prime}\delta$ or $\|f(x_{1},x_{2},\dots,x_{n})-x_{i}\|_{2}^{2}\leq
K\delta$ for some $i$ or $\|f(x_{1},x_{2},\dots,x_{n})-(1-x_{i})\|_{2}^{2}\leq
K\delta$ for some $i$. Here, $K^{\prime}$ and $K$ are absolute constants.
Proof: First observe that we may assume that $p=1/2$: replace $f$ by a
function $g:\\{0,1\\}^{n+1}\rightarrow\\{0,1\\}$ defined by
$g(x_{1},\ldots,x_{n},x_{n+1})=f(x_{1},\ldots,x_{n})\cdot\frac{1+\chi_{n+1}}{2}+(1-f(1-x_{1},\ldots,1-x_{n}))\cdot\frac{1-\chi_{n+1}}{2}$
and note that $|g|_{2}^{2}=1/2$, and
$\sum_{|S|>1}\widehat{f}^{2}(S)=\sum_{|S|>1}\widehat{g}^{2}(S)$. Then prove
that $g$ is close to a dictator (or anti-dictator). If the influential
coordinate for $g$ is $n+1$ then $f$ is necessarily close to a constant, and
if the influential coordinate for $g$ is some other $i$, then $f$ too is
necessarily close to a dictator or anti-dictator of the $i$th variable.
Let
$f^{=1}:=\sum\widehat{f}(i)\chi_{i}.$
Note that $f^{=1}$ is an odd function, i.e. $f^{=1}(S)=-f^{=1}(S^{c})$, and
that
$|f^{=1}|_{2}^{2}=\sum\widehat{f}(i)^{2}=1/4-\delta.$
Note also that
$(1/4-\delta)=\langle f^{=1},f^{=1}\rangle=\sum\widehat{f}(i)^{2}=\langle
f,f^{=1}\rangle$
$=\sum_{S\subset[n]}2^{-n}f(S)f^{=1}(S)=\sum_{S\subset[n]}2^{-n}f(S)\frac{f^{=1}(S)-f^{=1}(S^{c})}{2}=$
$\sum_{S\subset[n]}2^{-n}f^{=1}(S)\frac{f(S)-f(S^{c})}{2}\leq\sum_{S\subset[n]}2^{-n}\frac{1}{2}\left|f^{=1}(S)\right|=\frac{1}{2}|f^{=1}|_{1}.$
So
$\frac{1}{2}-2\delta\leq|f^{=1}|_{1}\quad\text{and}\quad|f^{=1}|_{2}=\sqrt{1/4-\delta}\leq\frac{1}{2}-\delta.$
Normalizing, let $h=\frac{f^{=1}}{|f^{=1}|_{2}}$, then
$|h|_{2}=1,\qquad|h|_{1}\geq\frac{1/2-2\delta}{1/2-\delta}\geq 1-4\delta.$
So $h=\sum a_{i}\chi_{i}$, with $\sum a_{i}^{2}=1$, and
we wish to prove that one of the $a_{i}$’s is close to 1 or -1. This follows
immediately from the following result of König, Schütt and Tomczak-Jaegermann
[2].
###### Theorem 0.2.
Let $a_{1},a_{2},\dots,a_{n}$ be real numbers such that $\sum a_{i}^{2}=1$.
Let $E$ be the expected value of $|\sum a_{i}\chi_{i}|$.
Then
$\left|E-\sqrt{\frac{2}{\pi}}\right|\leq\left(1-\sqrt{\frac{2}{\pi}}\right)\max_{1\leq
i\leq n}|a_{i}|.$ (0.1)
In our case this yields
$\max\\{|a_{i}|\\}\geq\frac{1-\sqrt{\frac{2}{\pi}}-4\delta}{1-\sqrt{\frac{2}{\pi}}}>1-20\delta.$
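As a numerical sanity check of the last constant (a verification sketch, not part of the original proof), one can confirm that $4/(1-\sqrt{2/\pi})<20$, which is exactly what the final inequality uses:

```python
import math

# The theorem gives max|a_i| >= 1 - 4*delta / (1 - sqrt(2/pi)),
# so the claim "> 1 - 20*delta" needs 4 / (1 - sqrt(2/pi)) < 20.
c = 1 - math.sqrt(2 / math.pi)
print(c)      # ~0.2021
print(4 / c)  # ~19.79 < 20
```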
## References
* [1] E. Friedgut, G. Kalai, A. Naor, Boolean functions whose Fourier transform is concentrated on the first two levels, Advances in Applied Mathematics 29 (2002), pp. 427–437.
* [2] H. König, C. Schütt, N. Tomczak-Jaegermann, Projection constants of symmetric spaces and variants of Khintchine’s inequality. J. Reine Angew. Math. 511 (1999), 1–42.
|
# MilliKelvin microwave impedance microscopy in a dry dilution refrigerator
Leonard Weihao Cao Department of Physics, University of California, San
Diego, CA 92093, USA Chen Wu Department of Physics, University of
California, San Diego, CA 92093, USA Rajarshi Bhattacharyya Department of
Physics, University of California, San Diego, CA 92093, USA Ruolun Zhang
Department of Physics, University of California, San Diego, CA 92093, USA
Monica T. Allen<EMAIL_ADDRESS>Department of Physics, University
of California, San Diego, CA 92093, USA
###### Abstract
Microwave impedance microscopy (MIM) is a near-field imaging technique that
has been used to visualize the local conductivity of materials with nanoscale
resolution across the GHz regime. In recent years, MIM has shown great promise
for the investigation of topological states of matter, correlated electronic
states and emergent phenomena in quantum materials. To explore these low-
energy phenomena, many of which are only detectable in the milliKelvin regime,
we have developed a novel low-temperature MIM incorporated into a dilution
refrigerator. This setup, which consists of a tuning-fork-based atomic force
microscope with microwave reflectometry capabilities, is capable of reaching
temperatures down to $70\text{\,}\mathrm{m}\mathrm{K}$ during imaging and
magnetic fields up to $9\text{\,}\mathrm{T}$. To test the performance of this
microscope, we demonstrate microwave imaging of the conductivity contrast
between graphite and silicon dioxide at cryogenic temperatures and discuss the
resolution and noise observed in these results. We extend this methodology to
visualize edge conduction in Dirac semimetal cadmium arsenide in the quantum
Hall regime.
††preprint: AIP/123-QED
## I Introduction
Figure 1: Scanning probe microscopy system with combined AFM and MIM readout,
integrated into a dilution refrigerator. (a) Schematic of the scanning MIM
readout electronics and hardware integrated into a dilution refrigerator. The
shaded regions refer to the different thermal stages inside the fridge. (b)
Left panel: Photo of the microscope head, scanners and sample stage
(corresponding to the red box in the schematic). Right panels: Zoomed-in view
of the tip glued onto the tuning fork (blue box) and a scanning electron
microscope image of the etched tungsten tip used for combined AFM and MIM
imaging. (c) Plot of the reflected microwave power $S_{11}$ of the impedance
matching network (IMN), showing the fundamental resonance at
$1.8\,\mathrm{GHz}$. Inset: Circuit diagram of the IMN, with a
$0.2\,\mathrm{pF}$ capacitor connected in series with $5\,\mathrm{cm}$ of
coax. (d) Plots of the
oscillation amplitude of the tuning fork as a function of frequency, showing
the mechanical resonance used for height feedback. The upper and lower panels
show the resonance peak at room temperature and 70 mK, respectively.
Microwave impedance microscopy (MIM) has the unique capacity to probe the
local conductivity and permittivity of quantum materials with nanoscale
spatial resolution Lai _et al._ (2007, 2011a, 2011b); Kundhikanjana _et al._
(2013); Ma _et al._ (2015a, b); Liu _et al._ (2015); Seabron _et al._
(2016); Chu _et al._ (2020); Barber, Ma, and Shen (2022). This enables direct
visualization of the microscopic nature of electronic states, including the
real-space disorder landscape, multi-domain behavior, or the presence of
topological modes that propagate along the sample boundaries. By coupling
microwaves with a wavelength of 1–$100\,\mathrm{cm}$ to a
sharp metallic probe and collecting the reflected signal, MIM characterizes
the complex admittance between the tip and the sample without the requirement
for the sample to be conductive, which is less restrictive than other
electronic imaging techniques Eriksson _et al._ (1996); Döring, Eng, and Kehr
(2016); Lu _et al._ (2017); McGilly _et al._ (2019); Rosenberger _et al._
(2020). As demonstrated in recent experiments, MIM can provide insight into
the real-space nature of correlated states and topological states in two-
dimensional heterostructures Lai _et al._ (2008, 2010); Ma _et al._ (2015c);
Wu _et al._ (2018); Allen _et al._ (2019); Chu _et al._ (2020); Barber, Ma,
and Shen (2022). However, many of these states are characterized by low energy
scales and are therefore most robust at millikelvin temperatures, motivating
the development of cryogenic MIM instrumentation. Thus far, most state-of-the-
art MIM experiments have been performed in 1.5–$2\,\mathrm{K}$ cryostats
Kundhikanjana _et al._ (2011) or He-3 cryostats, which can reach a minimum
temperature of $450\,\mathrm{mK}$ Allen _et al._ (2019).
Here we report on the construction of a novel milliKelvin MIM, which will
support spatially-resolved detection of quantum electronic states at ultra-low
temperatures. This setup consists of a scanning probe microscope with tuning-
fork-based height feedback integrated into a dry dilution refrigerator. A
sharp metallic probe driven by an AC signal at microwave frequency is coupled
to the tuning fork and scanned over the sample. Using reflectometry, MIM
detects the sample’s response to high frequency electromagnetic fields
emanating from the probe. To demonstrate the measurement capabilities of this
setup, we present MIM images of the conductivity contrast between graphite and
SiO2 down to temperatures of 70 mK. Finally, we also demonstrate microwave
imaging of edge states in Cd3As2 thin films in the quantum Hall regime at the
base temperature.
## II Experimental Setup
This setup consists of a custom-designed tuning-fork-based atomic force
microscope (AFM) integrated into a Leiden Cryogenics CF-CS110 dilution
refrigerator. The microscope housing is in thermal equilibrium with the mixing
chamber plate on the cold-insertable probe, which is loaded into a dilution
refrigerator, as shown schematically in Figure 1(a). Figure 1(b) shows the
design of the microscope head, which houses an etched tungsten wire mounted
onto to the end of one prong of a tuning fork (TF) mechanical resonator (blue
box) Khan _et al._ (2012). The oscillation amplitude of the TF is monitored
for continuous height feedback, which enables tapping-mode topographic
imaging Edwards _et al._ (1997). Below the tip holder, the sample stage is
mounted on a stack of CuBe piezoelectric scanners (Attocube AN-Sxyz100) and
the positioners (ANPx(z)100), which control fine xyz scanning (up to
$40\,\mu\mathrm{m}\times 40\,\mu\mathrm{m}$ below $4\,\mathrm{K}$) and coarse
positioning ($5\,\mathrm{mm}\times 5\,\mathrm{mm}$ below $4\,\mathrm{K}$),
respectively.
On the MIM circuitry side, GHz signals are generated by an analog signal
generator, and one split branch of the signal is coupled to the tip via an
impedance matching network (IMN) Pozar (2011), which is responsible for
minimizing the reflected signal [inset in Figure 1(c)] Cui, Ma, and Shen
(2016). A plot of the reflected microwave power $S_{11}$ of an example IMN is
shown in Figure 1(c), showing the first resonance at
$1.8\text{\,}\mathrm{G}\mathrm{H}\mathrm{z}$. The reflected signal from the
tip passes through two directional couplers mounted on the probe-still plate
($1\text{\,}\mathrm{K}$) to cancel out the residual reflected power. The
signal from the sample is then amplified by a cryogenic amplifier (Cosmic
Microwave Technology CITCRYO1-12) mounted on the $3\text{\,}\mathrm{K}$ stage,
after which the signal propagates out of the probe and gets further amplified
and demodulated at room temperature, as shown in Figure 1(a).
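To build intuition for how the matching network produces a dip in $S_{11}$, the following sketch computes the reflection coefficient of a simple transmission-line model. Only the $0.2\,\mathrm{pF}$ capacitor and the $5\,\mathrm{cm}$ coax length come from the setup described above; the effective permittivity, the tip capacitance, and the loss resistance are illustrative assumptions (with no loss at all, $|S_{11}|$ would be unity everywhere).

```python
import numpy as np

# Reflection of a series-capacitor matching network feeding a lossy tip.
Z0 = 50.0          # characteristic impedance (ohm)
C_match = 0.2e-12  # series matching capacitor (F), from the text
l = 0.05           # coax length (m), from the text
eps_eff = 2.1      # assumed effective permittivity of the coax
C_tip = 1e-12      # assumed capacitance terminating the line at the tip
R_loss = 2.0       # assumed series loss (ohm)

c0 = 3e8
f = np.linspace(0.5e9, 3e9, 4001)
w = 2 * np.pi * f
beta = w * np.sqrt(eps_eff) / c0

Z_load = R_loss + 1 / (1j * w * C_tip)      # lossy tip termination
t = np.tan(beta * l)
Z_line = Z0 * (Z_load + 1j * Z0 * t) / (Z0 + 1j * Z_load * t)
Z_in = 1 / (1j * w * C_match) + Z_line      # series capacitor + loaded line

S11 = (Z_in - Z0) / (Z_in + Z0)
k = np.argmin(np.abs(S11))
print(f[k] / 1e9, "GHz", 20 * np.log10(np.abs(S11[k])), "dB")
```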
During the tip approach procedure, active height feedback can be performed by
monitoring either the TF oscillation amplitude or the MIM signal. Here we use
a Nanonis SC5 controller to excite and track the TF oscillation and control
the fine scanners during imaging Cui, Ma, and Shen (2016). Figure 1(d)
displays a measurement of the oscillation amplitude of the tuning fork as a
function of excitation frequency, showing the resonance peak near 32.768 kHz.
The Q-factor of the resonance is around 500–2000 at room temperature (upper
panel), while at base temperature it can easily reach 10,000–100,000 (lower
panel).
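A standard way to extract such Q factors from the measured resonance curve is a Lorentzian fit with $Q=f_{0}/\mathrm{FWHM}$. The sketch below illustrates the procedure on synthetic data; only the 32.768 kHz resonance frequency is taken from the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, f0, fwhm, a, c):
    # Lorentzian line shape on a constant background
    return a * (fwhm / 2) ** 2 / ((f - f0) ** 2 + (fwhm / 2) ** 2) + c

# Synthetic sweep around the tuning-fork mode (Q = 20,000 assumed)
f = np.linspace(32.70e3, 32.84e3, 400)
amp = lorentzian(f, 32.768e3, 32.768e3 / 20000, 1.0, 0.02)
amp += 0.005 * np.random.randn(f.size)

popt, _ = curve_fit(lorentzian, f, amp, p0=(32.768e3, 5.0, 1.0, 0.0))
print("Q =", popt[0] / popt[1])  # f0 / FWHM, close to 20,000
```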
The main technical challenge of microwave imaging in a dry dilution fridge is
the emergence of new noise sources, which impact both spatial resolution and
the signal-to-noise of the microwave reflectometry measurements. There are two
main sources of increased noise: (1) mechanical pulse tube vibrations, which
are associated with the cooling mechanism of the dilution fridge, place limits
on the lateral spatial resolution and add noise to the measured MIM signal,
and (2) the high Q factor of the tuning fork at mK temperatures leads to
fluctuations in the tip-sample distance, which also couples with the pulse
tube vibration. Our fridge is equipped with a pulse tube cryocooler operating
at $\sim 1.4\,\mathrm{Hz}$ Wang and Gifford (2002); Chijioke
and Lawall (2010) generating vibrations that amplitude modulate the tuning
fork oscillation, and consequently also modulate the GHz MIM signal. To
mitigate these vibrations, we physically decoupled the pulse tube motion from
the microscope by unscrewing the rotary valve from the fridge and putting
isolation foam in between Li _et al._ (2005), while the high-pressure helium
lines connected to the compressor are wrapped with acoustic pipe lagging.
We found that performing AC-mode MIM imaging, described below, largely
eliminates background oscillations in the images that arise from pulse tube
vibrations. In AC height-modulated imaging mode, a low frequency lock-in
amplifier (SR830) is added to the output of the GHz frequency mixer to
demodulate the reflected MIM signal at the tuning fork resonance frequency (32
kHz), after which low-pass filters can be used to attenuate noise Cui, Ma, and
Shen (2016). We note that because the GHz MIM signal (from the tip) is
amplitude-modulated by both the tuning fork oscillation at 32 kHz and the
pulse tube vibration, there are multiple side-bands around the measurement
frequency. Therefore, band-pass filters between 20-30 kHz are added to the
output of the GHz mixer to reduce noise, after which the MIM signal is fed
into the SR830 lock-in amplifier for demodulation. During this step, the lock-
in amplifier multiplies the MIM input signal with a TF reference signal
(provided by the measured piezo current from the tuning fork, after
amplification by a commercial low-noise charge amplifier) to extract the in-
phase components. Both the filters inside SR830 and the additional low-pass
filter added to the output of the lock-in are chosen to eliminate noise at the
pulse tube vibration frequency.
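The demodulation chain just described can be mimicked numerically. In the sketch below, only the tuning-fork frequency (32.768 kHz) and the pulse-tube frequency (1.4 Hz) are taken from the text; the sampling rate, filter band, and signal amplitudes are illustrative assumptions, with the band-pass placed around the tuning-fork line for this synthetic example.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 200e3                       # sample rate (Hz), assumed
t = np.arange(0, 2.0, 1 / fs)
f_tf, f_pt = 32768.0, 1.4        # tuning-fork and pulse-tube frequencies

# Mixer output: MIM amplitude riding on the TF oscillation, plus pulse-tube
# pickup (additive and as amplitude modulation) and white noise.
mim = 1.0                        # "true" MIM amplitude to recover
x = (mim * (1 + 0.3 * np.cos(2 * np.pi * f_pt * t)) * np.cos(2 * np.pi * f_tf * t)
     + 0.5 * np.cos(2 * np.pi * f_pt * t)
     + 0.2 * np.random.randn(t.size))

# Band-pass around the TF line rejects the direct low-frequency pickup...
sos = butter(4, [25e3, 40e3], btype='band', fs=fs, output='sos')
xf = sosfiltfilt(sos, x)

# ...then lock-in demodulation at f_tf extracts the in-phase amplitude.
# (A hardware lock-in follows this with a low-pass below 1.4 Hz.)
inphase = 2 * np.mean(xf * np.cos(2 * np.pi * f_tf * t))
print(inphase)                   # close to mim = 1.0
```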
Figure 2: Topographic imaging of a micropatterned dielectric film at mK
temperatures using tuning-fork-based atomic force microscopy. (a) Optical
image of an etched array of holes in $\mathrm{SiO_{2}}$. The diameter and the
spacing of the holes are $1\,\mu\mathrm{m}$. The hole depth is
$20\,\mathrm{nm}$. (b) AFM spatial scan at $70\,\mathrm{mK}$. The scan covers
$4\,\mu\mathrm{m}\times 4\,\mu\mathrm{m}$ and the scan speed is
$400\,\mathrm{nm/s}$. (c) Cross-sectional line cut corresponding to the black
line in (b).
Figure 3: Microwave impedance microscopy of graphite at milliKelvin
temperatures. (a) Optical image of a graphite flake exfoliated onto a SiO2/Si
substrate. The dark purple region has a thickness of
3–$5\,\mathrm{nm}$, and the light yellow region has a
thickness of $\sim 20\,\mathrm{nm}$. The blue box marks the
imaging window for (c) and (d). (b) Theoretical MIM response curves simulated
at $1.8\text{\,}\mathrm{G}\mathrm{H}\mathrm{z}$, illustrating the evolution of
the MIM contrast with the sample conductivity. Inset: vertical cut of the
potential distribution for the tip-sample interaction, calculated using
finite-element analysis. (c) AFM and MIM imaging of the graphite flake at
$4\,\mathrm{K}$, with the scan window covering the
$20\,\mathrm{nm}$ region (lower left),
3–$5\,\mathrm{nm}$ region (middle), and the SiO2 region (upper
right). The scan speed is $0.5\,\mu\mathrm{m/s}$. (d) AFM and MIM images of the
same location at $70\,\mathrm{mK}$. The scan speed is $0.2\,\mu\mathrm{m/s}$.
## III Results and Discussion
We characterized the low temperature performance of the AFM on a sample
consisting of an array of etched $\mathrm{SiO_{2}}$ holes patterned on a Si
wafer, as depicted in the optical image in Figure 2(a). Cryogenic AFM
measurements are used to visualize the topographic profile of a 5 $\mu$m × 5
$\mu$m scan region at $70\text{\,}\mathrm{m}\mathrm{K}$, as depicted in Figure
2(b). Figure 2(c) shows a cross-sectional cut of this AFM image, whose
position is marked by the black line, revealing a noise level of roughly
$3\text{\,}\mathrm{n}\mathrm{m}$. To more carefully assess the magnitude of
the z-noise during AFM scanning, we performed 96 × 96-pixel noise scans over a
1 nm × 1 nm area, such that the spatial profile is irrelevant. Root mean
square (RMS) roughness was calculated using Gwyddion after line fitting, which
gives z-noise levels in the range of 1.8 - 2.2 nm. Furthermore, upon careful
inspection of Figure 2(b), we noticed that a tilted stripe pattern appears as
a background modulation in the AFM image. By taking a Fourier transform of
this data, we found that the stripe pattern has a frequency of
$1.4\text{\,}\mathrm{H}\mathrm{z}$, which coincides with the frequency of the
pulse tube.
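The per-line fitting step in the roughness analysis above is straightforward to reproduce. The sketch below is a generic reimplementation on synthetic data, not the Gwyddion analysis itself:

```python
import numpy as np

def rms_roughness(z):
    # RMS roughness after removing a linear fit from each scan line,
    # mirroring the line-fitting step performed in Gwyddion.
    x = np.arange(z.shape[1])
    out = np.empty_like(z, dtype=float)
    for i, line in enumerate(z):
        slope, offset = np.polyfit(x, line, 1)  # per-line linear background
        out[i] = line - (slope * x + offset)
    return np.sqrt(np.mean(out ** 2))

# Synthetic 96 x 96 "noise scan": 2 nm white noise on a tilted background
rng = np.random.default_rng(0)
z = 2.0 * rng.standard_normal((96, 96)) + 0.5 * np.arange(96)
print(rms_roughness(z))  # ~2 nm; the tilt is removed by the line fits
```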
Next, to demonstrate our combined AFM and MIM imaging capabilities at low
temperatures, we measured the spatial contrast of the MIM response across the
boundary between graphite and SiO2 at 70 mK. Figure 3(a) shows an optical
image of the graphite sample, which has terraces of varying thicknesses: the
purple region is $\sim 3\,\mathrm{nm}$ and the bright yellow
region is 15–$20\,\mathrm{nm}$. In Figure 3, panels (c) and
(d) display AFM and MIM images of the graphite/SiO2 interface measured at 4 K
and $70\,\mathrm{mK}$, respectively. In both sets of AFM
images, the 3/$20\text{\,}\mathrm{n}\mathrm{m}$ step height in graphite is
clearly visible, while the graphite/SiO2 boundary only shows a faint contour,
as the z-movement of the scanner to compensate for fluctuations in the tip-
sample distance dominates over the $3\text{\,}\mathrm{n}\mathrm{m}$ boundary.
Meanwhile, we observe a sharp contrast in the MIM signal across the
graphite/SiO2 boundary due to the different conductivities of the two
materials, as predicted by the response curves in Figure 3(b). To explain the
experimental observations, one can model the tip-sample interaction for this
system using finite element analysis, which can be used to calculate the MIM
response curves as a function of sample conductivity Barber, Ma, and Shen
(2022). At a measurement frequency of 1.8 GHz, the imaginary part of the MIM
response should monotonically increase with the sample conductivity,
saturating when the resistivity is higher than $10^{-2}$ $\Omega\cdot m$
(insulating limit) or lower than $10^{-5}$ $\Omega\cdot m$ (conductive limit),
as shown in Figure 3(b). A cross-sectional profile of the penetration of the
tip potential into the sample is provided in the inset. We estimate the MIM
spatial resolution to be around $200\text{\,}\mathrm{n}\mathrm{m}$,
constrained by the apex geometry of the etched tungsten tip and mechanical
noise from pulse tube vibrations.
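For intuition, the monotonic shape of the MIM-Im response curve can be caricatured with a lumped-element tip-sample circuit. This is not the finite-element model used above, and apart from the 1.8 GHz frequency every value below is an illustrative assumption.

```python
import numpy as np

# Lumped-element caricature: the tip couples through C_tip to a sample
# modeled as a resistance R in parallel with C_sample.
f = 1.8e9
w = 2 * np.pi * f
C_tip = 1e-16      # tip-sample coupling capacitance (F), assumed
C_sample = 1e-15   # sample self-capacitance (F), assumed
geom = 1e-7        # geometric factor converting resistivity to R (m), assumed

rho = np.logspace(-7, 1, 200)  # resistivity sweep (ohm*m)
R = rho / geom
Y = 1 / (1 / (1j * w * C_tip) + 1 / (1 / R + 1j * w * C_sample))

# Im(Y), the MIM-Im channel, rises monotonically from the insulating plateau
# (C_tip and C_sample in series) to the conductive plateau (C_tip alone).
print(Y.imag[-1] / w, Y.imag[0] / w)  # insulating- and conductive-limit values
```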
Figure 4: Microwave imaging of edge modes in a cadmium arsenide film in the
quantum Hall regime. (a) Cross-sectional schematic of an epitaxially-grown
Cd3As2 heterostructure. (b) Transport measurement of the longitudinal
resistance $R_{xx}$ as a function of magnetic field at
$90\text{\,}\mathrm{m}\mathrm{K}$. The minima correspond to the emergence of
quantum Hall plateaus. (c) MIM image at $6.5\text{\,}\mathrm{T}$, revealing a
sharp enhancement of the reflected signal at the boundaries of a quantum Hall
insulator state. (d) MIM image at $5.4\text{\,}\mathrm{T}$, showing spatially
uniform conductivity at the transition between quantum Hall plateaus (e-f)
Cross-sectional line cuts of the MIM response across the sample extracted from
(c) and (d) respectively.
We also apply this methodology to visualize edge states at the boundaries of
thin film cadmium arsenide (Cd3As2), a novel three-dimensional Dirac
semimetal, in the quantum Hall regime Sankar _et al._ (2015); Schumann _et al._
(2016). A cross-sectional schematic of the epitaxially-grown heterostructure
is shown in Figure 4(a), where the film thickness is
$20\,\mathrm{nm}$ Goyal _et al._ (2018); Lygo _et al._
(2023). The Cd3As2 device is lithographically patterned and etched into strips
of width 10-15 $\mu$m, which are electrically grounded. Transport measurements
were performed to characterize the magnetic field behavior of the sample,
which reveal dips in the longitudinal resistance at around
$4.7\text{\,}\mathrm{T}$ and $6.5\text{\,}\mathrm{T}$, as shown in Figure
4(b). These minima should correspond to the emergence of quantum Hall
plateaus Guo _et al._ (2022).
To shed light on the real-space conductivity profile of Cd3As2 in the quantum
Hall regime and monitor its evolution across the topological phase transition
between plateaus, MIM measurements were performed at a series of magnetic
fields at a base temperature of 90 mK. Microwave imaging reveals a sharp
enhancement of the reflected MIM signal at the boundaries of the sample in the
quantum Hall insulator state, which rapidly decays into the bulk of the
sample, as shown in Figure 4(c). Meanwhile, we observed a spatially-uniform
conductivity at the transition between quantum Hall plateaus when the
longitudinal resistance deviates from zero at B = 5.4 T (Figure 4(d)). The
variation of the MIM signal between different lines comes both from the noise
in the MIM signal and spatial inhomogeneities in the sample.
To more clearly compare the spatial dependence of the MIM signal in these two
regimes, in Figure 4(e-f) we plot the cross-sectional profiles of the MIM
response across the sample extracted from panels (c) and (d), respectively.
These low temperature microwave images reveal sharply enhanced edge conduction
that encloses an insulating interior in the quantum Hall regime, which is
consistent with the results of transport measurements performed on this system
in prior experimental studies.
We note that one way to improve signal quality is to use “floating” AC-mode
MIM, where imaging is performed with the tip retracted a fixed distance
(60–$100\,\mathrm{nm}$) above the sample surface. At this
distance, the AFM channel will not be modulated due to the topography
feedback, but the MIM tip can still interact with the sample via the
electromagnetic fields in the vicinity of the tip (when operated in the near-
field regime). Because periodic oscillations in the tip-sample distance at the
tuning fork resonance are decoupled from the surface roughness of the sample,
noise in the MIM response can be dramatically reduced in floating mode. Figure
5 shows the results of a floating mode MIM scan performed at
$3\,\mathrm{GHz}$ and $T=70\,\mathrm{mK}$, with the tip lifted
$100\,\mathrm{nm}$ above an hBN-covered graphite layer. The
tip apex is around $0.8\,\mu\mathrm{m}$, which is reflected in
the spatial profile of the MIM signal change across the boundary between the
graphite flake and hBN. In this case, the signal-to-noise ratio is even better
than that observed in tapping-mode MIM images (Figure 3(c-d)), which is
especially useful for fixed-location MIM measurements. However, this advantage
comes at the expense of signal size, as the tip is further away from the
sample than for tapping mode.
The choice of tip-sample distance for floating-mode measurements is a
compromise between maximizing signal sensitivity and minimizing the risk of a
tip crash due to vertical fluctuations in the tip-sample distance, which arise
from pulse tube vibrations and are aggravated by the large Q factor of the
tuning fork at mK temperatures. For larger scan windows or rougher sample
surfaces, the tip may need to be retracted further. We expect the sensitivity
of floating mode to be around
$0.01$–$0.1\,\mathrm{aF}/\sqrt{\mathrm{Hz}}$ at
$0.1\,\mu\mathrm{W}$ input power, and in our case the noise is
mostly due to vertical modulations of the tip-sample distance Lai _et al._
(2008).
Figure 5: Demonstration of low-noise height-modulated MIM in a dry dilution
fridge. (a) Optical image of a van der Waals heterostructure with a graphite
gate ($5\,\mathrm{nm}$ thick) covered by hBN
($50\,\mathrm{nm}$ thick). (b) A floating mode MIM image of the
region enclosed by the red square in panel (a), acquired at
$70\text{\,}\mathrm{m}\mathrm{K}$. The measurement frequency is
$3\text{\,}\mathrm{G}\mathrm{H}\mathrm{z}$ and the tip is retracted
$100\text{\,}\mathrm{n}\mathrm{m}$ from the highest feature inside the scan
window.
## IV Conclusion and outlook
In summary, we report on the development of a microwave impedance microscope
that operates at temperatures down to $70\text{\,}\mathrm{m}\mathrm{K}$. This
is achieved by incorporating a TF-based AFM with near-field GHz imaging
capabilities into a dry dilution refrigerator. Pushing the limits of MIM into
new low temperature regimes should enable local sensing of quantum phenomena
that only exist at low energy scales, including certain topological states of
matter, domain wall physics at phase transitions, quantum states arising from
geometric confinement in mesoscopic devices, and correlated states in two-
dimensional materials and van der Waals heterostructures. Because this
instrumentation is equipped with combined transport and imaging capabilities,
it can also illuminate the correspondence between macroscopic transport
behavior and the underlying microscopic nature of electronic states, including
the real-space disorder landscape or presence of edge modes.
During the preparation of this manuscript, we became aware of a pre-print on a
related topic Jiang _et al._ (2023).
###### Acknowledgements.
We thank Alex Lygo and Susanne Stemmer for providing cadmium arsenide devices
for these experiments, Yongtao Cui for inspiring discussions, and Evan Cobb
for helping develop some of the MIM circuitry. We gratefully acknowledge
funding support from the UC Office of the President, specifically the UC
Laboratory Fees Research Program (award LFR-20-653926), the AFOSR Young
Investigator Program (award FA9550-20-1-0035) and the AFOSR/ARO MURI Program
(award FA9550-22-1-0270). This work was performed, in part, at the San Diego
Nanotechnology Infrastructure (SDNI) of UCSD, a member of the National
Nanotechnology Coordinated Infrastructure, which is supported by the National
Science Foundation (Grant ECCS-2025752).
## Data Availability
The data that support the findings of this study are available from the
corresponding author upon reasonable request.
## References
* Lai _et al._ (2007) K. Lai, M. Ji, N. Leindecker, M. Kelly, and Z. Shen, “Atomic-force-microscope-compatible near-field scanning microwave microscope with separated excitation and sensing probes,” Review of scientific instruments 78, 063702 (2007).
* Lai _et al._ (2011a) K. Lai, W. Kundhikanjana, M. A. Kelly, and Z. xun Shen, “Nanoscale microwave microscopy using shielded cantilever probes,” Applied Nanoscience 1, 13–18 (2011a).
* Lai _et al._ (2011b) K. Lai, W. Kundhikanjana, M. A. Kelly, Z.-X. Shen, J. Shabani, and M. Shayegan, “Imaging of coulomb-driven quantum hall edge states,” Phys. Rev. Lett. 107, 176809 (2011b).
* Kundhikanjana _et al._ (2013) W. Kundhikanjana, Y. Yang, Q. Tanga, K. Zhang, K. Lai, Y. Ma, M. Kelly, X. Li, and Z. Shen, “Unexpected surface implanted layer in static random access memory devices observed by microwave impedance microscope,” Semiconductor science and technology 28, 025010 (2013).
* Ma _et al._ (2015a) E. Y. Ma, M. R. Calvo, J. Wang, B. Lian, M. Mühlbauer, C. Brüne, Y.-T. Cui, K. Lai, W. Kundhikanjana, Y. Yang, _et al._ , “Unexpected edge conduction in mercury telluride quantum wells under broken time-reversal symmetry,” Nature communications 6, 7252 (2015a).
* Ma _et al._ (2015b) E. Y. Ma, Y.-T. Cui, K. Ueda, S. Tang, K. Chen, N. Tamura, P. M. Wu, J. Fujioka, Y. Tokura, and Z.-X. Shen, “Mobile metallic domain walls in an all-in-all-out magnetic insulator,” Science 350, 538–541 (2015b).
* Liu _et al._ (2015) Y. Liu, C. Tan, H. Chou, A. Nayak, D. Wu, R. Ghosh, H.-Y. Chang, Y. Hao, X. Wang, J.-S. Kim, _et al._ , “Thermal oxidation of wse2 nanosheets adhered on sio2/si substrates,” Nano letters 15, 4979–4984 (2015).
* Seabron _et al._ (2016) E. Seabron, S. MacLaren, X. Xie, S. V. Rotkin, J. A. Rogers, and W. L. Wilson, “Scanning probe microwave reflectivity of aligned single-walled carbon nanotubes: Imaging of electronic structure and quantum behavior at the nanoscale,” ACS nano 10, 360–368 (2016).
* Chu _et al._ (2020) Z. Chu, E. C. Regan, X. Ma, D. Wang, Z. Xu, M. I. B. Utama, K. Yumigeta, M. Blei, K. Watanabe, T. Taniguchi, S. Tongay, F. Wang, and K. Lai, “Nanoscale conductivity imaging of correlated electronic states in ${\mathrm{WSe}}_{2}/{\mathrm{WS}}_{2}$ Moiré superlattices,” Physical Review Letters 125, 186803 (2020).
* Barber, Ma, and Shen (2022) M. E. Barber, E. Y. Ma, and Z.-X. Shen, “Microwave impedance microscopy and its application to quantum materials,” Nature Reviews Physics 4, 61–74 (2022).
* Eriksson _et al._ (1996) M. Eriksson, R. Beck, M. Topinka, J. Katine, R. Westervelt, K. Campman, and A. Gossard, “Cryogenic scanning probe characterization of semiconductor nanostructures,” Applied Physics Letters 69, 671–673 (1996).
* Döring, Eng, and Kehr (2016) J. Döring, L. M. Eng, and S. C. Kehr, “Low-temperature piezoresponse force microscopy on barium titanate,” Journal of Applied Physics 120, 084103 (2016).
* Lu _et al._ (2017) C.-I. Lu, C. J. Butler, J.-K. Huang, Y.-H. Chu, H.-H. Yang, C.-M. Wei, L.-J. Li, and M.-T. Lin, “Moiré-related in-gap states in a twisted mos2/graphite heterojunction,” npj 2D Materials and Applications 1, 24 (2017).
* McGilly _et al._ (2019) L. McGilly, A. Kerelsky, N. Finney, K. Shapovalov, E.-M. Shih, A. Ghiotto, Y. Zeng, S. Moore, W. Wu, Y. Bai, _et al._ , “Seeing moir$\backslash$’e superlattices,” arXiv preprint arXiv:1912.06629 (2019).
* Rosenberger _et al._ (2020) M. R. Rosenberger, H.-J. Chuang, M. Phillips, V. P. Oleshko, K. M. McCreary, S. V. Sivaram, C. S. Hellberg, and B. T. Jonker, “Twist angle-dependent atomic reconstruction and moiré patterns in transition metal dichalcogenide heterostructures,” ACS nano 14, 4550–4558 (2020).
* Lai _et al._ (2008) K. Lai, W. Kundhikanjana, M. Kelly, and Z. X. Shen, “Modeling and characterization of a cantilever-based near-field scanning microwave impedance microscope,” Review of Scientific Instruments 79, 063703 (2008), https://doi.org/10.1063/1.2949109 .
* Lai _et al._ (2010) K. Lai, M. Nakamura, W. Kundhikanjana, M. Kawasaki, Y. Tokura, M. A. Kelly, and Z.-X. Shen, “Mesoscopic percolating resistance network in a strained manganite thin film,” Science 329, 190–193 (2010), https://www.science.org/doi/pdf/10.1126/science.1189925 .
* Ma _et al._ (2015c) E. Y. Ma, B. Bryant, Y. Tokunaga, G. Aeppli, Y. Tokura, and Z.-X. Shen, “Charge-order domain walls with enhanced conductivity in a layered manganite,” Nature Communications 6, 7595 (2015c).
* Wu _et al._ (2018) X. Wu, Z. Hao, D. Wu, L. Zheng, Z. Jiang, V. Ganesan, Y. Wang, and K. Lai, “Quantitative measurements of nanoscale permittivity and conductivity using tuning-fork-based microwave impedance microscopy,” Review of Scientific Instruments 89, 043704 (2018), https://doi.org/10.1063/1.5022997 .
* Allen _et al._ (2019) M. Allen, Y. Cui, E. Yue Ma, M. Mogi, M. Kawamura, I. C. Fulga, D. Goldhaber-Gordon, Y. Tokura, and Z.-X. Shen, “Visualization of an axion insulating state at the transition between 2 chiral quantum anomalous hall states,” Proceedings of the National Academy of Sciences 116, 14511–14515 (2019).
* Kundhikanjana _et al._ (2011) W. Kundhikanjana, K. Lai, M. A. Kelly, and Z.-X. Shen, “Cryogenic microwave imaging of metal–insulator transition in doped silicon,” Review of Scientific Instruments 82, 033705 (2011), https://doi.org/10.1063/1.3554438 .
* Khan _et al._ (2012) Y. Khan, H. Al-Falih, Y. Zhang, T. K. Ng, and B. S. Ooi, “Two-step controllable electrochemical etching of tungsten scanning probe microscopy tips,” Review of Scientific Instruments 83, 063708 (2012), https://doi.org/10.1063/1.4730045 .
* Edwards _et al._ (1997) H. Edwards, L. Taylor, W. Duncan, and A. J. Melmed, “Fast, high-resolution atomic force microscopy using a quartz tuning fork as actuator and sensor,” Journal of Applied Physics 82, 980–984 (1997), https://doi.org/10.1063/1.365936 .
* Pozar (2011) D. M. Pozar, _Microwave engineering_ (John wiley & sons, 2011).
* Cui, Ma, and Shen (2016) Y.-T. Cui, E. Y. Ma, and Z.-X. Shen, “Quartz tuning fork based microwave impedance microscopy,” Review of Scientific Instruments 87, 063711 (2016).
* Wang and Gifford (2002) C. Wang and P. Gifford, “Development of 4 k pulse tube cryorefrigerators at cryomech,” in _AIP conference proceedings_ , Vol. 613 (American Institute of Physics, 2002) pp. 641–648.
* Chijioke and Lawall (2010) A. Chijioke and J. Lawall, “Vibration spectrum of a pulse-tube cryostat from 1hz to 20khz,” Cryogenics 50, 266–270 (2010).
* Li _et al._ (2005) R. Li, Y. Ikushima, T. Koyama, T. Tomaru, T. Suzuki, T. Haruyama, T. Shintomi, and A. Yamamoto, “Vibration-free pulse tube cryocooler system for gravitational wave detectors, part ii: cooling performance and vibration,” in _Cryocoolers 13_ (Springer, 2005) pp. 703–710.
* Sankar _et al._ (2015) R. Sankar, M. Neupane, S.-Y. Xu, C. Butler, I. Zeljkovic, I. Panneer Muthuselvam, F.-T. Huang, S.-T. Guo, S. K. Karna, M.-W. Chu, _et al._ , “Large single crystal growth, transport property and spectroscopic characterizations of three-dimensional dirac semimetal cd3as2,” Scientific reports 5, 12966 (2015).
* Schumann _et al._ (2016) T. Schumann, M. Goyal, H. Kim, and S. Stemmer, “Molecular beam epitaxy of cd3as2 on a iii-v substrate,” APL Materials 4, 126110 (2016).
* Goyal _et al._ (2018) M. Goyal, L. Galletti, S. Salmani-Rezaie, T. Schumann, D. A. Kealhofer, and S. Stemmer, “Thickness dependence of the quantum hall effect in films of the three-dimensional dirac semimetal cd3as2,” APL Materials 6, 026105 (2018).
* Lygo _et al._ (2023) A. C. Lygo, B. Guo, A. Rashidi, V. Huang, P. Cuadros-Romero, and S. Stemmer, “Two-dimensional topological insulator state in cadmium arsenide thin films,” arXiv preprint arXiv:2301.02759 (2023).
* Guo _et al._ (2022) B. Guo, A. C. Lygo, X. Dai, and S. Stemmer, “$\nu$= 0 quantum hall state in a cadmium arsenide thin film,” APL Materials 10, 091116 (2022).
* Jiang _et al._ (2023) Z. Jiang, S. K. Chong, P. Zhang, P. Deng, S. Chu, S. Jahanbani, K. L. Wang, and K. Lai, “Implementing microwave impedance microscopy in a dilution refrigerator,” arXiv preprint arXiv:2304.08323 (2023).
* Wang _et al._ (2023) T. Wang, C. Wu, M. Mogi, M. Kawamura, Y. Tokura, Z.-X. Shen, Y.-Z. You, and M. T. Allen, “Theory of the microwave impedance microscopy of chern insulators,” arXiv preprint arXiv:2304.09227 (2023).
* Lin _et al._ (2022) W. Lin, Y. Feng, Y. Wang, J. Zhu, Z. Lian, H. Zhang, H. Li, Y. Wu, C. Liu, Y. Wang, _et al._ , “Direct visualization of edge state in even-layer mnbi2te4 at zero magnetic field,” Nature Communications 13, 7714 (2022).
|
# KGAP: Knowledge Graph Augmented Political
Perspective Detection in News Media
Shangbin Feng♠ Zilong Chen♠ Wenqian Zhang♠
Qingyao Li♠ Qinghua Zheng♠ Xiaojun Chang♢ Minnan Luo♠
Xi’an Jiaotong University♠ University of Technology Sydney♢
<EMAIL_ADDRESS>
<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
Shangbin Feng, Zilong Chen, and Wenqian Zhang contributed equally to this work. Minnan Luo is the corresponding author.
###### Abstract
Identifying political perspectives in news media has become an important task
due to the rapid growth of political commentary and the increasingly polarized
political ideologies. Previous approaches focus on textual content and leave
out the rich social and political context that is essential in the perspective
detection process. To address this limitation, we propose KGAP, a political
perspective detection method that incorporates external domain knowledge.
Specifically, we construct a political knowledge graph to serve as domain-
specific external knowledge. We then construct heterogeneous information
networks to represent news documents, which jointly model news text and
external knowledge. Finally, we adopt relational graph neural networks and
conduct political perspective detection as graph-level classification.
Extensive experiments demonstrate that our method consistently achieves the
best performance on two real-world perspective detection benchmarks. Ablation
studies further bear out the necessity of external knowledge and the
effectiveness of our graph-based approach.
## 1 Introduction
The past decade has witnessed dramatic changes in political commentary (Wilson
et al., 2020) in two major ways. First, the popularity of social media has
led to a dramatic increase in political discussion on online media and social
networks. Second, ever-increasing political polarization has made it hard for
journalists and news agencies to remain impartial. Detecting
political perspectives in the news media would help alleviate the
“echo chamber” problem (Barberá et al., 2015), in which a single viewpoint is
reiterated within closely knit communities, further deepening the divide.
Political perspective detection has therefore become a pressing task that
calls for further research effort.
Previous news bias detection methods have focused on analyzing the textual
content of news articles. Recurrent neural networks (Yang et al., 2016) and
pre-trained language models (Devlin et al., 2018) are adopted by Li and
Goldwasser (2021) to analyze news content for perspective analysis. Jiang et
al. (2019) use convolutional neural networks with GloVe word embeddings
(Pennington et al., 2014) for political perspective detection and achieve the
best result in the hyperpartisan news detection task at SemEval 2019 (Kiesel
et al., 2019). Li and Goldwasser (2021) further leverage the attention
mechanism and entity mentions in text and achieve state-of-the-art results.
Figure 1: External knowledge of social and political entities in news articles
that is essential for political perspective detection.
However, these text-based methods fail to incorporate the pervasive social and
political context in news articles, whereas human readers rely on external
knowledge as background information to facilitate commonsense reasoning and
perspective detection. For example, Figure 1 presents a news article that
contains real-world entities such as political organizations, elected
officials, and geographical locations in the United States. External knowledge
of these entities informs the reader that the mentioned individuals are
Republicans and mostly come from conservative states, which helps to identify
the liberal stance expressed in criticizing them. This reasoning demonstrates
that political perspective detection relies on social and political context as
background information in addition to news content. Consequently, political
perspective detection methods should leverage external knowledge to reason
beyond text and identify implicit perspectives.
Figure 2: Overview of our proposed approach KGAP.
In light of these challenges, we propose KGAP (Knowledge Graph Augmented
Political perspective detection), which leverages domain-specific external
knowledge to incorporate background information and augment argument
reasoning. Specifically, we firstly construct a political knowledge graph to
represent the external knowledge that serves as background information for
political narratives. We then construct heterogeneous information networks to
represent news documents, which jointly model textual content and external
knowledge in the knowledge graph. Finally, we adopt relational graph neural
networks for graph representation learning and conduct political perspective
detection as graph-level classification. Our main contributions are summarized
as follows:
* •
To the best of our knowledge, we construct the first domain-specific knowledge
graph of contemporary U.S. politics with 1,071 entities and 10,703 triples to
serve as external knowledge for political perspective detection. We will
publicly release the political knowledge graph upon acceptance to facilitate research
in related tasks.
* •
We propose to leverage domain knowledge and model news articles as
heterogeneous information networks for political perspective detection. Our
approach is end-to-end and inductive, framing the task as graph-level
classification to effectively incorporate external knowledge.
* •
We conduct extensive experiments to evaluate KGAP and competitive baselines.
KGAP consistently outperforms state-of-the-art methods on two real-world
datasets. Further studies demonstrate the necessity of external knowledge and
explore the effect of knowledge graphs, text models, and the news graph
structure in KGAP.
## 2 Related Work
### 2.1 Political Perspective Detection
The task of political perspective detection is generally studied in two
different settings: social media and news media. For stance detection in
social media, many works have studied the problem as text classification to
identify stances in specific posts based on their textual content. They used
techniques such as sentiment analysis (Jiang et al., 2011) and neural
attention networks (Du et al., 2017). Other approaches explored the problem of
identifying stances of social media users instead of individual posts. Darwish
et al. (2020) conducted user clustering to identify their stances. Magdy et
al. (2016) proposed to predict user perspectives on major events based on
network dynamics and user interactions.
For political perspective detection in news media, news documents are often
classified into several perspective labels. Previous works have proposed to
leverage semantic information in news articles with bias features (Horne et
al., 2018) and various deep language encoders (Li and Goldwasser, 2021; Yang
et al., 2016; Jiang et al., 2019). Later approaches have tried to enrich the
textual content with the graph structures of online communities that interact
with different news outlets. Baly et al. (2020) proposes to leverage media
sources and design a triplet loss for political perspective detection. Li and
Goldwasser (2019) supplemented news articles with a network of Twitter users
and their interaction. In this paper, we focus on political perspective
detection in news media and propose to incorporate political external
knowledge.
### 2.2 Knowledge-aware NLP
Knowledge graphs (KGs) are often leveraged as external knowledge sources in
various tasks in natural language processing. Yu et al. (2021) leverages
knowledge graphs for open-domain question answering. Hu et al. (2021)
incorporates KGs in fake news detection with graph neural networks, knowledge
graph embedding techniques and Wikipedia description of KG entities. Liu et
al. (2021) proposes to extract KG subgraphs and combine textual context and
mentioned KG entities as input to language models. Zhu et al. (2021) fuses
commonsense statements in KGs with textual context using a transformer-based
encoder-decoder architecture. In this paper, we construct a politics-specific
knowledge graph and build on these works to incorporate political KGs into
stance detection. Specifically, we explore the novel approach of leveraging
knowledge graph embeddings (Bordes et al., 2013) as initial node features to
enable knowledge-text interaction from a structural perspective.
## 3 Methodology
Figure 2 presents an overview of our knowledge-aware political perspective
detection approach KGAP. Specifically, we firstly construct a domain-specific
knowledge graph of contemporary politics to serve as external knowledge for
perspective detection. We then transform news articles into heterogeneous
information networks, which jointly models textual content and related
external knowledge. Finally, we adopt gated relational graph convolutional
networks to learn graph representations and conduct political perspective
detection.
### 3.1 Knowledge Graph Construction
Existing knowledge graphs (KGs) such as ConceptNet (Speer et al., 2017),
Freebase (Bollacker et al., 2008), and YAGO (Tanon et al., 2020) have been
leveraged in various NLP tasks such as misinformation detection (Hu et al.,
2021), knowledge-aware question answering (Yu et al., 2021), and emotion
detection (Zhu et al., 2021). However, the task of political perspective
detection demands politics-related external knowledge to facilitate argument
reasoning, while existing KGs are too generic to be compatible or effective.
In this paper, we propose to construct a political knowledge graph and hope to
complement the scarce literature in domain-specific knowledge graph
construction. The design goals of our political knowledge graph are summarized
as follows:
* •
domain-specific: the knowledge graph should focus on closely related political
background knowledge for efficient knowledge infusion.
* •
diversified sources: the knowledge graph should draw from both generic
information sources and political expert knowledge.
* •
scalable: the knowledge graph construction process should be able to expand in
scope to cope with unseen entities and task-specific settings.
* •
adaptable: the knowledge graph construction process should be applicable to
different nations, time periods, and political systems.
Bearing these design principles in mind, we first select active U.S. political
actors in the past decade as entities and determine ten types of political
relations to ensure the KG is domain-specific. We then draw from Wikipedia
pages of political actors as generic knowledge and two political think tanks,
AFL-CIO (aflcio.org/scorecard) and Heritage Action (heritageaction.com/scorecard), as political expert knowledge to
determine triples and ensure the KG leverages diversified sources. Based on
the initial entities and relations, we then leverage co-reference resolution
(Lee et al., 2018) to expand the political KG by identifying new entities in
Wikipedia documents. As a result, we construct a scalable KG while ensuring
it’s domain-specific for political perspective detection. Our political KG
focuses on U.S. politics in the past decade to better align with news corpus,
while our proposed construction process is adaptable to other nations and time
periods by starting with corresponding political actors. As a result, we
obtain a political knowledge graph with 1,071 entities and 10,703 triples to
serve as external knowledge. A complete list of entities and relations, along
with more KG construction details, can be found in the technical appendix.
### 3.2 News Graph Construction
Graphs and graph neural networks have become increasingly involved in NLP
tasks such as misinformation detection (Hu et al., 2021) and question
answering (Yu et al., 2021). In this paper, we construct news graphs to
jointly model textual content and external knowledge. We also propose to
leverage knowledge graph embeddings as node attributes to enable structural
knowledge-text interaction. Specifically, we firstly determine the nodes in
the news graph:
* •
document node: we use one document node to represent the news article. We use
pre-trained RoBERTa (Liu et al., 2019) to encode the news title and use it as the node
attribute $v^{d}$.
* •
paragraph node: we use one node to represent each paragraph and use pre-
trained RoBERTa for node features $v^{p}$.
* •
entity node: we use one node for each entity in our political KG. We propose
the novel approach of using TransE (Bordes et al., 2013) KG embeddings as node
attributes $v^{e}$.
After determining the nodes in the news graph, we construct a heterogeneous
graph by connecting them with three types of edges:
* •
doc-para edge: the document node is connected with each paragraph node by doc-
para edges. This ensures that every paragraph contributes to the overall
perspective analysis.
* •
para-para edge: each paragraph node is connected with paragraphs that precede
and follow it. These edges preserve the sequential flow of the original news
document.
* •
para-ent edge: each paragraph node is connected with mentioned entities in our
political KG. We conduct co-reference resolution and adopt TagMe (Ferragina
and Scaiella, 2011) to align text with entities. These edges aim to
incorporate external knowledge for stance detection.
We denote these three edge types as $R=\{\text{doc-para},\text{para-para},\text{para-ent}\}$. As a
result, we obtain news graphs for each news article, which jointly model
textual content and external knowledge to enable knowledge-text interaction
from a structural perspective. We then transform node attributes $v^{d}$,
$v^{p}_{i}$, and $v^{e}_{i}$ with fully connected layers to obtain the initial
node features, where $i$ denotes the $i$-th node in the set. Specifically, for
document node and paragraph nodes:
$x_{0}^{(0)}=\phi(W_{S}\cdot v^{d}+b_{S}),\quad x_{i}^{(0)}=\phi(W_{S}\cdot v^{p}_{i}+b_{S})$ (1)
where $W_{S}$ and $b_{S}$ are learnable parameters and $\phi$ denotes Leaky-
ReLU. Similarly, we use another fully connected layer to obtain initial
features for entity nodes:
$x_{i}^{(0)}=\phi(W_{E}\cdot v^{e}_{i}+b_{E})$ (2)
where $W_{E}$ and $b_{E}$ are learnable parameters. In the following, we use
$x^{(0)}$ to denote initial node features for graph neural networks.
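To make the graph construction concrete, the following is a minimal sketch of how such a news graph could be assembled with PyTorch Geometric (the library the implementation section reports using). The function name, tensor shapes, and the assumption that entity links arrive as (paragraph index, entity index) pairs are illustrative choices, not the authors' released code.

```python
import torch
import torch.nn as nn
from torch_geometric.data import HeteroData

# Illustrative dimensions: 768-d RoBERTa features for document/paragraph
# nodes and 200-d TransE features for entity nodes (the SemEval setting).
ROBERTA_DIM, TRANSE_DIM, HIDDEN = 768, 200, 512

proj_text = nn.Linear(ROBERTA_DIM, HIDDEN)   # W_S, b_S in Eq. (1)
proj_ent = nn.Linear(TRANSE_DIM, HIDDEN)     # W_E, b_E in Eq. (2)
phi = nn.LeakyReLU()

def build_news_graph(v_doc, v_paras, v_ents, para_ent_pairs):
    """v_doc: (768,); v_paras: (P, 768); v_ents: (E, 200);
    para_ent_pairs: list of (paragraph_idx, entity_idx) from entity linking."""
    g = HeteroData()
    # Initial node features via Eqs. (1) and (2).
    g['doc'].x = phi(proj_text(v_doc.unsqueeze(0)))
    g['para'].x = phi(proj_text(v_paras))
    g['ent'].x = phi(proj_ent(v_ents))

    P = v_paras.size(0)
    # doc-para: the document node connects to every paragraph.
    g['doc', 'doc-para', 'para'].edge_index = torch.stack(
        [torch.zeros(P, dtype=torch.long), torch.arange(P)])
    # para-para: consecutive paragraphs are connected (one direction shown;
    # reverse edges would complete the "preceding and following" links).
    g['para', 'para-para', 'para'].edge_index = torch.stack(
        [torch.arange(P - 1), torch.arange(1, P)])
    # para-ent: paragraphs connect to the KG entities they mention.
    g['para', 'para-ent', 'ent'].edge_index = torch.tensor(
        para_ent_pairs, dtype=torch.long).t().contiguous()
    return g
```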
Method | Text+ | SemEval Acc | SemEval MaF | AllSides Acc | AllSides MaF
---|---|---|---|---|---
CNN_GloVe | | 79.63 | N/A | N/A | N/A
CNN_ELMo | | 84.04 | N/A | N/A | N/A
HLSTM_GloVe | | 81.58 | N/A | N/A | N/A
HLSTM_ELMo | | 83.28 | N/A | N/A | N/A
HLSTM_Embed | | 81.71 | N/A | 76.45 | 74.95
HLSTM_Output | | 81.25 | N/A | 76.66 | 75.39
BERT | | 84.03 | 82.60 | 81.55 | 80.13
MAN_GloVe | ✓ | 81.58 | 79.29 | 78.29 | 76.96
MAN_ELMo | ✓ | 84.66 | 83.09 | 81.41 | 80.44
MAN_Ensemble | ✓ | 86.21 | 84.33 | 85.00 | 84.25
KGAP_RGCN | ✓ | 89.22 | 84.41 | **86.98** | **86.53**
KGAP_GRGCN | ✓ | **89.56** | **84.94** | 86.02 | 85.52
Table 1: Model performance on the SemEval and Allsides datasets. N/A denotes
that the result is not reported in the related work. Text+ indicates whether
the method leverages more than textual content. Acc and MaF denote accuracy
and macro-averaged F1-score. Bold indicates the best result.
### 3.3 Learning and Optimization
We propose to adopt gated relational graph convolutional networks (gated
R-GCNs) to learn representations for the news graphs and conduct political
perspective detection as graph-level classification. For the $l$-th layer of
gated R-GCN, we firstly aggregate messages from neighbors as follows:
$u_{i}^{(l)}=\Theta_{s}\cdot x_{i}^{(l-1)}+\sum_{r\in R}\sum_{j\in
N_{r}(i)}\frac{1}{|N_{r}(i)|}\Theta_{r}\cdot x_{j}^{(l-1)}$ (3)
where $u_{i}^{(l)}$ is the hidden state for the $i$-th node in the $l$-th
layer, $N_{r}(i)$ is node $i$’s neighborhood of relation $r$, $\Theta_{s}$ and
$\Theta_{r}$ are learnable parameters. We then calculate gate levels:
$a_{i}^{(l)}=\sigma(W_{A}\cdot[u_{i}^{(l)},x_{i}^{(l-1)}]+b_{A})$ (4)
where $\sigma(\cdot)$ is the sigmoid function, $[\cdot,\cdot]$ denotes the
concatenation operation, $W_{A}$ and $b_{A}$ are learnable parameters. We then
apply the gate to $u_{i}^{(l)}$ and $x_{i}^{(l-1)}$:
$x_{i}^{(l)}=\tanh(u_{i}^{(l)})\odot a_{i}^{(l)}+x_{i}^{(l-1)}\odot(1-a_{i}^{(l)})$ (5)
where $x_{i}^{(l)}$ is the output of the $l$-th gated R-GCN layer and $\odot$
denotes the Hadamard product operation.
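The layer defined by Eqs. (3)–(5) can be sketched in plain PyTorch as follows. This is our illustrative reading of the equations (dense per-relation edge lists, one shared feature matrix), not the authors' implementation:

```python
import torch
import torch.nn as nn

class GatedRGCNLayer(nn.Module):
    """Sketch of Eqs. (3)-(5): relational message passing with an update gate.
    x has shape (N, d); edge_indices is a list of (src, dst) index tensors,
    one per relation in R."""
    def __init__(self, dim, num_relations):
        super().__init__()
        self.theta_s = nn.Linear(dim, dim, bias=False)   # Θ_s
        self.theta_r = nn.ModuleList(
            nn.Linear(dim, dim, bias=False) for _ in range(num_relations))
        self.gate = nn.Linear(2 * dim, dim)              # W_A, b_A

    def forward(self, x, edge_indices):
        # Eq. (3): self-transform plus mean-aggregated relational messages.
        u = self.theta_s(x)
        for r, (src, dst) in enumerate(edge_indices):
            msg = self.theta_r[r](x[src])
            agg = torch.zeros_like(x).index_add_(0, dst, msg)
            deg = torch.zeros(x.size(0), 1, device=x.device).index_add_(
                0, dst, torch.ones(dst.size(0), 1, device=x.device))
            u = u + agg / deg.clamp(min=1)
        # Eq. (4): sigmoid gate over the concatenated update and input.
        a = torch.sigmoid(self.gate(torch.cat([u, x], dim=-1)))
        # Eq. (5): gated combination of the new and previous representations.
        return torch.tanh(u) * a + x * (1 - a)
```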
After applying a total of $L$ gated R-GCN layers, we obtain the learned node
representations $x^{(L)}$. We then apply average pooling on paragraph node
representations:
$v^{g}=\frac{1}{s}\sum_{i=1}^{s}x_{i}^{(L)}$ (6)
where $s$ denotes the total number of paragraphs in the news document. We then
transform $v^{g}$ with a softmax layer:
$\hat{y}=\mathrm{softmax}(W_{O}\cdot v^{g}+b_{O})$ (7)
where $\hat{y}$ is the predicted perspective label, $W_{O}$ and $b_{O}$ are
learnable parameters. We then derive the loss function $\mathcal{L}$:
$\mathcal{L}=-\sum_{D}\sum_{i=1}^{Y}y_{i}\log(\hat{y}_{i})+\lambda\sum_{w\in\theta}w^{2}$ (8)
where $Y$ is the number of stance labels, $y$ denotes stance annotation, $D$
represents the dataset, $\theta$ denotes all learnable parameters in our
proposed model, and $\lambda$ is a hyperparameter.
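Eqs. (6)–(8) then amount to mean-pooling the paragraph nodes, a linear classifier with softmax, and cross-entropy with L2 regularization. A brief sketch follows, where handling the $\lambda$ term through the optimizer's weight_decay is our choice (it matches $\lambda=10^{-5}$ in Table 3):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Readout(nn.Module):
    """Sketch of Eqs. (6)-(7): average pooling over paragraph nodes followed
    by a linear layer; the softmax is folded into the cross-entropy loss."""
    def __init__(self, dim, num_classes):
        super().__init__()
        self.out = nn.Linear(dim, num_classes)   # W_O, b_O

    def forward(self, x_para):                   # x_para: (s, d)
        v_g = x_para.mean(dim=0)                 # Eq. (6): average pooling
        return self.out(v_g)                     # logits for Eq. (7)

# Eq. (8): cross-entropy plus λ·Σ w²; the L2 term is realized as weight decay.
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)
# loss = F.cross_entropy(logits.unsqueeze(0), label)
```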
## 4 Experiments
Dataset | # News Articles | # Classes | Class Distribution | Avg. # Sentences | Avg. # Words
---|---|---|---|---|---
SemEval | 645 | 2 | 407 / 238 | 27.11 | 494.29
Allsides | 10,385 | 3 | 4,164 / 3,931 / 2,290 | 49.96 | 1040.05
Table 2: Details of the two datasets, SemEval and Allsides.
### 4.1 Dataset
We evaluate our approach on the same benchmark datasets as in previous works
(Li and Goldwasser, 2019, 2021), namely SemEval and Allsides. Dataset details
are presented in Table 2. SemEval is the training dataset from the
SemEval 2019 Task 4: Hyperpartisan News Detection (Kiesel et al., 2019).
Allsides is a larger and more diversified political perspective detection
dataset collected in Li and Goldwasser (2019). We follow the same settings in
Li and Goldwasser (2019, 2021) so that our results are directly comparable
with previous works.
### 4.2 Baselines
We compare our method with the following competitive baselines on the SemEval
and Allsides benchmarks:
* •
CNN (Jiang et al., 2019) is the first place solution from the SemEval 2019
Task 4 contest (Kiesel et al., 2019). It uses GloVe (CNN_GloVe) and ELMo
(CNN_ELMo) word embeddings with convolutional layers for stance prediction.
* •
HLSTM (Li and Goldwasser, 2019) stands for hierarchical long short-term memory
networks (Yang et al., 2016). It encodes news with GloVe (HLSTM_GloVe) or
ELMo (HLSTM_ELMo) embeddings and hierarchical LSTMs, and aggregates with
self-attention for political perspective detection.
* •
HLSTM_Embed and HLSTM_Output (Li and Goldwasser, 2021) use Wikipedia2Vec
(Yamada et al., 2018) and BERT-inspired masked entity models to learn entity
representations and concatenate them with word embeddings or document
representation for stance detection.
* •
BERT (Devlin et al., 2018): the pre-trained BERT model is fine-tuned on the
specific task of political perspective detection.
* •
MAN (Li and Goldwasser, 2021) is a political perspective detection method that
is pre-trained with social and linguistic information and fine-tuned with news
bias labels.
Figure 3: Statistics of entity mentions and para-ent edges under generic and
domain-specific knowledge graphs. The line, shadow, and line segment indicate
the average, distribution, and range of entity mentions and para-ent edges on
the two benchmark datasets.
### 4.3 Implementation
We implement our proposed knowledge-aware political perspective detection
approach with pytorch (Paszke et al., 2019), pytorch lightning (Falcon and The
PyTorch Lightning team, 2019), pytorch geometric (Fey and Lenssen, 2019), and
the HuggingFace transformers library (Wolf et al., 2020). We present the
hyperparameter settings of our proposed knowledge-aware political perspective
detection approach in Table 3 to facilitate reproduction. We follow the same
hyperparameter settings in Table 3 to compare with baselines and conduct
ablation studies unless stated otherwise.
Hyperparameter | SemEval / Allsides
---|---
RoBERTa embedding size | 768
TransE embedding size | 200 / 768
gated R-GCN hidden size | 512
optimizer | Adam
learning rate | $10^{-3}$
batch size | 16
dropout | 0.5 / 0.6
maximum epochs | 50 / 150
$L2$-regularization $\lambda$ | $10^{-5}$
gated R-GCN layer count $L$ | 2
Table 3: Hyperparameter settings of our model.
### 4.4 Experiment Results
We run KGAP with R-GCN (KGAP_RGCN) and gated R-GCN (KGAP_GRGCN) and report
model performance in Table 1, which demonstrates that:
* •
KGAP consistently outperforms all competitive baselines on both SemEval and
Allsides, including the state-of-the-art method MAN (Li and Goldwasser, 2021).
* •
Two methods that leverage information other than textual content, namely MAN
and ours, outperform other baselines. Our method further incorporates external
knowledge in political knowledge graphs and outperforms MAN.
* •
For methods that use word embeddings, ELMo variants often outperform their GloVe
counterparts. This suggests that text modeling is still essential in political
perspective detection.
Figure 4: Average model performance and standard deviation when triples in our
constructed political knowledge graph (a-d) and para-ent edges in our news
graphs (e-h) are gradually removed.
In conclusion, KGAP achieves state-of-the-art performance on two widely
adopted benchmarks. In the following, we study the effect of external
knowledge, textual content, and graphs in our method and the task of political
perspective detection. We run all experiments five times to ensure a
consistent evaluation and report the average performance as well as standard
deviation if necessary.
### 4.5 Knowledge Study
#### 4.5.1 Knowledge Density
We study the effect of knowledge density with two different settings. We
gradually remove triples in our constructed political KG and report model
performance in Figure 4 (a) - (d). We then gradually remove para-ent edges in
news graphs and report model performance in Figure 4 (e) - (h). In both
settings, model performance drops significantly with very little knowledge,
while quickly saturating with $10\%$ to $20\%$ external knowledge. Another
performance boost is often witnessed at $100\%$ external knowledge, which
argues for complete KGs. Besides, the effect of external knowledge is greater
on the smaller dataset SemEval, which suggests that our knowledge-aware
approach is especially effective with limited data.
#### 4.5.2 Knowledge Type
To examine whether our constructed political knowledge graph is essential in
our model’s performance, we compare it with various generic knowledge graphs
(Speer et al., 2017; Bollacker et al., 2008; Tanon et al., 2020) that are
widely adopted in NLP tasks. Using TagMe (Ferragina and Scaiella, 2011) as a
tool for text-entity alignment, Figure 3 illustrates how many KG entities are
mentioned in news articles and how many para-ent edges are captured in
constructed news graphs. It shows that fewer entities and para-ent edges are
provided by our politics-specific knowledge graph, which indicates that our KG
adds little computational burden to the news graphs. We then report model
performance paired with different knowledge graphs in Table 4. It is
demonstrated that our domain-specific knowledge graph outperforms generic
knowledge graphs in political perspective detection, which indicates
successful domain-specific KG construction and demonstrates the necessity and
efficiency of our political knowledge graph.
KG | SemEval Acc | SemEval MaF | AllSides Acc | AllSides MaF
---|---|---|---|---
ConceptNet | 87.81 | 82.65 | 85.59 | 85.08
FreeBase | 87.98 | 82.73 | 85.84 | 85.32
YAGO | 87.30 | 81.78 | 85.18 | 84.66
Ours | **89.56** | **84.94** | **86.02** | **85.52**
Table 4: Average model performance when using generic and domain-specific
knowledge graphs as external knowledge.

KGE Method | SemEval Acc | SemEval MaF | AllSides Acc | AllSides MaF
---|---|---|---|---
TransE | 89.56 | 84.94 | 86.02 | 85.52
TransR | 88.54 | 83.45 | 85.15 | 84.61
DistMult | 88.51 | 83.63 | 84.47 | 83.90
HolE | 88.85 | 83.68 | 84.78 | 84.24
RotatE | 88.84 | 84.04 | 85.61 | 85.11
#### 4.5.3 Knowledge Graph Embedding
In this paper, we propose the novel approach of leveraging knowledge graph
embeddings as initial node features to facilitate knowledge-text interaction
from a structural perspective. We further explore this idea with other
knowledge graph embedding techniques (Lin et al., 2015; Yang et al., 2014;
Nickel et al., 2016; Sun et al., 2019) and report model performance in Table
5. It is demonstrated that model performance on both datasets does not change
significantly, indicating that our approach does not rely on specific
knowledge graph embedding techniques. We further train TransE (Bordes et al.,
2013) to different extents and report performance in Figure 5. It is
illustrated that as little as 10 epochs of TransE training would lead to
better task performance than random initialization (0 epoch), while our
approach is generally effective with knowledge graph embeddings with more
training epochs.
Figure 5: Average model performance and standard deviation when TransE KG
embeddings are trained for different numbers of epochs.
Figure 6: Average performance of text models (base, lighter) and of combining
them with our news graph structure (graph, darker).
### 4.6 Text Study
We observed in Section 4.4 that text modeling plays an important role in
political perspective detection. We further explore how different text
analysis techniques perform on their own and when combined with our news
graphs. Specifically, we encode news text with text analysis techniques and
conduct political perspective detection with fully connected layers for text-
only settings. We then substitute the paragraph node attributes with these
methods for graph-based settings. Figure 6 demonstrates that advanced language
models consistently outperform classic methods, indicating the correlation
between model performance and text processing ability. Besides, Figure 6 shows
that language modeling methods perform better when combined with our proposed
news graph, indicating the necessity of incorporating structural information
and external knowledge in political perspective detection.
### 4.7 Graph Study
We propose the novel approach of constructing news graphs to jointly model
textual content and external knowledge while conducting political perspective
detection as graph-level classification. To examine the effect of news graphs,
we firstly remove three types of edges and report model performance in Table
6. It is demonstrated that all three types of edges contribute to model
performance, while para-ent edges are most essential on the smaller SemEval
since they help incorporate external knowledge to boost task performance on
limited data. Besides, we adopt R-GCN and gated R-GCN for graph representation
learning on our constructed news graphs. We further explore the effect of GNNs
by changing to other GNN layers and report performance in Table 7, which
indicates the necessity of GNN heterogeneity in our approach.
Ablation Setting | SemEval Acc | SemEval MaF | AllSides Acc | AllSides MaF
---|---|---|---|---
no doc-para edges | 89.03 | 84.23 | – | –
no para-para edges | 88.41 | 83.14 | 82.52 | 81.83
no para-ent edges | 86.71 | 80.60 | 85.54 | 85.02
full graph | **89.56** | **84.94** | **86.02** | **85.52**
Table 6: Average model performance when the three different types of edges in
the news graphs are respectively removed.
GNN Type | Het. | SemEval Acc | SemEval MaF | AllSides Acc | AllSides MaF
---|---|---|---|---|---
GCN | | 87.20 | 81.63 | 86.91 | 86.43
GAT | | 89.00 | 84.41 | 84.12 | 83.33
GraphSAGE | | 89.12 | 84.47 | 86.77 | 86.24
R-GCN | ✓ | 89.22 | 84.41 | **86.98** | **86.53**
Gated R-GCN | ✓ | **89.56** | **84.94** | 86.02 | 85.52
Table 7: Average model performance with various heterogeneous and homogeneous
GNNs. Het. indicates heterogeneous GNNs.
## 5 Conclusion
In this paper, we propose an end-to-end, inductive and graph-based approach to
leverage domain-specific external knowledge in political perspective
detection. We firstly construct a domain-specific knowledge graph as external
knowledge for political perspective detection. We then construct news graphs
to jointly model news content and external knowledge. Finally, we adopt
relational graph neural networks to learn graph representations and frame
the task as graph-level classification. Extensive experiments demonstrate that
KGAP consistently outperforms state-of-the-art methods on two widely adopted
perspective detection benchmarks. Further ablation studies indicate the
necessity of external knowledge and examine the effect of knowledge, text, and
graph in KGAP.
## References
* Baly et al. (2020) Ramy Baly, Giovanni Da San Martino, James Glass, and Preslav Nakov. 2020. We can detect your bias: Predicting the political ideology of news articles. _arXiv preprint arXiv:2010.05338_.
* Barberá et al. (2015) Pablo Barberá, John T Jost, Jonathan Nagler, Joshua A Tucker, and Richard Bonneau. 2015. Tweeting from left to right: Is online political communication more than an echo chamber? _Psychological science_ , 26(10):1531–1542.
* Bollacker et al. (2008) Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In _Proceedings of the 2008 ACM SIGMOD international conference on Management of data_ , pages 1247–1250.
* Bordes et al. (2013) Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. _Advances in neural information processing systems_ , 26.
* Darwish et al. (2020) Kareem Darwish, Peter Stefanov, Michaël Aupetit, and Preslav Nakov. 2020. Unsupervised user stance detection on twitter. In _Proceedings of the International AAAI Conference on Web and Social Media_ , volume 14, pages 141–152.
* Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_.
* Du et al. (2017) Jiachen Du, Ruifeng Xu, Yulan He, and Lin Gui. 2017. Stance classification with target-specific neural attention networks. International Joint Conferences on Artificial Intelligence.
* Falcon and The PyTorch Lightning team (2019) William Falcon and The PyTorch Lightning team. 2019. PyTorch Lightning.
* Ferragina and Scaiella (2011) Paolo Ferragina and Ugo Scaiella. 2011. Fast and accurate annotation of short texts with wikipedia pages. _IEEE software_ , 29(1):70–75.
* Fey and Lenssen (2019) Matthias Fey and Jan Eric Lenssen. 2019. Fast graph representation learning with pytorch geometric. _arXiv preprint arXiv:1903.02428_.
* Horne et al. (2018) Benjamin D Horne, Sara Khedr, and Sibel Adali. 2018. Sampling the news producers: A large news and feature data set for the study of the complex media landscape. In _Twelfth International AAAI Conference on Web and Social Media_.
* Hu et al. (2021) Linmei Hu, Tianchi Yang, Luhao Zhang, Wanjun Zhong, Duyu Tang, Chuan Shi, Nan Duan, and Ming Zhou. 2021. Compare to the knowledge: Graph neural fake news detection with external knowledge. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_ , pages 754–763.
* Jiang et al. (2011) Long Jiang, Mo Yu, Ming Zhou, Xiaohua Liu, and Tiejun Zhao. 2011. Target-dependent twitter sentiment classification. In _Proceedings of the 49th annual meeting of the association for computational linguistics: human language technologies_ , pages 151–160.
* Jiang et al. (2019) Ye Jiang, Johann Petrak, Xingyi Song, Kalina Bontcheva, and Diana Maynard. 2019. Team bertha von suttner at semeval-2019 task 4: Hyperpartisan news detection using elmo sentence representation convolutional network. In _Proceedings of the 13th International Workshop on Semantic Evaluation_ , pages 840–844.
* Kiesel et al. (2019) Johannes Kiesel, Maria Mestre, Rishabh Shukla, Emmanuel Vincent, Payam Adineh, David Corney, Benno Stein, and Martin Potthast. 2019. Semeval-2019 task 4: Hyperpartisan news detection. In _Proceedings of the 13th International Workshop on Semantic Evaluation_ , pages 829–839.
* Lee et al. (2018) Kenton Lee, Luheng He, and Luke Zettlemoyer. 2018. Higher-order coreference resolution with coarse-to-fine inference. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)_ , pages 687–692, New Orleans, Louisiana. Association for Computational Linguistics.
* Li and Goldwasser (2019) Chang Li and Dan Goldwasser. 2019. Encoding social information with graph convolutional networks forpolitical perspective detection in news media. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 2594–2604.
* Li and Goldwasser (2021) Chang Li and Dan Goldwasser. 2021. Using social and linguistic information to adapt pretrained representations for political perspective identification. In _Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021_ , pages 4569–4579.
* Lin et al. (2015) Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation embeddings for knowledge graph completion. In _Twenty-ninth AAAI conference on artificial intelligence_.
* Liu et al. (2021) Rui Liu, Zheng Lin, Yutong Tan, and Weiping Wang. 2021. Enhancing zero-shot and few-shot stance detection with commonsense knowledge graph. In _Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021_ , pages 3152–3157.
* Liu et al. (2019) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. _arXiv preprint arXiv:1907.11692_.
* Magdy et al. (2016) Walid Magdy, Kareem Darwish, Norah Abokhodair, Afshin Rahimi, and Timothy Baldwin. 2016. # isisisnotislam or# deportallmuslims? predicting unspoken views. In _Proceedings of the 8th ACM Conference on Web Science_ , pages 95–106.
* Nickel et al. (2016) Maximilian Nickel, Lorenzo Rosasco, and Tomaso Poggio. 2016. Holographic embeddings of knowledge graphs. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 30.
* Paszke et al. (2019) Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. _Advances in neural information processing systems_ , 32:8026–8037.
* Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In _Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)_ , pages 1532–1543.
* Peters et al. (2018) Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In _Proc. of NAACL_.
* Řehůřek and Sojka (2010) Radim Řehůřek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Corpora. In _Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks_ , pages 45–50, Valletta, Malta. ELRA. http://is.muni.cz/publication/884893/en.
* Speer et al. (2017) Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In _Thirty-first AAAI conference on artificial intelligence_.
* Sun et al. (2019) Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. 2019. Rotate: Knowledge graph embedding by relational rotation in complex space. _arXiv preprint arXiv:1902.10197_.
* Tanon et al. (2020) Thomas Pellissier Tanon, Gerhard Weikum, and Fabian Suchanek. 2020. Yago 4: A reason-able knowledge base. In _European Semantic Web Conference_ , pages 583–596. Springer.
* Wilson et al. (2020) Anne E Wilson, Victoria A Parker, and Matthew Feinberg. 2020. Polarization in the contemporary political and media landscape. _Current Opinion in Behavioral Sciences_ , 34:223–228. Political Ideologies.
* Wolf et al. (2020) Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations_ , pages 38–45, Online. Association for Computational Linguistics.
* Yamada et al. (2018) Ikuya Yamada, Akari Asai, Jin Sakuma, Hiroyuki Shindo, Hideaki Takeda, Yoshiyasu Takefuji, and Yuji Matsumoto. 2018. Wikipedia2vec: An efficient toolkit for learning and visualizing the embeddings of words and entities from wikipedia. _arXiv preprint arXiv:1812.06280_.
* Yang et al. (2014) Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2014. Embedding entities and relations for learning and inference in knowledge bases. _arXiv preprint arXiv:1412.6575_.
* Yang et al. (2016) Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In _Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_ , pages 1480–1489.
* Yu et al. (2021) Donghan Yu, Chenguang Zhu, Yuwei Fang, Wenhao Yu, Shuohang Wang, Yichong Xu, Xiang Ren, Yiming Yang, and Michael Zeng. 2021. Kg-fid: Infusing knowledge graph in fusion-in-decoder for open-domain question answering. _arXiv preprint arXiv:2110.04330_.
* Zhu et al. (2021) Lixing Zhu, Gabriele Pergola, Lin Gui, Deyu Zhou, and Yulan He. 2021. Topic-driven and knowledge-aware transformer for dialogue emotion detection. _arXiv preprint arXiv:2106.01071_.
## Appendix A Limitation
We identified two minor limitations of KGAP:
* •
We noticed that there are only a few para-ent edges in the news graphs of
certain news articles. We examined these news articles and found that some
entity mentions are not captured by the entity linking tool TagMe. This issue
might be addressed by using better entity linking tools.
* •
Political facts change over time, for example, politicians in the United
States may represent different districts due to redistricting. Our knowledge
graph does not model such changes for now, while this issue may be addressed
by introducing temporal knowledge graphs.
## Appendix B Knowledge Graph Details
We construct a political knowledge graph (KG) of contemporary U.S. politics to
serve as domain-specific external knowledge for political perspective
detection, while our approach could be extended to political scenarios in
other countries. Table 8 presents entity and relation types in our KG.
For elected office, we consider the White House, House of Representatives,
Senate, Supreme Court and governorship. For time period, we consider the 114th
to 117th congress while our KG could be similarly extended to other time
periods. We then retrieve the presidents, supreme court justices, senators,
congresspersons, and governors that overlap with these time periods from
Wikipedia (https://www.wikipedia.org/) and the U.S. Congress website
(https://www.congress.gov/). For states, we consider the 50 U.S.
states. For political parties, we consider the Democratic Party, the
Republican Party and independents. For the first five types of relations
(affiliated_to, from, appoint, overlap_with and member_of), we conduct co-
reference resolution to identify new entity mentions, connect the entities and
expand the KG. For political ideology, we use two entities to represent
liberal conservative values. We then resort to the legislator scoreboards at
AFL-CIO555aflcio.org/scorecard and Heritage
Action666heritageaction.com/scorecard. These scoreboards score U.S.
legislators from 0 to 100 to evaluate how liberal or conservative they are. We
partition the score range into strongly favor (90 - 100), favor (75 - 90),
neutral (25 - 75), oppose (10 - 25) and strongly oppose (0 - 10) to connect
entities with these five relations. Our political KG draws from both generic
information sources and political expert knowledge.
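As a sketch, the score partition above can be expressed as a simple threshold function (the function name is ours; the cutoffs follow the text):

```python
def ideology_relation(score: float) -> str:
    """Map a 0-100 legislator scoreboard value to one of the five
    ideology relations used in the KG."""
    if score >= 90:
        return "strongly_favor"
    if score >= 75:
        return "favor"
    if score >= 25:
        return "neutral"
    if score >= 10:
        return "oppose"
    return "strongly_oppose"

# e.g., the triple (legislator, ideology_relation(score), liberal_values)
```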
As a result, we construct a political knowledge graph with 1,071 entities and
10,703 triples to serve as external knowledge, which is domain-specific,
scalable, adaptable and draws from diversified information sources. Our
political KG could also be helpful for related tasks such as disinformation
detection. We submit the constructed knowledge graph as supplementary material
to facilitate reproduction.
Entity Type | Example | Relation Type | Example
---|---|---|---
elected office | the U.S. Senate | affiliated_to | (Joe Biden, affiliated_to, Democratic Party)
time period | 117th congress | from | (Ted Cruz, from, Texas)
president | Joe Biden | appoint | (Donald Trump, appoint, Amy Coney Barrett)
supreme court justice | Amy Coney Barrett | overlap_with | (Joe Biden, overlap_with, 117th congress)
senator | Elizabeth Warren | member_of | (Dianne Feinstein, member_of, the U.S. Senate)
congressperson | Nancy Pelosi | strongly_favor | (Bernie Sanders, strongly_favor, liberal_values)
governor | Ron DeSantis | favor | (Joe Cunningham, favor, liberal_values)
state | Massachusetts | neutral | (Henry Cuellar, neutral, liberal_values)
political party | Republican Party | oppose | (Lamar Alexander, oppose, liberal_values)
political ideology | liberal values | strongly_oppose | (Ted Cruz, strongly_oppose, liberal_values)
Table 8: List of entities and relations in our collected political knowledge
graph.
## Appendix C Dataset Details
We evaluate our knowledge-aware political perspective detection approach on
SemEval (Kiesel et al., 2019) and Allsides (Li and Goldwasser, 2019), two
widely adopted benchmarks in previous works. We present dataset details in
Table 2. For SemEval, we follow the same 10-fold evaluation settings and the
exact folds as in Li and Goldwasser (2021). For Allsides, we follow the same
3-fold evaluation settings and the exact folds as in Li and Goldwasser (2021),
although a few news URLs have expired and we could not retrieve the original
news content, leading to minor differences. In this way, our method’s
performance is directly comparable with that of previous state-of-the-art
approaches (Li and
Goldwasser, 2019, 2021).
## Appendix D Experiment Details
### D.1 GNN Setting
We train our approach with both relational graph convolutional networks
(R-GCN) and gated relational graph convolutional networks (gated R-GCN), while
our approach does not rely on any specific GNN architecture. All experiments
in Section 4.4, 4.5 and 4.6 are conducted with gated R-GCN.
### D.2 Figure 3
For better visualization effects in Figure 3, we leave out news articles whose
number of entity mentions or para-ent edges is five times greater than the
average value. This approximation does not compromise the main conclusion that
our domain-specific knowledge graph involves less computational burden while
performing better than generic knowledge graphs.
### D.3 Table 6
The Allsides benchmark does not explicitly provide news titles. As a result,
there are no doc-para edges in the constructed news graphs for this dataset,
and thus the “no doc-para edges” setting could not be evaluated on Allsides.
### D.4 Figure 6
For Word2Vec word embeddings, we use the gensim library (Řehůřek and Sojka,
2010). For GloVe word embeddings, we use the pre-trained vectors from
Stanford NLP (https://nlp.stanford.edu/projects/glove/). For ELMo
word embeddings, we use the AllenNLP implementation (Peters et al., 2018). For
BERT and RoBERTa, we use pre-trained models in the transformers library (Wolf
et al., 2020) to encode news content and serve as textual features. For all
the base settings, we use Leaky-ReLU and two fully connected layers with a
hidden layer size of 512.
|
# Improving Automatic Quotation Attribution in Literary Novels
Krishnapriya Vishnubhotla1,4, Frank Rudzicz2,4,1, Graeme Hirst1 Adam Hammond3
1Department of Computer Science, University of Toronto
2Faculty of Computer Science, Dalhousie University
3Department of English, University of Toronto
4Vector Institute for Artificial Intelligence
<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
Current models for quotation attribution in literary novels assume varying
levels of available information in their training and test data, which poses a
challenge for in-the-wild inference. Here, we approach quotation attribution
as a set of four interconnected sub-tasks: character identification,
coreference resolution, quotation identification, and speaker attribution. We
benchmark state-of-the-art models on each of these sub-tasks independently,
using a large dataset of annotated coreferences and quotations in literary
novels (the Project Dialogism Novel Corpus). We also train and evaluate models
for the speaker attribution task in particular, showing that a simple
sequential prediction model achieves accuracy scores on par with
state-of-the-art models. Code and data can be found at
https://github.com/Priya22/speaker-attribution-acl2023.
## 1 Introduction
We focus on the task of automatic quotation attribution, or speaker
identification, in full-length English-language literary novels. The task
involves attributing each quotation (dialogue) in the novel to the character
who utters it. The task is complicated by several factors: characters in a
novel are referred to by various names and aliases (Elizabeth, Liz, Miss
Bennet, her sister); these aliases can change and be added over the course of
the novel; and authors often employ differing patterns of dialogue in the
text, whereby quotations are sometimes attached to the speaker explicitly via
a speech verb, and at other times require keeping track of character turns
over multiple paragraphs. The development of automated methods has also been
hindered by the paucity of annotated datasets on which models can be trained
and evaluated.
Existing methods for quotation attribution fall into one of two groups: those
that directly attribute the quotation to a named character entity and those
that treat it as a two-step process in which quotations are first attached to
the nearest relevant mention of a character and mentions are then resolved to
a canonical character name via a coreference resolution model. We contend that
most use-cases of a quotation attribution system involve resolving the speaker
mention to one among a list of character entities. Thus, the usability of
these systems is very much dependent on their ability to compile such a list
of character entities and to resolve each attributed mention to an entity from
this list.
Here, we use the Project Dialogism Novel Corpus (Vishnubhotla et al., 2022), a
large dataset of annotated coreferences and quotations in literary novels, to
design and evaluate pipelines of quotation attribution. Our analysis shows
that state-of-the-art models are still quite poor at character identification
and coreference resolution in this domain, thus hindering functional quotation
attribution.
## 2 Background and Prior Work
Elson and McKeown (2010) introduce the CQSA corpus, which contains quotations
from excerpts from 4 novels and 7 short-stories that are annotated for the
nearest speaker mention, which can be named (e.g., Elizabeth), or nominal (her
friend). On average, only 25% of the attributions in CQSA are to a named
entity.
In contrast, He et al. (2013) link quotations directly to entities, and a list
of characters and aliases is required for attribution. This list is generated
with a named entity recognition (NER) model to obtain entity terms, which are
then grouped together using Web resources such as Wikipedia.
The GutenTag package from Brooke et al. (2015) contains modules for generating
character lists and identifying speakers in literary texts. The former is
based on the LitNER model Brooke et al. (2016a), which bootstraps a classifier
from a low-dimensional Brown clustering of named entities from Project
Gutenberg texts. The speaker attribution model is a simple rule-based approach
that identifies the nearest named entity.
Sims and Bamman (2020) annotate the first 2000 tokens of 100 novels from the
LitBank dataset (https://github.com/dbamman/litbank). Quotations are linked to
a unique speaker from a predefined list of entities. LitBank also contains
annotations for coreference for these tokens (Bamman et al., 2020). The BookNLP
package (https://github.com/booknlp/booknlp) from the same group contains pre-
trained models for NER, coreference resolution, and speaker attribution,
although the latter is only at the mention-level.
Cuesta-Lazaro et al. (2022) attempt to reconcile the differences in pre-
requisites and methodologies of prior attribution systems by proposing a
modularization of the task into three sub-tasks: quotation identification,
character identification, and speaker attribution. They evaluate baselines for
each component, propose a new state-of-the-art method for speaker attribution,
and quantify the relative importance of each module in an end-to-end pipeline.
Their speaker attribution module, however, considers only named mentions in
the text as candidate speakers, leading to a lower performance on implicit and
anaphoric quotations. Neither their dataset of 15 novels nor their model for
speaker attribution has been made public, precluding comparison with our work
below.
In our work, we follow this modular formulation, with some key differences:
(a) we evaluate an additional sub-task of coreference resolution, allowing us
to (b) test an attribution model that can work with both named and pronominal
candidate mentions surrounding a quotation; and (c) we evaluate our models on
a publicly available dataset.
## 3 Dataset: PDNC
We briefly describe here the Project Dialogism Novel Corpus (Vishnubhotla et
al., 2022). PDNC consists of 22 full-length English novels, published in the
19th and 20th centuries, annotated with the following information:
Characters: A list of characters in the novel. This includes characters who
speak, are addressed to, or referred to multiple times in the novel. Each
character is identified by a main name (e.g., Elizabeth Bennet), as well as a
set of aliases (Liz, Lizzie, Eliza). We do not distinguish between the two,
and treat each character entity as identifiable by a set of names (so that
Elizabeth Bennet, Liz, Lizzie, Eliza forms one character entity).
Quotations: Each uttered quotation in the novel is annotated with its speaker
and addressee(s); with the referring expression, if any, that indicates who
the speaker is; and with internal mentions, i.e., named or pronominal phrases
within the quotation that refer to one or more character entities. The
annotations in PDNC make it ideal for evaluating several aspects of quotation
attribution in novels, including named entity recognition, coreference
resolution, and speaker attribution.
## 4 Modularization of the Task
Character identification: The goal of this sub-task is to build a list of the
unique character entities in a novel. Although NER models perform quite well
at identifying spans of text that constitute a named entity (here, a character
name), the task is complicated by the fact that characters can have multiple
aliases in the text. Moreover, some characters may be introduced and referred
to only by social titles (the policeman, the Grand Inquisitor, the little old
man, the bystander).
Coreference resolution: The goals here are to identify text spans that refer
to a character entity (which we refer to as mentions) and to link each mention
to the correct character entity or entities to which it refers. In addition to
mentions that are personal pronouns such as he, she, and them, literary texts
have an abundance of pronominal phrases that reflect relationships between
characters, such as her husband and their father. Such phrases can also occur
within quotations uttered by a character (e.g., my father), requiring
quotation attribution as a prerequisite for complete coreference resolution.
Quotation identification: Perhaps the most straightforward of our sub-tasks,
here we identify all text spans in a novel that constitute dialogue, i.e., are
uttered by a character entity or entities.
Speaker attribution: Finally, this sub-task links each identified quotation to
a named character identity. While most models are designed to solve the more
tractable and practical problem of linking quotations to the nearest relevant
speaker mention, we subsume the mention–entity linking tasks under the
coreference resolution module, equating the two tasks.
## 5 Models and Evaluation Metrics
We evaluate each of the modules of section 4 separately. In order not to
confound the evaluation with cascading errors, at each step, we “correct” the
outputs of the automated system from the previous step by using annotations
from PDNC.
### 5.1 Character Identification
We evaluate two pipelines — GutenTag and BookNLP — on their ability to
identify the set of characters in a novel, and potentially, the set of aliases
for each character. In addition, we also test the NER system from the
spaCy module (https://explosion.ai/blog/spacy-v3) as a proxy for the state-of-
the-art in NER that is not trained explicitly for the literary domain.
Character recognition (CR): For each novel, we compute the proportion of
annotated character entities that are identified as named entities of the
category ‘person’ (Doddington et al., 2004). We use a simple string-matching
approach, where we try for either a direct match, or a unique match when
common prefixes such as Mr. and Sir are removed. Thus, if a particular novel
has $N$ character entities annotated, the NER model outputs a list of $K$
named ‘person’ entities, and $K^{\prime}$ of these entities are in turn
matched with $M$ out of the $N$ characters, the CR metric is calculated as
$M/N$.
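A minimal sketch of the CR computation as described, where the prefix list and matching helper are illustrative simplifications rather than the exact implementation:

```python
PREFIXES = ("Mr. ", "Mrs. ", "Miss ", "Sir ", "Dr. ")  # illustrative

def strip_prefix(name: str) -> str:
    for p in PREFIXES:
        if name.startswith(p):
            return name[len(p):]
    return name

def character_recognition(annotated: list[set], predicted: list[str]) -> float:
    """CR = M/N: the fraction of annotated characters (each given as a set of
    aliases) matched by at least one predicted 'person' entity, via a direct
    or prefix-stripped string match."""
    matched = 0
    for aliases in annotated:
        targets = aliases | {strip_prefix(a) for a in aliases}
        if any(p in targets or strip_prefix(p) in targets for p in predicted):
            matched += 1
    return matched / len(annotated)
```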
Character clustering: We use the clustering evaluation metrics of homogeneity
(C.Hom), completeness (C.Comp), and their harmonic mean, v-score to evaluate
named entity clusters. Homogeneity (between 0 and 1) is the fraction of named
clusters that link to the same character entity; completeness is the number of
homogeneous clusters a single entity is distributed over (ideal value of 1).
As an example, consider the case where we have three annotated characters for
a novel: Elizabeth Bennet, Mary Bennet, and The Queen. The set of annotated
aliases for the characters are {Elizabeth Bennet, Eliza, Lizzie, Liz}, {Mary
Bennet, Mary}, and {The Queen}. Say model $M_{1}$ outputs the following entity
clusters: {Elizabeth Bennet, Eliza}, {Liz, Lizzie} and {Mary Bennet, Mary};
model $M_{2}$ outputs {Elizabeth Bennet, Mary Bennet, Eliza, Mary}, {Liz,
Lizzie}. Each model has recognized two out of the three characters in our
list; this evaluates to a CR score of $2/3$. Each of the three clusters from
model $M_{1}$ refers solely to one character entity, resulting in a
homogeneity score of 1.0. However, these three clusters are formed for only
two unique character entities, resulting in a completeness score of $1.5$
(v-score 0.6). Analogously, model $M_{2}$ has a homogeneity score of 0.5 and a
completeness score of 1.0 (v-score 0.5).
### 5.2 Coreference Resolution
We consider two pipelines for coreference resolution: BookNLP (based on Ju et
al. (2018)) and spaCy (based on Dobrovolskii (2021)). Given a text, these
neural coreference resolution models output a set of clusters, each comprising
a set of coreferent mention spans from the input.
Evaluating this module requires annotations that link each mention span in a
novel to the character entity referred to. PDNC, unfortunately, contains these
mention annotations only for text spans within quotations.
We therefore evaluate coreference resolution only on a subset of the mention
spans in a novel, extracted as follows: We first identify the set of mention
clusters from our models that can be resolved to an annotated character
entity, using the character lists from PDNC and the string-matching approach
described above. We then prune this to only include those mention spans that
are annotated in the PDNC dataset, i.e., mention spans that occur within
quotations, and evaluate the accuracy of the resolution.
Mention clustering (M-Clus): We compute the fraction of mention clusters that
can be matched to a unique (Uniq) annotated character entity rather than to
multiple (Mult) or no (None) entities.
Mention resolution (M-Res): For those mention spans within PDNC that are
identified by the model and are assigned to a cluster that can be uniquely
matched to a character entity (# Eval), we compute the accuracy of the linking
(Acc.).
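A schematic of both metrics is sketched below; the `resolve` function stands in for the string-matching of Section 5.1, and `gold` maps PDNC's annotated (quotation-internal) mention spans to entities.

```python
def mention_metrics(clusters, resolve, gold):
    """clusters: list of lists of mention spans.
    resolve(cluster) -> set of candidate entity ids (string matching, Sec. 5.1).
    gold: dict mention span -> annotated entity id (PDNC quotation spans)."""
    uniq = mult = none = 0
    correct = evaluated = 0
    for cluster in clusters:
        entities = resolve(cluster)
        if len(entities) == 1:
            uniq += 1
            (entity,) = entities
            for span in cluster:
                if span in gold:              # only PDNC-annotated spans count
                    evaluated += 1
                    correct += (gold[span] == entity)
        elif entities:
            mult += 1
        else:
            none += 1
    n = len(clusters)
    return {"Uniq": uniq / n, "Mult": mult / n, "None": none / n,
            "# Eval": evaluated, "Acc.": correct / max(evaluated, 1)}
```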
### 5.3 Quotation Identification
Most models, rule-based or neural, can identify quotation marks and thus
quotations. We evaluate how many such quoted text instances actually
constitute dialogue, in that they are uttered by one or more characters. Our
gold standard is the set of quotations that have been annotated in PDNC, which
includes quotations uttered by multiple characters and by unnamed characters
such as a crowd.
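As a rough illustration only (real detectors must handle single quotes, nesting, and unbalanced marks), quoted spans can be pulled out with a pattern over straight and curly double quotes:

```python
import re

# Deliberately crude stand-in for the quotation detectors evaluated here.
QUOTE_RE = re.compile(r'"([^"]+)"|“([^”]+)”')

def extract_quotations(text):
    return [m.group(1) or m.group(2) for m in QUOTE_RE.finditer(text)]

print(extract_quotations('“I am sure,” said she, “we shall meet again.”'))
# ['I am sure,', 'we shall meet again.']
```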
### 5.4 Speaker Attribution
The speaker-attribution part of BookNLP’s pipeline is a BERT-based model that
uses contextual and positional information to score the BERT embedding for the
quotation span against the embeddings of mention spans that occur within a
50-word context window around the quotation; the highest-scoring mention is
selected as the speaker. We supplement this approach by limiting the set of
candidates to resolved mention spans from the coreference resolution step,
thereby directly performing quotation-to-entity linking. As we see from our
results, this method, which we refer to as BookNLP+, greatly improves the
performance of the speaker attribution model by eliminating spurious candidate
spans.
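Schematically, the restriction amounts to filtering BookNLP's candidates before taking the argmax. In this sketch the mention objects (with `.span` and `.embedding`) and the `score` function are hypothetical stand-ins for BookNLP internals, which we do not reimplement here.

```python
def attribute_speaker(quote_embedding, mentions, resolved_spans, score):
    """Keep only candidate mentions resolved to a character entity, then
    return the entity of the highest-scoring mention, i.e. direct
    quotation-to-entity linking (the BookNLP+ restriction)."""
    candidates = [m for m in mentions if m.span in resolved_spans]
    best = max(candidates, key=lambda m: score(quote_embedding, m.embedding))
    return resolved_spans[best.span]
```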
We also evaluate a sequential prediction model that predicts the speaker of a
quotation simply by looking at the sequence of speakers and mentions that
occur in some window around the quotation. We implement this as a one-layer
RNN that is fed a sequence of tokens representing the five characters
mentioned most recently prior to the quotation text, one character mention
that occurs right after, and, optionally, the set of characters mentioned
within the quotation.
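A minimal PyTorch sketch of this baseline follows; the embedding and hidden sizes, and the fixed per-novel candidate inventory, are illustrative choices not specified in the text.

```python
import torch
import torch.nn as nn

class SeqRNNSpeaker(nn.Module):
    """One-layer RNN over character tokens (five most recent prior mentions,
    one following mention, optionally in-quote mentions); scores every
    character in the novel's inventory as the candidate speaker."""
    def __init__(self, n_characters, emb_dim=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(n_characters, emb_dim)
        self.rnn = nn.RNN(emb_dim, hidden, num_layers=1, batch_first=True)
        self.out = nn.Linear(hidden, n_characters)

    def forward(self, token_ids):                 # (batch, seq_len)
        _, h = self.rnn(self.embed(token_ids))    # h: (1, batch, hidden)
        return self.out(h.squeeze(0))             # (batch, n_characters)

model = SeqRNNSpeaker(n_characters=60)
tokens = torch.randint(0, 60, (4, 6))   # 5 prior mentions + 1 following mention
logits = model(tokens)                  # (4, 60) scores over candidate speakers
```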
## 6 Experimental Setup
We evaluate the models for character identification, coreference resolution,
and quotation identification on the entire set of 22 novels in PDNC, since we
are neither training nor fine-tuning these on this dataset. For the speaker
attribution models, we define the training setup below.
We curate the set of mention candidates for each novel in the following
manner: the mention clusters generated by BookNLP are used to extract the set
of mention spans that could be successfully resolved to a character entity
from the annotated PDNC character lists for each novel. We append to this set
the annotated mention spans (within quotations) from PDNC, as well as explicit
mention spans — that is, text spans that directly match a named alias from the
character list.
Overlaps between the three sets are resolved with a priority ranking, whereby
PDNC annotations are considered to be more accurate than explicit name
matches, which in turn take precedence over the automated coreference
resolution model.
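A sketch of this merge, assuming mentions are keyed by exact character offsets (the real procedure must also arbitrate partially overlapping spans):

```python
# Priority merge of mention candidates from the three sources described above.
PRIORITY = {"pdnc": 0, "explicit": 1, "coref": 2}

def curate_mentions(sources):
    """sources: dict source_name -> dict[(start, end) -> entity_id]."""
    best = {}
    for name, spans in sources.items():
        for span, entity in spans.items():
            if span not in best or PRIORITY[name] < PRIORITY[best[span][0]]:
                best[span] = (name, entity)
    return {span: entity for span, (name, entity) in best.items()}
```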
We test with 5-fold cross-validation in two ways: splitting the annotated
quotations in each novel 80/20 and splitting the set of entire novels 80/20.
## 7 Results
Model | CR | C.Hom | C.Comp | v-score
---|---|---|---|---
spaCy | 0.81 | 0.16 | 1.02 | 0.27
GutenTag | 0.60 | 0.98 | 1.33 | 1.12
BookNLP | 0.85 | 0.86 | 1.18 | 0.99
Table 1: Character identification: Average scores across all the novels in the
dataset. Column headings are defined in the text. Scores for each individual
novel are reported in Appendix B.
From Table 1, we see that the neural NER models of spaCy and BookNLP are
better at recognizing character names than GutenTag’s heuristic system (0.81
and 0.85 vs 0.60). However, the strengths of GutenTag’s simpler Brown-
clustering–based NER system are evident when looking at the homogeneity; when
two named entities are assigned as aliases of each other, it is almost always
correct. This shows the advantage of document-level named entity clustering as
opposed to local span-level mention clustering for character entity
recognition. The completeness metric, on the other hand, tells us that
GutenTag tends to be more conservative in its clustering than BookNLP; this is
nonetheless a good strategy for the literary domain, where characters often
share surnames.
Model | # Clus | M-Clus: Uniq | M-Clus: Mult | M-Clus: None | M-Res: # Eval | M-Res: Acc.
---|---|---|---|---|---|---
spaCy | 1503.1 | 0.093 | 0.061 | 0.846 | 499.0 | 0.746
BookNLP | 1662.8 | 0.043 | 0.003 | 0.953 | 1126.6 | 0.774
Table 2: Coreference resolution: All scores are averaged over the 22 novels in
PDNC. Column headings are defined in the text.
Model | Quotations | Novels
---|---|---
BookNLP-OG | 0.40 | 0.40
BookNLP+ (LitBank) | 0.62 | 0.61
Seq-RNN | 0.72 | 0.64
BookNLP+ (PDNC) | 0.78 | 0.68
Table 3: Accuracy on speaker attribution for the end-to-end BookNLP model
(BookNLP-OG), the restricted model with only resolved mention spans as
candidates (row 2), the sequential prediction model, and the restricted model
trained on PDNC, for the Quotations and the entire Novels cross-validation
split.
Performance of these models on the coreference resolution task is
significantly lower (Table 2). A majority of the mention clusters from both
BookNLP and spaCy’s coreference resolution modules end up as unresolved
clusters, containing no named identifier that could be linked to a PDNC
character entity. However, when we evaluate mention-to-entity linking on the
subset of clusters that can be resolved, both systems achieve accuracy scores
of close to 0.78, although spaCy is able to resolve far fewer mentions (499 vs
1127).
The importance of the character identification and coreference resolution
tasks can be quantified by looking at the performance of the speaker attribution
models (Table 3). The end-to-end pretrained BookNLP pipeline, when evaluated
on the set of PDNC quotations (which were identified with accuracy of 0.94),
achieves an accuracy of 0.42. When we restrict the set of candidate mentions
for each quotation to only those spans that can be resolved to a unique
character entity, the attribution accuracy increases to 0.61. However, the RNN
model still beats this performance with an accuracy of 0.72 on the random data
split. When BookNLP’s contextual model is trained on data from PDNC, its
accuracy improves to 0.78. These scores drop to 0.63 and 0.68 for the entire-
novel split, where we have the disadvantage of being restricted only to
patterns of mention sequences, and not speakers.
## 8 Analysis
We briefly go over some qualitative analyses of the errors made by models in
the different sub-tasks, which serves to highlight the challenges presented by
literary text and opportunities for future research.
#### Character Identification and Coreference Resolution:
We manually examine the mention clusters identified by our coreference
resolution modules that could not be matched to a unique character entity as
annotated in PDNC.
We find that, by far, the most common error is conflating characters with the
same surname or family name within a novel. For example, several of the women
characters in these novels are often referred to by the names of their
husbands or fathers, prefixed with an honorific such as Mrs. or Miss. Thus Mrs.
Archer refers to May Welland in The Age of Innocence and Miss Woodhouse refers
to Emma Woodhouse in Emma. However, a surname without a title, such as Archer
or Woodhouse, generally refers to the corresponding male character. This
results in the formation of mention clusters that take the spans Miss
Woodhouse and Woodhouse to be coreferent, despite being different character
entities. We see similar issues with father–son character pairs, such as
George Emerson and Mr. Emerson in A Room With A View, and with character pairs
that are siblings.
Model | Quotations: Exp. | Quotations: Rest | Novels: Exp. | Novels: Rest
---|---|---|---|---
BookNLP-OG | 0.64 | 0.28 | 0.63 | 0.28
BookNLP+ (LitBank) | 0.93 | 0.47 | 0.95 | 0.43
Seq-RNN | 0.85 | 0.65 | 0.76 | 0.57
BookNLP+ (PDNC) | 0.98 | 0.70 | 0.97 | 0.53
Table 4: Attribution accuracy for the speaker attribution models, broken down
by quotation type, for the Quotations and Novels cross-validation splits.
Column Exp. refers to explicit quotations, and column Rest refers to implicit
and anaphoric quotations.
#### Speaker Attribution:
We first quantify the proportion of quotations attributed to a mention cluster
that cannot be resolved to a named character entity with the end-to-end
application of the BookNLP pipeline.
On average, 47.7% of identified quotations are assigned to an unresolved
mention cluster as the speaker. The range of this value varies from as low as
12.5% (The Invisible Man) to as high as 78.7% (Northanger Abbey). A majority
of these unresolved attributions occur with implicit and anaphoric quotations
(76.2%), where the speaker is not explicitly indicated by a referring
expression such as Elizabeth said, as opposed to explicit quotations (23.8%).
In Table 4, we break down the performance of the speaker attribution models by
quotation type. We see that even our local context–based RNN model is able to
identify the speaker of explicit quotations with a relatively high accuracy,
and that the speaker for non-explicit quotations can also generally be modeled
using the sequence of 5–6 characters mentioned in the vicinity of the
quotation. The transformer-based models are of course able to use this local
context more effectively by making use of linguistic cues and non-linear
patterns of mentions and speakers in the surrounding text. Still, our best
performing model achieves an accuracy of only 0.53 on implicit and anaphoric
quotations when applied to novels unseen in the training set (the Novels
split).
## 9 Conclusions and Future Work
In this work, we quantitatively evaluated the key components of a functional
quotation attribution system. We showed that the initial task of recognizing
characters and their aliases in a novel remains quite a challenge, but doing
so greatly improves the performance of speaker attribution by limiting the set
of candidate speakers. However, with existing coreference resolution systems,
a large portion of mention clusters (around 90%) remain unresolved, so this
remains a problem for new research.
## Limitations
There is much variation in literary writing and narrative styles, and our work
here deals with a small, curated subset of this domain. The novels we analyze
are all in the English language, and were published between the early 19th and
early 20th centuries. The authors and novels themselves are drawn from what is
considered to be the established literary canon, and are not necessarily
representative of all the works of that era, let alone literary works of other
eras. The texts we analyze are largely uniform in narrative style. We limit
ourselves to only those quotations that are explicitly indicated as such in
the text by quotation marks, thereby eliminating more-complex styles such as
free indirect discourse Brooke et al. (2016b) and stream-of-consciousness
novels. We do not deal with nuances such as letters and diary entries nor
quotations within quotations. The models we analyze for named entity
recognition and coreference resolution use a fixed, binary formulation of the
gender information conveyed by pronominal terms. Though the development of
fairer, more representative models is constrained by current datasets, we note
that there is encouraging progress being made in this area Bamman et al.
(2020); Yoder et al. (2021).
## References
* Bamman et al. (2020) David Bamman, Olivia Lewke, and Anya Mansoor. 2020. An annotated dataset of coreference in English literature. In _Proceedings of the 12th Language Resources and Evaluation Conference_ , pages 44–54.
* Brooke et al. (2016a) Julian Brooke, Adam Hammond, and Timothy Baldwin. 2016a. Bootstrapped text-level named entity recognition for literature. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)_ , pages 344–350.
* Brooke et al. (2015) Julian Brooke, Adam Hammond, and Graeme Hirst. 2015. GutenTag: an NLP-driven tool for digital humanities research in the Project Gutenberg corpus. In _Proceedings of the Fourth Workshop on Computational Linguistics for Literature_ , pages 42–47.
* Brooke et al. (2016b) Julian Brooke, Adam Hammond, and Graeme Hirst. 2016b. Using models of lexical style to quantify free indirect discourse in modernist fiction. _Digital Scholarship in the Humanities_ , 32:234–250.
* Cuesta-Lazaro et al. (2022) Carolina Cuesta-Lazaro, Animesh Prasad, and Trevor Wood. 2022. What does the sea say to the shore? A BERT based DST style approach for speaker to dialogue attribution in novels. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 5820–5829.
* Dobrovolskii (2021) Vladimir Dobrovolskii. 2021. Word-level coreference resolution. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 7670–7675, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
* Doddington et al. (2004) George R Doddington, Alexis Mitchell, Mark A Przybocki, Lance A Ramshaw, Stephanie M Strassel, and Ralph M Weischedel. 2004. The Automatic Content Extraction (ACE) Program – tasks, data, and evaluation. In _Language Resources and Evaluation Conference_ , volume 2, pages 837–840. Lisbon.
* Elson and McKeown (2010) David K Elson and Kathleen R McKeown. 2010. Automatic attribution of quoted speech in literary narrative. In _Twenty-Fourth AAAI Conference on Artificial Intelligence_.
* He et al. (2013) Hua He, Denilson Barbosa, and Grzegorz Kondrak. 2013. Identification of speakers in novels. In _Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1312–1320.
* Ju et al. (2018) Meizhi Ju, Makoto Miwa, and Sophia Ananiadou. 2018. A neural layered model for nested named entity recognition. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_ , pages 1446–1459, New Orleans, Louisiana. Association for Computational Linguistics.
* Sims and Bamman (2020) Matthew Sims and David Bamman. 2020. Measuring information propagation in literary social networks. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 642–652.
* Vishnubhotla et al. (2022) Krishnapriya Vishnubhotla, Adam Hammond, and Graeme Hirst. 2022. The project dialogism novel corpus: A dataset for quotation attribution in literary texts. In _Proceedings of the Thirteenth Language Resources and Evaluation Conference_ , pages 5838–5848, Marseille, France. European Language Resources Association.
* Yoder et al. (2021) Michael Yoder, Sopan Khosla, Qinlan Shen, Aakanksha Naik, Huiming Jin, Hariharan Muralidharan, and Carolyn Rosé. 2021. FanfictionNLP: A text processing pipeline for fanfiction. In _Proceedings of the Third Workshop on Narrative Understanding_ , pages 13–23, Virtual. Association for Computational Linguistics.
## Appendix A Implementation Details
The BookNLP pipeline is available to use as a Python package, as is spaCy,
with pretrained models for coreference resolution and speaker attribution. For
the former, these models are trained on the LitBank corpus, which is a dataset
from the literary domain. We use these pretrained models to evaluate
performance on the character identification and coreference resolution tasks.
GutenTag can be run either via a Web interface or a command-line executable
(requiring Python 2). It was designed to interface with texts from the Project
Gutenberg corpus. Some of the novels in PDNC were not found in GutenTag’s
predefined database of texts, so we exclude these when reporting average
performance metrics.
## Appendix B Results by Novel
Tables 5 and 6 show for each novel in PDNC the per-model results for character
identification that are summarized in Table 1.
Novel | # Chars | BookNLP CR | BookNLP # Clus | BookNLP C.Hom | BookNLP C.Comp | BookNLP v-score | GutenTag CR | GutenTag # Clus | GutenTag C.Hom | GutenTag C.Comp | GutenTag v-score
---|---|---|---|---|---|---|---|---|---|---|---
A Room With A View | 63 | 0.83 | 60 | 0.95 | 1.19 | 1.06 | 0.48 | 35 | 1.00 | 1.17 | 1.08
The Age of Innocence | 55 | 0.84 | 48 | 0.81 | 1.26 | 0.99 | 0.64 | 49 | 1.00 | 1.40 | 1.17
Alice’s Adventures in Wonderland | 51 | 0.67 | 34 | 0.97 | 1.03 | 1.00 | 0.25 | 14 | 1.00 | 1.08 | 1.04
Anne of Green Gables | 113 | 0.87 | 102 | 0.92 | 1.08 | 0.99 | 0.19 | 25 | 1.00 | 1.14 | 1.06
Daisy Miller | 10 | 1.00 | 13 | 1.00 | 1.30 | 1.13 | 0.80 | 12 | 1.00 | 1.50 | 1.20
Emma | 18 | 0.89 | 17 | 0.71 | 1.09 | 0.86 | 0.89 | 27 | 1.00 | 1.69 | 1.26
A Handful of Dust | 104 | 0.82 | 94 | 0.89 | 1.15 | 1.01 | $-$ | $-$ | $-$ | $-$ | $-$
Howards End | 55 | 0.95 | 64 | 0.89 | 1.27 | 1.05 | 0.49 | 33 | 0.97 | 1.23 | 1.08
Night and Day | 50 | 0.94 | 53 | 0.77 | 1.17 | 0.93 | 0.62 | 40 | 0.97 | 1.30 | 1.11
Northanger Abbey | 20 | 0.90 | 12 | 0.75 | 1.00 | 0.86 | 0.85 | 23 | 0.96 | 1.29 | 1.10
Persuasion | 35 | 0.86 | 29 | 0.79 | 1.28 | 0.98 | 0.77 | 28 | 0.96 | 1.08 | 1.02
Pride and Prejudice | 74 | 0.81 | 62 | 0.85 | 1.10 | 0.96 | 0.35 | 30 | 0.90 | 1.35 | 1.08
Sense and Sensibility | 24 | 0.83 | 25 | 0.56 | 1.17 | 0.76 | 0.79 | 26 | 0.96 | 1.39 | 1.14
The Sign of the Four | 35 | 0.94 | 32 | 0.72 | 1.05 | 0.85 | 0.60 | 28 | 1.00 | 1.33 | 1.14
The Awakening | 22 | 0.82 | 17 | 0.88 | 1.07 | 0.97 | 0.77 | 21 | 0.95 | 1.25 | 1.08
The Gambler | 27 | 0.70 | 22 | 0.91 | 1.18 | 1.03 | 0.59 | 22 | 1.00 | 1.38 | 1.16
The Invisible Man | 31 | 0.94 | 40 | 0.95 | 1.36 | 1.12 | 0.61 | 32 | 1.00 | 1.68 | 1.25
The Man Who Was Thursday | 30 | 0.80 | 35 | 0.97 | 1.55 | 1.19 | 0.53 | 23 | 1.00 | 1.44 | 1.18
The Mysterious Affair at Styles | 30 | 0.80 | 25 | 0.88 | 1.05 | 0.96 | 0.70 | 28 | 0.96 | 1.35 | 1.12
The Picture of Dorian Gray | 43 | 0.88 | 43 | 0.98 | 1.14 | 1.05 | 0.56 | 27 | 1.00 | 1.12 | 1.06
The Sport of the Gods | 37 | 0.81 | 34 | 0.94 | 1.23 | 1.07 | 0.54 | 28 | 0.96 | 1.50 | 1.17
The Sun Also Rises | 51 | 0.86 | 51 | 0.96 | 1.23 | 1.08 | $-$ | $-$ | $-$ | $-$ | $-$
Mean | 44.5 | 0.85 | 41.45 | 0.86 | 1.18 | 0.99 | 0.60 | 27.55 | 0.98 | 1.33 | 1.12
Table 5: Results of character identification for each novel with BookNLP and GutenTag. ‘# Chars’ is the number of characters in the novel. Other headers are the same as in Table 1.
Novel | # Chars | CR | # Clus | C.Hom | C.Comp | v-score
---|---|---|---|---|---|---
A Room With A View | 63 | 0.78 | 64 | 0.33 | 1.24 | 0.52
The Age of Innocence | 55 | 0.85 | 90 | 0.04 | 1.00 | 0.09
Alice’s Adventures in Wonderland | 51 | 0.80 | 44 | 0.39 | 1.00 | 0.56
Anne of Green Gables | 113 | 0.69 | 98 | 0.24 | 1.04 | 0.40
Daisy Miller | 10 | 0.90 | 3 | 0.00 | 0.00 | 0.00
Emma | 18 | 0.89 | 14 | 0.07 | 1.00 | 0.13
A Handful of Dust | 104 | 0.71 | 85 | 0.26 | 1.00 | 0.41
Howards End | 55 | 0.84 | 72 | 0.18 | 1.08 | 0.31
Night and Day | 50 | 0.88 | 52 | 0.15 | 1.00 | 0.27
Northanger Abbey | 20 | 0.90 | 15 | 0.07 | 1.00 | 0.12
Persuasion | 35 | 0.89 | 36 | 0.06 | 1.00 | 0.11
Pride and Prejudice | 74 | 0.68 | 78 | 0.17 | 1.00 | 0.29
Sense and Sensibility | 24 | 0.83 | 21 | 0.10 | 1.00 | 0.17
The Sign of the Four | 35 | 0.80 | 40 | 0.05 | 1.00 | 0.10
The Awakening | 22 | 0.86 | 24 | 0.12 | 1.00 | 0.22
The Gambler | 27 | 0.74 | 18 | 0.22 | 1.00 | 0.36
The Invisible Man | 31 | 0.84 | 37 | 0.22 | 1.00 | 0.36
The Man Who Was Thursday | 30 | 0.73 | 26 | 0.19 | 1.00 | 0.32
The Mysterious Affair at Styles | 30 | 0.87 | 29 | 0.10 | 1.00 | 0.19
The Picture of Dorian Gray | 43 | 0.86 | 32 | 0.19 | 1.00 | 0.32
The Sport of the Gods | 37 | 0.81 | 43 | 0.12 | 1.00 | 0.21
The Sun Also Rises | 51 | 0.82 | 56 | 0.32 | 1.12 | 0.50
Mean | 44.5 | 0.81 | 44.40 | 0.16 | 1.02 | 0.27
Table 6: Results of character identification for each novel with spaCy. ‘#
Chars’ is the number of characters in the novel. Other headers are the same as
in Table 1.
# Eavesdropping on competing condensates by the edge supercurrent in a Weyl
superconductor
Stephan Kim1,2 Shiming Lei3 Leslie M. Schoop3 R. J. Cava3 N. P. Ong1,§
Departments of Physics1, Electrical and Computer Engineering2 and Chemistry3,
Princeton University, Princeton, NJ 08544, USA
In a topological insulator the metallic surface states are easily
distinguished from the insulating bulk states [1]. By contrast, in a
topological superconductor [2, 3, 4, 5], much less is known about the
relationship between an edge supercurrent and the bulk pair condensate. Can we
force their pairing symmetries to be incompatible? In the superconducting
state of the Weyl semimetal MoTe2, an edge supercurrent is observed as
oscillations in the current-voltage (_I-V_) curves induced by fluxoid
quantization [6]. We have found that the $s$-wave pairing potential of
supercurrent injected from niobium contacts is incompatible with the intrinsic
pair condensate in MoTe2. The incompatibility leads to strong stochasticity in
the switching current $I_{c}$ as well as other anomalous properties such as an
unusual antihysteretic behavior of the “wrong” sign. Under supercurrent
injection, the fluxoid-induced edge oscillations survive to much higher
magnetic fields _H_. Interestingly, the oscillations are either very noisy or
noise-free depending on the pair potential that ends up dictating the edge
pairing. Using the phase noise as a sensitive probe that eavesdrops on the
competing bulk states, we uncover an underlying blockade mechanism whereby
the intrinsic condensate can pre-emptively block proximitization by the Nb
pair potential depending on the history.
_Preliminaries_
As reported in Ref. [6], the edge supercurrent in MoTe2 realizes a toroidal
topology despite the absence of holes in the exfoliated crystals. As $H$ is
slowly increased, fluxoid quantization leads to a sawtooth field profile for
the edge superfluid velocity ${\bf v}_{s}$, which translates to oscillations
of $I_{\rm c}$ that are clearly seen in a color-map plot of $dV/dI$ vs. $H$ (a
fluxoid is a flux quantum $\phi_{0}$ plus the superfluid circulation; Sec. VI
in Supplementary Materials SM). The large number of oscillation cycles ($\sim
90$) allows its phase noise to be analyzed. The analysis reveals that a large
phase noise appears whenever edge-state pairing, driven by Nb, is incompatible
with bulk-state pairing in MoTe2. Incompatibility between the injected
$s$-wave supercurrent and paired states intrinsic to MoTe2 has observable
consequences in both the bulk and edge states. By varying the contact geometry
across 8 devices, we can distinguish edge-state features from those in the
bulk.
Because the interfaces between Nb and MoTe2 have high transmittivity at 20 mK,
our experiment lies in the strong-coupling proximity-effect regime, well
beyond the Josephson effect regime (Sec. V in SM). In strong coupling
junctions, the system (Nb plus MoTe2) is treated as a single superconductor
with a unique $T_{c}$ and a gap function $\hat{\Psi}({\bf r})$ that is
inhomogeneous [7, 8, 9, 10, 11, 12]. In device $SA$, $T_{c}$ ($\simeq 850$ mK)
lies well above the unperturbed $T_{c}$ of pristine MoTe2 (100 mK) although
still far below that of Nb ($\sim 8$ K). Similarly, the critical field and
critical current are greatly enhanced over the values in pristine MoTe2.
To discuss the proximity effect (Sec. VII of SM) [7, 8, 9, 10, 11, 12], we
denote the Gor’kov pair amplitude in Nb by $\eta({\bf
r})=\langle\hat{\psi}_{\downarrow}({\bf r})\hat{\psi}_{\uparrow}({\bf
r})\rangle$ where $\hat{\psi}_{\alpha}({\bf r})$ annihilates an electron in
spin state $\alpha=\uparrow,\downarrow$ at $\bf r$. We call the pair amplitude
in pristine MoTe2 (of unknown symmetry) $F_{\alpha\beta}({\bf
r})=\langle\hat{\psi}_{\alpha}({\bf r})\hat{\psi}_{\beta}({\bf r})\rangle$.
Aside from MoTe2, evidence for edge supercurrents has been reported in Bi
nanowires [13] and in a Kagome superconductor [14].
_Anti-hysteretic behavior of central peak_
Figure 1a shows the color map of the differential resistance $dV/dI$ versus
$H$ and current $I$ measured in device $SA$ at temperature $T=18$ mK. In weak
$H$, supercurrent injection from Nb leads to a large critical current $I_{\rm
c}$ within a narrow central peak (shown shaded black) that reaches values of
$\sim 80\;\mu$A (20$\times$ larger than seen in pristine crystals). An
immediate surprise is the anomalous position of the central peak. In
conventional superconductors, we expect $H$ to cross zero before the flux
density $B$ does. Instead, the central peak here emerges before $H$ reaches
zero for either scan (which seems oddly anti-causal). We dub this curious
pattern “anti-hysteretic.”
When $|H|$ exceeds 10 mT, we observe a dense array of periodic peaks in the
color map (Figs. 1b and 1c). These are the fluxoid-induced oscillations [6],
now persisting to 90 mT (compared with 3 mT in pristine crystals). As shown in
Figs. 1d and 1e, the oscillations appear as narrow peaks in the differential
resistance $(dV/dI)_{0}$ measured at $I=0$. At each peak, ${\bf v}_{s}$
reverses direction to admit a fluxoid [6].
_Stochastic Switching_
The incompatibility between the intrinsic pair condensation amplitude
$F_{\alpha\beta}$ and $\eta$ injected from Nb is most evident in $I$-$V$
curves measured when $H$ lies within the central peak in Fig. 1a. The
switching transition is strongly stochastic (Sec. II in SM). In Fig. 2a, we
plot 100 curves of $dV/dI$ vs. $I$ measured at 18 mK in device $SC$ with $H$
fixed at $-2.5$ mT. The critical currents $\\{I_{\rm c}\\}$ obey a
distribution function $P(I_{\rm c},H)$ that is distinctly bimodal, with a
histogram that shows a major peak at $10.5\;\mu$A and a minor one at 8.5
$\mu$A (see Fig. 2b and Sec. II in SM). In the color map of $dV/dI(H,I)$ (Fig.
1a), the stochastic nature of the distribution is also apparent as random
spikes within the central peak. By contrast, outside the central peak
($H=-7.5$ mT), the distribution $P(I_{\rm c},H)$ becomes very narrow (Fig.
2c). The standard deviation $s(H)$ (obtained from 100 scans at each $H$) is
plotted in Fig. 2d. The profile $s(H)$ displays a narrow maximum at the central
peak, with a value 100$\times$ larger than the baseline (red curve). At 311
mK, however, the peak is absent (blue curve). The stochastic switching
reflects the competition between bulk-state pairing with symmetry $\eta$ or
$F_{\alpha\beta}$.
_Phase Noise and Anti-hysteretic $(dV/dI)_{0}$ Curves_
The competition between $F_{\alpha\beta}$ and $\eta$ has a strong effect on
the phase noise of the oscillations in $(dV/dI)_{0}$. Field scans are called
inbound if $|H|$ decreases with time, and outbound if $|H|$ increases. On
inbound branches (Figs. 1b and 1d), the noise content of the oscillations is
invariably very large, whereas, on outbound branches (Figs. 1c and 1e), the
phase noise is conspicuously suppressed, especially in the interval $5<H<45$
mT, which we call the “quiet” zone.
In Fig. 3 we emphasize the differences by displaying the color map of the
oscillations over the entire field range. On the inbound branch (Fig. 3a), the
noise content is much higher than on the outbound (Fig. 3d). To quantify the
phase noise, we express the fluctuations in the frequency as an $H$-dependent
phase $\theta(H)$ (Sec. III in SM). It is expedient to subtract from
$\theta(H)$ a background $\theta_{0}(H)$ derived from an $H$-linear fit to
$\theta(H)$ in the quiet zone (5, 45) mT (Sec. III in SM). The deviation
$\Delta\theta(H)=\theta(H)-\theta_{0}(H)$ then highlights field intervals with
large phase noise. In Fig. 3b, $\Delta\theta$ on the inbound branch (blue
curve) is seen to be much larger than on the outbound curve (red). In the
quiet zone (5, 45) mT, $\Delta\theta$ is flat by design. We also employ the
smoothed derivative $\langle d\Delta\theta/dH\rangle$ which is more sensitive
to local changes in the frequency. As seen in Fig. 3c, $\langle
d\Delta\theta/dH\rangle$ is much larger on the inbound branch than in the
quiet zone on the outbound.
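The following numpy sketch shows one plausible implementation of this analysis (the authors' actual procedure is in Sec. III of the SM): the oscillation phase is taken from the analytic signal of the detrended $(dV/dI)_{0}$ trace, and a linear background fitted in the quiet zone is subtracted.

```python
import numpy as np
from scipy.signal import hilbert

def phase_deviation(H, dvdi0, quiet=(5e-3, 45e-3), win=51):
    """Detrend (dV/dI)_0, take the unwrapped phase theta(H) of its analytic
    signal, fit the linear background theta_0(H) in the quiet zone, and return
    Delta_theta and its smoothed derivative. Window sizes and field units
    (tesla) are illustrative choices, not the authors' parameters."""
    floor = np.convolve(dvdi0, np.ones(win) / win, mode="same")  # slow background
    theta = np.unwrap(np.angle(hilbert(dvdi0 - floor)))
    mask = (H >= quiet[0]) & (H <= quiet[1])
    slope, intercept = np.polyfit(H[mask], theta[mask], deg=1)   # theta_0(H)
    dtheta = theta - (slope * H + intercept)
    smooth_deriv = np.convolve(np.gradient(dtheta, H), np.ones(win) / win,
                               mode="same")
    return dtheta, smooth_deriv
```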
Aside from the oscillations, $(dV/dI)_{0}$ also displays a dissipationless
regime bracketed by the two fields $B^{\rm in}_{0}$ and $B^{\rm out}_{0}$ (red
arrows in Figs. 1d and 1e). An expanded view of these transitions is shown in
Figs. 4a – 4h for 4 devices. In device $SA$, the transition on the inbound
branch occurs at $-B^{\rm in}_{0}=-54$ mT, larger in magnitude than the field
$B^{\rm out}_{0}$ (= 28 mT) on the outbound branch (blue curves in Fig. 4e).
The reverse sweep (red curves) is the mirror reflection of the blue curve. The
resulting loop is again anti-hysteretic.
_Distinguishing Edge from Bulk Supercurrents_
The observed supercurrents segregate into two groups. In group I (central peak
in Fig. 1a), $I_{\rm c}$ attains very large values ($80-200\;\mu$A) but is
easily suppressed in a weak $H$ ($3-5$ mT). The second group (II) comprises
the weak supercurrents ($1-2\;\mu$A) seen in the oscillations in $(dV/dI)_{0}$ which
survive to fields up to 90 mT (Figs. 1 and 3).
By tailoring the contact designs in 8 devices, we have found that the group II
features are specific to edge states. The oscillation period corresponds to a
fluxoid area ${\cal A}_{\phi}$ bracketed by the Nb contacts (Sec. I in SM).
Its perimeter comprises two segments (length $w$) lying under Nb
contacts, and two segments called “links” (of length $\ell_{1}$ and
$\ell_{2}$) at the uncovered physical edges (inset in Fig. 1a).
In five devices ($SA,SK,SI,SJ,SB$) the Nb contacts are laid down as parallel
strips with spacing $d$ (Table S1, Fig. 4 and Fig. S3 in SM). In the remaining
three ($SF,SH,SD$), the Nb contacts are progressively reduced in volume while
$\ell_{1}$ and $\ell_{2}$ are greatly lengthened. These changes strongly
attenuate group II features.
In Fig. 4, we show two parallel-strip devices, $SA$ and $SI$ (Panels a and b,
respectively), and two devices, $SH$ and $SD$, in the second series (c and d,
respectively). The corresponding $(dV/dI)_{0}$ curves are displayed in Figs.
4e-4h, together with the color maps in Figs. 4i-4j. Group II features are
prominently seen in $SA$ and $SI$ where we have $2w\gg\ell_{1}=\ell_{2}=d$. As
we increase $d$, the widths of the antihysteretic loops decrease. In devices
$SH$ (with $\ell_{1}$ = 5,420 and $\ell_{2}$ = 5,710 nm) and $SD$ ($\ell_{1}$
= 8,030 nm and $\ell_{2}$ = 17,670 nm), the edge oscillations and
antihystereses are completely suppressed (aside from residual wings in $SH$).
Data from 4 other samples are shown in Fig. S3 in SM.
In devices $SA$ and $SI$, supercurrent in the bulk flows in parallel with
supercurrent in the links if $|H|$ is weak. When $|H|$ exceeds $\sim$4 mT, the
bulk supercurrent is suppressed but the link supercurrents survive to large
$H$ because they are one-dimensional. This robustness enables fluxoid
oscillations to be observed up to 90 mT. To realize this, however, we need
$\ell_{1}$ and $\ell_{2}$ to be shorter than the decay length $\lambda$ ($\sim
800$ nm) that governs the decay of the pair amplitude $\eta$ along a link
(inset in Fig. 1a). If $\ell_{1}$, $\ell_{2}$ $>\lambda$ (the case in SH and
SD), $\eta$ decays to zero before the loop is closed, and the oscillations
vanish (Fig. 4g,h). The vanishing of the closed-loop supercurrent is also
evident in $(dV/dI)_{0}$ vs. $H$, which stays finite except when $|H|<$ 2 mT.
These findings establish that the group II features arise from edge
supercurrents at the links.
By comparison, the much larger $I_{c}$ values of the group I supercurrents
(central peak in Fig. 1a) suggest a bulk origin. The 20-fold decrease in the
peak value $I_{\rm c,max}$ as $d$ increases from 156 to 700 nm also implies
bulk states proximitized by $\eta$ (Table S1 in SM). The decrease is
consistent with the short coherence length of Nb ($\xi_{0}\sim$ 40 nm). By
contrast, at the links, $\lambda\gg\xi_{0}$.
_Proximity Effect Between Competing Symmetries_
In de Gennes’ treatment of the proximity effect (valid in the linear-gap
regime) the amplitude for a Cooper pair to propagate between points ${\bf
r}^{\prime}$ and ${\bf r}$ is given by the kernel $K({\bf r,r}^{\prime})$ [7,
9, 12] (Sec. VII in SM). Whenever a propagator segment lies in Nb, $K({\bf
r,r}^{\prime})$ gains a large enhancement from the strong $s$-channel
attraction. Hence the gap function $\hat{\Psi}({\bf r})$ in MoTe2 can adopt
either the symmetry of $F_{\alpha\beta}$ or that of $\eta$ depending on the
weighted contributions of propagators going from all points $\bf
r^{\prime}\to\bf r$ (Eq. S23 in SM). An applied field $H$ can tip the balance
by selectively weakening one or the other condensation amplitude. Calculations
beyond the linear-gap regime show how the gap function changes symmetry in a
junction between an $s$\- and a $p$-wave superconductor [10].
At $H=0$, the weighted average favors $F_{\alpha\beta}$. Vortices inserted by
$H$ initially form a vortex solid. With increasing $H$, melting of the vortex
solid leads to a dissipative vortex liquid in which the gap modulus
$|\hat{\Psi}|$ remains finite but its phase fluctuates strongly due to vortex
motion. In the vortex liquid, which survives to 80 mT, the loss of long-range
phase coherence substantially weakens the pair amplitude $F_{\alpha\beta}$
relative to $\eta$. Conversely, solidification of the vortex system strongly
tilts the competition towards the intrinsic $F_{\alpha\beta}$.
_Correlating central peak, phase noise and antihysteresis_
In each device, edge-state pairing at the links may be driven either by
$F_{\alpha\beta}$ or $\eta$ from the nearest Nb contact. From the phase noise
analysis, we infer that compatibility between edge-pairing and bulk-pairing
produces oscillations that are nearly noise-free. By contrast, incompatibility
between $s$-wave pairing at the edge and $F_{\alpha\beta}$ in the bulk
generates substantial phase noise.
Combining the group I and II features with the phase noise measured by
$\Delta\theta$ and $\langle d\Delta\theta/dH\rangle$ in device $SA$, we arrive
at a consistent picture of the antihysteretic behavior. The color map of
$dV/dI$ and the trace of $(dV/dI)_{0}$ at 18 mK are displayed in Figs. 5a and
b, respectively. Starting at $-100$ mT, the phase noise is initially small, as
seen in $\langle d\Delta\theta/dH\rangle$ (blue curve in Fig. 5c). This
implies that edge pairing is driven by the Nb contacts while all pairing of
bulk states is strongly suppressed. When $H$ reaches $-80$ mT, we observe a
surge onset in phase noise which we take as evidence for incompatibility with
the vortex liquid that appears at $-80$ mT. The phase noise remains large over
the entire interval ($-80$, $-12$) mT (red curve in 5c).
When $H$ crosses $-5$ mT, proximitization of the bulk states driven by $\eta$
becomes strongly favored ($H$ becomes too weak to depair the bulk $s$-wave
condensate). The abrupt emergence at $-4$ mT of a large bulk supercurrent (the
central peak) signals when $\hat{\Psi}$ switches to $s$-wave pairing (Figs.
5a), but this favorable interval is brief. When $H$ reaches $-2$ mT,
solidification of the vortex liquid (and the ensuing long-range phase
coherence) cause $\hat{\Psi}({\bf r})$ to switch back to the intrinsic pairing
symmetry $F_{\alpha\beta}$. Suppression of all $s$-wave paired regions
throughout the crystal collapses the central peak _before_ $H$ reaches $0^{-}$.
Hence the placement of the central peak is antihysteretic as observed.
_Blockade_
When the field crosses zero to the outbound branch ($H>0$), we should expect
to see the re-appearance of the central peak in the interval (2, 4) mT.
However, this is not observed in any of the 8 devices studied.
Together, the absence of the central peak for $H>0$ and the noise suppression
in the interval $(5,45)$ mT suggest a “blockade” mechanism. Once $\hat{\Psi}$
switches to the intrinsic symmetry $F_{\alpha\beta}$ in the limit $H\to
0^{-}$, the intrinsic condensate appears to block re-proximitization of the
bulk by $\eta$ on the outbound branch throughout the quiet zone which extends
to a field that we call $B^{\flat}\sim$ 45 mT (shaded lilac in Figs. 5a-c).
Edge pairing by Nb is blocked within the quiet zone.
The identification of $(dV/dI)_{0}$ with supercurrents at the edge clarifies
considerably the anti-hysteretic loops in Figs. 4e and 4f. As shown by the
green curve in Fig. 5b, the dissipationless interval for the edge current
spans the interval $(-B^{\rm in}_{0},B^{\rm out}_{0})=(-54,28)$ mT for a left-
to-right field scan. The interval $(-B^{\rm in}_{0},B^{\rm out}_{0})$ is
shifted left because, on the inbound branch, the edges are proximitized by the
strong Nb pair potential, whereas, on the outbound, edge pairing is driven by
the much weaker $F_{\alpha\beta}$ while $\eta$-pairing is excluded by the
blockade. Likewise, in a right-to-left scan, $(-B^{\rm out}_{0},B^{\rm
in}_{0})$ is shifted right. This accounts for the anti-hysteretic sign of the
loops in $(dV/dI)_{0}$.
_Raising the temperature_
Figure 6a shows the blockade region (shaded maroon) inferred from left-to-
right field scans taken at elevated temperatures 378, 532 and 722 mK (Fig. 6b
is for the opposite field scan). The corresponding color maps of $dV/dI$ are
displayed in Figs. 6c, d and e. For the results at 532 mK, we also show the
trace of $(dV/dI)_{0}$ (Fig. 6f) and the phase noise measured by
$\Delta\theta$ and $\langle d\Delta\theta/dH\rangle$ (Fig. 6g). The overall
patterns are similar to those taken at 18 mK (Fig. 5) except that raising $T$
decreases the field scales. The asymmetry of the phase noise, including its
suppression in the blockade interval, is apparent in Figs. 6f and 6g. We find
that the widths of the antihysteretic loops in $(dV/dI)_{0}$, the edge
dissipationless interval $(-B^{\rm in}_{0},B^{\rm out}_{0})$ and $B^{\flat}$
all decrease systematically as $T$ increases, reaching zero at the device
$T_{c}\simeq 850$ mK. The linear decrease of $B^{\flat}(T)$ as $T\to T_{c}$ is
shown in Fig. S9. These trends are consistent with the key role played by
$F_{\alpha\beta}$ in generating the antihysteresis, phase noise and the
blockade.
_Hysteretic solid-to-liquid transition_
If we inspect the blockade region in the $H$-$T$ plane (shaded maroon in Figs.
6a,b), we find that its placement mimics that of a vortex solid phase that is
field-shifted by hysteresis. We conjecture that the blockade mechanism and
quiet zone are features intrinsic to the vortex solid. If $H$ is swept from
left to right (Fig. 6a), the liquid-to-solid transition is “delayed” on the
inbound branch until $H$ reaches $-2$ mT, analogous to supercooling of a
liquid (regarding $|H|$ as an effective temperature). On the outbound branch,
the solid-to-liquid transition is also delayed and shifted to 45 mT. Hence the
observed vortex solid phase is displaced to the right as shown in Fig. 6a. For
the opposite scan, the shift is to the left. We note that the supercooling
hysteresis has the conventional sign. However, the blockade mechanism and
incompatibility between condensates together displace the bulk and edge
supercurrent responses in the opposite direction, which leads to the
antihysteresis.
## References
* [1] Liang Fu and C. L. Kane, Topological insulators with inversion symmetry, Phys. Rev. B 76, 045302 (2007).
* [2] Liang Fu and C. L. Kane, Superconducting Proximity Effect and Majorana Fermions at the Surface of a Topological Insulator, Phys. Rev. Lett. 100, 096407 (2008). DOI: 10.1103/PhysRevLett.100.096407
* [3] Xiao-Liang Qi, Taylor L. Hughes, S. Raghu and Shou-Cheng Zhang, Time reversal invariant topological superconductors and superfluids in two and three dimensions, Phys. Rev. Lett. 102, 187001 (2009). DOI: 10.1103/PhysRevLett.102.187001
* [4] Liang Fu and Erez Berg, Odd-parity topological superconductors: theory and application to CuxBi2Se3. Phys. Rev. Lett. 105, 097001 (2010). DOI: 10.1103/PhysRevLett.105.097001
* [5] Yang Peng, Falko Pientka, Erez Berg, Yuval Oreg, and Felix von Oppen, Signatures of topological Josephson junctions, Phys. Rev. B94, 085409 (2016). DOI: 10.1103/PhysRevB.94.085409
* [6] Wudi Wang, Stephan Kim, Minhao Liu, F. A. Cevallos, R. J. Cava, and N. P. Ong, Evidence for an edge supercurrent in the Weyl superconductor MoTe2, _Science_ 368, 534-537 (2020). 10.1126/science.aaw9270
* [7] P. G. de Gennes, Boundary effects in superconductors, _Rev. Mod. Phys._ 36, 235 (1964).
* [8] P. G. de Gennes, _Superconductivity of Metals and Alloys_ (Addison Wesley, 1989), Ch. 7.
* [9] G. Deutscher and P. G. de Gennes, Proximity Effects, in _Superconductivity, Vol. II_ ed. Parks (Taylor and Francis 1969), Ch. 17.
* [10] B. Ashauer, G. Kieselmann and D. Rainer, On the proximity effect in unconventional superconductors, _J. Low Temp. Phys._ 63, 349 (1986).
* [11] A. Millis, D. Rainer and J. A. Sauls, Quasiclassical theory of superconductivity near magnetically active interfaces, _Phys. Rev. B_ 38, 4504 (1988).
* [12] Manfred Sigrist and Kazuo Ueda, Phenomenological theory of unconventional superconductivity, _Rev. Mod. Phys._ 63, 23 (1991).
* [13] Anil Murani, Alik Kasumov, Shamashis Sengupta, Yu A. Kasumov, V.T. Volkov, I.I. Khodos, F. Brisset, Raphaelle Delagrange, Alexei Chepelianskii, Richard Deblock, Helene Bouchiat and Sophie Gueron, Ballistic edge states in Bismuth nanowires revealed by SQUID interferometry, _Nat. Commun._ 8, 15941 (2017). doi: 10.1038/ncomms15941.
* [14] Yaojia Wang, Shuo-Ying Yang, Pranava K. Sivakumar, Brenden R. Ortiz, Samuel M.L. Teicher, Heng Wu, Abhay K. Srivastava, Chirag Garg, Defa Liu, Stuart S. P. Parkin, Eric S. Toberer, Tyrel McQueen, Stephen D. Wilson, Mazhar N. Ali, Proximity-induced spin-triplet superconductivity and edge supercurrent in the topological Kagome metal, K1-xV3Sb5, cond-mat arXiv: 2012.05898.
* * *
§Corresponding author email<EMAIL_ADDRESS>
Data availability
The data in the plots will be uploaded to Harvard DataVerse.
Acknowledgements
We have benefitted from discussions with Wudi Wang and Zheyi Zhu. N.P.O. and
S.K. were supported by the U.S. Department of Energy (DE-SC0017863). The
crystal growth effort was supported by a MRSEC grant from the U.S. National
Science Foundation (NSF DMR-2011750), which supported S.L., L.M.S., R.J.C. and
N.P.O. The Gordon and Betty Moore Foundation’s EPiQS initiative provided
generous support via grants GBMF9466 (N.P.O.) and GBMF9064 (L.M.S.).
Author contributions
S.K. and N.P.O. conceptualized and designed the experiment. S.K. performed all
device fabrication and measurements. The crystals were grown by S.L., L.M.S.
and R.J.C. Analysis of the data was carried out by S.K. and N.P.O., who
jointly wrote the manuscript.
Competing financial interests
The authors declare no competing financial interests.
Additional Information
Supplementary Materials is available in the online version of the paper.
Correspondence and requests for materials should be addressed to N.P.O.
Figure 1: Anti-hysteretic central peak and edge oscillations observed in
MoTe2 measured by supercurrent injection from Nb pads in device $SA$ at 18 mK.
Panel (a) shows the color map of the differential resistance $dV/dI$ plotted
versus magnetic field $H$ and current $I$. Dissipationless regions are shown
in black (color scale at left). The central peak occurs at -3 or +3 mT when
$H$ is scanned from left to right or right to left, respectively. The inset
defines the width $w$ and spacing $d$ of the device. The decays of the Nb
pairing amplitudes in the bulk and along an edge are sketched. Panels (b) and
(c) display the fluxoid-induced edge oscillations on the inbound ($d|H|/dt<0$)
and outbound branches ($d|H|/dt>0$), respectively. Panels (d) and (e) plot the
zero-bias differential resistance $(dV/dI)_{0}$ vs. $H$ on the inbound and
outbound branches, respectively. The phase noise is much larger in (d) than in
(e). The transition to dissipationless behavior ($(dV/dI)_{0}\to 0$) occurs at
$-B^{\rm in}_{0}=-54$ mT and $B^{\rm out}_{0}=+28$ mT in (d) and (e),
respectively (red arrows). In each panel, the green arrow indicates field scan
direction.

Figure 2: Stochastic switching in bulk states measured in device
SC. When $H$ lies within the central peak ($H$ = $-2.5$ mT, Panel a), the
switch from dissipationless to dissipative behavior is stochastic. The
critical current in 100 curves of $dV/dI$ vs. $I$ shows bunching of $I_{c}$
suggestive of a bimodal distribution $P(I_{\rm c},H)$. In Panel b, the
histogram plot of $P(I_{\rm c},H)$ confirms the bimodal distribution. However,
when $H$ lies outside the central peak ($-7.5$ mT), the switching transitions
are non-stochastic. In all 100 scans the transition occurs at $1.27\pm
0.01\;\mu$A (7 are shown in Panel c). Panel d plots the standard deviation $s$
vs. $H$ of the distribution $P(I_{\rm c},H)$ at 18 mK (Sec. II in SM). At the
peak, $s$ is a 100$\times$ larger than its value outside the peak. At 311 mK,
the peak is unresolved. Figure 3: Nature of the phase noise in the fluxoid-
induced oscillations over the full field interval in Sample $SA$ at 18 mK. The
four strips in Panel (a) show the observed color map of $dV/dI$ in the $H$-$I$
plane as $H$ is scanned from -90 mT to 0 mT (inbound branch). Initially,
($-90<H<-80$ mT) the phase noise is quite small. Between -80 mT and -12 mT,
the emergence of large phase noise strongly distorts the oscillations. The
noise is dominated by 2$\pi$ jumps in the phase $\theta(H)$ which lead to
random compressions and dilations of the period. We attribute the noise to
incompatibility between edge pairing induced by $\eta$ (Nb) and the intrinsic
pair amplitude $F_{\alpha\beta}$ in the vortex-liquid state of MoTe2. Panel
(b) compares the curves of the phase deviation $|\Delta\theta(H)|$ in the
inbound branch (blue) with the outbound (red).
$\Delta\theta(H)=\theta(H)-\theta_{0}(H)$ measures the phase deviation caused
by random phase jumps accumulated over the entire field interval (see text).
Panel (c) compares the smoothed derivative $\langle d\Delta\theta/dH\rangle$
between the inbound (blue) and outbound branches (red). The large phase noise
in the inbound branch causes $\langle d\Delta\theta/dH\rangle$ to lie well
above the outbound, especially between 5 and 45 mT. The four strips in Panel
(d) show the color map of $dV/dI$ as $H$ is swept from 0 to 90 mT (outbound
branch). Over the “quiet” zone (5 to 45 mT), the phase noise is negligible
except for isolated 2$\pi$ jumps (at 13.4, 35 and 36.5 mT). In all panels the
green arrow indicates field scan direction.

Figure 4: Systematic suppression
of edge oscillations and anti-hysteresis in four devices ($SA,SI,SH$ and $SD$
in Panels a,…, d, respectively). $SA$ and $SI$ (with spacing $d$ = 156 and 285
nm, respectively) are examples from a series of 5 parallel-strip devices. $SH$
and $SD$ are from a second series in which the Nb contact volumes are
decreased while $\ell_{1}$ and $\ell_{2}$ are progressively increased. The
middle row (Panels e,…,h) shows the corresponding antihystereses in
$(dV/dI)_{0}$ vs. $H$ (blue and red arrows indicate field scan directions).
The corresponding color maps of $dV/dI$ vs. $H$ and $I$ are in the bottom row
(i,…, l). In devices $SA$ and $SI$, prominent antihysteresis and edge
oscillations are seen, both in $(dV/dI)_{0}$ (Panels e, f) and their color
maps (i, j). In devices $SH$ and $SD$, these group II features are suppressed
or absent in $(dV/dI)_{0}$ (Panels g and h) and in the color maps (k and l).
See Table S1 for parameters in the 8 devices and Fig. S3 for results from
devices $SK,SJ,SB,SF$.

Figure 5: Correlating the central peak, phase noise
and antihysteresis measured in device $SA$ as $H$ is swept from $-100$ to
$+100$ mT at 18 mK. Panel (a) shows the color map of $dV/dI$, highlighting the
low-$I$ region. Panel (b) plots the oscillations in the zero-bias
$(dV/dI)_{0}$ (grey curve) together with its floor value (thick green curve).
The dissipationless interval for $(dV/dI)_{0}$ lies between $-B^{\rm
in}_{0}=-54$ mT and $B^{\rm out}_{0}=28$ mT (red arrows). Panel (c) plots the
phase noise measured by $|\Delta\theta|$ (red curve) and $\langle
d\Delta\theta/dH\rangle$ (blue). Within the quiet zone extending to
$B^{\flat}$ on the outbound branch (shaded lilac), the phase noise is
suppressed. We infer the existence of a blockade mechanism in this interval.
In each panel the green arrow indicates the field-scan direction.

Figure 6:
Color maps, zero-bias $(dV/dI)_{0}$ and phase noise at elevated temperatures
in device $SA$. Panels (a) and (b) display the inferred metastable states in
the $H$-$T$ plane, for left-to-right and right-to-left field scans,
respectively. The regions where the blockade mechanism operates are shaded
maroon. Thin blue stripes indicate regions in which the central peak emerges
(note the antihysteretic placement). On the outbound branch in (a), the maroon
region extends to $B^{\flat}$ ($\sim 45$ mT) at 18 mK. The dissipative vortex
liquid (orange and yellow regions) extends to $\pm 80$ mT. The device $T_{c}$
is 850 mK. Panels (c), (d) and (e) display color maps of $dV/dI$ measured at
722, 378 and 532 mK, respectively. Panel (f) plots the oscillations in the
zero-bias $(dV/dI)_{0}$ (grey curve) and its floor value (green curve). Panel
(g) displays the phase noise measured by $\Delta\theta$ (red curve) and
$\langle d\Delta\theta/dH\rangle$ (blue). The lilac regions in Panels (f) and
(g) represent the quiet zone in which phase noise is minimal. As $T\to T_{c}$,
the field scales $B^{\rm in}_{0}$, $B^{\rm out}_{0}$ and $B^{\flat}$ decrease
to zero. In all panels, green arrows indicate the field-scan direction.
Methods
## I Device fabrication
The MoTe2 devices were fabricated following standard nanofabrication
procedures. Substrates with pre-patterned alignment markers were prepared in
advance. The alignment markers were made of 5 nm of Ti and 50 nm of Au. MoTe2
microflakes were mechanically exfoliated onto the substrates using Nitto
Denko SPV 5057A5 Surface Protection Tape. Microflakes of adequate size with
sharp edges were chosen using an optical microscope.
Niobium electrodes were made following the standard PMMA-copolymer bilayer
recipe. After the resists were spun on top of substrates, contacts were
patterned using EBPG 5150 from Raith. The devices were developed with MIBK-IPA
(1:3) solution for 60 seconds and rinsed with IPA solution. The devices were
then transferred to an EvoVac Sputtering System (Angstrom Engineering). An
_in-situ_ Ar plasma beam bombarded the surfaces of the devices for 120 seconds
to remove the top few layers of MoTe2 that were oxidized. A thin (3 nm) Ti
sticking layer was sputtered at a rate of $0.1-0.2$ nm/s, followed by 100 nm
of Nb deposited at 1 nm/s. Finally, 5 nm of Au was sputtered on top of the Nb
layer to protect it from oxidation. All sputtering
procedures were performed at a high vacuum of approximately $10^{-8}$ mbar.
## II Measurement techniques
Measurements were carried out in a top-loading dilution refrigerator (Oxford
Instruments Kelvinox TLM). Once loaded to the fridge, devices were immersed in
the circulating 3He-4He mixture in the mixing chamber. The base temperature of
the fridge was $T\sim 20$ mK. Three filters were used to reduce noise during
measurements. An $LC$ $\pi$-filter, located at the top of the probe, provided
80-dB suppression at frequencies $f>100$ MHz. The other two filters were
located in the mixing chamber. The first was a sintered metal-powder filter
consisting of Cu particles $30-60\;\mu$m in diameter, which suppressed stray
electromagnetic radiation at frequencies $f>1$ GHz. The second was a low-pass
$RC$ filter with a cutoff frequency of $f=300$ kHz. A NbTi superconducting
magnet, driven by a Keithley 2400 and a Keithley 6221, applied magnetic fields
to the devices. The smallest field-step size was a few $\mu$T.
Differential resistances of all devices were measured using an SR830 lock-in
amplifier. A DC bias from a Keithley 2400 voltage source was superposed with a
small AC excitation voltage from the lock-in amplifier through a home-made
voltage adder. The resulting voltage was converted to a current through a
buffer resistor whose resistance was orders of magnitude larger than that of
the device of interest. The voltage signal $V$ across the device first passed
through a pre-amplifier, with a gain typically on the order of $G\sim 1000$,
to improve the signal-to-noise ratio. The amplified signal then reached the
lock-in amplifier for detection. Measurements were done in a quasi-four-point
fashion: although the voltage and current contacts shared the same electrodes,
the electrodes were superconducting at 20 mK, so they contributed no spurious
series resistance.
The differential resistance plots in the main text were obtained through the
following procedures: first, the magnetic field was set to the desired value.
Then the DC bias was ramped from zero to the desired bias value with a small
step size. After reaching the desired bias, the current was ramped back to
zero with a much larger step size. This procedure was repeated over the
desired field range. The collected differential resistance curves were plotted
together. The differential resistance versus field at zero bias $(dV/dI)_{0}$
plots were prepared as follows: starting with the field at zero, we swept $H$
to its maximum negative value then to the maximum positive value before
terminating at zero.
# Relative equilibria in curved restricted 4-body problems
###### Abstract.
We consider the curved 4-body problems on spheres and hyperbolic spheres.
After obtaining a criterion for the existence of quadrilateral configurations
on the equator of the sphere, we study two restricted 4-body problems, one in
which two masses are negligible, and another in which only one mass is
negligible. In the former we prove the evidence square-like relative
equilibria, whereas in the latter we discuss the existence of kite-shaped
relative equilibria.
Florin Diacu1,2,3 and Sawsan Alhowaity3,4
1Yangtze Center of Mathematics, Sichuan University, Chengdu, China
2Yale-NUS College, National University of Singapore, Singapore
3Department of Mathematics and Statistics, University of Victoria, Canada
4Department of Mathematics, University of Shaqra, Saudi Arabia
<EMAIL_ADDRESS><EMAIL_ADDRESS>
## 1\. Introduction
The classical $N$-body problem has a long history. Isaac Newton first proposed
it in 1687 in his first edition of Principia in the context of the Moon’s
motion. He assumed that universal gravitation acts between celestial bodies
(reduced to point masses) in direct proportion with the product of the masses
and in inverse proportion with the square of the distance. The study of the
$N$-body problem was further advanced by the Bernoullis, Lagrange, Laplace,
Euler, Cauchy, Jacobi, Dirichlet, Poincaré and many others.
The idea of extending the gravitational force between point masses to spaces
of constant curvature occurred soon after the discovery of hyperbolic
geometry. In the 1830s, independently of each other, Bolyai and Lobachevsky
realized that there must be an intimate connection between the laws of physics
and the geometry of the universe, [2], [29], [25]. A few years earlier, Gauss
had interpreted Newton’s gravitational law as stating that the attracting
force between bodies is inversely proportional with the area of the sphere of
radius equal to the distance between the point masses (i.e. proportional to
$1/r^{2}$, where $r$ is the distance). Using this idea, Bolyai and Lobachevsky
suggested that, should space be hyperbolic, the attracting force between
bodies must be inversely proportional to the hyperbolic area of the
corresponding hyperbolic sphere (i.e. proportional to
$1/\sinh^{2}(|\kappa|^{1/2}r)$, where $r$ is the distance and $\kappa<0$ the
curvature of the hyperbolic space). This is equivalent to saying that, in
hyperbolic space, the potential that describes the gravitational force is
proportional to $\coth(|\kappa|^{1/2}r)$.
The above analytic expression of the potential was first introduced by
Schering, [33], [34], and then extended to elliptic space by Killing, [22–24].
But with no physical ways of checking the validity of this generalization of
the gravitational force, it was unclear whether the cotangent potential had
any physical meaning, the more so since Lipschitz had proposed a different
extension of the law, which turned out to be short lived, [28]. The
breakthrough came at the dawn of the 20th century when Liebmann made two
important discoveries, [26], [27]. He showed that two basic properties of the
Newtonian potential are also satisfied by the cotangent potential: (1) in the
Kepler problem, which studies the motion of one body around a fixed centre,
the potential is a harmonic function (i.e. a solution of the Laplace equation
in the Euclidean case, but of the Laplace-Beltrami equation in the non-flat
case); (2) in both the flat and the non-flat case, all bounded orbits of the
Kepler problem are closed, a property discovered by Bertrand for the Newtonian
law, [1]. These similarities between the flat and the curved problem convinced
the scientific community that the cotangent potential was the natural way to
express gravity in spaces of constant curvature.
The curved $N$-body problem became somewhat neglected after the birth of
general relativity, but was revived after the discretization of Einstein’s
equation showed that an $N$-body problem in spaces of variable curvature is
too complicated to be treated with analytical tools. In the 1990s, the Russian
school of celestial mechanics considered both the curved Kepler and the curved
2-body problem, [24], [35]. After understanding that, unlike in the Euclidean
case, these problems are not equivalent, the latter failing to be integrable,
[35], the 2-body case was intensively studied by several researchers of this
school. More recently, the work of Diacu, Santoprete, and Pérez-Chavela
considered the curved $N$-body problem for $N>2$ in a new framework, leading
to many interesting results, [3–20], [32]. Other researchers developed these
ideas further, [20], [30], [31], [36–39], and the problem is growing in
popularity.
In this short note we prove three results. The first is a criterion for the
existence of quadrilateral relative equilibria on the equator of the sphere.
The second shows that if two masses are negligible and the other two are
equal, then square-like relative equilibria exist on spheres,
but—surprisingly—not on hyperbolic spheres. The element of surprise arises
from the fact that, in the general problem, square-like equilibria exist both
on the hyperbolic sphere and on the sphere (except for the case when they are
on the equator), [5]. Finally we prove that if only one mass is negligible and
the other three are equal, some kite-shaped relative equilibria exist on
spheres, but not on hyperbolic spheres.
## 2. Equations of motion
We consider the motion of four bodies on 2-dimensional surfaces of constant
curvature $\kappa$, namely spheres $\mathbb{S}_{\kappa}^{2}$ for $\kappa>0$,
the Euclidean plane $\mathbb{R}^{2}$ for $\kappa=0$, and hyperbolic spheres
$\mathbb{H}_{\kappa}^{2}$ for $\kappa<0$. We will arrange these objects in
$\mathbb{R}^{3}$ such that they all have a common point at which lie all the
north poles of the spheres and the vertices of the hyperbolic spheres, to all
of which the plane $\mathbb{R}^{2}$ is tangent. If we fix the origin of a
coordinate system at this point, then we can write
$\mathbb{S}_{\kappa}^{2}:=\{(x,y,z)\;|\;\kappa(x^{2}+y^{2}+z^{2})+2\kappa^{1/2}z=0\}\ \ {\rm for}\ \ \kappa>0,$
$\mathbb{H}_{\kappa}^{2}:=\{(x,y,z)\;|\;\kappa(x^{2}+y^{2}-z^{2})+2|\kappa|^{1/2}z=0,\ z\geq 0\}\ \ {\rm for}\ \ \kappa<0.$
Consider now four point masses, $m_{i}>0,\ i=1,2,3,4$, whose position vectors,
velocities, and accelerations are given by
${\bf r}_{i}=(x_{i},y_{i},z_{i}),\ \dot{\bf
r}_{i}=(\dot{x}_{i},\dot{y}_{i},\dot{z}_{i}),\ \ddot{\bf
r}_{i}=(\ddot{x}_{i},\ddot{y}_{i},\ddot{z}_{i}),\ i=1,2,3,4.$
Then, as shown in [9], the equations of motion take the form
$\begin{cases}\ddot{x}_{i}=\sum_{j=1,j\neq
i}^{N}\frac{m_{j}\Big{[}{x}_{j}-\Big{(}1-\frac{\kappa
r_{ij}^{2}}{2}\Big{)}{x}_{i}\Big{]}}{\Big{(}1-\frac{\kappa
r_{ij}^{2}}{4}\Big{)}^{3/2}r_{ij}^{3}}-\kappa(\dot{\bf r}_{i}\cdot\dot{\bf
r}_{i})x_{i}\\\ \ddot{y}_{i}=\sum_{j=1,j\neq
i}^{N}\frac{m_{j}\Big{[}{y}_{j}-\Big{(}1-\frac{\kappa
r_{ij}^{2}}{2}\Big{)}{y}_{i}\Big{]}}{\Big{(}1-\frac{\kappa
r_{ij}^{2}}{4}\Big{)}^{3/2}r_{ij}^{3}}-\kappa(\dot{\bf r}_{i}\cdot\dot{\bf
r}_{i})y_{i}\\\ \ddot{z}_{i}=\sum_{j=1,j\neq
i}^{N}\frac{m_{j}\Big{[}{z}_{j}-\Big{(}1-\frac{\kappa
r_{ij}^{2}}{2}\Big{)}{z}_{i}\Big{]}}{\Big{(}1-\frac{\kappa
r_{ij}^{2}}{4}\Big{)}^{3/2}r_{ij}^{3}}-(\dot{\bf r}_{i}\cdot\dot{\bf
r}_{i})(\kappa z_{i}+\sigma|\kappa|^{1/2}),\ i=1,2,3,4,\end{cases}$
where $\sigma=1$ for $\kappa\geq 0$, $\sigma=-1$ for $\kappa<0$, and
$r_{ij}:=\begin{cases}[(x_{i}-x_{j})^{2}+(y_{i}-y_{j})^{2}+(z_{i}-z_{j})^{2}]^{1/2}\
\ {\rm for}\ \ \kappa\geq
0\cr[(x_{i}-x_{j})^{2}+(y_{i}-y_{j})^{2}-(z_{i}-z_{j})^{2}]^{1/2}\ \ {\rm
for}\ \ \kappa<0\cr\end{cases}$
for $i,j\in\\{1,2,3,4\\}$. The above system has eight constraints, namely
$\kappa(x_{i}^{2}+y_{i}^{2}+\sigma z_{i}^{2})+2|\kappa|^{1/2}z_{i}=0,$
$\kappa{\bf r}_{i}\cdot\dot{\bf r}_{i}+|\kappa|^{1/2}\dot{z}_{i}=0,\qquad i=1,2,3,4.$
If satisfied at an initial instant, these constraints are satisfied for all
time because the sets $\mathbb{S}_{\kappa}^{2},\mathbb{R}^{2}$, and
$\mathbb{H}_{\kappa}^{2}$ are invariant for the equations of motion, [5].
Notice that for $\kappa=0$ we recover the classical Newtonian equations of the
$4$-body problem on the Euclidean plane, namely
$\ddot{\bf r}_{i}=\sum_{j=1,j\neq i}^{N}\frac{m_{j}({\bf r}_{j}-{\bf
r}_{i})}{r_{ij}^{3}},$
where ${\bf r}_{i}=(x_{i},y_{i},0),\ i=1,2,3,4.$
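For readers who wish to experiment with these equations numerically, the following is a minimal sketch (ours, not from the original paper) of the right-hand side above; positions and velocities are $4\times 3$ arrays, the $\sigma$-signature enters both $r_{ij}$ and the velocity product, and setting $\kappa=0$ recovers the Newtonian case:

```python
import numpy as np

def accelerations(pos, vel, masses, kappa):
    """Sketch of the curved N-body right-hand side written above.
    pos, vel: (N, 3) arrays; masses: length-N array; kappa: curvature."""
    N = len(masses)
    sigma = 1.0 if kappa >= 0 else -1.0
    acc = np.zeros_like(pos)
    for i in range(N):
        # velocity "square" with the same sigma-signature as r_ij
        v2 = vel[i, 0]**2 + vel[i, 1]**2 + sigma * vel[i, 2]**2
        for j in range(N):
            if j == i:
                continue
            d = pos[i] - pos[j]
            rij2 = d[0]**2 + d[1]**2 + sigma * d[2]**2
            rij = np.sqrt(rij2)
            denom = (1.0 - kappa * rij2 / 4.0)**1.5 * rij**3
            acc[i] += masses[j] * (pos[j] - (1.0 - kappa * rij2 / 2.0) * pos[i]) / denom
        acc[i] -= kappa * v2 * pos[i]                  # curvature term in x, y, z
        acc[i, 2] -= sigma * np.sqrt(abs(kappa)) * v2  # extra term in the z-equation
    return acc
```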
## 3. Relative equilibria
Relative equilibria are solutions for which the relative distances remain
constant during the motion. We first introduce some coordinates
$(\varphi,\omega)$, which were originally used in [9] for the case $N=3$, to
detect relative equilibria on and near the equator of
$\mathbb{S}_{\kappa}^{2}$, where $\varphi$ measures the angle from the
$x$-axis in the $xy$-plane, while $\omega$ is the height on the vertical
$z$-axis. In these new coordinates, the constraints become
$x_{i}^{2}+y_{i}^{2}+\omega_{i}^{2}+2\kappa^{-1/2}\omega_{i}=0,\hskip
5.69046pti=1,2,3,4.$
With the notation,
$\Omega_{i}=x_{i}^{2}+y_{i}^{2}=-\kappa^{-1/2}\omega_{i}(\kappa^{1/2}\omega_{i}+2)\geq
0,\ \omega_{i}\in[-2\kappa^{-1/2},0],\ i=1,2,3,4,$
where equality occurs when the body is at the North or the South Pole of the
sphere, the $(\varphi,\omega)$-coordinates are given by the transformations
$x_{i}=\Omega_{i}^{1/2}\cos\varphi_{i},\
y_{i}=\Omega_{i}^{1/2}\sin\varphi_{i}.$
Thus the equations of motion take the form
$\begin{cases}\ddot{\varphi_{i}}=\Omega_{i}^{-1/2}\sum_{j=1,j\neq
i}^{N}\frac{m_{j}\Omega_{j}^{1/2}\sin(\varphi_{j}-\varphi_{i})}{\rho_{ij}^{3}(1-\frac{\kappa\rho_{ij}^{2}}{4})^{3/2}}-\frac{\dot{\varphi_{i}}\dot{\Omega_{i}}}{\Omega_{i}}\\\
\ddot{\omega_{i}}=\Omega_{i}^{-1/2}\sum_{j=1,j\neq
i}^{N}\frac{m_{j}\Big{[}\omega_{j}+\omega_{i}+\frac{\kappa\rho_{ij}^{2}}{2}(\omega_{i}+\kappa^{-1/2})\Big{]}}{\rho_{ij}^{3}\Big{(}1-\frac{\kappa\rho_{ij}^{2}}{4}\Big{)}^{3/2}}-(\kappa\omega_{i}+\kappa^{1/2})(\frac{\dot{\Omega_{i}^{2}}}{4\Omega_{i}}+\dot{\varphi_{i}}^{2}\Omega_{i}+\dot{\omega_{i}^{2}}),\end{cases}$
where
$\dot{\Omega}_{i}=-2\kappa^{-1/2}\dot{\omega_{i}}(\kappa^{1/2}\omega_{i}+1)$
$\rho_{ij}^{2}=\Omega_{i}+\Omega_{j}-2\Omega_{i}^{1/2}\Omega_{j}^{1/2}\cos(\varphi_{i}-\varphi_{j})+(\omega_{i}-\omega_{j})^{2},\hskip
5.69046pti,j=1,2,3,4,\hskip 5.69046pti\neq j.$
## 4. Relative equilibria on the equator
If we restrict the motion of the four bodies to the equator of
$\mathbb{S}_{\kappa}^{2}$, then
$\omega_{i}=-\kappa^{-1/2},\hskip 8.5359pt\dot{\omega_{i}}=0,\hskip
8.5359pt\Omega_{i}=\kappa^{-1},\ i=1,2,3,4,$
and the equations of motion take the form
$\ddot{\varphi}_{i}=\kappa^{3/2}\sum_{j=1,j\neq
i}^{4}\frac{m_{j}\sin(\varphi_{j}-\varphi_{i})}{|\sin(\varphi_{j}-\varphi_{i})|^{3}},\
\ i=1,2,3,4.$
For the relative equilibria, the angular velocity is the same constant for all
masses, so we denote this velocity by $\alpha\neq 0$ and take
$\varphi_{1}=\alpha t+a_{1},\hskip 8.5359pt\varphi_{2}=\alpha t+a_{2},\hskip
8.5359pt\varphi_{3}=\alpha t+a_{3},\hskip 8.5359pt\varphi_{4}=\alpha t+a_{4},$
where $a_{1},a_{2},a_{3},a_{4}$ are real constants, so
$\ddot{\varphi}_{i}=0,\ i=1,2,3,4.$
Using the notation
$s_{1}:=\frac{\kappa^{3/2}\sin(\varphi_{1}-\varphi_{2})}{|\sin(\varphi_{1}-\varphi_{2})|^{3}},\hskip
8.5359pts_{2}:=\frac{\kappa^{3/2}\sin(\varphi_{2}-\varphi_{3})}{|\sin(\varphi_{2}-\varphi_{3})|^{3}},\hskip
8.5359pts_{3}:=\frac{\kappa^{3/2}\sin(\varphi_{3}-\varphi_{1})}{|\sin(\varphi_{3}-\varphi_{1})|^{3}},$
$s_{4}:=\frac{\kappa^{3/2}\sin(\varphi_{4}-\varphi_{1})}{|\sin(\varphi_{4}-\varphi_{1})|^{3}},\hskip
8.5359pts_{5}:=\frac{\kappa^{3/2}\sin(\varphi_{2}-\varphi_{4})}{|\sin(\varphi_{2}-\varphi_{4})|^{3}},\hskip
8.5359pts_{6}:=\frac{\kappa^{3/2}\sin(\varphi_{3}-\varphi_{4})}{|\sin(\varphi_{3}-\varphi_{4})|^{3}},$
we obtain from the equations of motion that
$\begin{cases}-m_{2}s_{1}+m_{3}s_{3}+m_{4}s_{4}=0\cr
m_{1}s_{1}-m_{3}s_{2}-m_{4}s_{5}=0\cr-m_{1}s_{3}+m_{2}s_{2}-m_{4}s_{6}=0\cr-
m_{1}s_{4}+m_{2}s_{5}+m_{3}s_{6}=0.\end{cases}$
For this linear system to admit solutions for the masses other than
$m_{1}=m_{2}=m_{3}=m_{4}=0$, the determinant of the above system must vanish,
which is equivalent to
$s_{1}s_{6}+s_{3}s_{5}=s_{2}s_{4}.$
We have thus proved the following result.
###### Theorem 1.
A necessary condition for a quadrilateral inscribed in the equator of
$\mathbb{S}_{\kappa}^{2}$, with the four masses $m_{1},m_{2},m_{3},m_{4}>0$ at
its vertices, to form a relative equilibrium is that
$s_{1}s_{6}+s_{3}s_{5}=s_{2}s_{4}.$
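As a quick numerical sanity check of this criterion (our own illustration, with arbitrary angles and $\kappa=1$), note that the coefficient matrix of the linear system in $(m_{1},m_{2},m_{3},m_{4})$ is antisymmetric, so its determinant equals the square of the Pfaffian $s_{1}s_{6}+s_{3}s_{5}-s_{2}s_{4}$, which is why the vanishing of the determinant reduces to the condition of Theorem 1:

```python
import numpy as np

rng = np.random.default_rng(0)
kappa = 1.0
phi = rng.uniform(0.0, 2 * np.pi, size=4)   # arbitrary test angles

def s(a, b):
    d = np.sin(a - b)
    return kappa**1.5 * d / abs(d)**3

s1, s2, s3 = s(phi[0], phi[1]), s(phi[1], phi[2]), s(phi[2], phi[0])
s4, s5, s6 = s(phi[3], phi[0]), s(phi[1], phi[3]), s(phi[2], phi[3])

# Coefficient matrix of the system in (m1, m2, m3, m4)
A = np.array([[0.0, -s1,  s3,  s4],
              [ s1, 0.0, -s2, -s5],
              [-s3,  s2, 0.0, -s6],
              [-s4,  s5,  s6, 0.0]])
pf = s1 * s6 + s3 * s5 - s2 * s4             # Pfaffian of A
assert np.isclose(np.linalg.det(A), pf**2)   # det A = Pf(A)^2
```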
## 5. Equivalent equations of motion
Let us now introduce some equivalent equations of motion that are suitable for
the kind of solutions we are seeking. First, rewriting the above constraints
as
$\kappa(x_{i}^{2}+y_{i}^{2})+(|\kappa|^{1/2}z_{i}+1)^{2}=1,$
and solving explicitly for $z_{i}$, we obtain
$z_{i}=|\kappa|^{-1/2}[\sqrt{1-\kappa(x_{i}^{2}+y_{i}^{2})}-1].$
The idea here is to eliminate the four equations involving
$z_{1},z_{2},z_{3},z_{4}$, but they still appear in the terms $r_{ij}^{2}$ in
the form $\sigma(z_{i}-z_{j})^{2}$ as
$\sigma(z_{i}-z_{j})^{2}=\frac{\kappa(x_{i}^{2}+y_{i}^{2}-x_{j}^{2}-y_{j}^{2})^{2}}{\left[\sqrt{1-\kappa(x_{i}^{2}+y_{i}^{2})}+\sqrt{1-\kappa(x_{j}^{2}+y_{j}^{2})}\right]^{2}}.$
The case of physical interest is when $\kappa$ is not far from zero; under
this assumption the above expression exists (the square roots remain real)
even for $\kappa>0$. Then the
equations of motion become
$\begin{cases}\ddot{x_{i}}=\sum_{j=1,j\neq
i}^{N}\frac{m_{j}\Big{[}{x}_{j}-\Big{(}1-\frac{\kappa\rho_{ij}^{2}}{2}\Big{)}{x}_{i}\Big{]}}{\Big{(}1-\frac{\kappa\rho_{ij}^{2}}{4}\Big{)}^{3/2}\rho_{ij}^{3}}-\kappa(\dot{x_{i}}^{2}+\dot{y_{i}}^{2}+\kappa
B_{i})x_{i}\\\ \ddot{y_{i}}=\sum_{j=1,j\neq
i}^{N}\frac{m_{j}\Big{[}{y}_{j}-\Big{(}1-\frac{\kappa\rho_{ij}^{2}}{2}\Big{)}{y}_{i}\Big{]}}{\Big{(}1-\frac{\kappa\rho_{ij}^{2}}{4}\Big{)}^{3/2}\rho_{ij}^{3}}-\kappa(\dot{x_{i}}^{2}+\dot{y_{i}}^{2}+\kappa
B_{i})y_{i},\end{cases}$
where
$\rho_{ij}^{2}=(x_{i}-x_{j})^{2}+(y_{i}-y_{j})^{2}+\frac{\kappa(A_{i}-A_{j})^{2}}{(\sqrt{1-\kappa
A_{i}}+\sqrt{1-\kappa A_{j}})^{2}},$ $A_{i}=x_{i}^{2}+y_{i}^{2},$
$B_{i}=\frac{(x_{i}\dot{x_{i}}+y_{i}\dot{y_{i}})^{2}}{1-\kappa(x_{i}^{2}+y_{i}^{2})},\hskip
11.38092pti=1,2,3,4.$
It is obvious that for $\kappa=0$ we recover the classical Newtonian equations
of motion of the planar 4-body problem.
## 6. The case of two negligible masses
We now consider the case when two out of the four given masses are negligible,
$m_{3}=m_{4}=0$. Then the equations of motion become
$\begin{cases}\ddot{x}_{1}=\frac{m_{2}\Big{[}{x}_{2}-\Big{(}1-\frac{\kappa\rho_{12}^{2}}{2}\Big{)}{x}_{1}\Big{]}}{\Big{(}1-\frac{\kappa\rho_{12}^{2}}{4}\Big{)}^{3/2}\rho_{12}^{3}}-\kappa(\dot{x_{1}}^{2}+\dot{y_{1}}^{2}+\kappa
B_{1})x_{1}\\\
\ddot{y}_{1}=\frac{m_{2}\Big{[}{y}_{2}-\Big{(}1-\frac{\kappa\rho_{12}^{2}}{2}\Big{)}{y}_{1}\Big{]}}{\Big{(}1-\frac{\kappa\rho_{12}^{2}}{4}\Big{)}^{3/2}\rho_{12}^{3}}-\kappa(\dot{x_{1}}^{2}+\dot{y_{1}}^{2}+\kappa
B_{1})y_{1}\\\ \end{cases}$
$\begin{cases}\ddot{x}_{2}=\frac{m_{1}\Big{[}{x}_{1}-\Big{(}1-\frac{\kappa\rho_{12}^{2}}{2}\Big{)}{x}_{2}\Big{]}}{\Big{(}1-\frac{\kappa\rho_{12}^{2}}{4}\Big{)}^{3/2}\rho_{12}^{3}}-\kappa(\dot{x_{2}}^{2}+\dot{y_{2}}^{2}+\kappa
B_{2})x_{2}\\\
\ddot{y}_{2}=\frac{m_{1}\Big{[}{y}_{1}-\Big{(}1-\frac{\kappa\rho_{12}^{2}}{2}\Big{)}{y}_{2}\Big{]}}{\Big{(}1-\frac{\kappa\rho_{12}^{2}}{4}\Big{)}^{3/2}\rho_{12}^{3}}-\kappa(\dot{x_{2}}^{2}+\dot{y_{2}}^{2}+\kappa
B_{2})y_{2}\\\ \end{cases}$
$\begin{cases}\ddot{x}_{3}=\frac{m_{1}\Big{[}{x}_{1}-\Big{(}1-\frac{\kappa\rho_{13}^{2}}{2}\Big{)}{x}_{3}\Big{]}}{\Big{(}1-\frac{\kappa\rho_{13}^{2}}{4}\Big{)}^{3/2}\rho_{13}^{3}}+\frac{m_{2}\Big{[}{x}_{2}-\Big{(}1-\frac{\kappa\rho_{32}^{2}}{2}\Big{)}{x}_{3}\Big{]}}{\Big{(}1-\frac{\kappa\rho_{23}^{2}}{4}\Big{)}^{3/2}\rho_{23}^{3}}-\kappa(\dot{x_{3}}^{2}+\dot{y_{3}}^{2}+\kappa
B_{3})x_{3}\\\
\ddot{y}_{3}=\frac{m_{1}\Big{[}{y}_{1}-\Big{(}1-\frac{\kappa\rho_{13}^{2}}{2}\Big{)}{y}_{3}\Big{]}}{\Big{(}1-\frac{\kappa\rho_{13}^{2}}{4}\Big{)}^{3/2}\rho_{13}^{3}}+\frac{m_{2}\Big{[}{y}_{2}-\Big{(}1-\frac{\kappa\rho_{32}^{2}}{2}\Big{)}{y}_{3}\Big{]}}{\Big{(}1-\frac{\kappa\rho_{32}^{2}}{4}\Big{)}^{3/2}\rho_{32}^{3}}-\kappa(\dot{x_{3}}^{2}+\dot{y_{3}}^{2}+\kappa
B_{3})y_{3}\\\ \end{cases}$
$\begin{cases}\ddot{x}_{4}=\frac{m_{1}\Big{[}{x}_{1}-\Big{(}1-\frac{\kappa\rho_{14}^{2}}{2}\Big{)}{x}_{4}\Big{]}}{\Big{(}1-\frac{\kappa\rho_{14}^{2}}{4}\Big{)}^{3/2}\rho_{14}^{3}}+\frac{m_{2}\Big{[}{x}_{2}-\Big{(}1-\frac{\kappa\rho_{42}^{2}}{2}\Big{)}{x}_{4}\Big{]}}{\Big{(}1-\frac{\kappa\rho_{42}^{2}}{4}\Big{)}^{3/2}\rho_{42}^{3}}-\kappa(\dot{x_{4}}^{2}+\dot{y_{4}}^{2}+\kappa
B_{4})x_{4}\\\
\ddot{y}_{4}=\frac{m_{1}\Big{[}{y}_{1}-\Big{(}1-\frac{\kappa\rho_{14}^{2}}{2}\Big{)}{y}_{4}\Big{]}}{\Big{(}1-\frac{\kappa\rho_{14}^{2}}{4}\Big{)}^{3/2}\rho_{14}^{3}}+\frac{m_{2}\Big{[}{y}_{2}-\Big{(}1-\frac{\kappa\rho_{42}^{2}}{2}\Big{)}{y}_{4}\Big{]}}{\Big{(}1-\frac{\kappa\rho_{42}^{2}}{4}\Big{)}^{3/2}\rho_{42}^{3}}-\kappa(\dot{x_{4}}^{2}+\dot{y_{4}}^{2}+\kappa
B_{4})y_{4},\end{cases}$
where $\rho_{ij}^{2}=\rho_{ji}^{2},\ i\neq j$,
$\rho_{ij}^{2}=(x_{i}-x_{j})^{2}+(y_{i}-y_{j})^{2}+\frac{\kappa(x_{i}^{2}+y_{i}^{2}-x_{j}^{2}-y_{j}^{2})^{2}}{[\sqrt{1-\kappa(x_{i}^{2}+y_{i}^{2})}+\sqrt{1-\kappa(x_{j}^{2}+y_{j}^{2})}]^{2}}.$
We can now show that when $m_{1}=m_{2}=:m>0$ and $m_{3}=m_{4}=0$, square-like
relative equilibria, i.e. equilateral equiangular quadrilaterals, always exist
on $\mathbb{S}_{\kappa}^{2}$, but not on $\mathbb{H}_{\kappa}^{2}$.
###### Theorem 2.
In the curved 4-body problem, assume that $m_{1}=m_{2}=:m>0$ and
$m_{3}=m_{4}=0$. Then, in $\mathbb{S}_{\kappa}^{2}$, there is a circle of
radius $r<\kappa^{-1/2}$, parallel with the $xy$-plane, such that a square
configuration inscribed in this circle, with $m_{1},m_{2}$ at the opposite
ends of one diagonal and $m_{3},m_{4}$ at the opposite ends of the other
diagonal, forms a relative equilibrium. But in $\mathbb{H}_{\kappa}^{2}$,
there is no such solution.
Figure 1. The case of two equal masses and two negligible masses (vertices labeled $m_{1},m_{2},m_{4},m_{3}$).
###### Proof.
We must check the existence of a solution of the form
${\bf q}=(q_{1},q_{2},q_{3},q_{4})\in\mathbb{S}_{\kappa}^{2},\hskip
8.5359pt{\bf q_{i}}=(x_{i},y_{i}),\hskip 5.69046pti=1,2,3,4.$
$x_{1}=r\cos\alpha t,\hskip 14.22636pty_{1}=r\sin\alpha t,\hskip 25.6073pt$
$x_{2}=-r\cos\alpha t,\hskip 14.22636pty_{2}=-r\sin\alpha t,\hskip 14.22636pt$
$x_{3}=r\cos(\alpha t+\pi/2)=-r\sin\alpha t,\hskip
14.22636pty_{3}=r\sin(\alpha t+\pi/2)=r\cos\alpha t,\hskip 14.22636pt$
$x_{4}=-r\cos(\alpha t+\pi/2)=r\sin\alpha t,\hskip
22.76228pty_{4}=-r\sin(\alpha t+\pi/2)=-r\cos\alpha t,\hskip 14.22636pt$
where
$\hskip 14.22636ptx_{i}^{2}+y_{i}^{2}=r^{2},\ \
\rho^{2}=\rho_{13}^{2}=\rho_{14}^{2}=\rho_{23}^{2}=\rho_{24}^{2}=2r^{2},\ \
\rho_{12}^{2}=\rho_{34}^{2}=4r^{2}.$
Substituting these expressions into the system, the first four equations lead
us to
$\alpha^{2}=\frac{m}{4r^{3}(1-\kappa r^{2})^{3/2}},$
whereas the last four equations yield
$\alpha^{2}=\frac{2m(1-\frac{\kappa\rho^{2}}{2})}{\rho^{3}(1-\frac{\kappa\rho^{2}}{4})^{3/2}(1-\kappa
r^{2})}.$
So, to have a solution, the equation
$\frac{m}{4r^{3}(1-\kappa
r^{2})^{3/2}}=\frac{2m(1-\frac{\kappa\rho^{2}}{2})}{\rho^{3}(1-\frac{\kappa\rho^{2}}{4})^{3/2}(1-\kappa
r^{2})}$
must be satisfied. This equation is equivalent to
$\frac{1}{8r^{3}(1-\kappa r^{2})^{3/2}}=\frac{1}{2\sqrt{2}r^{3}(1-\frac{\kappa
r^{2}}{2})^{3/2}},$
which leads to
$3\kappa r^{2}=2.$
Obviously, in the case of $\mathbb{H}_{\kappa}^{2}$, we have $\kappa<0$, so
this equation has no solutions. For $\mathbb{S}_{\kappa}^{2}$, it leads to
$r=\sqrt{2/3}\kappa^{-1/2}.$
Since $r<\kappa^{-1/2}$, such a solution always exists in
$\mathbb{S}_{\kappa}^{2}$. ∎
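A short numerical check of this computation (our own illustration, in units with $\kappa=1$ and $m=1$) confirms that the two expressions for $\alpha^{2}$ agree precisely at $r=\sqrt{2/3}$:

```python
import numpy as np

kappa, m = 1.0, 1.0
r = np.sqrt(2.0 / 3.0)      # the root of 3*kappa*r^2 = 2
rho2 = 2.0 * r**2           # rho^2 = 2 r^2 for the square's sides

# alpha^2 from the first four equations of motion
alpha2_first = m / (4 * r**3 * (1 - kappa * r**2)**1.5)
# alpha^2 from the last four equations of motion
alpha2_last = (2 * m * (1 - kappa * rho2 / 2)
               / (rho2**1.5 * (1 - kappa * rho2 / 4)**1.5 * (1 - kappa * r**2)))
assert np.isclose(alpha2_first, alpha2_last)   # both ~ 2.386
```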
Figure 2. The case of two equal masses and two negligible masses (vertices labeled $m_{1},m_{3},m_{2},m_{4}$).
## 7. The case of one negligible mass
Let $m_{1},m_{2},m_{3}>0$ and assume that $m_{4}=0$. Then the equations of
motion take the form
$\begin{cases}\ddot{x_{1}}=\frac{m_{2}\Big{[}{x}_{2}-\Big{(}1-\frac{\kappa\rho_{12}^{2}}{2}\Big{)}{x}_{1}\Big{]}}{\Big{(}1-\frac{\kappa\rho_{12}^{2}}{4}\Big{)}^{3/2}\rho_{12}^{3}}+\frac{m_{3}\Big{[}{x}_{3}-\Big{(}1-\frac{\kappa\rho_{31}^{2}}{2}\Big{)}{x}_{1}\Big{]}}{\Big{(}1-\frac{\kappa\rho_{31}^{2}}{4}\Big{)}^{3/2}\rho_{31}^{3}}-\kappa(\dot{x_{1}}^{2}+\dot{y_{1}}^{2}+\kappa
B_{1})x_{1}\\\
\ddot{y_{1}}=\frac{m_{2}\Big{[}{y}_{2}-\Big{(}1-\frac{\kappa\rho_{12}^{2}}{2}\Big{)}{y}_{1}\Big{]}}{\Big{(}1-\frac{\kappa\rho_{12}^{2}}{4}\Big{)}^{3/2}\rho_{12}^{3}}+\frac{m_{3}\Big{[}{y}_{3}-\Big{(}1-\frac{\kappa\rho_{31}^{2}}{2}\Big{)}{y}_{1}\Big{]}}{\Big{(}1-\frac{\kappa\rho_{31}^{2}}{4}\Big{)}^{3/2}\rho_{31}^{3}}-\kappa(\dot{x_{1}}^{2}+\dot{y_{1}}^{2}+\kappa
B_{1})y_{1}\\\ \end{cases}$
$\begin{cases}\ddot{x_{2}}=\frac{m_{1}\Big{[}{x}_{1}-\Big{(}1-\frac{\kappa\rho_{12}^{2}}{2}\Big{)}{x}_{2}\Big{]}}{\Big{(}1-\frac{\kappa\rho_{12}^{2}}{4}\Big{)}^{3/2}\rho_{12}^{3}}+\frac{m_{3}\Big{[}{x}_{3}-\Big{(}1-\frac{\kappa\rho_{32}^{2}}{2}\Big{)}{x}_{2}\Big{]}}{\Big{(}1-\frac{\kappa\rho_{32}^{2}}{4}\Big{)}^{3/2}\rho_{32}^{3}}-\kappa(\dot{x_{2}}^{2}+\dot{y_{2}}^{2}+\kappa
B_{2})x_{2}\\\
\ddot{y_{2}}=\frac{m_{1}\Big{[}{y}_{1}-\Big{(}1-\frac{\kappa\rho_{12}^{2}}{2}\Big{)}{y}_{2}\Big{]}}{\Big{(}1-\frac{\kappa\rho_{12}^{2}}{4}\Big{)}^{3/2}\rho_{12}^{3}}+\frac{m_{3}\Big{[}{y}_{3}-\Big{(}1-\frac{\kappa\rho_{32}^{2}}{2}\Big{)}{y}_{2}\Big{]}}{\Big{(}1-\frac{\kappa\rho_{32}^{2}}{4}\Big{)}^{3/2}\rho_{32}^{3}}-\kappa(\dot{x_{2}}^{2}+\dot{y_{2}}^{2}+\kappa
B_{2})y_{2}\end{cases}$
$\begin{cases}\ddot{x_{3}}=\frac{m_{1}\Big{[}{x}_{1}-\Big{(}1-\frac{\kappa\rho_{13}^{2}}{2}\Big{)}{x}_{3}\Big{]}}{\Big{(}1-\frac{\kappa\rho_{13}^{2}}{4}\Big{)}^{3/2}\rho_{13}^{3}}+\frac{m_{2}\Big{[}{x}_{2}-\Big{(}1-\frac{\kappa\rho_{32}^{2}}{2}\Big{)}{x}_{3}\Big{]}}{\Big{(}1-\frac{\kappa\rho_{32}^{2}}{4}\Big{)}^{3/2}\rho_{32}^{3}}-\kappa(\dot{x_{3}}^{2}+\dot{y_{3}}^{2}+\kappa
B_{3})x_{3}\\\
\ddot{y_{3}}=\frac{m_{1}\Big{[}{y}_{1}-\Big{(}1-\frac{\kappa\rho_{13}^{2}}{2}\Big{)}{y}_{3}\Big{]}}{\Big{(}1-\frac{\kappa\rho_{13}^{2}}{4}\Big{)}^{3/2}\rho_{13}^{3}}+\frac{m_{2}\Big{[}{y}_{2}-\Big{(}1-\frac{\kappa\rho_{32}^{2}}{2}\Big{)}{y}_{3}\Big{]}}{\Big{(}1-\frac{\kappa\rho_{32}^{2}}{4}\Big{)}^{3/2}\rho_{32}^{3}}-\kappa(\dot{x_{3}}^{2}+\dot{y_{3}}^{2}+\kappa
B_{3})y_{3}\end{cases}$
$\begin{cases}\ddot{x_{4}}=\frac{m_{1}\Big{[}{x}_{1}-\Big{(}1-\frac{\kappa\rho_{14}^{2}}{2}\Big{)}{x}_{4}\Big{]}}{\Big{(}1-\frac{\kappa\rho_{14}^{2}}{4}\Big{)}^{3/2}\rho_{14}^{3}}+\frac{m_{2}\Big{[}{x}_{2}-\Big{(}1-\frac{\kappa\rho_{42}^{2}}{2}\Big{)}{x}_{4}\Big{]}}{\Big{(}1-\frac{\kappa\rho_{42}^{2}}{4}\Big{)}^{3/2}\rho_{42}^{3}}+\frac{m_{3}\Big{[}{x}_{3}-\Big{(}1-\frac{\kappa\rho_{43}^{2}}{2}\Big{)}{x}_{4}\Big{]}}{\Big{(}1-\frac{\kappa\rho_{43}^{2}}{4}\Big{)}^{3/2}\rho_{43}^{3}}\\\
\hfill-\kappa(\dot{x_{4}}^{2}+\dot{y_{4}}^{2}+\kappa B_{4})x_{4}\\\
\ddot{y_{4}}=\frac{m_{1}\Big{[}{y}_{1}-\Big{(}1-\frac{\kappa\rho_{14}^{2}}{2}\Big{)}{y}_{4}\Big{]}}{\Big{(}1-\frac{\kappa\rho_{14}^{2}}{4}\Big{)}^{3/2}\rho_{14}^{3}}+\frac{m_{2}\Big{[}{y}_{2}-\Big{(}1-\frac{\kappa\rho_{42}^{2}}{2}\Big{)}{y}_{4}\Big{]}}{\Big{(}1-\frac{\kappa\rho_{42}^{2}}{4}\Big{)}^{3/2}\rho_{42}^{3}}+\frac{m_{3}\Big{[}{y}_{3}-\Big{(}1-\frac{\kappa\rho_{43}^{2}}{2}\Big{)}{y}_{4}\Big{]}}{\Big{(}1-\frac{\kappa\rho_{43}^{2}}{4}\Big{)}^{3/2}\rho_{43}^{3}}\\\
\hfill-\kappa(\dot{x_{4}}^{2}+\dot{y_{4}}^{2}+\kappa B_{4})y_{4}.\end{cases}$
We will next show that if the non-negligible masses are equal, then there
exist some kite-shaped relative equilibria.
###### Theorem 3.
Consider the curved 4-body problem with masses $m_{1}=m_{2}=m_{3}:=m>0$ and
$m_{4}=0$. Then, in $\mathbb{S}_{\kappa}^{2}$, there exists at least one kite-
shaped relative equilibrium for which the equal masses lie at the vertices of
an equilateral triangle, whereas the negligible mass is at the intersection of
the extension of one height of the triangle with the circle on which all the
bodies move. In $\mathbb{H}_{\kappa}^{2}$, however, there are no such kite-
shaped relative equilibria.
Figure 3. A kite configuration of three equal masses and one negligible mass.
###### Proof.
We will check a solution of the form
$x_{1}=r\cos\alpha t,\hskip 14.22636pty_{1}=r\sin\alpha t,\hskip 28.45274pt$
$x_{2}=r\cos\Big{(}\alpha t+\frac{2\pi}{3}\Big{)},\hskip
14.22636pty_{2}=r\sin\Big{(}\alpha t+\frac{2\pi}{3}\Big{)}$
$x_{3}=r\cos\Big{(}\alpha t+\frac{4\pi}{3}\Big{)},\hskip
14.22636pty_{3}=r\sin\Big{(}\alpha t+\frac{4\pi}{3}\Big{)},\hskip 14.22636pt$
$x_{4}=r\cos\Big{(}\alpha t-\frac{\pi}{3}\Big{)},\hskip
14.22636pty_{4}=r\sin\Big{(}\alpha t-\frac{\pi}{3}\Big{)},\hskip 14.22636pt$
where
$\rho_{12}^{2}=\rho_{13}^{2}=\rho_{23}^{2}=3r^{2},\ \
\rho_{43}^{2}=\rho_{41}^{2}=r^{2},\ \ \rho_{24}^{2}=4r^{2}.$
Substituting these expressions into the above system, we are led to the
conclusion that the following two equations must be satisfied,
$\alpha^{2}=\frac{m}{\sqrt{3}r^{3}(1-\frac{3\kappa r^{2}}{4})^{3/2}},$
$\alpha^{2}=\frac{m}{4r^{3}(1-\kappa
r^{2})^{3/2}}+\frac{m}{r^{3}(1-\frac{\kappa r^{2}}{4})^{3/2}}.$
Comparing these equations we obtain the condition for the existence of the
kite-shaped relative equilibria,
$\frac{1}{\sqrt{3}(1-\frac{3\kappa r^{2}}{4})^{3/2}}=\frac{1}{4(1-\kappa
r^{2})^{3/2}}+\frac{1}{(1-\frac{\kappa r^{2}}{4})^{3/2}}.$
Straightforward computations show that $r$ is a solution of this equation if
it is a root of the polynomial
$P(r)=a_{24}r^{24}+a_{22}r^{22}+a_{20}r^{20}+a_{18}r^{18}+a_{16}r^{16}+a_{14}r^{14}+a_{12}r^{12}+$
$a_{10}r^{10}+a_{8}r^{8}+a_{6}r^{6}+a_{4}r^{4}+a_{2}r^{2}+a_{0},$ where
$a_{24}=\frac{6697290145}{16777216}\kappa^{12},\ \
a_{22}=-\frac{2884257825}{524288}\kappa^{11},\ \
a_{20}=\frac{18063189465}{524288}\kappa^{10},$
$a_{18}=-\frac{4241985935}{32768}\kappa^{9},\ \ \
a_{16}=\frac{21267471735}{65536}\kappa^{8},$
$a_{14}=-\frac{584429805}{1024}\kappa^{7},\ \ \
a_{12}=\frac{737853351}{1024}\kappa^{6},\ \ \
a_{10}=-\frac{41995431}{64}\kappa^{5},$
$a_{8}=\frac{109080063}{256}\kappa^{4},\ \ \
a_{6}=-\frac{1530101}{8}\kappa^{3},$ $a_{4}=\frac{446217}{8}\kappa^{2},\ \ \
a_{2}=-9318\kappa,\ \ \ a_{0}=649$
that belongs to the interval $r\in(0,\kappa^{-1/2})$ for
$\mathbb{S}_{\kappa}^{2}$, but needs only to be positive for
$\mathbb{H}_{\kappa}^{2}$. To find out if we have such a root, we make the
substitution $x=r^{2}$, and obtain the polynomial
$Q(x)=a_{24}x^{12}+a_{22}x^{11}+a_{20}x^{10}+a_{18}x^{9}+a_{16}x^{8}+a_{14}x^{7}+a_{12}x^{6}+$
$a_{10}x^{5}+a_{8}x^{4}+a_{6}x^{3}+a_{4}x^{2}+a_{2}x+a_{0}.$
By Descartes’s rule of signs the number of positive roots depends on the
number of changes of sign of the coefficients, which in turn depends on the
sign of $\kappa$. So let us discuss the two cases separately.
In $\mathbb{S}_{\kappa}^{2}$, i.e. for $\kappa>0$, there are twelve changes of
sign, so $Q$ can have twelve, ten, eight, six, four, two, or zero positive
roots, so this does not guarantee the existence of a positive root. However,
we can notice that $Q(\frac{\kappa^{-1}}{2})=-2.4959<0$ and $Q(0)=649>0$, so a
root must exist for $x\in(0,\frac{\kappa^{-1}}{2})\subset(0,\kappa^{-1})$,
i.e. for $r\in(0,\kappa^{-1/2})$, a remark that proves the existence of at
least one kite-shaped relative equilibrium.
In $\mathbb{H}_{\kappa}^{2}$, i.e. for $\kappa<0$, we seek a positive root of
$Q$. But for $\kappa<0$, there is no sign change in $Q(x)$, so the polynomial
has no positive roots. Therefore there are no kite solutions in
$\mathbb{H}_{\kappa}^{2}$. This remark completes the proof. ∎
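The sign change used in the proof is easy to reproduce numerically (our own illustration, with $\kappa=1$ so that the relevant interval for $x$ is $(0,\kappa^{-1})=(0,1)$):

```python
from fractions import Fraction as F

# Coefficients a24, a22, ..., a0 of Q with kappa = 1
coeffs = [F(6697290145, 16777216), F(-2884257825, 524288),
          F(18063189465, 524288), F(-4241985935, 32768),
          F(21267471735, 65536), F(-584429805, 1024),
          F(737853351, 1024), F(-41995431, 64),
          F(109080063, 256), F(-1530101, 8),
          F(446217, 8), F(-9318), F(649)]

def Q(x):
    return sum(float(c) * x**k for c, k in zip(coeffs, range(12, -1, -1)))

print(Q(0.0))   # 649.0 > 0
print(Q(0.5))   # about -2.496 < 0, so Q has a root in (0, 1/2)
```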
Acknowledgment. Florin Diacu did most of the work on this paper while visiting
the Yangtze Center of Mathematics at Sichuan University as Distinguished
Foreign Professor in April-May 2017. He was also supported in part by a grant
from the Yale-NUS College at the National University of Singapore and an NSERC
of Canada Discovery Grant. Sawsan Alhowaity was funded by a scholarship from
the University of Shaqra, Saudi Arabia, towards the completion of her
doctoral degree at the University of Victoria in Canada.
## References
* [1] J. Bertrand, Théorème relatif au mouvement d’un point attiré vers un center fixe, C. R. Acad. Sci. 77 (1873), 849–853.
* [2] W. Bolyai and J. Bolyai, Geometrische Untersuchungen, Teubner, Leipzig-Berlin, 1913.
* [3] F. Diacu, On the singularities of the curved $N$-body problem, Trans. Amer. Math. Soc. 363, 4 (2011), 2249–2264.
* [4] F. Diacu, Polygonal homographic orbits of the curved 3-body problem, Trans. Amer. Math. Soc. 364 (2012), 2783–2802.
* [5] F. Diacu, Relative equilibria of the curved $N$-body problem, Atlantis Studies in Dynamical Systems, vol. 1, Atlantis Press, Amsterdam, 2012.
* [6] F. Diacu, Relative equilibria of the 3-dimensional curved $n$-body problem, Memoirs Amer. Math. Soc. 228, 1071 (2013).
* [7] F. Diacu, The curved $N$-body problem: risks and rewards, Math. Intelligencer 35, 3 (2013), 24–33.
* [8] F. Diacu, The classical $N$-body problem in the context of curved space, Canad. J. Math. 69, 4 (2017), 790–806.
* [9] F. Diacu, Bifurcations of the Lagrangian orbits from the classical to the curved 3-body problem, J. Math. Phys. 57, 11 (2016), DOI: 10.1063/1.4967443.
* [10] F. Diacu and S. Kordlou, Rotopulsators of the curved $N$-body problem, J. Differential Equations 255 (2013) 2709–2750.
* [11] F. Diacu, R. Martínez, E. Pérez-Chavela, and C. Simó, On the stability of tetrahedral relative equilibria in the positively curved 4-body problem, Physica D 256–7 (2013), 21–35.
* [12] F. Diacu and E. Pérez-Chavela, Homographic solutions of the curved $3$-body problem, J. Differential Equations 250 (2011), 340–366.
* [13] F. Diacu, E. Pérez-Chavela, and M. Santoprete, Saari’s conjecture for the collinear $N$-body problem, Trans. Amer. Math. Soc. 357, 10 (2005), 4215–4223.
* [14] F. Diacu, E. Pérez-Chavela, and M. Santoprete, The $N$-body problem in spaces of constant curvature. Part I: Relative equilibria, J. Nonlinear Sci. 22, 2 (2012), 247–266, DOI: 10.1007/s00332-011-9116-z.
* [15] F. Diacu, E. Pérez-Chavela, and M. Santoprete, The $N$-body problem in spaces of constant curvature. Part II: Singularities, J. Nonlinear Sci. 22, 2 (2012), 267–275, DOI: 10.1007/s00332-011-9117-y.
* [16] F. Diacu, E. Pérez-Chavela, and J. Guadalupe Reyes Victoria, An intrinsic approach in the curved $N$-body problem. The negative curvature case, J. Differential Equations 252 (2012), 4529–4562.
* [17] F. Diacu and S. Popa, All Lagrangian relative equilibria have equal masses, J. Math. Phys. 55, 112701 (2014).
* [18] F. Diacu, J.M. Sánchez-Cerritos, and S. Zhu, Stability of fixed points and associated relative equilibria of the 3-body problem on $\mathbb{S}^{1}$ and $\mathbb{S}^{2}$, J. Dyn. Diff. Equat. (2016). DOI:10.1007/s10884-016-9550-6.
* [19] F. Diacu and B. Thorn, Rectangular orbits of the curved 4-body problem, Proc. Amer. Math. Soc. 143 (2015), 1583–1593.
* [20] L.C. García-Naranjo, J.C. Marrero, E. Pérez-Chavela, M. Rodríguez-Olmos, Classification and stability of relative equilibria for the two-body problem in the hyperbolic space of dimension 2, arXiv:1505.01452.
* [21] W. Killing, Die Rechnung in den nichteuklidischen Raumformen, J. Reine Angew. Math. 89 (1880), 265–287.
* [22] W. Killing, Die Mechanik in den nichteuklidischen Raumformen, J. Reine Angew. Math. 98 (1885), 1–48.
* [23] W. Killing, Die Nicht-Euklidischen Raumformen in Analytischer Behandlung, Teubner, Leipzig, 1885.
* [24] V. V. Kozlov and A. O. Harin, Kepler’s problem in constant curvature spaces, Celestial Mech. Dynam. Astronom. 54 (1992), 393–399.
* [25] H. Kragh, Is space Flat? Nineteenth century astronomy and non-Euclidean geometry, J. Astr. Hist. Heritage 15, 3 (2012), 149–158.
* [26] H. Liebmann, Die Kegelschnitte und die Planetenbewegung im nichteuklidischen Raum, Berichte Königl. Sächsischen Gesell. Wiss., Math. Phys. Klasse 54 (1902), 393–423.
* [27] H. Liebmann, Über die Zentralbewegung in der nichteuklidische Geometrie, Berichte Königl. Sächsischen Gesell. Wiss., Math. Phys. Klasse 55 (1903), 146-153.
* [28] R. Lipschitz, Extension of the planet-problem to a space of $n$ dimensions and constant integral curvature, Quart. J. Pure Appl. Math. 12 (1873), 349–370.
* [29] N. I. Lobachevsky, The new foundations of geometry with full theory of parallels [in Russian], 1835-1838, in Collected Works, vol. 2, GITTL, Moscow, 1949.
* [30] R. Martínez and C. Simó, On the stability of the Lagrangian homographic solutions in a curved three-body problem on $\mathbb{S}^{2}$, Discrete Contin. Dyn. Syst. Ser. A 33 (2013) 1157–1175.
* [31] R. Martínez and C. Simó, Relative equilibria of the restricted 3-body problem in curved spaces, Celestial Mech. Dynam. Astronom. 128, 2–3 (2017), 221–259.
* [32] E. Pérez-Chavela and J.G. Reyes Victoria, An intrinsic approach in the curved $N$-body problem. The positive curvature case, Trans. Amer. Math. Soc. 364, 7 (2012), 3805–3827.
* [33] E. Schering, Die Schwerkraft im Gaussischen Räume, Nachr. Königl. Ges. Wiss. Gött. 15, (1870), 311–321.
* [34] E. Schering, Die Schwerkraft in mehrfach ausgedehnten Gaussischen und Riemmanschen Räumen. Nachr. Königl. Ges. Wiss. Gött. 6, (1873), 149–159
* [35] A.V. Shchepetilov, Nonintegrability of the two-body problem in constant curvature spaces, J. Phys. A: Math. Gen. V. 39 (2006), 5787-5806; corrected version at math.DS/0601382.
* [36] P. Tibboel, Polygonal homographic orbits in spaces of constant curvature, Proc. Amer. Math. Soc. 141 (2013), 1465–1471.
* [37] P. Tibboel, Existence of a class of rotopulsators, J. Math. Anal. Appl. 404 (2013), 185–191.
* [38] P. Tibboel, Existence of a lower bound for the distance between point masses of relative equilibria in spaces of constant curvature, J. Math. Anal. Appl. 416 (2014), 205–211.
* [39] S. Zhu, Eulerian relative equilibria of the curved 3-body problems in $\mathbb{S}^{2}$, Proc. Amer. Math. Soc. 142 (2014), 2837–2848.
# Emotion-guided Cross-domain Fake News Detection using Adversarial Domain
Adaptation
Arjun Choudhry*
Biometric Research Laboratory
Delhi Technological University
New Delhi, India
<EMAIL_ADDRESS>
Inder Khatri*
Biometric Research Laboratory
Delhi Technological University
New Delhi, India
<EMAIL_ADDRESS>
Arkajyoti Chakraborty
Biometric Research Laboratory
Delhi Technological University
New Delhi, India
<EMAIL_ADDRESS>
Dinesh Kumar Vishwakarma
Biometric Research Laboratory
Delhi Technological University
New Delhi, India
<EMAIL_ADDRESS>
Mukesh Prasad
School of Computer Science
University of Technology Sydney
Ultimo, Australia
<EMAIL_ADDRESS>
###### Abstract
Recent works on fake news detection have shown the efficacy of using emotions
as a feature or emotions-based features for improved performance. However, the
impact of these emotion-guided features for fake news detection in cross-
domain settings, where we face the problem of domain shift, is still largely
unexplored. In this work, we evaluate the impact of emotion-guided features
for cross-domain fake news detection, and further propose an emotion-guided,
domain-adaptive approach using adversarial learning. We prove the efficacy of
emotion-guided models in cross-domain settings for various combinations of
source and target sets drawn from the FakeNewsAMT, Celeb, Politifact and
Gossipcop datasets.
*Equal contribution.
Keywords: Fake News Detection $\cdot$ Domain Adaptation $\cdot$ Emotion
Classification $\cdot$ Adversarial Training $\cdot$ Cross-domain Analysis
## 1 Introduction
In recent years, our reliance on social media as a source of information has
increased multi-fold, bringing along an exponential increase in the spread of
_fake news_. To counter this, researchers have proposed various approaches for
fake news detection (Shu et al., 2019; Sheng et al., 2022). However, models
trained on one domain are often brittle and vulnerable to incorrect
predictions for the samples of another domain (Saikh et al., 2019; Pérez-Rosas
et al., 2018). This is primarily due to the shift between the two domains, as
depicted in Figure 1(1). To handle this, some domain-adaptive frameworks
(Zhang et al., 2020; Huang et al., 2021; Li et al., 2021) have been proposed
which help align the source and target domains in the feature space to
ameliorate domain shift across different problems. These frameworks guide the
feature extractors to extract domain-invariant features by aligning the source
and target domains in the feature space, thus generalizing well across
domains. However, due to the absence of labels in the target-domain data, the
adaptation is often prone to negative transfer, which can disturb the class-
wise distribution and affect the discriminability of the final model, as shown
in Figure 1(2).
Some recent studies have observed a correlation between the veracity of a text
and its emotions. There exists a prominent affiliation for certain emotions
with fake news, and for other emotions with real news (Vosoughi et al., 2018),
as illustrated in Figure 1(3). Further, some works have successfully utilized
emotions as features, or emotion-guided features to aid in fake news detection
(Guo et al., 2019; Zhang et al., 2021; Choudhry et al., 2022). However, we
observe that these works only consider the in-domain setting for evaluation,
without analyzing the robustness of these frameworks to domain shift in cross-
domain settings. This is another important direction that needs to be
explored.
Figure 1: (1) Cross-domain texts not aligned. (2) Domain adaptation leads to
some alignment. (3) Emotion-guided classification in one domain. (4) Emotion-
guided domain adaptation leads to improved alignment of the two domains.
In this paper, we study the efficacy of emotion-aided models in capturing
better generalizable features for cross-domain fake news detection. Table 1
shows the improvements observed in various cross-domain settings when our
emotion-guided models were evaluated in cross-domain settings. We observe that
emotion-guided frameworks show improved performance in cross-domain settings,
as compared to their baseline models without the said emotion-aided features,
thus underscoring the generalized feature extraction in emotion-aided models.
We further propose an emotion-guided unsupervised domain adaptation framework,
which utilizes emotion labels in a multi-task adversarial setting for better
feature alignment across domains. The emotion labels for emotion
classification, trained parallel to the fake news detection branch in the
multi-task learning setup, help provide additional supervision for improved
alignment during domain adaptation, mitigating the issue of incorrect
alignment of domains. This is illustrated in Figure 1(4). This leads to
better discriminability. We experimentally prove the efficacy of our approach
across a variety of datasets in cross-domain settings for various combinations
of single-task or multi-task, domain-adaptive or non-adaptive, emotion-guided
or unguided settings on the accuracy of the models.
Our contributions can be summarized as follows:
* We suggest the use of emotion classification as an auxiliary task for improved fake news detection in cross-domain settings, indicating the applicability of emotion-guided features across domains.
* We compare how Ekman’s and Plutchik’s base emotion classes individually affect the performance of our multi-task domain-adaptive framework, and whether there are meaningful differences between them.
* We propose an emotion-guided domain-adaptive framework for fake news detection across domains. We show that domain-adaptive fake news detection models better align the two domains with the help of supervised learning using emotion-aided features.
* We evaluate our approach on a variety of source and target combinations from four datasets. Our results prove the efficacy of our approach.
## 2 Related Works
Several studies over the last few years have explored the correlation of fake
news detection with emotions. K et al. (2020) _emotionized_ text
representations using explicit emotion intensity lexicons. Guo et al. (2019)
utilized the discrepancies between publisher’s emotion and the thread’s
comments’ emotions to detect fake news. However, most of these methods relied
upon some additional inputs during evaluation. Choudhry et al. (2022) proposed
an emotion-aided multi-task learning approach, where emotion classification
was the auxiliary task implicitly aligning fake news features according to
emotion labels.
Inspired by Ganin et al. (2015), Zhang et al. (2020) proposed the first fake
news detection work to tackle domain shifts between different datasets. They
proposed a multi-modal framework with a Gradient Reversal Layer (GRL) to learn
domain-invariant features across different domains and used a joint fake news
detector on the extracted features. Huang et al. (2021) proposed a robust and
generalized fake news detection framework adaptable to a new target domain
using adversarial training to make the model robust to outliers and Maximum
Mean Difference (MMD)-based loss to align the features of source and target.
Li et al. (2021) extended the problem by treating it as a multi-source domain
adaptation task, using the labeled samples from multiple source domains to
improve the performance on unlabeled target domains. They also utilized weak
labels for weak supervision on target samples to improve performance.
However, no previous work has aligned features between different domains using
emotion-guided features and domain adaptation using adversarial training. We
show that applying both of these approaches leads to improved performance due
to better alignment of inter-domain features.
## 3 Proposed Methodology
### 3.1 Datasets, Emotion Annotation & Preprocessing
We use the FakeNewsAMT (Pérez-Rosas et al., 2018), Celeb (Pérez-Rosas et al.,
2018), Politifact (https://www.politifact.com), and Gossipcop
(https://www.gossipcop.com) datasets for cross-domain fake news
detection. FakeNewsAMT is a multi-domain dataset containing samples from
technology, education, business, sports, politics, and entertainment domains.
The Celeb dataset has been derived from the web, and contains news about
celebrities. Politifact is a web-scraped dataset containing political news,
while Gossipcop contains news extracted from the web, manually annotated via
crowd-sourcing and by experts.
Figure 2: Pictorial depiction of our emotion-guided domain-adaptive approach
for cross-domain fake news detection.
We use the Unison model (Colnerič and Demšar, 2020) to annotate all datasets
with the core emotions from Ekman’s (Ekman, 1992) (6 emotions: Joy, Surprise,
Anger, Sadness, Disgust, Fear) and Plutchik’s (Plutchik, 1982) (8 emotions:
Joy, Surprise, Trust, Anger, Anticipation, Sadness, Disgust, Fear) emotion
theories. During preprocessing, we convert text to lowercase, remove
punctuation, and de-contract verb forms (e.g., “I’d” to “I would”).
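A minimal sketch of this preprocessing step follows (our own illustration; the contraction map below is a small hypothetical sample, not the full list used in the paper):

```python
import re

# Hypothetical contraction map mirroring the de-contraction step described above.
CONTRACTIONS = {"i'd": "i would", "won't": "will not", "can't": "cannot",
                "n't": " not", "'re": " are", "'ll": " will", "'ve": " have"}

def preprocess(text: str) -> str:
    text = text.lower()                         # lowercase
    for short, full in CONTRACTIONS.items():    # de-contract verb forms
        text = text.replace(short, full)
    text = re.sub(r"[^\w\s]", " ", text)        # strip punctuation
    return re.sub(r"\s+", " ", text).strip()

print(preprocess("I'd say this isn't Fake News!"))
# -> "i would say this is not fake news"
```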
### 3.2 Multi-task Learning
We use multi-task learning (MTL) to incorporate emotion classification as an
auxiliary task to our fake news detection branch. Multi-task learning enables
a model to learn the shared features between two or more correlated tasks for
improved feature extraction and performance. We use Ekman’s or Plutchik’s
emotion labels for the emotion classification branch in our MTL models to see
which performs better, and compare the performance with that of the
corresponding single-task (STL) models in domain-adaptive and non-adaptive
settings.
### 3.3 Emotion-guided Domain-adaptive Framework
We propose the cumulative use of domain adaptation and emotion-guided feature
extraction for cross-domain fake news detection. Our approach aims to improve
the feature alignment between different domains using adversarial domain
adaptation, by leveraging the correlation between the emotion and the veracity
of a text (as shown in Figure 1(4)). Figure 2 shows our proposed framework. We
use an LSTM-based (Hochreiter and Schmidhuber, 1997) feature extractor, which
is trained using the accumulated loss from the fake news classifier, the
emotion classifier, and the discriminator (which aids in learning
domain-invariant features).
LSTM can be replaced with better feature extractors. We used it specifically
for easier comparison to non-adapted emotion-guided and non-adapted single-
task models. The domain classifier acts as the discriminator. In our proposed
framework, we do not use the truth labels for the target dataset for domain
adaptation. However, we utilize the target domain emotion labels in our
approach to better align the two domains using the emotion labels for
supervised learning. The fake news classification loss, emotion classification
loss, adversarial loss, and total loss are defined as in Equations 1, 2, 3,
and 4:
$L_{FND}\ =\ \min\limits_{\theta_{l},\theta_{f}}\sum_{i=1}^{n_{s}}L_{f}^{i}$
(1)
$L_{emo}\ =\ \min\limits_{\theta_{l},\theta_{e}}\Big{(}\sum_{i=1}^{n_{s}}L_{es}^{i}\
+\ \sum_{j=1}^{n_{t}}L_{et}^{j}\Big{)}$ (2)
$L_{adv}\ =\
\min\limits_{\theta_{d}}(\max\limits_{\theta_{l}}(\sum_{i=1}^{n_{s}}L_{ds}^{i}\
+\ \sum_{j=1}^{n_{t}}L_{dt}^{j}))$ (3)
$L_{Total}\ =\ (1-\alpha-\beta)*L_{FND}\ +\ \alpha\ *\ (L_{adv})\ +\ \beta\ *\
(L_{emo})$ (4)
where $n_{s}$ and $n_{t}$ are number of samples in source and target sets;
$\theta_{d}$, $\theta_{f}$, $\theta_{e}$ and $\theta_{l}$ are parameters for
discriminator, fake news classifier, emotion classifier and LSTM feature
extractor; $L_{d_{s}}$ and $L_{d_{t}}$ are binary crossentropy loss for source
and target classification; $L_{es}$ and $L_{et}$ are crossentropy loss for
emotion classification; $L_{f}$ is binary crossentropy loss for Fake News
Classifier; $\alpha$ and $\beta$ are weight parameters in $L_{Total}$. We
optimized $\alpha$ and $\beta$ for each setting for optimal performance.
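A minimal PyTorch sketch of this adversarial multi-task setup is given below (our own reconstruction, not the authors' released code; the 64-unit widths of the intermediate dense layers are assumptions, and n_emotions would be 6 for Ekman’s or 8 for Plutchik’s classes):

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: identity forward, sign-flipped gradient backward."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

class EmotionGuidedDA(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden=256, n_emotions=8):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)   # GloVe-initialized in practice
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.fake_head = nn.Sequential(nn.Linear(hidden, 64), nn.ReLU(), nn.Linear(64, 1))
        self.emo_head = nn.Sequential(nn.Linear(hidden, 64), nn.ReLU(), nn.Linear(64, n_emotions))
        self.domain_head = nn.Linear(hidden, 1)        # discriminator

    def forward(self, tokens):
        _, (h, _) = self.lstm(self.emb(tokens))
        feat = h[-1]
        return (self.fake_head(feat),                       # fake news logits
                self.emo_head(feat),                        # emotion logits
                self.domain_head(GradReverse.apply(feat)))  # domain logits

bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()
# Total loss as in Eq. (4), for hypothetical label tensors y_fake, y_emo, y_dom:
#   loss = (1 - a - b) * bce(f_logit, y_fake) + a * bce(d_logit, y_dom) + b * ce(e_logit, y_emo)
```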
We used 300-dimensional GloVe (Pennington et al., 2014) embeddings. All models
were trained for up to 50 epochs, stopping when the peak validation accuracy
on the in-domain validation set was achieved. We used a batch size of 25 for
every experiment. Each model used the Adam optimizer with learning rate
0.0025. We used an LSTM layer with 256 units for feature extraction, while
both fake news detection and emotion classification branches consisted of two
dense layers each.
## 4 Experimental Analysis & Results
We evaluated our proposed approach on various combinations of source and
target datasets. Each model was optimized on an in-domain validation set from
the source set. Table 1 illustrates our results proving the efficacy of using
emotion-guided models in non-adapted and domain-adapted cross-domain settings.
It compares non-adaptive models, domain-adaptive models, and our emotion-
guided domain-adaptive models in various settings. MTL(E) and MTL(P) refer to
emotion-guided multi-task frameworks using Ekman’s and Plutchik’s emotions
respectively. STL refers to the single-task framework. DA refers to the use of
the domain-adaptive framework, containing a discriminator. Some findings
observed are:
| Source | Target | Setting | Accuracy |
|---|---|---|---|
| FakeNewsAMT | Celeb | STL | 0.420 |
| FakeNewsAMT | Celeb | MTL(E) | 0.520 |
| FakeNewsAMT | Celeb | MTL(P) | 0.530 |
| FakeNewsAMT | Celeb | DA STL | 0.560 |
| FakeNewsAMT | Celeb | DA MTL(E) | 0.540 |
| FakeNewsAMT | Celeb | DA MTL(P) | 0.600 |
| Celeb | FakeNewsAMT | STL | 0.432 |
| Celeb | FakeNewsAMT | MTL(E) | 0.471 |
| Celeb | FakeNewsAMT | MTL(P) | 0.476 |
| Celeb | FakeNewsAMT | DA STL | 0.395 |
| Celeb | FakeNewsAMT | DA MTL(E) | 0.501 |
| Celeb | FakeNewsAMT | DA MTL(P) | 0.551 |
| Politifact | Gossipcop | STL | 0.527 |
| Politifact | Gossipcop | MTL(E) | 0.555 |
| Politifact | Gossipcop | MTL(P) | 0.603 |
| Politifact | Gossipcop | DA STL | 0.585 |
| Politifact | Gossipcop | DA MTL(E) | 0.698 |
| Politifact | Gossipcop | DA MTL(P) | 0.671 |
| Celeb | Gossipcop | STL | 0.488 |
| Celeb | Gossipcop | MTL(E) | 0.501 |
| Celeb | Gossipcop | MTL(P) | 0.490 |
| Celeb | Gossipcop | DA STL | 0.525 |
| Celeb | Gossipcop | DA MTL(E) | 0.555 |
| Celeb | Gossipcop | DA MTL(P) | 0.587 |
| FakeNewsAMT | Gossipcop | STL | 0.451 |
| FakeNewsAMT | Gossipcop | MTL(E) | 0.652 |
| FakeNewsAMT | Gossipcop | MTL(P) | 0.620 |
| FakeNewsAMT | Gossipcop | DA STL | 0.790 |
| FakeNewsAMT | Gossipcop | DA MTL(E) | 0.805 |
| FakeNewsAMT | Gossipcop | DA MTL(P) | 0.795 |
| FakeNewsAMT | Politifact | STL | 0.363 |
| FakeNewsAMT | Politifact | MTL(E) | 0.450 |
| FakeNewsAMT | Politifact | MTL(P) | 0.530 |
| FakeNewsAMT | Politifact | DA STL | 0.621 |
| FakeNewsAMT | Politifact | DA MTL(E) | 0.704 |
| FakeNewsAMT | Politifact | DA MTL(P) | 0.621 |
Table 1: Cross-domain evaluation of non-adaptive and adaptive models on
FakeNewsAMT, Celeb, Politifact and Gossipcop datasets. Emotion-guided domain-
adaptive models (DA MTL(E) and DA MTL(P)) outperform their corresponding STL
models in cross-domain settings. Domain-adaptive MTL models consistently
outperform baseline STL, non-adaptive MTL and domain-adaptive STL models.
MTL(E) and MTL(P) models outperform their STL counterparts in cross-domain
settings, as seen in Table 1. This indicates improved extraction of
generalizable features by the emotion-guided models, which aids in improved
fake news detection across datasets from different domains. MTL(E) and MTL(P)
further perform comparably across most settings, with each outperforming the
other in a subset of the source-target combinations.
DA STL models generally outperform STL models in cross-domain settings across
multiple combinations of datasets. However, the STL model outperformed the DA
STL model with the Celeb dataset as source and the FakeNewsAMT dataset as
target, confirming that unguided adaptation can sometimes lead to negative
negative alignment, reducing the performance of the model.
DA MTL(E) and DA MTL(P) models improve performance in cross-domain settings.
Table 1 shows improved results obtained using the emotion-guided adversarial
DA models over their non-adaptive counterparts. This shows the scope for
improved feature extraction even after using DA, and emotion-guided models act
as a solution aiding in correct alignment of the samples and features
extracted by the adaptive framework from different domains. Emotion-guided DA
models mitigated the issue of negative alignment when the Celeb dataset was
the source and the FakeNewsAMT dataset the target, where the STL model had
outperformed the DA STL model. The emotion-guided DA models helped correctly align the two
domains, leading to significantly improved performance.
## 5 Conclusion
In this work, we showed the efficacy of emotion-guided models for improved
cross-domain fake news detection and further presented an emotion-guided
domain-adaptive fake news detection approach. We evaluated our proposed
framework against baseline STL, emotion-guided MTL, DA STL and emotion-guided
DA MTL models for various source and target combinations from four datasets.
Our proposed approach led to improved cross-domain fake news detection
accuracy, indicating that emotions are generalizable across domains and aid in
better alignment of different domains during domain adaptation.
## References
* Choudhry et al. (2022) Choudhry, A., Khatri, I., Jain, M., 2022. An emotion-based multi-task approach to fake news detection (student abstract). Proceedings of the AAAI Conference on Artificial Intelligence 36, 12929–12930. URL: https://ojs.aaai.org/index.php/AAAI/article/view/21601, doi:10.1609/aaai.v36i11.21601.
* Colnerič and Demšar (2020) Colnerič, N., Demšar, J., 2020\. Emotion recognition on twitter: Comparative study and training a unison model. IEEE Transactions on Affective Computing 11, 433–446. doi:10.1109/TAFFC.2018.2807817.
* Ekman (1992) Ekman, P., 1992. An argument for basic emotions. Cognition & Emotion 6.
* Ganin et al. (2015) Ganin, Y., Ustinova, E., Ajakan, H., Germain, P., Larochelle, H., Laviolette, F., Marchand, M., Lempitsky, V., 2015\. Domain-adversarial training of neural networks. URL: https://arxiv.org/abs/1505.07818, doi:10.48550/ARXIV.1505.07818.
* Guo et al. (2019) Guo, C., Cao, J., Zhang, X., Shu, K., Yu, M., 2019\. Exploiting emotions for fake news detection on social media. ArXiv abs/1903.01728.
* Hochreiter and Schmidhuber (1997) Hochreiter, S., Schmidhuber, J., 1997. Long Short-Term Memory. Neural Computation 9, 1735–1780. URL: https://doi.org/10.1162/neco.1997.9.8.1735, doi:10.1162/neco.1997.9.8.1735.
* Huang et al. (2021) Huang, Y., Gao, M., Wang, J., Shu, K., 2021. DAFD: domain adaptation framework for fake news detection, in: Mantoro, T., Lee, M., Ayu, M.A., Wong, K.W., Hidayanto, A.N. (Eds.), Neural Information Processing - 28th International Conference, ICONIP 2021, Sanur, Bali, Indonesia, December 8-12, 2021, Proceedings, Part I, Springer. pp. 305--316. URL: https://doi.org/10.1007/978-3-030-92185-9_25, doi:10.1007/978-3-030-92185-9_25.
* K et al. (2020) K, A., P, D., L, L.V., 2020\. Emotion cognizance improves health fake news identification, in: Proceedings of the 24th Symposium on International Database Engineering & Applications, Association for Computing Machinery. doi:10.1145/3410566.3410595.
* Li et al. (2021) Li, Y., Lee, K., Kordzadeh, N., Faber, B., Fiddes, C., Chen, E., Shu, K., 2021. Multi-source domain adaptation with weak supervision for early fake news detection, in: 2021 IEEE International Conference on Big Data (Big Data), pp. 668--676. doi:10.1109/BigData52589.2021.9671592.
* Pennington et al. (2014) Pennington, J., Socher, R., Manning, C., 2014. GloVe: Global vectors for word representation, in: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Association for Computational Linguistics, Doha, Qatar. pp. 1532--1543. URL: https://aclanthology.org/D14-1162, doi:10.3115/v1/D14-1162.
* Pérez-Rosas et al. (2018) Pérez-Rosas, V., Kleinberg, B., Lefevre, A., Mihalcea, R., 2018\. Automatic detection of fake news, in: Proceedings of the 27th International Conference on Computational Linguistics, Association for Computational Linguistics. pp. 3391--3401.
* Plutchik (1982) Plutchik, R., 1982. A psychoevolutionary theory of emotions. Social Science Information 21.
* Saikh et al. (2019) Saikh, T., De, A., Ekbal, A., Bhattacharyya, P., 2019. A deep learning approach for automatic detection of fake news, in: Proceedings of the 16th International Conference on Natural Language Processing, NLP Association of India, International Institute of Information Technology, Hyderabad, India. pp. 230--238.
* Sheng et al. (2022) Sheng, Q., Cao, J., Zhang, X., Li, R., Wang, D., Zhu, Y., 2022. Zoom out and observe: News environment perception for fake news detection. URL: https://arxiv.org/abs/2203.10885, doi:10.48550/ARXIV.2203.10885.
* Shu et al. (2019) Shu, K., Cui, L., Wang, S., Lee, D., Liu, H., 2019\. Defend: Explainable fake news detection, in: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Association for Computing Machinery, New York, NY, USA. p. 395–405. URL: https://doi.org/10.1145/3292500.3330935, doi:10.1145/3292500.3330935.
* Vosoughi et al. (2018) Vosoughi, S., Roy, D., Aral, S., 2018. The spread of true and false news online. Science 359, 1146--1151. doi:10.1126/science.aap9559.
* Zhang et al. (2020) Zhang, T., Wang, D., Chen, H., Zeng, Z., Guo, W., Miao, C., Cui, L., 2020. Bdann: Bert-based domain adaptation neural network for multi-modal fake news detection, in: IJCNN. doi:10.1109/IJCNN48605.2020.9206973.
* Zhang et al. (2021) Zhang, X., Cao, J., Li, X., Sheng, Q., Zhong, L., Shu, K., 2021. Mining dual emotion for fake news detection, in: Proceedings of the Web Conference 2021, Association for Computing Machinery, New York, NY, USA. p. 3465–3476. doi:10.1145/3442381.3450004.
# Sufficient conditions, lower bounds and trade-off relations for quantumness
in Kirkwood-Dirac quasiprobability
Agung Budiyono
<EMAIL_ADDRESS>
Research Center for Quantum Physics, National Research and Innovation Agency, South Tangerang 15314, Republic of Indonesia
###### Abstract
Kirkwood-Dirac (KD) quasiprobability is a quantum analog of classical phase
space probability. It offers an informationally complete representation of
quantum state wherein the quantumness associated with quantum noncommutativity
manifests in its nonclassical values, i.e., the nonreal and/or negative values
of the real part. This naturally raises a question: how does such form of
quantumness comply with the uncertainty principle, which also arises from
quantum noncommutativity? Here, first, we obtain sufficient conditions for the
KD quasiprobability defined relative to a pair of PVM (projection-valued
measure) bases to have nonclassical values. Using these nonclassical values,
we then introduce two quantities which capture the amount of KD quantumness in
a quantum state relative to a single PVM basis. They are defined respectively
as the nonreality, and the nonclassicality (which captures both the nonreality
and the negativity), of the associated KD quasiprobability over the PVM basis
of interest and another PVM basis, maximized over all possible choices of
the latter. We obtain their lower bounds, and derive trade-off relations
respectively reminiscent of the Robertson and Robertson-Schrödinger
uncertainty relations but with lower bounds maximized over the convex sets of
Hermitian operators whose complete sets of eigenprojectors are given by the
PVM bases. We discuss their measurement using weak value measurement and
classical optimization. We then suggest an information theoretical
interpretation of the KD nonreality relative to a PVM basis as a lower bound
to the maximum total root-mean-squared error in an optimal estimation of the
PVM basis, and thereby obtain a lower bound and a trade-off relation for the
root-mean-squared error. Finally, we suggest an interpretation of the KD
nonclassicality relative to a PVM basis as a lower bound to the total state
disturbance caused by a nonselective projective binary measurement associated
with the PVM basis, and derive a lower bound and a trade-off relation for the
disturbance.
Keywords: Kirkwood-Dirac quasiprobability, quantum noncommutativity, nonreality,
negativity, quantumness, sufficient conditions, lower bounds, trade-off
relations
###### pacs:
03.65.Ta, 03.65.Ca
## I Introduction
The Heisenberg uncertainty principle is a basic tenet of quantum mechanics
which sets down a radical conceptual demarcation from classical mechanics
[Heisenberg UR]. It stipulates a fundamental restriction, in the form of
trade-off relations, on the simultaneous predictability of outcomes of
measurement of two physical quantities. Formally, the trade-off relations
arise from the noncommutativity of operators representing quantum measurements
[Kennard UR; Weyl UR; Robertson UR]. From the very beginning, the uncertainty
principle has led to the foundational debate about the deep nature of
randomness arising in quantum measurement [EPR paradox] and the intimately
related conceptual issue of the meaning of quantum correlation [Bell's
theorem]. In recent decades, attempts to better understand the meaning of the
uncertainty relation, and quantum randomness in general, have opened an avenue
for fruitful applications in different areas of quantum science and quantum
technology [Coles entropic uncertainty relation review]. It is thus important
to study the uncertainty principle from various perspectives to appreciate its
rich and multi-faceted nature and to conceive further implications.
The earliest uncertainty relations were developed based on the quantification
of the measurement uncertainty in terms of the variance of measurement
outcomes [Kennard UR; Weyl UR; Robertson UR; Schroedinger UR]. Certain
drawbacks of variance for characterizing unpredictability motivated the
construction of uncertainty relations based on the Shannon entropy of the
measurement outcomes [Hirschman UR based on entropy; Everett question on the
UR based on entropy; Bialynicki-Birula UR based on entropy for phase space;
Deutsch entropic UR; Maassen entropic UR; Berta entropic UR with side
information; Coles entropic UR; Hall quantum-classical decomposition; Wehner
entropic UR review; Coles entropic uncertainty relation review]. Variance and
Shannon entropy of measurement outcomes, however, do not only quantify the
genuine quantum uncertainty originating from the noncommutativity between the
quantum state and the measurement operators. They also take into account the
classical uncertainty stemming from the agent’s ignorance about the
preparation, due either to classical noise or to lack of access to another
system entangled with the system of interest, leading to the preparation of
mixed states. It is thus instructive to ask if it is possible to develop
uncertainty relations for the intrinsic quantum uncertainty rather than for
the total measurement uncertainty. A notable result along this direction was
reported in Ref. [Luo quantum Robertson-like uncertainty relation for WY skew
information], where the author derived a trade-off relation for an intrinsic
quantum uncertainty quantified by means of the Wigner-Yanase skew information
[Wigner-Yanase skew information], having a form similar to the Robertson
uncertainty relation. This result is generalized in Ref. [Furuichi quantum
Robertson-Schroedinger uncertainty relation for WY skew information] to obtain
a trade-off relation similar to the Robertson-Schrödinger uncertainty
relation. Another approach is suggested in Refs. [Korzekwa quantum-classical
decomposition; Singh uncertainty relation for coherence; Yuan uncertainty
relation for coherence; Hall quantum-classical decomposition], which used some
measures of quantum coherence to isolate the intrinsic quantum uncertainty and
showed that they satisfy trade-off relations similar to the entropic
uncertainty relations.
In the present study, we work with an informationally equivalent
representation of quantum states on a finite-dimensional Hilbert space using
Kirkwood-Dirac (KD) quasiprobability Kirkwood quasiprobability ; Dirac
quasiprobability ; Barut KD quasiprobability . KD quasiprobability is a
quantum analog of classical phase space probability wherein the quantumness
associated with noncommutativity manifests in its nonclassical values, i.e.,
non-real values and/or negative values of its real part. This prompts the
question of how the uncertainty principle imposes a restriction on this form
of quantumness. In order to answer this question, we first derive sufficient
conditions for the KD quasiprobability relative to a pair of rank-1 orthogonal
PVM (projection-valued measure) bases to have nonclassical values. We then
introduce two quantities which measure the KD quantumness in a quantum state
relative to a single PVM basis. The first quantity is defined as the
nonreality in the KD quasiprobability over the PVM basis of interest and
another PVM basis, maximized over all possible choices of the latter. We
call it the KD nonreality in the quantum state relative to the PVM basis. The
second quantity is defined similarly, but in terms of the nonclassicality,
which captures simultaneously both the nonreality and the negativity of the KD
quasiprobability. We call it the KD nonclassicality in the quantum state
relative to the PVM basis. Both quantities have been proposed earlier in Refs.
Agung KD-nonreality coherence ; Agung KD-nonclassicality coherence as
faithful quantifiers of quantum coherence relative to the incoherent
orthonormal basis corresponding to the rank-1 PVM basis. We obtain lower
bounds for the quantumness captured by the above defined KD nonreality and KD
nonclassicality in a state relative to a PVM basis.
We then proceed to derive trade-off relations for the KD nonreality in a state
relative to a PVM basis and that relative to another PVM basis, and similarly
for the KD nonclassicality in a state relative to a PVM basis and that
relative to another PVM basis. They are respectively reminiscent of the
Robertson Robertson UR and the Robertson-Schrödinger uncertainty relations
Schroedinger UR , but with lower bounds that are optimized over the convex
sets of all pairs of Hermitian operators whose eigenprojectors are given by
the two PVM bases of interest. The lower bounds and the trade-off relations
for the KD nonreality and KD nonclassicality in a state relative to a rank-$1$
orthogonal PVM basis lead to similar lower bounds and trade-off relations for
the $l_{1}$-norm coherence of the state relative to the incoherent orthonormal
basis corresponding to the PVM basis Baumgratz quantum coherence measure . We
sketch a measurement scheme of the KD nonreality and KD nonclassicality
relative to a PVM basis based on weak value measurement and classical
optimization. We then suggest an information theoretical interpretation of the
KD nonreality in a state relative to a PVM basis as a lower bound to the root-
mean-squared error of an optimal estimation of the PVM basis based on
projective measurement in the worst case scenario. This allows us to derive a
lower bound and a trade-off relation for the root-mean-squared error of the
optimal estimation of a PVM basis in the worst case scenario. We further
suggest an operational interpretation of the KD nonclassicality in a state
relative to a PVM basis as a lower bound to the total state disturbance caused
by a nonselective projective binary measurement associated with the PVM basis,
and thereby derive a lower bound and a trade-off relation of such state
disturbance.
## II Sufficient conditions for nonclassical Kirkwood-Dirac quasiprobability
KD quasiprobability is a specific quantum analog of phase space probability
distribution in classical statistical mechanics Kirkwood quasiprobability ;
Dirac quasiprobability . The KD quasiprobability associated with a quantum
state represented by a density operator $\varrho$ on a Hilbert space
$\mathcal{H}$ over a pair of orthonormal bases $\\{\ket{a}\\}$ and
$\\{\ket{b}\\}$ of $\mathcal{H}$, is defined as Kirkwood quasiprobability ;
Dirac quasiprobability ; Barut KD quasiprobability
$\displaystyle{\rm Pr}_{\rm KD}(a,b|\varrho):={\rm
Tr}\\{\Pi_{b}\Pi_{a}\varrho\\},$ (1)
where $\Pi_{x}:=\ket{x}\bra{x}$, $x=a,b$. We note that $\\{\Pi_{x}\\}$
comprises a set of rank-1 orthogonal PVM, i.e., $\sum_{x}\Pi_{x}=\mathbb{I}$,
$\Pi_{x}\Pi_{x^{\prime}}=\delta_{xx^{\prime}}\Pi_{x}$, where $\mathbb{I}$ is
the identity operator on $\mathcal{H}$ and $\delta_{xx^{\prime}}$ is the
Kronecker delta. The PVM $\\{\Pi_{x}\\}$ describes a sharp projective
measurement with outcomes $x$ and probability ${\rm Pr}(x|\varrho)={\rm
Tr}\\{\Pi_{x}\varrho\\}$. Here on we shall thus refer to $\\{\Pi_{x}\\}$ as a
rank-1 PVM basis.
KD quasiprobability gives the correct marginal probabilities, i.e., $\sum_{a}{\rm
Pr}_{\rm KD}(a,b|\varrho)={\rm Pr}(b|\varrho)$ and $\sum_{b}{\rm Pr}_{\rm KD}(a,b|\varrho)={\rm Pr}(a|\varrho)$.
However, unlike a conventional classical probability, the KD quasiprobability may
take nonreal values, and its real part, called the Terletsky-Margenau-Hill
quasiprobability Terletsky TBMH quasiprobability ; Barut KD quasiprobability ;
Margenau TBMH quasiprobability , may be negative. Such nonreality and
negativity capture quantum noncommutativity: if any two of its
three ingredients $\\{\varrho,\Pi_{a},\Pi_{b}\\}$ commute, e.g.,
$[\Pi_{a},\varrho]_{-}=0$, then the KD quasiprobability ${\rm Pr}_{\rm
KD}(a,b|\varrho)$ is real and nonnegative. Here and in what follows,
$[X,Y]_{\mp}:=XY\mp YX$ denotes the commutator and anticommutator of two
Hermitian operators $X$ and $Y$. In this sense, the nonreality or/and the
negativity of the KD quasiprobability delineate a form of quantumness stemming
from quantum noncommutativity. The converse, however, is not necessarily true
Drori nonclassicality tighter and noncommutativity ; deBievre nonclassicality
in KD distribution . Remarkably, the real and imaginary parts of the KD
quasiprobability can be estimated in experiment without resorting to full
state tomography either using weak value measurement or other methods Aharonov
weak value ; Aharonov-Daniel book ; Wiseman weak value ; Lundeen measurement
of KD distribution ; Salvail direct measurement KD distribution ; Bamber
measurement of KD distribution ; Thekkadath measurement of density matrix ;
Johansen quantum state from successive projective measurement ; Lostaglio KD
quasiprobability and quantum fluctuation ; Hernandez-Gomez experimental
observation of TBMH negativity ; Wagner measuring weak values and KD
quasiprobability ; Vallone strong measurement to reconstruct quantum wave
function ; Cohen estimating of weak value with strong measurements ; Lundeen
complex weak value ; Jozsa complex weak value . This form of quantumness,
i.e., the nonreality or/and the negativity in the KD quasiprobability has thus
found applications in different areas of quantum science and technology
Lostaglio KD quasiprobability and quantum fluctuation ; Allahverdyan TBMH as
quasiprobability distribution of work ; Lostaglio TBMH quasiprobability
fluctuation theorem contextuality ; Levy quasiprobability distribution for
heat fluctuation in quantum regime ; Alonso KD quasiprobability witnesses
quantum scrambling ; Halpern quasiprobability and information scrambling ;
Arvidsson-Shukur quantum advantage in postselected metrology ; Lupu-Gladstein
negativity enhanced quantum phase estimation 2022 ; Das KD quasiprobability in
postselected metrology ; Pusey negative TBMH quasiprobability and
contextuality ; Kunjwal contextuality of non-real weak value ; Lostaglio
contextuality in quantum linear response ; Agung KD-nonreality coherence ;
Agung KD-nonclassicality coherence ; Agung translational asymmetry from
nonreal weak value ; Agung estimation and operational interpretation of trace-
norm asymmetry ; Agung KD general quantum correlation .
KD quasiprobability gives an informationally complete representation of an
arbitrary quantum state. That is, given a KD quasiprobability ${\rm Pr}_{\rm
KD}(a,b|\varrho)$ defined over a pair of orthonormal bases $\\{\ket{a}\\}$ and
$\\{\ket{b}\\}$ with $\braket{a}{b}\neq 0$ for all $(a,b)$, the associated
quantum state can be reconstructed as $\varrho=\sum_{a,b}{\rm Pr}_{\rm
KD}(a,b|\varrho)\frac{\ket{a}\bra{b}}{\braket{b}{a}}$. This important fact
naturally raises an intriguing question of how the KD quantumness captures
different yet interrelated nonclassical concepts associated with a quantum
state subjected to quantum measurements. To this end, we have argued
previously that the nonreality or simultaneously both the nonreality and
negativity of the KD quasiprobability can be used to quantitatively
characterize quantum coherence Agung KD-nonreality coherence ; Agung KD-
nonclassicality coherence , asymmetry Agung translational asymmetry from
nonreal weak value ; Agung estimation and operational interpretation of trace-
norm asymmetry , and general quantum correlation Agung KD general quantum
correlation . In the present article, we study how the quantumness in the KD
quasiprobability complies with the quantum uncertainty principle. Both the KD
quantumness and the uncertainty principle arise from the quantum
noncommutativity.
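For concreteness, the following minimal numerical sketch (ours, not part of the original derivations; it assumes a randomly generated qutrit state and Haar-ish random bases) computes the KD quasiprobability of Eq. (1) and checks its normalization, its marginals, and the reconstruction formula above:

```python
# Minimal sketch (assumed setup: random qutrit state, random orthonormal
# bases). Computes Pr_KD(a,b|rho) = Tr{Pi_b Pi_a rho} and checks the
# reconstruction rho = sum_{a,b} Pr_KD(a,b|rho) |a><b| / <b|a>.
import numpy as np

rng = np.random.default_rng(0)
d = 3

def random_state(dim):
    # Random density matrix from a Ginibre matrix G: rho = GG^dag / Tr{GG^dag}.
    G = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real

def random_basis(dim):
    # Columns of a random unitary (QR of a Ginibre matrix) form an
    # orthonormal basis, i.e., a rank-1 orthogonal PVM basis.
    G = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    Q, _ = np.linalg.qr(G)
    return Q

rho = random_state(d)
A = random_basis(d)    # columns are the vectors |a>
B = random_basis(d)    # columns are the vectors |b>

# Pr_KD(a,b|rho) = <b|a> <a|rho|b>
pr_kd = np.array([[(B[:, b].conj() @ A[:, a]) * (A[:, a].conj() @ rho @ B[:, b])
                   for b in range(d)] for a in range(d)])

# Normalization, marginal over a, and state reconstruction
assert np.isclose(pr_kd.sum(), 1.0)
assert np.allclose(pr_kd.sum(axis=0),
                   [(B[:, b].conj() @ rho @ B[:, b]).real for b in range(d)])
rho_rec = sum(pr_kd[a, b] * np.outer(A[:, a], B[:, b].conj())
              / (B[:, b].conj() @ A[:, a])
              for a in range(d) for b in range(d))
assert np.allclose(rho_rec, rho)
```

Generically, $\braket{a}{b}\neq 0$ for all $(a,b)$ for randomly drawn bases, so the reconstruction is well defined in this sketch.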
First, we summarize two mathematical objects for quantifying respectively the
nonreality and the total nonclassicality which captures simultaneously both
the nonreality and the negativity in the KD quasiprobability. To quantify the
nonreality in the KD quasiprobability, we use the following $l_{1}$-norm of
the nonreal part of the KD quasiprobability:
$\displaystyle{\rm NRe}(\\{{\rm Pr}_{\rm KD}(a,b|\varrho)\\})$
$\displaystyle:=$ $\displaystyle\sum_{a,b}|{\rm Im}\,{\rm Pr}_{\rm KD}(a,b|\varrho)|$ (2)
$\displaystyle=$ $\displaystyle\sum_{a,b}|{\rm Im}{\rm
Tr}\\{\Pi_{b}\Pi_{a}\varrho\\}|.$
It vanishes if and only if the KD quasiprobability is real. Next, let us
define the following quantity Drori nonclassicality tighter and
noncommutativity ; Alonso KD quasiprobability witnesses quantum scrambling ;
Lostaglio KD quasiprobability and quantum fluctuation :
$\displaystyle{\rm NCl}(\\{{\rm Pr}_{\rm KD}(a,b|\varrho)\\})$
$\displaystyle:=$ $\displaystyle\sum_{a,b}|{\rm Pr}_{\rm KD}(a,b|\varrho)|-1$
(3) $\displaystyle=$ $\displaystyle\sum_{a,b}|{\rm
Tr}\\{\Pi_{b}\Pi_{a}\varrho\\}|-1.$
It is nonnegative by definition since $\sum_{a,b}|{\rm Pr}_{\rm
KD}(a,b|\varrho)|\geq|\sum_{a,b}{\rm Pr}_{\rm KD}(a,b|\varrho)|=1$, where the
equality follows from the fact that KD quasiprobability is always normalized,
i.e., $\sum_{a,b}{\rm Pr}_{\rm KD}(a,b|\varrho)=1$. Moreover, it vanishes if and only
if $|{\rm Pr}_{\rm KD}(a,b|\varrho)|={\rm Pr}_{\rm KD}(a,b|\varrho)$ for all
$a$ and $b$, i.e., if and only if ${\rm Pr}_{\rm KD}(a,b|\varrho)$ is real and
nonnegative. ${\rm NCl}(\\{{\rm Pr}_{\rm KD}(a,b|\varrho)\\})$ defined in Eq.
(3) thus quantifies the failure of the KD quasiprobability ${\rm Pr}_{\rm
KD}(a,b|\varrho)$ to be both real and nonnegative. We refer to ${\rm
NRe}(\\{{\rm Pr}_{\rm KD}(a,b|\varrho)\\})$ and ${\rm NCl}(\\{{\rm Pr}_{\rm
KD}(a,b|\varrho)\\})$ defined respectively in Eqs. (2) and (3) as the KD
nonreality and the KD nonclassicality in the quantum state $\varrho$ relative
to the pair of PVM bases $\\{\Pi_{a}\\}$ and $\\{\Pi_{b}\\}$.
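In code, both quantifiers are one-liners on the KD table of the sketch above:

```python
# Eqs. (2) and (3) evaluated on the table pr_kd from the sketch above.
nre = np.abs(pr_kd.imag).sum()          # KD nonreality, Eq. (2)
ncl = np.abs(pr_kd).sum() - 1.0         # KD nonclassicality, Eq. (3)
assert ncl >= -1e-12                    # NCl >= 0, up to rounding
print(nre, ncl)
```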
We obtain two simple sufficient conditions respectively for nonvanishing ${\rm
NRe}(\\{{\rm Pr}_{\rm KD}(a,b|\varrho)\\})$ and ${\rm NCl}(\\{{\rm Pr}_{\rm
KD}(a,b|\varrho)\\})$. Below, we use the notation $\|X\|_{\infty}$ to denote
the operator norm, or $\infty$-Schatten norm, of an operator $X$; for a Hermitian
operator $X$, $\|X\|_{\infty}$ is equal to the largest eigenvalue modulus, i.e.,
the spectral radius of $X$. Using the operator norm of a Hermitian operator $X$, we then
define the corresponding normalized Hermitian operator as
$\tilde{X}:=X/\|X\|_{\infty}$.
First, we have the following result for the KD nonreality in a state relative
to a pair of PVM bases.
Lemma 1. Given a state $\varrho$ on a Hilbert space $\mathcal{H}$, the
nonreality in the associated KD quasiprobability over a pair of PVM bases
$\\{\Pi_{a}\\}$ and $\\{\Pi_{b}\\}$ of $\mathcal{H}$ defined in Eq. (2), is
lower bounded as
$\displaystyle{\rm NRe}(\\{{\rm Pr}_{\rm
KD}(a,b|\varrho)\\})\geq\frac{1}{2}\big{|}{\rm
Tr}\\{\tilde{B}[\tilde{A},\varrho]_{-}\\}\big{|},$ (4)
where $A$ and $B$ are any Hermitian operators with bounded spectra whose
complete sets of eigenprojectors are given respectively by $\\{\Pi_{a}\\}$ and
$\\{\Pi_{b}\\}$.
Proof. Let $A=\sum_{a}a\Pi_{a}$ be a Hermitian operator on $\mathcal{H}$ with
the complete set of eigenprojectors $\\{\Pi_{a}\\}$ and the associated
spectrum of eigenvalues $\\{a\\}$. Similarly, let $B=\sum_{b}b\Pi_{b}$ be a
Hermitian operator on $\mathcal{H}$ with the complete set of eigenprojectors
$\\{\Pi_{b}\\}$ and the associated spectrum of eigenvalues $\\{b\\}$. From the
definition of the KD nonreality in the quantum state $\varrho$ relative to a
pair of PVM bases $\\{\Pi_{a}\\}$ and $\\{\Pi_{b}\\}$ in Eq. (2), we have
$\displaystyle{\rm NRe}(\\{{\rm Pr}_{\rm KD}(a,b|\varrho)\\})$
$\displaystyle=$
$\displaystyle\frac{1}{\|A\|_{\infty}\|B\|_{\infty}}\sum_{a,b}\|A\|_{\infty}\|B\|_{\infty}\big{|}{\rm
Im}({\rm Tr}\\{\Pi_{b}\Pi_{a}\varrho\\})\big{|}$ (5) $\displaystyle\geq$
$\displaystyle\frac{1}{\|A\|_{\infty}\|B\|_{\infty}}\big{|}{\rm Im}{\rm
Tr}\\{BA\varrho\\}\big{|}$ $\displaystyle=$
$\displaystyle\frac{1}{2}\big{|}{\rm
Tr}\\{\tilde{B}[\tilde{A},\varrho]_{-}\\}\big{|},$
where the inequality in Eq. (5) follows from $\|A\|_{\infty}=\max\\{|a|\\}$,
$\|B\|_{\infty}=\max\\{|b|\\}$ and the triangle inequality, and the last equality
follows from the identity ${\rm Im}\,{\rm Tr}\\{BA\varrho\\}=\frac{1}{2i}{\rm Tr}\\{B[A,\varrho]_{-}\\}$. ∎
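As a quick numerical sanity check of Lemma 1 (continuing the sketch above; the eigenvalue spectra drawn below are arbitrary assumptions of the sketch):

```python
# Numerical check of Eq. (4), continuing the sketch above.
A_op = A @ np.diag(rng.normal(size=d)) @ A.conj().T   # A = sum_a a Pi_a
B_op = B @ np.diag(rng.normal(size=d)) @ B.conj().T   # B = sum_b b Pi_b
A_t = A_op / np.linalg.norm(A_op, 2)   # spectral norm = max |eigenvalue|
B_t = B_op / np.linalg.norm(B_op, 2)

lhs = np.abs(pr_kd.imag).sum()                             # NRe, Eq. (2)
rhs = 0.5 * abs(np.trace(B_t @ (A_t @ rho - rho @ A_t)))   # Eq. (4)
assert lhs >= rhs - 1e-12
```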
As an immediate corollary of Lemma 1: while the noncommutativity of all pairs
among $A$, $B$, and $\varrho$ is not sufficient for the KD quasiprobability
${\rm Pr}_{\rm KD}(a,b|\varrho)$ associated with $\varrho$ defined over the
eigenbasis $\\{\ket{a}\\}$ of $A$ and the eigenbasis $\\{\ket{b}\\}$ of $B$ to
have a nonreal value (or a negative real part, or both) Drori
nonclassicality tighter and noncommutativity ; deBievre nonclassicality in KD
distribution ; deBievre incompatibility-uncertainty-KD nonclassicality ; Xu KD
classical pure states , a nonvanishing lower bound in Eq. (4), i.e., ${\rm
Tr}\\{B[A,\varrho]_{-}\\}\neq 0$ for a pair of Hermitian operators $A$ and
$B$, is sufficient for the corresponding KD quasiprobability ${\rm Pr}_{\rm
KD}(a,b|\varrho)$ to be nonreal for some $(a,b)$. It is interesting to remark
that the lower bound in Eq. (4) takes a form similar to that of the Robertson
uncertainty relation Robertson UR .
Next, we derive a lower bound for the KD nonclassicality in a state relative
to a pair of PVM bases defined in Eq. (3).
Lemma 2. Given a state $\varrho$ on a Hilbert space $\mathcal{H}$, the
nonclassicality in the KD quasiprobability associated with
$\varrho$ over a pair of PVM bases $\\{\Pi_{a}\\}$ and $\\{\Pi_{b}\\}$ of
$\mathcal{H}$ defined in Eq. (3) is lower bounded as
$\displaystyle{\rm NCl}(\\{{\rm Pr}_{\rm KD}(a,b|\varrho)\\})$
$\displaystyle\geq$ $\displaystyle\frac{1}{2}\big{(}\big{|}{\rm
Tr}\\{\varrho[\tilde{A}_{\varrho},\tilde{B}_{\varrho}]_{-}\\}\big{|}^{2}$ (6)
$\displaystyle+$ $\displaystyle\big{|}{\rm
Tr}\\{\varrho[\tilde{A}_{\varrho},\tilde{B}_{\varrho}]_{+}\\}-2{\rm
Tr}\\{\tilde{A}_{\varrho}\varrho\\}{\rm
Tr}\\{\tilde{B}_{\varrho}\varrho\\}\big{|}^{2}\big{)}^{1/2}-1,$
where $\tilde{X}_{\varrho}:=\frac{X}{\|X-{\rm
Tr}\\{X\varrho\\}\mathbb{I}\|_{\infty}}$, $X=A,B$, and $A$ and $B$ are any
Hermitian operators with bounded spectra whose complete sets of
eigenprojectors are given respectively by $\\{\Pi_{a}\\}$ and $\\{\Pi_{b}\\}$.
Proof. Let again $A=\sum_{a}a\Pi_{a}$ be a Hermitian operator on $\mathcal{H}$
with the complete set of eigenprojectors $\\{\Pi_{a}\\}$ and the associated
spectrum of eigenvalues $\\{a\\}$. Likewise, let $B=\sum_{b}b\Pi_{b}$ be a
Hermitian operator on $\mathcal{H}$ with the complete set of eigenprojectors
$\\{\Pi_{b}\\}$ and the associated spectrum of eigenvalues $\\{b\\}$. Then,
from the definition of the KD nonclassicality in $\varrho$ relative to a pair
of PVM bases $\\{\Pi_{a}\\}$ and $\\{\Pi_{b}\\}$ in Eq. (3), we first have
$\displaystyle{\rm NCl}(\\{{\rm Pr}_{\rm KD}(a,b|\varrho)\\})$
$\displaystyle=$ $\displaystyle\sum_{a,b}\big{|}{\rm
Tr}\\{\Pi_{b}\Pi_{a}\varrho\\}\big{|}-1$ (7) $\displaystyle=$
$\displaystyle\frac{\sum_{a,b}\|A\|_{\infty}\|B\|_{\infty}\big{|}{\rm
Tr}\\{\Pi_{b}\Pi_{a}\varrho\\}\big{|}}{\|A\|_{\infty}\|B\|_{\infty}}-1$
$\displaystyle\geq$ $\displaystyle\big{|}{\rm
Tr}\\{\varrho\tilde{B}\tilde{A}\\}\big{|}-1$ $\displaystyle=$
$\displaystyle\frac{1}{2}\big{|}{\rm
Tr}\\{\varrho[\tilde{B},\tilde{A}]_{-}\\}+{\rm
Tr}\\{\varrho[\tilde{A},\tilde{B}]_{+}\\}\big{|}-1,$
where to get the last line, we have used a decomposition:
$\tilde{B}\tilde{A}=\frac{1}{2}[\tilde{B},\tilde{A}]_{-}+\frac{1}{2}[\tilde{A},\tilde{B}]_{+}$.
Notice that ${\rm Tr}\\{\varrho[\tilde{B},\tilde{A}]_{-}\\}$ is pure imaginary
while ${\rm Tr}\\{\varrho[\tilde{A},\tilde{B}]_{+}\\}$ is real. Hence, the
modulus in Eq. (7) can be evaluated to give
$\displaystyle{\rm NCl}(\\{{\rm Pr}_{\rm
KD}(a,b|\varrho)\\})\geq\frac{1}{2}\big{(}\big{|}{\rm
Tr}\\{\varrho[\tilde{B},\tilde{A}]_{-}\\}\big{|}^{2}+\big{|}{\rm
Tr}\\{\varrho[\tilde{A},\tilde{B}]_{+}\\}\big{|}^{2}\big{)}^{1/2}-1.$ (8)
Next, note that the left-hand side in Eq. (8) does not depend on the spectrum
of eigenvalues of $A$ and $B$. Now, consider the following Hermitian operators
$A^{\prime}=\sum_{a}(a-{\rm Tr}\\{A\varrho\\})\Pi_{a}=A-{\rm
Tr}\\{A\varrho\\}\mathbb{I}$ and $B^{\prime}=\sum_{b}(b-{\rm
Tr}\\{B\varrho\\})\Pi_{b}=B-{\rm Tr}\\{B\varrho\\}\mathbb{I}$. Then, we have
${\rm Tr}\\{\varrho[A^{\prime},B^{\prime}]_{-}\\}={\rm
Tr}\\{\varrho[A,B]_{-}\\}$ and ${\rm
Tr}\\{\varrho[A^{\prime},B^{\prime}]_{+}\\}={\rm
Tr}\\{\varrho[A,B]_{+}\\}-2{\rm Tr}\\{A\varrho\\}{\rm Tr}\\{B\varrho\\}$.
Using these relations, replacing $A$ and $B$ in Eq. (8) respectively with
$A^{\prime}$ and $B^{\prime}$, we obtain Eq. (6). ∎
Lemma 2 shows that a nonvanishing lower bound in Eq. (6), i.e.,
$\frac{1}{2}\big{(}\big{|}{\rm
Tr}\\{\varrho[\tilde{A}_{\varrho},\tilde{B}_{\varrho}]_{-}\\}\big{|}^{2}+\big{|}{\rm
Tr}\\{\varrho[\tilde{A}_{\varrho},\tilde{B}_{\varrho}]_{+}\\}-2{\rm
Tr}\\{\tilde{A}_{\varrho}\varrho\\}{\rm
Tr}\\{\tilde{B}_{\varrho}\varrho\\}\big{|}^{2}\big{)}^{1/2}-1>0$, provides a
sufficient condition for the associated KD quasiprobability ${\rm Pr}_{\rm
KD}(a,b|\varrho)$ to be nonreal, or to have a negative real part, or both, for
some $(a,b)$. It is again interesting to note that the lower bound takes a
form similar to the lower bound of the Robertson-Schrödinger uncertainty
relation. Unlike the latter, however, the lower bound in Eq. (6) depends
nonlinearly on the state. Note that the sufficient condition in Lemma 2 is
stronger than that in Lemma 1 since the former can also detect negativity of
the KD quasiprobability.
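A similar numerical check of Lemma 2, reusing $\varrho$, $A$, and $B$ from the Lemma 1 check above and the mean-shifted normalization $\tilde{X}_{\varrho}$ defined below Eq. (6):

```python
# Numerical check of Eq. (6), continuing the sketch above.
def tilde_rho(X, rho):
    # X_rho = X / ||X - Tr{X rho} I||_inf, as defined below Eq. (6).
    shift = X - np.trace(X @ rho).real * np.eye(X.shape[0])
    return X / np.linalg.norm(shift, 2)

A_r, B_r = tilde_rho(A_op, rho), tilde_rho(B_op, rho)
comm = np.trace(rho @ (A_r @ B_r - B_r @ A_r))
anti = (np.trace(rho @ (A_r @ B_r + B_r @ A_r))
        - 2 * np.trace(A_r @ rho) * np.trace(B_r @ rho))
rhs = 0.5 * np.sqrt(abs(comm) ** 2 + abs(anti) ** 2) - 1.0
lhs = np.abs(pr_kd).sum() - 1.0                    # NCl, Eq. (3)
assert lhs >= rhs - 1e-12
```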
## III Lower bounds and trade-off relations for KD quantumness in a state
relative to a single rank-1 orthogonal PVM basis
We first stress that both ${\rm NRe}(\\{{\rm Pr}_{\rm KD}(a,b|\varrho)\\})$
and ${\rm NCl}(\\{{\rm Pr}_{\rm KD}(a,b|\varrho)\\})$ defined in Eqs. (2) and
(3) quantify the KD quantumness stemming from the failure of commutativity
between the state $\varrho$ and both of the rank-1 PVMs bases $\\{\Pi_{a}\\}$
and $\\{\Pi_{b}\\}$, and also between the pair of the PVMs bases. How does the
quantumness of the KD quasiprobability portray the noncommutativity between a
state and a single PVM basis, e.g., between $\varrho$ and the PVM basis
$\\{\Pi_{a}\\}$? Quantities which reliably capture the noncommutativity
between a state $\varrho$ and a single PVM basis $\\{\Pi_{a}\\}$ are desirable
for discussing the relation between the quantumness of the KD quasiprobability,
the uncertainty in the measurement described by the rank-1 PVM $\\{\Pi_{a}\\}$
over the state $\varrho$, and the associated uncertainty relations. To this
end, we introduce the following two quantities.
Definition 1. The KD nonreality in a state $\varrho$ on a finite-dimensional
Hilbert space $\mathcal{H}$ relative to a PVM basis $\\{\Pi_{a}\\}$ of
$\mathcal{H}$ is defined as
$\displaystyle\mathcal{Q}_{\rm KD}^{\rm NRe}(\varrho;\\{\Pi_{a}\\})$
$\displaystyle:=$ $\displaystyle\sup_{\\{\Pi_{b}\\}\in\mathcal{M}_{\rm
r1PVM}(\mathcal{H})}{\rm NRe}(\\{{\rm Pr}_{\rm KD}(a,b|\varrho)\\})$ (9)
$\displaystyle=$ $\displaystyle\sup_{\\{\Pi_{b}\\}\in\mathcal{M}_{\rm
r1PVM}(\mathcal{H})}\sum_{a,b}\big{|}{\rm Im}{\rm Pr}_{\rm
KD}(a,b|\varrho)\big{|},$
where the supremum is taken over the set $\mathcal{M}_{\rm
r1PVM}(\mathcal{H})$ of all the rank-1 PVM bases of $\mathcal{H}$.
Definition 2. The KD nonclassicality in a state $\varrho$ on a finite-
dimensional Hilbert space $\mathcal{H}$ relative to a PVM basis $\\{\Pi_{a}\\}$ of
$\mathcal{H}$ is defined as
$\displaystyle\mathcal{Q}_{\rm KD}^{\rm NCl}(\varrho;\\{\Pi_{a}\\})$
$\displaystyle:=$ $\displaystyle\sup_{\\{\Pi_{b}\\}\in\mathcal{M}_{\rm
r1PVM}(\mathcal{H})}{\rm NCl}(\\{{\rm Pr}_{\rm KD}(a,b|\varrho)\\})$ (10)
$\displaystyle=$ $\displaystyle\sup_{\\{\Pi_{b}\\}\in\mathcal{M}_{\rm
r1PVM}(\mathcal{H})}\sum_{a,b}|{\rm Tr}\\{\Pi_{b}\Pi_{a}\varrho\\}|-1.$
Let us mention that $\mathcal{Q}_{\rm KD}^{\rm NRe}(\varrho;\\{\Pi_{a}\\})$
and $\mathcal{Q}_{\rm KD}^{\rm NCl}(\varrho;\\{\Pi_{a}\\})$ defined
respectively in Eqs. (9) and (10) have been introduced earlier in Refs. Agung
KD-nonreality coherence ; Agung KD-nonclassicality coherence . There, it is
argued that both quantities can be used as faithful quantifiers, possessing
certain desirable properties, of the coherence of $\varrho$ relative to the
incoherent orthonormal basis $\\{\ket{a}\\}$ corresponding to the rank-1
orthogonal PVM basis $\\{\Pi_{a}\\}$. In particular, one can show that both
quantities vanish if and only if the state and the measurement basis commute,
i.e., $[\Pi_{a},\varrho]_{-}=0$ for all $a$, so that $\varrho$ is
incoherent relative to the orthonormal basis $\\{\ket{a}\\}$.
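The suprema in Eqs. (9) and (10) range over the continuous manifold of rank-1 PVM bases, so a closed-form evaluation is generally unavailable. A crude numerical approach, sketched below under a random-sampling assumption (a gradient search over unitaries would be sharper), only lower-bounds the suprema but often suffices for illustration; it reuses the helpers of the first sketch:

```python
# Crude Monte-Carlo lower estimates of Eqs. (9) and (10), reusing rho, A,
# and random_basis from the first sketch. Sampling only lower-bounds a sup.
def kd_table(rho, A, B):
    dim = A.shape[0]
    return np.array([[(B[:, b].conj() @ A[:, a]) * (A[:, a].conj() @ rho @ B[:, b])
                      for b in range(dim)] for a in range(dim)])

def q_kd(rho, A, n_samples=5000):
    best_nre, best_ncl = 0.0, 0.0
    for _ in range(n_samples):
        tab = kd_table(rho, A, random_basis(rho.shape[0]))
        best_nre = max(best_nre, np.abs(tab.imag).sum())
        best_ncl = max(best_ncl, np.abs(tab).sum() - 1.0)
    return best_nre, best_ncl

print(q_kd(rho, A))   # lower estimates of Q_KD^NRe and Q_KD^NCl
```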
In the following subsections, we will derive lower bounds and trade-off
relations for the KD nonreality and KD nonclassicality in a quantum state
relative to a PVM basis defined respectively in Eqs. (9) and (10). For this
purpose, we denote by $\mathbb{H}(\mathcal{H})$ the convex set of all bounded
Hermitian operators on the Hilbert space $\mathcal{H}$,
$\mathbb{H}(\mathcal{H}|\\{\Pi_{x}\\})$ is the convex set of all bounded
Hermitian operators on $\mathcal{H}$ having the complete set of
eigenprojectors $\\{\Pi_{x}\\}$, and $\mathbb{H}(\mathcal{H}|\\{x\\})$ denotes
the set of all bounded Hermitian operators with a spectrum of eigenvalues
$\\{x\\}$, $x\in\mathbb{R}$.
### III.1 Lower bound and trade-off relation for the KD nonreality in a state
relative to a PVM basis
Using Lemma 1, we directly obtain a lower bound for the quantumness associated
with the KD nonreality in a quantum state relative to a PVM basis.
Proposition 1. The KD nonreality in a state $\varrho$ on a finite-dimensional
Hilbert space $\mathcal{H}$ relative to a PVM basis $\\{\Pi_{a}\\}$ of
$\mathcal{H}$ defined in Eq. (9) is lower bounded as
$\displaystyle\mathcal{Q}_{\rm KD}^{\rm
NRe}(\varrho;\\{\Pi_{a}\\})\geq\frac{1}{2}\sup_{A\in\mathbb{H}(\mathcal{H}|\\{\Pi_{a}\\})}\sup_{B\in\mathbb{H}(\mathcal{H})}\big{|}{\rm
Tr}\\{\tilde{B}[\tilde{A},\varrho]_{-}\\}\big{|}.$ (11)
Proof. Taking the supremum over the set $\mathcal{M}_{\rm r1PVM}(\mathcal{H})$
of all the rank-1 PVM bases $\\{\Pi_{b}\\}$ of $\mathcal{H}$ on both sides of
Eq. (4), and noting Eq. (9), we first have
$\displaystyle\mathcal{Q}_{\rm KD}^{\rm NRe}(\varrho;\\{\Pi_{a}\\})$
$\displaystyle=$ $\displaystyle\sup_{\\{\Pi_{b}\\}\in\mathcal{M}_{\rm
r1PVM}(\mathcal{H})}{\rm NRe}(\\{{\rm Pr}_{\rm KD}(a,b|\varrho)\\})$ (12)
$\displaystyle\geq$
$\displaystyle\frac{1}{2}\sup_{B\in\mathbb{H}(\mathcal{H}|\\{b\\})}\big{|}{\rm
Tr}\\{\tilde{B}[\tilde{A},\varrho]_{-}\\}\big{|}.$
Next, notice that the left-hand side of Eq. (12) depends only on the PVM basis
$\\{\Pi_{a}\\}$, i.e., it is independent of the eigenvalue spectrum
$\\{a\\}$ of $A$ and the eigenvalue spectrum $\\{b\\}$ of $B$. Hence, upon
further taking the supremum over all possible eigenvalue spectra of $A$ and
of $B$ on the right-hand side of Eq. (12), the inequality can be
strengthened as in Eq. (11). ∎
The lower bound in Eq. (11) can be evaluated further to obtain the following
result.
Proposition 2. The KD nonreality in a quantum state $\varrho$ on a finite-
dimensional Hilbert space $\mathcal{H}$ relative to a PVM basis
$\\{\Pi_{a}\\}$ of $\mathcal{H}$ is lower bounded by the maximum trace-norm
asymmetry of the state relative to the translation group generated by all
Hermitian operators with the complete set of eigenprojectors that is given by
$\\{\Pi_{a}\\}$ as
$\displaystyle\mathcal{Q}_{\rm KD}^{\rm
NRe}(\varrho;\\{\Pi_{a}\\})\geq\sup_{A\in\mathbb{H}(\mathcal{H}|\\{\Pi_{a}\\})}\frac{\|[A,\varrho]_{-}\|_{1}}{2\|A\|_{\infty}}.$
(13)
Here, $\|O\|_{1}={\rm Tr}\\{\sqrt{OO^{\dagger}}\\}$ is the Schatten 1-norm, or
trace norm, of an operator $O$, and $\|[A,\varrho]_{-}\|_{1}/2$ is just the
trace-norm asymmetry of the state $\varrho$ relative to the group of
translation unitaries generated by the Hermitian operator $A$ Marvian - Spekkens
speakable and unspeakable coherence .
Proof. See Appendix A.
We show in Appendix B that for a two-dimensional Hilbert space
$\mathcal{H}\cong\mathbb{C}^{2}$, i.e., a system of a single qubit, both
inequalities in Eqs. (11) and (13) become equalities for an arbitrary state
$\varrho$ on $\mathbb{C}^{2}$ and an arbitrary PVM basis $\\{\Pi_{a}\\}$ of
$\mathbb{C}^{2}$. In this case, both sides in Eqs. (11) and (13) are given by
the corresponding $l_{1}$-norm coherence of a state $\varrho$ relative to the
incoherent orthonormal basis $\\{\ket{a}\\}$ defined as
$C_{l_{1}}(\varrho;\\{\ket{a}\\}):=\sum_{a\neq
a^{\prime}}|\braket{a}{\varrho}{a^{\prime}}|$ Baumgratz quantum coherence
measure , directly quantifying the total magnitude of the off-diagonal terms
of the density matrix. Hence, for any state $\varrho$ on $\mathbb{C}^{2}$ and
any PVM basis $\mathbb{A}=\\{\ket{e}\bra{e},\ket{e_{\perp}}\bra{e_{\perp}}\\}$
of $\mathbb{C}^{2}$, where $\ket{e_{\perp}}$ is the orthonormal partner of
$\ket{e}$, we have
$\displaystyle\mathcal{Q}_{\rm KD}^{\rm NRe}(\varrho;\mathbb{A})$
$\displaystyle=$
$\displaystyle\frac{1}{2}\sup_{A\in\mathbb{H}(\mathbb{C}^{2}|\mathbb{A})}\sup_{B\in\mathbb{H}(\mathbb{C}^{2})}\big{|}{\rm
Tr}\\{\tilde{B}[\tilde{A},\varrho]_{-}\\}\big{|}$ (14) $\displaystyle=$
$\displaystyle\sup_{A\in\mathbb{H}(\mathbb{C}^{2}|\mathbb{A})}\|[\tilde{A},\varrho]_{-}\|_{1}/2$
$\displaystyle=$ $\displaystyle 2|\braket{e}{\varrho}{e_{\perp}}|$
$\displaystyle=$ $\displaystyle
C_{l_{1}}(\varrho;\\{\ket{e},\ket{e_{\perp}}\\}).$
Moreover, the eigenbasis of $B_{*}$, where
$B_{*}\in\mathbb{H}(\mathbb{C}^{2})$ is a Hermitian operator which attains the
supremum in Eq. (14), is mutually unbiased with the orthonormal reference
basis $\\{\ket{e},\ket{e_{\perp}}\\}$ and also with the eigenbasis of
$\varrho$.
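The qubit equalities in Eq. (14) are easy to confirm numerically. For a qubit, the supremum over spectra in Eq. (13) is attained at the $\pm 1$ spectrum (we take this from the Appendix B analysis and use it as an assumption of the sketch), so the self-contained sketch below checks that the trace-norm asymmetry with $\tilde{A}=\ket{e}\bra{e}-\ket{e_{\perp}}\bra{e_{\perp}}$ equals the $l_{1}$-norm coherence $2|\braket{e}{\varrho}{e_{\perp}}|$:

```python
# Self-contained qubit check of Eq. (14): trace-norm asymmetry at the
# +/-1 spectrum equals the l1-norm coherence 2|<e|rho|e_perp>|.
import numpy as np

rng = np.random.default_rng(1)
G = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = G @ G.conj().T
rho /= np.trace(rho).real                      # random qubit state

Q, _ = np.linalg.qr(rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2)))
e, e_perp = Q[:, 0], Q[:, 1]                   # random PVM basis {|e>,|e_perp>}

A_t = np.outer(e, e.conj()) - np.outer(e_perp, e_perp.conj())
asym = 0.5 * np.linalg.norm(A_t @ rho - rho @ A_t, 'nuc')   # ||[A,rho]_-||_1 / 2
c_l1 = 2 * abs(e.conj() @ rho @ e_perp)
assert np.isclose(asym, c_l1)
```

The assertion should pass for any draw, reflecting the saturation claimed for qubits.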
We finally obtain the following trade-off relation.
Proposition 3. The KD nonreality in a quantum state $\varrho$ on a finite-
dimensional Hilbert space $\mathcal{H}$ relative to a rank-1 PVM basis
$\\{\Pi_{a}\\}$ of $\mathcal{H}$ and that relative to another rank-1 PVM basis
$\\{\Pi_{b}\\}$ of $\mathcal{H}$ satisfy the following trade-off relation:
$\displaystyle\mathcal{Q}_{\rm KD}^{\rm
NRe}(\varrho;\\{\Pi_{a}\\})\mathcal{Q}_{\rm KD}^{\rm
NRe}(\varrho;\\{\Pi_{b}\\})\geq\frac{1}{4}\sup_{A\in\mathbb{H}(\mathcal{H}|\\{\Pi_{a}\\})}\sup_{B\in\mathbb{H}(\mathcal{H}|\\{\Pi_{b}\\})}\big{|}{\rm
Tr}\\{[\tilde{A},\tilde{B}]_{-}\varrho\\}\big{|}^{2}.$ (15)
Proof. We first write the inequality in Eq. (11) as
$\displaystyle\mathcal{Q}_{\rm KD}^{\rm
NRe}(\varrho;\\{\Pi_{a}\\})\geq\frac{1}{2}\sup_{A\in\mathbb{H}(\mathcal{H}|\\{\Pi_{a}\\})}\sup_{B\in\mathbb{H}(\mathcal{H})}\big{|}{\rm
Tr}\\{[\tilde{A},\tilde{B}]_{-}\varrho\\}\big{|}.$ (16)
Next, exchanging the role of $A$ and $B$ in Eq. (16), we also have
$\displaystyle\mathcal{Q}_{\rm KD}^{\rm
NRe}(\varrho;\\{\Pi_{b}\\})\geq\frac{1}{2}\sup_{B\in\mathbb{H}(\mathcal{H}|\\{\Pi_{b}\\})}\sup_{A\in\mathbb{H}(\mathcal{H})}\big{|}{\rm
Tr}\\{[\tilde{A},\tilde{B}]_{-}\varrho\\}\big{|},$ (17)
where the suprema are now taken over the set
$\mathbb{H}(\mathcal{H}|\\{\Pi_{b}\\})$ of all bounded Hermitian operators $B$
on $\mathcal{H}$ whose complete set of eigenprojectors is given by the PVM
basis $\\{\Pi_{b}\\}$, and over the set $\mathbb{H}(\mathcal{H})$ of all
bounded Hermitian operators $A$ on $\mathcal{H}$. Combining Eqs. (16) and (17),
we thus finally obtain
$\displaystyle\mathcal{Q}_{\rm KD}^{\rm
NRe}(\varrho;\\{\Pi_{a}\\})\mathcal{Q}_{\rm KD}^{\rm
NRe}(\varrho;\\{\Pi_{b}\\})$ (18) $\displaystyle\geq$
$\displaystyle\frac{1}{4}\sup_{A\in\mathbb{H}(\mathcal{H}|\\{\Pi_{a}\\})}\sup_{B\in\mathbb{H}(\mathcal{H})}\big{|}{\rm
Tr}\\{[\tilde{A},\tilde{B}]_{-}\varrho\\}\big{|}$ $\displaystyle\times$
$\displaystyle\sup_{B\in\mathbb{H}(\mathcal{H}|\\{\Pi_{b}\\})}\sup_{A\in\mathbb{H}(\mathcal{H})}\big{|}{\rm
Tr}\\{[\tilde{A},\tilde{B}]_{-}\varrho\\}\big{|}$ $\displaystyle\geq$
$\displaystyle\frac{1}{4}\sup_{A\in\mathbb{H}(\mathcal{H}|\\{\Pi_{a}\\})}\sup_{B\in\mathbb{H}(\mathcal{H}|\\{\Pi_{b}\\})}\big{|}{\rm
Tr}\\{[\tilde{A},\tilde{B}]_{-}\varrho\\}\big{|}^{2},$
where to get the inequality in Eq. (18), we have made use of the fact that
$\sup_{X\in\mathbb{H}(\mathcal{H})}\\{\cdot\\}\geq\sup_{X\in\mathbb{H}(\mathcal{H}|\\{\Pi_{x}\\})}\\{\cdot\\}$.
∎
One can see that the lower bound in the trade-off relation of Eq. (15) takes a
form similar to that of the Robertson uncertainty relation. Unlike the latter,
however, it involves optimizations over the convex sets
$\mathbb{H}(\mathcal{H}|\\{\Pi_{a}\\})$ and
$\mathbb{H}(\mathcal{H}|\\{\Pi_{b}\\})$ of all Hermitian operators on
$\mathcal{H}$ whose complete sets of eigenprojectors are given respectively by
the PVM bases $\\{\Pi_{a}\\}$ and $\\{\Pi_{b}\\}$ relative to which we define
the KD nonreality in the state $\varrho$: $\mathcal{Q}_{\rm KD}^{\rm
NRe}(\varrho;\\{\Pi_{a}\\})$ and $\mathcal{Q}_{\rm KD}^{\rm
NRe}(\varrho;\\{\Pi_{b}\\})$. The trade-off relation shows that if there is a
pair of Hermitian operators $A\in\mathbb{H}(\mathcal{H}|\\{\Pi_{a}\\})$ and
$B\in\mathbb{H}(\mathcal{H}|\\{\Pi_{b}\\})$ such that ${\rm
Tr}\\{[\tilde{A},\tilde{B}]_{-}\varrho\\}\neq 0$, then the lower bound in Eq.
(15) is not vanishing. In this case, neither the KD nonreality $\mathcal{Q}_{\rm
KD}^{\rm NRe}(\varrho;\\{\Pi_{a}\\})$ in $\varrho$ relative to the PVM basis
$\\{\Pi_{a}\\}$ nor the KD nonreality $\mathcal{Q}_{\rm KD}^{\rm
NRe}(\varrho;\\{\Pi_{b}\\})$ in $\varrho$ relative to the PVM basis
$\\{\Pi_{b}\\}$ can vanish, and their product must satisfy the trade-off
relation of Eq. (15).
Let us proceed to show that the lower bounds in Eqs. (11) and (13) and the
trade-off relation of Eq. (15) for the KD nonreality in a state relative to a
rank-1 orthogonal PVM basis imply lower bounds and a trade-off relation for the
$l_{1}$-norm coherence of the state relative to the orthonormal basis corresponding
to the PVM basis. First, note that, as shown in Ref. Agung KD-nonreality
coherence , the KD nonreality $\mathcal{Q}_{\rm KD}^{\rm
NRe}(\varrho;\\{\Pi_{a}\\})$ in the state $\varrho$ relative to the rank-1 PVM
basis $\\{\Pi_{a}\\}$ gives a lower bound to the $l_{1}$-norm coherence
$C_{l_{1}}(\varrho;\\{\ket{a}\\})$ of $\varrho$ relative to the orthonormal
basis $\\{\ket{a}\\}$ corresponding to $\\{\Pi_{a}\\}$ as
$\displaystyle C_{l_{1}}(\varrho;\\{\ket{a}\\})\geq\mathcal{Q}_{\rm KD}^{\rm
NRe}(\varrho;\\{\Pi_{a}\\}).$ (19)
Moreover, for an arbitrary state of a single qubit and an arbitrary orthonormal
basis $\\{\ket{a}\\}$ of $\mathbb{C}^{2}$, the inequality becomes an equality
Agung KD-nonreality coherence .
Using Eqs. (11) and (19), we thus obtain the following result.
Corollary 1. The $l_{1}$-norm coherence of a quantum state $\varrho$ on a
finite-dimensional Hilbert space $\mathcal{H}$ relative to an incoherent
orthonormal basis $\\{\ket{a}\\}$ of $\mathcal{H}$ is lower bounded as:
$\displaystyle C_{l_{1}}(\varrho;\\{\ket{a}\\})$ $\displaystyle\geq$
$\displaystyle\frac{1}{2}\sup_{A\in\mathbb{H}(\mathcal{H}|\\{\Pi_{a}\\})}\sup_{B\in\mathbb{H}(\mathcal{H})}\big{|}{\rm
Tr}\\{\tilde{B}[\tilde{A},\varrho]_{-}\\}\big{|}.$ (20)
As shown in Appendix B, for a two-dimensional Hilbert space $\mathbb{C}^{2}$,
the inequality in Eq. (20) becomes an equality for an arbitrary single-qubit state
and an arbitrary orthonormal basis, as expressed in Eq. (14).
Next, from Eqs. (13) and (19), we have the following result.
Corollary 2. The KD nonreality in a quantum state $\varrho$ on a finite-
dimensional Hilbert space $\mathcal{H}$ relative to a rank-1 orthogonal PVM
basis $\\{\Pi_{a}\\}$ of $\mathcal{H}$, the corresponding $l_{1}$-norm
coherence of $\varrho$ relative to the orthonormal basis $\\{\ket{a}\\}$, and
the trace-norm asymmetry of $\varrho$ relative to the translation group
generated by any Hermitian operator $A$ with a complete set of eigenprojectors
$\\{\Pi_{a}\\}$, obey the following ordering:
$\displaystyle C_{l_{1}}(\varrho;\\{\ket{a}\\})$ $\displaystyle\geq$
$\displaystyle\mathcal{Q}_{\rm KD}^{\rm
NRe}(\varrho;\\{\Pi_{a}\\})\geq\sup_{A\in\mathbb{H}(\mathcal{H}|\\{\Pi_{a}\\})}\|[\tilde{A},\varrho]_{-}\|_{1}/2.$
(21)
For a two-dimensional Hilbert space $\mathbb{C}^{2}$, as shown in Appendix B,
both inequalities in Eq. (21) become equalities for an arbitrary state $\varrho$
and an arbitrary incoherent orthonormal basis $\\{\ket{a}\\}$.
Recall that whilst $C_{l_{1}}(\varrho;\\{\ket{a}\\})$ is a measure of the quantum
coherence of $\varrho$ which is independent of its encoding in the reference
incoherent orthonormal basis $\\{\ket{a}\\}$, the trace-norm asymmetry
$\|[\tilde{A},\varrho]_{-}\|_{1}/2$ can also be seen as a measure of coherence (as
translational asymmetry) of $\varrho$ which depends on its encoding in the
reference incoherent eigenbasis $\\{\ket{a}\\}$ of the generator
$\tilde{A}=A/\|A\|_{\infty}$ of the translation group
$U_{\theta}=e^{-i\tilde{A}\theta}$, $\theta\in\mathbb{R}$. The former is
sometimes called speakable coherence, while the latter is called unspeakable
coherence Marvian - Spekkens speakable and unspeakable coherence .
Finally, combining Eqs. (15) with (19), we obtain the following trade-off
relation.
Corollary 3. The $l_{1}$-norm coherence of a quantum state $\varrho$ on a
finite-dimensional Hilbert space $\mathcal{H}$ relative to an incoherent
orthonormal basis $\\{\ket{a}\\}$ of $\mathcal{H}$ and that relative to an
incoherent orthonormal basis $\\{\ket{b}\\}$ of $\mathcal{H}$ satisfy the
following trade-off relation:
$\displaystyle
C_{l_{1}}(\varrho;\\{\ket{a}\\})C_{l_{1}}(\varrho;\\{\ket{b}\\})\geq\frac{1}{4}\sup_{A\in\mathbb{H}(\mathcal{H}|\\{\Pi_{a}\\})}\sup_{B\in\mathbb{H}(\mathcal{H}|\\{\Pi_{b}\\})}\big{|}{\rm
Tr}\\{[\tilde{A},\tilde{B}]_{-}\varrho\\}\big{|}^{2}.$ (22)
Next, from Eq. (11) of Proposition 1, we obtain the following additive trade-
off relation for the KD nonreality in a state $\varrho$ relative to the PVM
basis $\\{\Pi_{a}\\}$ and that relative to the PVM basis $\\{\Pi_{b}\\}$:
$\displaystyle\mathcal{Q}_{\rm KD}^{\rm
NRe}(\varrho;\\{\Pi_{a}\\})+\mathcal{Q}_{\rm KD}^{\rm
NRe}(\varrho;\\{\Pi_{b}\\})\geq\sup_{A\in\mathbb{H}(\mathcal{H}|\\{\Pi_{a}\\})}\sup_{B\in\mathbb{H}(\mathcal{H}|\\{\Pi_{b}\\})}\big{|}{\rm
Tr}\\{[\tilde{A},\tilde{B}]_{-}\varrho\\}\big{|}.$ (23)
The proof follows steps exactly parallel to those in the proof of Eq. (15) of
Proposition 3. It can also be obtained by applying the arithmetic-geometric mean
inequality $(a+b)/2\geq\sqrt{ab}$, $a,b\in\mathbb{R}^{+}$, to Eq. (15). Since
$\mathcal{Q}_{\rm KD}^{\rm NRe}(\varrho;\\{\Pi_{a}\\})$ is a faithful measure of
coherence, Eq. (23) has the form of the additive uncertainty relations for
coherence measures reported in Refs. Korzekwa quantum-classical decomposition ;
Singh uncertainty relation for coherence ; Yuan uncertainty relation for
coherence ; Hall quantum-classical decomposition . One can then check that the
left-hand side is nonvanishing when the state is not maximally mixed, i.e.,
$\varrho\neq\mathbb{I}/d$, and the PVM bases are noncommuting, as stated in
Theorem 1 of Ref. Yuan uncertainty relation for coherence . In particular,
combining Eq. (23) with Eq. (19), we have
$\displaystyle
C_{l_{1}}(\varrho;\\{\ket{a}\\})+C_{l_{1}}(\varrho;\\{\ket{b}\\})\geq\sup_{A\in\mathbb{H}(\mathcal{H}|\\{\Pi_{a}\\})}\sup_{B\in\mathbb{H}(\mathcal{H}|\\{\Pi_{b}\\})}\big{|}{\rm
Tr}\\{[\tilde{A},\tilde{B}]_{-}\varrho\\}\big{|}.$ (24)
We note that, unlike the standard entropic uncertainty relations Coles entropic
uncertainty relation review ; Wehner entropic UR review , the lower bound in
Eqs. (23) and (24) depends on the state, as do the uncertainty relations for
coherence measures in Refs. Korzekwa quantum-classical decomposition ; Singh
uncertainty relation for coherence ; Yuan uncertainty relation for coherence ;
Hall quantum-classical decomposition . In particular, it vanishes when the
state is maximally mixed, $\varrho=\mathbb{I}/d$, in which case the left-hand
sides in Eqs. (23) and (24) also vanish. Hence, the uncertainty relation
depends on the purity of the state, as expected Korzekwa quantum-classical
decomposition . It would be interesting to compare, in future work, the type of
lower bound in Eqs. (23) and (24) with those reported in Refs. Korzekwa
quantum-classical decomposition ; Singh uncertainty relation for coherence ;
Yuan uncertainty relation for coherence ; Hall quantum-classical decomposition
. In Appendix C, we evaluate the optimization in the lower bound analytically
for a two-dimensional system, showing that it is determined by the purity of the
state and by three parameters that characterize the pairwise noncommutativity
among the two PVM bases and the eigenbasis of the state. We furthermore show
that for a pure state in a two-dimensional Hilbert space, the inequality becomes
an equality when the bases $\\{\ket{a}\\}$, $\\{\ket{b}\\}$, and the eigenbasis
of $\varrho$ comprise a set of three mutually unbiased bases of $\mathbb{C}^{2}$.
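The mutually unbiased equality case just described can be illustrated directly. In the self-contained qubit sketch below we pick the specific admissible choice $\tilde{A}=Z$, $\tilde{B}=X$ inside the supremum of Eq. (24); any such choice only lower-bounds the supremum, so finding the left-hand side equal to it exhibits the equality:

```python
# Qubit illustration of Eq. (24) at the equality point: {|a>} the Z
# eigenbasis, {|b>} the X eigenbasis, rho the +1 eigenstate of Y, so the
# three bases are mutually unbiased.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
rho = 0.5 * (np.eye(2) + Y)                    # pure state along Y

def c_l1(rho, basis):
    # l1-norm coherence: sum of moduli of off-diagonal entries in `basis`.
    m = basis.conj().T @ rho @ basis
    return np.abs(m).sum() - np.abs(np.diag(m)).sum()

z_basis = np.eye(2, dtype=complex)
x_basis = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

lhs = c_l1(rho, z_basis) + c_l1(rho, x_basis)
rhs = abs(np.trace((Z @ X - X @ Z) @ rho))     # |Tr{[Z,X]_- rho}| with A=Z, B=X
assert np.isclose(lhs, rhs)                    # both sides equal 2
```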
### III.2 Lower bound and uncertainty relation for the KD nonclassicality in
a state relative to a PVM basis
First, using Lemma 2, we obtain the following proposition.
Proposition 4. The KD nonclassicality in a state $\varrho$ on a finite-
dimensional Hilbert space $\mathcal{H}$ relative to a PVM basis
$\\{\Pi_{a}\\}$ of $\mathcal{H}$ defined in Eq. (10) is lower bounded as
$\displaystyle\mathcal{Q}_{\rm KD}^{\rm NCl}(\varrho;\\{\Pi_{a}\\})$
$\displaystyle\geq$
$\displaystyle\frac{1}{2}\sup_{A\in\mathbb{H}(\mathcal{H}|\\{\Pi_{a}\\})}\sup_{B\in\mathbb{H}(\mathcal{H})}\big{\\{}\big{(}\big{|}{\rm
Tr}\\{\varrho[\tilde{A}_{\varrho},\tilde{B}_{\varrho}]_{-}\\}\big{|}^{2}$ (25)
$\displaystyle+$ $\displaystyle\big{|}{\rm
Tr}\\{\varrho[\tilde{A}_{\varrho},\tilde{B}_{\varrho}]_{+}\\}-2{\rm
Tr}\\{\tilde{A}_{\varrho}\varrho\\}{\rm
Tr}\\{\tilde{B}_{\varrho}\varrho\\}\big{|}^{2}\big{)}^{1/2}\big{\\}}-1.$
Proof. Taking the supremum over the set $\mathcal{M}_{\rm r1PVM}(\mathcal{H})$
of all the rank-1 PVM bases $\\{\Pi_{b}\\}$ of $\mathcal{H}$ on both sides of
Eq. (6), and noting Eq. (10), we obtain
$\displaystyle\mathcal{Q}_{\rm KD}^{\rm NCl}(\varrho;\\{\Pi_{a}\\})$
$\displaystyle=$ $\displaystyle\sup_{\\{\Pi_{b}\\}\in\mathcal{M}_{\rm
r1PVM}(\mathcal{H})}{\rm NCl}(\\{{\rm Pr}_{\rm KD}(a,b|\varrho)\\})$ (26)
$\displaystyle\geq$
$\displaystyle\frac{1}{2}\sup_{B\in\mathbb{H}(\mathcal{H}|\\{b\\})}\big{\\{}\big{(}\big{|}{\rm
Tr}\\{\varrho[\tilde{A}_{\varrho},\tilde{B}_{\varrho}]_{-}\\}\big{|}^{2}$
$\displaystyle+$ $\displaystyle\big{|}{\rm
Tr}\\{\varrho[\tilde{A}_{\varrho},\tilde{B}_{\varrho}]_{+}\\}-2{\rm
Tr}\\{\tilde{A}_{\varrho}\varrho\\}{\rm
Tr}\\{\tilde{B}_{\varrho}\varrho\\}\big{|}^{2}\big{)}^{1/2}\big{\\}}-1.$
Observe further that the left-hand side depends only on the PVM basis
$\\{\Pi_{a}\\}$, i.e., it is independent of the eigenvalue spectrum $\\{a\\}$
of $A$ and the eigenvalue spectrum $\\{b\\}$ of $B$. Noting this, the
inequality of Eq. (26) can be further strengthened to obtain Eq. (25). ∎
We then obtain the following trade-off relation.
Proposition 5. The KD nonclassicality in a quantum state $\varrho$ on a
finite-dimensional Hilbert space $\mathcal{H}$ relative to a PVM basis
$\\{\Pi_{a}\\}$ of $\mathcal{H}$ and that relative to a PVM basis
$\\{\Pi_{b}\\}$ of $\mathcal{H}$ satisfy the following trade-off relation:
$\displaystyle(\mathcal{Q}_{\rm KD}^{\rm
NCl}(\varrho;\\{\Pi_{a}\\})+1)(\mathcal{Q}_{\rm KD}^{\rm
NCl}(\varrho;\\{\Pi_{b}\\})+1)$ (27) $\displaystyle\geq$
$\displaystyle\frac{1}{4}\sup_{A\in\mathbb{H}(\mathcal{H}|\\{\Pi_{a}\\})}\sup_{B\in\mathbb{H}(\mathcal{H}|\\{\Pi_{b}\\})}\big{\\{}\big{|}{\rm
Tr}\\{\varrho[\tilde{A}_{\varrho},\tilde{B}_{\varrho}]_{-}\\}\big{|}^{2}$
$\displaystyle+$ $\displaystyle\big{|}{\rm
Tr}\\{\varrho[\tilde{A}_{\varrho},\tilde{B}_{\varrho}]_{+}\\}-2{\rm
Tr}\\{\tilde{A}_{\varrho}\varrho\\}{\rm
Tr}\\{\tilde{B}_{\varrho}\varrho\\}\big{|}^{2}\big{\\}}.$
Proof. First, swapping the role of $A$ and $B$ in Eq. (25), we have
$\displaystyle\mathcal{Q}_{\rm KD}^{\rm NCl}(\varrho;\\{\Pi_{b}\\})+1$ (28)
$\displaystyle\geq$
$\displaystyle\frac{1}{2}\sup_{B\in\mathbb{H}(\mathcal{H}|\\{\Pi_{b}\\})}\sup_{A\in\mathbb{H}(\mathcal{H})}\big{\\{}\big{(}|{\rm
Tr}\\{\varrho[\tilde{A}_{\varrho},\tilde{B}_{\varrho}]_{-}\\}|^{2}$
$\displaystyle+$ $\displaystyle\big{|}{\rm
Tr}\\{\varrho[\tilde{A}_{\varrho},\tilde{B}_{\varrho}]_{+}\\}-2{\rm
Tr}\\{\tilde{A}_{\varrho}\varrho\\}{\rm
Tr}\\{\tilde{B}_{\varrho}\varrho\\}\big{|}^{2}\big{)}^{1/2}\big{\\}}.$
Hence, combining Eqs. (25) and (28), we obtain
$\displaystyle(\mathcal{Q}_{\rm KD}^{\rm
NCl}(\varrho;\\{\Pi_{a}\\})+1)(\mathcal{Q}_{\rm KD}^{\rm
NCl}(\varrho;\\{\Pi_{b}\\})+1)$ (29) $\displaystyle\geq$
$\displaystyle\frac{1}{4}\Big{(}\sup_{A\in\mathbb{H}(\mathcal{H}|\\{\Pi_{a}\\})}\sup_{B\in\mathbb{H}(\mathcal{H})}\big{\\{}\big{(}|{\rm
Tr}\\{\varrho[\tilde{A}_{\varrho},\tilde{B}_{\varrho}]_{-}\\}|^{2}$
$\displaystyle+$ $\displaystyle\big{|}{\rm
Tr}\\{\varrho[\tilde{A}_{\varrho},\tilde{B}_{\varrho}]_{+}\\}-2{\rm
Tr}\\{\tilde{A}_{\varrho}\varrho\\}{\rm
Tr}\\{\tilde{B}_{{\varrho}}\varrho\\}\big{|}^{2}\big{)}^{1/2}\big{\\}}\Big{)}$
$\displaystyle\times$
$\displaystyle\Big{(}\sup_{B\in\mathbb{H}(\mathcal{H}|\\{\Pi_{b}\\})}\sup_{A\in\mathbb{H}(\mathcal{H})}\big{\\{}\big{(}|{\rm
Tr}\\{\varrho[\tilde{A}_{\varrho},\tilde{B}_{\varrho}]_{-}\\}|^{2}$
$\displaystyle+$ $\displaystyle\big{|}{\rm
Tr}\\{\varrho[\tilde{A}_{\varrho},\tilde{B}_{\varrho}]_{+}\\}-2{\rm
Tr}\\{\tilde{A}_{\varrho}\varrho\\}{\rm
Tr}\\{\tilde{B}_{\varrho}\varrho\\}\big{|}^{2}\big{)}^{1/2}\big{\\}}\Big{)}$
$\displaystyle\geq$
$\displaystyle\frac{1}{4}\sup_{A\in\mathbb{H}(\mathcal{H}|\\{\Pi_{a}\\})}\sup_{B\in\mathbb{H}(\mathcal{H}|\\{\Pi_{b}\\})}\big{\\{}|{\rm
Tr}\\{\varrho[\tilde{A}_{\varrho},\tilde{B}_{\varrho}]_{-}\\}|^{2}$
$\displaystyle+$ $\displaystyle\big{|}{\rm
Tr}\\{\varrho[\tilde{A}_{\varrho},\tilde{B}_{\varrho}]_{+}\\}-2{\rm
Tr}\\{\tilde{A}_{\varrho}\varrho\\}{\rm
Tr}\\{\tilde{B}_{\varrho}\varrho\\}\big{|}^{2}\big{\\}},$
where the last inequality in Eq. (29) is due to the fact that
$\sup_{X\in\mathbb{H}(\mathcal{H})}\\{\cdot\\}\geq\sup_{X\in\mathbb{H}(\mathcal{H}|\\{\Pi_{x}\\})}\\{\cdot\\}$.
∎
One can see that Eq. (27) takes a form analogous to the Robertson-Schrödinger
uncertainty relation for observables $\tilde{A}_{\varrho}$ and
$\tilde{B}_{\varrho}$. Unlike the Robertson-Schrödinger uncertainty relation,
however, the lower bound in Eq. (27) is nonlinear in the state $\varrho$.
Moreover, there are optimizations over a pair of convex sets
$\mathbb{H}(\mathcal{H}|\\{\Pi_{a}\\})$ and
$\mathbb{H}(\mathcal{H}|\\{\Pi_{b}\\})$ of Hermitian operators whose complete
sets of eigenprojectors are respectively $\\{\Pi_{a}\\}$ and $\\{\Pi_{b}\\}$
relative to which we define the KD nonclassicality in $\varrho$:
$\mathcal{Q}_{\rm KD}^{\rm NCl}(\varrho;\\{\Pi_{a}\\})$ and $\mathcal{Q}_{\rm
KD}^{\rm NCl}(\varrho;\\{\Pi_{b}\\})$. Recall that the KD nonclassicality in a
state relative to a PVM basis quantifies the total quantumness, i.e., it
quantifies simultaneously the nonreality and negativity of the corresponding
KD quasiprobability, capturing the noncommutativity between the state and the
PVM basis. In this sense, the trade-off relation of Eq. (27) imposes a
restriction on the joint nonclassicality in a quantum state relative to a pair
of noncommuting PVM bases.
Let us proceed to show that the lower bound and the trade-off relation for the
KD nonclassicality relative to a PVM basis obtained above lead also to a lower
bound and a trade-off relation for the corresponding $l_{1}$-norm coherence.
First, as shown in Appendix D, the KD nonclassicality in a state $\varrho$ on
a finite-dimensional Hilbert space $\mathcal{H}$ relative to a rank-1
orthogonal PVM basis $\\{\Pi_{a}\\}$ of $\mathcal{H}$ gives a lower bound to
the $l_{1}$-norm coherence of the state $\varrho$ relative to the incoherent
orthonormal basis $\\{\ket{a}\\}$ corresponding to $\\{\Pi_{a}\\}$, i.e.,
$\displaystyle C_{l_{1}}(\varrho;\\{\ket{a}\\})\geq\mathcal{Q}_{\rm KD}^{\rm
NCl}(\varrho;\\{\Pi_{a}\\}).$ (30)
Combining Eq. (30) with Eq. (25), we thus obtain the following corollary.
Corollary 4. The $l_{1}$-norm coherence of a quantum state $\varrho$ on a
finite-dimensional Hilbert space $\mathcal{H}$ relative to an incoherent
orthonormal basis $\\{\ket{a}\\}$ of $\mathcal{H}$ is lower bounded as
$\displaystyle C_{l_{1}}(\varrho;\\{\ket{a}\\})$ $\displaystyle\geq$
$\displaystyle\frac{1}{2}\sup_{A\in\mathbb{H}(\mathcal{H}|\\{\Pi_{a}\\})}\sup_{B\in\mathbb{H}(\mathcal{H})}\big{\\{}\big{(}|{\rm
Tr}\\{\varrho[\tilde{A}_{\varrho},\tilde{B}_{\varrho}]_{-}\\}|^{2}$ (31)
$\displaystyle+$ $\displaystyle\big{|}{\rm
Tr}\\{\varrho[\tilde{A}_{\varrho},\tilde{B}_{\varrho}]_{+}\\}-2{\rm
Tr}\\{\tilde{A}_{\varrho}\varrho\\}{\rm
Tr}\\{\tilde{B}_{\varrho}\varrho\\}\big{|}^{2}\big{)}^{1/2}\big{\\}}-1.$
Next, combining Eq. (30) with Eq. (27), we obtain the following trade-off
relation.
Corollary 5. The $l_{1}$-norm coherence of a quantum state $\varrho$ on a
finite-dimensional Hilbert space $\mathcal{H}$ relative to an orthonormal
basis $\\{\ket{a}\\}$ of $\mathcal{H}$ and that relative to an orthonormal
basis $\\{\ket{b}\\}$ of $\mathcal{H}$ satisfy the following trade-off
relation:
$\displaystyle\big{(}C_{l_{1}}(\varrho;\\{\ket{a}\\})+1\big{)}\big{(}C_{l_{1}}(\varrho;\\{\ket{b}\\})+1\big{)}$
(32) $\displaystyle\geq$
$\displaystyle\frac{1}{4}\sup_{A\in\mathbb{H}(\mathcal{H}|\\{\Pi_{a}\\})}\sup_{B\in\mathbb{H}(\mathcal{H}|\\{\Pi_{b}\\})}\big{\\{}\big{(}|{\rm
Tr}\\{\varrho[\tilde{A}_{\varrho},\tilde{B}_{\varrho}]_{-}\\}|^{2}$
$\displaystyle+$ $\displaystyle\big{|}{\rm
Tr}\\{\varrho[\tilde{A}_{\varrho},\tilde{B}_{\varrho}]_{+}\\}-2{\rm
Tr}\\{\tilde{A}_{\varrho}\varrho\\}{\rm
Tr}\\{\tilde{B}_{\varrho}\varrho\\}\big{|}^{2}\big{)}\big{\\}}.$
Following steps exactly parallel to those above, we can also prove the following
additive trade-off relation for the $l_{1}$-norm coherence of a state
$\varrho$ relative to an orthonormal basis $\\{\ket{a}\\}$ and that relative
to an orthonormal basis $\\{\ket{b}\\}$:
$\displaystyle
C_{l_{1}}(\varrho;\\{\ket{a}\\})+C_{l_{1}}(\varrho;\\{\ket{b}\\})$ (33)
$\displaystyle\geq$
$\displaystyle\sup_{A\in\mathbb{H}(\mathcal{H}|\\{\Pi_{a}\\})}\sup_{B\in\mathbb{H}(\mathcal{H}|\\{\Pi_{b}\\})}\big{\\{}\big{(}|{\rm
Tr}\\{\varrho[\tilde{A}_{\varrho},\tilde{B}_{\varrho}]_{-}\\}|^{2}$
$\displaystyle+$ $\displaystyle\big{|}{\rm
Tr}\\{\varrho[\tilde{A}_{\varrho},\tilde{B}_{\varrho}]_{+}\\}-2{\rm
Tr}\\{\tilde{A}_{\varrho}\varrho\\}{\rm
Tr}\\{\tilde{B}_{\varrho}\varrho\\}\big{|}^{2}\big{)}^{1/2}\big{\\}}-2.$
## IV Operational and statistical meaning
In this section, we discuss operational and information-theoretical
interpretations of the KD nonreality and KD nonclassicality in a state
relative to a PVM basis in terms of transparent laboratory operations. One is
based on the representation of the KD quasiprobability in terms of weak values,
which can be obtained experimentally using various methods; the other is
based on the decomposition of the KD quasiprobability into a real and
nonnegative joint probability plus quantum modification terms, obtained via two
successive projective measurements.
First, one observes that the KD nonreality and the KD nonclassicality in a
state $\varrho$ relative to a PVM basis $\\{\Pi_{a}\\}$ defined respectively
in Eqs. (9) and (10) can be expressed as
$\displaystyle\mathcal{Q}_{\rm KD}^{\rm NRe}(\varrho;\\{\Pi_{a}\\})$
$\displaystyle=$ $\displaystyle\sup_{\\{\Pi_{b}\\}\in\mathcal{M}_{\rm
r1PVM}(\mathcal{H})}\sum_{a,b}\big{|}{\rm Im}\pi_{a}^{\rm
w}(\Pi_{b}|\varrho)\big{|}{\rm Tr}\\{\Pi_{b}\varrho\\},$ (34)
$\displaystyle\mathcal{Q}_{\rm KD}^{\rm NCl}(\varrho;\\{\Pi_{a}\\})$
$\displaystyle=$ $\displaystyle\sup_{\\{\Pi_{b}\\}\in\mathcal{M}_{\rm
r1PVM}(\mathcal{H})}\sum_{a,b}\big{|}\pi_{a}^{\rm
w}(\Pi_{b}|\varrho)\big{|}{\rm Tr}\\{\Pi_{b}\varrho\\}-1.$ (35)
Here, $\pi_{a}^{\rm w}(\Pi_{b}|\varrho):=\frac{{\rm
Tr}\\{\Pi_{b}\Pi_{a}\varrho\\}}{{\rm Tr}\\{\Pi_{b}\varrho\\}}$ is known as the
weak value of $\Pi_{a}$ with the preselected state $\varrho$ and postselected
state $\ket{b}$ Aharonov weak value ; Aharonov-Daniel book ; Wiseman weak
value . It is in general complex and its real part may lie outside $[0,1]$.
Remarkably, the real and imaginary parts of the weak value can be estimated in
experiment without recourse to state tomography either using weak measurement
with postselection Aharonov weak value ; Aharonov-Daniel book ; Wiseman weak
value ; Lundeen complex weak value ; Jozsa complex weak value ; Lundeen
measurement of KD distribution ; Salvail direct measurement KD distribution ;
Bamber measurement of KD distribution ; Thekkadath measurement of density
matrix or different methods without weak measurement Johansen quantum state
from successive projective measurement ; Vallone strong measurement to
reconstruct quantum wave function ; Cohen estimating of weak value with strong
measurements ; Lostaglio KD quasiprobability and quantum fluctuation ; Wagner
measuring weak values and KD quasiprobability ; Hernandez-Gomez experimental
observation of TBMH negativity . Noting this, the KD nonreality and KD
nonclassicality in a state relative to a PVM basis of Eqs. (34) and (35) can
thus be directly operationally estimated using weak value measurement together
with the classical optimization over the set $\mathcal{M}_{\rm
r1PVM}(\mathcal{H})$ of all the rank-1 orthogonal PVM bases of the Hilbert
space $\mathcal{H}$. This estimation scheme should in principle be
implementable in terms of variational quantum circuits using the currently
available NISQ hardware Cerezo VQA review .
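Continuing the first sketch, the weak-value forms of Eqs. (34) and (35) amount to reweighting the KD table by the postselection probabilities; for a fixed basis pair they reproduce Eqs. (2) and (3) exactly:

```python
# Weak-value form of the summands in Eqs. (34)-(35), continuing the first
# sketch: pi_a^w(Pi_b|rho) = Tr{Pi_b Pi_a rho} / Tr{Pi_b rho}.
p_b = np.array([(B[:, b].conj() @ rho @ B[:, b]).real for b in range(d)])
wv = pr_kd / p_b[None, :]                     # table of weak values
assert np.isclose((np.abs(wv.imag) * p_b).sum(), np.abs(pr_kd.imag).sum())
assert np.isclose((np.abs(wv) * p_b).sum() - 1.0, np.abs(pr_kd).sum() - 1.0)
```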
The above operational interpretation suggests the following information
theoretical meaning of the KD nonreality $\mathcal{Q}_{\rm KD}^{\rm
NRe}(\varrho;\\{\Pi_{a}\\})$ in a state $\varrho$ relative to a PVM basis
$\\{\Pi_{a}\\}$ defined in Eq. (9) and the associated trade-off relation
expressed in Eq. (15). First, applying the Jensen inequality to Eq. (34), we
have
$\displaystyle\mathcal{Q}_{\rm KD}^{\rm NRe}(\varrho;\\{\Pi_{a}\\})^{2}$
$\displaystyle=$ $\displaystyle\big{(}\sup_{\\{\Pi_{b}\\}\in\mathcal{M}_{\rm
r1PVM}(\mathcal{H})}\sum_{a,b}\big{|}{\rm Im}\pi_{a}^{\rm
w}(\Pi_{b}|\varrho)\big{|}{\rm Tr}\\{\Pi_{b}\varrho\\}\big{)}^{2}$ (36)
$\displaystyle\leq$ $\displaystyle\sup_{\\{\Pi_{b}\\}\in\mathcal{M}_{\rm
r1PVM}(\mathcal{H})}\sum_{a,b}\big{|}{\rm Im}\pi_{a}^{\rm
w}(\Pi_{b}|\varrho)\big{|}^{2}{\rm Tr}\\{\Pi_{b}\varrho\\}$ $\displaystyle:=$
$\displaystyle\epsilon^{2}_{\\{\Pi_{a}\\}}(\varrho),$
where $\epsilon^{2}_{\\{\Pi_{a}\\}}(\varrho)$ is the total sum of the variances
of the imaginary part of the weak value $\pi_{a}^{\rm w}(\Pi_{b}|\varrho)$
over the probability ${\rm Pr}(b|\varrho)={\rm Tr}\\{\Pi_{b}\varrho\\}$,
maximized over the set $\mathcal{M}_{\rm r1PVM}(\mathcal{H})$ of all the PVM
bases $\\{\Pi_{b}\\}$ of $\mathcal{H}$; here the mean square equals the
variance because $\sum_{b}{\rm Im}\,\pi_{a}^{\rm w}(\Pi_{b}|\varrho)\,{\rm
Pr}(b|\varrho)={\rm Im}\,{\rm Tr}\\{\Pi_{a}\varrho\\}=0$. On the other hand, it was argued by
Johansen and Hall in Refs. Johansen weak value best estimation ; Hall prior
information , that the variance of the imaginary part of the weak value
$\pi^{\rm w}_{a}(\Pi_{b}|\varrho)$ over the probability ${\rm
Pr}(b|\varrho)={\rm Tr}\\{\Pi_{b}\varrho\\}$ can be interpreted as the mean-
squared error of the optimal estimation of $\Pi_{a}$ based on the outcomes of
measurement described by the PVM basis $\\{\Pi_{b}\\}$ when the preparation is
represented by the state $\varrho$. Noting this,
$\epsilon^{2}_{\\{\Pi_{a}\\}}(\varrho)$ defined in Eq. (36) may thus be
statistically interpreted as the total mean-squared error of the optimal
estimation of the PVM basis $\\{\Pi_{a}\\}$ based on projective measurement,
given the preparation $\varrho$, in the worst case scenario. Equation (36)
thus shows that the total root-mean-squared error of the optimal estimation of
the PVM basis $\\{\Pi_{a}\\}$ given $\varrho$ in the worst case scenario is
lower bounded by the corresponding KD nonreality in $\varrho$ relative to the
PVM basis $\\{\Pi_{a}\\}$.
Combining Eq. (36) with Eqs. (11) and (13), we thus obtain the following
results.
Corollary 5a. The total root-mean-squared error of the optimal estimation of a
PVM basis $\\{\Pi_{a}\\}$ of a finite-dimensional Hilbert space $\mathcal{H}$
given a preselected state $\varrho$ on $\mathcal{H}$, based on projective
measurement described by a PVM basis in $\mathcal{M}_{\rm
r1PVM}(\mathcal{H})$, in the worst case scenario, is lower bounded as
$\displaystyle\epsilon_{\\{\Pi_{a}\\}}(\varrho)$ $\displaystyle\geq$
$\displaystyle\frac{1}{2}\sup_{A\in\mathbb{H}(\mathcal{H}|\\{\Pi_{a}\\})}\sup_{B\in\mathbb{H}(\mathcal{H})}\big{|}{\rm
Tr}\\{\tilde{B}[\tilde{A},\varrho]_{-}\\}\big{|}.$ (37)
Moreover, it can also be lower bounded in terms of trace-norm asymmetry as
$\displaystyle\epsilon_{\\{\Pi_{a}\\}}(\varrho)$ $\displaystyle\geq$
$\displaystyle\sup_{A\in\mathbb{H}(\mathcal{H}|\\{\Pi_{a}\\})}\|[A,\varrho]_{-}\|_{1}/2\|A\|_{\infty}.$
(38)
Next, combining Eqs. (36) with (15), we have the following uncertainty
relation.
Corollary 5b. Given a preparation represented by a density operator $\varrho$
on a finite-dimensional Hilbert space $\mathcal{H}$, the total root-mean-
squared errors of the optimal estimation of $\\{\Pi_{a}\\}$ of $\mathcal{H}$
based on projective measurement described by a PVM basis in $\mathcal{M}_{\rm
r1PVM}(\mathcal{H})$, and that of the optimal estimation of $\\{\Pi_{b}\\}$ of
$\mathcal{H}$, in the worst case scenario, satisfy the following trade-off
relation:
$\displaystyle\epsilon_{\\{\Pi_{a}\\}}(\varrho)\epsilon_{\\{\Pi_{b}\\}}(\varrho)\geq\frac{1}{4}\sup_{A\in\mathbb{H}(\mathcal{H}|\\{\Pi_{a}\\})}\sup_{B\in\mathbb{H}(\mathcal{H}|\\{\Pi_{b}\\})}\big{|}{\rm
Tr}\\{[\tilde{A},\tilde{B}]_{-}\varrho\\}\big{|}^{2}.$ (39)
Let us proceed to discuss an operational interpretation of the KD
nonclassicality in a state relative to a rank-1 PVM basis in terms of a
sequence of two strong projective measurements. First, it has been shown by
Johansen in Ref. Johansen quantum state from successive projective measurement
that the KD quasiprobability associated with a state $\varrho$ over a pair of
rank-1 PVM bases $\\{\Pi_{a}\\}$ and $\\{\Pi_{b}\\}$ can be expressed as
$\displaystyle{\rm Pr}_{\rm KD}(a,b|\varrho)$ $\displaystyle=$
$\displaystyle{\rm Tr}\\{\Pi_{b}\Pi_{a}\varrho\Pi_{a}\\}+\frac{1}{2}{\rm
Tr}\\{(\varrho-\varrho_{\Pi_{a}})\Pi_{b}\\}$ (40) $\displaystyle-$
$\displaystyle i\frac{1}{2}{\rm
Tr}\\{(\varrho-\varrho_{\Pi_{a}})\Pi_{b|a}^{\pi/2}\\}.$
Here,
$\varrho_{\Pi_{a}}:=\Pi_{a}\varrho\Pi_{a}+(\mathbb{I}-\Pi_{a})\varrho(\mathbb{I}-\Pi_{a})$
is the state after a nonselective binary projective measurement described by
$\\{\Pi_{a},\mathbb{I}-\Pi_{a}\\}$, and
$\Pi_{b|a}^{\pi/2}=e^{i\Pi_{a}\pi/2}\Pi_{b}e^{-i\Pi_{a}\pi/2}$. The first term
on the right-hand side of Eq. (40), i.e., ${\rm
Tr}\\{\Pi_{b}\Pi_{a}\varrho\Pi_{a}\\}={\rm
Tr}\\{\Pi_{b}\frac{\Pi_{a}\varrho\Pi_{a}}{{\rm Tr}\\{\varrho\Pi_{a}\\}}\\}{\rm
Tr}\\{\varrho\Pi_{a}\\}$, is just the joint probability to get $a$ in the
measurement described by $\\{\Pi_{a}\\}$ and then to get $b$ afterward in the
measurement described by $\\{\Pi_{b}\\}$, so that it is always real and
nonnegative. In this sense, the other two terms are called the quantum
modification terms responsible for the negativity and nonreality of the KD-
quasiprobability. One can then see that the negativity and the nonreality
capture different forms of state disturbance due to the nonselective binary
projective measurement $\\{\Pi_{a},\mathbb{I}-\Pi_{a}\\}$ as captured by the
expectation values of $\Pi_{b}$ and $\Pi_{b|a}^{\pi/2}$, respectively.
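The decomposition in Eq. (40) is easy to verify numerically. The sketch below
(in Python with NumPy; the dimension, the random seed, and the helper names are
our arbitrary choices, not from the original) draws a random state and random
rank-1 PVM bases and confirms that the three terms reproduce the KD
quasiprobability. We adopt the ordering ${\rm Tr}\\{\Pi_{a}\Pi_{b}\varrho\\}$;
the reversed ordering gives the complex conjugate, which leaves the absolute
values entering Eq. (10) unchanged.

import numpy as np

rng = np.random.default_rng(0)
dim = 3  # Hilbert-space dimension; any finite dimension works

def rand_state(d):
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = M @ M.conj().T
    return rho / np.trace(rho)

def rand_rank1_pvm(d):
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    Q, _ = np.linalg.qr(M)  # random orthonormal basis
    return [np.outer(Q[:, k], Q[:, k].conj()) for k in range(d)]

rho, I = rand_state(dim), np.eye(dim)
for Pa in rand_rank1_pvm(dim):
    # state after the nonselective binary measurement {Pi_a, I - Pi_a}
    rho_a = Pa @ rho @ Pa + (I - Pa) @ rho @ (I - Pa)
    U = I + (1j - 1) * Pa  # exp(i Pi_a pi/2), exact because Pi_a is a projector
    for Pb in rand_rank1_pvm(dim):
        Pb_rot = U @ Pb @ U.conj().T  # Pi_{b|a}^{pi/2}
        rhs = (np.trace(Pb @ Pa @ rho @ Pa)
               + 0.5 * np.trace((rho - rho_a) @ Pb)
               - 0.5j * np.trace((rho - rho_a) @ Pb_rot))
        assert np.isclose(rhs, np.trace(Pa @ Pb @ rho))
print("Eq. (40) verified for random states and rank-1 PVM bases.")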
Using the decomposition of the KD quasiprobability in Eq. (40), the KD
nonclassicality in $\varrho$ relative to the PVM $\\{\Pi_{a}\\}$ defined in
Eq. (10) can then be upper bounded as
$\displaystyle\mathcal{Q}_{\rm KD}^{\rm NCl}(\varrho;\\{\Pi_{a}\\})$
$\displaystyle=$ $\displaystyle\sup_{\\{\Pi_{b}\\}\in\mathcal{M}_{\rm
r1PVM}(\mathcal{H})}\sum_{a,b}\big{|}{\rm
Tr}\\{\Pi_{b}\Pi_{a}\varrho\Pi_{a}\\}+\frac{1}{2}{\rm
Tr}\\{(\varrho-\varrho_{\Pi_{a}})\Pi_{b}\\}$ (41) $\displaystyle-$
$\displaystyle i\frac{1}{2}{\rm
Tr}\\{(\varrho-\varrho_{\Pi_{a}})\Pi_{b|a}^{\pi/2}\\}\big{|}-1$
$\displaystyle\leq$ $\displaystyle\sup_{\\{\Pi_{b}\\}\in\mathcal{M}_{\rm
r1PVM}(\mathcal{H})}\sum_{a,b}\big{|}{\rm
Tr}\\{(\varrho-\varrho_{\Pi_{a}})\Pi_{b}\\}\big{|}:=\delta_{\\{\Pi_{a}\\}}(\varrho).$
Here, to get Eq. (41), we have used the triangle inequality, the normalization
$\sum_{a,b}|{\rm Tr}\\{\Pi_{b}\Pi_{a}\varrho\Pi_{a}\\}|=\sum_{a,b}{\rm
Tr}\\{\Pi_{b}\Pi_{a}\varrho\Pi_{a}\\}=1$ for any
$\\{\Pi_{b}\\}\in\mathcal{M}_{\rm r1PVM}(\mathcal{H})$, and also
$\sup_{\\{\Pi_{b}\\}\in\mathcal{M}_{\rm r1PVM}(\mathcal{H})}\sum_{a,b}|{\rm
Tr}\\{(\varrho-\varrho_{\Pi_{a}})\Pi_{b|a}^{\pi/2}\\}|=\sup_{\\{\Pi_{b}\\}\in\mathcal{M}_{\rm
r1PVM}(\mathcal{H})}\sum_{a,b}|{\rm
Tr}\\{(\varrho-\varrho_{\Pi_{a}})\Pi_{b}\\}|$, where the equality holds because
$\\{\Pi_{b|a}^{\pi/2}\\}$ is again a rank-1 PVM basis of $\mathcal{H}$, so that
the supremum over $\\{\Pi_{b|a}^{\pi/2}\\}$ runs over the same set
$\mathcal{M}_{\rm r1PVM}(\mathcal{H})$ as the supremum over $\\{\Pi_{b}\\}$.
Hence, the KD nonclassicality $\mathcal{Q}_{\rm KD}^{\rm
NCl}(\varrho;\\{\Pi_{a}\\})$ in $\varrho$ relative to the rank-1 PVM basis
$\\{\Pi_{a}\\}$ gives a lower bound to the total disturbance
$\delta_{\\{\Pi_{a}\\}}(\varrho)$ in the state $\varrho$ caused by the
nonselective projective binary measurement $\\{\Pi_{a},\mathbb{I}-\Pi_{a}\\}$
associated with the PVM basis $\\{\Pi_{a}\\}$.
Combining Eq. (41) and Eq. (25), we first have the following corollary.
Corollary 6a. The total disturbance $\delta_{\\{\Pi_{a}\\}}(\varrho)$ in the
state $\varrho$ caused by the nonselective projective binary measurement
$\\{\Pi_{a},\mathbb{I}-\Pi_{a}\\}$ associated with the PVM basis
$\\{\Pi_{a}\\}$ of a finite-dimensional Hilbert space $\mathcal{H}$ is lower
bounded as
$\displaystyle\delta_{\\{\Pi_{a}\\}}(\varrho)$ $\displaystyle\geq$
$\displaystyle\frac{1}{2}\sup_{A\in\mathbb{H}(\mathcal{H}|\\{\Pi_{a}\\})}\sup_{B\in\mathbb{H}(\mathcal{H})}\big{\\{}\big{(}\big{|}{\rm
Tr}\\{\varrho[\tilde{A}_{\varrho},\tilde{B}_{\varrho}]_{-}\\}\big{|}^{2}$ (42)
$\displaystyle+$ $\displaystyle\big{|}{\rm
Tr}\\{\varrho[\tilde{A}_{\varrho},\tilde{B}_{\varrho}]_{+}\\}-2{\rm
Tr}\\{\tilde{A}_{\varrho}\varrho\\}{\rm
Tr}\\{\tilde{B}_{\varrho}\varrho\\}\big{|}^{2}\big{)}^{1/2}\big{\\}}-1.$
From Corollary 6a, we finally obtain the following trade-off relation, the
proof of which follows similar steps to that of Proposition 5.
Corollary 6b. Given a preparation represented by $\varrho$ on a finite-
dimensional Hilbert space $\mathcal{H}$, the total disturbance
$\delta_{\\{\Pi_{a}\\}}(\varrho)$ in the state $\varrho$ caused by the
nonselective projective binary measurement $\\{\Pi_{a},\mathbb{I}-\Pi_{a}\\}$
associated with the PVM basis $\\{\Pi_{a}\\}$, and the total disturbance
$\delta_{\\{\Pi_{b}\\}}(\varrho)$ in the state $\varrho$ caused by the
nonselective projective binary measurement $\\{\Pi_{b},\mathbb{I}-\Pi_{b}\\}$
associated with the PVM basis $\\{\Pi_{b}\\}$, satisfy the following trade-off
relation:
$\displaystyle\delta_{\\{\Pi_{a}\\}}(\varrho)\delta_{\\{\Pi_{b}\\}}(\varrho)$
$\displaystyle\geq$
$\displaystyle\frac{1}{4}\Big{(}\sup_{A\in\mathbb{H}(\mathcal{H}|\\{\Pi_{a}\\})}\sup_{B\in\mathbb{H}(\mathcal{H})}\big{\\{}\big{(}\big{|}{\rm
Tr}\\{\varrho[\tilde{A}_{\varrho},\tilde{B}_{\varrho}]_{-}\\}\big{|}^{2}$ (43)
$\displaystyle+$ $\displaystyle\big{|}{\rm
Tr}\\{\varrho[\tilde{A}_{\varrho},\tilde{B}_{\varrho}]_{+}\\}-2{\rm
Tr}\\{\tilde{A}_{\varrho}\varrho\\}{\rm
Tr}\\{\tilde{B}_{\varrho}\varrho\\}\big{|}^{2}\big{)}^{1/2}\big{\\}}-1\Big{)}^{2}.$
## V Summary and Discussion
We have first derived lower bounds for the KD nonreality and the KD
nonclassicality relative to a pair of rank-1 PVM bases, respectively in Eqs.
(4) and (6). Nonvanishing lower bounds thus provide sufficient conditions for
the KD quasiprobability to be nonclassical, i.e., its value is nonreal or its
real part is negative, or both. We then defined the KD nonreality and KD
nonclassicality in a state relative to a single PVM basis by taking the
supremum over the other basis as in Eqs. (9) and (10). They can be interpreted
as quantifying the amount of the quantumness in the state relative to the PVM
basis manifesting their noncommutativity. We obtained lower bounds for the KD
nonreality and KD nonclassicality in a state relative to a single PVM basis,
given respectively in Eqs. (11) and (25). A lower bound for the KD nonreality
in a state relative to a rank-1 PVM basis in terms of the extremal trace-norm asymmetry is
given in Eq. (13). The same lower bounds also apply to the corresponding
$l_{1}$-norm coherence.
We proceeded to derive trade-off relations for the KD nonreality and the KD
nonclassicality relative to a PVM basis and those relative to another PVM
basis given in Eqs. (15) and (27), having similar forms respectively to the
Robertson and Robertson-Schrödinger uncertainty relations. The lower bounds
for the trade-off relations involve optimization over two convex sets of
Hermitian operators whose complete set of eigenprojectors are given by the
corresponding PVM bases. We then showed that the trade-off relations imply
similar trade-off relations for the $l_{1}$-norm coherence. The trade-off
relations thus restrict simultaneous quantumness associated with a state
$\varrho$ relative to two noncommuting rank-1 PVM bases. A more detailed
comparison of the uncertainty relations to the uncertainty relation for
intrinsic quantum randomness presented in Refs. Korzekwa quantum-classical
decomposition ; Singh uncertainty relation for coherence ; Yuan uncertainty
relation for coherence ; Hall quantum-classical decomposition is left for
future study.
We further briefly discussed a hybrid quantum-classical variational scheme for
a direct measurement of the KD nonreality and KD nonclassicality in a state
relative to a PVM basis by means of weak value measurement for the reconstruction
of the KD quasiprobability, combined with a classical optimization scheme for
searching the supremum over the set of rank-1 PVM bases of the Hilbert space.
This operational interpretation leads to an information theoretical
interpretation for the KD nonreality in a state relative to a PVM basis as a
lower bound for the total root-mean-squared error of the optimal estimation of
the PVM basis based on the outcomes of projective measurement, in the worst
case scenario. Moreover, it also leads to an uncertainty relation between the
root-mean-squared error of the optimal estimation of the PVM basis and that of
the optimal estimation of the other PVM basis, based on projective
measurement, in the worst case scenario. We further applied the decomposition
of the KD quasiprobability obtained via two successive projective
measurements, into a real and nonnegative joint probability and two quantum
modification terms which are responsible for the negativity and nonreality of
the KD quasiprobability. Using this decomposition, the KD nonclassicality in a
state relative to a PVM basis can be shown to give a lower bound to the total
disturbance to the state caused by a nonselective projective binary
measurement associated with the PVM basis. This further implies similar lower
bound and trade-off relation for such total disturbance as those for the KD
nonclassicality relative to a PVM basis.
In this article, we have based all of our discussion on the standard KD
quasiprobability associated with a density operator over a pair of rank-1
orthogonal PVM bases as in Eq. (1). This suggests directions for further
investigation in the future. First, it is natural to ask if one can extend the
methods and results of the present work to more general POVM (positive-
operator-valued measure) bases. Next, recently, motivated by certain
interesting physical problems such as quantum metrology with postselection
Arvidsson-Shukur quantum advantage in postselected metrology ; Lupu-Gladstein
negativity enhanced quantum phase estimation 2022 and detection of OTOC (out-
of-time-order correlation) in many body chaotic system Halpern
quasiprobability and information scrambling ; Alonso KD quasiprobability
witnesses quantum scrambling , there are proposals to extend the KD
quasiprobability by extending the number of PVM basis. Within the
representation of the KD quasiprobability via weak value, this extension means
that we increase the number of weak measurements before making strong
projective postselection measurement. This extension of the KD
quasiprobability too shares the properties of the standard KD
quasiprobability. In particular, its negativity and nonreality signal
quantumness associated with quantum noncommutativity. It is therefore
interesting to apply the methods and reasoning developed in the present
article and also in Refs. Agung KD-nonreality coherence ; Agung KD-
nonclassicality coherence ; Agung KD general quantum correlation to use the
extended KD quasiprobability to probe quantum coherence, general quantum
correlation and to see the restriction imposed by the uncertainty principle.
Such an approach might in turn help clarify the roles of coherence and general
correlation in quantum metrology with postselection and OTOC.
## Appendix A Proof of Proposition 2
Notice that the left-hand side of Eq. (11) does not depend on
$\tilde{B}=B/\|B\|_{\infty}$. Hence, the inequality in Eq. (11) can be
strengthened to obtain
$\displaystyle\mathcal{Q}_{\rm KD}^{\rm NRe}(\varrho;\\{\Pi_{a}\\})$
$\displaystyle\geq$
$\displaystyle\frac{1}{2}\sup_{A\in\mathbb{H}(\mathcal{H}|\\{\Pi_{a}\\})}\sup_{B\in\mathbb{O}(\mathcal{H})}\big{|}{\rm
Tr}\\{\tilde{B}[\tilde{A},\varrho]_{-}\\}\big{|},$ (44)
where $\mathbb{O}(\mathcal{H})$ is the set of all bounded operators on
$\mathcal{H}$. Next, since $\|\tilde{B}\|_{\infty}=1$, then one can further
strengthen the inequality in Eq. (44) as
$\displaystyle\mathcal{Q}_{\rm KD}^{\rm
NRe}(\varrho;\\{\Pi_{a}\\})\geq\frac{1}{2}\sup_{A\in\mathbb{H}(\mathcal{H}|\\{\Pi_{a}\\})}\big{\\{}\sup_{X\in\mathbb{O}(\mathcal{H}|\|X\|_{\infty}\leq
1)}\big{|}{\rm Tr}\\{X^{\dagger}[\tilde{A},\varrho]_{-}\\}\big{|}\big{\\}},$
(45)
where $\mathbb{O}(\mathcal{H}|\|X\|_{\infty}\leq 1)$ is the set of all bounded
operators $X$ with $\|X\|_{\infty}\leq 1$. One then observes that the term on
the right-hand side of Eq. (45) inside the bracket $\\{\dots\\}$ is just the
variational definition of the Schatten $p=1$ norm via its conjugate norm
$p_{*}=\infty$ Watrous book on quantum Shannon theory , so that one has
$\displaystyle\sup_{X\in\mathbb{O}(\mathcal{H}|\|X\|_{\infty}\leq
1)}\big{|}{\rm
Tr}\\{X^{\dagger}[\tilde{A},\varrho]_{-}\\}\big{|}=\|[\tilde{A},\varrho]_{-}\|_{1}.$
(46)
Inserting this into the right-hand side of Eq. (45), we obtain Eq. (13).
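The duality step in Eq. (46) can also be illustrated numerically: for any
matrix $M$ with singular value decomposition $M=USV^{\dagger}$, the optimizer
is $X=UV^{\dagger}$, and $|{\rm Tr}\\{X^{\dagger}M\\}|$ then equals the sum of
the singular values, i.e., the trace norm $\|M\|_{1}$. A minimal sketch (the
matrix is a stand-in for $[\tilde{A},\varrho]_{-}$; dimension and seed are
arbitrary):

import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))  # stand-in for [A, rho]_-
U, s, Vh = np.linalg.svd(M)
X = U @ Vh  # optimal X, with operator norm ||X||_inf = 1
print(abs(np.trace(X.conj().T @ M)), s.sum())  # both equal ||M||_1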
As an alternative proof, first, from Proposition 4 of Ref. Agung
estimation and operational interpretation of trace-norm asymmetry , we have
(see Eq. (22) of Ref. Agung estimation and operational interpretation of
trace-norm asymmetry )
$\displaystyle\mathcal{Q}_{\rm KD}^{\rm
NRe}(\varrho;\\{\Pi_{a}\\})\geq\|[A,\varrho]_{-}\|_{1}/2\|A\|_{\infty},$ (47)
where $\mathcal{Q}_{\rm KD}^{\rm NRe}(\varrho;\\{\Pi_{a}\\})$ in this article
is denoted by $C_{\rm KD}(\varrho;\\{\ket{a}\\})$ in Ref. Agung estimation and
operational interpretation of trace-norm asymmetry . This can be proven by
using the equality between the trace-norm asymmetry Marvian - Spekkens
speakable and unspeakable coherence and the measure of asymmetry based on the
maximum mean absolute value of the imaginary part of the weak value of the
generator $A$ of the translation group proposed in Agung translational
asymmetry from nonreal weak value as expressed in Proposition 1 of Ref. Agung
estimation and operational interpretation of trace-norm asymmetry (see Eq.
(5) of Ref. Agung estimation and operational interpretation of trace-norm
asymmetry ). Observe that the left-hand side of Eq. (47) is independent of the
eigenvalue spectrum of the Hermitian operator $A$ which appears on the right-
hand side. Hence, the inequality can be strengthened as
$\displaystyle\mathcal{Q}_{\rm KD}^{\rm
NRe}(\varrho;\\{\Pi_{a}\\})\geq\sup_{A\in\mathbb{H}(\mathcal{H}|\\{\Pi_{a}\\})}\|[A,\varrho]_{-}\|_{1}/2\|A\|_{\infty}.$
(48)
∎
## Appendix B Propositions 1 and 2 for two-dimensional Hilbert space
First, assume without loss of generality, that the PVM basis of the two-
dimensional Hilbert space $\mathcal{H}\cong\mathbb{C}^{2}$ relative to which
we define the KD nonreality in a state on the left-hand side of Eq. (11) is
given by the complete set of eigenprojectors of the Pauli operator
$\sigma_{z}$, i.e., $\mathbb{A}:=\\{\ket{0}\bra{0},\ket{1}\bra{1}\\}$. All the
Hermitian operators on $\mathbb{C}^{2}$ with the eigenprojectors $\mathbb{A}$
thus take the general form as:
$\displaystyle A=a_{0}\ket{0}\bra{0}+a_{1}\ket{1}\bra{1},$ (49)
where $a_{0},a_{1}\in\mathbb{R}$ are the eigenvalues. We denote the set of all
such Hermitian operators by $\mathbb{H}(\mathbb{C}^{2}|\mathbb{A})$. Moreover,
the general form of all Hermitian operators on the Hilbert space
$\mathbb{C}^{2}$ reads as
$\displaystyle
B(\alpha,\beta)=b_{+}\ket{b_{+}(\alpha,\beta)}\bra{b_{+}(\alpha,\beta)}+b_{-}\ket{b_{-}(\alpha,\beta)}\bra{b_{-}(\alpha,\beta)},$
(50)
with the eigenvalues $b_{+},b_{-}\in\mathbb{R}$, and the corresponding
orthonormal eigenvectors
$\\{\ket{b_{+}(\alpha,\beta)},\ket{b_{-}(\alpha,\beta)}\\}$ can be expressed
using the Bloch sphere parameterization as
$\displaystyle\ket{b_{+}(\alpha,\beta)}$ $\displaystyle=$
$\displaystyle\cos\frac{\alpha}{2}\ket{0}+e^{i\beta}\sin\frac{\alpha}{2}\ket{1};$
$\displaystyle\ket{b_{-}(\alpha,\beta)}$ $\displaystyle=$
$\displaystyle\sin\frac{\alpha}{2}\ket{0}-e^{i\beta}\cos\frac{\alpha}{2}\ket{1},$
(51)
where $\alpha\in[0,\pi]$, $\beta\in[0,2\pi)$. Let us denote the set of all
Hermitian operators on $\mathbb{C}^{2}$ by $\mathbb{H}(\mathbb{C}^{2})$.
We further assume, without loss of generality, that the singular values of $A$
and $B$ have the following orderings: $|a_{0}|\geq|a_{1}|$ and
$|b_{+}|\geq|b_{-}|$, so that we have $\|A\|_{\infty}=|a_{0}|$ and
$\|B\|_{\infty}=|b_{+}|$. Then, computing the lower bound in Eq. (11), we
obtain
$\displaystyle\mathcal{Q}_{\rm KD}^{\rm NRe}(\varrho;\mathbb{A})$
$\displaystyle\geq$
$\displaystyle\sup_{A\in\mathbb{H}(\mathbb{C}^{2}|\mathbb{A})}\sup_{B\in\mathbb{H}(\mathbb{C}^{2})}\frac{|{\rm
Tr}\\{B[A,\varrho]_{-}\\}|}{2\|A\|_{\infty}\|B\|_{\infty}}$ $\displaystyle=$
$\displaystyle\frac{1}{2}\max_{\\{a_{0},a_{1}\\}\in\mathbb{R}^{2}}\max_{\\{b_{+},b_{-}\\}\in\mathbb{R}^{2}}\max_{\\{\alpha,\beta\\}\in[0,\pi]\times[0,2\pi)}\Big{\\{}|\sin\alpha\sin(\beta-\phi_{01})|$
$\displaystyle\times$
$\displaystyle\frac{|b_{+}-b_{-}|}{|b_{+}|}\frac{|a_{0}-a_{1}|}{|a_{0}|}|\braket{1}{\varrho}{0}|\Big{\\}}$
(53) $\displaystyle=$
$\displaystyle\frac{1}{2}\max_{\\{a_{0},a_{1}\\}\in\mathbb{R}^{2}}\max_{\\{b_{+},b_{-}\\}\in\mathbb{R}^{2}}\Big{\\{}\frac{|b_{+}-b_{-}|}{|b_{+}|}\frac{|a_{0}-a_{1}|}{|a_{0}|}|\braket{0}{\varrho}{1}|\Big{\\}}$
(54) $\displaystyle=$ $\displaystyle
2|\braket{0}{\varrho}{1}|=C_{l_{1}}(\varrho;\mathbb{A}).$ (55)
Here, $\phi_{01}=-\arg\braket{0}{\varrho}{1}$, the maximum is obtained for
Hermitian operator $A$ of Eq. (49) with $|a_{0}-a_{1}|=2|a_{0}|$ and for
Hermitian operator $B(\alpha,\beta)$ having the form of (50) with
$\alpha=\pi/2$ and $\beta=\phi_{01}+\pi/2$ and $|b_{+}-b_{-}|=2|b_{+}|$, and
$C_{l_{1}}(\varrho;\mathbb{A})=2|\braket{0}{\varrho}{1}|$ is the $l_{1}$-norm
coherence of $\varrho$ relative to the orthonormal basis
$\mathbb{A}=\\{\ket{0},\ket{1}\\}$. Note that, to get Eq. (55), we have used
the fact that for any pair $x,y\in\mathbb{C}$ with $|x|\geq|y|$, we always
have $|x-y|\leq 2|x|$, and equality is attained for $x=-y$. On the other hand,
as shown in Ref. Agung KD-nonreality coherence , for arbitrary state of a
single qubit and any PVM basis of $\mathbb{C}^{2}$, the left-hand side of Eq.
(B) is also given by the $l_{1}$-norm coherence, i.e.:
$\displaystyle\mathcal{Q}_{\rm KD}^{\rm
NRe}(\varrho;\mathbb{A})=2|\braket{0}{\varrho}{1}|=C_{l_{1}}(\varrho;\mathbb{A}).$
(56)
Hence, the inequality in Eq. (B) indeed becomes equality. Moreover, one can
see from the values of $\alpha$ and $\beta$ that achieve the maximum in Eq.
(B), that the eigenbasis of $B_{*}$ expressed in Eq. (51) which attains the
supremum in Eq. (B) are mutually unbiased with
$\mathbb{A}=\\{\ket{0},\ket{1}\\}$ and also with the eigenbasis of $\varrho$.
This proves Proposition 1 for the case of two-dimensional Hilbert space.
Next, one can see from the proof of Proposition 2 in Appendix A that the lower
bound in Eq. (11) is less than or equal to the lower bound in Eq. (13), and
the left-hand sides of the two equations are the same. Hence, when the
inequality in Eq. (11) becomes equality, the inequality in Eq. (13) must also
become equality. This combined with the above result means that for two-
dimensional Hilbert space, the inequality in Eq. (13) must also become
equality. Indeed, computing the trace-norm asymmetry of the state $\varrho$
relative to the translation group generated by $A$ having the form of Eq.
(49), one has $\|[A,\varrho]_{-}\|_{1}/2=|a_{0}-a_{1}||\braket{0}{\varrho}{1}|$.
Upon inserting into the lower bound in Eq. (13), we have
$\displaystyle\sup_{A\in\mathbb{H}(\mathcal{H}|\\{\Pi_{a}\\})}\|[A,\varrho]_{-}\|_{1}/2\|A\|_{\infty}$
(57) $\displaystyle=$
$\displaystyle\max_{\\{a_{0},a_{1}\\}\in\mathbb{R}^{2}}\frac{|a_{0}-a_{1}|}{|a_{0}|}|\braket{0}{\varrho}{1}|$
$\displaystyle=$ $\displaystyle 2|\braket{0}{\varrho}{1}|$ $\displaystyle=$
$\displaystyle\mathcal{Q}_{\rm KD}^{\rm NRe}(\varrho;\mathbb{A}),$
where the last equality in Eq. (57) is just Eq. (56) and the maximum is
obtained when $|a_{0}-a_{1}|=2|a_{0}|$. This proves Proposition 2 for the case
of two-dimensional Hilbert space.
Combining Eqs. (56) and (57), we thus obtain Eq. (14) of the main text. ∎
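Equation (56) can be checked numerically as well. The sketch below assumes the
definition of Eq. (9), i.e., $\mathcal{Q}_{\rm KD}^{\rm
NRe}(\varrho;\\{\Pi_{a}\\})=\sup_{\\{\Pi_{b}\\}}\sum_{a,b}|{\rm Im}\,{\rm
Tr}\\{\Pi_{b}\Pi_{a}\varrho\\}|$, and scans the Bloch-sphere parameterization
of Eq. (51) on a grid (the grid resolution and the random state are arbitrary
choices); the scanned supremum reproduces $2|\braket{0}{\varrho}{1}|$ to grid
accuracy.

import numpy as np

rng = np.random.default_rng(2)
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = M @ M.conj().T
rho /= np.trace(rho)  # random mixed qubit state

Pa = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]  # sigma_z eigenprojectors

best = 0.0
for alpha in np.linspace(0.0, np.pi, 181):
    for beta in np.linspace(0.0, 2.0 * np.pi, 361):
        bp = np.array([np.cos(alpha / 2), np.exp(1j * beta) * np.sin(alpha / 2)])
        bm = np.array([np.sin(alpha / 2), -np.exp(1j * beta) * np.cos(alpha / 2)])
        total = sum(abs(np.imag(np.trace(np.outer(b, b.conj()) @ P @ rho)))
                    for b in (bp, bm) for P in Pa)
        best = max(best, total)

print(best, 2 * abs(rho[0, 1]))  # agree to grid accuracy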
## Appendix C Trade-off relation of Eq. (24) for a single qubit
Without loss of generality, one can proceed as in Appendix B, but now the
optimization is over the set $\mathbb{H}(\mathbb{C}^{2}|\mathbb{A})$ of all
Hermitian operators on $\mathbb{C}^{2}$ with the complete set of
eigenprojectors $\mathbb{A}=\\{\ket{0}\bra{0},\ket{1}\bra{1}\\}$ having the
form of Eq. (49), and over the set $\mathbb{H}(\mathbb{C}^{2}|\mathbb{B})$ of
all Hermitian operators on $\mathbb{C}^{2}$ with the complete set of
eigenprojectors
$\mathbb{B}=\\{\ket{b_{+}(\alpha,\beta)}\bra{b_{+}(\alpha,\beta)},\ket{b_{-}(\alpha,\beta)}\bra{b_{-}(\alpha,\beta)}\\}$
having the form of Eq. (50). Evaluating the lower bound in Eq. (24) for two-
dimensional Hilbert space, we have
$\displaystyle C_{l_{1}}(\varrho;\mathbb{A})+C_{l_{1}}(\varrho;\mathbb{B})$
(58) $\displaystyle\geq$
$\displaystyle\sup_{A\in\mathbb{H}(\mathbb{C}^{2}|\mathbb{A})}\sup_{B(\alpha,\beta)\in\mathbb{H}(\mathbb{C}^{2}|\mathbb{B})}\frac{|{\rm
Tr}\\{[A,B(\alpha,\beta)]\varrho\\}|}{\|A\|_{\infty}\|B\|_{\infty}}$
$\displaystyle=$
$\displaystyle\max_{\\{b_{+},b_{-}\\}\in\mathbb{R}^{2}}\max_{\\{a_{0},a_{1}\\}\in\mathbb{R}^{2}}\frac{|a_{0}-a_{1}|}{|a_{0}|}\frac{|b_{+}-b_{-}|}{|b_{+}|}|\braket{0}{\varrho}{1}|$
$\displaystyle\times$ $\displaystyle|\sin\alpha\sin(\beta-\phi_{01})|$
$\displaystyle=$ $\displaystyle
4|\braket{0}{\varrho}{1}||\sin\alpha||\sin(\beta-\phi_{01})|$ $\displaystyle=$
$\displaystyle 2\sqrt{r^{2}-r_{z}^{2}}|\sin\alpha||\sin(\beta-\phi_{01})|$
$\displaystyle=$ $\displaystyle
2r|\sin\phi_{z}||\sin\alpha||\sin(\beta-\phi_{01})|.$
Here, the maximum is obtained when $|a_{0}-a_{1}|=2|a_{0}|$ and
$|b_{+}-b_{-}|=2|b_{+}|$ (see Appendix B), we have used the expression for the
qubit state
$\varrho=\frac{1}{2}(\mathbb{I}+r_{x}\sigma_{x}+r_{y}\sigma_{y}+r_{z}\sigma_{z})$,
$r_{x}^{2}+r_{y}^{2}+r_{z}^{2}=r^{2}$ so that
$2|\braket{0}{\varrho}{1}|=|r_{x}-ir_{y}|=\sqrt{r^{2}-r_{z}^{2}}$, and
$\phi_{z}$ is the angle between the Bloch vector of the state and the positive
$z$-axis. One can see that the lower bound decreases as the purity of the
state given by ${\rm Tr}(\varrho^{2})=(1+r^{2})/2$ decreases. Moreover, it
also decreases when the noncommutativity between the two PVM bases, i.e.,
$\mathbb{A}$ and $\mathbb{B}$, quantified by $|\sin\alpha|$, decreases. In
particular, the lower bound vanishes for $r=0$, i.e., for the maximally mixed
state $\varrho=\mathbb{I}/2$ with minimum purity, and it also vanishes when
$\alpha=0,\pi$ so that $\sin\alpha=0$, i.e., when the two PVM bases $\mathbb{A}$
and $\mathbb{B}$ commute. This result is in accord with that obtained in Ref. Yuan
uncertainty relation for coherence . Note that $|\sin\phi_{z}|$ and
$|\sin(\beta-\phi_{01})|$ on the right-hand side characterize respectively the
noncommutativity between the state $\varrho$ and the PVM basis $\mathbb{A}$
and between $\varrho$ and the PVM basis $\mathbb{B}$. They vanish respectively
when $\varrho$ commutes with $\mathbb{A}$ and $\varrho$ commutes with
$\mathbb{B}$, as expected. As an example consider the case when the state is
pure so that $r=1$, and take $\alpha=\pi/2$ and $\phi_{01}+\pi/2=\beta$ so
that $\sin\alpha=\sin(\beta-\phi_{01})=1$. Then, taking $\phi_{z}=\pi/2$, we
have $C_{l_{1}}(\varrho;\mathbb{A})=C_{l_{1}}(\varrho;\mathbb{B})=1$ and the
inequality in Eq. (58) becomes equality, i.e., both sides are equal to $2$.
Note that in this case, the triple $\mathbb{A}$, $\mathbb{B}$ and the
eigenbasis of $\varrho$ comprise three mutually unbiased bases of
$\mathbb{C}^{2}$.
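This saturation is easy to confirm in a few lines. The sketch below takes the
state $\varrho=(\mathbb{I}+\sigma_{x})/2$ and the $\sigma_{z}$ and $\sigma_{y}$
eigenbases from the example above (the helper c_l1 is our shorthand for the
$l_{1}$-norm coherence in a given basis) and returns
$C_{l_{1}}(\varrho;\mathbb{A})=C_{l_{1}}(\varrho;\mathbb{B})=1$, so that both
sides of Eq. (58) indeed equal $2$.

import numpy as np

rho = 0.5 * (np.eye(2) + np.array([[0.0, 1.0], [1.0, 0.0]]))  # (I + sigma_x)/2

def c_l1(rho, basis):
    # l1-norm coherence: sum of off-diagonal moduli in the given basis (columns)
    r = basis.conj().T @ rho @ basis
    return np.sum(np.abs(r)) - np.sum(np.abs(np.diag(r)))

A = np.eye(2)                                        # sigma_z eigenbasis
B = np.array([[1.0, 1.0], [1j, -1j]]) / np.sqrt(2.0)  # sigma_y eigenbasis
print(c_l1(rho, A), c_l1(rho, B))  # -> 1.0, 1.0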
## Appendix D Proof of Eq. (30)
One first has, from the definition of the KD nonclassicality in a state
$\varrho$ relative to a PVM $\\{\Pi_{a}\\}$ in Eq. (10),
$\displaystyle\mathcal{Q}_{\rm KD}^{\rm NCl}(\varrho;\\{\Pi_{a}\\})$ (59)
$\displaystyle=$ $\displaystyle\sup_{\\{\Pi_{b}\\}\in\mathcal{M}_{\rm
r1PVM}(\mathcal{H})}\sum_{a,b}\big{|}\sum_{a^{\prime}}{\rm
Tr}\\{\Pi_{b}\Pi_{a}\varrho\Pi_{a^{\prime}}\\}\big{|}-1$ $\displaystyle\leq$
$\displaystyle\sum_{a,a^{\prime}}\big{|}\braket{a}{\varrho}{a^{\prime}}\big{|}\sum_{b_{*}}\big{|}\braket{a^{\prime}}{b_{*}}\braket{b_{*}}{a}\big{|}-1,$
where we have used a completeness relation
$\sum_{a^{\prime}}\Pi_{a^{\prime}}=\mathbb{I}$, the triangle inequality, and
$\\{\Pi_{b_{*}}\\}\in\mathcal{M}_{\rm r1PVM}(\mathcal{H})$ is a PVM basis
which achieves the supremum. On the other hand, using the Cauchy-Schwarz
inequality, we have
$\sum_{b_{*}}|\braket{b_{*}}{a}\braket{a^{\prime}}{b_{*}}|\leq(\sum_{b_{*}}|\braket{b_{*}}{a}|^{2}\sum_{b_{*}^{\prime}}|\braket{a^{\prime}}{b^{\prime}_{*}}|^{2})^{1/2}=1$,
where we have used a completeness relation
$\sum_{b_{*}}\ket{b_{*}}\bra{b_{*}}=\mathbb{I}$. Inserting this into Eq. (59),
we finally obtain
$\displaystyle\mathcal{Q}_{\rm KD}^{\rm NCl}(\varrho;\\{\Pi_{a}\\})$
$\displaystyle\leq$
$\displaystyle\sum_{a,a^{\prime}}\big{|}\braket{a}{\varrho}{a^{\prime}}\big{|}-1$
(60) $\displaystyle=$ $\displaystyle\sum_{a\neq
a^{\prime}}\big{|}\braket{a}{\varrho}{a^{\prime}}\big{|}$ $\displaystyle=$
$\displaystyle C_{l_{1}}(\varrho;\\{\ket{a}\\}),$
where we have used the normalization $\sum_{a}\braket{a}{\varrho}{a}=1$. ∎
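As a quick numeric illustration of the bound in Eq. (60), the sketch below
samples random rank-1 PVM bases $\\{\Pi_{b}\\}$ as a stand-in for the exact
supremum (dimension, seed, and sample count are arbitrary choices) and confirms
that the sampled nonclassicality never exceeds the $l_{1}$-norm coherence.

import numpy as np

rng = np.random.default_rng(3)
dim = 3
M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
rho = M @ M.conj().T
rho /= np.trace(rho)

# l1-norm coherence of rho in the reference basis {|a>}
C_l1 = np.sum(np.abs(rho)) - np.sum(np.abs(np.diag(rho)))
Pa = [np.outer(e, e.conj()) for e in np.eye(dim)]  # reference PVM basis

for _ in range(2000):
    W = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    Q, _ = np.linalg.qr(W)
    Pb = [np.outer(Q[:, k], Q[:, k].conj()) for k in range(dim)]
    ncl = sum(abs(np.trace(pb @ pa @ rho)) for pa in Pa for pb in Pb) - 1.0
    assert ncl <= C_l1 + 1e-12
print("sampled KD nonclassicality never exceeds the l1-norm coherence")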
## References
* (1) W. Heisenberg, Z. Phys. 43, 172 (1927).
* (2) E. H. Kennard, Z. Phys. 44, 326 (1927).
* (3) H. Weyl, Gruppentheorie und Quantenmechanik (Leipzig, 1928).
* (4) H. P. Robertson, Phys. Rev. 34 (1), 163 (1929).
* (5) A. Einstein, B. Podolsky, and N. Rosen, Phys. Rev. 47, 777 (1935).
* (6) J. S. Bell, Physics 1, 195 (1964).
* (7) P. J. Coles, M. Berta, M. Tomamichel and S. Wehner, Rev. Mod. Phys. 89, 015002 (2017).
* (8) E. Schrödinger, Proceedings of the Prussian Academy of Sciences XIX, 296 (1930).
* (9) H. Everett, Rev. Mod. Phys. 29 (3), 454 (1957).
* (10) I. Hirschman, Am. J. Math. 79 (1), 152 (1957).
* (11) I. Bialynicki-Birula and J. Mycielski, Comms. Math. Phys. 44 (2), 129 (1975).
* (12) D. Deutsch, Phys. Rev. Lett. 50, 631 (1983).
* (13) H. Maassen and J. B. Uffink, Phys. Rev. Lett. 60, 1103 (1988).
* (14) M. Berta, M. Christandl, R. Colbeck, J. M. Renes, and R. Renner, Nat. Phys. 6 (9), 659 (2010).
* (15) P. J. Coles, R. Colbeck, L. Yu, and M. Zwolak Phys. Rev. Lett. 108 (21), 210405 (2012).
* (16) M. J. Hall, Phys. Rev. A 107, 062215 (2023).
* (17) S. Wehner and A. Winter, New J. Phys. 12, 025009 (2010).
* (18) S. Luo, Phys. Rev. A 72, 042110 (2005).
* (19) E. P. Wigner and M. M. Yanase, Proc. Natl. Acad. Sci. U.S.A. 49, 910 (1963).
* (20) S. Furuichi, Phys. Rev. A 82, 034101 (2010).
* (21) K. Korzekwa, M. Lostaglio, D. Jennings, and T. Rudolph, Phys. Rev. A 89, 042122 (2014).
* (22) U. Singh, A. K. Pati, and M. N. Bera, Mathematics 4, 47 (2016).
* (23) X. Yuan, G. Bai, T. Peng, and X. Ma, Phys. Rev. A 96, 032313 (2017).
* (24) J. G. Kirkwood, Phys. Rev. 44, 31 (1933).
* (25) P. A. M. Dirac, Rev. Mod. Phys. 17, 195 (1945).
* (26) O. Barut, Phys. Rev. 108, 565 (1957).
* (27) A. Budiyono and H. K. Dipojono, Phys. Rev. A 107, 022408 (2023).
* (28) A. Budiyono, J. F. Sumbowo, M. K. Agusta, and B. E. B. Nurhandoko, arXiv:2309.09162.
* (29) T. Baumgratz, M. Cramer, and M. B. Plenio, Phys. Rev. Lett. 113, 140401 (2014).
* (30) Y. P. Terletsky, Zh. Eksp. Teor. Fiz. 7, 1290 (1937).
* (31) H. Margenau and R. N. Hill, Prog. Theor. Phys. 26, 722 (1961).
* (32) D. R. M. Arvidsson-Shukur, J. C. Drori and N. Y. Halpern, J. Phys. A: Math. and Theor. 54, 284001 (2021).
* (33) S. De Bièvre, Phys. Rev. Lett. 127, 190404 (2021).
* (34) Y. Aharonov, D. Z. Albert and L. Vaidman, Phys. Rev. Lett. 60 (14), 1351 (1988).
* (35) Y. Aharonov and D. Rohrlich, Quantum paradoxes: quantum theory for the perplexed (Wiley-VCH, 2005).
* (36) H. M. Wiseman, Phys. Rev. A 65, 032111 (2002).
* (37) J. S. Lundeen and K. J. Resch, Phys. Lett. A 334, 337 (2005).
* (38) R. Jozsa, Phys. Rev. A 76, 044103 (2007).
* (39) J. S. Lundeen and C. Bamber, Phys. Rev. Lett. 108, 070402 (2012).
* (40) J. Z. Salvail, M. Agnew, A. S. Johnson, E. Bolduc, J. Leach, and R. W. Boyd, Nat. Photonics 7, 316 (2013).
* (41) C. Bamber and J. S. Lundeen, Phys. Rev. Lett. 112, 070405 (2014).
* (42) G. S. Thekkadath, L. Giner, Y. Chalich, M. J. Horton, J. Banker, and J. S. Lundeen, Phys. Rev. Lett. 117, 120401 (2016).
* (43) L. M. Johansen, Phys. Rev. A 76, 012119 (2007).
* (44) G. Vallone and D. Dequal, Phys. Rev. Lett. 116, 040502 (2016).
* (45) E. Cohen and E. Pollak, Phys. Rev. A 98, 042112 (2018).
* (46) S. Hernandez-Gomez, S. Gherardini, A. Belenchia, M. Lostaglio, A. Levy, and N. Fabbri, Experimental assessment of non-classicality in a solid-state spin qutrit, arXiv:2207.12960 (2022).
* (47) R. Wagner, Z. Schwartzman-Nowik, I. L. Paiva, A Te’eni, A. Ruiz-Molero, R. S. Barbosa, E. Cohen, and E. F. Galvão, Quantum Sci. Technol. 9, 015030 (2024).
* (48) M. Lostaglio, A. Belenchia, A. Levy, S. Hernandez-Gomez, N. Fabbri, and S. Gherardini, Quantum 7, 1128 (2023).
* (49) A. Allahverdyan, Phys. Rev. E 90, 032137 (2014).
* (50) M. Lostaglio, Phys. Rev. Lett. 120, 040602 (2018).
* (51) M. Lostaglio, Phys. Rev. Lett. 125, 230603 (2020).
* (52) A. Levy and M. Lostaglio, PRX Quantum 1, 010309 (2020).
* (53) N. Y. Halpern, B. Swingle, and J. Dressel, Phys. Rev. A 97, 042105 (2018).
* (54) J. R. G. Alonso, N. Y. Halpern, and J. Dressel, Phys. Rev. Lett. 122, 040404 (2019).
* (55) D. Arvidsson-Shukur, N. Yunger Halpern, H. Lepage, A. Lasek, C. Barnes, and S. Lloyd, Nat. Comm. 11, 3775 (2020).
* (56) N. B. Lupu-Gladstein, B. Y. Yilmaz, D. R. M. Arvidsson-Shukur, A. Brodutch, A. O. T. Pang, A. M. Steinberg, N. Y. Halpern, Phys. Rev. Lett. 128, 220504 (2022).
* (57) S. Das, S. Modak, and M. N. Bera, Phys. Rev. A 107, 042413 (2023).
* (58) M. F. Pusey, Phys. Rev. Lett. 113, 200401 (2014).
* (59) R. Kunjwal, M. Lostaglio, and M. F. Pusey, Phys. Rev. A 100, 042116 (2019).
* (60) A. Budiyono, M. K. Agusta, B. E. B. Nurhandoko, and H. K. Dipojono, J. Phys. A: Math. Theor. 56, 235304 (2023).
* (61) A. Budiyono, Phys. Rev. A 108, 012431 (2023).
* (62) A. Budiyono, B. E. Gunara, B. E. B. Nurhandoko, and H. K. Dipojono, J. Phys. A: Math. Theor. 56, 435301 (2023).
* (63) S. De Bièvre, J. Math. Phys. 64 (2), 022202 (2023).
* (64) J. Xu, Kirkwood-Dirac classical pure states, arXiv:2210.02876 (2023).
* (65) I. Marvian and R. Spekkens, Phys. Rev. A 94, 052324 (2016).
* (66) M. Cerezo, A. Arrasmith, R. Babbush, S. C. Benjamin, S. Endo, K. Fujii, J. R. McClean, K. Mitarai, X. Yuan, L. Cincio, P. J. Coles, Nature Reviews Physics 3, 625 (2021).
* (67) L. M. Johansen, Physics Letters A 322, 298 (2004).
* (68) M. J. W. Hall, Phys. Rev. A 69, 052113 (2004).
* (69) J. Watrous, The theory of quantum information, Cambridge University Press, Cambridge, 2018.
Corresponding author:<EMAIL_ADDRESS>
# Heat statistics in the relaxation process of the Edwards-Wilkinson elastic
manifold
Yu-Xin Wu, Jin-Fu Chen, Ji-Hui Pei, Fan Zhang, and H. T. Quan
School of Physics, Peking University, Beijing, 100871, China
Collaborative Innovation Center of Quantum Matter, Beijing, 100871, China
Frontiers Science Center for Nano-optoelectronics, Peking University, Beijing, 100871, China
(August 28, 2024)
###### Abstract
The stochastic thermodynamics of systems with a few degrees of freedom has
been studied extensively so far. We would like to extend the study to systems
with more degrees of freedom and, even further, to continuous fields with
infinite degrees of freedom. The simplest case of a continuous stochastic field
is the Edwards-Wilkinson elastic manifold. It is an exactly solvable model for
which the heat statistics in the relaxation process can be calculated
analytically. The cumulants require a cutoff spacing to avoid ultra-violet
divergence. The scaling behavior of the heat cumulants with time and the system
size, and the large deviation rate function of the heat statistics in the large
size limit, are obtained.
## I Introduction
Historically, people studied thermodynamics in macroscopic systems such as
ideal gas with up to $10^{23}$ molecules. Due to the huge number of degrees of
freedom in the macroscopic scale, it is impossible to extract the trajectories
of individual particles explicitly. Hence it is not possible to study
thermodynamics of macroscopic systems in arbitrarily far-from-equilibrium
processes. Nevertheless, for mesoscopic systems with only a few degrees of
freedom, stochastic dynamics (Langevin equation, Fokker-Planck equation,
master equation) provides detailed information about the system. Prominent
examples of mesoscopic systems include colloidal particles, macromolecules,
nanodevices and so on (Livi and Politi, 2017; Luca Peliti, 2021). In all these
examples, researchers focus on the dynamics of a few degrees of freedom of the
system while coarse-graining all the degrees of freedom of the reservoir.
Mesoscopic systems can be driven out of equilibrium by external driving, for
instance, by varying the temperature or by controlling them with optical
tweezers (Hummer and Szabo, 2001; Liphardt _et al._ , 2002; Wang _et al._ ,
2002; Blickle _et al._ , 2006; Douarche _et al._ , 2006; Harris _et al._ ,
2007; Imparato _et al._ , 2007; Toyabe _et al._ , 2010; Gupta _et al._ ,
2011; Alemany _et al._ , 2012; Gieseler _et al._ , 2014; Jun _et al._ ,
2014; Koski _et al._ , 2014; Lee _et al._ , 2015; Martínez _et al._ , 2015;
Hoang _et al._ , 2018).
With the equation of motion, e.g., Langevin equation, Fokker-Planck equation
or master equation, researchers are able to establish a framework of
thermodynamics for mesoscopic systems in arbitrarily far-from-equilibrium
processes. This is stochastic thermodynamics in which thermodynamic quantities
such as work, heat and entropy production in nonequilibrium processes have
been explored extensively in both classical and quantum realms (Jarzynski,
1997; Mazonka and Jarzynski, 1999; Narayan and Dhar, 2003; Speck and Seifert,
2004; van Zon and Cohen, 2004; Lua and Grosberg, 2005; Speck and Seifert,
2005; Taniguchi and Cohen, 2006; Imparato _et al._ , 2007; Quan _et al._ ,
2008; Engel, 2009; Fogedby and Imparato, 2009; Minh and Adib, 2009; Chatterjee
and Cherayil, 2010; Gomez-Solano _et al._ , 2011; Nickelsen and Engel, 2011;
Speck, 2011; Kwon _et al._ , 2013; Jiménez-Aquino and Velasco, 2013; Ryabov
_et al._ , 2013; Jarzynski _et al._ , 2015; Salazar and Lira, 2016; Zhu _et
al._ , 2016; Funo and Quan, 2018a, b; Hoang _et al._ , 2018; Pagare and
Cherayil, 2019; Fogedby, 2020; Chen _et al._ , 2021; Gupta and Sivak, 2021;
Chen and Quan, 2023; Paraguassú _et al._ , 2023). In the study of work or
heat distribution for extreme nonequilibrium processes, rare events with
exponentially small probabilities have dominant contributions, making finite
sampling errors particularly serious. Hence previous studies, be they
experimental or computational, are predominantly for small systems,
i.e., those with a few degrees of freedom (Hartmann, 2014). Nevertheless,
systems with a few degrees of freedom are too special. Therefore it is
desirable to extend the study of stochastic thermodynamics to more complicated
systems. We thus would like to extend the studies to systems with more degrees
of freedom, for example, stochastic fields. Hopefully in some exactly solvable
model we can obtain analytical results about work and heat distribution. These
rigorous results about work or heat distribution in systems with many degrees
of freedom not only have pedagogical value but also may bring some insights to
the understanding of thermodynamics in extreme nonequilibrium processes, as P.
W. Anderson once advocated, “More is different” (Anderson, 1972). While many
researchers are interested in the dynamic properties of stochastic fields
(Forrest and Tang, 1990; Antal and Rácz, 1996; Racz, 2003; Vvedensky, 2003;
Bustingorry _et al._ , 2007), less research is carried out from the
perspective of stochastic thermodynamics except (Mallick _et al._ , 2011; Wio
_et al._ , 2017; Rodríguez and Wio, 2019; Wio _et al._ , 2020a, b) so far as
we know.
In this article we study the thermodynamics of an elastic manifold whose
underlying dynamics is described by the Edwards-Wilkinson (EW) equation
(Edwards and Wilkinson, 1982)
$\partial_{t}h(\boldsymbol{x},t)=\nu\nabla^{2}h(\boldsymbol{x},t)+\xi(\boldsymbol{x},t),$
(1)
where $h(\boldsymbol{x},t)$ is the local height at spatial point
$\boldsymbol{x}$ at time $t$, $\nu$ is the diffusive coefficient and
$\xi(\boldsymbol{x},t)$ is the Gaussian white noise.
The problem we analyze is the relaxation of an elastic manifold described by
the EW equation. The elastic manifold is initially put in contact with a heat
reservoir at the inverse temperature $\beta^{\prime}$. After initial
equilibration with the first heat reservoir at $\beta^{\prime}$ the system is
detached from it, and is put in contact with a second heat reservoir at the
inverse temperature $\beta$. The manifold subsequently tries to adapt to the
working temperature (Bustingorry _et al._ , 2007). The relaxation is
characterized by the stochastic heat absorbed from/released into the
surrounding reservoir during a period of time $\tau$. We are interested in the
average and fluctuation of the heat in such a process. We find several generic
properties of the average and fluctuating heat in the relaxation process of
the EW elastic manifold. By employing the Feynman-Kac method (Chen _et al._ ,
2021; Limmer _et al._ , 2021), we obtain analytical results of the
characteristic function of heat for the EW model during an arbitrary
relaxation period $\tau$ with an arbitrary diffusive coefficient $\nu$ and
analyze the scaling behavior of the cumulants of heat with time. Analytical
results of the heat statistics bring important insights into understanding the
fluctuating property of heat in such a concrete and exactly solvable model. We
also verify from the analytical results that the heat statistics satisfy the
fluctuation theorem of heat exchange (Jarzynski and Wójcik, 2004). The large
deviation rate function of heat statistics in the large size limit is also
analyzed.
The rest of this article is organized as follows. In Section II we introduce
the EW model. In Section III we define the stochastic heat and obtain
analytical results of the characteristic function of heat using the Feynman-
Kac approach. We also compute the cumulants of heat and discuss their scaling
behavior with time and the system size. Conclusions are given in Section IV.
## II The model
A $d$-dimensional elastic manifold, with finite size $2L$ in each direction,
joggles under thermal noise. Its local height $h(\boldsymbol{x},t)$ at spatial
point $\boldsymbol{x}$ at time $t$ evolves according to the EW equation Eq.
(1) which takes the form of a multivariable overdamped Langevin equation (Livi
and Politi, 2017). The thermal noise $\xi(\boldsymbol{x},t)$ is white in
nature, i.e., $\langle\xi(\boldsymbol{x},t)\rangle=0,$
$\langle\xi(\boldsymbol{x},t)\xi(\boldsymbol{x}^{\prime},t^{\prime})\rangle=\Gamma\delta(\boldsymbol{x}-\boldsymbol{x}^{\prime})\delta(t-t^{\prime}),$
with amplitude $\Gamma=2/\beta$. The EW energy is just that of a massless
field with Hamiltonian $H_{S}=\nu\int d\boldsymbol{x}(\nabla
h(\boldsymbol{x},t))^{2}/2$. Here the subscript $S$ refers to the system.
Initially, the system is prepared in an equilibrium state with the inverse
temperature $\beta^{\prime}$ characterized by a Gibbs-Boltzmann distribution
in the configuration space, i.e., the probability ${\cal P}(h,t)$ to find the
system in the configuration $\\{h(\boldsymbol{x},t)\\}$ is the Gibbs-Boltzmann
distribution
${\cal P}(h,0)={\cal
N}^{\prime-1}\exp\Big{[}-\beta^{\prime}\cdot\frac{\nu}{2}\int
d\boldsymbol{x}\Big{(}\nabla h(\boldsymbol{x},0)\Big{)}^{2}\Big{]}$ (2)
where ${\cal N}^{\prime}$ is the normalization constant
$\displaystyle{\cal N}^{\prime}$ $\displaystyle=\int
dh(\boldsymbol{x},0)\exp\Big{[}-\beta^{\prime}\cdot\frac{\nu}{2}\int
d\boldsymbol{x}\Big{(}\nabla h(\boldsymbol{x},0)\Big{)}^{2}\Big{]}.$ (3)
Here the integration in the normalization constant is taken over all possible
initial configurations while the one in the exponential factor is taken over
all spatial points.
After initial equilibration, the system is detached from the first heat
reservoir, and is placed in contact with a second heat reservoir at the
inverse temperature $\beta$, which is different from $\beta^{\prime}$. The
elastic manifold subsequently relaxes towards the equilibrium state at
temperature $\beta$ since no external driving is involved. The heat
absorbed/released is a fluctuating variable for the system undergoing
stochastic motion. We are interested in the heat statistics in such a
relaxation process.
For a finite-size manifold we take periodic boundary conditions along each
$\boldsymbol{x}$ direction. Following Refs. (Antal and Rácz, 1996; Livi and
Politi, 2017) we employ a Fourier representation of the height field
$h(\boldsymbol{x},t)=\frac{1}{(2\pi)^{d}}\underset{\boldsymbol{q}}{\sum}e^{i\boldsymbol{q}\cdot\boldsymbol{x}}h_{\boldsymbol{q}}(t),$
(4)
$h_{\boldsymbol{q}}(t)=\int
d\boldsymbol{x}e^{-i\boldsymbol{q}\cdot\boldsymbol{x}}h(\boldsymbol{x},t),$
(5)
where $\boldsymbol{q}$ represents a wavevector with $q_{j}=n_{j}\pi/L\
(j=x,y,z\dots,\ n_{j}=\pm 1,\pm 2...$ and
$h_{\boldsymbol{q}=\boldsymbol{0}}(t)=0$ for all time $t)$ (Bustingorry _et
al._ , 2007).
The evolution of the Fourier component is given by
$\frac{\partial h_{\boldsymbol{q}}(t)}{\partial t}=-\nu
q^{2}h_{\boldsymbol{q}}(t)+\xi_{\boldsymbol{q}}(t),$ (6)
$\langle\xi_{\boldsymbol{q}}(t)\rangle=0,$ (7)
$\displaystyle\langle\xi_{\boldsymbol{q}}(t)\xi_{\boldsymbol{q^{\prime}}}(t^{\prime})\rangle$
$\displaystyle=\frac{2}{\beta}(2\pi)^{d}\delta(t-t^{\prime})\delta_{\boldsymbol{q},-\boldsymbol{q}^{\prime}}.$
(8)
The normalization constant in Eq. (3) can be computed as
$\displaystyle{\cal N}^{\prime}$ $\displaystyle=\int
d\\{h_{\boldsymbol{q}}(0)\\}\exp\Big{[}-\beta^{\prime}\nu\frac{1}{(2\pi)^{2d}}\underset{\boldsymbol{q}(q_{j}>0)}{\sum}q^{2}h_{\boldsymbol{q}}(0)h_{-\boldsymbol{q}}(0)\Big{]}$
$\displaystyle=\underset{\boldsymbol{q}(q_{j}>0)}{\prod}\frac{\pi(2\pi)^{2d}}{\beta^{\prime}\nu
q^{2}}.$ (9)
where $q^{2}$ stands for the modulus squared of $\boldsymbol{q}$.
The probability density of the system state ${\cal P}(h,t)$ evolves under the
governing of the Fokker-Planck equation
$\displaystyle\frac{\partial{\cal P}(h,t)}{\partial t}$ $\displaystyle=-\int
d\boldsymbol{x}\frac{\delta}{\delta
h}\Big{[}\nu\nabla^{2}h(\boldsymbol{x},t){\cal P}(h,t)\Big{]}$
$\displaystyle\quad+\frac{\Gamma}{2}\int
d\boldsymbol{x}\frac{\delta^{2}}{\delta h^{2}}{\cal P}(h,t).$ (10)
In the Fourier space, the probability of the height field configuration is the
product of the real and the imaginary part over all modes
${\cal P}(\\{h_{\boldsymbol{q}}\\},t)=\underset{\boldsymbol{q}}{\prod}{\cal
P}(h_{\boldsymbol{q}},t)=\underset{\boldsymbol{q}}{\prod}{\cal
P}(h_{\boldsymbol{q}}^{R},t){\cal P}(h_{\boldsymbol{q}}^{I},t)$ (11)
where
$h_{\boldsymbol{q}}^{R}=\mathrm{Re}(h_{\boldsymbol{q}}),\quad
h_{\boldsymbol{q}}^{I}=\mathrm{Im}(h_{\boldsymbol{q}}).$ (12)
The Fokker-Planck equation in the Fourier space can be then written into two
independent parts: the real part and the imaginary part (Bettencourt, 2001)
$\displaystyle\frac{\partial{\cal P}(h_{\boldsymbol{q}}^{R,I},t)}{\partial t}$
$\displaystyle=\frac{(2\pi)^{d}}{2\beta}\frac{\partial^{2}{\cal
P}}{\partial(h_{\boldsymbol{q}}^{R,I})^{2}}+\nu q^{2}{\cal P}+\nu
q^{2}h_{\boldsymbol{q}}^{R,I}\frac{\partial{\cal P}}{\partial
h_{\boldsymbol{q}}^{R,I}}.$ (13)
Having introduced the model, in the following we will calculate the heat
statistics in the relaxation process.
## III Heat Statistics
In this section we study heat statistics of the EW elastic manifold in the
relaxation process. First, we obtain the analytical results of heat statistics
and verify the fluctuation theorem of heat exchange. Second, we study the
asymptotic behavior of the cumulants. Third, we calculate the large deviation
function of heat statistics in the large size limit.
### III.1 Characteristic function
Since no external driving is applied to the system, no work is performed
during the relaxation process. The fluctuating heat $Q$ absorbed from the heat
reservoir equals the energy difference between the initial and the final
states over a time period $\tau$
$Q=H_{S}(h(x,\tau))-H_{S}(h(x,0)).$ (14)
The characteristic function of heat $\chi_{\tau}(u)$ is defined as the Fourier
transform of the heat distribution
$\chi_{\tau}(u)=\int dQ\exp(iuQ){\cal P}(Q,\tau).$ (15)
Here ${\cal P}(Q,\tau)$ stands for the probability of the heat $Q$ transferred
from the heat reservoir to the system during the period of time $\tau$. The
characteristic function of heat $\chi_{\tau}(u)$ can be calculated using the
Feynman-Kac approach (Chen _et al._ , 2021; Limmer _et al._ , 2021; Chen and
Quan, 2023)
$\displaystyle\chi_{\tau}(u)$ $\displaystyle=\langle\exp(iuQ)\rangle$
$\displaystyle=\int dhe^{iuH_{S}(h(x,\tau))}\eta(h,\tau)$ (16)
where the probability-density-like function $\eta(h,\tau)$ satisfies Eq. (10)
and Eq. (13) with the initial condition
$\displaystyle\eta(h,0)$ $\displaystyle=e^{-iuH_{S}(h(x,0))}{\cal P}(h,0).$
(17)
The probability-density-like function $\eta(h,\tau)$ is solved in the Fourier
space (See Appendix A for detailed derivation) and we obtain the
characteristic function of heat for the relaxation process over a time period
of $\tau$
$\chi_{\tau}(u)=\underset{\boldsymbol{q}(q_{j}\geq\frac{\pi}{L})}{\prod}\frac{\beta\beta^{\prime}\exp(2\nu
q^{2}\tau)}{-u(i\beta^{\prime}-i\beta-u)\Big{[}\exp(2\nu
q^{2}\tau)-1\Big{]}+\beta\beta^{\prime}\exp(2\nu q^{2}\tau)}.$ (18)
The wavevector component in each direction only takes positive discrete values
$q_{j}=n_{j}\pi/L,n_{j}=1,2...$
We do the self-consistent check of the analytic result Eq. (18) from three
aspects:
1\. The distribution of heat satisfies the conservation of probability
$\chi_{\tau}(0)=1.$ (19)
2\. One can see the characteristic function of heat exhibits the following
symmetry:
$\chi_{\tau}(u)=\chi_{\tau}(i\beta^{\prime}-i\beta-u),$ (20)
indicating that the heat distribution satisfies the fluctuation theorem of
heat exchange (Jarzynski and Wójcik, 2004; van Zon and Cohen, 2004; Chen and
Quan, 2023)
$\displaystyle\langle e^{iuQ}\rangle$ $\displaystyle=\langle
e^{(-iu+\beta-\beta^{\prime})Q}\rangle.$ (21)
By setting $u=0$, we obtain the relation
$\chi_{\tau}(i\beta^{\prime}-i\beta)=1$, which is exactly the fluctuation
theorem of heat exchange in the integral form
$\langle\exp[-(\beta^{\prime}-\beta)Q]\rangle=1$ (Jarzynski and Wójcik, 2004).
3\. In the long time limit $\tau\rightarrow\infty$, the characteristic
function becomes
$\displaystyle\underset{\tau\to\infty}{\lim}\chi_{\tau}(u)$
$\displaystyle=\underset{\boldsymbol{q}(q_{j}\geq\frac{\pi}{L})}{\prod}\frac{\beta\beta^{\prime}}{(u+i\beta)(u-i\beta^{\prime})}.$
This result, independent of the relaxation dynamics, can be written in the
form
$\displaystyle\underset{\tau\to\infty}{\lim}\chi_{\tau}(u)$
$\displaystyle=\Big{\langle}e^{iuH_{S}(h(x,\tau))}\Big{\rangle}_{\beta}\Big{\langle}e^{-iuH_{S}(h(x,0))}\Big{\rangle}_{\beta^{\prime}}$
(22)
where the initial distribution (thermal equilibrium with the inverse
temperature $\beta^{\prime}$) and the final distribution (thermal equilibrium
with the inverse temperature $\beta$) are sampled independently, reflecting
the complete thermalization of the system (Fogedby and Imparato, 2009). This
result agrees with our intuition.
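All three checks can also be confirmed numerically from Eq. (18). The sketch
below (in $d=1$; the parameter values are arbitrary, and the factor
$1-\exp(-2\nu q^{2}\tau)$ form is used only for numerical stability at large
$\tau$) evaluates $\chi_{\tau}$ at the required arguments.

import numpy as np

nu, L, a = 1.0, 10.0, 0.5
beta_p, beta = 4.0, 2.0  # initial (beta') and final inverse temperatures
q = np.pi / L * np.arange(1, int(L / a) + 1)  # modes q = n pi / L, d = 1

def chi(u, tau):
    w = 1.0 - np.exp(-2.0 * nu * q**2 * tau)  # Eq. (18) rescaled by exp(2 nu q^2 tau)
    return np.prod(beta * beta_p / (-u * (1j * beta_p - 1j * beta - u) * w
                                    + beta * beta_p))

print(chi(0.0, 3.0))                   # check 1: normalization, -> 1
print(chi(1j * (beta_p - beta), 3.0))  # check 2: exchange fluctuation theorem, -> 1
u = 0.7                                # check 3: long-time limit
print(chi(u, 1e6),
      np.prod(beta * beta_p / ((u + 1j * beta) * (u - 1j * beta_p))))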
Figure 1: Average heat as a function of time. Parameters for both panels:
$d=1,\ \nu=1,\ \beta^{\prime}=4,\ \beta=2.$ (a) $\langle Q\rangle$ as a
function of $\tau$ for three system sizes $L=30,35,40$, fixing $a=0.2.$ Inset:
the saturation value of average heat $\langle Q\rangle_{st}$ as a function of
system size $L$. (b) $\langle Q\rangle$ as a function of $\tau$ for three
cutoff spacings $a=0.2,0.5,1.0$, fixing $L=10.$ Inset: the saturation value of
average heat $\langle Q\rangle_{st}$ as a function of cutoff spacing $a$.
### III.2 Cumulants
The cumulants of heat can be derived by taking derivatives of the logarithm of
the characteristic function $\chi_{\tau}(u)$ with respect to $u$ at $u=0$,
with the first cumulant representing the average heat and the second one
standing for the variance.
The average heat is
$\displaystyle\langle Q\rangle$
$\displaystyle=\frac{1}{i}\frac{d\ln\chi_{\tau}(u)}{du}|_{u=0}$
$\displaystyle=\underset{\boldsymbol{q}(\frac{\pi}{a}\geq
q_{j}\geq\frac{\pi}{L})}{\sum}\frac{\Big{[}1-\exp(-2\nu
q^{2}\tau)\Big{]}(\beta^{\prime}-\beta)}{\beta\beta^{\prime}}$
$\displaystyle=\frac{\beta^{\prime}-\beta}{\beta\beta^{\prime}}\Big{(}\frac{\pi}{L}\Big{)}^{-d}\int_{\frac{\pi}{L}}^{\frac{\pi}{a}}d\boldsymbol{q}\Big{[}1-\exp(-2\nu
q^{2}\tau)\Big{]}.$ (23)
A cutoff $\pi/a$ of the wavevector is needed to avoid ultra-violet divergence,
i.e., we introduce a smallest spacing $a$ in this elastic manifold (Kerson
Huang, 1987; Parisi and Machta, 1989; Livi and Politi, 2017). Since we
consider a continuous field, the cutoff spacing is always much smaller than
the system size $a\ll L$. We will see that the choice of the value of $a$ will
influence the average heat (See Fig. 1 (b) inset plot).
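Because each Fourier mode relaxes as an independent Ornstein-Uhlenbeck process
with rate $\nu q^{2}$, Eq. (23) can be checked by exact Monte Carlo sampling,
with no time stepping. In the sketch below (a minimal $d=1$ check with
arbitrary parameter values) the initial point of each quadratic degree of
freedom is drawn from the $\beta^{\prime}$ equilibrium and propagated with the
exact OU transition kernel; the per-mode stiffness drops out of the heat, so it
is set to one.

import numpy as np

rng = np.random.default_rng(4)
nu, L, a, tau = 1.0, 10.0, 0.5, 0.3
beta_p, beta = 4.0, 2.0
q = np.pi / L * np.arange(1, int(L / a) + 1)
n_samp = 200_000

Q = np.zeros(n_samp)
for qk in q:
    decay = np.exp(-nu * qk**2 * tau)
    for _ in range(2):  # real and imaginary parts of h_q
        x0 = rng.normal(0.0, np.sqrt(1.0 / beta_p), n_samp)  # equipartition at beta'
        xt = x0 * decay + rng.normal(0.0, np.sqrt((1.0 - decay**2) / beta), n_samp)
        Q += 0.5 * (xt**2 - x0**2)  # heat = energy change, with H = x^2/2

analytic = np.sum((beta_p - beta) / (beta * beta_p)
                  * (1.0 - np.exp(-2.0 * nu * q**2 * tau)))
print(Q.mean(), analytic)  # agree within Monte Carlo error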
Figure 2: Average heat of a mode $\boldsymbol{q}$ for different time
durations. The parameters take values $L=30,\ d=1,\ \nu=1,\ \beta^{\prime}=4,\
\beta=2$ and the curves correspond to three values of time delay
$\tau=10^{1},10^{0},10^{-1}$ from the bottom to the top. The dashed line
stands for the saturation value.
We rewrite the average heat $\langle Q\rangle$ with a change of the variable
$\boldsymbol{s}=L\boldsymbol{q}$
$\displaystyle\langle Q\rangle$
$\displaystyle=\frac{(\beta^{\prime}-\beta)}{\beta\beta^{\prime}\pi{}^{d}}f\Big{(}\frac{\nu\tau}{L^{2}}\Big{)},$
(24)
where
$\displaystyle f(r)$
$\displaystyle=\int_{\pi}^{\frac{L\pi}{a}}d\boldsymbol{s}\Big{[}1-e^{-2rs^{2}}\Big{]}$
$\displaystyle=(\frac{L-a}{a}\pi)^{d}+(\frac{\pi}{8r})^{\frac{d}{2}}\Big{[}\mathrm{Erf}(\pi\sqrt{2r})-\mathrm{Erf}(\frac{\pi
L\sqrt{2r}}{a})\Big{]}^{d}.$ (25)
$\mathrm{Erf}(r)$ is the error function.
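The closed form can be checked against direct quadrature. A short sketch for
$d=1$ (the values of $r$, $L$, and $a$ are arbitrary):

import numpy as np
from scipy.integrate import quad
from scipy.special import erf

L, a = 10.0, 0.5
for r in (1e-3, 1e-1, 1.0):
    direct, _ = quad(lambda s: 1.0 - np.exp(-2.0 * r * s**2), np.pi, L * np.pi / a)
    closed = ((L - a) / a * np.pi
              + np.sqrt(np.pi / (8.0 * r)) * (erf(np.pi * np.sqrt(2.0 * r))
                                              - erf(np.pi * L * np.sqrt(2.0 * r) / a)))
    print(direct, closed)  # the two columns agree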
In the following we discuss the asymptotic behavior of the average heat as a
function of time. For one-dimensional case, the average heat as a function of
time is illustrated in Fig. 1. At the initial stage, for $\tau\ll a^{2}/\nu$,
$\displaystyle\langle Q\rangle$
$\displaystyle\approx\frac{2\pi^{2}}{3a^{2}}\frac{(\beta^{\prime}-\beta)}{\beta\beta^{\prime}}\nu\tau\frac{L}{a}.$
(26)
The average heat initially increases with time linearly. This is Newton’s law
of cooling.
For the intermediate time $a^{2}/\nu\ll\tau\ll L^{2}/\nu$,
$\displaystyle\langle Q\rangle$
$\displaystyle\approx\frac{(\beta^{\prime}-\beta)}{\beta\beta^{\prime}}\frac{L}{a}\Big{(}1-\frac{a}{\sqrt{8\nu}}\tau^{-1/2}\Big{)}.$
(27)
It exhibits $\tau^{-1/2}$ scaling with time.
In the long time limit, for $\tau\gg L^{2}/\nu$,
$\displaystyle\langle Q\rangle$
$\displaystyle\rightarrow\frac{\beta^{\prime}-\beta}{\beta\beta^{\prime}}\frac{L}{a},$
(28)
the average heat saturates, which is a consequence of the equipartition
theorem. The saturation value of heat is an extensive quantity which scales
linearly with the system size $L$. It will not diverge for a finite spacing
$a$ as a result of finite resolution.
From Eq. (23) one can see the average heat for every $\boldsymbol{q}$ mode is
$\langle
Q_{\boldsymbol{q}}\rangle=\frac{\beta^{\prime}-\beta}{\beta\beta^{\prime}}\Big{(}\frac{\pi}{L}\Big{)}^{-d}\Big{[}1-\exp(-2\nu
q^{2}\tau)\Big{]}.$ (29)
As we can see from this equation and Fig. 2, heat transfer occurs mainly
through the high-energy modes and proceeds more quickly in them than in the
low-energy ones.
For fixed time duration $\tau$, in the small wavevector limit, i.e., $2\nu
q^{2}\tau\ll 1$, the average heat of the mode grows linearly in $\tau$
$\langle
Q_{\boldsymbol{q}}\rangle=2\nu\tau\frac{\beta^{\prime}-\beta}{\beta\beta^{\prime}}\Big{(}\frac{\pi}{L}\Big{)}^{-d}q^{2},$
(30)
which is Newton’s law of cooling.
On the other hand, if one takes the large wavevector limit, i.e., $2\nu
q^{2}\tau\gg 1$, the average heat reaches the asymptotic value
$\langle
Q_{\boldsymbol{q}}\rangle=\frac{\beta^{\prime}-\beta}{\beta\beta^{\prime}}\Big{(}\frac{\pi}{L}\Big{)}^{-d},$
(31)
which is the result of the equipartition theorem.
Figure 3: Variance of heat as a function of time. Parameters for both panels:
$d=1,\ \nu=1,\ \beta^{\prime}=4,\ \beta=2.$ (a) $\mathrm{var}(Q)$ as a
function of $\tau$ for three system sizes $L=30,35,40$, fixing $a=0.2.$ Inset:
the saturation value of heat variance $\mathrm{var}(Q)_{st}$ as a function of
system size $L$. (b) $\mathrm{var}(Q)$ as a function of $\tau$ for three
cutoff spacings $a=0.2,0.25,0.3$, fixing $L=10.$ Inset: the saturation value
of heat variance $\mathrm{var}(Q)_{st}$ as a function of cutoff spacing $a$.
From the analytical result of heat statistics Eq. (18) we can also study the
variance of heat. The variance of heat is defined as $\mathrm{var}(Q)=\langle
Q^{2}\rangle-\langle Q\rangle^{2}$, and can be calculated as
$\displaystyle\mathrm{var}(Q)$
$\displaystyle=\frac{1}{i^{2}}\frac{d^{2}\ln\chi_{\tau}(u)}{du^{2}}|_{u=0}$
$\displaystyle=\Big{(}\frac{\pi}{L}\Big{)}^{-d}\frac{1}{\beta^{2}\beta^{\prime
2}}\int_{\frac{\pi}{L}}^{\frac{\pi}{a}}d\boldsymbol{q}e^{-4\nu
q^{2}\tau}(-1+e^{2\nu q^{2}\tau})$ $\displaystyle\quad\bigg{[}(-1+e^{2\nu
q^{2}\tau})\beta^{2}+2\beta\beta^{\prime}+(-1+e^{2\nu q^{2}\tau})\beta^{\prime
2}\bigg{]}$ $\displaystyle=\frac{1}{\beta^{2}\beta^{\prime
2}\pi^{d}}g\Big{(}\frac{\nu\tau}{L^{2}}\Big{)}$ (32)
where
$\displaystyle g(r)$
$\displaystyle=\int_{\pi}^{\frac{L\pi}{a}}d\boldsymbol{s}\Big{[}(\beta^{2}+\beta^{\prime
2})(1-2e^{-2rs^{2}}+e^{-4rs^{2}})$
$\displaystyle\quad+2\beta\beta^{\prime}(-e^{-4rs^{2}}+e^{-2rs^{2}})\Big{]}.$
In the one-dimensional case, for $\tau\ll a^{2}/\nu$, we have
$\mathrm{var}(Q,\tau)\approx\frac{4\pi^{2}}{3a^{2}\beta\beta^{\prime}}\nu\tau\frac{L}{a}.$
(33)
It grows with time linearly in the very beginning.
For $a^{2}/\nu\ll\tau\ll L^{2}/\nu$,
$\displaystyle\mathrm{var}(Q,\tau)$
$\displaystyle\approx\frac{4\pi^{4}\nu^{2}\tau^{2}}{5\beta^{2}\beta^{\prime
2}a^{4}}(\beta^{2}-3\beta\beta^{\prime}+\beta^{\prime 2})\frac{L}{a}.$ (34)
It scales as $\tau^{2}$ as time elapses.
Finally, for $\tau\gg L^{2}/\nu$, it reaches the saturation value in the long
time,
$\displaystyle\mathrm{var}(Q,\tau)$
$\displaystyle\approx\frac{\beta^{2}+\beta^{\prime 2}}{\beta^{2}\beta^{\prime
2}}\frac{L}{a}.$ (35)
As can be seen from Fig. 3, the variance of heat depends on the cutoff spacing
$a$ as well. Similar to the average heat, the saturation value of variance
increases linearly with the system size $L$ and will not diverge for finite
spacing $a$. Higher order cumulants of heat can be analyzed in a similar way.
### III.3 Large deviation rate function
We can also study the large deviation rate function of the heat statistics in
the large size limit.
The scaled cumulant generating function (SCGF) $\phi(u,\tau)$ of heat per
volume over time $\tau$, which is defined through
$\langle\exp[(2L)^{d}u\frac{Q}{(2L)^{d}}]\rangle\asymp_{L\to\infty}e^{(2L)^{d}\phi(u,\tau)}$
(36)
or
$\displaystyle\phi(u,\tau)$
$\displaystyle=\underset{L\to\infty}{\lim}\frac{1}{(2L)^{d}}\ln\langle\exp[(2L)^{d}u\frac{Q}{(2L)^{d}}]\rangle$
$\displaystyle=\underset{L\to\infty}{\lim}\frac{1}{(2L)^{d}}\ln\chi_{\tau}(-iu),$
(37)
can be computed by
$\displaystyle\phi(u,\tau)$
$\displaystyle=\underset{L\to\infty}{\lim}-\frac{1}{(2\pi)^{d}}\int_{\frac{\pi}{L}}^{\frac{\pi}{a}}d\boldsymbol{q}\ln\Big{(}\frac{-u(\beta^{\prime}-\beta+u)}{\beta\beta^{\prime}}\Big{[}1-\exp(-2\nu
q^{2}\tau)\Big{]}+1\Big{)}.$
The large deviation rate function for heat per volume over time $\tau$ is just
the Legendre-Fenchel transform of the SCGF (Touchette, 2009)
$\displaystyle I(\frac{Q}{(2L)^{d}},\tau)$
$\displaystyle=\underset{L\to\infty}{\lim}-\frac{1}{(2L)^{d}}\ln{\cal
P}(\frac{Q}{(2L)^{d}},\tau)$ $\displaystyle=\underset{u\in\mathbb{R}}{\sup}\Big{\\{}u\frac{Q}{(2L)^{d}}-\phi(u,\tau)\Big{\\}}.$ (38)
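A numeric sketch of this construction in $d=1$: the SCGF is evaluated by
quadrature and the sup in Eq. (38) is approximated on a finite grid of $u$
(the parameter values are arbitrary; $\phi(u,\tau)$ is finite for
$-\beta^{\prime}<u<\beta$, where the argument of the logarithm stays positive).

import numpy as np
from scipy.integrate import quad

nu, a, tau = 1.0, 0.5, 1.0
beta_p, beta = 4.0, 2.0

def phi(u):
    # large-L limit: the lower integration bound pi/L -> 0
    f = lambda q: np.log(1.0 - u * (beta_p - beta + u) / (beta * beta_p)
                         * (1.0 - np.exp(-2.0 * nu * q**2 * tau)))
    val, _ = quad(f, 0.0, np.pi / a)
    return -val / (2.0 * np.pi)

us = np.linspace(-beta_p + 0.05, beta - 0.05, 400)
phis = np.array([phi(u) for u in us])
for qv in np.linspace(-0.1, 0.3, 9):   # heat per unit volume Q/(2L)
    print(qv, np.max(us * qv - phis))  # rate function I, Eq. (38)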
We emphasize that the large deviation rate function of the work distribution in
the large size limit has been studied in other models previously (see, e.g.,
Refs. (Gambassi and Silva, 2012; Hartmann, 2014)). But as far as we know, the
large deviation function of heat in the large size limit has not been reported
before.
With the large deviation rate function Eq. (38), we can write down the
probability distribution of heat per volume over time $\tau$ as
${\cal
P}(\frac{Q}{(2L)^{d}},\tau)\asymp_{L\to\infty}\exp\Big{[}-(2L)^{d}I(\frac{Q}{(2L)^{d}},\tau)\Big{]},$
(39)
which demonstrates the dependence of the heat distribution on the system size.
The fluctuation theorem of heat exchange Eq. (21) can also be formulated
in terms of the large deviation rate function.
## IV Conclusion
Previously, the stochastic thermodynamics of systems with a few degrees of
freedom has been studied extensively both in classical and quantum realms
(Jarzynski, 1997; Mazonka and Jarzynski, 1999; Narayan and Dhar, 2003; Speck
and Seifert, 2004; van Zon and Cohen, 2004; Lua and Grosberg, 2005; Speck and
Seifert, 2005; Taniguchi and Cohen, 2006; Imparato _et al._ , 2007; Quan _et
al._ , 2008; Engel, 2009; Fogedby and Imparato, 2009; Minh and Adib, 2009;
Chatterjee and Cherayil, 2010; Gomez-Solano _et al._ , 2011; Nickelsen and
Engel, 2011; Speck, 2011; Kwon _et al._ , 2013; Jiménez-Aquino and Velasco,
2013; Ryabov _et al._ , 2013; Jarzynski _et al._ , 2015; Salazar and Lira,
2016; Zhu _et al._ , 2016; Funo and Quan, 2018a, b; Hoang _et al._ , 2018;
Pagare and Cherayil, 2019; Fogedby, 2020; Chen _et al._ , 2021; Gupta and
Sivak, 2021; Chen and Quan, 2023; Paraguassú _et al._ , 2023). However, less
is known about systems with many degrees of freedom. What new results the
complexity of many degrees of freedom will bring to stochastic thermodynamics
remains largely unexplored.
In this article, we extend previous studies of the stochastic thermodynamics
of systems with a few degrees of freedom to a continuous field. We compute the
heat statistics in the relaxation process of an exactly solvable model, an
elastic manifold whose underlying dynamics is described by the
Edwards-Wilkinson equation. By employing the Feynman-Kac approach, we
calculate analytically the characteristic function of heat for an arbitrary
relaxation time. The analytical results for the heat statistics have
pedagogical value and may bring important insights into the understanding of
thermodynamics in far-from-equilibrium processes. For example, the cumulants
of heat in such a system with many degrees of freedom require a spatial cutoff
to avoid the ultraviolet divergence, which is a consequence of the finite
resolution. We also analyze the scaling behavior of the cumulants with time
and system size. In addition, the large deviation rate function of heat in the
large size limit is analyzed.
This work can be regarded as an early step in the stochastic thermodynamics of
continuous fields. More interesting problems remain to be explored, such as
the definition of thermodynamic quantities at every space-time point, the
extension to nonlinear models, and the work statistics in the presence of
external driving. Studies of these issues will be presented in our future
work.
###### Acknowledgements.
This work is supported by the National Natural Science Foundation of China
(NSFC) under Grants No. 12147157, No. 11775001, and No. 11825501.
## Appendix A Derivation of Eq. (50)
Similar to the probability density distribution, the modified function
$\eta(h,t)$ can be written as a product over all modes in Fourier space of
factors for the real and imaginary parts
${\cal\eta}(\\{h_{\boldsymbol{q}}\\},t)=\underset{q_{i}\geq\pi/L}{\prod}{\cal\eta}_{\boldsymbol{q}}(h_{\boldsymbol{q}}^{R},t){\cal\eta}_{\boldsymbol{q}}(h_{\boldsymbol{q}}^{I},t).$
(40)
The probability-density-like function $\eta(h,t)$ obeys the same time
evolution equation as ${\cal P}(h,t)$ in Eq. (13),
$\displaystyle\frac{\partial{\cal\eta}_{\boldsymbol{q}}(h_{\boldsymbol{q}}^{R,I},t)}{\partial t}=\frac{(2\pi)^{d}}{2\beta}\frac{\partial^{2}{\cal\eta}_{\boldsymbol{q}}}{\partial(h_{\boldsymbol{q}}^{R,I})^{2}}+\nu q^{2}{\cal\eta}_{\boldsymbol{q}}+\nu q^{2}h_{\boldsymbol{q}}^{R,I}\frac{\partial{\cal\eta}_{\boldsymbol{q}}}{\partial h_{\boldsymbol{q}}^{R,I}},$ (41)
with the initial condition
$\eta(h,0)=e^{-iuH_{S}(0)}{\cal P}(h,0).$ (42)
Due to the quadratic nature of the EW equation, we assume that the
time-dependent solution $\eta(h,t)$ takes a Gaussian form at all times
$\displaystyle{\cal\eta}_{\boldsymbol{q}}(h_{\boldsymbol{q}}^{R,I},t)=\sqrt{\frac{\beta^{\prime}\nu q^{2}}{\pi(2\pi)^{2d}}}\exp\Big[-A(t)(h_{\boldsymbol{q}}^{R,I})^{2}+B(t)\Big].$ (43)
The coefficients are governed by the following ordinary differential equations
$\dot{A}(t)=-\frac{2(2\pi)^{d}}{\beta}A^{2}(t)+2A(t)\nu q^{2},$ (44)
$\dot{B}(t)=-\frac{(2\pi)^{d}}{\beta}A(t)+\nu q^{2}.$ (45)
The initial condition Eq. (42) fixes the initial values of the coefficients
$A(0)=(\beta^{\prime}+iu)\nu\frac{1}{(2\pi)^{d}}q^{2},$ (46)
$B(0)=0.$ (47)
By solving the above equations, we obtain
$\displaystyle A(t)=\frac{1}{(2\pi)^{d}}\frac{e^{2\nu q^{2}t}\beta(u-i\beta^{\prime})\nu q^{2}}{(e^{2\nu q^{2}t}-1)u-i[\beta+(e^{2\nu q^{2}t}-1)\beta^{\prime}]},$ (48)
$\displaystyle B(t)=\nu q^{2}t+\frac{1}{2}\ln\left[\frac{i\beta}{u-i\beta^{\prime}+i\beta+(i\beta^{\prime}-u)e^{2\nu q^{2}t}}\right].$ (49)
Substituting Eqs. (48) and (49) into Eq. (43), we arrive at
$\displaystyle{\cal\eta}(\{h_{\boldsymbol{q}}\},t)$
$\displaystyle=\underset{q_{i}\geq\pi/L}{\prod}{\cal\eta}_{\boldsymbol{q}}(h_{\boldsymbol{q}}^{R},t){\cal\eta}_{\boldsymbol{q}}(h_{\boldsymbol{q}}^{I},t)$
$\displaystyle=\underset{\boldsymbol{q}(q_{i}\geq\pi/L)}{\prod}\frac{\beta^{\prime}\nu q^{2}}{\pi(2\pi)^{d}}\frac{i\beta\exp(2\nu q^{2}t)}{u-i\beta^{\prime}+i\beta+(i\beta^{\prime}-u)\exp(2\nu q^{2}t)}$
$\displaystyle\qquad\times\exp\bigg\{-\frac{1}{(2\pi)^{d}}\frac{\exp(2\nu q^{2}t)\beta(u-i\beta^{\prime})\nu q^{2}}{\big[\exp(2\nu q^{2}t)-1\big]u-i\big[\beta-\beta^{\prime}+\beta^{\prime}\exp(2\nu q^{2}t)\big]}\Big[(h_{\boldsymbol{q}}^{R})^{2}+(h_{\boldsymbol{q}}^{I})^{2}\Big]\bigg\}.$
(50)
Substituting it into Eq. (16), we obtain the characteristic function of heat
Eq. (18) of the EW elastic manifold in the relaxation process.
## References
* Livi and Politi (2017) R. Livi and P. Politi, _Nonequilibrium Statistical Physics_ (Cambridge University Press, 2017).
* Peliti and Pigolotti (2021) L. Peliti and S. Pigolotti, _Stochastic Thermodynamics_ (Princeton University Press, 2021).
* Hummer and Szabo (2001) G. Hummer and A. Szabo, Proc. Natl. Acad. Sci. 98, 3658 (2001).
* Liphardt _et al._ (2002) J. Liphardt, S. Dumont, S. B. Smith, I. Tinoco, and C. Bustamante, Science 296, 1832 (2002).
* Wang _et al._ (2002) G. M. Wang, E. M. Sevick, E. Mittag, D. J. Searles, and D. J. Evans, Phys. Rev. Lett. 89, 050601 (2002).
* Blickle _et al._ (2006) V. Blickle, T. Speck, L. Helden, U. Seifert, and C. Bechinger, Phys. Rev. Lett. 96, 070603 (2006).
* Douarche _et al._ (2006) F. Douarche, S. Joubaud, N. B. Garnier, A. Petrosyan, and S. Ciliberto, Phys. Rev. Lett. 97, 140603 (2006).
* Harris _et al._ (2007) N. C. Harris, Y. Song, and C.-H. Kiang, Phys. Rev. Lett. 99, 068101 (2007).
* Imparato _et al._ (2007) A. Imparato, L. Peliti, G. Pesce, G. Rusciano, and A. Sasso, Phys. Rev. E 76, 050101 (2007).
* Toyabe _et al._ (2010) S. Toyabe, T. Sagawa, M. Ueda, E. Muneyuki, and M. Sano, Nat. Phys. 6, 988 (2010).
* Gupta _et al._ (2011) A. N. Gupta, A. Vincent, K. Neupane, H. Yu, F. Wang, and M. T. Woodside, Nat. Phys. 7, 631 (2011).
* Alemany _et al._ (2012) A. Alemany, A. Mossa, I. Junier, and F. Ritort, Nat. Phys. 8, 688 (2012).
* Gieseler _et al._ (2014) J. Gieseler, R. Quidant, C. Dellago, and L. Novotny, Nat. Nanotechnology 9, 358 (2014).
* Jun _et al._ (2014) Y. Jun, M. Gavrilov, and J. Bechhoefer, Phys. Rev. Lett. 113, 190601 (2014).
* Koski _et al._ (2014) J. Koski, V. Maisi, T. Sagawa, and J. Pekola, Phys. Rev. Lett. 113, 030601 (2014).
* Lee _et al._ (2015) D. Y. Lee, C. Kwon, and H. K. Pak, Phys. Rev. Lett. 114, 060603 (2015).
* Martínez _et al._ (2015) I. A. Martínez, É. Roldán, L. Dinis, D. Petrov, J. M. R. Parrondo, and R. A. Rica, Nat. Phys. 12, 67 (2015).
* Hoang _et al._ (2018) T. M. Hoang, R. Pan, J. Ahn, J. Bang, H. Quan, and T. Li, Phys. Rev. Lett. 120, 080602 (2018).
* Jarzynski (1997) C. Jarzynski, Phys. Rev. Lett. 78, 2690 (1997).
* Mazonka and Jarzynski (1999) O. Mazonka and C. Jarzynski, (1999), arXiv:cond-mat/9912121 [cond-mat.stat-mech] .
* Narayan and Dhar (2003) O. Narayan and A. Dhar, J. Phys. A: Math. Gen. 37, 63 (2003).
* Speck and Seifert (2004) T. Speck and U. Seifert, Phys. Rev. E 70, 066112 (2004).
* van Zon and Cohen (2004) R. van Zon and E. G. D. Cohen, Phys. Rev. E 69, 056121 (2004).
* Lua and Grosberg (2005) R. C. Lua and A. Y. Grosberg, The Journal of Physical Chemistry B 109, 6805 (2005).
* Speck and Seifert (2005) T. Speck and U. Seifert, Eur. Phys. J. B 43, 521 (2005).
* Taniguchi and Cohen (2006) T. Taniguchi and E. G. D. Cohen, J. Stat. Phys. 126, 1 (2006).
* Quan _et al._ (2008) H. T. Quan, S. Yang, and C. P. Sun, Phys. Rev. E 78, 021116 (2008).
* Engel (2009) A. Engel, Phys. Rev. E 80, 021120 (2009).
* Fogedby and Imparato (2009) H. C. Fogedby and A. Imparato, J. Phys. A: Math. Theor. 42, 475004 (2009).
* Minh and Adib (2009) D. D. L. Minh and A. B. Adib, Phys. Rev. E 79, 021122 (2009).
* Chatterjee and Cherayil (2010) D. Chatterjee and B. J. Cherayil, Phys. Rev. E 82, 051104 (2010).
* Gomez-Solano _et al._ (2011) J. R. Gomez-Solano, A. Petrosyan, and S. Ciliberto, Phys. Rev. Lett. 106, 200602 (2011).
* Nickelsen and Engel (2011) D. Nickelsen and A. Engel, Eur. Phys. J. B 82, 207 (2011).
* Speck (2011) T. Speck, J. Phys. A: Math. Theor. 44, 305001 (2011).
* Kwon _et al._ (2013) C. Kwon, J. D. Noh, and H. Park, Phys. Rev. E 88, 062102 (2013).
* Jiménez-Aquino and Velasco (2013) J. I. Jiménez-Aquino and R. M. Velasco, Phys. Rev. E 87, 022112 (2013).
* Ryabov _et al._ (2013) A. Ryabov, M. Dierl, P. Chvosta, M. Einax, and P. Maass, J. Phys. A: Math. Theor. 46, 075002 (2013).
* Jarzynski _et al._ (2015) C. Jarzynski, H. Quan, and S. Rahav, Phys. Rev. X 5, 031038 (2015).
* Salazar and Lira (2016) D. S. P. Salazar and S. A. Lira, J. Phys. A: Math. Theor. 49, 465001 (2016).
* Zhu _et al._ (2016) L. Zhu, Z. Gong, B. Wu, and H. T. Quan, Phys. Rev. E 93, 062108 (2016).
* Funo and Quan (2018a) K. Funo and H. Quan, Phys. Rev. Lett. 121, 040602 (2018a).
* Funo and Quan (2018b) K. Funo and H. T. Quan, Phys. Rev. E 98, 012113 (2018b).
* Pagare and Cherayil (2019) A. Pagare and B. J. Cherayil, Phys. Rev. E 100, 052124 (2019).
* Fogedby (2020) H. C. Fogedby, J. Stat. Mech.: Theory Exp. 2020, 083208 (2020).
* Chen _et al._ (2021) J.-F. Chen, T. Qiu, and H.-T. Quan, Entropy 23, 1602 (2021).
* Gupta and Sivak (2021) D. Gupta and D. A. Sivak, Phys. Rev. E 104, 024605 (2021).
* Chen and Quan (2023) J.-F. Chen and H. T. Quan, Phys. Rev. E 107, 024135 (2023).
* Paraguassú _et al._ (2023) P. V. Paraguassú, R. Aquino, and W. A. Morgado, Physica A 615, 128568 (2023).
* Hartmann (2014) A. K. Hartmann, Phys. Rev. E 89, 052103 (2014).
* Anderson (1972) P. W. Anderson, Science 177, 393 (1972).
* Forrest and Tang (1990) B. M. Forrest and L.-H. Tang, Phys. Rev. Lett. 64, 1405 (1990).
* Antal and Rácz (1996) T. Antal and Z. Rácz, Phys. Rev. E 54, 2256 (1996).
* Racz (2003) Z. A. Racz, in _SPIE Proceedings_, edited by M. B. Weissman, N. E. Israeloff, and A. S. Kogan (SPIE, 2003).
* Vvedensky (2003) D. D. Vvedensky, Phys. Rev. E 67, 025102 (2003).
* Bustingorry _et al._ (2007) S. Bustingorry, L. F. Cugliandolo, and J. L. Iguain, J. Stat. Mech.: Theory Exp. 2007, P09008 (2007).
* Mallick _et al._ (2011) K. Mallick, M. Moshe, and H. Orland, J. Phys. A: Math. Theor. 44, 095002 (2011).
* Wio _et al._ (2017) H. S. Wio, M. A. Rodríguez, R. Gallego, J. A. Revelli, A. Alés, and R. R. Deza, Front. Phys. 4, 52 (2017).
* Rodríguez and Wio (2019) M. A. Rodríguez and H. S. Wio, Phys. Rev. E 100, 032111 (2019).
* Wio _et al._ (2020a) H. S. Wio, M. A. Rodríguez, and R. Gallego, Chaos: An Interdisciplinary Journal of Nonlinear Science 30, 073107 (2020a).
* Wio _et al._ (2020b) H. S. Wio, R. R. Deza, and J. A. Revelli, J. Stat. Mech.: Theory Exp. 2020, 024009 (2020b).
* Edwards and Wilkinson (1982) S. F. Edwards and D. R. Wilkinson, Proceedings of the Royal Society of London. A. Mathematical and Physical Sciences 381, 17 (1982).
* Limmer _et al._ (2021) D. T. Limmer, C. Y. Gao, and A. R. Poggioli, Eur. Phys. J. B 94, 145 (2021).
* Jarzynski and Wójcik (2004) C. Jarzynski and D. K. Wójcik, Phys. Rev. Lett. 92, 230602 (2004).
* Bettencourt (2001) L. M. A. Bettencourt, Phys. Rev. D 63, 045020 (2001).
* Huang (1987) K. Huang, _Statistical Mechanics_ (John Wiley & Sons, 1987).
* Parisi and Machta (1989) G. Parisi and J. Machta, Am. J. Phys. 57, 286 (1989).
* Touchette (2009) H. Touchette, Phys. Rep. 478, 1 (2009).
* Gambassi and Silva (2012) A. Gambassi and A. Silva, Phys. Rev. Lett. 109, 250602 (2012).
# Degenerations and order of graphs realized by finite abelian groups
Rameez Raja (Department of Mathematics, National Institute of Technology
Srinagar-190006, Jammu and Kashmir, India. Email<EMAIL_ADDRESS>
Abstract. Let $G_{1}$ and $G_{2}$ be two groups. If a group homomorphism
$\varphi:G_{1}\rightarrow G_{2}$ maps $a\in G_{1}$ into $b\in G_{2}$ such that
$\varphi(a)=b$, then we say that $a$ degenerates to $b$; if every element of
$G_{1}$ degenerates to some element of $G_{2}$, then we say that $G_{1}$
degenerates to $G_{2}$. We discuss degeneration in graphs and show that
degeneration in groups is a particular case of degeneration in graphs. We
exhibit some interesting properties of degeneration in graphs. We use this
concept to present a pictorial representation of graphs realized by finite
abelian groups. We discuss some partial orders on the set
$\mathcal{T}_{p_{1}\cdots
p_{n}}$ of all graphs realized by finite abelian $p_{r}$-groups, where each
$p_{r}$, $1\leq r\leq n$, is a prime number. We show that each finite abelian
$p_{r}$-group of rank $n$ can be identified with saturated chains of Young
diagrams in the poset $\mathcal{T}_{p_{1}\cdots p_{n}}$. We present a
combinatorial formula which represents the degree of a projective
representation of a symmetric group. This formula determines the number of
different saturated chains in $\mathcal{T}_{p_{1}\cdots p_{n}}$ and the number
of finite abelian groups of different orders.
Keywords: Degenerations, Finite abelian groups, Threshold graph, Partial
order.
AMS subject classification: Primary: 13C70, 05C25.
## 1 Introduction
A notion of degeneration in groups was introduced in [8] to parametrize the
orbits in a finite abelian group under its full automorphism group by a finite
distributive lattice. The authors in [8] were motivated by attempts to
understand the decomposition of the weil representation associated to a finite
abelian group $G$. Note that the sum of squares of the multiplicities in the
Weil representation is the number of orbits in $G\times\hat{G}$ under
automorphisms of a symplectic bicharacter, where $\hat{G}$ denotes the
Pontryagin dual of $G$.
The above combinatorial description is one of the explorations between groups
and combinatorial structures (posets and lattices). There is also an intimate
relationship between groups and another combinatorial structure, namely
graphs. For example, any graph $\Gamma$ gives rise to its automorphism group,
whereas any group together with a generating set gives rise to a realization
of the group as a graph (its Cayley graph).
Recently, authors in [13] studied the group-annihilator graph $\Gamma(G)$
realized by a finite abelian group $G$ (viewed as a $\mathbb{Z}$-module) of
different ranks. The vertices of $\Gamma(G)$ are all elements of $G$ and two
vertices $x,y\in G$ are adjacent in $\Gamma(G)$ if and only if
$[x:G][y:G]G=\\{0\\}$, where
$[x:G]=\\{r\in\mathbb{Z}:rG\subseteq\mathbb{Z}x\\}$ is an ideal of a ring
$\mathbb{Z}$. They investigated the concept of creation sequences in
$\Gamma(G)$ and determined the multiplicities of eigenvalues $0$ and $-1$ of
$\Gamma(G)$. Interestingly, they considered orbits of the symmetric group
action: $Aut(\Gamma(G))\times G\longrightarrow G$ and proved that the
representatives of orbits are the Laplacian eigenvalues of $\Gamma(G)$.
There are a number of realizations of groups as graphs. The generating graph
[11] realized by a simple group was introduced to get an insight that might
ultimately guide us to a new proof of the classification of simple groups. The
graphs such as power graph [6], intersection graph [4] and the commuting graph
[5] were introduced to study the information contained in the graph about the
group.
Moreover, the realizations of rings as graphs were introduced in [1, 3]. The
aim of considering these realizations of rings as graphs is to study the
interplay between combinatorial and ring theoretic properties of a ring $R$.
This concept was further studied in [16, 18, 19, 20] and was extended to
modules over commutative rings in [21].
The main objective of this work is to investigate some deeper interconnections
between partitions of a number, Young diagrams, finite abelian groups, group
homomorphisms, graph homomorphisms, posets and lattices. This investigation
leads to a theory that simplifies the concept of degeneration of elements in
groups and also provides a lattice of finite abelian groups in which each
saturated chain of length $n$ can be identified with a finite abelian
$p_{r}$-group of rank $n$.
This research article is organized as follows. In Section 2, we discuss some
results related to degeneration in groups and group-annihilator graphs
realized by finite abelian groups. Section 3 is dedicated to the study of
degenerations in graphs realized by finite abelian groups, and we present a
pictorial sketch which illustrates degeneration in graphs. Finally, in Section
4, we investigate multiple relations on the set $\mathcal{T}_{p_{1}\cdots
p_{n}}$ and furnish the information about finite abelian groups contained in a
locally finite distributive lattice. We provide a combinatorial formula which
represents the degree of a projective representation of a symmetric group and
the number of saturated chains from the empty set to some non-trivial member
of $\mathcal{T}_{p_{1}\cdots p_{n}}$.
## 2 Preliminaries
Let $\lambda=(\lambda_{1},\lambda_{2},\cdots,\lambda_{r})$ be a partition of
$n\in\mathbb{Z}_{>0}$, denoted by $\lambda\vdash n$. For any $\mu\vdash n$, we
have an abelian group of order $p^{n}$, and conversely every abelian group of
order $p^{n}$ corresponds to some partition of $n$. In fact, if
$H_{\mu,p}=\mathbb{Z}/p^{{\mu}_{1}}\mathbb{Z}\oplus\mathbb{Z}/p^{{\mu}_{2}}\mathbb{Z}\oplus\cdots\oplus\mathbb{Z}/p^{{\mu}_{r}}\mathbb{Z}$
is a subgroup of $G_{\lambda,p}$
($G_{\lambda,p}=\mathbb{Z}/p^{\lambda_{1}}\mathbb{Z}\oplus\mathbb{Z}/p^{\lambda_{2}}\mathbb{Z}\oplus\cdots\oplus\mathbb{Z}/p^{\lambda_{r}}\mathbb{Z}$
is a finite abelian $p$-group), then
$\mu_{1}\leq\lambda_{1},\mu_{2}\leq\lambda_{2},\cdots,\mu_{r}\leq\lambda_{r}$.
If these inequalities hold, we write $\mu\subset\lambda$, which is the
“containment order” on partitions. For example, a $p$-group
$\mathbb{Z}/p^{7}\mathbb{Z}\oplus\mathbb{Z}/p\mathbb{Z}\oplus\mathbb{Z}/p\mathbb{Z}$
is of type $\lambda=(7,1,1)$. The possible types for its subgroups are:
$(7,1,1),(6,1,1),(5,1,1),(4,1,1)$,
$(3,1,1),(2,1,1),(1,1,1),2(7,1),2(6,1),2(5,1),2(4,1),2(3,1),2(2,1),2(1,1),(7),(6),(5),(4)$,
$(3),(2),2(1)$.
Note that the types $(7,1),(6,1),(5,1),(4,1),(3,1),(2,1),(1,1)$ each appear
twice in the above sequence of partitions for a subgroup, as indicated by the
prefix $2$.
The authors in [8] considered the group action $Aut(G)\times G\rightarrow G$,
where $Aut(G)$ is the automorphism group of $G$, and studied
$Aut(G)\setminus G$, the set of all disjoint $Aut(G)$-orbits in $G$. The group
$\mathbb{Z}/p^{k}\mathbb{Z}$ has $k$ orbits of non-zero elements under the
action of its automorphism group, represented by the elements
$1,p,\cdots,p^{k-1}$. We denote the orbits of the group action
$Aut(\mathbb{Z}/p^{k}\mathbb{Z})\times\mathbb{Z}/p^{k}\mathbb{Z}\longrightarrow\mathbb{Z}/p^{k}\mathbb{Z}$
by $\mathcal{O}_{k,p^{m}}$, where $0\leq m\leq k-1$.
Miller [17] and Schwachhöfer and Stroppel [22] provided some well-known
formulae for the cardinality of the set $Aut(G_{\lambda,p})$ $\setminus$
$G_{\lambda,p}$.
###### Definition 1.
(Degeneration in groups) [8]. Let $G_{1}$ and $G_{2}$ be two groups. Then
$a\in G_{1}$ degenerates to $b\in G_{2}$ if there exists a homomorphism
$\varphi:G_{1}\longrightarrow G_{2}$ such that $\varphi(a)=b$.
The following result provides a characterization of degenerations of elements
of the group $\mathbb{Z}/p^{k}\mathbb{Z}$ to elements of the group
$\mathbb{Z}/p^{l}\mathbb{Z}$, where $k\leq l$.
###### Lemma 2.
[8]. $p^{r}u\in\mathcal{O}_{k,p^{r}}$ in $\mathbb{Z}/p^{k}\mathbb{Z}$
degenerates to $p^{s}v\in\mathcal{O}_{l,p^{s}}$ in
$\mathbb{Z}/p^{l}\mathbb{Z}$ if and only if $r\leq s$ and $k-r\geq l-s$, where
$u,v$ are relatively prime to $p$, $r<k$ and $s<l$. If in addition
$p^{s}v\in\mathcal{O}_{l,p^{s}}$ degenerates to
$p^{r}u\in\mathcal{O}_{k,p^{r}}$, then $k=l$ and $r=s$.
By Lemma 2, it is easy to verify that degeneracy is a partial order on the set
of all orbits of non-zero elements in $\mathbb{Z}/p^{k}\mathbb{Z}$. The
diagrammatic representation (Hasse diagram) of the set
$Aut(\mathbb{Z}/p^{k}\mathbb{Z})\setminus\mathbb{Z}/p^{k}\mathbb{Z}$ with
respect to degeneracy, called the fundamental poset, is presented in
[8, Figure 1].
Let $a=(a_{1},a_{2},\cdots,a_{r})\in G_{\lambda,p}$. The ideal of $a$ in
$Aut(G_{\lambda,p})$ $\setminus$ $G_{\lambda,p}$, denoted by $I(a)$, is the
ideal generated by the orbits of the non-zero coordinates
$a_{i}\in\mathbb{Z}/p^{\lambda_{i}}\mathbb{Z}$. One of the explorations
connecting ideals of posets, partitions and orbits of finite abelian groups is
the following interesting result.
###### Theorem 3.
[8]. Let $\lambda$ and $\mu$ be any two given partitions and $a\in
G_{\lambda,p}$, $b\in G_{\mu,p}$. Then $a$ degenerates to $b$ if and only if
$I(b)\subset I(a)$.
The enumeration of orbits as ideals, first by counting ideals in terms of
their boundaries, and second by counting them in terms of antichains of
maximal elements, is presented in [8, Examples 6.1 and 6.2].
Please see Sections 7 and 8 of [8] for results related to the embedding of the
lattice of orbits of $G_{\lambda,p}$ into the lattice of characteristic
subgroups of $G_{\lambda,p}$, a formula for the order of the characteristic
subgroup associated to an orbit, and the computation of a monic polynomial in
$p$ (with integer coefficients), via the Möbius inversion formula,
representing the cardinality of an orbit in $G_{\lambda,p}$.
Let $\Gamma=(V,E)$ be a simple connected graph, and let $\Gamma_{1}$ and
$\Gamma_{2}$ be two simple connected graphs. Recall that a mapping
$\phi:V(\Gamma_{1})\rightarrow V(\Gamma_{2})$ is a homomorphism if it
preserves edges, that is, for any edge $(u,v)$ of $\Gamma_{1}$,
$(\phi(u),\phi(v))$ is an edge of $\Gamma_{2}$, where $u,v\in V(\Gamma_{1})$.
A homomorphism $\phi:V(\Gamma_{1})\rightarrow V(\Gamma_{2})$ is faithful when,
for every edge $(u,v)$ of $\Gamma_{2}$, there is an edge between the preimages
$\phi^{-1}(u)$ and $\phi^{-1}(v)$. A faithful bijective homomorphism is an
isomorphism, and in this case we write $\Gamma_{1}\cong\Gamma_{2}$. An
isomorphism from $\Gamma$ to itself is an automorphism of $\Gamma$. It is well
known that the set of automorphisms of $\Gamma$ forms a group under
composition; we denote the group of automorphisms of $\Gamma$ by
$Aut(\Gamma)$. Understanding the automorphism group of a graph is a guiding
principle for understanding objects by their symmetries.
Consider the action of $Aut(\Gamma)$ on $V(\Gamma)$ by permutations, that is,
$Aut(\Gamma)\times V(\Gamma)\rightarrow V(\Gamma)$,
$\sigma(v)=u$,
where $\sigma\in Aut(\Gamma)$ and $v,u\in V(\Gamma)$ are any two vertices of
$\Gamma$. This group action is called a symmetric action [13].
Consider a finite abelian non-trivial group $G$ with identity element $0$, and
view $G$ as a $\mathbb{Z}$-module. For $a\in G$, set
$[a:G]=\{x\in\mathbb{Z}\mid xG\subseteq\mathbb{Z}a\}$, which clearly is an
ideal of $\mathbb{Z}$. For $a\in G$, $G/\mathbb{Z}a$ is a $\mathbb{Z}$-module,
so $[a:G]$ is the annihilator of $G/\mathbb{Z}a$; we call $[a:G]$ the
$a$-annihilator of $G$. Also, an element $a$ is called an ideal-annihilator of
$G$ if there exists a non-zero element $b$ of $G$ such that
$[a:G][b:G]G=\{0\}$, where $[a:G][b:G]$ denotes the product of ideals of
$\mathbb{Z}$. The element $0$ is a trivial ideal-annihilator of $G$, since
$[0:G][b:G]G=ann(G)[b:G]G=\{0\}$, where $ann(G)$ is the annihilator of $G$ in
$\mathbb{Z}$.
Given an abelian group $G$, the group-annihilator graph is defined to be the
graph $\Gamma(G)=(V(\Gamma(G)),E(\Gamma(G)))$ with vertex set
$V(\Gamma(G))=G$, where two distinct vertices $a,b\in V(\Gamma(G))$ are
adjacent in $\Gamma(G)$ if and only if $[a:G][b:G]G=\{0\}$, that is,
$E(\Gamma(G))=\{(a,b)\in G\times G:[a:G][b:G]G=\{0\}\}$.
For a cyclic group $G=\mathbb{Z}/p^{n}\mathbb{Z}$ ($n\geq 1$), it is easy to
verify that the orbits of the action $Aut(G)\times G\longrightarrow G$ are the
same as the orbits of the symmetric action $Aut(\Gamma(G))\times
G\longrightarrow G$, which are given as follows:
$\mathcal{O}_{n,p^{i}}=\{p^{i}\alpha\ (\mathrm{mod}\ p^{n})\mid\alpha\in\mathbb{Z},(\alpha,p)=1\},$
where $i\in[0,n]$. Furthermore, for $0\leq i<j\leq n$, $p^{i}\alpha\not\equiv
p^{j}\alpha^{\prime}\ (\mathrm{mod}\ p^{n})$ whenever $(\alpha,p)=1$ and
$(\alpha^{\prime},p)=1$. Consequently, for $i\neq j$ we have
$\mathcal{O}_{n,p^{i}}\cap\mathcal{O}_{n,p^{j}}=\emptyset$.
Any element $a\in\mathbb{Z}/p^{n}\mathbb{Z}$ can be expressed as
$a\equiv p^{n-1}b_{1}+p^{n-2}b_{2}+\cdots+pb_{n-1}+b_{n}\ (\mathrm{mod}\ p^{n}),$
where $b_{i}\in[0,p-1]$. If $a\in\mathcal{O}_{n,1}$, then $b_{n}\neq 0$, so
$|\mathcal{O}_{n,1}|=p^{n-1}(p-1)=\phi(p^{n})$. If
$a^{\prime}\in\mathcal{O}_{n,p}$, then $a^{\prime}=pa$ for some
$a\in\mathcal{O}_{n,1}$, so $|\mathcal{O}_{n,p}|=\frac{\phi(p^{n})}{p}$.
Similarly, for $i\in[0,n-1]$, we have
$|\mathcal{O}_{n,p^{i}}|=\frac{\phi(p^{n})}{p^{i}}$.
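These orbit cardinalities are easy to verify by direct enumeration. The
following is a small Python sketch (added here for illustration; the function
name `orbits` is ours) that lists the orbits $\mathcal{O}_{n,p^{i}}$ of
$\mathbb{Z}/p^{n}\mathbb{Z}$ and checks
$|\mathcal{O}_{n,p^{i}}|=\phi(p^{n})/p^{i}$.

```python
from math import gcd

def orbits(p, n):
    """Orbits O_{n,p^i} of Z/p^n Z under unit multiplication (i.e. under Aut)."""
    pn = p ** n
    orbs = {n: {0}}                                  # O_{n,p^n} = {0}
    for i in range(n):
        orbs[i] = {(p**i * a) % pn for a in range(pn) if gcd(a, p) == 1}
    return orbs

p, n = 2, 4
phi = p**(n - 1) * (p - 1)                           # Euler's phi(p^n)
for i, orb in sorted(orbits(p, n).items()):
    expected = 1 if i == n else phi // p**i
    print(f"|O_{{{n},{p}^{i}}}| = {len(orb)} (expected {expected}): {sorted(orb)}")
```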
###### Proposition 2.1.
[13]. Let $G=\mathbb{Z}/p^{n}\mathbb{Z}$ be a cyclic group of order $p^{n}$,
where $n\geq 2$. Then for each $a\in\mathcal{O}_{n,p^{i}}$ with $i\in[1,n]$,
the $a$-annihilator of $G$ is $[a:G]=p^{i}\mathbb{Z}$.
Thus if we consider the symmetric group action: $Aut(\Gamma(G))\times
G\longrightarrow G$, then for $G=\mathbb{Z}/p^{n}\mathbb{Z}$, the group-
annihilator graph realized by $G$ is defined as
$\Gamma(G)=(V(\Gamma(G)),E(\Gamma(G)))$, where
$V(\Gamma(G))=\mathbb{Z}/p^{n}\mathbb{Z}$ and two vertices
$u\in\mathcal{O}_{n,p^{i}}$, $v\in\mathcal{O}_{n,p^{j}}$ are adjacent in
$\Gamma(G)$ if and only if $i+j\geq n$.
Therefore, from the above observation it follows that the vertices of the
graph $\Gamma(G)$ are parametrized by representatives of orbits of the group
action $Aut(\Gamma(G))\times G\longrightarrow G$. Thus the element
$0\in\mathcal{O}_{n,p^{n}}$ of $G$ is adjacent to all vertices in $\Gamma(G)$,
while the elements $a\in\mathcal{O}_{n,1}$, which are prime to the order of
$G$, are adjacent only to $0$ in $\Gamma(G)$. Furthermore, elements of the
orbit $\mathcal{O}_{n,p}$ are adjacent to $0$ and to elements of the orbit
$\mathcal{O}_{n,p^{n-1}}$, and elements of the orbit $\mathcal{O}_{n,p^{2}}$
are adjacent to $0$ and to elements of the orbits $\mathcal{O}_{n,p^{n-1}}$
and $\mathcal{O}_{n,p^{n-2}}$. In general, for $k\geq 1$, elements of the
orbit $\mathcal{O}_{n,p^{k}}$ are adjacent to elements of the orbits
$\mathcal{O}_{n,p^{n-k}}$,
$\mathcal{O}_{n,p^{n-k+1}},\cdots,\mathcal{O}_{n,p^{n-1}}$,
$\mathcal{O}_{n,p^{n}}$.
###### Theorem 4.
[13]. Let $n$ be a positive integer. Then for the $p$-group
$G=(\mathbb{Z}/p^{n}\mathbb{Z})^{\ell}$ of rank $\ell\geq 2$ and
$(a_{1},\ldots,a_{\ell})\in G$, the $(a_{1},\ldots,a_{\ell})$-annihilator of
$G$ is $p^{n}\mathbb{Z}$. In particular, the corresponding group-annihilator
graph realized by $G$ is a complete graph.
Note that the action of $Aut(\Gamma((\mathbb{Z}/p\mathbb{Z})^{\ell}))$ on
$(\mathbb{Z}/p\mathbb{Z})^{\ell}$ is transitive, since an automorphism of
$\Gamma((\mathbb{Z}/p\mathbb{Z})^{\ell})$ can map any vertex to any other
vertex, and this places no restriction on where any of the other $p^{\ell}-1$
vertices are mapped, as they are all mutually connected in
$\Gamma((\mathbb{Z}/p\mathbb{Z})^{\ell})$. This implies that
$Aut(\Gamma((\mathbb{Z}/p\mathbb{Z})^{\ell}))\setminus(\mathbb{Z}/p\mathbb{Z})^{\ell}$
is a single orbit of order $p^{\ell}$.
For more information regarding $a$-annihilators, $(a,b)$-annihilators and
$(a_{1},a_{2},\cdots,a_{l})$-annihilators of finite abelian $p$-groups, please
see Section 3 of [13].
We conclude this section with an example which illustrates the
parametrization of the vertices of the group-annihilator graph $\Gamma(G)$ by
representatives of orbits of the symmetric action on $G$.
###### Example 5.
Let $G=\mathbb{Z}/2^{4}\mathbb{Z}$ be a finite abelian group. Consider the
group action $Aut(\Gamma(G))\times G\longrightarrow G$. The orbits of this
action are: $\mathcal{O}_{4,2^{4}}=\{0\}$,
$\mathcal{O}_{4,1}=\{1,3,5,7,9,11,13,15\}=\{a\mid(a,2)=1\}$,
$\mathcal{O}_{4,2}=\{2,6,10,14\}=\{2a\mid(a,2)=1\}$,
$\mathcal{O}_{4,2^{2}}=\{4,12\}=\{2^{2}a\mid(a,2)=1\}$ and
$\mathcal{O}_{4,2^{3}}=\{8\}=\{2^{3}a\mid(a,2)=1\}$. Note that the orbit of
every odd element is the same as the orbit of $1$, the orbits of $6,10,14$ are
the same as the orbit of $2$, and the orbit of $12$ is the same as the orbit
of $4$. Therefore, the group $G$ has $4$ orbits of nonzero elements under the
action of $Aut(\Gamma(G))$, represented by $1,2,2^{2},2^{3}$. The
group-annihilator graph realized by $G$ with its orbits is shown in Figure
(1).
Figure 1: $\Gamma(\mathbb{Z}/2^{4}\mathbb{Z})$ with its orbits
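The example can be reproduced directly from the definition of the
group-annihilator graph. The sketch below (our illustration; helper names such
as `ann_generator` are ours) computes the generator of $[x:G]$ for
$G=\mathbb{Z}/2^{4}\mathbb{Z}$ by brute force and checks that the resulting
adjacency agrees with the rule $i+j\geq n$ on orbit levels.

```python
def ann_generator(x, p, n):
    """Smallest positive generator of [x:G] = {r : rG is contained in Zx}."""
    pn = p ** n
    sub = {(k * x) % pn for k in range(pn)}          # the cyclic subgroup Zx
    for r in range(1, pn + 1):
        if all((r * g) % pn in sub for g in range(pn)):
            return r

def level(x, p, n):
    """i such that x lies in the orbit O_{n,p^i} (level n for x = 0)."""
    if x == 0:
        return n
    i = 0
    while x % p == 0:
        x //= p
        i += 1
    return i

p, n = 2, 4
pn = p ** n
gen = {x: ann_generator(x, p, n) for x in range(pn)}
# adjacency by definition: [x:G][y:G]G = {0}  <=>  p^n divides gen(x)*gen(y)
ok = all(((gen[x] * gen[y]) % pn == 0) == (level(x, p, n) + level(y, p, n) >= n)
         for x in range(pn) for y in range(pn) if x != y)
print("definition matches the i + j >= n orbit rule:", ok)   # expected: True
```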
## 3 Degeneration in graphs
This section is devoted to the study of degeneration in graphs. We show that
every group homomorphism is a graph homomorphism. We employ the methods of
degeneration in graphs to simplify the techniques used to establish
degenerations of elements in finite abelian groups [8].
As far as groups are concerned, there are always homomorphisms (trivial
homomorphisms) from one group to another. Any source group (the group from
which we map) can be mapped by a homomorphism into a target group (the group
to which elements are mapped) by simply sending all of its elements to the
identity of the target group. In fact, the study of kernels is very important
in algebraic structures. In the context of simple graphs, the notion of a
homomorphism is far more restrictive. Indeed, there need not be a homomorphism
between two graphs, and these cases are as much a part of the theory as those
where homomorphisms do exist. There are other categories where homomorphisms
do not always exist between two objects, for example, the category of bounded
lattices or that of semigroups.
The answer to the question of whether every group homomorphism is a graph
homomorphism is affirmative, as discussed in the following result. Note that
the orbits of the two actions (by the automorphism group and the symmetric
action) on a finite abelian $p$-group of rank one coincide, and this can be
explored further for abelian $p$-groups of higher ranks.
###### Proposition 3.1.
Every group homomorphism which maps elements from orbits
$\mathcal{O}_{k,p^{i}}$ to orbits $\mathcal{O}_{l,p^{j}}$ is a graph
homomorphism, where $1\leq i\leq k$, $1\leq j\leq l$ and $k\leq l$.
###### Proof.
A group homomorphism is uniquely determined by the image of the unit element
in the target group, and the order of this image divides the order of the
unit in the source group. Let $\tau(a)$ be the image of the unit in the target
group. Then $\tau(a)\in\{a_{1},a_{2},\cdots,a_{p^{k}}\}$, where
$a_{1},a_{p},\cdots,a_{p^{k}}$ are elements of the orbits $\mathcal{O}_{l,1}$,
$\mathcal{O}_{l,p}$, $\cdots$, $\mathcal{O}_{l,p^{k}}$. Note that $k\leq l$;
therefore we have the following inequalities concerning the cardinalities of
the orbits:
$|\mathcal{O}_{k,1}|\leq|\mathcal{O}_{l,1}|$,
$|\mathcal{O}_{k,p}|\leq|\mathcal{O}_{l,p}|$,
⋮
$|\mathcal{O}_{k,p^{k}}|\leq|\mathcal{O}_{l,p^{k}}|$.
If $\tau(a)\in\mathcal{O}_{l,1}$, then under the monomorphism the elements of
the orbits are mapped as
$\mathcal{O}_{k,1}\xhookrightarrow{1-1}\mathcal{O}_{l,1}$,
$\mathcal{O}_{k,p}\xhookrightarrow{1-1}\mathcal{O}_{l,p}$,
⋮
$\mathcal{O}_{k,p^{k-1}}\xhookrightarrow{1-1}\mathcal{O}_{l,p^{k-1}}$,
$\mathcal{O}_{k,p^{k}}\xhookrightarrow{1-1}\mathcal{O}_{l,p^{l}}$.
If $\tau(a)\in\mathcal{O}_{l,p}$, then the elements of the orbits are mapped as
$\mathcal{O}_{k,1}\twoheadrightarrow\mathcal{O}_{l,p}$,
$\mathcal{O}_{k,p}\twoheadrightarrow\mathcal{O}_{l,p^{2}}$,
⋮
$\mathcal{O}_{k,p^{k-1}}\twoheadrightarrow\mathcal{O}_{l,p^{k}}$,
$\mathcal{O}_{k,p^{k}}\twoheadrightarrow\mathcal{O}_{l,p^{l}}$.
Thus it follows that if $\tau(a)\in\mathcal{O}_{l,p^{t}}$ for some $0\leq
t\leq k-1$, then every element of the orbit $\mathcal{O}_{k,p^{i}}$ is mapped
to an element of the orbit $\mathcal{O}_{l,p^{i+t}}$.
Under the symmetric action, the orbits of the vertices are the same as the
orbits listed above. Note that the vertices of the orbit $\mathcal{O}_{k,1}$
are adjacent only to the vertex in $\mathcal{O}_{k,p^{k}}$, the vertices of
the orbit $\mathcal{O}_{k,p}$ are adjacent to the vertices in
$\mathcal{O}_{k,p^{k}}$ and $\mathcal{O}_{k,p^{k-1}}$, and so on. Thus if
$\tau(a)\in\mathcal{O}_{l,1}$, then for $0\leq i\leq j\leq k$, every edge
$(u,v)\in\mathcal{O}_{k,p^{i}}\times\mathcal{O}_{k,p^{j}}$ is mapped to an
edge $(\tau(u),\tau(v))\in\mathcal{O}_{l,p^{r}}\times\mathcal{O}_{l,p^{s}}$,
where $0\leq r\leq s\leq l$. Therefore $\tau$ is a graph homomorphism.
Similarly, it can be verified that all other group homomorphisms are graph
homomorphisms, since the adjacencies are preserved under all group
homomorphisms. ∎
###### Remark 3.2.
The converse of the preceding result is not true, that is, a graph
homomorphism between two graphs realized by some groups need not be a group
homomorphism. To illustrate this, we consider the “distribution of edges in
orbits”. The distribution of edges is carried out in such a way that, for
sufficiently large $l$, a graph homomorphism acts on the vertices in the
orbits $\mathcal{O}_{k,p^{k}}$, $\mathcal{O}_{k,1}$ such that
$\mathcal{O}_{k,p^{k}}\xhookrightarrow{identity}\mathcal{O}_{l,p^{l}}$,
$\mathcal{O}_{k,p^{k-1}}\xhookrightarrow{identity}\mathcal{O}_{l,p^{k-1}}$,
$\cdots$, $\mathcal{O}_{k,p}\xhookrightarrow{identity}\mathcal{O}_{l,p}$. Some
vertices of $\mathcal{O}_{k,1}$ are mapped to themselves in
$\mathcal{O}_{l,1}$, whereas the remaining ones are mapped to vertices in
$\mathcal{O}_{l,p}$. So, under
the above distribution some edges in
$\mathcal{O}_{k,p^{k}}\times\mathcal{O}_{k,1}$ are mapped to edges in
$\mathcal{O}_{l,p^{l}}\times\mathcal{O}_{l,1}$, whereas the remaining edges in
$\mathcal{O}_{k,p^{k}}\times\mathcal{O}_{k,1}$ are mapped to edges in
$\mathcal{O}_{l,p^{l}}\times\mathcal{O}_{l,p}$. Thus if $x\neq y$ are two
elements of $\mathcal{O}_{k,1}$ such that $x$ is mapped to
$x^{\prime}\in\mathcal{O}_{l,1}$ and $y$ is mapped to
$y^{\prime}\in\mathcal{O}_{l,p}$, then the following equation may have no
solution,
$x+y(mod\leavevmode\nobreak\
p^{k})=x^{\prime}+y^{\prime}(mod\leavevmode\nobreak\ p^{l})$.
###### Definition 6.
Let $\Gamma_{1}$ and $\Gamma_{2}$ be two simple graphs. Then $(a,b)\in
E(\Gamma_{1})$ degenerates to $(u,v)\in E(\Gamma_{2})$ if there exists a
homomorphism $\varphi:V(\Gamma_{1})\longrightarrow V(\Gamma_{2})$ such that
$(\varphi(a),\varphi(b))=(u,v)$. If every edge of $\Gamma_{1}$ degenerates to
an edge of $\Gamma_{2}$, then we say that $\Gamma_{1}$ degenerates to
$\Gamma_{2}$.
Recall that an independent part (independent set) in a graph $\Gamma$ is a set
of vertices of $\Gamma$ such that no two of them are joined by an edge of
$\Gamma$. Also, a complete part (complete subgraph) in a graph $\Gamma$ is a
set of vertices of $\Gamma$ such that there is an edge between every pair of
them.
The simplified form of Lemma (2) is presented in the following result. We
adapt the definition of degeneration in groups and make it work for graphs
realized by finite abelian groups.
###### Theorem 7.
If under any graph homomorphism $\mathcal{O}_{k,p^{k}}$ is the only vertex
mapped to $\mathcal{O}_{l,p^{l}}$, then the pair
$(p^{r}u,p^{s}u)\in\mathcal{O}_{k,p^{r}}\times\mathcal{O}_{k,p^{s}}$
degenerates to
$(p^{r^{\prime}}u,p^{s^{\prime}}u)\in\mathcal{O}_{l,p^{r^{\prime}}}\times\mathcal{O}_{l,p^{s^{\prime}}}$
if and only if $r\leq r^{\prime}$ and $s\leq s^{\prime}$, where $u$ is
relatively prime to $p$ and $k\leq l$.
###### Proof.
In the setting of the symmetric group action on finite abelian $p$-groups of
rank one, let $\mathcal{O}_{k,p^{r}}$, $\mathcal{O}_{k,p^{s}}$ be the orbits
represented by the elements $p^{r}$ and $p^{s}$ of the source group, and let
$\mathcal{O}_{l,p^{r^{\prime}}}$, $\mathcal{O}_{l,p^{s^{\prime}}}$ be the
orbits represented by the elements $p^{r^{\prime}}$ and $p^{s^{\prime}}$ of
the target group, where $0\leq r,s\leq k-1$ and $0\leq
r^{\prime},s^{\prime}\leq l-1$. We consider the following cases.
Case I: $k=l=2t$, $t\in\mathbb{Z}_{>0}$. The independent and complete parts of
the graph realized by the source group are
$X=\dot{\bigcup}_{i=0}^{t-1}\mathcal{O}_{k,p^{i}}$ and
$Y=\dot{\bigcup}_{j=0}^{t-1}\mathcal{O}_{k,p^{t+j}}$, where each element of
both $X$ and $Y$ is connected to $\mathcal{O}_{k,p^{k}}=\{0\}$. Similarly,
$X^{\prime}=\dot{\bigcup}_{i=0}^{t-1}\mathcal{O}_{l,p^{i}}$ and
$Y^{\prime}=\dot{\bigcup}_{j=0}^{t-1}\mathcal{O}_{l,p^{t+j}}$ are the
independent and complete parts of the graph realized by the target group,
where each element of both $X^{\prime}$ and $Y^{\prime}$ is connected to
$\mathcal{O}_{l,p^{l}}=\{0\}$.
Let $x\in X$. If $x\in\mathcal{O}_{k,1}$, then, as discussed above, $x$ is
adjacent only to $\mathcal{O}_{k,p^{k}}$. On the other hand, if
$x\in\mathcal{O}_{k,p^{i}}$ for $1\leq i\leq t-1$, then $x$ is adjacent to all
elements of the set $\dot{\bigcup}_{n=0}^{i}\mathcal{O}_{k,p^{k-n}}$.
Moreover, if $x^{\prime}\in\mathcal{O}_{l,1}$, then $x^{\prime}$ is adjacent
to $\mathcal{O}_{l,p^{l}}$, whereas if $x^{\prime}\in\mathcal{O}_{l,p^{j}}$
for $1\leq j\leq t-1$, then $x^{\prime}$ is adjacent to all elements of the
set $\dot{\bigcup}_{m=0}^{j}\mathcal{O}_{l,p^{l-m}}$. Under any given graph
homomorphism $\tau$, the images of relations in
$X\times\mathcal{O}_{k,p^{k}}$, $X\times Y$ and $Y\times\mathcal{O}_{k,p^{k}}$
lie in $X^{\prime}\times\mathcal{O}_{l,p^{l}}$, $X^{\prime}\times Y^{\prime}$
and $Y^{\prime}\times\mathcal{O}_{l,p^{l}}$, respectively. Let $(a,b)\in
X\times\mathcal{O}_{k,p^{k}}\bigcup X\times Y\bigcup
Y\times\mathcal{O}_{k,p^{k}}$, and suppose $(a,b)$ degenerates to some
$(a^{\prime},b^{\prime})\in X^{\prime}\times\mathcal{O}_{l,p^{l}}\bigcup
X^{\prime}\times Y^{\prime}\bigcup Y^{\prime}\times\mathcal{O}_{l,p^{l}}$. If
$\tau$ is a group homomorphism such that $\tau(1)\in\mathcal{O}_{l,1}$, then
$\mathcal{O}_{k,1}\times\mathcal{O}_{k,p^{k}}\xhookrightarrow{1-1}\mathcal{O}_{l,1}\times\mathcal{O}_{l,p^{l}}$,
$\mathcal{O}_{k,p}\times\mathcal{O}_{k,p^{k}}\xhookrightarrow{1-1}\mathcal{O}_{l,p}\times\mathcal{O}_{l,p^{l}}$,
$\mathcal{O}_{k,p}\times\mathcal{O}_{k,p^{k-1}}\xhookrightarrow{1-1}\mathcal{O}_{l,p}\times\mathcal{O}_{l,p^{l-1}}$,
⋮
If $\tau(1)\in\mathcal{O}_{l,p}$, then
$\mathcal{O}_{k,1}\times\mathcal{O}_{k,p^{k}}\twoheadrightarrow\mathcal{O}_{l,p}\times\mathcal{O}_{l,p^{l}}$,
$\mathcal{O}_{k,p}\times\mathcal{O}_{k,p^{k}}\twoheadrightarrow\mathcal{O}_{l,p^{2}}\times\mathcal{O}_{l,p^{l}}$,
$\mathcal{O}_{k,p}\times\mathcal{O}_{k,p^{k-1}}\twoheadrightarrow\mathcal{O}_{l,p^{2}}\times\mathcal{O}_{l,p^{l-1}}$,
⋮
If $\tau(1)$ lies in any other orbit of $X^{\prime}\bigcup Y^{\prime}$, then,
as above, edges are mapped to edges. Thus for any group
homomorphism which maps
$(p^{r}u,p^{s}u)\in\mathcal{O}_{k,p^{r}}\times\mathcal{O}_{k,p^{s}}$ to
$(p^{r^{\prime}}u,p^{s^{\prime}}u)\in\mathcal{O}_{l,p^{r^{\prime}}}\times\mathcal{O}_{l,p^{s^{\prime}}}$,
the relations $r\leq r^{\prime}$ and $s\leq s^{\prime}$ are verified.
Now, suppose $\tau$ is not a group homomorphism but a graph homomorphism.
Assume without loss of generality that under $\tau$,
$A\times\mathcal{O}_{k,p^{k}}\xhookrightarrow{1-1}A^{\prime}\times\mathcal{O}_{l,p^{l}}$,
where $A\subset\mathcal{O}_{k,1}\subset X$ and
$A^{\prime}\subset\mathcal{O}_{l,1}\subset X^{\prime}$ are proper subsets of
$X$ and $X^{\prime}$. Moreover,
$\mathcal{O}_{k,1}\setminus
A\times\mathcal{O}_{k,p^{k}}\bigcup\mathcal{O}_{k,p}\times\mathcal{O}_{k,p^{k}}\bigcup\mathcal{O}_{k,p}\times\mathcal{O}_{k,p^{k-1}}\twoheadrightarrow\mathcal{O}_{l,p}\times\mathcal{O}_{l,p^{l-1}}\bigcup\mathcal{O}_{l,p}\times\mathcal{O}_{l,p^{l}}$,
$\mathcal{O}_{k,p^{2}}\times\mathcal{O}_{k,p^{k}}\xhookrightarrow{1-1}\mathcal{O}_{l,p^{2}}\times\mathcal{O}_{l,p^{l}}$,
$\mathcal{O}_{k,p^{2}}\times\mathcal{O}_{k,p^{k-1}}\xhookrightarrow{1-1}\mathcal{O}_{l,p^{2}}\times\mathcal{O}_{l,p^{l-1}}$,
$\mathcal{O}_{k,p^{2}}\times\mathcal{O}_{k,p^{k-2}}\xhookrightarrow{1-1}\mathcal{O}_{l,p^{2}}\times\mathcal{O}_{l,p^{l-2}}$,
⋮
Thus, for $\tau$, we observe that the relations $r\leq r^{\prime}$ and $s\leq
s^{\prime}$ hold. Similarly these relations can be verified for other graph
homomorphisms.
Suppose to the contrary that $r>r^{\prime}$ and $s>s^{\prime}$. Then $(a,b)$
does not degenerate to $(a^{\prime},b^{\prime})$, since by Lemma (2), $a$ and
$b$ degenerate to $a^{\prime}$ and $b^{\prime}$ if and only if $r\leq
r^{\prime}$ and $s\leq s^{\prime}$, a contradiction. Further, if under any
graph homomorphism the elements of the orbits
$\mathcal{O}_{k,p^{r}}\times\mathcal{O}_{k,p^{s}}$ are mapped to elements of
$\mathcal{O}_{l,p^{r^{\prime}}}\times\mathcal{O}_{l,p^{s^{\prime}}}$ with
$r>r^{\prime}$ and $s>s^{\prime}$, then it follows that for some $1\leq s\leq
k-1$, $\mathcal{O}_{k,p^{s}}$ is mapped to $\mathcal{O}_{l,p^{l}}$, again a
contradiction.
Case II: $k=l=2t+1$, $t\in\mathbb{Z}_{>0}$. The independent and complete parts
of the graphs realized by the source and target groups are
$X=\dot{\bigcup}_{i=0}^{t}\mathcal{O}_{k,p^{i}}$,
$Y=\dot{\bigcup}_{j=1}^{t+1}\mathcal{O}_{k,p^{t+j}}$ and
$X^{\prime}=\dot{\bigcup}_{i=0}^{t}\mathcal{O}_{l,p^{i}}$,
$Y^{\prime}=\dot{\bigcup}_{j=1}^{t+1}\mathcal{O}_{l,p^{t+j}}$. The rest of the
proof in this case follows by the same argument as in the even case.
Finally, if we consider the cases $(k,l)=(2t,2t+1)$ or $(k,l)=(2t+1,2t)$, then
these cases can be handled in the same manner as above. ∎
Figure 2: Pictorial sketch of degeneration
Note that in Figure (2), the graph on the left hand side is the graph realized
by $\mathbb{Z}/2^{3}\mathbb{Z}$ and the graph on the right hand side is
realized by $\mathbb{Z}/2^{5}\mathbb{Z}$.
## 4 Partial orders on $\mathcal{T}_{p_{1}\cdots p_{n}}$
In this section, we study some relations on the set $\mathcal{T}_{p_{1}\cdots
p_{n}}$ of all graphs realized by finite abelian $p_{r}$-groups of rank $1$,
where each $p_{r}$, $1\leq r\leq n$, is a prime number. We discuss equivalent
forms of the partial order “degeneration” on $\mathcal{T}_{p_{1}\cdots p_{n}}$
and obtain a locally finite distributive lattice of finite abelian groups.
Threshold graphs play an essential role in graph theory as well as in several
applied areas which include psychology and computer science [12]. These graphs
were introduced by Chvátal and Hammer [7] and Henderson and Zalcstein [10].
A vertex in a graph $\Gamma$ is called dominating if it is adjacent to every
other vertex of $\Gamma$. A graph $\Gamma$ is called a threshold graph if it
is obtained by the following procedure.
Start with $K_{1}$, a single vertex, and use any of the following steps, in
any order, an arbitrary number of times.
(i) Add an isolated vertex.
(ii) Add a dominating vertex, that is, add a new vertex and make it adjacent
to each existing vertex.
It is always interesting to determine the classes of threshold graphs, since
we may represent a threshold graph on $n$ vertices using a binary code
$(b_{1},b_{2},\cdots,b_{n})$, where $b_{i}=0$ if vertex $v_{i}$ is being added
as an isolated vertex and $b_{i}=1$ if $v_{i}$ is being added as a dominating
vertex. Furthermore, using the concept of creation sequences one can establish
the nullity, the multiplicities of some non-zero eigenvalues, and the Laplacian
eigenvalues of a threshold graph. The Laplacian eigenvalues of $\Gamma$ are
the eigenvalues of a matrix $D(\Gamma)-A(\Gamma)$, where $D(\Gamma)$ is the
diagonal matrix of vertex degrees and $A(\Gamma)$ is the familiar $(0,1)$
adjacency matrix of $\Gamma$.
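The two-step construction above is easy to mechanize. The following sketch
(our illustration) builds a threshold graph from a binary creation code
$(b_{1},b_{2},\cdots,b_{n})$, starting from $K_{1}$; the code used at the end
is an arbitrary example.

```python
def threshold_graph(code):
    """Build a threshold graph from a creation sequence:
    b_i = 0 adds vertex i as isolated, b_i = 1 adds it as dominating."""
    adj = {0: set()}                         # start with K_1, a single vertex
    for i, b in enumerate(code, start=1):
        adj[i] = set()
        if b == 1:                           # dominating: join to all existing vertices
            for v in range(i):
                adj[i].add(v)
                adj[v].add(i)
    return adj

g = threshold_graph((0, 0, 1, 0, 1))         # an arbitrary creation code
print({v: sorted(nbrs) for v, nbrs in g.items()})
```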
The authors in [13] confirmed that the graph realized by a finite abelian
$p$-group of rank $1$ is a threshold graph. In fact, they proved the following
intriguing result for finite abelian $p$-groups of rank $1$.
###### Theorem 8.
[13]. If $G$ is a finite abelian $p$-group of rank $1$, then $\Gamma(G)$ is a
threshold graph.
Let $p_{1}<p_{2}<\cdots<p_{n}$ be a sequence of primes and let
$\lambda_{i}=(\lambda_{i,1},\lambda_{i,2},\cdots,\lambda_{i,n})$, $1\leq i\leq
n$, be a sequence of partitions of positive integers. For each prime
$p_{t}$, where $1\leq t\leq n$, the sequences of finite abelian $p_{t}$-groups
with respect to partitions $\lambda_{i,1},\lambda_{i,2},\cdots,\lambda_{i,n}$
are listed as follows,
$G_{\lambda_{1},p_{1}}=\mathbb{Z}/p_{1}^{\lambda_{1,1}}\mathbb{Z}\oplus\mathbb{Z}/p_{1}^{\lambda_{1,2}}\mathbb{Z}\oplus\cdots\oplus\mathbb{Z}/p_{1}^{\lambda_{1,n}}\mathbb{Z}$,
$G_{\lambda_{2},p_{2}}=\mathbb{Z}/p_{2}^{\lambda_{2,1}}\mathbb{Z}\oplus\mathbb{Z}/p_{2}^{\lambda_{2,2}}\mathbb{Z}\oplus\cdots\oplus\mathbb{Z}/p_{2}^{\lambda_{2,n}}\mathbb{Z}$,
⋮
Fix a prime $p_{r}$, where $1\leq r\leq n$. Then for each distinct power
$\lambda_{i,j}$, $1\leq i,j\leq n$, it follows from Theorem (8) that the
members of the sequence of graphs realized by a sequence of finite abelian
$p_{r}$-groups of rank $1$ are threshold graphs. The sets of orbits of the
symmetric group action on the sequence of finite abelian $p_{r}$-groups
$\mathbb{Z}/p_{r}^{\lambda_{r,1}}\mathbb{Z},\mathbb{Z}/p_{r}^{\lambda_{r,2}}\mathbb{Z},\cdots,\mathbb{Z}/p_{r}^{\lambda_{r,n}}\mathbb{Z}$
of rank $1$ are:
$\\{\mathcal{O}_{r,1},\mathcal{O}_{r,p_{r}^{1}}\\}$,
$\\{\mathcal{O}_{r,1},\mathcal{O}_{r,p_{r}^{1}},\mathcal{O}_{r,p_{r}^{2}}\\}$,
$\\{\mathcal{O}_{r,1},\mathcal{O}_{r,p_{r}^{1}},\mathcal{O}_{r,p_{r}^{2}},\mathcal{O}_{r,p_{r}^{3}}\\}$,
⋮
Note that $\lambda_{r,1}=1,\lambda_{r,2}=2,\lambda_{r,3}=3,\cdots$ in the
above sequence of finite abelian $p_{r}$-groups.
Thus for each prime $p_{r}$ and positive integer $\lambda_{i,j}$, we have
sequences of threshold graphs realised by sequences of abelian $p_{r}$-groups.
The degree sequence of a graph $\Gamma$ is given by
$\pi(\Gamma)=(d_{1},d_{2},\cdots,d_{n})$, which is the non-increasing sequence
of non-zero degrees of vertices of $\Gamma$.
For a graph $\Gamma$ of order $n$ and size $m$, let
$d=[d_{1},d_{2},\cdots,d_{n}]$ be a sequence of non-negative integers arranged
in non-increasing order, which we refer to as a partition of $2m$. Define the
transpose of the partition as $d^{*}=[d_{1}^{*},d_{2}^{*},\cdots,d_{r}^{*}]$,
where $d_{j}^{*}=|\\{d_{i}:d_{i}\geq j\\}|$, $j=1,2,\cdots,r$. Therefore
$d_{j}^{*}$ is the number of $d_{i}$’s that are greater than or equal to $j$.
Recall from [2] that the sequence $d^{*}$ is called the conjugate sequence of
$d$. Another interpretation of a conjugate sequence is via the Ferrers diagram
(or Young diagram), denoted by $Y(d)$: corresponding to
$d_{1},d_{2},\cdots,d_{n}$, it consists of $n$ left-justified rows of boxes,
where the $i^{th}$ row consists of $d_{i}$ boxes (blocks), $i=1,2,\cdots,n$.
Note that $d_{i}^{*}$ is the number of boxes in the $i^{th}$ column of the
Young diagram, $i=1,2,\cdots,r$. An immediate consequence of this observation
is that if $d^{*}$ is the conjugate sequence of $d$, then
$\sum\limits_{i=1}^{n}d_{i}=\sum\limits_{i=1}^{r}d_{i}^{*}.$
If $d$ represents the degree sequence of a graph, then the number of boxes in
the $i^{th}$ row of the Young diagram is the degree of vertex $i$, while the
number of boxes in the $i^{th}$ row of the Young diagram of the transpose is
the number of vertices with degree at least $i$. The trace of a Young diagram
$tr(Y(d))$ is $tr(Y(d))=|\\{i:d_{i}\geq i\\}|=tr(Y(d^{*}))$, which is the
length of “diagonal” of the Young diagram for $d$ (or $d^{*}$).
The degree sequence is a graph invariant, so two isomorphic graphs have the
same degree sequence. In general, the degree sequence does not uniquely
determine a graph, that is, two non-isomorphic graphs can have the same degree
sequence. However, for threshold graphs, we have the following result.
###### Proposition 4.1 ([15]).
Let $\Gamma_{1}$ and $\Gamma_{2}$ be two threshold graphs and let
$\pi_{1}(\Gamma_{1})$ and $\pi_{2}(\Gamma_{2})$ be degree sequences of
$\Gamma_{1}$ and $\Gamma_{2}$ respectively. If
$\pi_{1}(\Gamma_{1})=\pi_{2}(\Gamma_{2})$, then $\Gamma_{1}\cong\Gamma_{2}$.
The Laplacian spectrum of a threshold graph $\Gamma$, which we denote by $\ell-
spec(\Gamma)$, has been studied in [9, 14]. In [9], the formulas for the
Laplacian spectrum, the Laplacian polynomial, and the number of spanning trees
of a threshold graph are given. It is shown that the degree sequence of a
threshold graph and the sequence of eigenvalues of its Laplacian matrix are
“almost the same” and on this basis, formulas are given to express the
Laplacian polynomial and the number of spanning trees of a threshold graph in
terms of its degree sequence.
The following is a fascinating result regarding the Laplacian eigenvalues of
the graph realized by a finite abelian $p$-group of rank $1$.
###### Theorem 9.
[13]. Let $\Gamma(G)$ be the graph realized by a finite abelian $p$-group of
the type $G=\mathbb{Z}/p^{k}\mathbb{Z}$. Then the representatives
$0,1,p,p^{2},\cdots,p^{k-1}$ (with multiplicities) of orbits
$\\{\mathcal{O}_{k,p^{k}}\\}\cup\\{\mathcal{O}_{k,p^{i}}:0\leq i\leq k-1\\}$
of symmetric action on $G$ are the Laplacian eigenvalues of $\Gamma(G)$, that
is, $\ell-spec(\Gamma(G))=\\{0,1,p,p^{2},\cdots,p^{k-1},p^{k}\\}$.
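Theorem 9 can be checked numerically for small cases. The sketch below (our
addition) builds $\Gamma(\mathbb{Z}/p^{k}\mathbb{Z})$ from the orbit-level
adjacency rule $i+j\geq k$ and computes the Laplacian eigenvalues; for $p=2$,
$k=3$ the set of eigenvalues is $\{0,1,2,4,8\}$, as the theorem asserts.

```python
import numpy as np

p, k = 2, 3
N = p ** k

def level(x):
    if x == 0:
        return k
    i = 0
    while x % p == 0:
        x //= p
        i += 1
    return i

A = np.zeros((N, N))
for x in range(N):
    for y in range(x + 1, N):
        if level(x) + level(y) >= k:          # adjacency in Gamma(Z/p^k Z)
            A[x, y] = A[y, x] = 1.0
L = np.diag(A.sum(axis=1)) - A                # Laplacian D - A
eig = np.round(np.linalg.eigvalsh(L), 8)
print(sorted(eig))                            # multiset of Laplacian eigenvalues
print(sorted(set(eig)))                       # expected: [0, 1, p, ..., p^(k-1), p^k]
```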
###### Definition 10.
Let $\pi=(\pi_{1},\pi_{2},\cdots,\pi_{n})$ be a sequence (partition) of
eigenvalues of a graph $\Gamma$, and let
$\pi^{\bullet}=(\pi_{1}^{\bullet},\pi_{2}^{\bullet},\cdots,\pi_{n}^{\bullet})$
be the corresponding degree partition. Then $\pi$ is said to be a threshold
eigenvalues sequence (partition) if $\pi_{i}=\pi_{i}^{\bullet}+1$ for all $i$
with $1\leq i\leq tr(Y(\pi))$.
For convenience, we refer to the Laplacian eigenvalues simply as eigenvalues.
The sequence of representatives of orbits (or eigenvalues of
$\Gamma(\mathbb{Z}/p^{k}\mathbb{Z})$) of the symmetric action on the group
$\mathbb{Z}/p^{k}\mathbb{Z}$ obtained in Theorem (9) represents the transpose
of the Young diagram $Y(d)$, where $d$ is the degree sequence of the graph
realized by $\mathbb{Z}/p^{k}\mathbb{Z}$.
For the group $G=\mathbb{Z}/2^{4}\mathbb{Z}$, the degree sequence $\sigma$ of
$\Gamma(G)$ is
$\sigma=\pi^{\bullet}=(15,7,3,3,2,2,2,2,1,1,1,1,1,1,1,1).$
The conjugate sequence of $\sigma$ is,
$\sigma^{*}=\pi=(2^{4},2^{3},2^{2},2,2,2,2,1,1,1,1,1,1,1,1).$
The partition $\pi$ of eigenvalues of $\Gamma(G)$ is a threshold eigenvalues
partition, since $\pi_{i}=\pi_{i}^{\bullet}+1$ for $1\leq i\leq 3$. Note that
$tr(Y(\pi))=3$; the three diagonal blocks in $Y(\sigma^{*})=Y(\pi)$ are shown
as $t_{11},t_{22},t_{33}$ before the darkened column in Figure (3) below.
Figure 3: $Y(\pi)$
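The conjugate sequence and the trace used here are mechanical to compute. The
sketch below (our addition) recovers $\sigma^{*}$ and $tr(Y(\pi))=3$ from the
degree sequence $\sigma$ above.

```python
def conjugate(d):
    """Conjugate (transpose) partition of a non-increasing sequence d."""
    return [sum(1 for x in d if x >= j) for j in range(1, max(d) + 1)]

def trace(d):
    """tr(Y(d)) = |{i : d_i >= i}| with 1-based indexing."""
    return sum(1 for i, x in enumerate(d, start=1) if x >= i)

sigma = [15, 7, 3, 3, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1]  # degree sequence of Gamma(Z/2^4 Z)
print(conjugate(sigma))  # expected: [16, 8, 4, 2, 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1]
print(trace(sigma))      # expected: 3
```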
Thus from the above discussion we assert that a partition $\pi$ of eigenvalues
is a threshold eigenvalues partition if and only if $Y(\pi)$ can be decomposed
into a $tr(Y(\pi))\times tr(Y(\pi))$ array of blocks in the upper left-hand
corner, called the trace square of $Y(\pi)$, together with a column of
$tr(Y(\pi))$ blocks placed immediately to the right of the trace square
(darkened in Figure (3)), and a piece of blocks to the right of column
$tr(Y(\pi))+1$ which is the transpose of the piece below the trace square.
Let $a=(a_{1},a_{2},\cdots,a_{r})$ and $b=(b_{1},b_{2},\cdots,b_{s})$ be
non-increasing sequences of real numbers. Then $b$ weakly majorizes $a$,
written as $b\succeq a$, if $r\geq s$,
$\sum\limits_{i=1}^{k}b_{i}\geq\sum\limits_{i=1}^{k}a_{i},$ (1)
where $1\leq k\leq s$, and
$\sum\limits_{i=1}^{s}b_{i}\geq\sum\limits_{i=1}^{r}a_{i}.$ (2)
If $b$ weakly majorizes $a$ and equality holds in (2), then $b$ majorizes $a$,
written as $b\succ a$.
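These majorization conditions translate directly into a few lines of code. The
following sketch (our addition, anticipating the partitions $\pi_{1}$ and
$\pi_{2}$ of the example below) checks weak majorization and majorization for
non-increasing integer sequences.

```python
def weakly_majorizes(b, a):
    """b weakly majorizes a: len(a) >= len(b), prefix sums of b dominate
    those of a up to len(b) (Eq. (1)), and sum(b) >= sum(a) (Eq. (2))."""
    s, r = len(b), len(a)
    if r < s:
        return False
    if any(sum(b[:k]) < sum(a[:k]) for k in range(1, s + 1)):
        return False
    return sum(b) >= sum(a)

def majorizes(b, a):
    """b majorizes a: weak majorization with equality of total sums."""
    return weakly_majorizes(b, a) and sum(b) == sum(a)

pi1 = [8, 4, 2, 1, 1, 1, 1]            # threshold eigenvalues partition of Gamma(Z/2^3 Z)
pi2 = [8, 2, 2, 1, 1, 1, 1, 1, 1]      # degree partition of Gamma(Z/3^2 Z)
print(majorizes(pi1, pi2))             # expected: True
```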
We present an example which illustrates that the threshold eigenvalues
partition of some graph realized by a finite abelian $p$-group $G_{1}$
majorizes the degree partition of the graph realized by some other finite
abelian $p$-group $G_{2}$.
Let $G_{1}=\mathbb{Z}/2^{3}\mathbb{Z}$ and $G_{2}=\mathbb{Z}/3^{2}\mathbb{Z}$
be two groups. The degree partitions $\pi_{1}^{\bullet}$ and $\pi_{2}$ of the
graphs $\Gamma(G_{1})$ and $\Gamma(G_{2})$ are listed below:
$\pi_{1}^{\bullet}=(7,3,2,2,1,1,1,1),$
$\pi_{2}=(8,2,2,1,1,1,1,1,1).$
The partitions $\pi_{1}^{\bullet},\pi_{2}\in\mathcal{P}(18)$, where
$\mathcal{P}(18)$ is the set of all partitions of $18$. The partition
$\pi_{1}=(8,4,2,1,1,1,1)$ is the threshold eigenvalues partition of
$\Gamma(G_{1})$. The Young diagrams of partitions $\pi_{1}$ and $\pi_{2}$ are
shown in Figure (4).
Figure 4: Young diagrams of $\pi_{1}$ and $\pi_{2}$
Let $\pi^{\bullet}$ and $\sigma$ be two degree sequences of graphs realized by
finite abelian $p$-groups of rank $1$ such that $\pi^{\bullet},\sigma\vdash
m$, where $m\in\mathbb{Z}_{>0}$. Then $\pi^{\bullet}\succ\sigma$ if and only
if $Y(\pi^{\bullet})$ can be obtained from $Y(\sigma)$ by moving blocks of the
highest row in $Y(\sigma)$ to lower numbered rows. Thus majorization induces a
partial order on the sets $\{Y(\pi^{\bullet}):\pi^{\bullet}$ is a degree
sequence of some graph realized by a $p$-group of rank $1\}$ and
$\{Y(\pi^{\bullet}):\pi^{\bullet}\vdash n,n\in\mathbb{Z}_{>0}\}$.
###### Corollary 11.
If $\pi,\sigma\in\mathcal{P}(n)$, $n\in\mathbb{Z}_{>0}$, then $\pi\succ\sigma$
if and only if $Y(\pi)$ can be obtained from $Y(\sigma)$ by moving blocks of
the highest row in $Y(\sigma)$ to lower numbered rows.
###### Theorem 12.
Let $\mathcal{T}_{p_{1}\cdots p_{n}}$ be the collection of all graphs realized
by all sequences of finite abelian $p_{r}$-groups, where $1\leq r\leq n$. If
$\pi$ is a threshold eigenvalues partition, then, up to isomorphism, there is
exactly one finite abelian $p_{r}$-group $G$ of rank $1$ such that $\ell-
spec(\Gamma(G))\setminus\{0\}=\pi$.
###### Proof.
Let
$\left(\Gamma(\mathbb{Z}/p_{r}^{\lambda_{r,1}}\mathbb{Z}),\Gamma(\mathbb{Z}/p_{r}^{\lambda_{r,2}}\mathbb{Z}),\cdots,\Gamma(\mathbb{Z}/p_{r}^{\lambda_{r,n}}\mathbb{Z})\right)\in\mathcal{T}_{p_{1}\cdots
p_{n}}$ be a sequence of graphs realized by a sequence of finite abelian
$p_{r}$-groups
$\left(\mathbb{Z}/p_{r}^{\lambda_{r,1}}\mathbb{Z},\mathbb{Z}/p_{r}^{\lambda_{r,2}}\mathbb{Z},\cdots,\mathbb{Z}/p_{r}^{\lambda_{r,n}}\mathbb{Z}\right)$.
Let $\pi$ be a threshold eigenvalues partition of some graph in the sequence;
without loss of generality, let it be the graph realized by the finite abelian
$p_{r}$-group $\mathbb{Z}/p_{r}^{\lambda_{r,r}}\mathbb{Z}$. The partition
$\pi$ is represented by the Young diagram $Y(\pi)$, and the Young diagram for
the abelian $p_{r}$-group of type
$\mathbb{Z}/p_{r}^{\lambda_{r,r-1}}\mathbb{Z}$ can be obtained from $Y(\pi)$
by removing some blocks from the rows and columns of $Y(\pi)$. The proof now
follows by induction on the terms of the sequence of graphs. ∎
For $1\leq i\leq j\leq n$, let $G$ be a finite abelian $p_{i}$-group of rank
$1$ and $H$ be a finite abelian $p_{j}$-group of the same rank. Moreover, let
$\Gamma(G)$ and $\Gamma(H)$ be two graphs realized by $G$ and $H$. We define a
partial order “$\leq$” on $\mathcal{T}_{p_{1}\cdots p_{n}}$. Graphs
$\Gamma(G),\Gamma(H)\in\mathcal{T}_{p_{1}\cdots p_{n}}$ are related as
$\Gamma(G)\leq\Gamma(H)$ if and only if $\Gamma(H)$ contains a subgraph
isomorphic to $\Gamma(G)$, that is, if and only if $\Gamma(G)$ can be obtained
from $\Gamma(H)$ by “deletion of vertices”.
The relation “degeneration” on the set $\mathcal{T}_{p_{1}\cdots p_{n}}$
descends to a partial order on $\mathcal{T}_{p_{1}\cdots p_{n}}$ and two
graphs $\Gamma(G)$, $\Gamma(H)$ are related if $\Gamma(G)$ degenerates to
$\Gamma(H)$. It is not hard to verify that the partial orders “$\leq$” and
“degeneration” are equivalent on $\mathcal{T}_{p_{1}\cdots p_{n}}$, since by
“deletion of vertices” in $\Gamma(H)$ we get the homomorphic image of
$\Gamma(G)$ in $\Gamma(H)$ and if $\Gamma(G)$ degenerates to $\Gamma(H)$, then
$\Gamma(G)$ can be obtained from $\Gamma(H)$ by “deletion of vertices”.
Recall that a poset $P$ is locally finite if the interval $[x,z]=\\{y\in
P:x\leq y\leq z\\}$ is finite for all $x,z\in P$. If $x,z\in P$ and
$[x,z]=\\{x,z\\}$, then $z$ covers $x$. A Hasse diagram of $P$ is a graph
whose vertices are the elements of $P$, whose edges are the cover relations,
and such that $z$ is drawn “above” $x$ whenever $x<z$.
A lattice is a poset $P$ in which every pair of elements $x,y\in P$ has a
least upper bound (or join), $x\vee y\in P$, and a greatest lower bound (or
meet), $x\wedge y\in P$. Lattice $P$ is distributive if $x\wedge(y\vee
z)=(x\wedge y)\vee(x\wedge z)$ and $x\vee(y\wedge z)=(x\vee y)\wedge(x\vee z)$
for all $x,y,z\in P$.
Let $\mathcal{Y}$ be the set of all threshold eigenvalues partitions of
members of $\mathcal{T}_{p_{1}\cdots p_{n}}$. If $\mu,\eta\in\mathcal{Y}$,
define $\mu\leq\eta$ if $Y(\mu)$ “fits in” $Y(\eta)$, that is, if $Y(\mu)$ can
be placed inside $Y(\eta)$. The set $\mathcal{Y}$ with respect to this partial
ordering is a locally finite distributive lattice. The unique smallest element
of $\mathcal{Y}$ is $\hat{0}=\emptyset$, the empty set.
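As an illustration (a sketch of the standard containment order, not taken from the source), “fits in” amounts to a row-by-row comparison of the Young diagrams, and the join and meet that make $\mathcal{Y}$ a distributive lattice are the componentwise maximum and minimum of the parts:

```python
def fits_in(mu, eta):
    """mu <= eta in the 'fits in' order: row i of Y(mu) is no longer
    than row i of Y(eta), for every i."""
    n = max(len(mu), len(eta))
    mu = tuple(mu) + (0,) * (n - len(mu))
    eta = tuple(eta) + (0,) * (n - len(eta))
    return all(a <= b for a, b in zip(mu, eta))

def join(mu, eta):   # least upper bound: the union of the two diagrams
    n = max(len(mu), len(eta))
    mu = tuple(mu) + (0,) * (n - len(mu))
    eta = tuple(eta) + (0,) * (n - len(eta))
    return tuple(max(a, b) for a, b in zip(mu, eta))

def meet(mu, eta):   # greatest lower bound: the intersection of the diagrams
    n = max(len(mu), len(eta))
    mu = tuple(mu) + (0,) * (n - len(mu))
    eta = tuple(eta) + (0,) * (n - len(eta))
    return tuple(m for m in map(min, mu, eta) if m > 0)

assert fits_in((2, 1), (3, 2, 1))
assert join((3, 1), (2, 2)) == (3, 2)
assert meet((3, 1), (2, 2)) == (2, 1)
```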
Recall that the dual of a poset $P$ is the poset $P^{*}$ on the same set as
$P$, such that $x\leq y$ in $P^{*}$ if and only if $y\leq x$ in $P$. If $P$ is
isomorphic to $P^{*}$, then $P$ is self-dual.
###### Theorem 13.
If $\Gamma(G),\Gamma(H)\in\mathcal{T}_{p_{1}\cdots p_{n}}$, then
$\Gamma(G)\leq\Gamma(H)$ if and only if $Y(\mu)$ “fits in” $Y(\eta)$, where
$\mu$ and $\eta$ are the threshold eigenvalues partitions of the graphs
$\Gamma(G)$ and $\Gamma(H)$.
###### Proof.
If $\Gamma(G)$ is obtained from $\Gamma(H)$ by deletion of one or more
vertices, then the terms in the threshold eigenvalues partition $\mu$ are less
in number than the terms in the threshold eigenvalues partition $\eta$ of
$\Gamma(H)$. It follows that $Y(\mu)$ “fits in” $Y(\eta)$.
Conversely, suppose $Y(\mu)$ “fits in” $Y(\eta)$. The threshold eigenvalues
partitions $\mu$ and $\eta$ are obtained from degree sequences of $\Gamma(G)$
and $\Gamma(H)$. If $\Gamma(G)$ and $\Gamma(H)$ have same degree sequence,
then $\mu=\eta$. Therefore by Proposition (4.1), $\Gamma(G)\cong\Gamma(H)$.
Otherwise, $\mu\neq\eta$. Let $\Gamma(K)$ be a subgraph of $\Gamma(H)$
obtained by removing a pendant vertex from $\Gamma(H)$, and let
$\eta^{\prime}$ be its threshold eigenvalues partition. Then
$Y(\eta^{\prime})$ is obtained from $Y(\eta)$ by removing a single block from
the string whose number of blocks equals the largest eigenvalue in $\eta$. It
is clear that $Y(\eta^{\prime})$ “fits in” $Y(\eta)$. We continue the process
of deletion of vertices until the resulting graph has the same threshold
eigenvalues partition as $\Gamma(G)$. Thus, it follows that $\Gamma(H)$
contains a subgraph isomorphic to $\Gamma(G)$, that is,
$\Gamma(G)\leq\Gamma(H)$. ∎
###### Corollary 14.
The sets $\mathcal{T}_{p_{1}\cdots p_{n}}$ and $\mathcal{Y}$ are isomorphic to
each other (as posets).
###### Proof.
The bijection $\Gamma(G)\longrightarrow Y(\mu)$ is a poset isomorphism from
$\mathcal{T}_{p_{1}\cdots p_{n}}$ onto $\mathcal{Y}$, where $\mu$ is the
threshold eigenvalues partition of the graph
$\Gamma(G)\in\mathcal{T}_{p_{1}\cdots p_{n}}$ realised by a finite abelian
$p_{r}$-group of rank $1$. ∎
For $n\geq 1$, let $\mathcal{F}_{n}$ be the collection of all connected
threshold graphs on $n$ vertices. We extend the partial order “$\leq$” to
$\mathcal{F}_{n}$. Two graphs $G_{1},G_{2}\in\mathcal{F}_{n}$ are related as
$G_{1}\leq G_{2}$ if and only if $G_{1}$ is isomorphic to a subgraph of
$G_{2}$. It is not difficult to verify that the poset
$\mathcal{T}_{p_{1}\cdots p_{n}}$ is an induced subposet of $\mathcal{F}_{n}$
and that $\mathcal{F}_{n}$ is a self-dual distributive lattice. Moreover, if
$\mathcal{H}_{n}$ is the collection of threshold eigenvalues partitions of
members of $\mathcal{F}_{n}$, then it is easy to verify that
$\mathcal{H}_{n}$ is a poset with respect to the partial order “fits in”, and
we have the following observation relating the posets $\mathcal{F}_{n}$ and
$\mathcal{H}_{n}$.
###### Corollary 15.
The bijection $G\longrightarrow Y(\mu)$ is a poset isomorphism from
$\mathcal{F}_{n}$ to $\mathcal{H}_{n}$, where $\mu$ is the threshold
eigenvalues partition of $G\in\mathcal{F}_{n}$. In particular,
$\mathcal{H}_{n}$ is a self-dual distributive lattice.
Now, we focus on sub-sequences (sub-partitions) of a threshold eigenvalues
partition. We begin by dividing $Y(\pi)$ into two disjoint pieces of blocks,
where $\pi$ is a threshold eigenvalues partition of a graph
$\Gamma(G)\in\mathcal{T}_{p_{1}\cdots p_{n}}$. We denote by $R(Y(\pi))$ those
blocks of $Y(\pi)$ which lie on the diagonal of the trace square of $Y(\pi)$
or to its right, and by $C(Y(\pi))$ those blocks of $Y(\pi)$ which lie
strictly below the diagonal of the trace square; that is, $R(Y(\pi))$ is the
piece of $Y(\pi)$ on or above the diagonal and $C(Y(\pi))$ is the piece of
$Y(\pi)$ strictly below the diagonal. This process of division is illustrated
in Figure (5).
Figure 5: Division of $Y(\pi)$
Looking more closely at these shifted divisions of $Y(\pi)$, each successive
row of $R(Y(\pi))$ is shifted one block to the right. Furthermore, the
sub-partition of $\pi$ corresponding to $R(Y(\pi))$ forms a strictly
decreasing sequence, that is, the terms of the sub-partition are distinct;
sub-partitions with distinct terms are called strict threshold eigenvalues
partitions. Thus, if $\pi^{\prime}=(a_{1},a_{2},\cdots,a_{n})$ is a strict
threshold eigenvalues partition of a threshold eigenvalues partition $\pi$,
then there is a unique shifted division whose $i^{th}$ row contains $a_{i}$
blocks, where $1\leq i\leq n$. It follows that there is a one-to-one
correspondence between the set of all threshold eigenvalues partitions of
members of $\mathcal{T}_{p_{1}\cdots p_{n}}$ and the set of all strict
threshold eigenvalues partitions. As a result, $\mathcal{Y}$ is identified
with a lattice which we call the lattice of shifted divisions.
Recall that a subset $A$ of a poset $P$ is a chain if any two elements of $A$
are comparable in $P$. A chain is called saturated if there do not exist
$x,z\in A$ and $y\in P\setminus{A}$ such that $y$ lies in between $x$ and $z$.
In a locally finite lattice, a chain $\\{x_{0},x_{1},\cdots,x_{n}\\}$ of
length $n$ is saturated if and only if $x_{i}$ covers $x_{i-1}$, where $1\leq
i\leq n$.
Since $\mathcal{T}_{p_{1}\cdots p_{n}}$ is a locally finite distributive
lattice, it has a unique rank function
$\Psi:\mathcal{T}_{p_{1}\cdots p_{n}}\longrightarrow\mathbb{Z}_{>0}$,
where
$\Psi\left(\Gamma(\mathbb{Z}/p_{r}^{\lambda_{r,1}}\mathbb{Z}),\cdots,\Gamma(\mathbb{Z}/p_{r}^{\lambda_{r,n}}\mathbb{Z})\right)$
is the length of any saturated chain from $\hat{0}$ to the graph realized by a
finite abelian $p_{r}$-group $\mathbb{Z}/p_{r}^{\lambda_{r,n}}\mathbb{Z}$.
Note that a finite abelian $p_{r}$-group of rank $n$,
$G_{\lambda_{r},p_{r}}=\mathbb{Z}/p_{r}^{\lambda_{r,1}}\mathbb{Z}\oplus\mathbb{Z}/p_{r}^{\lambda_{r,2}}\mathbb{Z}\oplus\cdots\oplus\mathbb{Z}/p_{r}^{\lambda_{r,n}}\mathbb{Z}$,
is identified with a sequence of abelian $p_{r}$-groups of rank $1$,
$\left(\mathbb{Z}/p_{r}^{\lambda_{r,1}}\mathbb{Z},\mathbb{Z}/p_{r}^{\lambda_{r,2}}\mathbb{Z},\cdots,\mathbb{Z}/p_{r}^{\lambda_{r,n}}\mathbb{Z}\right)$,
which in turn is identified with a sequence of graphs
$\left(\Gamma(\mathbb{Z}/p_{r}^{\lambda_{r,1}}\mathbb{Z}),\Gamma(\mathbb{Z}/p_{r}^{\lambda_{r,2}}\mathbb{Z}),\cdots,\Gamma(\mathbb{Z}/p_{r}^{\lambda_{r,n}}\mathbb{Z})\right)$
or with a sequence of threshold partitions
$(\mu_{1},\mu_{2},\cdots,\mu_{n})\in\mathcal{Y}$. Therefore, the
correspondence of $G_{\lambda_{r},p_{r}}$ to
$(\mu_{1},\mu_{2},\cdots,\mu_{n})$ establishes that every finite abelian
$p_{r}$-group of rank $n$ can be identified with a saturated chain in
$\mathcal{T}_{p_{1}\cdots p_{n}}$ or $\mathcal{Y}$, and the rank function of
each abelian $p_{r}$-group of rank $n$ is
$\Psi(\mu_{1},\mu_{2},\cdots,\mu_{n})=\lambda_{r,n}=\max\\{\lambda_{r,i}:1\leq i\leq n\\}$.
###### Remark 4.2.
Let $\Lambda_{q}$ be the set of all non-isomorphic graphs of
$\mathcal{T}_{p_{1}\cdots p_{n}}$ with the same number of edges, say $q$ (for
instance, the graphs realized by the groups $\mathbb{Z}/2^{3}\mathbb{Z}$ and
$\mathbb{Z}/3^{2}\mathbb{Z}$ are non-isomorphic graphs with an equal number of
edges). Since there is a one-to-one correspondence between threshold
eigenvalues partitions and strict threshold eigenvalues partitions, the rank
generating function of the poset is
$\sum\limits_{q\geq 0}\kappa_{q}z^{q}=\prod\limits_{t\geq
1}(1+z^{t})=1+z+z^{2}+2z^{3}+2z^{4}+\cdots,$
where $\kappa_{q}$ is the cardinality of $\Lambda_{q}$.
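Concretely, $\kappa_{q}$ is the number of partitions of $q$ into distinct parts, and the coefficients can be generated by expanding the product $\prod_{t\geq 1}(1+z^{t})$ up to any desired degree. A minimal Python sketch (illustrative only):

```python
def distinct_part_counts(Q):
    """Coefficients k_0, ..., k_Q of prod_{t>=1} (1 + z^t), i.e. the number
    of partitions of q into distinct parts."""
    coeffs = [0] * (Q + 1)
    coeffs[0] = 1
    for t in range(1, Q + 1):          # multiply in the factor (1 + z^t)
        for q in range(Q, t - 1, -1):  # descend so each part t is used once
            coeffs[q] += coeffs[q - t]
    return coeffs

print(distinct_part_counts(8))  # [1, 1, 1, 2, 2, 3, 4, 5, 6]
```

which reproduces the expansion $1+z+z^{2}+2z^{3}+2z^{4}+\cdots$ quoted above.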
The representation of a locally finite distributive lattice
$\mathcal{T}_{235}$ is illustrated in Figure (6).
Figure 6: $\mathcal{T}_{235}$
Fix a finite abelian $p_{r}$-group $G$ and
$\Gamma(G)\in\mathcal{T}_{p_{1}\cdots p_{n}}$. Let $\ell(\Gamma(G))$ be the
number of saturated chains in $\mathcal{T}_{p_{1}\cdots p_{n}}$ from $\hat{0}$
to $\Gamma(G)$.
The following result relates the number of saturated chains in
$\mathcal{T}_{p_{1}\cdots p_{n}}$ with the degree of a projective
representation of the symmetric group $\mathcal{S}_{t}$ on $t$ symbols.
###### Corollary 16.
Let $\pi=(\pi_{1},\pi_{2},\cdots,\pi_{k})$ be a strict threshold eigenvalues
partition of some $\Gamma(G)\in\mathcal{T}_{p_{1}\cdots p_{n}}$. Then the
following holds:
$\ell(\Gamma(G))=\frac{t!}{\prod\limits_{i=1}^{tr(Y(\pi))}\lambda_{i}!}\prod\limits_{r<s}\frac{\lambda_{r}-\lambda_{s}}{\lambda_{r}+\lambda_{s}},$
(3)
where $\lambda_{i}=\pi_{i}-i$, $1\leq i\leq tr(Y(\pi))$ and
$\lambda=(\lambda_{1},\lambda_{2},\cdots,\lambda_{tr(Y(\pi))})$ is a partition
of some $t\in\mathbb{Z}_{>0}$.
###### Proof.
The right-hand side of (3) counts the number of saturated chains from
$\hat{0}$ to $\Gamma(G)$. ∎
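A direct transcription of the right-hand side of (3) is given below (an illustrative sketch; discarding the non-positive shifted parts $\lambda_{i}=\pi_{i}-i$ is my reading of the statement, not spelled out in the text):

```python
from fractions import Fraction
from math import factorial

def saturated_chains(pi):
    """Evaluate the right-hand side of Eq. (3) for a strict threshold
    eigenvalues partition pi = (pi_1, ..., pi_k), with lambda_i = pi_i - i."""
    lam = [p - i for i, p in enumerate(pi, start=1)]
    lam = [l for l in lam if l > 0]          # keep the positive shifted parts
    t = sum(lam)
    value = Fraction(factorial(t))
    for l in lam:
        value /= factorial(l)
    for r in range(len(lam)):
        for s in range(r + 1, len(lam)):
            value *= Fraction(lam[r] - lam[s], lam[r] + lam[s])
    return value

print(saturated_chains((5, 3, 2)))  # lambda = (4, 1): 5!/(4! 1!) * 3/5 = 3
```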
Note that the number of saturated chains from $\hat{0}$ to $\Gamma(G)$ in (3)
also provides a combinatorial formula for the number of finite abelian groups
of different orders.
Acknowledgement: This research project was initiated when the author visited
the School of Mathematics, TIFR Mumbai, India. I am immensely grateful to TIFR
Mumbai for all the facilities. Moreover, I would like to thank Amitava
Bhattacharya of TIFR Mumbai for useful discussions on this research work.
Declaration of competing interest.
There is no conflict of interest to declare.
Data Availability.
Data sharing not applicable to this article as no datasets were generated or
analysed during the current study.
# The “Top Priority” at the LHC
Tao Han (email: <EMAIL_ADDRESS>)
Department of Physics, University of Wisconsin, Madison, WI 53706
KITP, University of California, Santa Barbara, CA 93107
###### Abstract
The LHC will be a top-quark factory. With 80 million pairs of top quarks and
an additional 34 million single tops produced annually at the designed high
luminosity, the properties of this particle will be studied to a great
accuracy. The fact that the top quark is the heaviest elementary particle in
the Standard Model with a mass right at the electroweak scale makes it
tempting to contemplate its role in electroweak symmetry breaking, as well as
its potential as a window to unknown new physics at the TeV scale. We
summarize the expectations for top-quark physics at the LHC, and outline new
physics scenarios in which the top quark is crucially involved.
To be published as a chapter in the book of “Perspectives on the LHC”, edited
by G. Kane and A. Pierce, by World Scientific Publishing Co., 2008.
Preprint: MADPH–08–1509, NSF–KITP–08–55
## I Brief Introduction
The top quark plays a special role in the Standard Model (SM) and holds great
promise in revealing the secret of new physics beyond the SM. The theoretical
considerations include the following:
* •
With the largest Yukawa coupling $y_{t}\sim 1$ among the SM fermions, and a
mass at the electroweak scale $m_{t}\sim v/\sqrt{2}$ (the vacuum expectation
value of the Higgs field), the top quark is naturally related to electroweak
symmetry breaking (EWSB), and may reveal new strong dynamics Hill:2002ap .
* •
The largest contribution to the quadratic divergence of the SM Higgs mass
comes from the top-quark loop, which implies the immediate need for new
physics at the Terascale for a natural EW theory Giudice:2008bi , with SUSY
and Little Higgs as prominent examples.
* •
Its heavy mass opens up a larger phase space for its decay to heavy states
$Wb,\ Zq,\ H^{0,\pm}q$, etc.
* •
Its prompt decay much shorter than the QCD scale offers the opportunity to
explore the properties of a “bare quark”, such as its spin, mass, and
couplings.
Top quarks will be copiously produced at the LHC. The production and decay are
well understood in the SM. Therefore, detailed studies of the top-quark
physics can be rewarding for both testing the SM and searching for new physics
Quadt:2006jk .
## II Top Quark in The Standard Model
In the SM, the top quark and its interactions can be described by
$-{\cal L}_{SM}=m_{t}\bar{t}t+{m_{t}\over v}H\bar{t}t+g_{s}\bar{t}\gamma^{\mu}T^{a}tG_{\mu}^{a}+eQ_{t}\bar{t}\gamma^{\mu}tA_{\mu}+{g\over\cos\theta_{w}}\bar{t}\gamma^{\mu}(g_{V}+g_{A}\gamma^{5})tZ_{\mu}+{g\over\sqrt{2}}\sum_{q}^{d,s,b}V_{tq}\bar{t}\gamma^{\mu}P_{L}qW^{-}_{\mu}+h.c.$ (1)
Besides the well-determined gauge couplings at the electroweak scale, the
other measured parameters of the top quark are listed in Table 1.
Table 1: Experimental values for the top quark parameters pdg .

$m_{t}$ (pole) | $|V_{tb}|$ | $|V_{ts}|$ | $|V_{td}|$
---|---|---|---
(172.7 $\pm$ 2.8) GeV | $>0.78$ | $(40.6\pm 2.6)\times 10^{-3}$ | $(7.4\pm 0.8)\times 10^{-3}$
The large top-quark mass is important since it contributes significantly to
the electroweak radiative corrections. For instance, the one-loop corrections
to the electroweak gauge boson mass can be cast in the form
$\Delta r=-{3G_{F}m_{t}^{2}\over
8\sqrt{2}\pi^{2}\tan^{2}\theta_{W}}+{3G_{F}M_{W}^{2}\over
8\sqrt{2}\pi^{2}}\left(\ln{m_{H}^{2}\over M_{Z}^{2}}-{5\over 6}\right).$ (2)
With the $m_{t}$ value in Table 1, the best global fit in the SM yields a
Higgs mass $m_{H}=89^{+38}_{-28}$ GeV pdg . The recent combined result from
CDF and D0 at the Tevatron Run II gave the new value Brubaker:2006xn
$m_{t}=171.4\pm 2.1\ {\rm GeV}.$ (3)
The expected accuracy of $m_{t}$ measurement at the LHC is better than 1 GeV
Etienvre:2006ph , with errors dominated by the systematics.
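For orientation, Eq. (2) is easy to evaluate numerically. The sketch below is my illustration; the inputs ($G_{F}$, $\sin^{2}\theta_{W}$, $M_{W}$, $M_{Z}$) are standard values assumed here rather than quoted from this article:

```python
import math

G_F = 1.16637e-5                 # GeV^-2
s2w = 0.2312                     # sin^2(theta_W), assumed
t2w = s2w / (1.0 - s2w)          # tan^2(theta_W)
M_W, M_Z = 80.40, 91.19          # GeV
m_t, m_H = 171.4, 89.0           # GeV, the central values quoted in the text

pref = 3.0 * G_F / (8.0 * math.sqrt(2.0) * math.pi ** 2)
delta_r = (-pref * m_t ** 2 / t2w
           + pref * M_W ** 2 * (math.log(m_H ** 2 / M_Z ** 2) - 5.0 / 6.0))
print(f"Delta r = {delta_r:.4f}")  # ~ -0.03: the top-loop term dominates
```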
To directly determine the left-handed $V$-$A$ gauge coupling of the top quark
in the weak charged current, leptonic angular distributions and $W$
polarization information would be needed gordy . No direct measurements are
available yet for the electroweak neutral current couplings,
$g_{V}^{t}=T_{3}/2-Q_{t}\sin^{2}\theta_{W},\ g_{A}^{t}=-T_{3}/2$ and
$Q_{t}=+2/3$, although there are proposals to study them via the associated
production processes $t\bar{t}\gamma,\ t\bar{t}Z$ Baur:2001si . The indirect
global fits however indicate the consistency with these SM predictions pdg .
### II.1 Top-Quark Decay in the SM
Due to the absence of the flavor-changing neutral currents at tree level in
the SM (the Glashow-Iliopoulos-Maiani mechanism), the dominant decay channels
for a top quark will be through the weak charged-currents, with the partial
width given by twidth
$\Gamma(t\to W^{+}q)={|V_{tq}|^{2}m_{t}^{3}\over 16\pi
v^{2}}(1-r_{W})^{2}(1+2r_{W})\left[1-{2\alpha_{s}\over 3\pi}({2\pi^{2}\over
3}-{5\over 2})\right],$ (4)
where $r_{W}=M_{W}^{2}/m_{t}^{2}$. The subsequent decay of $W$ to the final
state leptons and light quarks is well understood. Two important features are
noted:
* •
Since $|V_{tb}|\gg|V_{td}|,|V_{ts}|$, a top quark will predominantly decay
into a $b$ quark. While $V_{ts},\ V_{td}$ may not be practically measured via
the top-decay processes, effective $b$-tagging at the Tevatron experiments has
served to put a bound on the ratio
${B(t\to Wb)\over B(t\to
Wq)}={|V_{tb}|^{2}\over{|V_{td}|^{2}+|V_{ts}|^{2}+|V_{tb}|^{2}}},$ (5)
that leads to the lower bound for $|V_{tb}|$ in Table 1.
* •
Perhaps the most significant aspect of Eq. (4) is the numerics:
$\Gamma(t\to W^{+}q)\approx 1.5\ {\rm GeV}\approx{1\over 0.5\times 10^{-24}\
{\rm s}}>\Lambda_{QCD}\sim 200\ {\rm MeV}.$
This implies that a top quark will promptly decay via weak interaction before
QCD sets in for hadronization tlife . So no hadronic bound states (such as
$\bar{t}t,\bar{t}q$, etc.) would be observed. The properties of a “bare quark”
may be accessible for scrutiny.
It is interesting to note that in the top-quark rest frame, the longitudinal
polarization of the $W$ is the dominant mode. The ratio between the two
available modes is
${\Gamma(t\to b_{L}\ W_{\lambda=0})\over\Gamma(t\to b_{L}\
W_{\lambda=-1})}={m_{t}^{2}\over 2M_{W}^{2}}.$ (6)
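As a quick numerical cross-check of Eqs. (4) and (6), the following sketch (mine, not the article's) reproduces the $\Gamma(t\to W^{+}q)\approx 1.5$ GeV estimate quoted above (the QCD factor lowers it slightly); the values of $\alpha_{s}$ and $v$ are standard choices assumed here:

```python
import math

m_t, M_W, v = 172.7, 80.40, 246.0   # GeV
V_tb, alpha_s = 1.0, 0.108          # assumed inputs

r_W = (M_W / m_t) ** 2
qcd = 1.0 - (2.0 * alpha_s / (3.0 * math.pi)) * (2.0 * math.pi ** 2 / 3.0 - 2.5)
width = (abs(V_tb) ** 2 * m_t ** 3 / (16.0 * math.pi * v ** 2)) \
        * (1.0 - r_W) ** 2 * (1.0 + 2.0 * r_W) * qcd
print(f"Gamma(t -> W b) ~ {width:.2f} GeV")            # ~1.4 GeV at NLO
print(f"Gamma_L / Gamma_T = {m_t**2/(2*M_W**2):.2f}")  # ~2.3, from Eq. (6)
```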
### II.2 Top-Quark Production in the SM
#### II.2.1 $t\bar{t}$ production via QCD
Historically, quarks were discovered via their hadronic bound states, most
notably for the charm quark via $J/\psi(\bar{c}c)$ and bottom quark via
$\Upsilon(\bar{b}b)$. Due to the prompt decay of the top quark, its production
mechanisms and search strategy are quite different from the traditional one.
Figure 1: Top-quark pair production in hadronic collisions via QCD
interaction. This figure is taken from Ref. Willenbrock:2002ta .
The leading processes are the open flavor pair production from the QCD strong
interaction, as depicted in Fig. 1. The contributing subprocesses are from
$q\bar{q},\ gg\to t\bar{t}.$ (7)
The cross sections have been calculated rather reliably to the next-to-leading
order Nason:1987xz and including the threshold resummations Laenen:1993xr ;
Bonciani:1998vc , as given in Table 2.
Table 2: Cross sections, at next-to-leading-order in QCD, for top-quark production via the strong interaction at the Tevatron and the LHC Bonciani:1998vc . Also shown is the percentage of the total cross section from the quark-antiquark-annihilation and gluon-fusion subprocesses.

| $\sigma_{\rm NLO}$ (pb) | $q\bar{q}\to t\bar{t}$ | $gg\to t\bar{t}$
---|---|---|---
Tevatron ($\sqrt{s}=1.8$ TeV $p\bar{p}$) | $4.87\pm 10\%$ | $90\%$ | $10\%$
Tevatron ($\sqrt{s}=2.0$ TeV $p\bar{p}$) | $6.70\pm 10\%$ | $85\%$ | $15\%$
LHC ($\sqrt{s}=14$ TeV $pp$) | $803\pm 15\%$ | $10\%$ | $90\%$
Largely due to the substantial gluon luminosity at higher energies, the
$t\bar{t}$ production rate is increased by more than a factor of 100 from the
Tevatron to the LHC. Assuming an annual luminosity at the LHC of $10^{34}$
cm${}^{-2}$ s${}^{-1}$, i.e., $100$ fb${}^{-1}$/year, one expects to have 80
million top pairs produced. It is truly a “top factory”. In Fig. 2(a), we plot
the invariant mass distribution, which is important to understand when
searching for new physics in the $t\bar{t}$ channel. Although the majority of
the events are produced near the threshold $m(t\bar{t})\sim 2m_{t}$, there is
still a substantial cross section even above $m(t\bar{t})\sim$ 1 TeV, about 5
pb. This is illustrated in Fig. 2(b), where the integrated cross section is
given versus a minimal cutoff on $m(t\bar{t})$ and decay branching fractions
of one top decaying hadronically and the other leptonically have been
included.
Figure 2: (a) Invariant mass distribution of $t\bar{t}$ at the LHC and (b)
integrated cross section versus a minimal cutoff on $m(t\bar{t})$. Decay
branching fractions of one top decaying hadronically and the other
leptonically ($e,\mu$) have been included.
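The annual yields quoted in this chapter follow directly from these cross sections. A one-line sanity check (an illustrative sketch; the single-top rates are those of Table 3 below):

```python
lumi_pb = 100 * 1000                   # 100 fb^-1/year = 1e5 pb^-1/year
ttbar = 803 * lumi_pb                  # Table 2: sigma(ttbar) = 803 pb
single = (10.6 + 250 + 75) * lumi_pb   # s-channel + t-channel + Wt
print(f"{ttbar:.1e} top pairs, {single:.1e} single tops per year")
# ~8.0e7 pairs and ~3.4e7 single tops: the "80 million + 34 million" above
```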
It should be noted that the forward-backward charge asymmetry of the
$t\bar{t}$ events can be generated by higher order corrections, reaching
$10-15\%$ at the partonic level from QCD Kuhn:1998jr and $1\%$ from the
electroweak Bernreuther:2005is .
#### II.2.2 Single top production via weak interaction
As discussed in the last section, the charged-current weak interaction is
responsible for the rapid decay of the top quark. In fact, it also
participates significantly in the production of the top quark as well
Willenbrock:cr . The three classes of production processes, $s$-channel Drell-
Yan, $t$-channel $Wb$ fusion, and associated $Wt$ diagrams, are plotted in
Fig. 3. Two remarks are in order:
* •
The single top production is proportional to the quark mixing element
$|V_{tb}|^{2}$ and thus provides the direct measurement for it, currently
Abazov:2006gd $0.68<|V_{tb}|\leq 1$ at the $95\%$ C.L.
* •
The $s$-channel and $t$-channel can be complementary in the search for new
physics such as a $W^{\prime}$ exchange Cao:2007ea .
For the production rates Smith:1996ij ; Stelzer:1997ns ; Zhu:uj ;
Kidonakis:2006bu ; Kidonakis:2007ej , the largest of all is the $t$-channel
$Wb$ fusion. It is nearly one third of the QCD production of the $t\bar{t}$
pair. Once again, it is mainly from the enhancement of the longitudinally
polarized $W$. The total cross sections for these processes at Tevatron
Kidonakis:2006bu and LHC energies Kidonakis:2007ej are listed in Table 3
Smith:1996ij ; Stelzer:1997ns ; Zhu:uj . We see the typical change of the
production rate from the Tevatron to the LHC: A valence-induced process (DY-
type) is increased by about an order of magnitude; while the gluon- or
$b$-induced processes are enhanced by about a factor of 100.
Figure 3: Single top-quark production in hadronic collisions via the charged-current weak interaction. This figure is taken from Ref. Willenbrock:2002ta .

Table 3: Cross sections, at next-to-leading-order in QCD, for top-quark production via the charged current weak interaction at the Tevatron and the LHC.

$\sigma({\rm pb})$ | $s$-channel | $t$-channel | $Wt$
---|---|---|---
Tevatron ($\sqrt{s}=2.0$ TeV $p\bar{p}$) | $0.90\pm 5\%$ | $2.1\pm 5\%$ | $0.1\pm 10\%$
LHC ($\sqrt{s}=14$ TeV $pp$) | $10.6\pm 5\%$ | $250\pm 5\%$ | $75\pm 10\%$
#### II.2.3 Top quark and Higgs associated production
Of fundamental importance is the measurement of the top-quark Yukawa coupling.
The direct probe to it at the LHC is via the processes Marciano:1991qq
$q\bar{q},\ gg\to t\bar{t}H.$ (8)
The cross section has been calculated to the next-to-leading-order (NLO) in
QCD Beenakker:2001rj ; Dawson:2003zu and the numerics are given in Table 4.
The cross section ranges are estimated from the uncertainty of the QCD scale.
Table 4: Total cross section at the NLO in QCD for top-quark and Higgs associated production at the LHC Dawson:2003zu .

$m_{H}$ (GeV) | 120 | 150 | 180
---|---|---|---
$\sigma$ (fb) | 634$-$719 | 334$-$381 | 194$-$222
The production rate at the LHC seems quite feasible for the signal
observation. It was claimed Desch:2004kf that a $15\%$ accuracy for the
Yukawa coupling measurement may be achievable with a luminosity of 300 fb-1.
Indeed, the decay channel $H\to\gamma\gamma$ should be useful for the search
and study in the mass range of $100<m_{H}<150$ GeV unknown:1999fr ;
Zhou:1993at . However, the potentially large backgrounds and the complex event
topology, in particular the demand on the detector performance, make the study
of the leading decay $H\to b\bar{b}$ very challenging Benedetti:2007sn .
## III New Physics in Top-quark Decay
The high production rate for the top quarks at the LHC provides a great
opportunity to seek out top-quark rare decays and search for new physics
Beyond the Standard Model (BSM). Given the annual yield of 80 million
$t\bar{t}$ events plus $34$ million single-top events, one may hope to search
for rare decays with a branching fraction as small as $10^{-6}$.
### III.1 Charged Current Decay: BSM
The most prominent examples for top-quark decay beyond the SM via charged-
currents may be the charged Higgs in SUSY or with an extended Higgs sector,
and charged technicolor particles
$t\to H^{+}b,\ \ \pi^{+}_{T}b.$ (9)
Experimental searches have been conducted at the Tevatron Abazov:2001md , and
some simulations are performed for the LHC as well Hashemi:2006qg ;
Quadt:2006jk . It is obvious that as long as those channels are kinematically
accessible and have a sizable branching fraction, the observation should be
straightforward. In fact, the top decay to a charged Higgs may well be the
leading channel for $H^{\pm}$ production.
More subtle new physics scenarios may not show up with the above easy signals.
It may be desirable to take a phenomenological approach to parameterize the
top-quark interactions beyond the SM gordy ; Tait:2000sh , and experimentally
search for the deviations from the SM. Those “anomalous couplings” can be
determined in a given theoretical framework, either from loop-induced
processes or from a new flavor structure. One can write the interaction terms
as
${\cal L}_{CC}={g\over\sqrt{2}}\left(\bar{t}(1+\delta_{L})\gamma^{\mu}P_{L}qW^{-}_{\mu}+\bar{t}\delta_{R}\gamma^{\mu}P_{R}qW^{-}_{\mu}\right)+h.c.$ (10)
The expected accuracy of the measurements on $\delta_{L,R}$ is about $1\%$
Tait:2000sh ; Quadt:2006jk , thus testing the top-quark chiral coupling.
### III.2 Neutral Current Decay: BSM
Although there are no Flavor-Changing Neutral Currents (FCNC) at tree level in
the SM, theories beyond the SM quite often have new flavor structure, most
notably for SUSY and technicolor models. New symmetries or some alignment
mechanisms will have to be utilized in order to avoid excessive FCNC. It is
nevertheless prudent to keep in mind the possible new decay modes of the top
quark such as the SUSY decay channel
$t\to\tilde{t}\tilde{\chi}^{0}.$ (11)
Generically, FCNCs can always be generated at loop level. It has been shown
that the interesting decay modes
$t\to Zc,\ \ Hc,\ \ \gamma c,\ \ gc$ (12)
are highly suppressed Eilam:1990zc ; Cao:2007dk with branching fractions
typically $10^{-13}-10^{-10}$ in the SM, and $10^{-7}-10^{-5}$ in the MSSM. It
has been shown that the branching fractions can be enhanced significantly in
theories beyond the SM and MSSM, reaching above $10^{-5}$ and even as high as
$1\%$ AguilarSaavedra:2004wm .
One may again take the effective operator approach to parameterize the
interactions. After the electroweak symmetry breaking, one can write them as
Peccei:1989kr ; Han:1998tp ; Han:1996ep
${\cal L}_{NC}={g\over 2\cos\theta_{w}}\sum_{\tau=\pm,q=c,u}\kappa_{\tau}\bar{t}\gamma^{\mu}P_{\tau}qZ_{\mu}+g_{s}\sum_{q=c,u}{\kappa^{g}_{q}\over\Lambda}\bar{t}\sigma^{\mu\nu}T^{a}qG_{\mu\nu}^{a}+eQ_{t}\sum_{q=c,u}{\kappa^{\gamma}_{q}\over\Lambda}\bar{t}\sigma^{\mu\nu}qA_{\mu\nu}+h.c.$ (13)
The sensitivities for the anomalous couplings have been studied at the LHC by
the ATLAS Collaboration Carvalho:2007yi , as listed in Table 5.

Table 5: $95\%$ C.L. sensitivity of the branching fractions for the top-quark decays via FCNC couplings at the LHC Carvalho:2007yi .

Channel | 10 $\rm fb^{-1}$ | 100 $\rm fb^{-1}$
---|---|---
$t\to Zq$ | $3.1\times 10^{-4}$ | $6.1\times 10^{-5}$
$t\to\gamma q$ | $4.1\times 10^{-5}$ | $1.2\times 10^{-5}$
$t\to gq$ | $1.3\times 10^{-3}$ | $4.2\times 10^{-4}$
## IV Top Quarks in Resonant Production
The most striking signal of new physics in the top-quark sector is the
resonant production via a heavy intermediate state $X$. With some proper
treatment to identify the top decay products, it is possible to reconstruct
the resonant kinematics. One may thus envision fully exploring its properties
in the c.m. frame.
### IV.1 $X\to t\bar{t},\ t\bar{b}$
Immediate examples of the resonant states include Higgs bosons He:1998ie , new
gauge bosons Agashe:2007ki , Kaluza-Klein excitations of gluons Lillie:2007ve
and gravitons Fitzpatrick:2007qr , Technicolor-like dynamical states
Hill:2002ap ; Quadt:2006jk ; Choudhury:2007ux etc.
The signal can be generically written as
$\sigma(pp\to X\to t\bar{t})=\sum_{ij}\int dx_{1}dx_{2}\,f_{i}(M_{X}^{2},x_{1})f_{j}(M_{X}^{2},x_{2})\times{4\pi^{2}(2J+1)\over s}{\Gamma(X\to ij)B(X\to t\bar{t})\over M_{X}}.$ (15)
Thus the observation of this class of signals depends on the branching
fraction of $X\to t\bar{t}$ as well as its coupling to the initial state
partons. Figure 4 quantifies the observability for a bosonic resonance (spin
0,1,2) for a mass up to 2 TeV at the LHC Barger:2006hm via $q\bar{q},gg\to
X\to t\bar{t}$. The vertical axis gives the normalization factors ($\omega$)
for the cross section rates needed to reach a $5\sigma$ signal with a
luminosity of 10 fb-1. The normalization $\omega=1$ defines the benchmark for
the spin 0, 1 and 2 resonances. They correspond to the SM-like Higgs boson, a
$Z^{\prime}$ with electroweak coupling strength and left (L) or right (R)
chiral couplings to SM fermions, and the Randall-Sundrum graviton $\tilde{h}$
with the couplings scaled with a cutoff scale as $\Lambda^{-1}$ for
$\tilde{h}q\bar{q}$, and $(\Lambda\ln(M^{*}_{pl}/\Lambda))^{-1}$ for
$\tilde{h}gg$, respectively. We see that a $Z^{\prime}$ or a graviton should
be easy to observe, but a Higgs-like broad scalar will be difficult to
identify in the $t\bar{t}$ channel.
Figure 4: Normalization factor versus the resonance mass for the scalar
(dashed) with a width-mass ratio of $20\%$, vector (dot-dashed) with 5%, and
graviton (solid) 2%, respectively. The region above each curve represents
values of $\omega$ that give 5$\sigma$ or greater statistical significance
with 10 fb-1 integrated luminosity.
It is of critical importance to reconstruct the c.m. frame of the resonant
particle, where the fundamental properties of the particle can be best
studied. It was demonstrated Barger:2006hm that with the semi-leptonic decays
of the two top quarks, one can effectively reconstruct the events in the c.m.
frame. This relies on using the $M_{W}$ constraint to determine the missing
neutrino momentum, while it is necessary to also make use of $m_{t}$ to break
the two-fold ambiguity for two possible $p_{z}(\nu)$ solutions. Parity and CP
asymmetries Atwood:2000tu can be studied.
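A sketch of the $M_{W}$-constrained neutrino reconstruction mentioned above follows. The quadratic and its two-fold ambiguity are generic kinematics rather than anything specific to Ref. Barger:2006hm , and the function and variable names are mine:

```python
import math

def neutrino_pz(lep, met, m_w=80.40):
    """Solve (p_lep + p_nu)^2 = m_W^2 for the neutrino longitudinal momentum.
    lep = (E, px, py, pz) of the charged lepton (massless approximation);
    met = (px, py) of the missing transverse momentum.  Returns the two
    solutions; if the discriminant is negative, the real part is returned
    twice."""
    E, px, py, pz = lep
    mex, mey = met
    a = m_w ** 2 / 2.0 + px * mex + py * mey
    k = E ** 2 - pz ** 2
    disc = a ** 2 - k * (mex ** 2 + mey ** 2)
    root = E * math.sqrt(max(disc, 0.0))
    return (a * pz + root) / k, (a * pz - root) / k

# The two-fold ambiguity is then broken by keeping whichever solution,
# combined with the b jet, gives a mass closer to m_t, as described above.
```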
Top-quark pair events at the high invariant mass are obviously important to
search for and study new physics. In this new territory there comes a new
complication: When the top quark is very energetic, $\gamma=E/m_{t}\sim 10$,
its decay products may be too collimated to be individually resolved by the
detector $-$ recall that the granularity of the hadronic calorimeter at the
LHC is roughly $\Delta\eta\times\Delta\phi\sim 0.1\times 0.1$. This is a
generic problem relevant to any fast-moving top quarks from heavy particle
decays Lillie:2007ve ; Barger:2006hm ; Skiba:2007fw (see the next sections).
The interesting questions to be addressed may include:
* •
To what extent can we tell a “fat top-jet” from a massive QCD jet due to
showering?
* •
To what extent can we tell a “fat $W$-jet” from a massive QCD jet?
* •
Can we make use of a non-isolated lepton inside the top-jet ($b\ell\nu$) for
the top-quark identification and reconstruction?
* •
Can we do $b$-tagging for the highly boosted top events?
These practical issues would become critical to understand the events and thus
for new physics searches. Detailed studies including the detector effects will
be needed to reach quantitative conclusions.
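A rough feel for the collimation problem comes from the usual opening-angle estimate $\Delta R\sim 2m_{t}/p_{T}$ (a rule of thumb assumed here, not derived in the text):

```python
m_t = 172.7                                  # GeV
for pT in (350, 700, 1400, 2000):            # GeV
    print(f"pT = {pT:4d} GeV -> Delta R ~ {2 * m_t / pT:.2f}")
# Above pT ~ 1 TeV the whole top quark fits inside a few 0.1 x 0.1
# calorimeter cells, which is why the "fat jet" questions above are critical.
```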
### IV.2 $T\to tZ,\ tH,\ bW$
In many theories beyond the SM, there is a top-quark partner. These are
commonly motivated by the “naturalness” argument, the need to cancel the
quadratic divergence in the Higgs mass radiative correction, most severely
from the top-quark loop. Besides the scalar top quark in SUSY, the most
notable example is the Little Higgs theory Schmaltz:2005ky . If there is no
discrete symmetry, the top partner $T$ will decay to SM particles in the final
state, leading to a fully reconstructable fermionic resonance.
Figure 5: Production of the top-quark partner $T$ in pair and singly at the
LHC versus its mass. The Yukawa coupling ratio $\lambda_{1}/\lambda_{2}$ has
been taken to be 2 (upper dotted curve), 1 (solid), and 1/2 (lower dotted),
respectively. The $T\bar{T}$ pair production via QCD includes an NLO
$K$-factor (dashed curve).
It was pointed out Han:2003wu that the single $T$ production via the weak
charged-current may surpass the pair production via the QCD interaction due to
the longitudinal gauge boson enhancement for the former and the phase space
suppression for the latter. This is shown in Fig. 5. Subsequent simulations
Azuelos:2004dm performed by the ATLAS collaboration demonstrated the clear
observability for the signals above the backgrounds at the LHC for $T\to tZ,\
bW$ with a mass $M_{T}=1$ TeV, as seen in Fig. 6.
Figure 6: Observability for the decays (a) $T\to tZ$ and (b) $T\to bW$ at the
ATLAS Azuelos:2004dm .
## V Top-rich Events for New Physics
Although the top-quark partner is strongly motivated for a natural electroweak
theory, it often results in excessively large corrections to the low energy
electroweak observables. In order to better fit the low energy measurements, a
discrete symmetry is often introduced, such as the R-parity in SUSY, KK-parity
in UED, and T-parity in LH Cheng:2003ju . The immediate consequence for
collider phenomenology is the appearance of a new stable particle that may
provide the cold dark matter candidate, and results in missing energy in
collider experiments. (Alternatively, the breaking of the R-parity
Barbier:2004ez or the T-parity Hill:2007nz would lead to different collider
phenomenology Barger:2007df .)
### V.1 $T\bar{T}$ pair production
The top partner has similar quantum numbers to the top quark, and thus is
commonly assigned as a color triplet. This leads to their production in QCD
$q\bar{q},\ gg\to T\bar{T}.$ (16)
The production cross section is shown in Fig. 7 for both spin-0 and spin-1/2
top partners. Although there is a difference of a factor of 8 or so (4 from
spin state counting and the rest from threshold effects) in the cross
sections, it is still challenging to tell a scalar and a fermionic partner
apart us ; Cheng:2005as ; Meade:2006dw due to the lack of definitive
features.
Due to the additional discrete symmetry, the top partner cannot decay to a SM
particle alone. Consequently, $T\to tA^{0}$, leading to $t\bar{t}$ pair
production plus large missing energy. The crucial parameter to characterize the
kinematical features is the mass difference $\Delta M_{TA}=m_{T}-m_{A}$. For
$\Delta M_{TA}\gg m_{t}$, the top quark as a decay product will be energetic
and qualitatively different from the SM background. But if $\Delta
M_{TA}\approx m_{t}$, then the two will have very little difference, making
the signal difficult to separate out. Depending on the top-quark decay, we
present two classes of signals.
Figure 7: Leading order total cross section for the top partner $T\bar{T}$
production at the LHC versus its mass us . Both spin-0 and spin-1/2 top
partners are included. The QCD $t\bar{t}$ and the SM $t\bar{t}Z$ backgrounds
are indicated by the horizontal lines.
#### V.1.1 $t\bar{t}$ pure hadronic decay
For both $t\bar{t}$ to decay hadronically Meade:2006dw ; Matsumoto:2006ws ,
the signal will be 6 jets plus missing energy. While it has the largest decay
rate, the backgrounds would be substantial as well. With judicious acceptance
cuts, the signal observability for $\Delta M_{TA}>200$ GeV was established, as
seen in Fig. 8. Possible measurements of the absolute mass scale and its spin
of the top partner were considered us ; Meade:2006dw , but the determination
remains difficult.
Figure 8: Contour in $m_{\tilde{t}}-m_{N}$ for $\tilde{t}\to tN$ for the
statistical significance of a scalar $\tilde{t}$ at the LHC with an integrated
luminosity of 100 fb-1. Purely hadronic decays are considered.
#### V.1.2 $t\bar{t}$ semi-leptonic decay
If one of the $t\bar{t}$ decays hadronically and the other decays
leptonically, the signal may be cleaner. It turns out that if the mass
difference $\Delta M_{TA}$ is sizable, then requiring large missing transverse
energy may be sufficient to suppress the background. However, if $\Delta
M_{TA}\sim m_{t}$, then the $E\\!\\!\\!\\!/_{T}$ for the signal is not much
different from the background. On the other hand, the fact that the $t\bar{t}$
kinematics can be fully reconstructed in the SM implies that the
reconstruction for the signal events would be distinctive due to the large
missing mass. Indeed, the reconstructed $m^{r}_{t}$ based on the
$E\\!\\!\\!\\!/_{T}$ will be far away from the true $m_{t}$, and mostly result
in an unphysical value. If we impose
$|m_{t}-m_{t}^{r}|>110\ {\rm GeV},$ (17)
we can reach optimal signal identification. The summary plot for the
statistical significance (the number of $\sigma$) is given in Fig. 9 at the
LHC with an integrated luminosity of 100 fb-1, where the left panel is for a
fermionic $T$, and the right is a scalar $\tilde{t}$, both decaying to $t+$ a
missing particle.
Figure 9: Contour in $m_{T}-m_{A}$ for $T\to tA$ for the statistical
significance at the LHC with an integrated luminosity of 100 fb-1. Left panel
is for a fermionic $T$, and the right is a scalar $\tilde{t}$, both decaying
to a top plus a missing particle.
### V.2 Exotic top signatures
Searching for exotic events related to the top quark can be rewarding. First,
there exists a variety of natural electroweak models with distinctive top
partners that should not be overlooked for collider phenomenology. Second,
potentially large couplings of the top quark to new physics may result in
multiple top quarks from new particle decays. Finally, the exotic events have
less SM background contamination, and thus may stand out for discovery even at
the early phase of the LHC. We briefly list a few recent examples.
* •
Multiple top quarks and $b$-quarks in the final state may help to search for
new heavy particles in the electroweak sector and can be distinctive from the
SM backgrounds Han:2004zh .
* •
Heavy top partners and other KK fermions in the RS model may lead to unique
top-quark and $W$-boson signatures Contino:2008hi .
* •
New exotic colored states may predominantly couple to heavy quarks and thus
lead to multiple top quarks in the final state Gerbush:2007fe .
* •
Composite models for the right-handed top-quark may lead to $t\bar{t}t\bar{t}$
signals at the LHC Lillie:2007hd .
* •
Like-sign top quark pairs may indicate new dynamics Cao:2004wd .
## VI Summary and Outlook
The LHC will be a true top-quark factory. With 80 million top-quark pairs plus
34 million single tops produced annually at the designed high luminosity, the
properties of this particle will be studied to a great accuracy and the deep
questions related to the top quark at the Terascale will be explored to an
unprecedented level. Theoretical arguments indicate that it is highly likely
that new physics associated with the top quark beyond the SM will show up at
the LHC. This article only touches upon the surface of the rich top quark
physics, and is focused on possible new physics beyond the SM in the top-quark
sector. The layout of this article has been largely motivated by experimental
signatures for the LHC. Interesting signatures covered here include
* •
Rare decays of the top quark to new light states, or to SM particles via the
charged and neutral currents through virtual effects of new physics.
* •
Top quark pair production via the decay of a new heavy resonance, resulting in
fully reconstructable kinematics for detailed studies.
* •
Top quark pair production via the decay of pair-produced top partners,
usually associated with two other missing particles, making the signal
identification and the property studies challenging.
* •
Multiple top quarks, $b$ quarks, and $W^{\pm}$’s coming from theories of
electroweak symmetry breaking or an extended top-quark sector.
The physics associated with top quarks is rich, far-reaching, and exciting. It
opens up golden opportunities for new physics searches, while bringing in new
challenges as well. It should be of high priority in the LHC program for both
theorists and experimentalists.
## Acknowledgments
I thank Gordy Kane and Aaron Pierce for inviting me to write on this subject,
which I consider a very important and exciting part of the LHC physics
program. I would also like to thank Vernon Barger, Tim Tait and Lian-Tao Wang
for reading and commenting on the draft. This work was supported in part by
the US DOE under contract No. DE-FG02-95ER40896 and in part by the Wisconsin
Alumni Research Foundation. The work at the KITP was supported by the National
Science Foundation under Grant No. PHY05-51164.
## References
* (1) For a review on new strong dynamics related to the top quark, see e.g., C. T. Hill and E. H. Simmons, Strong dynamics and electroweak symmetry breaking, _Phys. Rept._ 381, 235 (2003) [Erratum-ibid. 390, 553 (2004)] [arXiv:hep-ph/0203079]; and references therein.
* (2) For a general discussion on the “naturalness”, see e.g., G. F. Giudice, Naturally Speaking: The Naturalness Criterion and Physics at the LHC, arXiv:0801.2562 [hep-ph].
* (3) For recent reviews on top-quark physics, see, e.g., D. Chakraborty, J. Konigsberg and D. L. Rainwater, Review of top quark physics, Ann. Rev. Nucl. Part. Sci. 53, 301 (2003) [arXiv:hep-ph/0303092]; A. Quadt, Top quark physics at hadron colliders, _Eur. Phys. J. C_ 48 (2006) 835, and references therein.
* (4) Particle Data Group, W.-M. Yao et al., _J. Phys. G_ 33, 1 (2006).
* (5) E. Brubaker et al. [Tevatron Electroweak Working Group], Combination of CDF and D0 Results on the Mass of the Top Quark, arXiv:hep-ex/0608032.
* (6) A. I. Etienvre, Top mass measurement at LHC, PoS TOP2006 (2006) 023.
* (7) G. L. Kane, G. A. Ladinsky and C. P. Yuan, Using the top quark for testing standard model polarization and CP predictions, _Phys. Rev. D_ 45, 124 (1992).
* (8) U. Baur, M. Buice and L. H. Orr, Direct measurement of the top quark charge at hadron colliders, _Phys. Rev. D_ 64, 094019 (2001) [arXiv:hep-ph/0106341]; U. Baur, A. Juste, L. H. Orr and D. Rainwater, Probing electroweak top quark couplings at hadron colliders, _Phys. Rev. D_ 71, 054013 (2005) [arXiv:hep-ph/0412021].
* (9) M. Jezabek and J. H. Kuhn, QCD Corrections to Semileptonic Decays of Heavy Quarks, _Nucl. Phys. B_ 314, 1 (1989).
* (10) I. I. Y. Bigi, Y. L. Dokshitzer, V. A. Khoze, J. H. Kuhn and P. M. Zerwas, Production and Decay Properties of Ultraheavy Quarks, _Phys. Lett. B_ 181, 157 (1986).
* (11) S. Willenbrock, The standard model and the top quark, arXiv:hep-ph/0211067.
* (12) P. Nason, S. Dawson and R. K. Ellis, The Total Cross-Section for the Production of Heavy Quarks in Hadronic Collisions, _Nucl. Phys. B_ 303, 607 (1988); W. Beenakker, H. Kuijf, W. L. van Neerven and J. Smith, QCD Corrections to Heavy Quark Production in p anti-p Collisions, _Phys. Rev. D_ 40, 54 (1989); N. Kidonakis and R. Vogt, Next-to-next-to-leading order soft-gluon corrections in top quark hadroproduction, _Phys. Rev. D_ 68, 114014 (2003) [arXiv:hep-ph/0308222].
* (13) E. Laenen, J. Smith and W. L. van Neerven, Top Quark Production Cross-Section, _Phys. Lett. B_ 321, 254 (1994) [arXiv:hep-ph/9310233]; E. L. Berger and H. Contopanagos, The Perturbative Resummed Series for Top Quark Production in Hadron Reactions, _Phys. Rev. D_ 54, 3085 (1996) [arXiv:hep-ph/9603326];
* (14) R. Bonciani, S. Catani, M. L. Mangano and P. Nason, NLL resummation of the heavy-quark hadroproduction cross-section, _Nucl. Phys. B_ 529, 424 (1998) [arXiv:hep-ph/9801375]; and references therein.
* (15) J. H. Kuhn and G. Rodrigo, Charge asymmetry in hadroproduction of heavy quarks, _Phys. Rev. Lett._ 81, 49 (1998) [arXiv:hep-ph/9802268].
* (16) W. Bernreuther, M. Fuecker and Z. G. Si, Mixed QCD and weak corrections to top quark pair production at hadron colliders, _Phys. Lett. B_ 633, 54 (2006) [arXiv:hep-ph/0508091]; W. Bernreuther, M. Fuecker and Z. G. Si, Weak interaction corrections to hadronic top quark pair production, _Phys. Rev. D_ 74, 113005 (2006) [arXiv:hep-ph/0610334].
* (17) S. S. Willenbrock and D. A. Dicus, Production Of Heavy Quarks From $W$-Gluon Fusion, _Phys. Rev. D_ 34, 155 (1986); C. P. Yuan, A New Method to Detect a Heavy Top Quark at the Tevatron, _Phys. Rev. D_ 41, 42 (1990); T. Stelzer, Z. Sullivan and S. Willenbrock, Single top quark production at hadron colliders, _Phys. Rev. D_ 58, 094021 (1998) [arXiv:hep-ph/9807340]; Z. Sullivan, Understanding single-top-quark production and jets at hadron colliders, _Phys. Rev. D_ 70, 114012 (2004) [arXiv:hep-ph/0408049].
* (18) V. M. Abazov et al. [D0 Collaboration], Evidence for production of single top quarks and first direct measurement of $|V(tb)|$, _Phys. Rev. Lett._ 98, 181802 (2007) [arXiv:hep-ex/0612052].
* (19) Q. H. Cao, J. Wudka and C. P. Yuan, Search for New Physics via Single Top Production at the LHC, _Phys. Lett. B_ 658, 50 (2007) [arXiv:0704.2809 [hep-ph]].
* (20) M. C. Smith and S. Willenbrock, QCD and Yukawa Corrections to Single-Top-Quark Production via $q\bar{q}\to t\bar{b}$, _Phys. Rev. D_ 54, 6696 (1996) [arXiv:hep-ph/9604223].
* (21) T. Stelzer, Z. Sullivan and S. Willenbrock, Single-top-quark production via $W$-gluon fusion at next-to-leading order, _Phys. Rev. D_ 56, 5919 (1997) [arXiv:hep-ph/9705398].
* (22) S. Zhu, Next-To-Leading Order QCD Corrections to $bg\to tW^{-}$ at the CERN Large Hadron Collider, _Phys. Lett. B_ 524, 283 (2002) [Erratum-ibid. B 537, 351 (2002)].
* (23) Q. H. Cao, R. Schwienhorst and C. P. Yuan, Next-to-leading order corrections to single top quark production and decay at Tevatron. I: s-channel process, _Phys. Rev. D_ 71, 054023 (2005) [arXiv:hep-ph/0409040]; N. Kidonakis, Single top production at the Tevatron: Threshold resummation and finite-order soft gluon corrections, _Phys. Rev. D_ 74, 114012 (2006) [arXiv:hep-ph/0609287].
* (24) Q. H. Cao and C. P. Yuan, Single top quark production and decay at next-to-leading order in hadron collision, _Phys. Rev. D_ 71, 054022 (2005) [arXiv:hep-ph/0408180]; N. Kidonakis, Higher-order soft gluon corrections in single top quark production at the LHC, _Phys. Rev. D_ 75, 071501 (2007) [arXiv:hep-ph/0701080].
* (25) W. J. Marciano and F. E. Paige, Phys. Rev. Lett. 66, 2433 (1991); J. F. Gunion, Phys. Lett. B 261, 510 (1991).
* (26) W. Beenakker, S. Dittmaier, M. Kramer, B. Plumper, M. Spira and P. M. Zerwas, Higgs radiation off top quarks at the Tevatron and the LHC, _Phys. Rev. Lett._ 87, 201805 (2001) [arXiv:hep-ph/0107081]; W. Beenakker, S. Dittmaier, M. Kramer, B. Plumper, M. Spira and P. M. Zerwas, NLO QCD corrections to t anti-t H production in hadron collisions, _Nucl. Phys. B_ 653, 151 (2003) [arXiv:hep-ph/0211352].
* (27) S. Dawson, L. H. Orr, L. Reina and D. Wackeroth, Associated top quark Higgs boson production at the LHC, _Phys. Rev. D_ 67, 071503 (2003) [arXiv:hep-ph/0211438]; S. Dawson, C. Jackson, L. H. Orr, L. Reina and D. Wackeroth, Associated Higgs production with top quarks at the Large Hadron Collider: NLO QCD corrections, _Phys. Rev. D_ 68, 034022 (2003) [arXiv:hep-ph/0305087].
* (28) K. Desch and M. Schumacher, Model independent determination of the top Yukawa coupling from LHC and LC, _Eur. Phys. J. C_ 46, 527 (2006) [arXiv:hep-ph/0407159].
* (29) ALTAS TDR: ATLAS detector and physics performance. Technical design report. Vol. 2, CERN-LHCC-99-15; CMS TDR: CMS Physics: Technical Design Report V.2: Physics Performance, CERN-LHCC-2006-021.
* (30) H. Y. Zhou and Y. P. Kuang, Difficulties of detecting the intermediate mass Higgs boson in the associate production channel p p $\to t\bar{t}HX$, Phys. Rev. D 47, 3680 (1993).
* (31) D. Benedetti et al., Observability Of Higgs Produced With Top Quarks And Decaying To Bottom Quarks, _J. Phys. G_ 34 (2007) N221.
* (32) V. M. Abazov et al. [D0 Collaboration], Direct search for charged Higgs bosons in decays of top quarks, _Phys. Rev. Lett._ 88, 151803 (2002) [arXiv:hep-ex/0102039].
* (33) M. Hashemi, Search for the light charged Higgs in CMS, In the Proceedings of IPM School and Conference on Lepton and Hadron Physics (IPM-LHP06), Tehran, Iran, 15-20 May 2006, pp 0018 [arXiv:hep-ph/0612104].
* (34) T. Tait and C. P. Yuan, Single top quark production as a window to physics beyond the Standard Model, _Phys. Rev. D_ 63, 014018 (2001) [arXiv:hep-ph/0007298]; C. R. Chen, F. Larios and C. P. Yuan, General analysis of single top production and W helicity in top decay, _Phys. Lett. B_ 631, 126 (2005), [arXiv:hep-ph/0503040].
* (35) G. Eilam, J. L. Hewett and A. Soni, Rare decays of the top quark in the standard and two Higgs doublet models, _Phys. Rev. D_ 44, 1473 (1991) [Erratum-ibid. D 59, 039901 (1999)]; B. Mele, S. Petrarca and A. Soddu, A new evaluation of the t $\rightarrow$ c H decay width in the standard model, _Phys. Lett. B_ 435, 401 (1998) [arXiv:hep-ph/9805498].
* (36) J. J. Cao, G. Eilam, M. Frank, K. Hikasa, G. L. Liu, I. Turan and J. M. Yang, SUSY-induced FCNC top-quark processes at the Large Hadron Collider, _Phys. Rev. D_ 75, 075021 (2007) [arXiv:hep-ph/0702264].
* (37) J. L. Diaz-Cruz, H. J. He and C. P. Yuan, Soft SUSY breaking, stop-scharm mixing and Higgs signatures, _Phys. Lett. B_ 530, 179 (2002) [arXiv:hep-ph/0103178]; J. A. Aguilar-Saavedra, Top flavour-changing neutral interactions: Theoretical expectations and experimental detection, _Acta Phys. Polon. B_ 35, 2695 (2004) [arXiv:hep-ph/0409342]; G. Eilam, A. Gemintern, T. Han, J. M. Yang and X. Zhang, Top quark rare decay t $\rightarrow$ c h in R-parity-violating SUSY, _Phys. Lett. B_ 510, 227 (2001) [arXiv:hep-ph/0102037]; F. Larios, R. Martinez and M. A. Perez, New physics effects in the flavor-changing neutral couplings of the top quark, _Int. J. Mod. Phys. A_ 21, 3473 (2006) [arXiv:hep-ph/0605003]; K. Agashe, G. Perez and A. Soni, Collider signals of top quark flavor violation from a warped extra dimension, _Phys. Rev. D_ 75, 015002 (2007) [arXiv:hep-ph/0606293]; For a review on FCNC processes of top decay, see e.g., J. M. Yang, arXiv:0801.0210 [hep-ph].
* (38) R. D. Peccei and X. Zhang, Dynamical Symmetry Breaking and Universality Breakdown, _Nucl. Phys. B_ 337, 269 (1990); T. Han, R. D. Peccei and X. Zhang, Top Quark Decay Via Flavor Changing Neutral Currents At Hadron Colliders, _Nucl. Phys. B_ 454, 527 (1995) [arXiv:hep-ph/9506461].
* (39) T. Han, M. Hosch, K. Whisnant, B. L. Young and X. Zhang, Single top quark production via FCNC couplings at hadron colliders, _Phys. Rev. D_ 58, 073008 (1998) [arXiv:hep-ph/9806486].
* (40) T. Han, K. Whisnant, B. L. Young and X. Zhang, Top-Quark Decay Via the Anomalous Coupling $\bar{t}c\gamma$ at Hadron Colliders, _Phys. Rev. D_ 55, 7241 (1997) [arXiv:hep-ph/9603247].
* (41) J. Carvalho et al. [ATLAS Collaboration], Study of ATLAS sensitivity to FCNC top decays, _Eur. Phys. J. C_ 52, 999 (2007) [arXiv:0712.1127 [hep-ex]].
* (42) H. J. He and C. P. Yuan, New method for detecting charged (pseudo-)scalars at colliders, _Phys. Rev. Lett._ 83, 28 (1999) [arXiv:hep-ph/9810367]; C. Balazs, H. J. He and C. P. Yuan, QCD corrections to scalar production via heavy quark fusion at hadron colliders, _Phys. Rev. D_ 60, 114001 (1999) [arXiv:hep-ph/9812263].
* (43) K. Agashe et al., LHC Signals for Warped Electroweak Neutral Gauge Bosons, _Phys. Rev. D_ 76, 115015 (2007) [arXiv:0709.0007 [hep-ph]].
Implementing quantum Fourier transform using three qubits
Mouhcine Yachia${}^{a}$, Radouan Hab-arriha${}^{a}$ and Ahmed Jellal${}^{a,b}$***[email protected]
aLaboratory of Theoretical Physics, Faculty of Sciences, Chouaïb Doukkali
University,
PO Box 20, 24000 El Jadida, Morocco
bCanadian Quantum Research Center, 204-3002 32 Ave Vernon,
BC V1T 2L7, Canada
Using the circulant symmetry of a Hamiltonian describing three qubits, we
realize the quantum Fourier transform. This symmetry allows us to construct a
set of eigenvectors that is independent of the magnitude of the physical
parameters involved in the Hamiltonian, and as a result the entanglement is
maintained. The realization is based on trapped ions, and the gate
implementation requires an adiabatic transition from each spin product state
to the Fourier modes. The fidelity is calculated numerically and reaches high
values. Finally, we discuss the acceleration of the gate by means of a
counter-driving field.
PACS numbers: 03.65.Fd, 03.65.Ge, 03.65.Ud, 03.67.Hk
Keywords: Three qubits, circulant symmetry, entanglement, quantum Fourier
transform, adiabatic transition, counter-driving field.
## 1 Introduction
Paul Benioff suggested a quantum mechanical model of the Turing machine [1] in
1980, launching a new field of study known as quantum computers (QCs). Richard
Feynman demonstrated in 1982 that QCs may be used to simulate complicated
systems (living cells, city traffic, the human brain, the universe, $\cdots$) [2].
Seth Lloyd demonstrated a decade later that the basic units of a QC can be an
array of nuclear magnetic resonance spins [3]. A QC is a device that uses
quantum mechanics to process data while maintaining quantum coherence [4]. As
a result, a QC can tackle computational problems that even today's most
powerful supercomputers cannot solve [5]. As an example, Shor's quantum
algorithm (SQA) for factoring large numbers [6] is the most seminal motivation
behind the development of QCs [5]. It is well known in quantum computing that
the physical implementation of the SQA requires a gate of particular
importance called the quantum Fourier transform (QFT) [7]. The QFT is the
quantum implementation of the discrete Fourier transform over the amplitudes
of a wavefunction, acting on a vector $|x\rangle\in\mathbb{C}^{N}$ as
$|x\rangle\xrightarrow{\text{QFT}}|y\rangle=\frac{1}{\sqrt{N}}\sum_{j=0}^{N-1}e^{2\pi
ixj/N}|j\rangle$ [5]. The QFT is also widely used as a key subroutine in
several other quantum algorithms, for instance quantum amplitude estimation
[8] and quantum counting [9].
Circulant matrices have been thoroughly investigated in [10, 11], and the set
of invertible circulant matrices forms a group under matrix multiplication.
Such matrices appear in geometry, linear coding theory, and graph theory; see
[12, 13, 14, 15, 16, 17]. Also, for more than 47 years, vibration analysis has
focused on systems with circulant symmetry. Many of these contributions were
motivated by vibration studies of general rotationally periodic systems [15]:
bladed disks, planetary gear systems, rings, circular plates, disk spindle
systems, centrifugal pendulum vibration absorbers, space antennae, and
microelectromechanical system frequency filters.
Circulant Hamiltonians have been addressed in [18] in terms of the physical
implementation of a logical QFT, which is related to the fact that the
eigenspectrum of a circulant matrix is spanned by the Fourier modes [19, 10].
Ivanov and Vitanov [20] recently proposed a Hamiltonian based on two spins
immersed in a magnetic field, creating Rabi oscillations, and obtained
circulant symmetry by modifying the coupling strength of the spin-spin
interaction. They demonstrated that the eigenvectors are independent of the
magnitude of the physical parameters, implying entanglement protection, and
that the resulting system can subsequently be employed as a logical QFT. Wu
and his colleagues [21] presented two approaches for implementing quantum
phase gates based on adiabatic passage and phase control of the driving
fields; they studied the experimental feasibility, gate fidelity and
decoherence effects for both schemes. Moreover, shortcuts to adiabaticity have
become instrumental in preparing and driving internal and motional states in
atomic, molecular and solid-state physics; for a complete review we refer to
[22].
Motivated by the results developed in [20], we study a Hamiltonian describing
three spins in a magnetic field, coupled via linear and non-linear
interactions. The latter coupling generally arises when the interaction medium
is non-linear [23, 24, 25] and is an essential ingredient in generating a
circulant Hamiltonian. To build a logical QFT, we propose two schemes based on
different choices of the physical parameters. The eigenvectors of our
circulant Hamiltonian do not depend on the parameters, which protects
entanglement during the gate implementation as long as the circulant symmetry
is maintained. We use an energy-offset Hamiltonian
${H}_{0}(t)=\sum\limits_{j=1}^{3}\Delta_{j}(t){\sigma}_{j}^{z}$ to break the
circulant symmetry during the transition, and we adjust the physical
parameters and the detunings $\Delta_{j}(t)$ so as to restore the circulant
symmetry at the end of the transition $t_{f}$. We modulate the physical
parameters sinusoidally in time to show that it is possible to adiabatically
obtain a superposition of the quantum Fourier modes from any initial state
with high fidelity. As the adiabatic evolution is robust but still limited by
non-adiabatic transitions, we introduce a counter-driving Hamiltonian
${H}_{\sf CD}(t)$ [26] to suppress these transitions. Under suitable
conditions on the physical parameters, we determine the eigenvectors
associated with the total Hamiltonian. These allow us to combine our gate with
the shortcut-to-adiabaticity scheme in order to accelerate the gate. Based on
the proposal described in [20], we suggest a way to physically implement the
gate.
The outline of our paper is as follows. In Sec. 2, we propose a Hamiltonian
describing three qubits with different interactions, in addition to Rabi
oscillations, and show how to obtain the circulant symmetry. The adiabatic
transition technique is used to end up with the Fourier modes in Sec. 3. The
physical implementation of the obtained QFT gate is discussed in Sec. 4. The
non-degeneracy of the frequencies, the fidelities of our gate and the creation
of an entangled state are numerically analyzed by sinusoidally varying the
physical parameters in Sec. 5. We discuss the shortcut-to-adiabaticity scheme
combined with our gate in Sec. 6. Finally, we conclude our work.
## 2 Theoretical model
We consider three spins immersed in a magnetic field, forming a three-qubit
gate system as presented in Figure 1.
Figure 1: (color online) The schematic presents the interactions of strengths
$J_{j}$ between three coupled qubits immersed in a magnetic field, with Rabi
frequencies $\Omega_{j}$.
The system is described by a Hamiltonian involving two types of interaction:
$\displaystyle{H}=$ $\displaystyle
J_{1}(\sigma_{1}^{+}e^{-i\phi_{12}}+\sigma_{1}^{-}e^{i\phi_{12}})(\sigma_{2}^{+}e^{-i\phi_{21}}+\sigma_{2}^{-}e^{i\phi_{21}})+J_{2}(\sigma_{2}^{+}e^{-i\phi_{23}}+\sigma_{2}^{-}e^{i\phi_{23}})(\sigma_{3}^{+}e^{-i\phi_{32}}+\sigma_{3}^{-}e^{i\phi_{32}})$
$\displaystyle+J_{3}(\sigma_{1}^{+}e^{-i\phi_{13}}+\sigma_{1}^{-}e^{i\phi_{13}})(\sigma_{3}^{+}e^{-i\phi_{31}}+\sigma_{3}^{-}e^{i\phi_{31}})+\Omega_{1}(\sigma_{1}^{+}e^{i\theta_{1}}+\sigma_{1}^{-}e^{-i\theta_{1}})+\Omega_{2}(\sigma_{2}^{+}e^{i\theta_{2}}+\sigma_{2}^{-}e^{-i\theta_{2}})$
$\displaystyle+\Omega_{3}(\sigma_{3}^{+}e^{i\theta_{3}}+\sigma_{3}^{-}e^{-i\theta_{3}})+J(\sigma_{1}^{+}e^{-i\phi_{1}}+\sigma_{1}^{-}e^{i\phi_{1}})(\sigma_{2}^{+}e^{-i\phi_{2}}+\sigma_{2}^{-}e^{i\phi_{2}})(\sigma_{3}^{+}e^{-i\phi_{3}}+\sigma_{3}^{-}e^{i\phi_{3}})$
(1)
where $\sigma_{j}^{+}=\ket{\uparrow_{j}}\bra{\downarrow_{j}}$ and
$\sigma_{j}^{-}=\ket{\downarrow_{j}}\bra{\uparrow_{j}}$ stand for spin flip
operators, $\ket{\uparrow_{j}}$ and $\ket{\downarrow_{j}}$ being the qubit
states of the $j^{th}$ spin, with $j=1,2,3$. The first term describes the
interaction between spins $1$ and $2$ of phases $\phi_{12}$ and $\phi_{21}$,
whereas the second term covers the interaction between spins $2$ and $3$ of
phases $\phi_{23}$ and $\phi_{32}$, and the third term describes the
interaction between spins $1$ and $3$ of phases $\phi_{13}$ and $\phi_{31}$.
The terms, including the Rabi frequencies $\Omega_{j}$, are the single-qubit
transitions with phases $\theta_{j}$. The last term describes the coupling
between the three spins with phases $\phi_{j}$ [25]. Note that our
Hamiltonian can be considered as a three-qubit generalization of the Ivanov
and Vitanov model for two qubits [20], thanks to the trilinear interaction
term.
It is convenient for our task to consider the matrix form of the Hamiltonian
(1). In the basis
$\mathcal{B}_{c}=\left\\{\mid\downarrow\downarrow\downarrow\rangle,\mid\downarrow\downarrow\uparrow\rangle,\mid\downarrow\uparrow\downarrow\rangle,\mid\downarrow\uparrow\uparrow\rangle,\mid\uparrow\downarrow\downarrow\rangle,\mid\uparrow\downarrow\uparrow\rangle,\mid\uparrow\uparrow\downarrow\rangle,\mid\uparrow\uparrow\uparrow\rangle\right\\}$,
we have
$\displaystyle{H}=\begin{pmatrix}0&\Omega_{3}e^{i\theta_{3}}&\Omega_{2}e^{i\theta_{2}}&J_{2}e^{-i\xi_{1}}&\Omega_{1}e^{i\theta_{1}}&J_{3}e^{-i\xi_{2}}&J_{1}e^{-i\xi_{3}}&Je^{-i\xi_{7}}\\\
\Omega_{3}e^{-i\theta_{3}}&0&J_{2}e^{-i\xi_{4}}&\Omega_{2}e^{i\theta_{2}}&J_{3}e^{-i\xi_{5}}&\Omega_{1}e^{i\theta_{1}}&Je^{-i\xi_{8}}&J_{1}e^{-i\xi_{3}}\\\
\Omega_{2}e^{-i\theta_{2}}&J_{2}e^{i\xi_{4}}&0&\Omega_{3}e^{i\theta_{3}}&J_{1}e^{-i\xi_{6}}&Je^{-i\xi_{9}}&\Omega_{1}e^{i\theta_{1}}&J_{3}e^{-i\xi_{2}}\\\
J_{2}e^{i\xi_{1}}&\Omega_{2}e^{-i\theta_{2}}&\Omega_{3}e^{-i\theta_{3}}&0&Je^{-i\xi_{10}}&J_{1}e^{-i\xi_{6}}&J_{3}e^{-i\xi_{5}}&\Omega_{1}e^{i\theta_{1}}\\\
\Omega_{1}e^{-i\theta_{1}}&J_{3}e^{i\xi_{5}}&J_{1}e^{i\xi_{6}}&Je^{i\xi_{10}}&0&\Omega_{3}e^{i\theta_{3}}&\Omega_{2}e^{i\theta_{2}}&J_{2}e^{-i\xi_{1}}\\\
J_{3}e^{i\xi_{2}}&\Omega_{1}e^{-i\theta_{1}}&Je^{i\xi_{9}}&J_{1}e^{i\xi_{6}}&\Omega_{3}e^{-i\theta_{3}}&0&J_{2}e^{-i\xi_{4}}&\Omega_{2}e^{i\theta_{2}}\\\
J_{1}e^{i\xi_{3}}&Je^{i\xi_{8}}&\Omega_{1}e^{-i\theta_{1}}&J_{3}e^{i\xi_{5}}&\Omega_{2}e^{-i\theta_{2}}&J_{2}e^{i\xi_{4}}&0&\Omega_{3}e^{i\theta_{3}}\\\
Je^{i\xi_{7}}&J_{1}e^{i\xi_{3}}&J_{3}e^{i\xi_{2}}&\Omega_{1}e^{-i\theta_{1}}&J_{2}e^{i\xi_{1}}&\Omega_{2}e^{-i\theta_{2}}&\Omega_{3}e^{-i\theta_{3}}&0\end{pmatrix}$
(2)
where the ten angles $\xi_{1}=\phi_{23}+\phi_{32}$,
$\xi_{2}=\phi_{13}+\phi_{31}$, $\xi_{3}=\phi_{12}+\phi_{21}$,
$\xi_{4}=\phi_{23}-\phi_{32},$ $\xi_{5}=\phi_{13}-\phi_{31}$,
$\xi_{6}=\phi_{12}-\phi_{21},$ $\xi_{7}=\phi_{1}+\phi_{2}+\phi_{3},$
$\xi_{8}=\phi_{1}+\phi_{2}-\phi_{3},$ $\xi_{9}=\phi_{1}-\phi_{2}+\phi_{3},$
$\xi_{10}=\phi_{1}-\phi_{2}-\phi_{3}$ have been introduced.
In what follows, we find conditions on the involved parameters under which the
Hamiltonian (2) becomes a circulant matrix [19, 10]. The benefit of a
circulant matrix is that its eigenvectors are the columns of the discrete
quantum Fourier transform matrix, and therefore they do not depend on the
entries of the circulant matrix. The eigenvectors of our circulant matrix can
be mapped into the spin basis $\mathcal{B}_{c}$ as
$\displaystyle\ket{\psi_{0}}=\dfrac{1}{2\sqrt{2}}(\ket{\downarrow\downarrow\downarrow}+\ket{\downarrow\downarrow\uparrow}+\ket{\downarrow\uparrow\downarrow}+\ket{\downarrow\uparrow\uparrow}+\ket{\uparrow\downarrow\downarrow}+\ket{\uparrow\downarrow\uparrow}+\ket{\uparrow\uparrow\downarrow}+\ket{\uparrow\uparrow\uparrow})$
(3)
$\displaystyle\ket{\psi_{1}}=\dfrac{1}{2\sqrt{2}}(\ket{\downarrow\downarrow\downarrow}+\omega\ket{\downarrow\downarrow\uparrow}+i\ket{\downarrow\uparrow\downarrow}+i\omega\ket{\downarrow\uparrow\uparrow}-\ket{\uparrow\downarrow\downarrow}-\omega\ket{\uparrow\downarrow\uparrow}-i\ket{\uparrow\uparrow\downarrow}-i\omega\ket{\uparrow\uparrow\uparrow})$
(4)
$\displaystyle\ket{\psi_{2}}=\dfrac{1}{2\sqrt{2}}(\ket{\downarrow\downarrow\downarrow}+i\ket{\downarrow\downarrow\uparrow}-\ket{\downarrow\uparrow\downarrow}-i\ket{\downarrow\uparrow\uparrow}+\ket{\uparrow\downarrow\downarrow}+i\ket{\uparrow\downarrow\uparrow}-\ket{\uparrow\uparrow\downarrow}-i\ket{\uparrow\uparrow\uparrow})$
(5)
$\displaystyle\ket{\psi_{3}}=\dfrac{1}{2\sqrt{2}}(\ket{\downarrow\downarrow\downarrow}+i\omega\ket{\downarrow\downarrow\uparrow}-i\ket{\downarrow\uparrow\downarrow}+\omega\ket{\downarrow\uparrow\uparrow}-\ket{\uparrow\downarrow\downarrow}-i\omega\ket{\uparrow\downarrow\uparrow}+i\ket{\uparrow\uparrow\downarrow}-\omega\ket{\uparrow\uparrow\uparrow})$
(6)
$\displaystyle\ket{\psi_{4}}=\dfrac{1}{2\sqrt{2}}(\ket{\downarrow\downarrow\downarrow}-\ket{\downarrow\downarrow\uparrow}+\ket{\downarrow\uparrow\downarrow}-\ket{\downarrow\uparrow\uparrow}+\ket{\uparrow\downarrow\downarrow}-\ket{\uparrow\downarrow\uparrow}+\ket{\uparrow\uparrow\downarrow}-\ket{\uparrow\uparrow\uparrow})$
(7)
$\displaystyle\ket{\psi_{5}}=\dfrac{1}{2\sqrt{2}}(\ket{\downarrow\downarrow\downarrow}-\omega\ket{\downarrow\downarrow\uparrow}+i\ket{\downarrow\uparrow\downarrow}-i\omega\ket{\downarrow\uparrow\uparrow}-\ket{\uparrow\downarrow\downarrow}+\omega\ket{\uparrow\downarrow\uparrow}-i\ket{\uparrow\uparrow\downarrow}+i\omega\ket{\uparrow\uparrow\uparrow})$
(8)
$\displaystyle\ket{\psi_{6}}=\dfrac{1}{2\sqrt{2}}(\ket{\downarrow\downarrow\downarrow}-i\ket{\downarrow\downarrow\uparrow}-\ket{\downarrow\uparrow\downarrow}+i\ket{\downarrow\uparrow\uparrow}+\ket{\uparrow\downarrow\downarrow}-i\ket{\uparrow\downarrow\uparrow}-\ket{\uparrow\uparrow\downarrow}+i\ket{\uparrow\uparrow\uparrow})$
(9)
$\displaystyle\ket{\psi_{7}}=\dfrac{1}{2\sqrt{2}}\left(\ket{\downarrow\downarrow\downarrow}-i\omega\ket{\downarrow\downarrow\uparrow}-i\ket{\downarrow\uparrow\downarrow}-\omega\ket{\downarrow\uparrow\uparrow}-\ket{\uparrow\downarrow\downarrow}+i\omega\ket{\uparrow\downarrow\uparrow}+i\ket{\uparrow\uparrow\downarrow}+\omega\ket{\uparrow\uparrow\uparrow}\right)$
(10)
and we have set the phase factor $\omega=\exp(i\frac{\pi}{4})$; these states are precisely the columns of the three-qubit discrete Fourier transform, as the sketch below illustrates.
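To make this property concrete, here is a small numpy check (an illustration
of ours, with arbitrary entries) that any $8\times 8$ circulant matrix is
diagonalized by the Fourier modes $\ket{\psi_{m}}$, whatever its entries:

```python
import numpy as np

n = 8
c = np.random.randn(n) + 1j * np.random.randn(n)       # arbitrary first row
C = np.array([np.roll(c, k) for k in range(n)])        # circulant: cyclic shifts

# Fourier modes as columns: psi[j, m] = exp(2*pi*i*j*m/n) / sqrt(n)
j, m = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
psi = np.exp(2j * np.pi * j * m / n) / np.sqrt(n)

# psi^dagger C psi is diagonal for every choice of c: the eigenvectors
# do not depend on the matrix elements, only the eigenvalues do.
D = psi.conj().T @ C @ psi
assert np.allclose(D, np.diag(np.diag(D)))
print(np.round(np.diag(D), 3))
```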
At this point, we show the possibilities that lead to circulant symmetry. Two
scenarios emerge once adequate requirements are imposed on the physical
parameters. Indeed, the first configuration is
$\displaystyle J_{1}=\Omega_{2}$ (11)
$\displaystyle J=J_{2}=J_{3}=\Omega_{3}$ (12)
$\displaystyle\Omega_{1}=0$ (13)
$\displaystyle\theta_{2}=\theta_{3}=\phi_{32}=\phi_{3}=\phi_{21}=-\phi_{31}=\varphi$ (14)
$\displaystyle\theta_{1}=\phi_{23}=\phi_{2}=\phi_{1}=\phi_{12}=\phi_{13}=0$ (15)
which can be substituted into (2) to get the first circulant Hamiltonian
$\displaystyle{H}_{\sf
cir}^{(1)}=\begin{pmatrix}0&Je^{i\varphi}&J_{1}e^{i\varphi}&Je^{-i\varphi}&0&Je^{i\varphi}&J_{1}e^{-i\varphi}&Je^{-i\varphi}\\\
Je^{-i\varphi}&0&Je^{i\varphi}&J_{1}e^{i\varphi}&Je^{-i\varphi}&0&Je^{i\varphi}&J_{1}e^{-i\varphi}\\\
J_{1}e^{-i\varphi}&Je^{-i\varphi}&0&Je^{i\varphi}&J_{1}e^{i\varphi}&Je^{-i\varphi}&0&Je^{i\varphi}\\\
Je^{i\varphi}&J_{1}e^{-i\varphi}&Je^{-i\varphi}&0&Je^{i\varphi}&J_{1}e^{i\varphi}&Je^{-i\varphi}&0\\\
0&Je^{i\varphi}&J_{1}e^{-i\varphi}&Je^{-i\varphi}&0&Je^{i\varphi}&J_{1}e^{i\varphi}&Je^{-i\varphi}\\\
Je^{-i\varphi}&0&Je^{i\varphi}&J_{1}e^{-i\varphi}&Je^{-i\varphi}&0&Je^{i\varphi}&J_{1}e^{i\varphi}\\\
J_{1}e^{i\varphi}&Je^{-i\varphi}&0&Je^{i\varphi}&J_{1}e^{-i\varphi}&Je^{-i\varphi}&0&Je^{i\varphi}\\\
Je^{i\varphi}&J_{1}e^{i\varphi}&Je^{-i\varphi}&0&Je^{i\varphi}&J_{1}e^{-i\varphi}&Je^{-i\varphi}&0\end{pmatrix}$
(16)
whereas the second configuration reads
$\displaystyle J_{1}=\Omega_{2}$ (17)
$\displaystyle J=J_{2}=J_{3}=\Omega_{3}$ (18)
$\displaystyle\theta_{2}=\theta_{3}=\phi_{32}=\phi_{3}=\phi_{21}=-\phi_{31}=\varphi$ (19)
$\displaystyle\theta_{1}=\phi_{23}=\phi_{2}=\phi_{1}=\phi_{12}=\phi_{13}=0$ (20)
with $\Omega_{1}\neq 0$ and then, from (2) we obtain the second circulant
Hamiltonian
$\displaystyle{H}_{\sf
cir}^{(2)}=\begin{pmatrix}0&Je^{i\varphi}&J_{1}e^{i\varphi}&Je^{-i\varphi}&\Omega_{1}&Je^{i\varphi}&J_{1}e^{-i\varphi}&Je^{-i\varphi}\\\
Je^{-i\varphi}&0&Je^{i\varphi}&J_{1}e^{i\varphi}&Je^{-i\varphi}&\Omega_{1}&Je^{i\varphi}&J_{1}e^{-i\varphi}\\\
J_{1}e^{-i\varphi}&Je^{-i\varphi}&0&Je^{i\varphi}&J_{1}e^{i\varphi}&Je^{-i\varphi}&\Omega_{1}&Je^{i\varphi}\\\
Je^{i\varphi}&J_{1}e^{-i\varphi}&Je^{-i\varphi}&0&Je^{i\varphi}&J_{1}e^{i\varphi}&Je^{-i\varphi}&\Omega_{1}\\\
\Omega_{1}&Je^{i\varphi}&J_{1}e^{-i\varphi}&Je^{-i\varphi}&0&Je^{i\varphi}&J_{1}e^{i\varphi}&Je^{-i\varphi}\\\
Je^{-i\varphi}&\Omega_{1}&Je^{i\varphi}&J_{1}e^{-i\varphi}&Je^{-i\varphi}&0&Je^{i\varphi}&J_{1}e^{i\varphi}\\\
J_{1}e^{i\varphi}&Je^{-i\varphi}&\Omega_{1}&Je^{i\varphi}&J_{1}e^{-i\varphi}&Je^{-i\varphi}&0&Je^{i\varphi}\\\
Je^{i\varphi}&J_{1}e^{i\varphi}&Je^{-i\varphi}&\Omega_{1}&Je^{i\varphi}&J_{1}e^{-i\varphi}&Je^{-i\varphi}&0\end{pmatrix}.$
(21)
At this level, we underline that the presence of the trilinear interaction is
necessary to realize the QFT gate: this type of interaction is required to
achieve the circulant symmetry. Generalizing to more than three qubits is not
trivial, because the Hamiltonian would demand multilinear interactions; as a
result, discussing the adiabatic transition and the shortcut to adiabaticity
would be mathematically difficult. In the forthcoming analysis, we focus on
only one of the above Hamiltonians, say ${H}_{\sf cir}^{(1)}$, and investigate
its basic features, starting with a numerical check of its circulant
structure.
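As a quick numerical check (a sketch of ours), one can build
${H}_{\sf cir}^{(1)}$ from its first row and verify that the Fourier modes
(3)-(10) are eigenvectors for every choice of $J$, $J_{1}$ and $\varphi$:

```python
import numpy as np

def h_cir1(J, J1, phi):
    """First circulant Hamiltonian (16), built by cyclic shifts of row 0."""
    e = np.exp(1j * phi)
    row = np.array([0, J * e, J1 * e, J / e, 0, J * e, J1 / e, J / e])
    return np.array([np.roll(row, k) for k in range(8)])

j, m = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
psi = np.exp(2j * np.pi * j * m / 8) / np.sqrt(8)      # Fourier modes (3)-(10)

for J, J1, phi in [(1.0, 2.0, np.pi / 2), (0.3, 1.7, np.pi / 4)]:
    H = h_cir1(J, J1, phi)
    assert np.allclose(H, H.conj().T)                  # Hermitian for real J, J1
    E = np.diag(psi.conj().T @ H @ psi)                # candidate eigenvalues
    assert np.allclose(H @ psi, psi * E)               # H psi_m = E_m psi_m
print("Fourier modes are eigenvectors for all tested parameters.")
```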
## 3 Adiabatic transition to Fourier modes
Several controls can be used to create an adiabatic transition to the Fourier
modes. Two of them are discussed here: the energy offset and the Rabi frequencies.
### 3.1 Controlling by energy offset
To realize an adiabatic evolution to the circulant Hamiltonian states (Fourier
modes), we add an energy offset ${H}_{0}(t)$
${H}_{0}(t)=\Delta_{1}(t){\sigma}_{1}^{z}+\Delta_{2}(t){\sigma}_{2}^{z}+\Delta_{3}(t){\sigma}_{3}^{z}$
(22)
where the time-dependent detunings $\Delta_{j}(t)$ of the $j^{th}$ spin are
necessary to control the adiabatic transition from the computational spin
states to the quantum Fourier states (3-10). Consequently, we now have the Hamiltonian
${H}(t)={H}_{0}(t)+{H}_{\sf cir}^{(1)}(t).$ (23)
Remember that in the adiabatic limit, the system always remains in the same
eigenstates of ${H}(t)$ [20]. The eigenstates of $H(t)$ are those of
$H_{0}(t)$ at $t_{i}$, and the dynamics drives them to the Fourier modes at
$t_{f}$, provided the couplings and detunings have a specific time dependence.
Therefore, adiabatic evolution translates each computational spin state to a
Fourier mode, resulting in a single interaction step that generates the QFT.
According to the adiabatic theorem, the splitting between the eigenfrequencies
of ${H}(t)$ must at all times be much larger than the non-adiabatic coupling
between each pair of the $H(t)$ eigenstates $\ket{\lambda_{\pm}}$,
$\ket{\delta_{\pm}}$, $\ket{\mu_{\pm}}$, and $\ket{\gamma_{\pm}}$.
Explicitly, we have the conditions
$\displaystyle|\mu_{\pm}(t)-\lambda_{\pm}(t)|\gg|\bra{\partial_{t}\mu_{\pm}(t)}\ket{\lambda_{\pm}(t)}|$
(24)
$\displaystyle|\lambda_{+}(t)-\lambda_{-}(t)|\gg|\bra{\partial_{t}\lambda_{+}(t)}\ket{\lambda_{-}(t)}|$
(25)
$\displaystyle|\lambda_{\pm}(t)-\delta_{\pm}(t)|\gg|\bra{\partial_{t}\lambda_{\pm}(t)}\ket{\delta_{\pm}(t)}|$
(26)
$\displaystyle|\delta_{+}(t)-\delta_{-}(t)|\gg|\bra{\partial_{t}\delta_{+}(t)}\ket{\delta_{-}(t)}|$
(27)
$\displaystyle|\delta_{\pm}(t)-\gamma_{\pm}(t)|\gg|\bra{\partial_{t}\delta_{\pm}(t)}\ket{\gamma_{\pm}(t)}|$
(28)
$\displaystyle|\gamma_{+}(t)-\gamma_{-}(t)|\gg|\bra{\partial_{t}\gamma_{+}(t)}\ket{\gamma_{-}(t)}|.$
(29)
Let us choose $\varphi=\frac{\pi}{2}$ to simplify our problem; we then prove
in Appendix A that the eigenvalues
$\lambda_{\pm},\delta_{\pm},\mu_{\pm},\gamma_{\pm}$ of the Hamiltonian
${H}(t)$ (23) are given by (A.1-A.4). It is worthwhile to mention that the
circulant symmetry is now broken. We suppose that the system is initially
prepared in the computational product states
$\ket{\psi_{s_{1}s_{2}s_{3}}}=\ket{s_{1}s_{2}s_{3}}$
$(s_{j}=\downarrow_{j},\uparrow_{j})$, which are eigenstates of the
Hamiltonian ${H}_{0}(t)$. As a result, the initial parameters should satisfy
the conditions
$\displaystyle\Delta_{1,2,3}(t_{i})\gg J(t_{i}),\>J_{1}(t_{i})$ (30)
so that ${H}(t)$ reduces to ${H}_{0}(t)$. Consequently, the eigenvalues become
$\displaystyle\lambda_{\pm}(t_{i})=\pm\left[\Delta_{1}(t_{i})+\Delta_{2}(t_{i})+\Delta_{3}(t_{i})\right]$
(31)
$\displaystyle\delta_{\pm}(t_{i})=\pm\left[\Delta_{1}(t_{i})+\Delta_{2}(t_{i})-\Delta_{3}(t_{i})\right]$
(32)
$\displaystyle\mu_{\pm}(t_{i})=\pm\left[\Delta_{1}(t_{i})-\Delta_{2}(t_{i})+\Delta_{3}(t_{i})\right]$
(33)
$\displaystyle\gamma_{\pm}(t_{i})=\pm\left[\Delta_{1}(t_{i})-\Delta_{2}(t_{i})-\Delta_{3}(t_{i})\right]$
(34)
and the ${H}(t)$ eigenvectors are exactly the computational spin states, i.e.
$\ket{\psi(t_{i})}=\ket{s_{1}s_{2}s_{3}}$,
$\displaystyle\ket{\lambda_{+}}=\ket{\downarrow\downarrow\downarrow},\quad\ket{\lambda_{-}}=\ket{\uparrow\uparrow\uparrow}$
(35)
$\displaystyle\ket{\delta_{+}}=\ket{\downarrow\downarrow\uparrow},\qquad\ket{\delta_{-}}=\ket{\uparrow\uparrow\downarrow}$
(36)
$\displaystyle\ket{\mu_{+}}=\ket{\downarrow\uparrow\downarrow},\qquad\ket{\mu_{-}}=\ket{\uparrow\downarrow\uparrow}$
(37)
$\displaystyle\ket{\gamma_{+}}=\ket{\downarrow\uparrow\uparrow},\qquad\ket{\gamma_{-}}=\ket{\uparrow\downarrow\downarrow}.$
(38)
To avoid degeneracy, the condition $\Delta_{i}(t_{i})\neq\Delta_{j}(t_{i})$
with $i\neq j=1,2,3$ must hold, and we can obtain equidistant
eigenfrequencies by requiring
$\Delta_{1}(t_{i})=2\Delta_{2}(t_{i})=4\Delta_{3}(t_{i})$. Furthermore, to
obtain the Fourier modes at the final time $t_{f}$ of the transition, the
coupling parameters together with the detunings should satisfy the conditions
$\displaystyle\Delta_{1,2,3}(t_{f})\ll J(t_{f}),\>J_{1}(t_{f}).$ (39)
As a result, the total Hamiltonian evolves to a circulant one, i.e.
${H}(t)\rightarrow{H}_{\sf cir}^{(1)}(t)$, and its eigenspectrum becomes that
of ${H}_{\sf cir}^{(1)}(t)$, as shown below
$\displaystyle\ket{\lambda_{+}}=\ket{\psi_{0}},\qquad\ket{\lambda_{-}}=\ket{\psi_{7}}$
(40)
$\displaystyle\ket{\delta_{+}}=\ket{\psi_{1}},\qquad\ket{\delta_{-}}=\ket{\psi_{6}}$
(41)
$\displaystyle\ket{\mu_{+}}=\ket{\psi_{2}},\qquad\ket{\mu_{-}}=\ket{\psi_{5}}$
(42)
$\displaystyle\ket{\gamma_{+}}=\ket{\psi_{3}},\qquad\ket{\gamma_{-}}=\ket{\psi_{4}}.$
(43)
Additionally, the realization of the QFT relies on the adiabatic following of
each of the instantaneous eigenvectors
$\displaystyle\ket{\downarrow\downarrow\downarrow}\longrightarrow e^{i\alpha_{1}}\ket{\psi_{0}}$ (44)
$\displaystyle\ket{\downarrow\downarrow\uparrow}\longrightarrow e^{i(\alpha_{2}-\frac{\pi}{2})}\ket{\psi_{1}}$ (45)
$\displaystyle\ket{\downarrow\uparrow\downarrow}\longrightarrow e^{i\alpha_{3}}\ket{\psi_{2}}$ (46)
$\displaystyle\ket{\downarrow\uparrow\uparrow}\longrightarrow e^{i\alpha_{4}}\ket{\psi_{3}}$ (47)
$\displaystyle\ket{\uparrow\downarrow\downarrow}\longrightarrow e^{-i\alpha_{4}}\ket{\psi_{4}}$ (48)
$\displaystyle\ket{\uparrow\downarrow\uparrow}\longrightarrow e^{-i\alpha_{3}}\ket{\psi_{5}}$ (49)
$\displaystyle\ket{\uparrow\uparrow\downarrow}\longrightarrow e^{-i\alpha_{2}}\ket{\psi_{6}}$ (50)
$\displaystyle\ket{\uparrow\uparrow\uparrow}\longrightarrow e^{-i\alpha_{1}}\ket{\psi_{7}}$ (51)
and the global adiabatic phases $\alpha_{j}$ appear due to the adiabatic
evolution [20, 27, 28, 22]
$\displaystyle\alpha_{1}=\int_{t_{i}}^{t_{f}}\lambda_{+}(t)\,\mathrm{d}t,\qquad\alpha_{2}=\int_{t_{i}}^{t_{f}}\delta_{+}(t)\,\mathrm{d}t,\qquad\alpha_{3}=\int_{t_{i}}^{t_{f}}\mu_{+}(t)\,\mathrm{d}t,\qquad\alpha_{4}=\int_{t_{i}}^{t_{f}}\gamma_{+}(t)\,\mathrm{d}t$
(52)
with the relations
$\displaystyle\lambda_{-}(t)=-\lambda_{+}(t),\qquad\delta_{-}(t)=-\delta_{+}(t),\qquad\mu_{-}(t)=-\mu_{+}(t),\qquad\gamma_{-}(t)=-\gamma_{+}(t).$
(53)
After a specific tuning of the detunings $\Delta_{j}(t)$, the phases
$\alpha_{j}$ reduce to
$\alpha_{1}=2p\pi,\qquad\alpha_{2}=2m\pi,\qquad\alpha_{3}=2n\pi,\qquad\alpha_{4}=2k\pi$
(54)
where $p,m,n,k$ are integers. This choice leads to the realization of the
following unitary quantum gate
$\displaystyle\mathcal{G}=\dfrac{1}{2\sqrt{2}}\begin{pmatrix}1&-i&1&1&1&1&1&1\\\
1&-i\omega&i&i\omega&-1&-\omega&-i&-i\omega\\\ 1&1&-1&-i&1&i&-1&-i\\\
1&\omega&-i&\omega&-1&-i\omega&i&-\omega\\\ 1&i&1&-1&1&-1&1&-1\\\
1&i\omega&i&-i\omega&-1&\omega&-i&i\omega\\\ 1&-1&-1&i&1&-i&-1&i\\\
1&-\omega&-i&-\omega&-1&i\omega&i&\omega\end{pmatrix}.$ (55)
As a result, one can show that, up to an additional phase $-\frac{\pi}{2}$,
the determinant of $\mathcal{G}$ is equal to one, i.e. $\det(\mathcal{G})=1$,
which is necessary for the adiabatic evolution. It is therefore worthwhile to
note that the gate $\mathcal{G}$ realizes the QFT.
### 3.2 Controlling by Rabi frequencies
We now discuss how to control the Rabi frequencies $\Omega_{2}(t)$ and
$\Omega_{3}(t)$ by considering the Hamiltonian ${H}^{(1)}(t)$ (B.2) of
Appendix B, without using the energy offset ${H}_{0}(t)$ (22). To achieve this
goal, we drive $\Omega_{2}(t)$ and $\Omega_{3}(t)$ so that they become equal
to the couplings $J_{1}$ and $J$, respectively. In what follows, we summarize
the control process in three steps.
* •
Initially: Let us assume that $\Omega_{2}(t_{i})\gg J_{1}$ and
$\Omega_{3}(t_{i})\gg J$; then the eigenvectors of ${H}^{(1)}(t)$ (B.2) are
the rotating computational spin states
$\ket{\psi({t_{i}})}=\ket{s^{\prime}_{1}s^{\prime}_{2}s^{\prime}_{3}}$
($s^{\prime}_{j}=\pm_{j}$), namely
$\displaystyle\ket{\pm_{1}}=\dfrac{1}{\sqrt{2}}\left(\ket{\downarrow_{1}}\pm\ket{\uparrow_{1}}\right)$
$\displaystyle\ket{\pm_{2}}=\dfrac{1}{\sqrt{2}}\left(e^{i\varphi}\ket{\downarrow_{2}}\pm\>\ket{\uparrow_{2}}\right)$
(56)
$\displaystyle\ket{\pm_{3}}=\dfrac{1}{\sqrt{2}}\left(\ket{\downarrow_{3}}\pm\ket{\uparrow_{3}}\right).$
* •
Transition: The adiabatic transition from the initial eigenvectors to Fourier
modes is given by the mappings
$\displaystyle\ket{---}\longrightarrow e^{-i\beta_{0}}\ket{\psi_{0}}$
$\displaystyle\ket{--+}\longrightarrow e^{-i\beta_{1}}\ket{\psi_{1}}$
$\displaystyle\ket{-+-}\longrightarrow e^{-i\beta_{2}}\ket{\psi_{2}}$
$\displaystyle\ket{-++}\longrightarrow e^{-i\beta_{3}}\ket{\psi_{3}}$ (57)
$\displaystyle\ket{+--}\longrightarrow e^{-i\beta_{4}}\ket{\psi_{4}}$
$\displaystyle\ket{+-+}\longrightarrow e^{-i\beta_{5}}\ket{\psi_{5}}$
$\displaystyle\ket{++-}\longrightarrow e^{-i\beta_{6}}\ket{\psi_{6}}$
$\displaystyle\ket{+++}\longrightarrow e^{-i\beta_{7}}\ket{\psi_{7}}$
where the adiabatic phases $\beta_{i}$ read
$\displaystyle\beta_{i}=\int_{t_{i}}^{t_{f}}\Lambda_{i}(t)\,\mathrm{d}t,\qquad i=0,\cdots,7$ (58)
and Appendix B shows the eigenfrequencies $\Lambda_{i}(t)$ (B.3-B.10) of
${H}^{(1)}(t)$ (B.2).
* •
Finally: We adiabatically decrease $\Omega_{2}(t)$ together with
$\Omega_{3}(t)$ to end up with
$\Omega_{2}(t_{f})=J_{1},\qquad\Omega_{3}(t_{f})=J.$ (59)
As a result, the circulant symmetry is established and the Fourier modes are
reached.
At this stage, we mention that in the case of the gate realization based on
the energy offset, the eigenvectors of the Hamiltonian (23) cannot be obtained
exactly. However, under the considerations made here, the derivation of the
eigenvectors can be achieved; see Appendix B. Thus, to accelerate the gate, we
combine the gate scheme with the shortcut to adiabaticity.
## 4 Gate implementation
To give a physical implementation of our system, we generalize the process
proposed by Ivanov and Vitanov [20] to the three-qubit case with trilinear
coupling. Indeed, to realize the circulant Hamiltonian, we proceed with
trapped ions [29, 30, 31]. A crystal of $3N$ ions of mass $M$ is considered;
the trap axes are $z$ and $x$, with frequencies $\Omega_{z}$ and $\Omega_{x}$,
respectively. Each ion of the crystal is a qubit described by two typical
levels, $\ket{\uparrow}$ and $\ket{\downarrow}$, with an energy gap
$\omega_{0}$. Moreover, the virtual excitations generated between two coupled
ions undergo small radial vibrations around the equilibrium positions of the
ions. As an illustration, we present our physical implementation in Figure 2.
Figure 2: (color online) A proposal for the physical implementation of the
interaction between three coupled trapped ions with laser frequencies
$\Omega_{j,L_{r}}=\omega_{0}-\nu-\Delta_{j}(t)$ and
$\Omega_{j,L_{b}}=\omega_{0}+\nu-\Delta_{j}(t)$ that generates a spin-
dependent force at the frequency $\nu$.
As for our Hamiltonian (2), the implementation of the kinetic terms together
with the bilinear couplings is discussed in detail in [20]. Regarding the
implementation of the trilinear coupling, we suggest that it can be seen as a
coupling between two coupled ions and an extra third ion [25]. To be precise,
the small radial vibrations around the equilibrium positions of two coupled
ions are described by a set of collective vibrational modes with the
Hamiltonians [23, 32]
${H}_{\sf
ph1}=\sum_{n}\Omega_{n}\widehat{a}_{n}^{\dagger}\widehat{a}_{n},\qquad{H}_{\sf
ph2}=\sum_{n}\Omega_{n}\widehat{b}_{n}^{\dagger}\widehat{b}_{n},\qquad{H}_{\sf
ph3}=\sum_{n}\Omega_{n}\widehat{c}_{n}^{\dagger}\widehat{c}_{n}$ (60)
whereas the internal energy is
${H}_{\sf q}=\frac{1}{2}\sum_{j}\omega_{0}\sigma_{j}^{z}$ (61)
and the free Hamiltonian then becomes
${H}_{0}={H}_{\sf q}+\sum_{j}{H}_{\sf phj}.$ (62)
As claimed above, generating the trilinear coupling relies on the interaction
between two coupled ions and a third one. For instance, this can be achieved
by using two pairs of noncopropagating laser beams along the radial
directions, with frequencies
$\displaystyle\Omega_{j,L_{r}}=\omega_{0}-\nu-\Delta_{j}(t)$ (63)
$\displaystyle\Omega_{j,L_{b}}=\omega_{0}+\nu-\Delta_{j}(t)$ (64)
that generate a spin-dependent force at frequency $\nu$.
Besides, using the weak-coupling assumption, one can legitimately apply the
optical rotating-wave approximation and, as a result, the interacting part of
our system reads
$\displaystyle{H}_{\sf I}=$
$\displaystyle\sum_{j}\Delta_{j}\sigma_{j}^{z}+\Omega_{x}e^{ik\widehat{x}_{1}}\cos(\nu
t)\left(\sigma_{1}^{+}e^{i\phi_{1}}+\sigma_{1}^{-}e^{-i\phi_{1}}\right)+\Omega_{x}e^{ik\widehat{x}_{2}}\cos(\nu
t)\left(\sigma_{2}^{+}e^{i\phi_{2}}+\sigma_{2}^{-}e^{-i\phi_{2}}\right)$
$\displaystyle+\Omega_{z}e^{ik\widehat{z}_{3}}\cos(\nu
t)\left(\sigma_{3}^{+}e^{i\phi_{3}}+\sigma_{3}^{-}e^{-i\phi_{3}}\right)+\sum_{j}\Omega_{j}\left(\sigma_{j}^{+}e^{i\theta_{j}}+\sigma_{j}^{-}e^{-i\theta_{j}}\right)$
(65) $\displaystyle+\Omega_{\alpha}e^{ik\widehat{z}_{3}}\sin(\nu
t)\left(\sigma_{1}^{+}e^{i{\varphi_{1}}}+\sigma_{1}^{-}e^{-i{\varphi_{1}}}\right)\left(\sigma_{2}^{+}e^{i{\varphi_{2}}}+\sigma_{2}^{-}e^{-i{\varphi_{2}}}\right)$
where $\Omega_{x},\Omega_{z},\Omega_{j},\Omega_{\alpha}$ are the Rabi
frequencies, $\phi_{j},\theta_{j}$ are the laser phases, $\varphi_{1,2}$ are
the phases resulting from the trilinear interaction, and the spatial arguments
$\displaystyle k\widehat{x}_{1}$ $\displaystyle=$
$\displaystyle\sum_{n}\eta_{1,n}(\widehat{a}_{n}^{\dagger}e^{i\Omega_{n}t}+\widehat{a}_{n}e^{-i\Omega_{n}t})$
(66) $\displaystyle k\widehat{x}_{2}$ $\displaystyle=$
$\displaystyle\sum_{n}\eta_{2,n}(\widehat{b}_{n}^{\dagger}e^{i\Omega_{n}t}+\widehat{b}_{n}e^{-i\Omega_{n}t})$
(67) $\displaystyle k\widehat{z}_{3}$ $\displaystyle=$
$\displaystyle\sum_{n}\eta_{3,n}(\widehat{c}_{n}^{\dagger}e^{i\Omega_{n}t}+\widehat{c}_{n}e^{-i\Omega_{n}t})$
(68)
involve the Lamb-Dicke parameters
$\displaystyle\eta_{j,n}=b_{j,n}k\sqrt{\hbar/2M\Omega_{n}}$ (69)
where $b_{j,n}$ is the normal-mode transformation matrix for the $j^{th}$ ion.
Hence, since the dimensionless parameters $\eta_{j,n}$ are small, we can make
the Lamb-Dicke approximation, $\Delta k\langle\widehat{x}_{1}\rangle\ll 1$,
$\Delta k\langle\widehat{x}_{2}\rangle\ll 1$, $\Delta
k\langle\widehat{z}_{3}\rangle\ll 1$, to end up with
$\displaystyle{H}_{\sf I}=$
$\displaystyle\sum_{j}\Delta_{j}\sigma_{j}^{z}+\sum_{n}J_{1,n}\cos(\nu
t)\left(\sigma_{1}^{+}e^{i\phi_{1}}+\sigma_{1}^{-}e^{-i\phi_{1}}\right)\left(\widehat{a}_{n}^{\dagger}e^{i\Omega_{n}t}+\widehat{a}_{n}e^{-i\Omega_{n}t}\right)$
$\displaystyle+\sum_{j}\Omega_{j}\left(\sigma_{j}^{+}e^{i\theta_{j}}+\sigma_{j}^{-}e^{-i\theta_{j}}\right)+\sum_{n}J_{2,n}\cos(\nu
t)\left(\sigma_{2}^{+}e^{i\phi_{2}}+\sigma_{2}^{-}e^{-i\phi_{2}}\right)\left(\widehat{b}_{n}^{\dagger}e^{i\Omega_{n}t}+\widehat{b}_{n}e^{-i\Omega_{n}t}\right)$
$\displaystyle+\sum_{n}J_{3,n}\cos(\nu
t)\left(\sigma_{3}^{+}e^{i\phi_{3}}+\sigma_{3}^{-}e^{-i\phi_{3}}\right)\left(\widehat{c}_{n}^{\dagger}e^{i\Omega_{n}t}+\widehat{c}_{n}e^{-i\Omega_{n}t}\right)$
(70) $\displaystyle+\sum_{n}h_{n}\sin(\nu
t)\left(\sigma_{1}^{+}e^{i{\varphi_{1}}}+\sigma_{1}^{-}e^{-i{\varphi_{1}}}\right)\left(\sigma_{2}^{+}e^{i{\varphi_{2}}}+\sigma_{2}^{-}e^{-i{\varphi_{2}}}\right)\left(\widehat{c}_{n}^{\dagger}e^{i\Omega_{n}t}+\widehat{c}_{n}e^{-i\Omega_{n}t}\right)$
where $J_{1,n}=\eta_{1,n}\Omega_{x}$, $J_{2,n}=\eta_{2,n}\Omega_{x}$ and
$J_{3,n}=\eta_{3,n}\Omega_{z}$ are the spin-phonon couplings and
$h_{n}=\eta_{3,n}\Omega_{\alpha}$ is the trilinear coupling. Moreover, during
the slow dynamics, the beat-note frequency $\nu$ is not resonant with any
radial vibrational mode, i.e. $|\Omega_{n}-\nu|\gg J_{j,n},h_{n}$.
Additionally, if the phonons are only virtually excited, they can be
eliminated from the dynamics, and as a consequence the spin states at
different sites become coupled to each other. For three different sites $j$,
$p$ and $q$, we have
$\displaystyle{H}_{\sf I}=$
$\displaystyle\Delta_{j}\sigma_{j}^{z}+\Delta_{p}\sigma_{p}^{z}+\Delta_{q}\sigma_{q}^{z}+\Omega_{j}\left(\sigma_{j}^{+}e^{i\theta_{j}}+\sigma_{j}^{-}e^{-i\theta_{j}}\right)+\Omega_{p}\left(\sigma_{p}^{+}e^{i\theta_{p}}+\sigma_{p}^{-}e^{-i\theta_{p}}\right)$
$\displaystyle+\Omega_{q}\left(\sigma_{q}^{+}e^{i\theta_{q}}+\sigma_{q}^{-}e^{-i\theta_{q}}\right)+J_{1}\left(\sigma_{j}^{+}e^{i\phi_{j}}+\sigma_{j}^{-}e^{-i\phi_{j}}\right)\left(\sigma_{p}^{+}e^{i\phi_{p}}+\sigma_{p}^{-}e^{-i\phi_{p}}\right)$
(71)
$\displaystyle+J_{2}\left(\sigma_{p}^{+}e^{i\phi_{p}}+\sigma_{p}^{-}e^{-i\phi_{p}}\right)\left(\sigma_{q}^{+}e^{i\phi_{q}}+\sigma_{q}^{-}e^{-i\phi_{q}}\right)+J_{3}\left(\sigma_{j}^{+}e^{i\phi_{j}}+\sigma_{j}^{-}e^{-i\phi_{j}}\right)\left(\sigma_{q}^{+}e^{i\phi_{q}}+\sigma_{q}^{-}e^{-i\phi_{q}}\right)$
$\displaystyle+J\left(\sigma_{j}^{+}e^{i{\varphi_{j}}}+\sigma_{j}^{-}e^{-i{\varphi_{j}}}\right)\left(\sigma_{p}^{+}e^{i{\varphi_{p}}}+\sigma_{p}^{-}e^{-i{\varphi_{p}}}\right)\left(\sigma_{q}^{+}e^{i{\varphi_{q}}}+\sigma_{q}^{-}e^{-i{\varphi_{q}}}\right)$
where the couplings between two ions are given by
$\displaystyle
J_{1}=\sum_{n}J_{j,n}J_{p,n}\frac{1}{\nu^{2}-\Omega_{n}^{2}},\qquad
J_{2}=\sum_{n}J_{p,n}J_{q,n}\frac{1}{\nu^{2}-\Omega_{n}^{2}},\qquad
J_{3}=\sum_{n}J_{j,n}J_{q,n}\frac{1}{\nu^{2}-\Omega_{n}^{2}}$ (72)
and that of the trilinear coupling between three ions
$\displaystyle J=\sum_{n}J_{j,n}J_{p,n}h_{n}\frac{1}{\nu^{2}-\Omega_{n}^{2}}.$
(73)
At this level, it is clearly seen that one can realize the circulant
Hamiltonian $H_{\sf cir}^{(1)}(t)$ (16) by adjusting the coupling parameters;
a numerical illustration of the effective couplings is sketched below.
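For orientation, the effective couplings (72)-(73) can be evaluated directly
from the mode data. The sketch below is ours and uses entirely hypothetical
mode frequencies and spin-phonon couplings, chosen only to illustrate the
formulas:

```python
import numpy as np

# Hypothetical mode data (illustrative values only, not from the text)
Omega_n = 2 * np.pi * np.array([2.0e6, 2.1e6, 2.3e6])   # radial mode frequencies
nu      = 2 * np.pi * 1.9e6                             # off-resonant beat note
Jj, Jp, Jq = (2 * np.pi * np.array(v) for v in
              ([3e3, 2e3, 1e3], [2e3, 3e3, 2e3], [1e3, 2e3, 3e3]))
hn = 2 * np.pi * np.array([0.5e3, 0.4e3, 0.6e3])        # trilinear couplings h_n

den = nu**2 - Omega_n**2
J1, J2, J3 = (Jj * Jp / den).sum(), (Jp * Jq / den).sum(), (Jj * Jq / den).sum()  # (72)
J = (Jj * Jp * hn / den).sum()                                                    # (73)
print(J1 / (2 * np.pi), J2 / (2 * np.pi), J3 / (2 * np.pi), J / (2 * np.pi))
```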
## 5 Numerical analysis
In what follows, we choose the following time modulations of the couplings
$J_{1}(t)$, $J(t)$ and the detunings $\Delta_{j}(t)$ for the gate
implementation:
$\displaystyle J_{1}(t)=J_{01}\sin^{2}(\omega^{\prime}t)$ (74) $\displaystyle
J(t)=J_{0}\sin^{2}(\omega^{\prime}t)$ (75)
$\displaystyle\Delta_{j}(t)=\Delta_{j}\cos^{2}(\omega^{\prime}t)$ (76)
where the characteristic parameter $\omega^{\prime}$ controls the adiabaticity
of the transition and the interaction time varies as $t\in[0,t_{max}]$
with $t_{max}=\frac{\pi}{2\omega^{\prime}}$. This time dependence
guarantees the following conditions
$\displaystyle\Delta_{1,2,3}(0)\gg
J(0),J_{1}(0),\qquad\Delta_{1,2,3}(t_{max})\ll J(t_{max}),J_{1}(t_{max}).$
(77)
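The modulations (74)-(76) and the boundary conditions (77) are straightforward
to encode; a minimal sketch of ours, assuming the parameter values of Figure 3:

```python
import numpy as np

w = 2 * np.pi * 0.15e3                          # omega' [rad/s]
J0, J01 = 2 * np.pi * 1e3, 2 * np.pi * 2e3
Delta = 2 * np.pi * np.array([120e3, 60e3, 30e3])
t_max = np.pi / (2 * w)

J   = lambda t: J0  * np.sin(w * t)**2          # Eq. (75)
J1  = lambda t: J01 * np.sin(w * t)**2          # Eq. (74)
Dlt = lambda t: Delta * np.cos(w * t)**2        # Eq. (76)

# Boundary conditions (77): detunings dominate at t = 0, couplings at t_max.
print(Dlt(0).min() / max(J0, J01))                   # >> 1
print(Dlt(t_max).max() / min(J(t_max), J1(t_max)))   # -> 0
```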
The adiabatic transition to the Fourier modes can also be carried out without
using the detunings $\Delta_{j}$. In fact, we can simply vary the Rabi
frequencies to finally obtain the Fourier modes, via
$\displaystyle J(t)=J_{0}\sin^{2}(\omega^{\prime}t)$ (78) $\displaystyle
J_{1}(t)=J_{01}\sin^{2}(\omega^{\prime}t)$ (79)
$\displaystyle\Omega_{2}(t)=J_{01}+\Upsilon_{0}\cos^{2}(\omega^{\prime}t)$
(80)
$\displaystyle\Omega_{3}(t)=J_{0}+\Upsilon^{\prime}_{0}\cos^{2}(\omega^{\prime}t)$
(81)
where $\Upsilon_{0}$ and $\Upsilon^{\prime}_{0}$ are additional amplitudes
controlling the adiabaticity of the transition in the second and third qubits,
respectively.
### 5.1 Eigenfrequencies
We numerically show in Figure 3 the eigenfrequencies
$\lambda_{\pm}(t),\delta_{\pm}(t),\mu_{\pm}(t),\gamma_{\pm}(t)$ of the
Hamiltonian ${H}(t)$ (23) versus time, under suitable choices of the coupling
parameters and detunings. As expected, all eigenfrequencies are separated from
each other, which in turn suppresses any transition to a superposition of
eigenstates. Degeneracy of the energies must be avoided during the simulated
time, since degeneracy would prevent the gate implementation at hand. The gap
between the eigenvalues decreases during the simulation time; to avoid
degeneracy during the evolution, we have introduced the detuning frequencies
$\Delta_{j}$. Additionally, we mention that the coupling amplitudes $J_{0}$
and $J_{01}$ are important to prevent degeneracy at the final time $t_{max}$.
Figure 3: (color online) Eigenfrequencies of the Hamiltonian ${H}(t)$ (23) as
a function of the time. The parameters are chosen as $J_{0}/2\pi=1$ kHz,
$J_{01}/2\pi=2$ kHz, $\Delta_{1}/2\pi=120$ kHz, $\Delta_{2}/2\pi=60$ kHz,
$\Delta_{3}/2\pi=30$ kHz, $\varphi=\dfrac{\pi}{2}$,
$\omega^{\prime}/2\pi=0.15$ kHz.
### 5.2 Gate fidelity
Gate fidelity is a tool to compare how close two gates, or more generally two
operations, are to each other [33]. In other words, it expresses the
probability that one state will pass a test to identify itself as the other
one. We recall that fidelities higher than 99.99$\%$ for a single-qubit gate
and 99.9$\%$ for an entangling gate in a two-ion crystal have been achieved
[34, 35, 36]. Generally, for a theoretical density matrix $\rho_{0}$ and a
reconstructed density matrix $\rho$, it is defined by
$\displaystyle
F(\rho_{0},\rho)=\left(\Tr\sqrt{\sqrt{\rho_{0}}\,\rho\sqrt{\rho_{0}}}\right)^{2}.$
(82)
By applying the Uhlmann theorem [37], (82) takes the simple form
$\displaystyle F(\rho_{0},\rho)=\left|\langle\psi_{0}|\psi\rangle\right|^{2}$
(83)
where $\psi_{0}$ and $\psi$ are the theoretical and reconstructed purified
state vectors, respectively. As for our system, we have [20]
$\displaystyle F_{\sf
Gate}(t)=\dfrac{1}{64}\left|\sum_{s_{1},s_{2},s_{3}}\bra{s_{1}s_{2}s_{3}}\mathcal{G}^{\dagger}\mathcal{G}^{\prime}(t)\ket{s_{1}s_{2}s_{3}}\right|^{2}$
(84)
where $s_{j}=\uparrow_{j},\downarrow_{j}$, $\mathcal{G}$ is the three-qubit
QFT (55) and $\mathcal{G}^{\prime}(t)$ is the actual transform. In Figure 4,
we present the gate fidelity versus the evolution time, choosing the detunings
$\Delta_{1,2,3}$ such that the adiabatic phases are given by (54). The unitary
propagator $\mathcal{G}^{\prime}(t)$ converges to $\mathcal{G}$ as time
progresses. We notice that for a nonlinear coupling $J_{0}=1$ kHz and
$t=0.4875$ ms, the gate reaches a high fidelity ($96\%$).
Figure 4: (color online) The gate fidelity is calculated using the Hamiltonian
${H}(t)$ (23) in a numerical simulation. The parameters are chosen as
$J_{0}/2\pi=J_{01}/2\pi=1$ kHz, $\Delta_{1}/2\pi=20$ kHz, $\Delta_{2}/2\pi=10$
kHz, $\Delta_{3}/2\pi=6$ kHz, $\varphi=\dfrac{\pi}{2}$ and
$\omega^{\prime}=0.505$ kHz.
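For completeness, the kind of simulation behind Figure 4 can be sketched as
follows. This is our own minimal reconstruction, not the authors' code: it
propagates $i\,\dot{U}=H(t)U$ with the modulations (74)-(76), using scipy's
solve_ivp, and compares the final propagator with the ideal Fourier-mode map.
The adiabatic phases are assumed tuned to (54), so the printed value will
differ from Figure 4 unless the detunings are adjusted accordingly.

```python
import numpy as np
from scipy.integrate import solve_ivp

w = 2 * np.pi * 0.505e3                               # omega' (assumed in rad/s)
J0 = J01 = 2 * np.pi * 1e3
Delta = 2 * np.pi * np.array([20e3, 10e3, 6e3])       # detunings of Fig. 4
t_max = np.pi / (2 * w)

sz, I2 = np.diag([1.0, -1.0]), np.eye(2)
Z = [np.kron(np.kron(sz, I2), I2),                    # sigma_1^z
     np.kron(np.kron(I2, sz), I2),                    # sigma_2^z
     np.kron(np.kron(I2, I2), sz)]                    # sigma_3^z

def h_cir1(J, J1, phi=np.pi / 2):
    e = np.exp(1j * phi)
    row = np.array([0, J * e, J1 * e, J / e, 0, J * e, J1 / e, J / e])
    return np.array([np.roll(row, k) for k in range(8)])

def H(t):
    H0 = sum(D * np.cos(w * t)**2 * Zj for D, Zj in zip(Delta, Z))   # (22), (76)
    return H0 + h_cir1(J0 * np.sin(w * t)**2, J01 * np.sin(w * t)**2)

rhs = lambda t, u: (-1j * H(t) @ u.reshape(8, 8)).ravel()
sol = solve_ivp(rhs, (0, t_max), np.eye(8, dtype=complex).ravel(),
                rtol=1e-8, atol=1e-10)
U = sol.y[:, -1].reshape(8, 8)

j, m = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
G = np.exp(2j * np.pi * j * m / 8) / np.sqrt(8)       # ideal Fourier-mode map
print(abs(np.trace(G.conj().T @ U))**2 / 64)          # fidelity, cf. (84)
```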
By using the Hamiltonian (B.2) together with the quantum Fourier states
(3-10), one can obtain the fidelity of the adiabatic transition between the
rotating computational spin states
$\ket{s^{\prime}_{1}s^{\prime}_{2}s^{\prime}_{3}}$ $(s^{\prime}_{j}=\pm_{j})$
and the Fourier modes,
$\displaystyle F_{\sf
ad}(t)=\dfrac{1}{64}\left|\sum_{i=0}^{7}\langle\psi_{i}|\Lambda_{i}\rangle\right|^{2}$
(85)
and more explicitly, we have
$\displaystyle F_{\sf ad}(t)=$
$\displaystyle\dfrac{1}{64}\left|\langle\psi_{0}|\Lambda_{0}\rangle+\langle\psi_{1}|\Lambda_{1}\rangle+\langle\psi_{2}|\Lambda_{2}\rangle+\langle\psi_{3}|\Lambda_{3}\rangle+\langle\psi_{4}|\Lambda_{4}\rangle+\langle\psi_{5}|\Lambda_{5}\rangle+\langle\psi_{6}|\Lambda_{6}\rangle+\langle\psi_{7}|\Lambda_{7}\rangle\right|^{2}.$
(86)
Figure 5: (color online) Fidelity of adiabatic transition with
$J_{0}/2\pi=2.1$ kHz, $\Upsilon_{0}/2\pi=1.9$ kHz, $J_{01}/2\pi=2.4$ kHz,
$\Upsilon^{\prime}_{0}/2\pi=2$ kHz, $\varphi=\dfrac{\pi}{4}$ and
$\omega^{\prime}/2\pi=0.3$ kHz.
Figure 5 presents the fidelity of the adiabatic transition within a shorter
interaction time, $t_{max}=0.835$ ms. It is clearly seen that our results show
the possibility of obtaining a fidelity of $71\%$.
### 5.3 Creation of entangled states
To create entangled states, one has to suitably prepare the initial state in a
superposition of spin states; this is because the action of the QFT on a
computational basis state creates a superposition, which is however not
entangled. For concreteness, we assume that the system is initially prepared
in the following state:
$\displaystyle\ket{\psi(0)}=\frac{1}{2}\ket{\downarrow_{1}}(e^{i\alpha_{1}}\ket{\downarrow_{2}\downarrow_{3}}+e^{i\alpha_{2}}\ket{\downarrow_{2}\uparrow_{3}}+e^{i\alpha_{3}}\ket{\uparrow_{2}\downarrow_{3}}+e^{i\alpha_{4}}\ket{\uparrow_{2}\uparrow_{3}}).$
(87)
Performing our three-qubit gate, we obtain the entangled state
$\displaystyle\ket{\psi(0)}\longrightarrow\ket{\psi(t_{f})}=\frac{1}{2}(\ket{\psi_{0}}+\ket{\psi_{1}}+\ket{\psi_{2}}+\ket{\psi_{3}}).$
(88)
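That the state (88) is indeed entangled can be checked directly; a short
sketch of ours computes the purity of the reduced state of the first qubit (a
purity below one signals entanglement), using the idealized gate action with
the phases tuned away:

```python
import numpy as np

j, m = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
psi = np.exp(2j * np.pi * j * m / 8) / np.sqrt(8)      # Fourier modes (3)-(10)

# Idealized gate action on (87) with the phases tuned away: state (88)
out = 0.5 * (psi[:, 0] + psi[:, 1] + psi[:, 2] + psi[:, 3])

rho = np.outer(out, out.conj()).reshape(2, 4, 2, 4)    # split qubit 1 | qubits 2,3
rho1 = np.trace(rho, axis1=1, axis2=3)                 # trace out qubits 2 and 3
print(np.real(np.trace(rho1 @ rho1)))                  # purity < 1 => entangled
```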
Let us emphasize that, by rotating the initially prepared state as
$\displaystyle\ket{\psi(0)}=\frac{1}{2}\ket{-_{1}}(e^{-i\beta_{0}}\ket{-_{2}-_{3}}+e^{-i\beta_{1}}\ket{-_{2}+_{3}}+e^{-i\beta_{2}}\ket{+_{2}-_{3}}+e^{-i\beta_{3}}\ket{+_{2}+_{3}})$
(89)
we end up with the same transformed entangled state (88). Therefore, the
fidelity of the creation of the entangled state is defined by
$\displaystyle
F(t)=\dfrac{1}{4}\left|\bra{\psi(t_{f})}\left(e^{-i\beta_{0}}\ket{\Lambda_{0}(t)}+e^{-i\beta_{1}}\ket{\Lambda_{1}(t)}+e^{-i\beta_{2}}\ket{\Lambda_{2}(t)}+e^{-i\beta_{3}}\ket{\Lambda_{3}(t)}\right)\right|^{2}.$
(90)
By adjusting the parameters $\omega^{\prime}$, $J_{01}$ and $J_{0}$, one can
achieve high fidelity in the creation of entangled states, as presented in
Figures 6-A, 6-B and 6-C.
Figure 6: (color online) The fidelity of the entangled state, calculated from
the numerical simulation of the Hamiltonian (B.2). (A): $\Upsilon_{0}=1.8$
kHz, $\Upsilon^{\prime}_{0}=1.7$ kHz, $J_{01}=2.1$ kHz, $J_{0}=2.3$ kHz and
gate time $t=0.31$ ms. (B): The same values as in (A) with
$\omega^{\prime}/2\pi=0.5$ kHz, varying the coupling strength $J_{01}$. (C):
The same values as in (A) with $\omega^{\prime}/2\pi=0.605$ kHz, varying the
coupling strength $J_{0}$.
## 6 Short-cut to adiabaticity
We now add an auxiliary interaction ${H}_{\sf CD}(t)$ (counter-driving field)
[38, 22] to the reference Hamiltonian ${H}^{(1)}(t)$ (B.2) in order to
suppress the non-adiabatic transitions and reduce the gate time. As a result,
the Hamiltonian takes the following form:
${H}_{\sf T}(t)={H}^{(1)}(t)+{H}_{\sf CD}(t)$ (91)
such that the interaction is
$\displaystyle{H}_{\sf
CD}(t)=i\hbar\sum_{i=0}^{7}\ket{\partial_{t}\Lambda_{i}(t)}\bra{\Lambda_{i}(t)}$
(92)
and (B.11-B.18) show the time-dependent eigenvectors $\ket{\Lambda_{i}(t)}$ of
${H}^{(1)}(t)$. After some algebra, we obtain
$\displaystyle{H}_{\sf
CD}(t)=-\partial_{t}\kappa(t)\begin{pmatrix}0&0&0&0&1&0&0&0\\\
0&0&0&0&0&1&0&0\\\ 0&0&0&0&0&0&0&0\\\ 0&0&0&0&0&0&0&0\\\ 1&0&0&0&0&0&0&0\\\
0&1&0&0&0&0&0&0\\\ 0&0&0&0&0&0&0&0\\\ 0&0&0&0&0&0&0&0\end{pmatrix}.$ (93)
Using the time-dependent coupling parameters (78-81) and the eigenvectors
(B.11-B.18), we explicitly determine the counter-driving-field
$\displaystyle{\partial_{t}\kappa(t)=\dfrac{1}{2}\left(\dfrac{\omega^{\prime}J_{01}(J_{01}+\Upsilon_{0})\sin(2\omega^{\prime}t)}{J_{01}^{2}\sin^{4}(\omega^{\prime}t)+[\Upsilon_{0}\sin^{2}(\omega^{\prime}t)-(J_{01}+\Upsilon_{0})]^{2}}+\dfrac{\omega^{\prime}J_{0}(J_{0}+\Upsilon_{0}^{\prime})\sin(2\omega^{\prime}t)}{J_{0}^{2}\sin^{4}(\omega^{\prime}t)+[\Upsilon_{0}^{\prime}\sin^{2}(\omega^{\prime}t)-(J_{0}+\Upsilon_{0}^{\prime})]^{2}}\right).}$
(94)
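The amplitude (94) is easy to evaluate; the following sketch (ours) implements
it and confirms that it vanishes at $t=0$, as required since the system starts
in the rotating computational spin states:

```python
import numpy as np

def dkappa_dt(t, w, J0, J01, U0, U0p):
    """Counter-driving amplitude (94); U0, U0p stand for Upsilon_0, Upsilon_0'."""
    s2, s4 = np.sin(w * t)**2, np.sin(w * t)**4
    term = lambda Jc, Uc: (w * Jc * (Jc + Uc) * np.sin(2 * w * t)
                           / (Jc**2 * s4 + (Uc * s2 - (Jc + Uc))**2))
    return 0.5 * (term(J01, U0) + term(J0, U0p))

w = 2 * np.pi * 0.3e3                       # omega' of Fig. 7 (assumed rad/s)
t = np.linspace(0.0, np.pi / (2 * w), 5)    # from 0 up to t_max
print(dkappa_dt(t, w, J0=1e3, J01=1.5e3, U0=0.5e3, U0p=2e3))   # first entry = 0
```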
In Figure 7, we show the shape of the counter-driving field (94) as a function
of time, varying $\omega^{\prime}$, $J_{01}$ and $J_{0}$. The counter-driving
term vanishes at $t=0$, because the system starts in the rotating
computational spin states; at $t_{max}$, the system ends up in the Fourier
modes.
Figure 7: (color online) Behavior of the counter-driving field with
$\Upsilon_{0}/2\pi=0.5$ kHz and $\Upsilon^{\prime}_{0}/2\pi=2$ kHz, while the
remaining parameters are chosen as $J_{01}=1.5$ kHz, $J_{0}=1$ kHz (blue
line); $J_{01}=1.9$ kHz, $J_{0}=1.3$ kHz (cyan line); $J_{01}=2$ kHz,
$J_{0}=1.7$ kHz (red line).
## 7 Conclusion
We generalized the work of Ivanov and Vitanov [20] on two-qubit quantum gates
and entanglement protected by circulant symmetry to a system of three-qubit
quantum gates. In fact, we have constructed a discrete system based on three
qubits immersed in a magnetic field. A special symmetry, called circulant, can
be obtained simply by adjusting the Rabi frequencies and the coupling
parameters characterizing our system. We have shown that the eigenvectors do
not depend on the magnitude of the physical parameters, which entails the
protection of entanglement. These eigenvectors lead to the quantum Fourier
transform (QFT) modes, which in turn allow the realization of a QFT gate. For
the gate implementation, the eigenfrequencies should be non-degenerate; to
this end, we have added a Hamiltonian $H_{0}(t)$ that breaks the circulant
symmetry and favors the adiabatic transition process.
Subsequently, instead of adding an energy offset, we have shown that it is
possible to control the transition by using only the Rabi frequencies.
Moreover, the gate scheme combined with the shortcut to adiabaticity has been
discussed, leading to the suppression of the non-adiabatic transitions and the
acceleration of the gate. Additionally, in the framework of trapped ions, we
have suggested a possible physical implementation of the constructed
Hamiltonian. Assuming a particular sinusoidal modulation, several fidelities
were discussed, and the results show that high fidelities can be achieved
simply by adjusting the physical parameters. The physical realization of a
three-qubit QFT is a key subroutine in several quantum algorithms; our
three-qubit gates can therefore significantly reduce the number of gates in a
quantum algorithm.
In this work, we have added the trilinear coupling, which is necessary to
retain the circulant symmetry in a three-qubit system. Indeed, a three-qubit
system with only bilinear interaction terms, as considered in [18], does not
lead to the desired circulant symmetry. Additionally, constructing a circulant
Hamiltonian based on $N$ qubits is not obvious: the Hamiltonian would demand
multilinear interaction terms that are not easily realizable. Moreover,
discussing the adiabatic transition and accelerating the gate by a shortcut to
adiabaticity would be hard.
## Appendix A Eigenfrequencies of ${H}(t)$ with energy offset
As for ${H}(t)$ (23) including the circulant Hamiltonian (16) and the energy
offset (22), we show that the corresponding eigenfrequencies are given by
$\displaystyle\lambda_{\pm}=\pm\sqrt{\left|-\frac{A}{4}-S+\frac{1}{2}\sqrt{\left|-4S^{2}-2p+\frac{q}{S}\right|}\right|}$
(A.1)
$\displaystyle\delta_{\pm}=\pm\sqrt{\left|-\frac{A}{4}-S-\frac{1}{2}\sqrt{\left|-4S^{2}-2p+\frac{q}{S}\right|}\right|}$
(A.2)
$\displaystyle\mu_{\pm}=\pm\sqrt{\left|-\frac{A}{4}+S+\frac{1}{2}\sqrt{\left|-4S^{2}-2p-\frac{q}{S}\right|}\right|}$
(A.3)
$\displaystyle\gamma_{\pm}=\pm\sqrt{\left|-\frac{A}{4}+S-\frac{1}{2}\sqrt{\left|-4S^{2}-2p-\frac{q}{S}\right|}\right|}$
(A.4)
where we have set
$\displaystyle p=\frac{1}{8}\left(8B-3A^{2}\right)$ (A.5)
$\displaystyle q=\frac{1}{8}\left(A^{3}-4AB+8C\right)$ (A.6)
$\displaystyle S=\frac{1}{2}\sqrt{\frac{1}{3}\left|-2p+\left(Q+\frac{\Delta_{0}}{Q}\right)\right|}$ (A.7)
$\displaystyle\Delta_{0}=B^{2}-3AC+12D$ (A.8)
$\displaystyle Q=\sqrt[3]{\frac{1}{2}\left|\Delta+\sqrt{\left|\Delta^{2}-4\Delta_{0}^{3}\right|}\right|}$ (A.9)
$\displaystyle\Delta=2B^{3}-9ABC+27C^{2}+27A^{2}D-72BD$ (A.10)
and the associated quantities are
$\displaystyle A=$
$\displaystyle-16J^{2}-4\Delta_{1}^{2}-4\Delta_{2}^{2}-4\Delta_{3}^{2}-8J_{1}^{2}$
$\displaystyle B=$ $\displaystyle
32{J}^{2}{\Delta_{{1}}}^{2}+32\,{J}^{2}{\Delta_{{2}}}^{2}+48{J}^{2}{\Delta_{{3}}}^{2}+128{J}^{2}{J_{{1}}}^{2}+6{\Delta_{{1}}}^{4}+4\,{\Delta_{{1}}}^{2}{\Delta_{{2}}}^{2}+4{\Delta_{{1}}}^{2}{\Delta_{{3}}}^{2}$
$\displaystyle+16{\Delta_{{1}}}^{2}{J_{{1}}}^{2}+6\,{\Delta_{{2}}}^{4}+4{\Delta_{{2}}}^{2}{\Delta_{{3}}}^{2}+24{\Delta_{{2}}}^{2}{J_{{1}}}^{2}+6{\Delta_{{3}}}^{4}+8{\Delta_{{3}}}^{2}{J_{{1}}}^{2}+16{J_{{1}}}^{4}$
$\displaystyle C=$
$\displaystyle-16{J}^{2}{\Delta_{{1}}}^{4}-32{J}^{2}{\Delta_{{1}}}^{2}{\Delta_{{2}}}^{2}-128{J}^{2}{\Delta_{{1}}}^{2}{J_{{1}}}^{2}-16{J}^{2}{\Delta_{{2}}}^{4}-128{J}^{2}{\Delta_{{2}}}^{2}{J_{{1}}}^{2}$
$\displaystyle-48{J}^{2}{\Delta_{{3}}}^{4}-256\,{J}^{2}{J_{{1}}}^{4}-4{\Delta_{{1}}}^{6}+4{\Delta_{{1}}}^{4}{\Delta_{{2}}}^{2}+4{\Delta_{{1}}}^{4}{\Delta_{{3}}}^{2}-8{\Delta_{{1}}}^{4}{J_{{1}}}^{2}+4{\Delta_{{1}}}^{2}{\Delta_{{2}}}^{4}$
$\displaystyle-40{\Delta_{{1}}}^{2}{\Delta_{{2}}}^{2}{\Delta_{{3}}}^{2}+4{\Delta_{{1}}}^{2}{\Delta_{{3}}}^{4}-32{\Delta_{{1}}}^{2}{\Delta_{{3}}}^{2}{J_{{1}}}^{2}-4\,{\Delta_{{2}}}^{6}+4{\Delta_{{2}}}^{4}{\Delta_{{3}}}^{2}-24\,{\Delta_{{2}}}^{4}{J_{{1}}}^{2}$
$\displaystyle+4{\Delta_{{2}}}^{2}{\Delta_{{3}}}^{4}+16\,{\Delta_{{2}}}^{2}{\Delta_{{3}}}^{2}{J_{{1}}}^{2}-32\,{\Delta_{{2}}}^{2}{J_{{1}}}^{4}-4\,{\Delta_{{3}}}^{6}+8\,{\Delta_{{3}}}^{4}{J_{{1}}}^{2}-32\,{\Delta_{{3}}}^{2}{J_{{1}}}^{4}$
$\displaystyle D=$ $\displaystyle
4\,{\Delta_{{1}}}^{4}{\Delta_{{2}}}^{2}{\Delta_{{3}}}^{2}+4\,{\Delta_{{1}}}^{2}{\Delta_{{2}}}^{4}{\Delta_{{3}}}^{2}+4\,{\Delta_{{1}}}^{2}{\Delta_{{2}}}^{2}{\Delta_{{3}}}^{4}-4\,{\Delta_{{1}}}^{6}{\Delta_{{2}}}^{2}-4\,{\Delta_{{1}}}^{6}{\Delta_{{3}}}^{2}+6\,{\Delta_{{1}}}^{4}{\Delta_{{2}}}^{4}$
$\displaystyle+6\,{\Delta_{{1}}}^{4}{\Delta_{{3}}}^{4}-4\,{\Delta_{{1}}}^{2}{\Delta_{{2}}}^{6}-4\,{\Delta_{{1}}}^{2}{\Delta_{{3}}}^{6}-4\,{\Delta_{{2}}}^{6}{\Delta_{{3}}}^{2}+6\,{\Delta_{{2}}}^{4}{\Delta_{{3}}}^{4}-4\,{\Delta_{{2}}}^{2}{\Delta_{{3}}}^{6}+16\,{\Delta_{{2}}}^{4}{J_{{1}}}^{4}$
$\displaystyle-8{\Delta_{{3}}}^{6}{J_{{1}}}^{2}+16\,{\Delta_{{3}}}^{4}{J_{{1}}}^{4}+16\,{J}^{2}{\Delta_{{3}}}^{6}+8\,{\Delta_{{2}}}^{6}{J_{{1}}}^{2}+16\,{\Delta_{{1}}}^{2}{\Delta_{{3}}}^{4}{J_{{1}}}^{2}-24\,{\Delta_{{2}}}^{4}{\Delta_{{3}}}^{2}{J_{{1}}}^{2}$
$\displaystyle+24\,{\Delta_{{2}}}^{2}{\Delta_{{3}}}^{4}{J_{{1}}}^{2}-32\,{\Delta_{{2}}}^{2}{\Delta_{{3}}}^{2}{J_{{1}}}^{4}+8{\Delta_{{1}}}^{4}{\Delta_{{2}}}^{2}{J_{{1}}}^{2}-8\,{\Delta_{{1}}}^{4}{\Delta_{{3}}}^{2}{J_{{1}}}^{2}-16\,{\Delta_{{1}}}^{2}{\Delta_{{2}}}^{4}{J_{{1}}}^{2}$
$\displaystyle-128\,{J}^{2}{\Delta_{{3}}}^{4}{J_{{1}}}^{2}+256\,{J}^{2}{\Delta_{{3}}}^{2}{J_{{1}}}^{4}+16\,{J}^{2}{\Delta_{{1}}}^{4}{\Delta_{{3}}}^{2}-32\,{J}^{2}{\Delta_{{1}}}^{2}{\Delta_{{3}}}^{4}+16\,{J}^{2}{\Delta_{{2}}}^{4}{\Delta_{{3}}}^{2}$
$\displaystyle-32\,{J}^{2}{\Delta_{{2}}}^{2}{\Delta_{{3}}}^{4}+32\,{J}^{2}{\Delta_{{1}}}^{2}{\Delta_{{2}}}^{2}{\Delta_{{3}}}^{2}+128\,{J}^{2}{\Delta_{{1}}}^{2}{\Delta_{{3}}}^{2}{J_{{1}}}^{2}+128\,{J}^{2}{\Delta_{{2}}}^{2}{\Delta_{{3}}}^{2}{J_{{1}}}^{2}+{\Delta_{{1}}}^{8}+{\Delta_{{2}}}^{8}+{\Delta_{{3}}}^{8}.$
## Appendix B Energy spectrum of $H^{(1)}(t)$
The Hamiltonian $H^{(1)}(t)$ with time-dependent Rabi frequencies (i.e., when
the conditions $J_{1}=\Omega_{2}(t)$ and $J=\Omega_{3}(t)$ are not necessarily
satisfied) has the following form:
$\displaystyle{H}^{(1)}(t)=$ $\displaystyle
J_{1}(\sigma_{1}^{+}+\sigma_{1}^{-})(\sigma_{2}^{+}e^{-i\varphi}+\sigma_{2}^{-}e^{i\varphi})+\Omega_{3}(t)(\sigma_{2}^{+}+\sigma_{2}^{-})(\sigma_{3}^{+}e^{-i\varphi}+\sigma_{3}^{-}e^{i\varphi})$
$\displaystyle+\Omega_{3}(t)(\sigma_{1}^{+}+\sigma_{1}^{-})(\sigma_{3}^{+}e^{i\varphi}+\sigma_{3}^{-}e^{-i\varphi})+\Omega_{2}(t)(\sigma_{2}^{+}e^{i\varphi}+\sigma_{2}^{-}e^{-i\varphi})$
(B.1)
$\displaystyle+\Omega_{3}(t)(\sigma_{3}^{+}e^{i\varphi}+\sigma_{3}^{-}e^{-i\varphi})+J(\sigma_{1}^{+}+\sigma_{1}^{-})(\sigma_{2}^{+}+\sigma_{2}^{-})(\sigma_{3}^{+}e^{-i\varphi}+\sigma_{3}^{-}e^{i\varphi})$
and we have in the matrix notation
$\displaystyle{H}^{(1)}(t)=$ (B.2)
$\displaystyle\begin{pmatrix}0&\Omega_{3}(t)e^{i\varphi}&\Omega_{2}(t)e^{i\varphi}&\Omega_{3}(t)e^{-i\varphi}&0&\Omega_{3}(t)e^{i\varphi}&J_{1}e^{-i\varphi}&Je^{-i\varphi}\\\
\Omega_{3}(t)e^{-i\varphi}&0&\Omega_{3}(t)e^{i\varphi}&\Omega_{2}(t)e^{i\varphi}&\Omega_{3}(t)e^{-i\varphi}&0&Je^{i\varphi}&J_{1}e^{-i\varphi}\\\
\Omega_{2}(t)e^{-i\varphi}&\Omega_{3}(t)e^{-i\varphi}&0&\Omega_{3}(t)e^{i\varphi}&J_{1}e^{i\varphi}&Je^{-i\varphi}&0&\Omega_{3}(t)e^{i\varphi}\\\
\Omega_{3}(t)e^{i\varphi}&\Omega_{2}(t)e^{-i\varphi}&\Omega_{3}(t)e^{-i\varphi}&0&Je^{i\varphi}&J_{1}e^{i\varphi}&\Omega_{3}(t)e^{-i\varphi}&0\\\
0&\Omega_{3}(t)e^{i\varphi}&J_{1}e^{-i\varphi}&Je^{-i\varphi}&0&\Omega_{3}(t)e^{i\varphi}&\Omega_{2}(t)e^{i\varphi}&\Omega_{3}(t)e^{-i\varphi}\\\
\Omega_{3}(t)e^{-i\varphi}&0&Je^{i\varphi}&J_{1}e^{-i\varphi}&Je^{-i\varphi}&0&\Omega_{3}(t)e^{i\varphi}&\Omega_{2}(t)e^{i\varphi}\\\
J_{1}e^{i\varphi}&Je^{-i\varphi}&0&\Omega_{3}(t)e^{i\varphi}&\Omega_{2}(t)e^{-i\varphi}&\Omega_{3}(t)e^{-i\varphi}&0&\Omega_{3}(t)e^{i\varphi}\\\
Je^{i\varphi}&J_{1}e^{i\varphi}&\Omega_{3}(t)e^{-i\varphi}&0&\Omega_{3}(t)e^{i\varphi}&\Omega_{2}(t)e^{-i\varphi}&\Omega_{3}(t)e^{-i\varphi}&0\end{pmatrix}.$
We choose $\varphi=\dfrac{\pi}{4}$ for simplicity and show that the
time-dependent eigenfrequencies of ${H}^{(1)}(t)$ (B.2) are given by
$\displaystyle\Lambda_{0}(t)=\sqrt{(\Omega_{3}-J)^{2}+J_{1}^{2}+\Omega_{2}^{2}+\sqrt{2(\Omega_{3}-J)^{2}(J_{1}-\Omega_{2})^{2}}}$
(B.3)
$\displaystyle\Lambda_{1}(t)=-\sqrt{(\Omega_{3}-J)^{2}+J_{1}^{2}+\Omega_{2}^{2}+\sqrt{2(\Omega_{3}-J)^{2}(J_{1}-\Omega_{2})^{2}}}$
(B.4)
$\displaystyle\Lambda_{2}(t)=\sqrt{(\Omega_{3}-J)^{2}+J_{1}^{2}+\Omega_{2}^{2}-\sqrt{2(\Omega_{3}-J)^{2}(J_{1}-\Omega_{2})^{2}}}$
(B.5)
$\displaystyle\Lambda_{3}(t)=-\sqrt{(\Omega_{3}-J)^{2}+J_{1}^{2}+\Omega_{2}^{2}-\sqrt{2(\Omega_{3}-J)^{2}(J_{1}-\Omega_{2})^{2}}}$
(B.6)
$\displaystyle\Lambda_{4}(t)=\sqrt{5\Omega_{3}^{2}+2J\Omega_{3}+J^{2}+J_{1}^{2}+\Omega_{2}^{2}+\sqrt{2\Omega_{3}^{2}(9\Omega_{2}^{2}+2J_{1}\Omega_{2}+9J_{1}^{2})+2J(J_{1}+\Omega_{2})^{2}(2\Omega_{3}+J)}}$
(B.7)
$\displaystyle\Lambda_{5}(t)=-\sqrt{5\Omega_{3}^{2}+2J\Omega_{3}+J^{2}+J_{1}^{2}+\Omega_{2}^{2}+\sqrt{2\Omega_{3}^{2}(9\Omega_{2}^{2}+2J_{1}\Omega_{2}+9J_{1}^{2})+2J(J_{1}+\Omega_{2})^{2}(2\Omega_{3}+J)}}$
(B.8)
$\displaystyle\Lambda_{6}(t)=\sqrt{5\Omega_{3}^{2}+2J\Omega_{3}+J^{2}+J_{1}^{2}+\Omega_{2}^{2}-\sqrt{2\Omega_{3}^{2}(9\Omega_{2}^{2}+2J_{1}\Omega_{2}+9J_{1}^{2})+2J(J_{1}+\Omega_{2})^{2}(2\Omega_{3}+J)}}$
(B.9)
$\displaystyle\Lambda_{7}(t)=-\sqrt{5\Omega_{3}^{2}+2J\Omega_{3}+J^{2}+J_{1}^{2}+\Omega_{2}^{2}-\sqrt{2\Omega_{3}^{2}(9\Omega_{2}^{2}+2J_{1}\Omega_{2}+9J_{1}^{2})+2J(J_{1}+\Omega_{2})^{2}(2\Omega_{3}+J)}}$
(B.10)
and the corresponding eigenvectors can be expressed as
$\displaystyle\ket{\Lambda_{0}(t)}=\dfrac{1}{2\sqrt{2}}$ (B.11)
$\displaystyle\left(e^{-i\alpha(t)}\ket{\downarrow\downarrow\downarrow}+e^{-i\alpha(t)}\ket{\downarrow\downarrow\uparrow}+\ket{\downarrow\uparrow\downarrow}+\ket{\downarrow\uparrow\uparrow}+e^{-i\alpha(t)}\ket{\uparrow\downarrow\downarrow}+e^{-i\alpha(t)}\ket{\uparrow\downarrow\uparrow}+\ket{\uparrow\uparrow\downarrow}+\ket{\uparrow\uparrow\uparrow}\right)$
$\displaystyle\ket{\Lambda_{1}(t)}=\dfrac{1}{2\sqrt{2}}$ (B.12)
$\displaystyle\left(e^{i\alpha(t)}\ket{\downarrow\downarrow\downarrow}+\omega
e^{i\alpha(t)}\ket{\downarrow\downarrow\uparrow}+i\ket{\downarrow\uparrow\downarrow}+i\omega\ket{\downarrow\uparrow\uparrow}-e^{i\alpha(t)}\ket{\uparrow\downarrow\downarrow}-\omega
e^{i\alpha(t)}\ket{\uparrow\downarrow\uparrow}-i\ket{\uparrow\uparrow\downarrow}-i\omega\ket{\uparrow\uparrow\uparrow}\right)$
$\displaystyle\ket{\Lambda_{2}(t)}=\dfrac{1}{2\sqrt{2}}$ (B.13)
$\displaystyle\left(e^{-i\alpha(t)}\ket{\downarrow\downarrow\downarrow}+ie^{-i\alpha(t)}\ket{\downarrow\downarrow\uparrow}-\ket{\downarrow\uparrow\downarrow}-i\ket{\downarrow\uparrow\uparrow}+e^{-i\alpha(t)}\ket{\uparrow\downarrow\downarrow}+ie^{-i\alpha(t)}\ket{\uparrow\downarrow\uparrow}-\ket{\uparrow\uparrow\downarrow}-i\ket{\uparrow\uparrow\uparrow}\right)$
$\displaystyle\ket{\Lambda_{3}(t)}=\dfrac{1}{2\sqrt{2}}$ (B.14)
$\displaystyle\left(e^{i\alpha(t)}\ket{\downarrow\downarrow\downarrow}+i\omega
e^{i\alpha(t)}\ket{\downarrow\downarrow\uparrow}-i\ket{\downarrow\uparrow\downarrow}+\omega\ket{\downarrow\uparrow\uparrow}-e^{i\alpha(t)}\ket{\uparrow\downarrow\downarrow}-i\omega
e^{i\alpha(t)}\ket{\uparrow\downarrow\uparrow}+i\ket{\uparrow\uparrow\downarrow}-\omega\ket{\uparrow\uparrow\uparrow}\right)$
$\displaystyle\ket{\Lambda_{4}(t)}=\dfrac{1}{2\sqrt{2}}$ (B.15)
$\displaystyle\left(e^{-i\alpha(t)}\ket{\downarrow\downarrow\downarrow}-e^{-i\alpha(t)}\ket{\downarrow\downarrow\uparrow}+\ket{\downarrow\uparrow\downarrow}-\ket{\downarrow\uparrow\uparrow}+e^{-i\alpha(t)}\ket{\uparrow\downarrow\downarrow}-e^{-i\alpha(t)}\ket{\uparrow\downarrow\uparrow}+\ket{\uparrow\uparrow\downarrow}-\ket{\uparrow\uparrow\uparrow}\right)$
$\displaystyle\ket{\Lambda_{5}(t)}=\dfrac{1}{2\sqrt{2}}$ (B.16)
$\displaystyle\left(e^{i\alpha(t)}\ket{\downarrow\downarrow\downarrow}-\omega
e^{i\alpha(t)}\ket{\downarrow\downarrow\uparrow}+i\ket{\downarrow\uparrow\downarrow}-i\omega\ket{\downarrow\uparrow\uparrow}-e^{i\alpha(t)}\ket{\uparrow\downarrow\downarrow}+\omega
e^{i\alpha(t)}\ket{\uparrow\downarrow\uparrow}-i\ket{\uparrow\uparrow\downarrow}+i\omega\ket{\uparrow\uparrow\uparrow}\right)$
$\displaystyle\ket{\Lambda_{6}(t)}=\dfrac{1}{2\sqrt{2}}$ (B.17)
$\displaystyle\left(e^{-i\alpha(t)}\ket{\downarrow\downarrow\downarrow}-ie^{-i\alpha(t)}\ket{\downarrow\downarrow\uparrow}-\ket{\downarrow\uparrow\downarrow}+i\ket{\downarrow\uparrow\uparrow}+e^{-i\alpha(t)}\ket{\uparrow\downarrow\downarrow}-ie^{-i\alpha(t)}\ket{\uparrow\downarrow\uparrow}-\ket{\uparrow\uparrow\downarrow}+i\ket{\uparrow\uparrow\uparrow}\right)$
$\displaystyle\ket{\Lambda_{7}(t)}=\dfrac{1}{2\sqrt{2}}$ (B.18)
$\displaystyle\left(e^{i\alpha(t)}\ket{\downarrow\downarrow\downarrow}-i\omega
e^{i\alpha(t)}\ket{\downarrow\downarrow\uparrow}-i\ket{\downarrow\uparrow\downarrow}-\omega\ket{\downarrow\uparrow\uparrow}-e^{i\alpha(t)}\ket{\uparrow\downarrow\downarrow}+i\omega
e^{i\alpha(t)}\ket{\uparrow\downarrow\uparrow}+i\ket{\uparrow\uparrow\downarrow}+\omega\ket{\uparrow\uparrow\uparrow}\right)$
where we have defined
$\displaystyle\alpha(t)=\frac{\pi}{4}-\kappa(t)$ (B.19)
$\displaystyle\tan[\kappa(t)]=\frac{\Omega_{2}(t)}{2J_{1}}+\frac{\Omega_{3}(t)}{2J}.$
(B.20)
# Provable Benefit of Adaptivity in Adam
###### Abstract
Adam is widely adopted in practical applications due to its fast convergence.
However, its theoretical analysis is still far from satisfactory. Existing
convergence analyses for Adam rely on the bounded smoothness assumption,
referred to as the _L-smooth condition_. Unfortunately, this assumption does
not hold for many deep learning tasks. Moreover, we believe that this
assumption obscures the true benefit of Adam, as the algorithm can adapt its
update magnitude according to local smoothness. This important feature of Adam
becomes irrelevant when assuming globally bounded smoothness. This paper
studies the convergence of randomly reshuffled Adam (RR Adam) with diminishing
learning rate, which is the major version of Adam adopted in deep learning
tasks. We present the first convergence analysis of RR Adam without the
bounded smoothness assumption. We demonstrate that RR Adam can maintain its
convergence properties when smoothness is linearly bounded by the gradient
norm, referred to as the _$(L_{0},L_{1})$ -smooth condition_. We further
compare Adam to SGD when both methods use diminishing learning rate. We refine
the existing lower bound of SGD and show that SGD can be slower than Adam. To
our knowledge, this is the first time that Adam and SGD are rigorously
compared in the same setting and the advantage of Adam is revealed.
## 1 Introduction
Machine learning tasks are often formulated as solving the following finite-
sum problem:
$\min_{\bm{w}\in\mathbb{R}^{d}}f(\bm{w})=\frac{1}{n}\sum_{i=0}^{n-1}f_{i}(\bm{w}),$
(1)
where $n$ denotes the number of samples or mini-batches, and $\bm{w}$ denotes
the trainable parameters. Recently, it has been noted that adaptive gradient
methods, including Adaptive Moment estimation (Adam) (kingma2014adam), are widely used
to train modern deep neural networks, including GANs (brock2018large), BERTs
(kenton2019bert), GPTs (brown2020language), and ViTs (dosovitskiy2020image). It
is often observed that Adam converges considerably faster than vanilla
Stochastic Gradient Descent (SGD) for the training of Transformers, as seen in
Figure 1(a). Similar phenomena are also reported in (zhang2024transformers).
Despite its practical success, the theoretical analysis of Adam is less than
satisfactory. Existing analyses rely on the bounded smoothness assumption, i.e.,
that the Lipschitz coefficient of the gradient (or the spectral norm of the
Hessian) is globally upper bounded by a constant $L$, referred to as the _$L$
-smooth condition_. However, recent studies show that the $L$-smooth condition
does not hold in practical deep learning tasks such as LSTMs (zhang2019gradient)
and Transformers (crawshaw2022robustness).
Moreover, such an assumption hides the benefit of Adam. Intuitively, Adam can
overcome the issue of unbounded smoothness using an adaptive learning rate.
First, Adam uses the reciprocal of the square root of the exponential moving
average of past squared gradients as an effective learning rate (see Algorithm
1 for the update rule). Thus, the effective learning rate is adapted to the
local gradient norm. Second, there is a strong correlation between the
Lipschitz coefficient and the gradient norm of deep neural networks
(zhang2019gradient; cohen2021gradient; crawshaw2022robustness). As a result,
Adam can adapt its update magnitude to the local Lipschitz coefficient and is
empirically observed to converge fast (Figure 1(a) and (zhang2019gradient)).
Unfortunately, this benefit is hidden because existing theories of Adam are
built upon the $L$-smooth condition.
To reveal the theoretical benefit of Adam, we analyze its convergence under a
relaxed smoothness condition called $(L_{0},L_{1})$-smooth condition
(zhang2019gradient):
$\|\nabla^{2}f_{i}(\bm{w})\|\leq L_{0}+L_{1}\|\nabla f_{i}(\bm{w})\|.$ (2)
When $L_{1}=0$, Eq. (2) degenerates into the classical $L$-smooth condition.
The $(L_{0},L_{1})$-smooth condition allows the spectral norm of the Hessian
(the Lipschitz coefficient of the gradient) to grow linearly with the gradient
norm at $\bm{w}$, so it is a relaxed version of the $L$-smooth condition. The
$(L_{0},L_{1})$-smooth condition has been empirically observed to hold for
LSTMs (zhang2019gradient; zhang2020improved) and Transformers (Figure 1(b) and
(crawshaw2022robustness)).
Figure 1: Experiments on the WMT 2014 dataset trained with the Transformer.
(a) The training loss of SGD and Adam. (b) The gradient norm vs. the local
smoothness along the training trajectory. The blue line in (b) corresponds to
$\log(\text{local smoothness})=\log(\text{gradient norm})+1.4$, i.e., a
$(0,e^{1.4})$-smooth condition holds in this task. Similar results can be seen
in (zhang2019gradient).
Our Contribution: Under the $(L_{0},L_{1})$-smooth condition, we establish the
convergence of randomly-reshuffled Adam. Specifically, our contributions are
summarized as follows.
* •
We establish the first convergence result of Adam without “$L$-smoothness”. We
prove that Adam converges under the $(L_{0},L_{1})$-smooth condition.
* •
Our convergence result enjoys several good properties. First, there is no need
for the bounded gradient assumption (i.e., $\|\nabla f(\bm{w})\|\leq C$).
Eliminating this assumption is essential since the $(L_{0},L_{1})$-smooth
condition would otherwise degenerate to the $L$-smooth condition. Second, our
result does not rely on other assumptions such as a bounded adaptor or a large
regularizer for numerical stability. Lastly, the convergence holds for every
possible trajectory, which is not only technically demanding but also much
stronger than “convergence in expectation”.
* •
We further compare Adam to SGD when both methods use diminishing learning
rate. We present an improved lower bound for (S)GD under the
$(L_{0},L_{1})$-smooth condition. In this lower bound, there is a factor
related to the gradient norm of the initial point, which does not exist in the
upper bound of Adam. This indicates that (S)GD can converge slowly under the
$(L_{0},L_{1})$-smooth condition, showing the advantage of Adam over (S)GD. To
our knowledge, this is the first time that Adam and SGD are rigorously
compared in the same setting where the advantage of Adam can be revealed. We
believe these results shed new light on understanding the benefit of Adam.
Organization of this paper. The rest of this paper is organized as follows: In
Section 2, we review related works on the convergence analysis of Adam, the
relaxed smoothness assumption, and the variants of Adam. In Section 3, we
define notations, present the pseudocode of Adam, and provide the assumptions
that our result rests on. In Section 4, we provide our main result on the
convergence of RR Adam under non-uniform smoothness, together with
explanations regarding the result. In Section 5, we state the proof ideas of
the main result. In Section 6, we compare Adam with SGD. In Section 7, we
discuss intuitions for why non-adaptive optimizers can be used for fine-tuning
tasks, the comparison of Adam and clipped SGD, insights for practitioners, and
limitations of Theorem 1.
## 2 Related works
Convergence analysis for Adam. Adam was first proposed in (kingma2015adam)
together with a convergence proof. However, the proof was later shown to be
flawed by (reddi2018convergence), who further provide simple counterexamples
on which Adam diverges. This discovery caused the convergence analysis of Adam
to stagnate for a while and motivated a series of works developing variants of
Adam without the divergence issue (see the discussion later in this section).
On the other hand, vanilla Adam works well in practice and divergence is not
empirically observed. This phenomenon motivates researchers to rethink the
counterexamples. The counterexamples state that “for every
$\beta_{1}<\sqrt{\beta_{2}}$, there exists a problem on which Adam diverges”.
That is to say, the divergence statement requires picking
$(\beta_{1},\beta_{2})$ before fixing the problem, while in practice, the
algorithmic parameters are often picked according to the problem. Based on
this observation, a recent work (Zhang2022Adam) proves that Adam can converge
with $(\beta_{1},\beta_{2})$ picked after the problem is given.
We categorize the existing results on Adam into two classes based on the
sampling strategy: with-replacement sampling (a.k.a. i.i.d. sampling,
abbreviated as “WR”) and RR Adam. We believe both sampling strategies are
worth studying: WR is more favored among the theory community due to its
simple form, whereas RR is widely used among practitioners because it is easy
to implement. Further, RR is guaranteed to pass over each data point at least
once and brings good performance (bottou2009curiously; bottou2012stochastic).
The first line of work analyzes WR Adam. For instance, (zaheer2018adaptive)
show that WR RMSProp (a simplified version of Adam with $\beta_{1}=0$)
converges to a neighborhood of the stationary points. (de2018convergence)
prove the convergence of WR RMSProp by assuming that the signs of the
gradients remain the same along the trajectory; however, this condition is not
guaranteed to hold in practice. (defossez2020simple) prove the convergence of
WR Adam with $\beta_{1}<\beta_{2}$. However, their convergence bound is
inversely proportional to $\xi$, the hyperparameter for numerical stability.
Consequently, their bound becomes vacuous as $\xi$ approaches zero. This does
not match practical observations, because small values of $\xi$, like
$10^{-8}$, often yield satisfactory performance. Moreover, employing large
values of $\xi$ obscures the effect of $\sqrt{v_{k}}$, and the proof is then
largely reduced to that of SGD. (huang2021super; guo2021novel) provide simple
convergence proofs for WR Adam with $\beta_{1}$ close to $1$. However, their
results require $\sqrt{v_{k}}$ to be bounded in a certain interval
$[C_{l},C_{u}]$, which effectively changes Adam into AdaBound
(luo2019adaptive). In summary, all the above works require certain strong
conditions, such as a bounded $\sqrt{v_{k}}$ or a large $\xi$. Further, they
all require the bounded gradient ($\|\nabla f(x)\|\leq C$) and bounded
smoothness ($L$-smooth) conditions.
Our analysis falls into the second line of works, which focuses on RR Adam.
(shi2021rmsprop) prove the trajectory-wise convergence of RR RMSProp, and
(Zhang2022Adam) prove the in-expectation convergence of RR Adam. However,
both works require the $L$-smooth condition. Our analysis follows this line
of works and provides the first convergence result for RR Adam under a relaxed
smoothness condition.
Relaxed smoothness assumption. There have been several attempts at relaxing
the $L$-smooth condition. (zhang2019gradient) propose the
$(L_{0},L_{1})$-smooth condition to theoretically explain the acceleration
effect of clipped SGD over SGD. Similar results have been extended to clipped
SGD with momentum (zhang2020improved), distributionally-robust optimization
(jin2021non), differentially-private SGD (yang2022normalized), and generalized
SignSGD (crawshaw2022robustness). However, these works do not theoretically
analyze Adam in this setting. Considering the great empirical impact of Adam,
we believe it is important to study Adam in its original form.
One concurrent work (li2023convergence) studies the convergence of WR Adam
under the $(L_{0},L_{1})$-smooth condition by cleverly constructing certain
stopping times. They also propose a variance-reduced variant with a better
convergence rate. However, their bound for Adam not only assumes the noise is
deterministically bounded, but also has polynomial dependence on $1/\xi$
(the hyperparameter for numerical stability). Similarly to
(de2018convergence), this result does not match practical observations, since
Adam performs well even when $\xi$ is as small as $10^{-8}$.
Variants of Adam. Ever since the counterexamples to the convergence of Adam
raised by (reddi2018convergence), many new variants of Adam have been
designed. For instance, (zou2019sufficient; gadat2020asymptotic;
chen2018convergence; chen2021towards) replace the constant hyperparameters by
iterate-dependent ones, e.g., $\beta_{1t}$ or $\beta_{2t}$. AMSGrad
(reddi2019convergence) and AdaFom (chen2018convergence) enforce $\\{v_{t}\\}$
to be non-decreasing. AdaBound (luo2019adaptive) imposes the constraint
$v_{t}\in[C_{l},C_{u}]$ to prevent the learning rate from vanishing or
exploding. Similarly, (zhou2018adashift) adopts a new estimate of $v_{t}$ to
correct the bias. In addition, there are attempts to combine Adam with
Nesterov momentum (dozat2016incorporating) as well as warm-up techniques
(Liu2020On). There are also works providing theoretical analyses of variants
of Adam. For instance, (zhou2018convergence) study the convergence of AdaGrad
and AMSGrad. (gadat2020asymptotic) study the asymptotic behavior of a subclass
of adaptive gradient methods from a landscape point of view; their analysis
applies to RMSProp variants with iterate-dependent $\beta_{2t}$. In summary,
all these works study variants of Adam, which differs from our work since we
focus on vanilla Adam.
## 3 Preliminaries
This section introduces notations, definitions, and assumptions that are used
throughout this work.
Notations. We list the notations that are used in the formal definition of the
randomly-shuffled Adam and its convergence analysis.
* •
(Vector) We define $\bm{a}\odot\bm{b}$ as the Hadamard product (i.e.,
component-wise product) between two vectors $\bm{a}$ and $\bm{b}$ with the
same dimension. We also define $\langle\bm{a},\bm{b}\rangle$ as the $\ell^{2}$
inner product between $\bm{a}$ and $\bm{b}$. We define $\mathds{1}_{d}$ as an
all-one vector with dimension $d$.
* •
(Array) We define $[m_{1},m_{2}]\triangleq\\{m_{1},\cdots,m_{2}\\}$, $\forall
m_{1},m_{2}\in\mathbb{N},m_{1}\leq m_{2}$. Specifically, we use $[m]$
$\triangleq\\{1,\cdots,m\\}$.
* •
(Asymptotic notation) We define $A_{1}(x)=\mathcal{O}_{x\rightarrow
a}(A_{2}(x))$ if $\left|\frac{A_{1}(x)}{A_{2}(x)}\right|$ is bounded when
$x\rightarrow a$. We define $A_{2}(x)=\Omega_{x\rightarrow a}(A_{1}(x))$
when $A_{1}(x)=\mathcal{O}_{x\rightarrow a}(A_{2}(x))$. We use
$\tilde{\mathcal{O}}$ to denote $\mathcal{O}$ with logarithmic factors hidden,
i.e., $A_{1}(x)=\tilde{\mathcal{O}}_{x\rightarrow a}(A_{2}(x))$ if
$A_{1}(x)=\mathcal{O}_{x\rightarrow a}(A_{2}(x)\log|A_{2}(x)|)$. When the
context is clear, we hide “$x\rightarrow a$” and only use
$\mathcal{O},\Omega,\tilde{\mathcal{O}}$.
Pseudocode. To facilitate the analysis, we provide the pseudocode of Adam in
Algorithm 1.
Algorithm 1 Randomly reshuffled Adam (RR-Adam)
Input: Objective function
$f(\bm{w}):=\frac{1}{n}\sum_{i=0}^{n-1}f_{i}(\bm{w})$, learning rate series
$\\{\eta_{k}\\}_{k=1}^{T}$ and hyperparameters
$(\beta_{1},\beta_{2})\in[0,1)^{2}$. Initialize the parameter
$\bm{w}_{1,0}\in\mathbb{R}^{d}$, the conditioner
$\bm{\nu}_{1,-1}\in\mathbb{R}^{d,\geq 0}$, and the momentum
$\bm{m}_{1,-1}\in\mathbb{R}^{d}$.
for $k=1$ to $T$ do
Randomly shuffle $[0,n-1]$ to get $\\{\tau_{k,j}\\}_{j=0}^{n-1}$
for $i=0$ to $n-1$ do
Calculate $g_{k,i}=\nabla f_{\tau_{k,i}}(\bm{w}_{k,i})$
Update $\bm{\nu}_{k,i}=\beta_{2}\bm{\nu}_{k,i-1}+(1-\beta_{2})g_{k,i}^{\odot
2}$,
Update $\bm{m}_{k,i}=\beta_{1}\bm{m}_{k,i-1}+(1-\beta_{1})g_{k,i}$
Update
$\bm{w}_{k,i+1}=\bm{w}_{k,i}-\eta_{k}\frac{1}{\sqrt{\bm{\nu}_{k,i}}+\xi}\odot\bm{m}_{k,i}$
end for
Update $\bm{\nu}_{k+1,-1}=\bm{\nu}_{k,n-1}$, $\bm{m}_{k+1,-1}=\bm{m}_{k,n-1}$,
$\bm{w}_{k+1,0}=\bm{w}_{k,n}$
end for
$\bm{m}_{k,i}$ and $\bm{\nu}_{k,i}$ are weighted averages with hyperparameters
$\beta_{1}\in[0,1)$ and $\beta_{2}\in[0,1)$, respectively. $\xi$ is adopted
for numerical stability and is often chosen to be $10^{-8}$ in practice. In
our theory, we allow $\xi$ to be an arbitrary non-negative constant, including
$0$.
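To make the update rule concrete, the following is a minimal NumPy sketch of Algorithm 1. The function name and the toy interface (a list of per-component gradient oracles) are our own; as in Algorithm 1, no bias correction is applied.

```python
import numpy as np

def rr_adam(grad_fns, w0, eta1, beta1=0.9, beta2=0.999, xi=1e-8, epochs=100):
    """Minimal sketch of RR-Adam (Algorithm 1); interface is hypothetical.

    grad_fns: list of n callables, grad_fns[i](w) = grad f_i(w).
    Uses the diminishing learning rate eta_k = eta1 / sqrt(k) of Theorem 1.
    """
    n, w = len(grad_fns), w0.copy()
    m = np.zeros_like(w)   # momentum m_{1,-1}
    nu = np.zeros_like(w)  # conditioner nu_{1,-1}
    rng = np.random.default_rng(0)
    for k in range(1, epochs + 1):
        eta_k = eta1 / np.sqrt(k)
        for i in rng.permutation(n):              # reshuffle [0, n-1]
            g = grad_fns[i](w)
            nu = beta2 * nu + (1 - beta2) * g**2  # second-moment estimate
            m = beta1 * m + (1 - beta1) * g       # first-moment estimate
            w = w - eta_k * m / (np.sqrt(nu) + xi)
    return w
```

For example, `rr_adam([lambda w: w - 1.0], np.zeros(2), eta1=0.5)` drives the iterate toward the minimizer of $\frac{1}{2}\|\bm{w}-\mathds{1}\|^{2}$.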
Algorithm 1 follows a without-replacement sampling strategy (also known as
shuffling), which is the default strategy used in CV, NLP, GANs, etc. However,
the shuffling strategy is not necessarily easy to analyze, because the
stochastic gradients sampled by random shuffling lack statistical
unbiasedness, i.e., $\mathbb{E}\left[\nabla
f_{\tau_{k,i}}(\bm{w}_{k,i})\,|\,\bm{w}_{k,i}\right]\neq\nabla
f(\bm{w}_{k,i})$. This bias requires a substantially different analysis from
the with-replacement counterpart. Even for SGD, the analysis of shuffling is
often known to be “more challenging” (tran2021smg; mishchenko2020random).
However, we choose to study this version as it is closer to practice.
Assumptions. Here we state the assumptions that our result will rest on. The
first one is the $(L_{0},L_{1})$-smooth condition introduced in Section 1.
###### Assumption 1 ($(L_{0},L_{1})$-smooth condition).
We assume that $f_{i}(\bm{w})$ is lower bounded by $0$, and $f_{i}(\bm{w})$
satisfies $(L_{0},L_{1})$-smooth condition, i.e., there exist positive
constants ($L_{0}$, $L_{1}$), such that,
$\forall\bm{w}_{1},\bm{w}_{2}\in\mathbb{R}^{d}$ satisfying
$\|\bm{w}_{1}-\bm{w}_{2}\|\leq\frac{1}{L_{1}}$,
$\|\nabla f_{i}(\bm{w}_{1})-\nabla f_{i}(\bm{w}_{2})\|\leq(L_{0}+L_{1}\|\nabla
f_{i}(\bm{w}_{1})\|)\|\bm{w}_{1}-\bm{w}_{2}\|.$ (3)
Eq. (3) was first introduced by (zhang2020improved), and is the weakest
version of the $(L_{0},L_{1})$-smooth condition to our best knowledge, since
it does not require $f_{i}(\bm{w})$ to be twice differentiable. When
$f_{i}(\bm{w})$ is twice differentiable, Eq. (3) is equivalent to Eq. (2)
(zhang2020improved). The $(L_{0},L_{1})$-smooth condition generalizes the
$L$-smooth condition (i.e., the $(L_{0},L_{1})$-smooth condition with
$L_{0}=L$ and $L_{1}=0$) in the classical non-convex optimization literature
(ghadimi2013stochastic; liu2020improved) and allows the smoothness to be
unbounded globally.
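As a rough empirical diagnostic of Assumption 1, in the spirit of Figure 1(b), one can probe local gradient-Lipschitz constants along a trajectory and pair them with gradient norms. The sketch below is our own illustration with hypothetical names; it provides evidence for, not a proof of, the condition.

```python
import numpy as np

def smoothness_vs_grad(grad, iterates, delta=1e-3, seed=0):
    """For each iterate w, estimate the local gradient-Lipschitz constant
    ||grad(w + d) - grad(w)|| / ||d|| along a random direction d of norm
    delta, paired with ||grad(w)||. An approximately linear relation
    between the two columns supports the (L0, L1)-smooth condition."""
    rng = np.random.default_rng(seed)
    pairs = []
    for w in iterates:
        d = rng.normal(size=w.shape)
        d *= delta / np.linalg.norm(d)
        lip = np.linalg.norm(grad(w + d) - grad(w)) / delta
        pairs.append((np.linalg.norm(grad(w)), lip))
    return np.array(pairs)  # columns: gradient norm, local smoothness
```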
###### Assumption 2 (Affine Noise Variance).
$\forall\bm{w}\in\mathbb{R}^{d}$, the gradients of
$\\{f_{i}(\bm{w})\\}_{i=0}^{n-1}$ have the following connection with the
gradient of $f(\bm{w})$:
$\frac{1}{n}\sum_{i=0}^{n-1}\left\|\nabla f_{i}(\bm{w})\right\|^{2}\leq
D_{1}\|\nabla f(\bm{w})\|^{2}+D_{0}.$
Assumption 2 is one of the weakest assumptions on gradient noise in the
existing literature. It not only generalizes the “bounded variance” assumption
(which requires $D_{1}=1/n$, and thus further generalizes the “bounded
gradient” assumption (defossez2020simple)) (ghadimi2016mini; Manzil2018adaptive;
huang2021super), but is also weaker than the “strong growth condition”
(which requires $D_{0}=0$) (schmidt2013fast; vaswani2019fast). Assumption 2
allows flexible choices of $D_{0}$ and $D_{1}$ and is thus among the weakest
assumptions of this kind.
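For intuition, plausible values of $(D_{0},D_{1})$ can be estimated on a concrete problem by regressing the averaged per-component squared gradient norms against the full squared gradient norm. The least-squares toy below is a hypothetical setup of our own, and the fit yields only a plausible pair, not a certified bound.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 64, 5
A, b = rng.normal(size=(n, d)), rng.normal(size=n)  # f_i(w) = 0.5*(a_i@w - b_i)^2

def mean_component_sq(w):
    gi = (A @ w - b)[:, None] * A            # gradient of each f_i
    return np.mean(np.sum(gi**2, axis=1))    # (1/n) sum_i ||grad f_i||^2

def full_sq(w):
    g = A.T @ (A @ w - b) / n                # gradient of f = (1/n) sum_i f_i
    return np.sum(g**2)

ws = [rng.normal(size=d) * s for s in (0.1, 1.0, 3.0, 10.0)]
X = np.array([full_sq(w) for w in ws])
Y = np.array([mean_component_sq(w) for w in ws])
D1, D0 = np.polyfit(X, Y, 1)                 # fit Y ~ D1 * X + D0
print(f"fitted D1={D1:.2f}, D0={D0:.2f}")
```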
## 4 Adam Converges under the $(L_{0},L_{1})$-smooth condition
In this section, we provide our main result on the convergence of RR Adam
under the $(L_{0},L_{1})$-smooth condition. As discussed in Section 1, even
for the simpler with-replacement-sampling Adam, establishing convergence under
the $(L_{0},L_{1})$-smooth condition requires restrictive assumptions, such as
a large $\xi$ (the constant introduced for numerical stability, which is as
small as $10^{-8}$ in practice) and deterministically bounded noise
(li2023convergence). Such assumptions make the corresponding results hard to
apply to practical settings. As for the harder randomly-reshuffled setting,
there is no existing convergence result for Adam under non-uniform smoothness.
Our result tackles the limitations of existing works and provides the first
convergence result for RR Adam under non-uniform smoothness, as follows.
###### Theorem 1.
Consider RR Adam defined as Algorithm 1 with diminishing learning rate
$\eta_{k}=\frac{\eta_{1}}{\sqrt{k}}$. Let Assumptions 1 and 2 hold. Suppose
the hyperparameters satisfy $0\leq\beta_{1}^{2}<\beta_{2}<1$ and $\beta_{2}$
is larger than a threshold $\gamma(D_{1})$. Then, we have
$\displaystyle\min_{k\in[1,T]}\left\\{\frac{\|\nabla
f(\bm{w}_{k,0})\|}{\sqrt{D_{1}}},\frac{\|\nabla
f(\bm{w}_{k,0})\|^{2}}{\sqrt{D_{0}}}\right\\}\leq$
$\displaystyle\tilde{\mathcal{O}}\left(\frac{f(\bm{w}_{1,0})-\min_{\bm{w}}f(\bm{w})}{\sqrt{T}}\right)$
$\displaystyle+\mathcal{O}((1-\beta_{2})^{2}\sqrt{D_{0}}).$ (4)
For simplicity, we defer the concrete form of $\gamma$ to Appendix B.2. We
provide some remarks on the results as follows, and state the proof idea in
the next section.
Explanation of Theorem 1. To our best knowledge, Theorem 1 is the first to
demonstrate that RR Adam is capable of converging under the non-uniform
smoothness condition. Observing the right-hand side of inequality (4), one can
see that as $T\to\infty$, it approaches
$\mathcal{O}((1-\beta_{2})^{2}\sqrt{D_{0}})$; that is, the size of the
neighborhood of stationary points that Adam reaches shrinks as $\beta_{2}$
approaches $1$. This theoretical insight corroborates the common practice of
choosing $\beta_{2}$ close to $1$ (e.g., $0.999$). A counterexample provided
later will further illustrate that convergence to a neighborhood, rather than
to an exact stationary point, is an intrinsic characteristic of the algorithm.
Beyond the $(L_{0},L_{1})$-smooth condition, Theorem 1 presupposes only that
the gradient noise exhibits affine variance as per Assumption 2, a relatively
mild constraint that avoids the need for a bounded gradient norm. This is
crucial, as imposing such a bound would reduce the $(L_{0},L_{1})$-smooth
condition to an $(L_{0}+L_{1}M)$-smooth condition with the gradient norm
capped by $M$. Additionally, we do not require the adaptive learning rate
$\eta_{k}/\sqrt{\hat{\nu}_{k}}$ to be upper bounded, nor do we stipulate a
large regularizer $\xi$, in line with common practice in deep learning
libraries, where a small $\xi$ such as $10^{-8}$ is often effective. Our
theorem permits any non-negative $\xi$, including zero. Finally, Theorem 1
asserts convergence for every possible trajectory, a guarantee that exceeds
the typical “convergence in expectation” results and poses a significant
technical challenge.
On the Comparison to Existing Analyses of RR Adam. Our analysis extends the
applicability of RR Adam by ensuring convergence under the $(L_{0},L_{1})$
smooth condition, which inherently encompasses the traditional $L$-smooth
condition. This broadened perspective allows our results to guarantee
convergence for RR Adam even under the more general $L$-smooth scenario. When
juxtaposed with the state-of-the-art analysis of RR Adam under the $L$-smooth
condition by Zhang2022Adam, our findings advance the field in two significant
ways. Firstly, we elevate the notion of convergence from the expected sense to
the more stringent trajectory-wise convergence. Secondly, we refine the
estimated convergence neighborhood, tightening it from
$(1-\beta_{2})\sqrt{D_{0}}$ to $(1-\beta_{2})^{2}\sqrt{D_{0}}$. Collectively,
our analysis not only operates under a less restrictive assumption—the
$(L_{0},L_{1})$-smooth condition—but also delivers substantively enhanced
convergence results.
On the range of hyperparameters. Theorem 1 indicates that Adam can work when
$\beta_{2}$ is close enough to $1$. This matches the practical choice of
$\beta_{2}$ (e.g., $0.999$ in default setting, $0.95$ in the GPT-3 training
(brown2020language)). Note that our result does not contradict the
counterexamples of Adam’s non-convergence (reddi2018convergence;
Zhang2022Adam), as these divergence results require $\beta_{2}$ to be small
and thus not close to $1$. Rather, these counterexamples suggest that large
$\beta_{2}$ is necessary for convergence. As for $\beta_{1}$, Theorem 1 needs
$\beta_{1}^{2}<\beta_{2}$. When $\beta_{2}$ is large, Theorem 1 allows a wide
range of candidates of $\beta_{1}$ (e.g., $0.9$ in default setting and $0.5$
in GAN (radford2015unsupervised)).
Figure 2: Reproduction of the experimental results from (Zhang2022Adam). The
objective function is defined in Eq. (5). One can observe that while letting
$\beta_{2}$ be closer to $1$ makes the limiting gradient norm smaller, the
limiting gradient norm always stabilizes above $0$.
On the neighborhood of stationary points. When $D_{0}\neq 0$, Theorem 1 only
ensures that Adam converges to a neighborhood of stationary points
$\left\\{\bm{w}:\min\left\\{\frac{\|\nabla
f(\bm{w})\|}{\sqrt{D_{1}}},\frac{\|\nabla
f(\bm{w})\|^{2}}{\sqrt{D_{0}}}\right\\}\leq\mathcal{O}((1-\beta_{2})^{2}\sqrt{D_{0}})\right\\}$.
Since SGD converges to the stationary points with diminishing learning rate,
one may wonder if Theorem 1 can be improved to obtain the same conclusion as
SGD. Unfortunately, there is a counterexample in the existing literature (
function (9) in Zhang2022Adam) showing that Adam does not converge to
stationary points even if all the conditions in Theorem 1 are satisfied.
Specifically, (Zhang2022Adam) consider the following function:
$\displaystyle f(x)=\sum_{j=0}^{9}f_{j}(x)=\frac{1}{10}x^{2}-1,$ (5)
$\displaystyle\text{ where }f_{j}(x)=\left\\{\begin{array}[]{l}(x-1)^{2}\text{
if }j=0\\\ -0.1\left(x-\frac{10}{9}\right)^{2}\text{ if }1\leq j\leq
9\end{array}\right..$
One can easily verify that this example satisfies Assumptions 1 and 2 with
$D_{0}>0$. As shown in Figure 2, when running Adam (with $\beta_{1}=0.9$,
$\eta_{k}=0.1/\sqrt{k}$, $a=3$, $x_{0}=-2$), it does not converge to exact
stationary points. Instead, it converges to a neighborhood of stationary
points whose size is inversely proportional to $\beta_{2}$. Therefore, the
non-vanishing term in Theorem 1 is not due to a limitation of the proof;
rather, it is an intrinsic property of Adam.
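This phenomenon is easy to reproduce; the sketch below (our own choices of seed and epoch count) runs RR Adam on the components of Eq. (5) and prints the limiting full gradient $|f^{\prime}(x)|=|x|/5$, which, in line with Figure 2, stabilizes above $0$ but shrinks as $\beta_{2}$ approaches $1$.

```python
import numpy as np

def grad_f(j, x):
    # Component gradients of the counterexample in Eq. (5), n = 10.
    return 2.0 * (x - 1.0) if j == 0 else -0.2 * (x - 10.0 / 9.0)

rng = np.random.default_rng(0)
for beta2 in (0.9, 0.99, 0.999):
    beta1, xi = 0.9, 1e-8
    x, m, nu = -2.0, 0.0, 0.0
    for k in range(1, 2001):
        eta = 0.1 / np.sqrt(k)
        for j in rng.permutation(10):          # randomly reshuffled epoch
            g = grad_f(j, x)
            nu = beta2 * nu + (1 - beta2) * g * g
            m = beta1 * m + (1 - beta1) * g
            x -= eta * m / (np.sqrt(nu) + xi)
    print(beta2, abs(0.2 * x))                 # limiting |f'(x)|
```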
Why can Adam not converge to exact stationary points when $D_{0}>0$?
Intuitively, this is because even with diminishing $\eta_{k}$, the effective
learning rate $\frac{\eta_{k}}{\xi\mathds{1}_{d}+\sqrt{\bm{\nu}_{k,i}}}$ may
not diminish due to the potentially decreasing $\sqrt{\bm{\nu}_{k,i}}$. The
good news is that $\mathcal{O}((1-\beta_{2})^{2}\sqrt{D_{0}})$ approaches $0$
as $\beta_{2}$ gets close to $1$. This means that the neighborhood shrinks as
$\beta_{2}\rightarrow 1$ (this is also observed in Figure 2). As discussed
above, the practical choice of $\beta_{2}$ is close to $1$, and thus
$\mathcal{O}((1-\beta_{2})^{2}\sqrt{D_{0}})$ is tolerable.
On the Diminishing Learning Rate. In Theorem 1, we consider a diminishing
learning rate of the form $\eta_{k}=\frac{\eta_{1}}{\sqrt{k}}$ to maintain
consistency with existing works on RR Adam, such as (shi2021rmsprop;
Zhang2022Adam), which also employ diminishing learning rates. Nonetheless, our
results can be readily extended to RR Adam with a constant learning rate. By
the same proof strategy as in Theorem 1, one can show that with a constant
learning rate $\eta$, the conclusion (Eq. (4)) of Theorem 1 becomes
$\min_{k\in[1,T]}\left\\{\frac{\|\nabla
f(\bm{w}_{k,0})\|}{\sqrt{D_{1}}},\frac{\|\nabla
f(\bm{w}_{k,0})\|^{2}}{\sqrt{D_{0}}}\right\\}\leq\tilde{\mathcal{O}}\left(\frac{f(\bm{w}_{1,0})-\min_{\bm{w}}f(\bm{w})}{\eta
T}\right)+\mathcal{O}(\sqrt{D_{0}}(1-\beta_{2})^{2})+\mathcal{O}(\eta)$. In
essence, while Adam may converge more rapidly to a neighborhood (with the rate
improving from $1/\sqrt{T}$ to $1/T$), the size of this neighborhood is
increased by an additional term $\mathcal{O}(\eta)$, attributable to the
constant step size.
## 5 Proof sketch
In this section, we briefly explain our proof idea for Theorem 1, which can be
divided into two stages. In Stage I, we prove Theorem 1 for Adam with
$\beta_{1}=0$ to show the challenge brought by the $(L_{0},L_{1})$-smooth
condition and how we tackle it. In Stage II, we show the additional difficulty
introduced by momentum and our corresponding strategy for resolving it.
Stage I: Convergence of Adam with $\beta_{1}=0$. By the descent lemma,
$\displaystyle f(\bm{w}_{k+1,0})-f(\bm{w}_{k,0})\leq\underbrace{\langle\bm{w}_{k+1,0}-\bm{w}_{k,0},\nabla f(\bm{w}_{k,0})\rangle}_{\text{First Order}}+\underbrace{\frac{L_{loc}}{2}\|\bm{w}_{k+1,0}-\bm{w}_{k,0}\|^{2}}_{\text{Second Order}},$ (6)
where $L_{loc}$ denotes the local smoothness. We bound the first-order and
second-order terms separately. The upper bound on the second-order term is
relatively simple, so, due to limited space, we only sketch the idea for
bounding the first-order term here.
The ever-changing adaptive learning rate poses a challenge in deriving the
bound. It has even been noted that with small $\beta_{2}$, the first-order
term can be positive (reddi2018convergence). However, we notice that if
$\bm{\nu}_{k,i}$ were stationary, i.e., if RMSProp degenerated to SGD with
preconditioning, the first-order term would equal
$-\eta_{k}\langle\sum_{i}\frac{1}{\xi\mathds{1}_{d}+\sqrt{\bm{\nu}_{k,0}}}\odot\nabla
f_{\tau_{k,i}}(\bm{w}_{k,i}),\nabla
f(\bm{w}_{k,0})\rangle\approx-\eta_{k}\langle\sum_{i}\frac{1}{\xi\mathds{1}_{d}+\sqrt{\bm{\nu}_{k,0}}}\odot\nabla
f_{\tau_{k,i}}(\bm{w}_{k,0}),\nabla f(\bm{w}_{k,0})\rangle$, which is indeed
negative. While “$\bm{\nu}_{k,i}$ is stationary” is too good to be true, we
prove that $\bm{\nu}_{k,i}$ changes little when $\beta_{2}$ is close to $1$,
assuming that the gradient is large. Below we denote by $\bm{\nu}_{l,k,i}$ the
$l$-th component of $\bm{\nu}_{k,i}$.
###### Lemma 1 (Informal).
For any $l\in[d]$ and $i\in[0,n-1]$, if
$\max_{p\in[0,n-1]}|\partial_{l}f_{p}(\bm{w}_{k,0})|=\Omega\left(\sum_{r=1}^{k-1}\beta_{2}^{\frac{(k-1-r)}{2}}\eta_{r}\|\nabla f(\bm{w}_{r,0})\|+\eta_{k}\right)$, then
$|\bm{\nu}_{l,k,i}-\bm{\nu}_{l,k,0}|=\mathcal{O}((1-\beta_{2})\bm{\nu}_{l,k,0})$.
The idea behind Lemma 1 is simple: since
$\bm{\nu}_{k,i}=\beta_{2}\bm{\nu}_{k,i-1}+(1-\beta_{2})\nabla
f_{\tau_{k,i}}(\bm{w}_{k,i})^{\odot 2}$, the change of $\bm{\nu}_{k,i}$ with
respect to $i$ should be small when $\beta_{2}$ is large. However, we need to
check that the relative size of $\nabla f_{\tau_{k,i}}(\bm{w}_{k,i})^{\odot
2}$ with respect to $\bm{\nu}_{k,i-1}$ is uniformly bounded across varying
$\beta_{2}$; otherwise the term $(1-\beta_{2})\nabla
f_{\tau_{k,i}}(\bm{w}_{k,i})^{\odot 2}$ may not go to zero as
$\beta_{2}\rightarrow 1$. We resolve this by expanding $\bm{\nu}_{k,i}$ in
terms of squared gradients and bounding the gap between each of these terms
and $\nabla f_{\tau_{k,i}}(\bm{w}_{k,i})^{\odot 2}$ by invoking the
$(L_{0},L_{1})$-smooth condition. We defer the detailed proof to Corollary 5.
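The conclusion of Lemma 1 is also easy to visualize numerically. The toy below (an arbitrary fixed sequence of per-step gradients, our own construction) shows that the relative drift of $\bm{\nu}$ within one epoch shrinks roughly in proportion to $1-\beta_{2}$.

```python
import numpy as np

# Toy illustration of Lemma 1: within one "epoch" of 50 steps, the
# relative change |nu_i - nu_0| / nu_0 shrinks as beta2 -> 1.
g = np.abs(np.random.default_rng(0).normal(size=50)) + 1.0  # fixed gradients
for beta2 in (0.9, 0.99, 0.999):
    nu0 = g[0] ** 2
    nu, drift = nu0, 0.0
    for gi in g[1:]:
        nu = beta2 * nu + (1 - beta2) * gi ** 2
        drift = max(drift, abs(nu - nu0) / nu0)
    print(beta2, drift)   # drift is O(1 - beta2) when gradients are comparable
```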
As a conclusion, if we denote those dimensions with large gradients (i.e.,
satisfying the requirement of Lemma 1) as $\mathbb{L}_{large}^{k}$ and the
rest as $\mathbb{L}_{small}^{k}$, Lemma 1 indicates that the
$\mathbb{L}_{large}^{k}$ part (i.e.,
$\sum_{l\in\mathbb{L}_{large}^{k}}(\bm{w}_{l,k+1,0}-\bm{w}_{l,k,0})\partial_{l}f(\bm{w}_{k,0})$)
in the first order term can be bounded as
$\displaystyle-\eta_{k}\sum_{l\in\mathbb{L}_{large}^{k}}\frac{\partial_{l}f(\bm{w}_{k,0})}{\sqrt{\bm{\nu}_{l,k,i}}+\xi}\sum_{i}\partial_{l}f_{\tau_{k,i}}(\bm{w}_{k,i})$
$\displaystyle\approx$
$\displaystyle-\eta_{k}\sum_{l\in\mathbb{L}_{large}^{k}}\left(\frac{\partial_{l}f(\bm{w}_{k,0})^{2}}{\sqrt{\bm{\nu}_{l,k,0}}+\xi}+\mathcal{O}\left((1-\beta_{2})\frac{\partial_{l}|f(\bm{w}_{k,0})|\sum_{i}|\partial_{l}f_{\tau_{k,i}}(\bm{w}_{k,i})|}{\sqrt{\bm{\nu}_{l,k,0}}+\xi}\right)\right)$
$\displaystyle=$ $\displaystyle-\Omega\left(\eta_{k}\min\left\\{\frac{\|\nabla
f(\bm{w}_{k,0})\|}{\sqrt{D_{1}}},\frac{\|\nabla
f(\bm{w}_{k,0})\|^{2}}{\sqrt{D_{0}}}\right\\}\right)+O(\eta_{k}(1-\beta_{2})\sqrt{D_{0}}).$
The last equality uses the affine noise assumption (Assumption 2); we defer
the detailed proof to Appendix B.4. A remaining problem is how to deal with
the components in $\mathbb{L}_{small}^{k}$. We treat them as error terms.
Concretely, $l\in\mathbb{L}_{small}^{k}$ indicates that
$\partial_{l}f(\bm{w}_{k,0})=\mathcal{O}(\sum_{r=1}^{k-1}\beta_{2}^{\frac{(k-1-r)}{2}}\eta_{r}\|\nabla
f(\bm{w}_{r,0})\|+\eta_{k})$. Applying this directly to
$\sum_{l\in\mathbb{L}_{small}^{k}}(\bm{w}_{l,k+1,0}-\bm{w}_{l,k,0})\partial_{l}f(\bm{w}_{k,0})$,
we have
$\displaystyle-\eta_{k}\sum_{l\in\mathbb{L}_{small}^{k}}\frac{\partial_{l}f(\bm{w}_{k,0})}{\sqrt{\bm{\nu}_{l,k,i}}+\xi}\sum_{i}\partial_{l}f_{\tau_{k,i}}(\bm{w}_{k,i})=\mathcal{O}\left(\eta_{k}\left(\sum_{r=1}^{k-1}\beta_{2}^{\frac{(k-1-r)}{2}}\eta_{r}\|\nabla
f(\bm{w}_{r,0})\|+\eta_{k}\right)\right),$
where the equality holds because
$\frac{\partial_{l}f_{\tau_{k,i}}(\bm{w}_{k,i})}{\sqrt{\bm{\nu}_{l,k,i}}+\xi}$
is bounded (proved in Lemma 4).
In order to upper bound the first order term, we then need to prove that
$-\Omega(\eta_{k}\min\\{\frac{\|\nabla
f(\bm{w}_{k,0})\|}{\sqrt{D_{1}}},\frac{\|\nabla
f(\bm{w}_{k,0})\|^{2}}{\sqrt{D_{0}}}\\})$ dominates
$\mathcal{O}(\eta_{k}(\sum_{r=1}^{k-1}\beta_{2}^{\frac{(k-1-r)}{2}}\eta_{r}\|\nabla
f(\bm{w}_{r,0})\|+\eta_{k}))$. This is not necessarily true, as the historical
gradient norms in the latter term can be large.
###### Remark 1.
We recognize this as the challenge brought by $(L_{0},L_{1})$-smooth
condition, since the latter term degenerates to $\mathcal{O}(\eta_{k}^{2})$
with $L$-smooth condition, which is minor ($\sum_{k=1}^{T}\eta_{k}^{2}$ is
only in order $\log T$).
We address this challenge by noting that what we need to bound is the sum of
the first-order terms. Although we cannot upper bound the first-order term
within a single epoch, we can bound its sum across epochs. By exchanging the
order of summation, the sum of
$\mathcal{O}(\eta_{k}(\sum_{r=1}^{k-1}\beta_{2}^{\frac{(k-1-r)}{2}}\eta_{r}\|\nabla
f(\bm{w}_{r,0})\|+\eta_{k}))$ over $k$ equals
$\mathcal{O}(\sum_{k=1}^{T}\eta_{k}^{2}\|\nabla f(\bm{w}_{k,0})\|+\ln T)$.
This is smaller than the sum of $-\Omega(\eta_{k}\min\\{\frac{\|\nabla
f(\bm{w}_{k,0})\|}{\sqrt{D_{1}}},\frac{\|\nabla
f(\bm{w}_{k,0})\|^{2}}{\sqrt{D_{0}}}\\})$ by an order of $\eta_{k}$, up to a
$\ln T$ term, due to the mean value inequality, i.e.,
$\eta_{k}^{2}\|\nabla
f(\bm{w}_{k,0})\|\leq\mathcal{O}(\eta_{k}^{2})+\mathcal{O}\left(\eta_{k}^{2}\sqrt{\frac{D_{1}}{D_{0}}}\|\nabla
f(\bm{w}_{k,0})\|^{2}\right).$
We then conclude that the sum of the first-order terms can be bounded by
$\sum_{k=1}^{T}-\Omega(\eta_{k}\min\\{\frac{\|\nabla
f(\bm{w}_{k,0})\|}{\sqrt{D_{1}}},\frac{\|\nabla
f(\bm{w}_{k,0})\|^{2}}{\sqrt{D_{0}}}\\})+\mathcal{O}(\ln T)$.
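For completeness, the sum-order change can be sketched as follows (assuming $\eta_{k}$ is non-increasing; constants are loose):
$\displaystyle\sum_{k=1}^{T}\eta_{k}\sum_{r=1}^{k-1}\beta_{2}^{\frac{k-1-r}{2}}\eta_{r}\|\nabla f(\bm{w}_{r,0})\|=\sum_{r=1}^{T-1}\eta_{r}\|\nabla f(\bm{w}_{r,0})\|\sum_{k=r+1}^{T}\beta_{2}^{\frac{k-1-r}{2}}\eta_{k}\leq\frac{1}{1-\sqrt{\beta_{2}}}\sum_{r=1}^{T-1}\eta_{r}^{2}\|\nabla f(\bm{w}_{r,0})\|,$
where the inequality uses $\eta_{k}\leq\eta_{r}$ for $k>r$ and $\sum_{k=r+1}^{T}\beta_{2}^{\frac{k-1-r}{2}}\leq\frac{1}{1-\sqrt{\beta_{2}}}$; the remaining $\eta_{k}^{2}$ terms sum to $\mathcal{O}(\ln T)$.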
###### Remark 2 (Difficulty compared to the analysis under $L$-smooth
condition).
Here we illustrate the challenge brought by stepping beyond the $L$-smooth
condition. First, the change of $\bm{\nu}_{k,i}$ is easier to bound without
the historical gradient term, due to the absence of the gradient norm in the
bound on local smoothness. Second, under the $L$-smooth condition, the error
contains no historical gradient information and is only of order
$\mathcal{O}(\eta_{k}^{2})$, which is easy to bound.
Stage II: adding the momentum. The second-order term of Adam can be bounded
similarly. However, the analysis of the first-order term becomes more
challenging, even though we still have $\bm{\nu}_{k,i}\approx\bm{\nu}_{k,0}$.
Specifically, even with constant $\bm{\nu}_{k,i}=\bm{\nu}_{k,0}$, the bound
$-\eta_{k}\langle\sum_{i}\frac{\bm{m}_{k,i}}{\sqrt{\bm{\nu}_{k,i}}+\xi},-\nabla
f(\bm{w}_{k,0})\rangle>0$ does not necessarily hold, as the momentum
$\bm{m}_{k,i}$ carries a heavy historical signal and may push the update away
from the negative gradient direction.
We resolve this challenge by observing that the alignment of
$\bm{w}_{k+1,0}-\bm{w}_{k,0}$ and $-\nabla f(\bm{w}_{k,0})$ is required
because our analysis is based on the potential function $f(\bm{w}_{k,0})$.
While this potential function is suitable for the analysis of RMSProp, it is
no longer appropriate for Adam, based on the above discussion. We therefore
need to construct another potential function. Our construction is based on the
following observation: we revisit the update rule in Algorithm 1 and rewrite
it as
update rule in Algorithm 1 and rewrite it as
$\frac{\bm{m}_{k,i}-\beta_{1}\bm{m}_{k,i-1}}{1-\beta_{1}}=\nabla
f_{\tau_{k,i}}(\bm{w}_{k,i}).$
Notice that the right-hand side of the above equation contains no historical
gradients but only the gradient of the current step! Dividing the above by
$(\sqrt{\bm{\nu}_{k,i}}+\xi)/\eta_{k}$,
$\displaystyle\frac{\bm{w}_{k,i+1}-\bm{w}_{k,i}-\beta_{1}(\bm{w}_{k,i}-\bm{w}_{k,i-1})}{1-\beta_{1}}\approx$
$\displaystyle-\frac{\eta_{k}}{\sqrt{\bm{\nu}_{k,0}}+\xi\mathds{1}_{d}}\odot\frac{\bm{m}_{k,i}-\beta_{1}\bm{m}_{k,i-1}}{1-\beta_{1}}$
$\displaystyle=$
$\displaystyle-\frac{\eta_{k}}{\sqrt{\bm{\nu}_{k,0}}+\xi\mathds{1}_{d}}\odot\nabla
f_{\tau_{k,i}}(\bm{w}_{k,i}).$
After a simple rearrangement, one can see that the sequence
$\\{\frac{\bm{w}_{k,i}-\beta_{1}\bm{w}_{k,i-1}}{1-\beta_{1}}\\}$ is
(approximately) performing SGD within one epoch, with a coordinate-wise
learning rate $\frac{\eta_{k}}{\sqrt{\bm{\nu}_{k,0}}+\xi\mathds{1}_{d}}$ that
is constant within the epoch! We define
$\bm{u}_{k,i}\triangleq\frac{\bm{w}_{k,i}-\beta_{1}\bm{w}_{k,i-1}}{1-\beta_{1}}.$
Then, further notice that the distance between
$\bm{u}_{k,i}=\bm{w}_{k,i}+\beta_{1}\frac{\bm{w}_{k,i}-\bm{w}_{k,i-1}}{1-\beta_{1}}$
and $\bm{w}_{k,i}$ is of the order of one step's update, and thus
$\bm{u}_{k,i}\approx\bm{w}_{k,i}$. Therefore, we choose our potential function
as $f(\bm{u}_{k,i})$. The Taylor expansion of $f$ at $\bm{u}_{k,0}$ then
provides a new descent lemma, i.e.,
$\displaystyle f(\bm{u}_{k+1,0})-f(\bm{u}_{k,0})\leq\underbrace{\langle\bm{u}_{k+1,0}-\bm{u}_{k,0},\nabla f(\bm{u}_{k,0})\rangle}_{\text{First Order}}+\underbrace{\frac{L_{0}+L_{1}\|\nabla f(\bm{w}_{k,0})\|}{2}\|\bm{w}_{k+1,0}-\bm{w}_{k,0}\|^{2}}_{\text{Second Order}},$ (7)
Noting that $\bm{w}_{k,i}\approx\bm{u}_{k,i}\approx\bm{u}_{k,0}$, the
first-order term can be further approximated by
$-\langle\frac{\eta_{k}}{\sqrt{\bm{\nu}_{k,0}}+\xi\mathds{1}_{d}}\odot\nabla
f(\bm{w}_{k,0}),\nabla f(\bm{w}_{k,0})\rangle$, which is negative. The rest of
the proof is the same as in Stage I.
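The identity behind this construction can be checked numerically. The toy below (random stand-in gradients, with $\bm{\nu}$ frozen within the epoch; all choices our own) verifies that the $\bm{u}$-iterates move by exact SGD-style steps.

```python
import numpy as np

# Check: with nu frozen, u_{i+1} - u_i = -(eta / (sqrt(nu) + xi)) * g_i,
# where u_i = (w_i - beta1 * w_{i-1}) / (1 - beta1).
beta1, xi, eta = 0.9, 1e-8, 0.01
rng = np.random.default_rng(1)
nu = np.ones(3)                       # frozen conditioner within the epoch
w_prev = rng.normal(size=3)
w, m = w_prev.copy(), np.zeros(3)
for _ in range(5):
    g = rng.normal(size=3)            # stand-in stochastic gradient
    m = beta1 * m + (1 - beta1) * g
    w_next = w - eta * m / (np.sqrt(nu) + xi)
    u = (w - beta1 * w_prev) / (1 - beta1)
    u_next = (w_next - beta1 * w) / (1 - beta1)
    assert np.allclose(u_next - u, -eta * g / (np.sqrt(nu) + xi))
    w_prev, w = w, w_next
print("u-iterates follow SGD steps when nu is frozen")
```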
###### Remark 3 (On Why State-of-the-Art Results Do Not Achieve Trajectory-
Wise Convergence as Ours).
The state-of-the-art analysis of RR Adam under the $L$-smooth condition, as
presented by Zhang2022Adam, also addresses the misalignment between
$\bm{w}_{k+1,0}-\bm{w}_{k,0}$ and $-\nabla f(\bm{w}_{k,0})$. However, their
approach does not employ a potential function, resulting in convergence
results that are restricted to in-expectation guarantees. Specifically,
Zhang2022Adam manage this misalignment by assuming a uniform distribution over
all possible shuffling orders and demonstrating that, under this assumption,
the expected value of $\bm{w}_{k+1,0}-\bm{w}_{k,0}$ is approximately equal to
$-\nabla f(\bm{w}_{k,0})$. In contrast, our methodology introduces an
auxiliary function, $f(\bm{u}_{k,i})$, and examines the dynamics of
$\bm{u}_{k,i}$. This approach shifts the challenge from aligning
$\bm{w}_{k+1,0}-\bm{w}_{k,0}$ with $-\nabla f(\bm{w}_{k,0})$ to aligning
$\bm{u}_{k+1,0}-\bm{u}_{k,0}$ with $-\nabla f(\bm{w}_{k,0})$. Such a strategy
simplifies the analytical process and facilitates the demonstration of
trajectory-wise convergence.
###### Remark 4 (Similar potential functions in the existing literature.).
We notice that similar potential functions have already been applied in the
analysis of other momentum-based optimizers, e.g., momentum (S)GD in
(ghadimi2015global) and (liu2020improved). However, extending the proof to
Adam is highly non-trivial. The key difficulty lies in showing that the
first-order expansion of $f(\bm{u}_{k,0})$ is positive, which further requires
that the adaptive learning rate does not change much within one epoch. This is
hard for Adam, as its adaptive learning rate can be non-monotonic. The lack of
the $L$-smooth condition makes the proof even more challenging due to the
unbounded error brought by gradient norms.
## 6 Comparison Between Adam and SGD
Now we compare the convergence rate of Adam with that of SGD. To do so, we
need a lower bound for SGD in the same setting as Theorem 1. There are several
existing lower bounds for SGD under the $(L_{0},L_{1})$-smooth condition
(e.g., (zhang2019gradient; crawshaw2022robustness)). However, we find these
lower bounds are not directly applicable for comparison with Adam. This is
because:
* •
1) In the lower bound of (zhang2019gradient; crawshaw2022robustness), they
pick the learning rate _before_ the construction of the objective function and
initialization point (we restate their lower bound in Appendix B.1 for
completeness). In other words, it is possible that if we fix the objective
function and tune the learning rate (which is a common practice in the
training of deep neural networks), SGD can converge very fast. For rigorous
comparison with Adam, we need a lower bound with reversed ordering. That is,
we need the following statement: “consider a fixed objective function and
initialization point, then no matter how we pick the learning rate, SGD
suffers from a certain rate.”
* •
2) The lower bounds of SGD in (zhang2019gradient; crawshaw2022robustness)
require a constant learning rate. However, since Adam in Theorem 1 uses a
diminishing learning rate, we aim to establish a lower bound for SGD with a
diminishing learning rate.
Unfortunately, there is no existing lower bound that satisfies the above two
properties. In the following theorem, we provide a refined lower bound for SGD
in the desired setup.
###### Theorem 2.
For any $L_{0},L_{1},T>0$, there exists an objective function $f$ obeying
Assumption 1 and an initialization $\bm{w}_{0}$ with $M=\sup\\{\|\nabla
f(\bm{w})\|:f(\bm{w})\leq f(\bm{w}_{0})\\}$, such that $\forall\eta_{1}>0$,
the iterates of SGD $\\{\bm{w}_{t}\\}_{t=0}^{\infty}$ satisfy
$\min_{t\in[T]}\|\nabla
f(\bm{w}_{t})\|^{2}=\Omega(M(f(\bm{w}_{0})-\min_{\bm{w}\in\mathbb{R}^{d}}f(\bm{w}))/\sqrt{T})$.
The proof can be found in Appendix A. The proof idea is mainly motivated by
(zhang2019adam). We highlight some differences that arise in establishing the
two properties mentioned above.
* •
To reverse the ordering of “picking the learning rate” and “picking the
function and initialization”, we simply augment the worst-case example in
(zhang2019adam) into a 2-dimensional space. It turns out this simple trick is
effective in the proof.
* •
To change the constant learning rate into a diminishing one, we show the
following: when the initial learning rate $\eta_{0}$ is larger than a certain
threshold, the decay of the learning rate cannot offset the curvature
explosion along the iterates, causing divergence; on the other hand, when the
initial $\eta_{0}$ is small, it leads to slow convergence. This is a new
finding in the $(L_{0},L_{1})$ setting. We prove this result by mathematical
induction. This part of the discussion is not required for the lower bound of
(zhang2019adam) with a constant learning rate.
#### Comparison between Adam and SGD.
Finally, we discuss the implications of the lower bound for SGD (Theorem 2)
and the upper bound for Adam (Theorem 1). In the lower bound for SGD, there is
an extra constant $M$ which does not appear in the upper bound for Adam. This
allows us to compare the convergence rates of the two algorithms.
We summarize our findings as follows. We emphasize that Theorem 1 and Theorem
2 share exactly the same setting: both consider function classes under the
same assumptions, and both SGD and Adam use a diminishing learning rate.
Therefore, the following comparison is rigorous.
Finding 1: When $D_{0}=0$. Adam converges to stationary points with rate
$\mathcal{O}\left(\frac{1}{T}\right)$, while GD converges with rate
$\mathcal{O}\left(\frac{1}{\sqrt{T}}\right)$. So Adam converges (to stationary
points) faster.
Finding 2: When $D_{0}>0$. There exists a set of $\bm{w}$ with infinite
Lebesgue measure, such that, when starting at any $\bm{w}$ in this set, Adam
converges (to the neighborhood of stationary points) faster than SGD.
Note that the above statement “algorithm 1 converges faster than algorithm 2”
does not mean that algorithm 1 always converges faster than algorithm 2. For
sure, rarely can anyone make such a strong statement. The above statement
actually means that “the worst-case complexity of algorithm 1 is faster than
that of algorithm 2, and both complexity bounds can be simultaneously achieved
when working on the same function and starting at the same initialization”.
This definition is adopted from (sun2021worst), and it is a widely accepted
definition in the optimization field.
###### Proof.
Finding 1 can be directly proved by plugging $D_{0}=0$ into Theorem 1 and
squaring the inequality. We now prove Finding 2. First, we state an important
fact from the proof of Theorem 2.
#### Fact 1:
For the counterexample constructed in Theorem 2, $M=\sup\\{\|\nabla
f(\bm{w})\|:f(\bm{w})\leq f(\bm{w}_{0})\\}$ goes to infinity as
$\|\bm{w}_{0}\|$ goes to infinity. Further, for any $C>0$, the set
$\\{\bm{w}_{0}:M>C\\}$ is of infinite Lebesgue measure.
Based on Fact 1, for the worst-case example in Theorem 2, there must exist a
region in $\mathbb{R}^{d}$ where $M$ is larger than all the constant terms in
the upper bound for Adam in Theorem 1. Further, such a region is of infinite
Lebesgue measure. When running Adam and SGD simultaneously on this worst-case
example starting from any $\bm{w}$ in this region, the constants in the upper
bound for Adam are smaller than the constants in the lower bound for SGD.
Since the upper and lower bounds share the same rate, Adam is faster. Note
that there is an additional constant term in the upper bound of Adam in Eq.
(4), so we conclude that Adam converges to the neighborhood of stationary
points faster than SGD. ∎
Note that when $D_{0}>0$, Adam is still guaranteed to converge faster, but
only to a neighborhood in lieu of the exact stationary points. We emphasize
that this “neighborhood” cannot be eliminated, since there is a counterexample
showing that Adam cannot reach zero gradient when $D_{0}>0$ (see Figure 2). So
this is an intrinsic property of Adam, rather than a limitation of the theory.
Nevertheless, we believe the effect of “not converging to exact stationary
points” is minor in practice. This is because: 1) as shown in Theorem 1 and
Figure 2, the size of the “ambiguity zone” is inversely proportional to
$\beta_{2}$; since $\beta_{2}$ is often chosen close to $1$, the ambiguity
zone shrinks and becomes negligible; 2) machine learning tasks do not pursue
high-precision solutions (as much as other fields like PDEs); practitioners
usually aim to efficiently find approximate solutions, rather than exact
solutions that overfit the training data.
To our knowledge, the discussion above is the first time that Adam and SGD are
rigorously compared in the same setting where the advantage of Adam can be
revealed. We believe these results shed new light on understanding the benefit
of Adam.
Finally, we briefly explain why the upper bound for Adam is independent of
$M$. Intuitively, this is because: (1) Adam uses different learning rates for
different components of $\bm{w}$; and (2) for each component of $\bm{w}$, the
effective learning rate adjusts according to the gradient norm (and thus
according to the local smoothness). Even if the initial effective learning
rate is small, it becomes larger when moving through a flat landscape.
Combining these, the initial learning rate of Adam can be chosen independently
of $M$, and so can its convergence rate.
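The gap can be illustrated on a simple $(1,1)$-smooth toy of our own choosing (not the worst-case construction from the proof): $f(x)=\cosh(x)-1$ satisfies $|f^{\prime\prime}(x)|=\cosh x\leq 1+|\sinh x|=1+|f^{\prime}(x)|$ but is not globally $L$-smooth. Started far from the minimum, plain (S)GD steps with any fixed $\eta_{1}$ either blow up or crawl, while Adam's normalized steps remain well-behaved.

```python
import numpy as np

f_grad = np.sinh              # f(x) = cosh(x) - 1 is (1, 1)-smooth
x0 = 10.0                     # huge initial gradient: |f'(x0)| ~ e^10 / 2

def gd(eta1, steps=200):
    x = x0
    for t in range(1, steps + 1):
        x -= (eta1 / np.sqrt(t)) * f_grad(x)   # diminishing step size
        if not np.isfinite(x):
            return np.inf                      # diverged
    return abs(f_grad(x))

def adam(eta1, steps=200, b1=0.9, b2=0.999, xi=1e-8):
    x, m, nu = x0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = f_grad(x)
        nu = b2 * nu + (1 - b2) * g * g
        m = b1 * m + (1 - b1) * g
        x -= (eta1 / np.sqrt(t)) * m / (np.sqrt(nu) + xi)
    return abs(f_grad(x))

with np.errstate(over="ignore"):               # GD may overflow and diverge
    for eta1 in (1e-4, 1e-2, 1.0):
        print(eta1, gd(eta1), adam(eta1))      # GD diverges or crawls
```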
## 7 Discussion
Adam’s Advantage over Gradient Descent with Gradient Clipping.
zhang2019gradient established that gradient descent (GD) and stochastic
gradient descent (SGD) with gradient clipping are convergent under the
$(L_{0},L_{1})$ smooth condition. A natural inquiry arises concerning the
benefits of Adam over GD/SGD when gradient clipping is employed. While we lack
robust theoretical backing to fully answer this question, one discernible
advantage of Adam, as inferred from our results, is its capability to manage
more intricate noise profiles that adhere to the affine variance noise
assumption. In contrast, the current analyses of GD/SGD with gradient clipping
within the $(L_{0},L_{1})$-smooth framework presuppose that the deviation
between the stochastic gradient and the true gradient is uniformly bounded
almost surely, an assumption more stringent than the one we consider. Indeed,
a recent work (faw2022power) demonstrates that there exists a counterexample
satisfying Assumption 2 on which SGD with gradient clipping fails to
converge. Together with Theorem 1, their result demonstrates that Adam applies
to a wider range of scenarios than SGD with gradient clipping.
Insights for Practitioners. Here we discuss the insights our Theorem 1 can
provide to practitioners. Firstly, the widespread adoption of Adam among
practitioners, evidenced by its extensive citation record, underscores the
importance of a theoretical understanding of the algorithm.
Secondly, our findings offer theoretical support for a prevalent practice
among practitioners: for tasks involving architectures like Transformers and
LSTMs, Adam is often favored over SGD.
Lastly, in light of the convergence criteria delineated in Theorem 1, we
propose practical guidance for hyperparameter selection when employing Adam.
Specifically, we recommend increasing $\beta_{2}$ and experimenting with
various $\beta_{1}$ values that satisfy $\beta_{1}<\sqrt{\beta_{2}}$. This
heuristic could potentially reduce the computational burden associated with
exhaustive hyperparameter exploration across the $(\beta_{1},\beta_{2})$
space.
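As a concrete illustration of this heuristic, the following minimal sketch (ours; the grid values are illustrative, not prescriptions) enumerates candidate pairs and keeps only those satisfying $\beta_{1}<\sqrt{\beta_{2}}$, the condition required by Theorem 1:

```python
# Sketch: restrict a hyperparameter grid to pairs satisfying the
# convergence condition beta1 < sqrt(beta2) from Theorem 1.
import math

beta2_grid = [0.99, 0.999, 0.9999]          # prefer beta2 close to 1
beta1_grid = [0.0, 0.5, 0.9, 0.99, 0.999]

candidates = [(b1, b2) for b2 in beta2_grid for b1 in beta1_grid
              if b1 < math.sqrt(b2)]
for b1, b2 in candidates:
    print(f"try Adam with beta1={b1}, beta2={b2}")
```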
Limitations of Theorem 1. While Theorem 1 represents a significant advancement
in establishing the convergence of RR Adam under non-uniform smoothness
conditions, it is not without limitations. Specifically, Theorem 1 covers the
momentum-free case $\beta_{1}=0$, but it does not distinguish between the
convergence rates for $\beta_{1}=0$ and for $\beta_{1}>0$, and thus does not
demonstrate the benefits of momentum. The
theoretical elucidation of momentum’s advantage within Adam’s convergence
analysis remains a complex question. The role of momentum is not fully
understood even in momentum SGD for non-convex optimization (liu2020improved),
much less so for Adam. We acknowledge the importance of this question but
consider it beyond the scope of this paper, leaving it for future research.
## 8 Conclusions and Future Directions
In this paper, we have taken a pioneering step towards a theoretical
understanding of the adaptivity inherent in the Adam optimization algorithm.
We present the first convergence results for RR Adam under the ($L_{0}$,
$L_{1}$)-smooth condition, which is both realistic and closely aligned with
practical scenarios. In contrast to existing analyses of RR Adam under the
stronger $L$-smooth condition, our results further demonstrate a more robust
form of convergence, specifically trajectory-wise convergence, and indicate a
reduced distance to the stationary point.
Future Directions. An intriguing avenue for future research lies in
delineating the advantages of incorporating momentum in Adam. Our Theorem 1
indicates an identical convergence rate for both $\beta_{1}=0$ (RMSProp) and
$\beta_{1}>0$ (Adam), implying that the current analysis does not
differentiate between the iteration complexities of Adam and RMSProp.
Consequently, the specific benefits of momentum in Adam remain elusive. This
presents a substantial challenge, given that the impact of momentum is not yet
fully understood even in the context of SGD with momentum. A possible strategy
could be to first establish a theoretical foundation for the advantages of
momentum in SGD, followed by extending these insights to the analysis of Adam.
Moreover, it would be compelling to explore whether Adam can effectively
manage more severe smoothness conditions, such as those bounded by a higher-
order polynomial of the gradient norm.
#### Supplementary materials
Supplementary materials including proofs can be found at
https://arxiv.org/abs/2208.09900.
#### Acknowledgement
This work is funded by the Strategic Priority Research Program of the Chinese
Academy of Sciences under Grant No. XDB0680101, CAS Project for Young
Scientists in Basic Research under Grant No. YSBR-034, Innovation Project of
ICT CAS under Grant No. E261090, NSFC under Grant No. 12326608, and Hetao
Shenzhen-Hong Kong Science and Technology Innovation Cooperation Zone Project
under Grant No. HZQSWS-KCCYB-2024016.
## Appendix A Proof of Theorem 2
In this section, we prove Theorem 2. We consider the following function with
variable $\bm{w}=(x,y)\in\mathbb{R}^{2}$:
$f(\bm{w})=f((x,y))=f_{1}(x)+f_{2}(y)$, where
$f_{1}(x)=\left\\{\begin{aligned}
&\frac{L_{0}\exp^{L_{1}x-1}}{L_{1}^{2}}&,x\in[\frac{1}{L_{1}},\infty),\\\
&\frac{L_{0}x^{2}}{2}+\frac{L_{0}}{2L_{1}^{2}}&,x\in[-\frac{1}{L_{1}},\frac{1}{L_{1}}],\\\
&\frac{L_{0}\exp^{-L_{1}x-1}}{L_{1}^{2}}&,x\in(-\infty,-\frac{1}{L_{1}}].\end{aligned}\right.$
(8) $f_{2}(y)=\left\\{\begin{aligned}
&\varepsilon(y-1)+\frac{\varepsilon}{2}&,y\in[1,\infty),\\\
&\frac{\varepsilon}{2}y^{2}&,y\in[-1,1],\\\
&-\varepsilon(y+1)+\frac{\varepsilon}{2}&,y\in(-\infty,-1].\end{aligned}\right.$
(9)
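For reference, a direct implementation (ours; the constants $L_{0}$, $L_{1}$, and $\varepsilon$ are illustrative placeholders) of the objective in equation 8 and equation 9 and of its gradient is sketched below; it can be used to replay the arguments of Parts I and II numerically.

```python
# Sketch: the worst-case objective f(w) = f1(x) + f2(y) of equations 8-9.
import numpy as np

L0, L1, EPS = 1.0, 1.0, 0.1      # illustrative choices of L_0, L_1, epsilon

def f1(x):
    if x >= 1 / L1:
        return L0 * np.exp(L1 * x - 1) / L1 ** 2
    if x <= -1 / L1:
        return L0 * np.exp(-L1 * x - 1) / L1 ** 2
    return L0 * x ** 2 / 2 + L0 / (2 * L1 ** 2)

def df1(x):                       # piecewise derivative of f1
    if x >= 1 / L1:
        return L0 * np.exp(L1 * x - 1) / L1
    if x <= -1 / L1:
        return -L0 * np.exp(-L1 * x - 1) / L1
    return L0 * x

def f2(y):
    if y >= 1:
        return EPS * (y - 1) + EPS / 2
    if y <= -1:
        return -EPS * (y + 1) + EPS / 2
    return EPS * y ** 2 / 2

def df2(y):                       # piecewise derivative of f2
    return EPS if y >= 1 else (-EPS if y <= -1 else EPS * y)

def f(w):                         # w = (x, y)
    return f1(w[0]) + f2(w[1])

def grad(w):
    return np.array([df1(w[0]), df2(w[1])])
```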
The construction of both functions, equation 8 and equation 9, is motivated by
zhang2019gradient. One improvement here is that we combine them into a single
function of $\bm{w}=(x,y)\in\mathbb{R}^{2}$,
$f(\bm{w})=f((x,y))=f_{1}(x)+f_{2}(y)$, which yields a stronger
conclusion: the constructed $f$ is independent of $\eta_{1}$. It is easy
to see that this $f(\bm{w})$ satisfies the $(L_{0},L_{1})$-smooth condition
with exactly the constants $L_{0}$ and $L_{1}$ used in its definition. We now
restate Theorem 2 with constants specified.
###### Theorem 3 (Theorem 2, restated).
Consider function $f(\bm{w})=f((x,y))=f_{1}(x)+f_{2}(y)$ with $f_{1}(x)$ and
$f_{2}(y)$ defined in equation 8 and equation 9. Consider gradient descent
with diminishing learning rates: $\bm{w}_{k+1}=\bm{w}_{k}-\eta_{k}\nabla
f(\bm{w}_{k})$, where $\eta_{k}=\frac{\eta_{1}}{\sqrt{k}}$. Then for any
$T,M,\bar{f}>0$, denote
$\varepsilon=\sqrt{\frac{\frac{L_{1}M}{2}+\frac{L_{0}}{4}}{2(1+\sqrt{2})(\log(\frac{L_{1}M}{2L_{0}}+\frac{1}{4})+1)}\frac{\bar{f}}{4\sqrt{T}}}$.
As long as $M>\max\left\\{\frac{2(e^{\frac{\log
2}{\sqrt{2}-1}-1}-\frac{1}{4})L_{0}}{L_{1}},\varepsilon\right\\}$ and
$\frac{\bar{f}}{\varepsilon}>6$, there exists an initialization
$\bm{w}_{0}=(x_{0},y_{0})$ such that $M=\sup\\{\|\nabla
f(\bm{w})\|:\bm{w}\text{ such that }f(\bm{w})\leq f(\bm{w}_{0})\\}$ and
$f(\bm{w}_{0})-\min_{\bm{w}}f(\bm{w})=\bar{f}$, and for any $\eta_{1}>0$, $\|\nabla
f(\bm{w}_{k})\|\geq\varepsilon$ whenever $k<T$.
One can immediately see from the above theorem that
$\varepsilon^{2}=\tilde{\Theta}(\frac{M(f(\bm{w}_{0})-f^{*})}{\sqrt{T}})$,
which gives Theorem 2. Before giving the proof of Theorem 3, we briefly
discuss the difference between our result and (zhang2019gradient, Theorem 4).
Generally speaking, our result is stronger, because we pick the function
before the learning rate: we prove that there exist a function $f$ and an
initialization such that, for any learning rate, GD takes a long time to reach
a stationary point. In contrast, (zhang2019gradient, Theorem 4) picks the
learning rate before the function: they prove that for any learning rate,
there exist a function $f$ and an initialization such that GD takes a long
time to reach a stationary point.
We next present the proof of Theorem 3 in Part I and Part II as follows. For
simplicity, we let $\|\cdot\|$ denote the $\ell_{\infty}$ norm; the proof
extends easily to other norms given the equivalence of norms on
$\mathbb{R}^{2}$. We pick
$x_{0}=\frac{\log(\frac{L_{1}M}{2L_{0}}+\frac{1}{4})+1}{L_{1}}$ and
$y_{0}=\frac{f_{1}(x_{0})-\frac{L_{0}}{2L_{1}^{2}}}{\varepsilon}-\frac{1}{2}$.
We have $f_{1}(x_{0})-\min_{x}f_{1}(x)=f_{2}(y_{0})-\min_{y}f_{2}(y)$, and
thus
$f((x_{0},y_{0}))-\min_{x,y}f((x,y))=2\left(f_{1}(x_{0})-\min_{x}f_{1}(x)\right)$.
As $M>\varepsilon$, $\sup\\{\|\nabla f(\bm{w})\|:\bm{w}\text{ such that
}f(\bm{w})\leq f(\bm{w}_{0})\\}$ is achieved at $(x_{0}^{\prime},0)$ where
$x_{0}^{\prime}$ satisfies
$f_{1}(x_{0}^{\prime})-\min_{x}f_{1}(x)=2\left(f_{1}(x_{0})-\min_{x}f_{1}(x)\right)$.
By simple calculation, we have $\sup\\{\|\nabla f(\bm{w})\|:\bm{w}\text{ such
that }f(\bm{w})\leq f(\bm{w}_{0})\\}=M$.
In the proof, we use $x_{k}$ to denote the value of $x$ (i.e., the first
component of $\bm{w}\in\mathbb{R}^{2}$) at the $k$-th iteration of gradient
descent, and similarly for $y_{k}$.
#### Part I: Large $\eta_{1}$ can cause divergence.
In this part, we prove that: when using the large initial learning rate
$\eta_{1}\geq\frac{L_{1}(1+\sqrt{2})|x_{0}|}{L_{0}\exp^{L_{1}|x_{0}|-1}}$,
decay-learning-rate gradient descent will never reach stationary points.
We prove this claim by induction. When $k=1$, we claim the following two
statements are true:
(1-I): $|x_{1}|\geq\sqrt{2}|x_{0}|$.
(1-II):
$\eta_{2}=\frac{\eta_{1}}{\sqrt{2}}\geq\frac{L_{1}(1+\sqrt{2})|x_{1}|}{L_{0}\exp^{L_{1}|x_{1}|-1}}$.
We first prove (1-I): without loss of generality, we assume $x_{0}>0$. By the
update rule of gradient descent, we have
$\displaystyle x_{1}$ $\displaystyle=$ $\displaystyle
x_{0}-\eta_{1}\frac{\partial f(x_{0})}{\partial x}$
$\displaystyle\overset{equation~{}\ref{lowerbound_f1}}{=}$ $\displaystyle
x_{0}-\eta_{1}\frac{L_{0}\exp^{L_{1}x_{0}-1}}{L_{1}}$ $\displaystyle\leq$
$\displaystyle
x_{0}-\frac{L_{1}(1+\sqrt{2})|x_{0}|}{L_{0}\exp^{L_{1}|x_{0}|-1}}\frac{L_{0}\exp^{L_{1}x_{0}-1}}{L_{1}}=-\sqrt{2}x_{0}.$
So $|x_{1}|\geq\sqrt{2}|x_{0}|$ and (1-I) is proved. We now prove (1-II).
Before that, we introduce the following lemma.
###### Lemma 2.
Consider any $x,y\in\\{z:|z|\geq\frac{\log
2}{(\sqrt{2}-1)L_{1}},z\in\mathbb{R}\\}$. If $|y|\geq\sqrt{2}|x|$, then we have
$\frac{|y|}{\exp^{L_{1}|y|}}\leq\frac{1}{\sqrt{2}}\frac{|x|}{\exp^{L_{1}|x|}}$.
###### Proof.
Let $g(z)=\frac{z}{\exp^{L_{1}z}}$. It is easy to see that $g^{\prime}(z)<0$
when $z>\frac{1}{L_{1}}$; since $\frac{\log
2}{(\sqrt{2}-1)L_{1}}>\frac{1}{L_{1}}$, $g$ is decreasing on the considered
domain. Therefore, when $z_{1}\geq\sqrt{2}z_{2}$, we have $g(z_{1})\leq
g(\sqrt{2}z_{2})$. When $z_{2}\geq\frac{\log 2}{(\sqrt{2}-1)L_{1}}$, we have
$\displaystyle z_{2}\geq\frac{\log 2}{(\sqrt{2}-1)L_{1}}$
$\displaystyle\Leftrightarrow$ $\displaystyle\sqrt{2}L_{1}z_{2}\geq\log
2+L_{1}z_{2}$ $\displaystyle\Leftrightarrow$
$\displaystyle\exp^{\sqrt{2}L_{1}z_{2}}\geq 2\exp^{L_{1}z_{2}}$
$\displaystyle\Leftrightarrow$
$\displaystyle\frac{1}{\exp^{\sqrt{2}L_{1}z_{2}}}\leq\frac{1}{2\exp^{L_{1}z_{2}}}$
$\displaystyle\Leftrightarrow$
$\displaystyle\frac{\sqrt{2}z_{2}}{\exp^{\sqrt{2}L_{1}z_{2}}}\leq\frac{z_{2}}{\sqrt{2}\exp^{L_{1}z_{2}}}.$
Therefore, we have
$\frac{z_{1}}{\exp^{L_{1}z_{1}}}\leq\frac{\sqrt{2}z_{2}}{\exp^{\sqrt{2}L_{1}z_{2}}}\leq\frac{z_{2}}{\sqrt{2}\exp^{L_{1}z_{2}}}.$
The proof of Lemma 2 is completed.
∎
Now we prove (1-II):
$\displaystyle\eta_{2}=\frac{\eta_{1}}{\sqrt{2}}$ $\displaystyle\geq$
$\displaystyle\frac{L_{1}(1+\sqrt{2})|x_{0}|}{L_{0}\exp^{L_{1}|x_{0}|-1}}\frac{1}{\sqrt{2}}$
$\displaystyle\overset{\text{(1-I) and Lemma
\ref{lemma_lower_bound_decay_lr}}}{\geq}$
$\displaystyle\frac{L_{1}(1+\sqrt{2})|x_{1}|}{L_{0}\exp^{L_{1}|x_{1}|-1}}\sqrt{2}\frac{1}{\sqrt{2}}$
$\displaystyle=$
$\displaystyle\frac{L_{1}(1+\sqrt{2})|x_{1}|}{L_{0}\exp^{L_{1}|x_{1}|-1}}$
So (1-II) is proved. Now we suppose the following two claims hold for $k=2m$
where $m\in\mathbb{N}^{+}$.
(2m-I): $|x_{2m+1}|\geq\sqrt{2}|x_{2m}|$.
(2m-II):
$\eta_{2m+2}=\frac{\eta_{1}}{\sqrt{2m+2}}\geq\frac{L_{1}(1+\sqrt{2})|x_{2m+1}|}{L_{0}\exp^{L_{1}|x_{2m+1}|-1}}$
We then prove that the corresponding claims hold for $k=2m+1$.
((2m+1)-I): $|x_{2m+2}|\geq\sqrt{2}|x_{2m+1}|$.
((2m+1)-II):
$\eta_{2m+3}=\frac{\eta_{1}}{\sqrt{2m+3}}\geq\frac{L_{1}(1+\sqrt{2})|x_{2m+2}|}{L_{0}\exp^{L_{1}|x_{2m+2}|-1}}$.
We first prove ((2m+1)-I):
$\displaystyle x_{2m+2}$ $\displaystyle=$ $\displaystyle
x_{2m+1}-\eta_{2m+2}\frac{\partial f(x_{2m+1})}{\partial x}$
$\displaystyle\overset{equation~{}\ref{lowerbound_f1}}{=}$ $\displaystyle
x_{2m+1}-\eta_{2m+2}\frac{L_{0}}{L_{1}}\exp^{L_{1}x_{2m+1}-1}$
$\displaystyle\overset{\text{(2m-II)}}{\leq}$ $\displaystyle
x_{2m+1}-\frac{L_{1}(1+\sqrt{2})|x_{2m+1}|}{L_{0}\exp^{L_{1}|x_{2m+1}|-1}}\frac{L_{0}\exp^{L_{1}x_{2m+1}-1}}{L_{1}}$
$\displaystyle\leq$ $\displaystyle-\sqrt{2}x_{2m+1}.$
So $|x_{2m+2}|\geq\sqrt{2}|x_{2m+1}|$ and ((2m+1)-I) is proved. We now prove
((2m+1)-II).
$\displaystyle\eta_{2m+3}$ $\displaystyle=$
$\displaystyle\eta_{2m+2}\sqrt{\frac{2m+2}{2m+3}}$
$\displaystyle\overset{\text{(2m-II)}}{\geq}$
$\displaystyle\frac{L_{1}(1+\sqrt{2})|x_{2m+1}|}{L_{0}\exp^{L_{1}|x_{2m+1}|-1}}\sqrt{\frac{2m+2}{2m+3}}$
$\displaystyle\overset{\text{((2m+1)-I) and Lemma
\ref{lemma_lower_bound_decay_lr}}}{\geq}$
$\displaystyle\frac{L_{1}(1+\sqrt{2})|x_{2m+2}|}{L_{0}\exp^{L_{1}|x_{2m+2}|-1}}\sqrt{2}\sqrt{\frac{2m+2}{2m+3}}$
$\displaystyle\geq$
$\displaystyle\frac{L_{1}(1+\sqrt{2})|x_{2m+2}|}{L_{0}\exp^{L_{1}|x_{2m+2}|-1}}.$
So ((2m+1)-II) is proved. The same argument applies when $k$ is odd. By
the principle of induction, $|x_{k+1}|\geq\sqrt{2}|x_{k}|$ for every $k\geq
1$. Since $f_{1}(x)$ grows exponentially, gradient descent in this regime
never reaches a stationary point.
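A short numerical replay of this divergence argument (a sketch with illustrative constants, ours; not one of the paper's experiments):

```python
# Sketch: with the large eta_1 of Part I and eta_k = eta_1 / sqrt(k),
# the x-coordinate oscillates with geometrically growing magnitude.
import numpy as np

L0 = L1 = 1.0

def df1(x):  # derivative of f_1 in the exponential region |x| >= 1/L1
    return np.sign(x) * L0 * np.exp(L1 * abs(x) - 1) / L1

x = 3.0 / L1                                   # start in the exponential region
eta1 = (1 + np.sqrt(2)) * L1 * abs(x) / (L0 * np.exp(L1 * abs(x) - 1))
for k in range(1, 11):
    x = x - eta1 / np.sqrt(k) * df1(x)
    print(k, x)                                # |x_k| >= sqrt(2) * |x_{k-1}|
```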
#### Part II: Small $\eta_{1}$ can cause slow convergence.
In this part, we prove that when using the initialization
$\bm{w}_{0}=(x_{0},y_{0})$ chosen above, decay-learning-rate gradient descent
with a small initial learning rate
$\eta_{1}<\frac{L_{1}(1+\sqrt{2})|x_{0}|}{L_{0}\exp^{L_{1}|x_{0}|-1}}=\frac{(1+\sqrt{2})(\log(\frac{L_{1}M}{2L_{0}}+\frac{1}{4})+1)}{\frac{L_{1}M}{2}+\frac{L_{0}}{4}}$
converges slowly. For any $k\geq 1$, we have
$\displaystyle y_{k}-y_{k+1}$ $\displaystyle=$
$\displaystyle\eta_{k}\frac{\partial f(\bm{w}_{k})}{\partial y}$
$\displaystyle\overset{equation~{}\ref{lowerbound_f2}}{=}$
$\displaystyle\varepsilon\frac{\eta_{1}}{\sqrt{k}}$ $\displaystyle<$
$\displaystyle\varepsilon\frac{L_{1}(1+\sqrt{2})|x_{0}|}{L_{0}\exp^{L_{1}|x_{0}|-1}}\frac{1}{\sqrt{k}}$
Therefore, we have
$\displaystyle\sum_{k=1}^{K}(y_{k}-y_{k+1})=\sum_{k=1}^{K}\varepsilon\frac{\eta_{1}}{\sqrt{k}}<2\sqrt{K}\varepsilon\eta_{1}<2\sqrt{K}\varepsilon\frac{L_{1}(1+\sqrt{2})|x_{0}|}{L_{0}\exp^{L_{1}|x_{0}|-1}}.$
With the initialization $y_{0}$, the following conclusion is immediate:
whenever
$2\varepsilon\sqrt{k}\frac{(1+\sqrt{2})(\log(\frac{L_{1}M}{2L_{0}}+\frac{1}{4})+1)}{\frac{L_{1}M}{2}+\frac{L_{0}}{4}}=2\varepsilon\sqrt{k}\frac{L_{1}(1+\sqrt{2})|x_{0}|}{L_{0}\exp^{L_{1}|x_{0}|-1}}<y_{0}-1=\frac{\left(f_{1}(x_{0})-\min_{x}f_{1}(x)\right)}{\varepsilon}-\frac{3}{2}$,
we have $\frac{\partial f(\bm{w}_{k})}{\partial y}=\varepsilon$. In other
words, we have: $\|\nabla f(\bm{w}_{k})\|\geq\varepsilon$ for all
$k<(\frac{\frac{L_{1}M}{2}+\frac{L_{0}}{4}}{2(1+\sqrt{2})(\log(\frac{L_{1}M}{2L_{0}}+\frac{1}{4})+1)})^{2}\frac{(\frac{f_{2}(y_{0})-\min_{y}f_{2}(y)}{\varepsilon}-\frac{3}{2})^{2}}{\varepsilon^{2}}$.
Recall that
$f(\bm{w}_{0})-\min_{\bm{w}}f(\bm{w})=2(f_{2}(y_{0})-\min_{y}f_{2}(y))$ and
$(\frac{\frac{L_{1}M}{2}+\frac{L_{0}}{4}}{2(1+\sqrt{2})(\log(\frac{L_{1}M}{2L_{0}}+\frac{1}{4})+1)})^{2}\frac{(\frac{f_{2}(y_{0})-\min_{y}f_{2}(y)}{\varepsilon}-\frac{3}{2})^{2}}{\varepsilon^{2}}<T$
by the definition of $\varepsilon$; the proof is completed.
## Appendix B Proof of Theorem 1
This appendix provides the formal proof of Theorem 1 and is organized as
follows. In Section B.1, we introduce the notation used in the proof. In
Section B.2, we restate Theorem 1 with constants specified. In Section B.3, we
prepare by proving auxiliary lemmas. Finally, in Section B.4, we prove
Theorem 1.
### B.1 Notations
Here we provide a complete list of the notation used in this appendix, for
ease of reference.
* •
For $k_{1},k_{2}\in\mathbb{N}^{+}$ and $i_{1},i_{2}\in\\{0,\cdots,n-1\\}$, we
write $(k_{1},i_{1})\leq(<)(k_{2},i_{2})$ if either $k_{1}<k_{2}$, or
$k_{1}=k_{2}$ and $i_{1}\leq(<)i_{2}$.
* •
We define the function $g:[0,1]\rightarrow\mathbb{R}$ (a numerical sketch of
$g$ is given after this list) as
$\displaystyle
g(\beta_{2})\triangleq\max\left\\{\frac{1}{\sqrt{\beta_{2}^{n-1}}}-1,1-\frac{1}{\sqrt{\beta_{2}^{n-1}+8n\frac{1-\beta_{2}^{n-1}}{\beta_{2}^{n}}}},1-\sqrt{\beta_{2}},\sqrt{\frac{\beta_{2}}{\left(1-(1-\beta_{2})\frac{2n}{\beta_{2}^{n}}\right)}}-1\right\\}.$
* •
We define constants $\\{C_{i}\\}_{i=1}^{13}$ as follows:
$\displaystyle C_{1}$
$\displaystyle\triangleq\frac{(1-\beta_{1})^{2}}{1-\beta_{2}}\frac{1}{1-\frac{\beta_{1}^{2}}{\beta_{2}}}+1,$
(10) $\displaystyle C_{2}$ $\displaystyle\triangleq
nC_{1}+\frac{\beta_{1}}{1-\beta_{1}}C_{1}\left(1+\sqrt{2}\right),$
$\displaystyle C_{3}$ $\displaystyle\triangleq
C_{1}\left(n(L_{0}+L_{1}\sqrt{D_{0}})+2\sqrt{2}(L_{0}+L_{1}\sqrt{D_{0}})\frac{\sqrt{1-\beta_{2}}}{1-\sqrt{\beta_{2}}}\frac{\sqrt{\beta_{2}}}{1-\sqrt{\beta_{2}}}+8\sqrt{2n}L_{0}\frac{1}{1-\beta_{2}^{n}}\right),$
$\displaystyle C_{4}$ $\displaystyle\triangleq
4L_{1}C_{1}\sqrt{D_{1}}\frac{\sqrt{1-\beta_{2}}}{1-\sqrt{\beta_{2}}}$
$\displaystyle C_{5}$ $\displaystyle\triangleq
n^{2}(1+n\sqrt{d}C_{1}\eta_{1}L_{1}\sqrt{n}\sqrt{D_{1}})\left(C_{4}+\frac{dC_{4}\sqrt{D_{1}}}{1-\sqrt{\beta_{2}^{n}}}\right),$
$\displaystyle C_{6}$
$\displaystyle\triangleq\left(dC_{3}+\frac{C_{4}n\sqrt{D_{1}}}{1-\sqrt{\beta_{2}^{n}}}\right)\eta^{2}_{1},$
$\displaystyle C_{7}$ $\displaystyle\triangleq
3n\left(C_{4}+\frac{dC_{4}}{1-\sqrt{\beta_{2}^{n}}}\right)\left(nL_{0}+L_{1}\sqrt{n}\sqrt{D_{0}}\right)n^{2}\sqrt{d}C_{1}\eta_{1}^{3}+\left(dC_{3}+\frac{C_{2}C_{4}n\sqrt{D_{1}}}{1-\sqrt{\beta_{2}^{n}}}\right)\eta^{2}_{1},$
$\displaystyle C_{8}$
$\displaystyle\triangleq\sqrt{\frac{2n^{2}}{\beta_{2}^{n}}}L_{1}\sqrt{D_{1}}n\sqrt{n}+dg(\beta_{2})\left(n-1+\frac{1+\beta_{1}}{1-\beta_{1}}\right)\frac{\sqrt{2}n}{\beta_{2}^{\frac{n}{2}}}L_{1}C_{1}\sqrt{D_{1}}\left(1+\frac{1}{1-\beta_{2}^{n}}\right)(n+n^{\frac{5}{2}}\sqrt{d}C_{1}\eta_{1}L_{1}\sqrt{D_{1}})+2\frac{\beta_{1}}{(1-\beta_{1})\eta_{1}}\sqrt{d}C_{1},$
$\displaystyle C_{9}$
$\displaystyle\triangleq\sqrt{\frac{2n^{2}}{\beta_{2}^{n}}}d(n^{2}L_{0}+n\sqrt{n}L_{1}\sqrt{D_{0}})C_{1}\eta_{1}^{2}+g(\beta_{2})\left(n-1+\frac{1+\beta_{1}}{1-\beta_{1}}\right)\frac{\sqrt{2}n}{\beta_{2}^{\frac{n}{2}}}\left(n+\frac{2\sqrt{2}\beta_{1}}{1-\beta_{1}}\right)C_{1}(L_{0}+L_{1}\sqrt{D_{0}})d\sqrt{d}\eta^{2}_{1},$
$\displaystyle C_{10}$ $\displaystyle\triangleq
3dg(\beta_{2})\left(n-1+\frac{1+\beta_{1}}{1-\beta_{1}}\right)\frac{\sqrt{2}n}{\beta_{2}^{\frac{n}{2}}}L_{1}C_{1}\sqrt{D_{1}}\left(1+\frac{1}{1-\beta_{2}^{n}}\right)n\left(nL_{0}+L_{1}\sqrt{n}\sqrt{D_{0}}\right)n\sqrt{d}C_{1}\eta_{1}^{3}+C_{9},$
$\displaystyle C_{11}$
$\displaystyle\triangleq(\frac{1}{2}+C_{2})C_{5}+C_{8}+\frac{3L_{1}\sqrt{n}\sqrt{D_{1}}C_{2}^{2}d}{2},$
$\displaystyle C_{12}$
$\displaystyle\triangleq(\frac{1}{2}+C_{2})C_{6}+C_{9}+\frac{nL_{0}+L_{1}\sqrt{n}\sqrt{D_{0}}}{2}3C_{2}^{2}d\eta_{1}^{2},$
$\displaystyle C_{13}$
$\displaystyle\triangleq(\frac{1}{2}+C_{2})C_{7}+C_{10}+\frac{nL_{0}+L_{1}\sqrt{n}\sqrt{D_{0}}}{2}3C_{2}^{2}d\eta_{1}^{2}.$
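The following small numerical check (ours; $n$ and the $\beta_{2}$ grid are illustrative) evaluates $g(\beta_{2})$ and confirms that every branch of the maximum vanishes as $\beta_{2}\rightarrow 1$, which is why taking $\beta_{2}$ close to $1$ shrinks the $g(\beta_{2})$-terms in the bounds below:

```python
# Sketch: g(beta2) from the notation list tends to 0 as beta2 -> 1.
import math

def g(beta2, n):
    a = 1 / math.sqrt(beta2 ** (n - 1)) - 1
    b = 1 - 1 / math.sqrt(beta2 ** (n - 1)
                          + 8 * n * (1 - beta2 ** (n - 1)) / beta2 ** n)
    c = 1 - math.sqrt(beta2)
    d = math.sqrt(beta2 / (1 - (1 - beta2) * 2 * n / beta2 ** n)) - 1
    return max(a, b, c, d)

n = 10
for beta2 in [0.99, 0.999, 0.9999, 0.99999]:
    print(beta2, g(beta2, n))
```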
### B.2 Restate Theorem 1
Here we restate Theorem 1 with constants specified.
###### Theorem 4 (Theorem 1, restated).
Consider Adam defined as Alg. (1) with diminishing learning rate
$\eta_{k}\equiv\frac{\eta_{1}}{\sqrt{k}}$. Let Assumptions 1 and 2 hold.
Suppose the hyperparameters satisfy $\gamma<\beta_{2}<1$ and
$0\leq\beta_{1}^{2}<\beta_{2}$, where $\gamma$ is defined as the solution of
$\sqrt{d}g(x)\frac{n}{x^{\frac{n}{2}}}=\frac{1}{2(4+\sqrt{2})\sqrt{D_{1}}\left(n-1+\frac{1+\beta_{1}}{1-\beta_{1}}\right)}$
with respect to $x$. Then, either
$\min_{k\in[1,T]}\|\nabla f(\bm{w}_{k,0})\|\leq
2\sqrt{d}(2\sqrt{2}+1)\sqrt{D_{0}}g(\beta_{2})\left(n-1+\frac{1+\beta_{1}}{1-\beta_{1}}\right)\sqrt{\frac{2n}{\beta_{2}^{n}}},$
or
$\displaystyle\min_{k\in[1,T]}\left\\{\frac{\|\nabla
f(\bm{w}_{k,0})\|}{\sqrt{D_{1}}},\frac{\|\nabla
f(\bm{w}_{k,0})\|^{2}}{\sqrt{D_{0}}+\xi}\right\\}\leq(4(2\sqrt{2}+1))\frac{f(\bm{w}_{1,0})-\min_{\bm{w}}f(\bm{w})}{\eta_{1}\sqrt{T}}$
$\displaystyle~{}~{}~{}+4(2\sqrt{2}+1)\left(C_{12}+\frac{\sqrt{D_{0}}+\xi}{4\sqrt{D_{1}}}C_{11}\eta_{1}^{2}\right)\frac{\ln
T}{\eta_{1}\sqrt{T}}+4(2\sqrt{2}+1)\frac{C_{13}+\frac{\sqrt{D_{0}}+\xi}{4\sqrt{D_{1}}}C_{11}}{\eta_{1}\sqrt{T}}.$
### B.3 Auxiliary Lemmas
In this section, we introduce auxiliary lemmas that will be used later. In the
remainder of the proof, we assume without loss of generality that $\eta_{1}$
is small enough that the following requirements are fulfilled ($C_{1}$ and
$C_{2}$ are defined in Eq. (10)):
* •
$2C_{2}\sqrt{d}\eta_{1}\leq\frac{1}{L_{1}}$. This will later ensure that we
can directly apply the definition of the $(L_{0},L_{1})$-smooth condition
(Assumption 1) to the parameter sequence $\\{\bm{w}_{k,i}\\}_{k,i}$;
* •
$\frac{1}{4(2\sqrt{2}+1)}\geq\sqrt{D_{1}}C_{11}\eta_{1}$. This will later
ensure that the second-order term is smaller than the first-order term at the
end of the proof.
The proof extends easily to the general case: select a large enough epoch $K$,
use epoch $K$ as a new starting point, and derive the results for the epochs
after $K$; the epochs before $K$ can be uniformly bounded since $\eta_{k}$ is
decaying and $K$ is finite, which then yields the desired result for all
epochs.
Without loss of generality, we also take the following initialization:
$\bm{w}_{1,0}=\bm{w}_{0}$, $\bm{m}_{1,-1}=\nabla f_{\tau_{1,-1}}(\bm{w}_{0})$
where $\tau_{1,-1}$ can be any integer in $[0,n-1]$, and
$\bm{\nu}_{l,1,-1}=\max_{j}\\{\partial_{l}f_{j}(\bm{w}_{0})^{2}\\}~{}\forall
l$, where the maximum is taken component-wise. We adopt this initialization to
keep the proof concise; the proof extends easily to arbitrary initializations,
since the contribution of the initialization to the exponentially decayed
averages of Adam (both $\bm{m}_{k,i}$ and $\bm{\nu}_{k,i}$) decays rapidly as
$k$ increases.
The following lemma shows that $f$ is also $(L_{0},L_{1})$-smooth under
Assumptions 1 and 2 (though with constants different from those of the
$f_{i}$).
###### Lemma 3.
With Assumptions 1 and 2, $f$ satisfies
$(nL_{0}+L_{1}\sqrt{n}\sqrt{D_{0}},L_{1}\sqrt{n}\sqrt{D_{1}})$-smooth
condition.
###### Proof.
$\forall\bm{w}_{1},\bm{w}_{2}\in\mathbb{R}^{d}$ satisfying
$\|\bm{w}_{1}-\bm{w}_{2}\|\leq\frac{1}{L_{1}}$,
$\displaystyle\|\nabla f(\bm{w}_{1})-\nabla
f(\bm{w}_{2})\|\leq\sum_{i=0}^{n-1}\|\nabla f_{i}(\bm{w}_{1})-\nabla
f_{i}(\bm{w}_{2})\|\leq\sum_{i=0}^{n-1}(L_{0}+L_{1}\|\nabla
f_{i}(\bm{w}_{1})\|)\|\bm{w}_{1}-\bm{w}_{2}\|$ $\displaystyle\leq$
$\displaystyle\left(nL_{0}+L_{1}\sqrt{n}\sqrt{\sum_{i=0}^{n-1}\|\nabla
f_{i}(\bm{w}_{1})\|^{2}}\right)\|\bm{w}_{1}-\bm{w}_{2}\|\leq(nL_{0}+L_{1}\sqrt{n}\sqrt{D_{0}+D_{1}\|\nabla
f(\bm{w}_{1})\|^{2}})\|\bm{w}_{1}-\bm{w}_{2}\|$ $\displaystyle\leq$
$\displaystyle(nL_{0}+L_{1}\sqrt{n}(\sqrt{D_{0}}+\sqrt{D_{1}}\|\nabla
f(\bm{w}_{1})\|))\|\bm{w}_{1}-\bm{w}_{2}\|\leq(nL_{0}+L_{1}\sqrt{n}\sqrt{D_{0}}+L_{1}\sqrt{n}\sqrt{D_{1}}\|\nabla
f(\bm{w}_{1})\|)\|\bm{w}_{1}-\bm{w}_{2}\|.$
The proof is completed. ∎
The following lemma bounds the update norm of Adam.
###### Lemma 4 (Bounded Update).
If $\beta_{1}<\sqrt{\beta_{2}}$, we have $\forall k\in\mathbb{N}^{+}$,
$i\in\\{0,\cdots,n-1\\}$,
$\frac{|\bm{m}_{l,k,i}|}{\sqrt{\bm{\nu}_{l,k,i}}+\xi}\leq C_{1},$
where $C_{1}$ is defined in Eq. (10).
Furthermore, we have $|\bm{w}_{l,k,i+1}-\bm{w}_{l,k,i}|\leq C_{1}\eta_{k}$,
and thus $\|\bm{w}_{k,i+1}-\bm{w}_{k,i}\|\leq C_{1}\eta_{k}\sqrt{d}$.
###### Proof.
By the definition of $\bm{m}_{k,i}$, we have
$\displaystyle(\bm{m}_{l,k,i})^{2}$ $\displaystyle=$
$\displaystyle\left((1-\beta_{1})\sum_{j=0}^{i}\beta_{1}^{(k-1)n+i-((k-1)n+j)}\partial_{l}f_{\tau_{k,j}}(\bm{w}_{k,j})\right.$
$\displaystyle+\left.(1-\beta_{1})\sum_{m=1}^{k-1}\sum_{j=0}^{n-1}\beta_{1}^{(k-1)n+i-((m-1)n+j)}\partial_{l}f_{\tau_{m,j}}(\bm{w}_{m,j})+\beta_{1}^{(k-1)n+i+1}\partial_{l}f_{\tau_{1,-1}}(\bm{w}_{1,0})\right)^{2}$
$\displaystyle\leq$
$\displaystyle\left((1-\beta_{1})\sum_{j=0}^{i}\beta_{1}^{(k-1)n+i-((k-1)n+j)}|\partial_{l}f_{\tau_{k,j}}(\bm{w}_{k,j})|\right.$
$\displaystyle+\left.(1-\beta_{1})\sum_{m=1}^{k-1}\sum_{j=0}^{n-1}\beta_{1}^{(k-1)n+i-((m-1)n+j)}|\partial_{l}f_{\tau_{m,j}}(\bm{w}_{m,j})|+\beta_{1}^{(k-1)n+i+1}\max_{s\in[n]}|\partial_{l}f_{s}(\bm{w}_{1,0})|\right)^{2}$
$\displaystyle\overset{(\star)}{\leq}$
$\displaystyle\left((1-\beta_{2})\sum_{j=0}^{i}\beta_{2}^{(k-1)n+i-((k-1)n+j)}|\partial_{l}f_{\tau_{k,j}}(\bm{w}_{k,j})|^{2}\right.$
$\displaystyle+\left.(1-\beta_{2})\sum_{m=1}^{k-1}\sum_{j=0}^{n-1}\beta_{2}^{(k-1)n+i-((m-1)n+j)}|\partial_{l}f_{\tau_{m,j}}(\bm{w}_{m,j})|^{2}+\beta_{2}^{(k-1)n+i+1}\max_{s\in[n]}|\partial_{l}f_{s}(\bm{w}_{1,0})|^{2}\right)$
$\displaystyle\cdot\left(\frac{(1-\beta_{1})^{2}}{1-\beta_{2}}\sum_{j=0}^{(k-1)n+i}\left(\frac{\beta_{1}^{2}}{\beta_{2}}\right)^{j}+\left(\frac{\beta_{1}^{2}}{\beta_{2}}\right)^{(k-1)n+i+1}\right)$
$\displaystyle\overset{(\ast)}{=}$
$\displaystyle\left(\frac{(1-\beta_{1})^{2}}{1-\beta_{2}}\sum_{j=0}^{(k-1)n+i}\left(\frac{\beta_{1}^{2}}{\beta_{2}}\right)^{j}+\left(\frac{\beta_{1}^{2}}{\beta_{2}}\right)^{(k-1)n+i+1}\right)\bm{\nu}_{l,k,i}$
$\displaystyle\leq$
$\displaystyle\left(\frac{(1-\beta_{1})^{2}}{1-\beta_{2}}\frac{1}{1-\frac{\beta_{1}^{2}}{\beta_{2}}}+1\right)\bm{\nu}_{l,k,i}=C_{1}\bm{\nu}_{l,k,i},$
where Eq. ($\star$) is due to the Cauchy-Schwarz inequality, and Eq.
($\ast$) is due to the definition of $\bm{\nu}_{l,1,-1}$. This completes the
proof of the first claim. The second claim then follows directly from the
update rule
$\bm{w}_{l,k,i+1}-\bm{w}_{l,k,i}=\eta_{k}\frac{\bm{m}_{l,k,i}}{\sqrt{\bm{\nu}_{l,k,i}}+\xi}.$
The proof is completed. ∎
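An empirical sanity check of this bound (a sketch, ours; the gradient stream is synthetic) is given below: along a scalar Adam-style moment recursion with $\beta_{1}<\sqrt{\beta_{2}}$, the observed ratio never exceeds $C_{1}$.

```python
# Sketch: verify |m| / (sqrt(nu) + xi) <= C_1 on a random gradient stream.
import numpy as np

rng = np.random.default_rng(0)
beta1, beta2, xi = 0.9, 0.999, 1e-8
C1 = (1 - beta1) ** 2 / ((1 - beta2) * (1 - beta1 ** 2 / beta2)) + 1

g = rng.standard_normal()
m, nu = g, g ** 2                 # mirrors the initialization used above
worst = abs(m) / (np.sqrt(nu) + xi)
for _ in range(10_000):
    g = rng.standard_normal() * rng.choice([1e-3, 1.0, 1e3])  # wild scales
    m = beta1 * m + (1 - beta1) * g
    nu = beta2 * nu + (1 - beta2) * g ** 2
    worst = max(worst, abs(m) / (np.sqrt(nu) + xi))
print(f"max ratio = {worst:.3f}  <=  C1 = {C1:.3f}")
```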
Define
$\bm{u}_{k}\triangleq\frac{\bm{w}_{k,0}-\beta_{1}\bm{w}_{k,-1}}{1-\beta_{1}}$
(with $\bm{w}_{1,-1}\triangleq\bm{w}_{1,0}$), and let $\bm{u}_{l,k}$ be the
$l$-th component of $\bm{u}_{k}$, $\forall k\in\mathbb{N}^{+}$, $l\in[d]$. The
following lemma bounds the distance between $\bm{u}_{l,k}$ and
$\bm{w}_{l,k,0}$ and the distance between $\bm{u}_{l,k+1}$ and $\bm{u}_{l,k}$.
###### Lemma 5.
$\forall k\geq 1$,
$\displaystyle|\bm{u}_{l,k}-\bm{w}_{l,k,0}|\leq C_{2}\eta_{k},$ (11)
$\displaystyle|\bm{u}_{l,k+1}-\bm{u}_{l,k}|\leq C_{2}\eta_{k},$ (12)
where $C_{2}$ is defined in Eq. (10).
###### Proof.
By Lemma 4, we immediately have $\forall l\in[d]$,
$|\bm{u}_{l,k}-\bm{w}_{l,k,0}|$ is bounded as
$\displaystyle|\bm{u}_{l,k}-\bm{w}_{l,k,0}|=\left|\frac{\bm{w}_{l,k,0}-\beta_{1}\bm{w}_{l,k,-1}}{1-\beta_{1}}-\bm{w}_{l,k,0}\right|$
$\displaystyle=$
$\displaystyle\frac{\beta_{1}}{1-\beta_{1}}\left|\bm{w}_{l,k,0}-\bm{w}_{l,k,-1}\right|\leq\frac{\beta_{1}}{1-\beta_{1}}C_{1}\eta_{1}\frac{1}{\sqrt{k-1}}\leq\frac{\sqrt{2}\beta_{1}}{1-\beta_{1}}C_{1}\eta_{1}\frac{1}{\sqrt{k}}\leq\frac{\sqrt{2}\beta_{1}}{1-\beta_{1}}C_{1}\eta_{k}\leq
C_{2}\eta_{k},$
and
$\displaystyle|\bm{u}_{l,k+1}-\bm{u}_{l,k}|$ $\displaystyle=$
$\displaystyle\left|\frac{\bm{w}_{l,k+1,0}-\beta_{1}\bm{w}_{l,k+1,-1}}{1-\beta_{1}}-\frac{\bm{w}_{l,k,0}-\beta_{1}\bm{w}_{l,k,-1}}{1-\beta_{1}}\right|$
$\displaystyle=$
$\displaystyle\left|\left(\bm{w}_{l,k+1,0}-\bm{w}_{l,k,0}\right)+\frac{\beta_{1}}{1-\beta_{1}}\left(\bm{w}_{l,k+1,0}-\bm{w}_{l,k+1,-1}\right)-\frac{\beta_{1}}{1-\beta_{1}}\left(\bm{w}_{l,k,0}-\bm{w}_{l,k,-1}\right)\right|$
$\displaystyle\leq$
$\displaystyle\left|\bm{w}_{l,k+1,0}-\bm{w}_{l,k,0}\right|+\frac{\beta_{1}}{1-\beta_{1}}\left|\bm{w}_{l,k+1,0}-\bm{w}_{l,k+1,-1}\right|+\frac{\beta_{1}}{1-\beta_{1}}\left|\bm{w}_{l,k,0}-\bm{w}_{l,k,-1}\right|$
$\displaystyle\leq$ $\displaystyle
nC_{1}\eta_{1}\frac{1}{\sqrt{k}}+\frac{\beta_{1}}{1-\beta_{1}}C_{1}\eta_{1}\left(\frac{1}{\sqrt{k}}+\frac{\sqrt{2}}{\sqrt{k}}\right)=C_{2}\eta_{1}\frac{1}{\sqrt{k}}=C_{2}\eta_{k}.$
∎
In the following lemma, we bound the change of the gradient within one epoch.
###### Lemma 6.
$\forall k\in\mathbb{N}^{+},i\in\\{0,\cdots,n-1\\}$,
$\|\nabla
f(\bm{w}_{k,i})\|\leq(1+n\sqrt{d}C_{1}\eta_{1}L_{1}\sqrt{n}\sqrt{D_{1}})\|\nabla
f(\bm{w}_{k,0})\|+\left(nL_{0}+L_{1}\sqrt{n}\sqrt{D_{0}}\right)n\sqrt{d}C_{1}\eta_{k},$
where $C_{1}$ is defined in Eq. (10).
###### Proof.
By Assumption 1 and Lemma 3, we have
$\displaystyle\|\nabla f(\bm{w}_{k,i})\|\leq$ $\displaystyle\|\nabla
f(\bm{w}_{k,0})\|+\left(nL_{0}+L_{1}\sqrt{n}\sqrt{D_{0}}+L_{1}\sqrt{n}\sqrt{D_{1}}\|\nabla
f(\bm{w}_{k,0})\|\right)\|\bm{w}_{k,i}-\bm{w}_{k,0}\|$ $\displaystyle\leq$
$\displaystyle\|\nabla
f(\bm{w}_{k,0})\|+\left(nL_{0}+L_{1}\sqrt{n}\sqrt{D_{0}}+L_{1}\sqrt{n}\sqrt{D_{1}}\|\nabla
f(\bm{w}_{k,0})\|\right)i\sqrt{d}C_{1}\eta_{k}$ $\displaystyle\leq$
$\displaystyle(1+n\sqrt{d}C_{1}\eta_{1}L_{1}\sqrt{n}\sqrt{D_{1}})\|\nabla
f(\bm{w}_{k,0})\|+\left(nL_{0}+L_{1}\sqrt{n}\sqrt{D_{0}}\right)n\sqrt{d}C_{1}\eta_{k}.$
The proof is completed. ∎
We further need a descent lemma assuming $(L_{0},L_{1})$-smooth condition
similar to the case assuming $L$ smoothness. Specifically, for a function $h$
satisfying $L$-smooth condition and two points $\bm{w}$ and $\bm{v}$, by
Taylor’s expansion, we have
$h(\bm{w})\leq h(\bm{v})+\langle\nabla
h(\bm{v}),\bm{w}-\bm{v}\rangle+\frac{L}{2}\|\bm{w}-\bm{v}\|^{2}.$
This is called the “Descent Lemma” in the existing literature (sra2014slides),
as it guarantees that the loss decreases under a proper parameter update. In
parallel with the above inequality, we establish the following descent lemma
under the $(L_{0},L_{1})$-smooth condition.
###### Lemma 7.
Assume that function $h:\mathcal{X}\rightarrow\mathbb{R}$ satisfies the
$(L_{0},L_{1})$-smooth condition, i.e., $\forall\bm{w},\bm{v}\in\mathcal{X}$
satisfying $\|\bm{w}-\bm{v}\|\leq\frac{1}{L_{1}}$,
$\|\nabla h(\bm{w})-\nabla h(\bm{v})\|\leq(L_{0}+L_{1}\|\nabla
h(\bm{v})\|)\|\bm{w}-\bm{v}\|.$
Then, for any three points $\bm{u},\bm{w},\bm{v}\in\mathcal{X}$ satisfying
$\|\bm{w}-\bm{u}\|\leq\frac{1}{L_{1}}$ and
$\|\bm{v}-\bm{u}\|\leq\frac{1}{L_{1}}$, we have
$h(\bm{w})\leq h(\bm{v})+\langle\nabla
h(\bm{u}),\bm{w}-\bm{v}\rangle+\frac{1}{2}(L_{0}+L_{1}\|\nabla
h(\bm{u})\|)(\|\bm{v}-\bm{u}\|+\|\bm{w}-\bm{u}\|)\|\bm{w}-\bm{v}\|.$
###### Proof.
By the Fundamental Theorem of Calculus, we have
$\displaystyle h(\bm{w})=$ $\displaystyle h(\bm{v})+\int_{0}^{1}\langle\nabla
h(\bm{v}+a(\bm{w}-\bm{v})),\bm{w}-\bm{v}\rangle\mathrm{d}a$ $\displaystyle=$
$\displaystyle h(\bm{v})+\langle\nabla
h(\bm{u}),\bm{w}-\bm{v}\rangle+\int_{0}^{1}\langle\nabla
h(\bm{v}+a(\bm{w}-\bm{v}))-\nabla h(\bm{u}),\bm{w}-\bm{v}\rangle\mathrm{d}a$
$\displaystyle\leq$ $\displaystyle h(\bm{v})+\langle\nabla
h(\bm{u}),\bm{w}-\bm{v}\rangle+\int_{0}^{1}\|\nabla
h(\bm{v}+a(\bm{w}-\bm{v}))-\nabla h(\bm{u})\|\|\bm{w}-\bm{v}\|\mathrm{d}a$
$\displaystyle\overset{(\star)}{\leq}$ $\displaystyle h(\bm{v})+\langle\nabla
h(\bm{u}),\bm{w}-\bm{v}\rangle+\int_{0}^{1}(L_{0}+L_{1}\|\nabla
h(\bm{u})\|)\|\bm{v}+a(\bm{w}-\bm{v})-\bm{u}\|\|\bm{w}-\bm{v}\|\mathrm{d}a$
$\displaystyle\leq$ $\displaystyle h(\bm{v})+\langle\nabla
h(\bm{u}),\bm{w}-\bm{v}\rangle+\int_{0}^{1}(L_{0}+L_{1}\|\nabla
h(\bm{u})\|)((1-a)\|\bm{v}-\bm{u}\|+a\|\bm{w}-\bm{u}\|)\|\bm{w}-\bm{v}\|\mathrm{d}a$
$\displaystyle\leq$ $\displaystyle h(\bm{v})+\langle\nabla
h(\bm{u}),\bm{w}-\bm{v}\rangle+\frac{1}{2}(L_{0}+L_{1}\|\nabla
h(\bm{u})\|)(\|\bm{v}-\bm{u}\|+\|\bm{w}-\bm{u}\|)\|\bm{w}-\bm{v}\|,$
where Inequality $(\star)$ is due to
$\|\bm{v}+a(\bm{w}-\bm{v})-\bm{u}\|=\|(1-a)(\bm{v}-\bm{u})+a(\bm{w}-\bm{u})\|\leq(1-a)\|\bm{v}-\bm{u}\|+a\|\bm{w}-\bm{u}\|\leq\frac{1}{L_{1}}.$
Thus the definition of $(L_{0},L_{1})$-smooth condition can be applied and the
proof is completed. ∎
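As a quick sanity check, taking $\bm{u}=\bm{v}$ in Lemma 7 recovers a direct analogue of the classical descent lemma: for $\|\bm{w}-\bm{v}\|\leq\frac{1}{L_{1}}$,
$h(\bm{w})\leq h(\bm{v})+\langle\nabla h(\bm{v}),\bm{w}-\bm{v}\rangle+\frac{1}{2}(L_{0}+L_{1}\|\nabla h(\bm{v})\|)\|\bm{w}-\bm{v}\|^{2},$
i.e., the $L$-smooth bound with $L$ replaced by the local quantity $L_{0}+L_{1}\|\nabla h(\bm{v})\|$.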
Based on Lemma 4, we bound the momentum using the gradient of the current step
plus some error terms.
###### Lemma 8 (Estimation of the norm of the momentum).
We have for all $l\in[d],k\in\mathbb{Z}^{+},i\in[n]$,
$\displaystyle|\bm{m}_{l,k,i}|\leq$
$\displaystyle\max_{i^{\prime}\in[n]}|\partial_{l}f_{i^{\prime}}(\bm{w}_{k,0})|+\left(n+\frac{2\sqrt{2}\beta_{1}}{1-\beta_{1}}\right)C_{1}(L_{0}+L_{1}\sqrt{D_{0}})\sqrt{d}\eta_{k}+L_{1}C_{1}\sqrt{D_{1}}\eta_{k}\sum_{j=0}^{i-1}\|\nabla
f(\bm{w}_{k,j})\|$
$\displaystyle+L_{1}C_{1}\sqrt{D_{1}}\sum_{t=1}^{k-1}\eta_{k-t}\sum_{j=0}^{n-1}\beta_{1}^{tn+i-j}\|\nabla
f(\bm{w}_{k-t,j})\|,$
where $C_{1}$ is defined in Eq. (10). Similarly,
$l\in[d],k\in\mathbb{Z}^{+}/\\{1\\}$,
$|\bm{m}_{l,k-1,n-1}|\leq\max_{i^{\prime}\in[n]}|\partial_{l}f_{i^{\prime}}(\bm{w}_{k,0})|+\sum_{t=1}^{k-1}\sum_{j=0}^{n-1}\beta_{1}^{tn-1-j}C_{1}\eta_{k-t}\sqrt{d}L_{1}\sqrt{D_{1}}\|\nabla
f(\bm{w}_{k-t,j})\|+\frac{2\sqrt{2}(L_{0}+L_{1}\sqrt{D_{0}})C_{1}\sqrt{d}\eta_{k}}{1-\beta_{1}}.$
###### Proof.
To begin with, for any $t\in[k-1]$ and any $j\in[0,n-1]$, we have the
following estimation for $\partial_{l}f_{i}(\bm{w}_{k-t,j})$:
$\displaystyle|\partial_{l}f_{i}(\bm{w}_{k-t,j})|$ $\displaystyle\leq$
$\displaystyle|\partial_{l}f_{i}(\bm{w}_{k,0})|+\sum_{p=j}^{n-1}|\partial_{l}f_{i}(\bm{w}_{k-t,p})-\partial_{l}f_{i}(\bm{w}_{k-t,p+1})|+\sum_{r=1}^{t-1}\sum_{p=0}^{n-1}|\partial_{l}f_{i}(\bm{w}_{k-r,p})-\partial_{l}f_{i}(\bm{w}_{k-r,p+1})|$
$\displaystyle\overset{(\star)}{\leq}$
$\displaystyle|\partial_{l}f_{i}(\bm{w}_{k,0})|+\sum_{p=j}^{n-1}(L_{0}+L_{1}\|\nabla
f_{i}(\bm{w}_{k-t,p})\|)\|\bm{w}_{k-t,p}-\bm{w}_{k-t,p+1}\|$
$\displaystyle+\sum_{r=1}^{t-1}\sum_{p=0}^{n-1}(L_{0}+L_{1}\|\nabla
f_{i}(\bm{w}_{k-r,p})\|)\|\bm{w}_{k-r,p}-\bm{w}_{k-r,p+1}\|$
$\displaystyle\leq$
$\displaystyle|\partial_{l}f_{i}(\bm{w}_{k,0})|+\sum_{p=j}^{n-1}(L_{0}+L_{1}\|\nabla
f_{i}(\bm{w}_{k-t,p})\|)C_{1}\eta_{k-t}\sqrt{d}+\sum_{r=1}^{t-1}\sum_{p=0}^{n-1}(L_{0}+L_{1}\|\nabla
f_{i}(\bm{w}_{k-r,p})\|)C_{1}\eta_{k-r}\sqrt{d}$ $\displaystyle\leq$
$\displaystyle|\partial_{l}f_{i}(\bm{w}_{k,0})|+\sum_{p=j}^{n-1}\left(L_{0}+L_{1}\sqrt{\sum_{i^{\prime}\in[n]}\|\nabla
f_{i^{\prime}}(\bm{w}_{k-t,p})\|^{2}}\right)C_{1}\eta_{k-t}\sqrt{d}$
$\displaystyle+\sum_{r=1}^{t-1}\sum_{p=0}^{n-1}\left(L_{0}+L_{1}\sqrt{\sum_{i^{\prime}\in[n]}\|\nabla
f_{i^{\prime}}(\bm{w}_{k-r,p})\|^{2}}\right)C_{1}\eta_{k-r}\sqrt{d},$
where Inequality $(\star)$ is due to $(L_{0},L_{1})$-smooth condition. By
Assumption 2, the RHS of the above inequality can be bounded as
$\displaystyle|\partial_{l}f_{i}(\bm{w}_{k,0})|+\sum_{p=j}^{n-1}\left(L_{0}+L_{1}\sqrt{D_{1}}\|\nabla
f(\bm{w}_{k-t,p})\|+L_{1}\sqrt{D_{0}}\right)C_{1}\eta_{k-t}\sqrt{d}$
$\displaystyle+\sum_{r=1}^{t-1}\sum_{p=0}^{n-1}\left(L_{0}+L_{1}\sqrt{D_{1}}\|\nabla
f(\bm{w}_{k-r,p})\|+L_{1}\sqrt{D_{0}}\right)C_{1}\eta_{k-r}\sqrt{d}$
$\displaystyle\overset{(*)}{\leq}$
$\displaystyle|\partial_{l}f_{i}(\bm{w}_{k,0})|+\sum_{p=j}^{n-1}L_{1}\sqrt{D_{1}}\|\nabla
f(\bm{w}_{k-t,p})\|C_{1}\eta_{k-t}\sqrt{d}+\sum_{r=1}^{t-1}\sum_{p=0}^{n-1}L_{1}\sqrt{D_{1}}\|\nabla
f(\bm{w}_{k-r,p})\|C_{1}\eta_{k-r}\sqrt{d}$
$\displaystyle+2(L_{0}+L_{1}\sqrt{D_{0}})C_{1}\sqrt{d}\eta_{k-1}(tn-j)$
$\displaystyle\leq$
$\displaystyle|\partial_{l}f_{i}(\bm{w}_{k,0})|+\sum_{p=j}^{n-1}L_{1}\sqrt{D_{1}}\|\nabla
f(\bm{w}_{k-t,p})\|C_{1}\eta_{k-t}\sqrt{d}+\sum_{r=1}^{t-1}\sum_{p=0}^{n-1}L_{1}\sqrt{D_{1}}\|\nabla
f(\bm{w}_{k-r,p})\|C_{1}\eta_{k-r}\sqrt{d}$
$\displaystyle+2\sqrt{2}(L_{0}+L_{1}\sqrt{D_{0}})C_{1}\sqrt{d}\eta_{k}(tn-j).$
where Inequality $(*)$ is due to the fact that $\forall
a,b\in\mathbb{N}^{+},a>b$,
$\sum_{i=0}^{b}\frac{1}{\sqrt{a-i}}\leq\frac{2(b+1)}{\sqrt{a}}$. Similarly, we
have
that for any $j\in[0,n-1]$,
$\displaystyle|\partial_{l}f_{i}(\bm{w}_{k,j})|\leq|\partial_{l}f_{i}(\bm{w}_{k,0})|+\sum_{p=0}^{j-1}|\partial_{l}f_{i}(\bm{w}_{k,p+1})-\partial_{l}f_{i}(\bm{w}_{k,p})|$
$\displaystyle\leq$
$\displaystyle|\partial_{l}f_{i}(\bm{w}_{k,0})|+\sum_{p=0}^{j-1}\left(L_{0}+L_{1}\sqrt{D_{1}}\|\nabla
f(\bm{w}_{k,p})\|+L_{1}\sqrt{D_{0}}\right)C_{1}\eta_{k}\sqrt{d}$
$\displaystyle=$
$\displaystyle|\partial_{l}f_{i}(\bm{w}_{k,0})|+\sum_{p=0}^{j-1}L_{1}\sqrt{D_{1}}\|\nabla
f(\bm{w}_{k,p})\|C_{1}\eta_{k}\sqrt{d}+j(L_{0}+L_{1}\sqrt{D}_{0})C_{1}\sqrt{d}\eta_{k}.$
Therefore, the norm of $\bm{m}_{l,k,i}$ can be bounded as
$\displaystyle|\bm{m}_{l,k,i}|$ $\displaystyle\leq$
$\displaystyle(1-\beta_{1})\sum_{j=0}^{i}\beta_{1}^{(k-1)n+i-((k-1)n+j)}|\partial_{l}f_{\tau_{k,j}}(\bm{w}_{k,j})|+(1-\beta_{1})\sum_{t=1}^{k-1}\sum_{j=0}^{n-1}\beta_{1}^{tn+i-j}|\partial_{l}f_{\tau_{k-t,j}}(\bm{w}_{k-t,j})|$
$\displaystyle+\beta_{1}^{(k-1)n+i+1}|\partial_{l}f_{\tau_{1,0}}(\bm{w}_{1,0})|$
$\displaystyle\leq$
$\displaystyle(1-\beta_{1})\sum_{j=0}^{i}\beta_{1}^{(k-1)n+i-((k-1)n+j)}|\partial_{l}f_{\tau_{k,j}}(\bm{w}_{k,0})|+(1-\beta_{1})\sum_{t=1}^{k-1}\sum_{j=0}^{n-1}\beta_{1}^{tn+i-j}|\partial_{l}f_{\tau_{k-t,j}}(\bm{w}_{k,0})|$
$\displaystyle+\beta_{1}^{(k-1)n+i+1}|\partial_{l}f_{\tau_{1,0}}(\bm{w}_{k,0})|$
$\displaystyle+(1-\beta_{1})\sum_{j=0}^{i}\beta_{1}^{(k-1)n+i-((k-1)n+j)}\left(\sum_{p=0}^{j-1}C_{1}\eta_{k}\sqrt{d}L_{1}\sqrt{D_{1}}\|\nabla
f(\bm{w}_{k,p})\|+(L_{0}+L_{1}\sqrt{D_{0}})C_{1}\eta_{k}\sqrt{d}j\right)$
$\displaystyle+(1-\beta_{1})\sum_{t=1}^{k-1}\sum_{j=0}^{n-1}\beta_{1}^{tn+i-j}\left(\sum_{p=j}^{n-1}C_{1}\eta_{k-t}\sqrt{d}L_{1}\sqrt{D_{1}}\|\nabla
f(\bm{w}_{k-t,p})\|\right.$
$\displaystyle\left.+\sum_{r=1}^{t-1}\sum_{p=0}^{n-1}C_{1}\eta_{k-r}\sqrt{d}L_{1}\sqrt{D_{1}}\|\nabla
f(\bm{w}_{k-r,p})\|+2\sqrt{2}(L_{0}+L_{1}\sqrt{D_{0}})C_{1}\sqrt{d}\eta_{k}(tn-j)\right)$
$\displaystyle+\beta_{1}^{(k-1)n+i+1}\left(\sum_{t=1}^{k-1}\sum_{p=0}^{n-1}L_{1}\sqrt{D_{1}}\|\nabla
f(\bm{w}_{k-r,p})\|C_{1}\eta_{k-r}\sqrt{d}+2\sqrt{2}(L_{0}+L_{1}\sqrt{D_{0}})C_{1}\sqrt{d}\eta_{k}(k-1)n\right)$
$\displaystyle\overset{(\star)}{\leq}$
$\displaystyle\max_{i\in[n]}\left|\partial_{l}f_{i}(\bm{w}_{k,0})\right|+\left(n+\frac{2\sqrt{2}\beta_{1}}{1-\beta_{1}}\right)\sqrt{d}C_{1}(L_{0}+L_{1}\sqrt{D_{0}})\eta_{k}+L_{1}C_{1}\sqrt{D_{1}}\eta_{k}\sum_{j=0}^{i-1}\|\nabla
f(\bm{w}_{k,j})\|$
$\displaystyle+L_{1}C_{1}\sqrt{D_{1}}\sum_{t=1}^{k-1}\eta_{k-t}\sum_{j=0}^{n-1}\beta_{1}^{tn+i-j}\|\nabla
f(\bm{w}_{k-t,j})\|,$
where Inequality $(\star)$ is due to an exchange in the sum order.
Following the same routine, we have
$\displaystyle|\bm{m}_{l,k,-1}|$ $\displaystyle\leq$
$\displaystyle(1-\beta_{1})\sum_{t=1}^{k-1}\sum_{j=0}^{n-1}\beta_{1}^{tn-1-j}|\partial_{l}f_{\tau_{k-t,j}}(\bm{w}_{k-t,j})|+\beta_{1}^{(k-1)n}|\partial_{l}f_{\tau_{1,0}}(\bm{w}_{1,0})|$
$\displaystyle\leq$
$\displaystyle(1-\beta_{1})\sum_{t=1}^{k-1}\sum_{j=0}^{n-1}\beta_{1}^{tn-1-j}|\partial_{l}f_{\tau_{k-t,j}}(\bm{w}_{k,0})|+\beta_{1}^{(k-1)n}|\partial_{l}f_{\tau_{1,0}}(\bm{w}_{k,0})|$
$\displaystyle+(1-\beta_{1})\sum_{t=1}^{k-1}\sum_{j=0}^{n-1}\beta_{1}^{tn-1-j}C_{1}\sqrt{d}\left(\sum_{p=j}^{n-1}L_{1}\sqrt{D_{1}}\|\nabla
f(\bm{w}_{k-t,p})\|\eta_{k-t}+\sum_{r=1}^{t-1}\sum_{p=0}^{n-1}L_{1}\sqrt{D_{1}}\|\nabla
f(\bm{w}_{k-r,p})\|\eta_{k-r}\right.$
$\displaystyle+\left.2\sqrt{2}(L_{0}+L_{1}\sqrt{D_{0}})C_{1}\sqrt{d}\eta_{k}(tn-j)\right)$
$\displaystyle+\beta_{1}^{(k-1)n}\left(\sum_{t=1}^{k-1}\sum_{p=0}^{n-1}L_{1}\sqrt{D_{1}}\|\nabla
f(\bm{w}_{k-r,p})\|C_{1}\eta_{k-r}\sqrt{d}+2\sqrt{2}(L_{0}+L_{1}\sqrt{D_{0}})C_{1}\sqrt{d}\eta_{k}(k-1)n\right)$
$\displaystyle\leq$
$\displaystyle\max_{i\in[n]}\left|\partial_{l}f_{i}(\bm{w}_{k,0})\right|+\sum_{t=1}^{k-1}\sum_{j=0}^{n-1}\beta_{1}^{tn-1-j}C_{1}\eta_{k-t}\sqrt{d}L_{1}\sqrt{D_{1}}\|\nabla
f(\bm{w}_{k-t,j})\|$
$\displaystyle+\frac{2\sqrt{2}(L_{0}+L_{1}\sqrt{D_{0}})C_{1}\sqrt{d}\eta_{k}}{1-\beta_{1}}.$
The proof is completed. ∎
Similarly, we can upper and lower bound the adaptor $\bm{\nu}_{k,0}$ by the
gradient plus some error terms.
###### Lemma 9 (Estimation of the norm of the adaptor).
We have for all $l\in[d],k\in\mathbb{Z}^{+}$,
$\displaystyle|\bm{\nu}_{l,k,0}|\geq$
$\displaystyle\beta_{2}^{n}\frac{1-\beta_{2}}{1-\beta_{2}^{n}}\sum_{i\in[n]}\partial_{l}f_{i}(\bm{w}_{k,0})^{2}-\sqrt{\sum_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})^{2}|}\left(8\sqrt{2n}\eta_{k}C_{1}L_{0}\frac{1-\beta_{2}}{(1-\beta_{2}^{n})^{2}}\beta_{2}^{n}\right.$
$\displaystyle+\left.4L_{1}C_{1}\frac{1-\beta_{2}}{1-\beta_{2}^{n}}\frac{\sqrt{1-\beta_{2}}}{1-\sqrt{\beta_{2}}}\left(\sum_{t=1}^{k-1}\beta_{2}^{n}\sqrt{\beta_{2}}^{(t-1)n}\eta_{k-t}\sum_{j=0}^{n-1}(\sqrt{D_{1}}\|\nabla
f(\bm{w}_{k-t,j})\|+\sqrt{D_{0}})\right)\right),$
and
$\displaystyle|\bm{\nu}_{l,k,0}|\leq$ $\displaystyle
2\max_{i\in[n]}\partial_{l}f_{i}(\bm{w}_{k,0})^{2}+2\left(2\sqrt{2}\eta_{k}C_{1}(L_{0}+L_{1}\sqrt{D_{0}})\frac{\sqrt{1-\beta_{2}}}{1-\sqrt{\beta_{2}}}\frac{\sqrt{\beta_{2}}}{1-\sqrt{\beta_{2}}}\right.$
$\displaystyle+\left.L_{1}C_{1}\sqrt{D_{1}}\sum_{t=1}^{k-1}\eta_{k-t}\frac{\sqrt{1-\beta_{2}}}{1-\sqrt{\beta_{2}}}\sum_{j=0}^{n-1}\sqrt{\beta_{2}}^{(t-1)n}\|\nabla
f(\bm{w}_{k-t,j})\|\right)^{2},$
where $C_{1}$ is defined in Eq. (10).
###### Proof.
By the definition of $\bm{\nu}_{l,k,0}$, we have
$\displaystyle\bm{\nu}_{l,k,0}$ $\displaystyle=$
$\displaystyle(1-\beta_{2})\partial_{l}f_{\tau_{k,0}}(\bm{w}_{k,0})^{2}+\sum_{t=1}^{k-1}\sum_{j=0}^{n-1}(1-\beta_{2})\beta_{2}^{tn-j}\partial_{l}f_{\tau_{k-t,j}}(\bm{w}_{k-t,j})^{2}+\beta_{2}^{(k-1)n+1}\max_{i\in[n]}\partial_{l}f_{i}(\bm{w}_{1,0})^{2}$
$\displaystyle\geq$
$\displaystyle(1-\beta_{2})\partial_{l}f_{\tau_{k,0}}(\bm{w}_{k,0})^{2}+\sum_{t=1}^{k-1}\sum_{j=0}^{n-1}(1-\beta_{2})\beta_{2}^{tn}\partial_{l}f_{\tau_{k-t,j}}(\bm{w}_{k-t,j})^{2}+\beta_{2}^{(k-1)n+1}\frac{1}{n}\sum_{i=1}^{n}\partial_{l}f_{i}(\bm{w}_{1,0})^{2}$
$\displaystyle=$
$\displaystyle(1-\beta_{2})\partial_{l}f_{\tau_{k,0}}(\bm{w}_{k,0})^{2}+\sum_{t=1}^{k-1}\sum_{j=0}^{n-1}(1-\beta_{2})\beta_{2}^{tn}(\partial_{l}f_{\tau_{k-t,j}}(\bm{w}_{k,0})+\partial_{l}f_{\tau_{k-t,j}}(\bm{w}_{k-t,j})-\partial_{l}f_{\tau_{k-t,j}}(\bm{w}_{k,0}))^{2}$
$\displaystyle+\beta_{2}^{(k-1)n+1}\frac{1}{n}\sum_{i=1}^{n}(\partial_{l}f_{i}(\bm{w}_{k,0})+\partial_{l}f_{i}(\bm{w}_{1,0})-\partial_{l}f_{i}(\bm{w}_{k,0}))^{2}$
$\displaystyle\geq$
$\displaystyle(1-\beta_{2})\partial_{l}f_{\tau_{k,0}}(\bm{w}_{k,0})^{2}+\sum_{t=1}^{k-1}\sum_{j=0}^{n-1}(1-\beta_{2})\beta_{2}^{tn}\partial_{l}f_{\tau_{k-t,j}}(\bm{w}_{k,0})^{2}+\beta_{2}^{(k-1)n+1}\frac{1}{n}\sum_{i=1}^{n}\partial_{l}f_{i}(\bm{w}_{k,0})^{2}$
$\displaystyle-\sum_{t=1}^{k-1}\sum_{j=0}^{n-1}(1-\beta_{2})\beta_{2}^{tn}|\partial_{l}f_{\tau_{k-t,j}}(\bm{w}_{k,0})||\partial_{l}f_{\tau_{k-t,j}}(\bm{w}_{k,0})-\partial_{l}f_{\tau_{k-t,j}}(\bm{w}_{k-t,j})|$
$\displaystyle-\beta_{2}^{(k-1)n+1}\frac{1}{n}\sum_{i=1}^{n}|\partial_{l}f_{i}(\bm{w}_{k,0})||\partial_{l}f_{i}(\bm{w}_{k,0})-\partial_{l}f_{i}(\bm{w}_{1,0})|$
Since $f_{i}$ is $(L_{0},L_{1})$-smooth, the RHS of the above inequality can
be further lower bounded as follows:
$\displaystyle\left(\beta_{2}^{n}\frac{1-\beta_{2}^{(k-1)n}}{1-\beta_{2}^{n}}(1-\beta_{2})+\frac{\beta_{2}^{(k-1)n+1}}{n}\right)\sum_{i\in[n]}\partial_{l}f_{i}(\bm{w}_{k,0})^{2}$
$\displaystyle-\sum_{t=1}^{k-1}\sum_{j=0}^{n-1}(1-\beta_{2})\beta_{2}^{tn}|\partial_{l}f_{\tau_{k-t,j}}(\bm{w}_{k,0})|\left(\sum_{r=1}^{t}\sum_{p=0}^{n-1}L_{1}\sqrt{D_{1}}\|\nabla
f(\bm{w}_{k-r,p})\|C_{1}\eta_{k-r}\sqrt{d}+2\sqrt{2}(L_{0}+L_{1}\sqrt{D_{0}})C_{1}\sqrt{d}\eta_{k}tn\right)$
$\displaystyle-\beta_{2}^{(k-1)n+1}\frac{1}{n}\sum_{i=1}^{n}|\partial_{l}f_{i}(\bm{w}_{k,0})|\left(\sum_{r=1}^{k-1}\sum_{p=0}^{n-1}L_{1}\sqrt{D_{1}}\|\nabla
f(\bm{w}_{k-r,p})\|C_{1}\eta_{k-r}\sqrt{d}+2\sqrt{2}(L_{0}+L_{1}\sqrt{D_{0}})C_{1}\sqrt{d}\eta_{k}(k-1)n\right)$
$\displaystyle\geq$
$\displaystyle\beta_{2}^{n}\frac{1-\beta_{2}}{1-\beta_{2}^{n}}\sum_{i\in[n]}\partial_{l}f_{i}(\bm{w}_{k,0})^{2}$
$\displaystyle-\sum_{t=1}^{k-1}\sum_{j=0}^{n-1}(1-\beta_{2})\beta_{2}^{tn}|\partial_{l}f_{\tau_{k-t,j}}(\bm{w}_{k,0})|\left(\sum_{r=1}^{t}\sum_{p=0}^{n-1}L_{1}\sqrt{D_{1}}\|\nabla
f(\bm{w}_{k-r,p})\|C_{1}\eta_{k-r}\sqrt{d}+2\sqrt{2}(L_{0}+L_{1}\sqrt{D_{0}})C_{1}\sqrt{d}\eta_{k}tn\right)$
$\displaystyle-\beta_{2}^{(k-1)n+1}\frac{1}{n}\sum_{i=1}^{n}|\partial_{l}f_{i}(\bm{w}_{k,0})|\left(\sum_{r=1}^{k-1}\sum_{p=0}^{n-1}L_{1}\sqrt{D_{1}}\|\nabla
f(\bm{w}_{k-r,p})\|C_{1}\eta_{k-r}\sqrt{d}+2\sqrt{2}(L_{0}+L_{1}\sqrt{D_{0}})C_{1}\sqrt{d}\eta_{k}(k-1)n\right),$
where in the last inequality we use
$\beta_{2}^{n}\frac{1-\beta_{2}^{(k-1)n}}{1-\beta_{2}^{n}}(1-\beta_{2})+\frac{\beta_{2}^{(k-1)n+1}}{n}\geq\beta_{2}^{n}\frac{1-\beta_{2}}{1-\beta_{2}^{n}}$.
$\displaystyle\geq$
$\displaystyle\beta_{2}^{n}\frac{1-\beta_{2}}{1-\beta_{2}^{n}}\sum_{i\in[n]}\partial_{l}f_{i}(\bm{w}_{k,0})^{2}-8\sqrt{2}\eta_{k}C_{1}L_{0}\frac{1-\beta_{2}}{(1-\beta_{2}^{n})^{2}}\beta_{2}^{n}\sum_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})|$
$\displaystyle-4L_{1}C_{1}\frac{1-\beta_{2}}{1-\beta_{2}^{n}}\sum_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})|\left(\sum_{r=1}^{k-1}\beta_{2}^{rn}\eta_{k-r}\sum_{j=0}^{n-1}\|\nabla
f_{i}(\bm{w}_{k-r,j})\|\right)$ $\displaystyle\geq$
$\displaystyle\beta_{2}^{n}\frac{1-\beta_{2}}{1-\beta_{2}^{n}}\sum_{i\in[n]}\partial_{l}f_{i}(\bm{w}_{k,0})^{2}-8\sqrt{2}\eta_{k}C_{1}L_{0}\frac{1-\beta_{2}}{(1-\beta_{2}^{n})^{2}}\beta_{2}^{n}\sum_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})|$
$\displaystyle-4L_{1}C_{1}\frac{1-\beta_{2}}{1-\beta_{2}^{n}}\|\nabla
f_{i}(\bm{w}_{k,0})\|\left(\sum_{r=1}^{k-1}\beta_{2}^{rn}\eta_{k-r}\sum_{j=0}^{n-1}(\sqrt{D_{1}}\|\nabla
f(\bm{w}_{k-r,j})\|+\sqrt{D_{0}})\right)$ $\displaystyle\geq$
$\displaystyle\beta_{2}^{n}\frac{1-\beta_{2}}{1-\beta_{2}^{n}}\sum_{i\in[n]}\partial_{l}f_{i}(\bm{w}_{k,0})^{2}-8\sqrt{2n}\eta_{k}C_{1}L_{0}\frac{1-\beta_{2}}{(1-\beta_{2}^{n})^{2}}\beta_{2}^{n}\sqrt{\sum_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})^{2}|}$
$\displaystyle-4L_{1}C_{1}\frac{1-\beta_{2}}{1-\beta_{2}^{n}}\sqrt{\sum_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})^{2}|}\left(\sum_{r=1}^{k-1}\beta_{2}^{rn}\eta_{k-r}\sum_{j=0}^{n-1}(\sqrt{D_{1}}\|\nabla
f(\bm{w}_{k-r,j})\|+\sqrt{D_{0}})\right)$ $\displaystyle\geq$
$\displaystyle\beta_{2}^{n}\frac{1-\beta_{2}}{1-\beta_{2}^{n}}\sum_{i\in[n]}\partial_{l}f_{i}(\bm{w}_{k,0})^{2}-8\sqrt{2n}\eta_{k}C_{1}L_{0}\frac{1-\beta_{2}}{(1-\beta_{2}^{n})^{2}}\beta_{2}^{n}\sqrt{\sum_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})^{2}|}$
$\displaystyle-4L_{1}C_{1}\frac{1-\beta_{2}}{1-\beta_{2}^{n}}\frac{\sqrt{1-\beta_{2}}}{1-\sqrt{\beta_{2}}}\sqrt{\sum_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})^{2}|}\left(\sum_{r=1}^{k-1}\beta_{2}^{n}\sqrt{\beta_{2}}^{(r-1)n}\eta_{k-r}\sum_{j=0}^{n-1}(\sqrt{D_{1}}\|\nabla
f(\bm{w}_{k-r,j})\|+\sqrt{D_{0}})\right).$
The first claim is proved.
As for the upper bound, we have
$\displaystyle\bm{\nu}_{l,k,0}$ $\displaystyle=$
$\displaystyle(1-\beta_{2})\partial_{l}f_{\tau_{k,0}}(\bm{w}_{k,0})^{2}+\sum_{t=1}^{k-1}\sum_{j=0}^{n-1}(1-\beta_{2})\beta_{2}^{tn-j}\partial_{l}f_{\tau_{k-t,j}}(\bm{w}_{k-t,j})^{2}+\beta_{2}^{(k-1)n+1}\max_{i\in[n]}\partial_{l}f_{i}(\bm{w}_{1,0})^{2}$
$\displaystyle\leq$ $\displaystyle
2(1-\beta_{2})\partial_{l}f_{\tau_{k,0}}(\bm{w}_{k,0})^{2}+2\sum_{t=1}^{k-1}\sum_{j=0}^{n-1}(1-\beta_{2})\beta_{2}^{tn-j}\partial_{l}f_{\tau_{k-t,j}}(\bm{w}_{k,0})^{2}+2\beta_{2}^{(k-1)n+1}\max_{i\in[n]}\partial_{l}f_{i}(\bm{w}_{k,0})^{2}$
$\displaystyle+2\sum_{t=1}^{k-1}\sum_{j=0}^{n-1}(1-\beta_{2})\beta_{2}^{tn-j}\left(\sum_{p=j}^{n-1}L_{1}\sqrt{D_{1}}\|\nabla
f(\bm{w}_{k-t,p})\|C_{1}\eta_{k-t}\sqrt{d}\right.$
$\displaystyle\left.+\sum_{r=1}^{t-1}\sum_{p=0}^{n-1}L_{1}\sqrt{D_{1}}\|\nabla
f(\bm{w}_{k-r,p})\|C_{1}\eta_{k-r}\sqrt{d}+2\sqrt{2}(L_{0}+L_{1}\sqrt{D_{0}})C_{1}\sqrt{d}\eta_{k}(tn-j)\right)^{2}$
$\displaystyle+2\beta_{2}^{(k-1)n+1}\left(\sum_{r=1}^{k-1}\sum_{p=0}^{n-1}L_{1}\sqrt{D_{1}}\|\nabla
f(\bm{w}_{k-r,p})\|C_{1}\eta_{k-r}\sqrt{d}+2\sqrt{2}(L_{0}+L_{1}\sqrt{D_{0}})C_{1}\sqrt{d}\eta_{k}(k-1)n\right)^{2}$
$\displaystyle\leq$ $\displaystyle
2\max_{i\in[n]}\partial_{l}f_{i}(\bm{w}_{k,0})^{2}+2\left(\sum_{t=1}^{k-1}\sum_{j=0}^{n-1}\sqrt{1-\beta_{2}}\sqrt{\beta_{2}}^{tn-j}\left(\sum_{p=j}^{n-1}L_{1}\sqrt{D_{1}}\|\nabla
f(\bm{w}_{k-t,p})\|C_{1}\eta_{k-t}\sqrt{d}\right.\right.$
$\displaystyle\left.+\sum_{r=1}^{t-1}\sum_{p=0}^{n-1}L_{1}\sqrt{D_{1}}\|\nabla
f(\bm{w}_{k-r,p})\|C_{1}\eta_{k-r}\sqrt{d}+2\sqrt{2}(L_{0}+L_{1}\sqrt{D_{0}})C_{1}\sqrt{d}\eta_{k}(tn-j)\right)$
$\displaystyle+\left.\sqrt{\beta_{2}}^{(k-1)n+1}\left(\sum_{r=1}^{k-1}\sum_{p=0}^{n-1}L_{1}\sqrt{D_{1}}\|\nabla
f(\bm{w}_{k-r,p})\|C_{1}\eta_{k-r}\sqrt{d}+2\sqrt{2}(L_{0}+L_{1}\sqrt{D_{0}})C_{1}\sqrt{d}\eta_{k}(k-1)n\right)\right)^{2}$
$\displaystyle\leq$ $\displaystyle
2\max_{i\in[n]}\partial_{l}f_{i}(\bm{w}_{k,0})^{2}+2\left(2\sqrt{2}\eta_{k}C_{1}(L_{0}+L_{1}\sqrt{D_{0}})\frac{\sqrt{1-\beta_{2}}}{1-\sqrt{\beta_{2}}}\frac{\sqrt{\beta_{2}}}{1-\sqrt{\beta_{2}}}\right.$
$\displaystyle+\left.L_{1}C_{1}\sqrt{D_{1}}\sum_{t=1}^{k-1}\eta_{k-t}\frac{\sqrt{1-\beta_{2}}}{1-\sqrt{\beta_{2}}}\sum_{j=0}^{n-1}\sqrt{\beta_{2}}^{tn-j}\|\nabla
f(\bm{w}_{k-t,j})\|\right)^{2}$ $\displaystyle\leq$ $\displaystyle
2\max_{i\in[n]}\partial_{l}f_{i}(\bm{w}_{k,0})^{2}+2\left(2\sqrt{2}\eta_{k}C_{1}(L_{0}+L_{1}\sqrt{D_{0}})\frac{\sqrt{1-\beta_{2}}}{1-\sqrt{\beta_{2}}}\frac{\sqrt{\beta_{2}}}{1-\sqrt{\beta_{2}}}\right.$
$\displaystyle+\left.L_{1}C_{1}\sqrt{D_{1}}\sum_{t=1}^{k-1}\eta_{k-t}\frac{\sqrt{1-\beta_{2}}}{1-\sqrt{\beta_{2}}}\sum_{j=0}^{n-1}\sqrt{\beta_{2}}^{(t-1)n}\|\nabla
f(\bm{w}_{k-t,j})\|\right)^{2}.$
The proof is completed. ∎
We then immediately have the following corollary when
$\max_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})|$ is large enough compared to
the error term.
###### Corollary 5 (Lemma 1, formal).
If
$\displaystyle\max_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})|\geq$
$\displaystyle
4L_{1}C_{1}\frac{\sqrt{1-\beta_{2}}}{1-\sqrt{\beta_{2}}}\left(\sum_{r=1}^{k-1}\sqrt{\beta_{2}}^{(r-1)n}\eta_{k-r}\sum_{j=0}^{n-1}(\sqrt{D_{1}}\|\nabla
f(\bm{w}_{k-r,j})\|+\sqrt{D_{0}})\right)$
$\displaystyle+2\sqrt{2}\eta_{k}C_{1}(L_{0}+L_{1}\sqrt{D_{0}})\frac{\sqrt{1-\beta_{2}}}{1-\sqrt{\beta_{2}}}\frac{\sqrt{\beta_{2}}}{1-\sqrt{\beta_{2}}}+8\sqrt{2n}\eta_{k}C_{1}L_{0}\frac{1}{1-\beta_{2}^{n}}$
$\displaystyle+\eta_{k}C_{1}\left(n(L_{0}+L_{1}\sqrt{D_{0}})+L_{1}\sqrt{D_{1}}\left(\sum_{p=0}^{n-1}\|\nabla
f(\bm{w}_{k,p})\|\right)\right),$ (13)
then
$\frac{\beta_{2}^{n}}{2}\frac{1}{n}\sum_{i\in[n]}\partial_{l}f_{i}(\bm{w}_{k,0})^{2}\leq\bm{\nu}_{l,k,0}\leq
4\max_{i\in[n]}\partial_{l}f_{i}(\bm{w}_{k,0})^{2},$
where $C_{1}$ is defined in Eq. (10). Furthermore, if Eq. (13) holds, we have
$\forall i\in\\{0,\cdots,n-1\\}$,
$\beta_{2}^{n-1}\bm{\nu}_{l,k,0}\leq\bm{\nu}_{l,k,i}\leq\left(\beta_{2}^{n-1}+8n\frac{1-\beta_{2}^{n-1}}{\beta_{2}^{n}}\right)\bm{\nu}_{l,k,0},$
and
$\frac{1}{\beta_{2}}\left(1-(1-\beta_{2})\frac{2n}{\beta_{2}^{n}}\right)\bm{\nu}_{l,k,0}\leq\bm{\nu}_{l,k,-1}\leq\frac{1}{\beta_{2}}\bm{\nu}_{l,k,0}.$
###### Proof.
The first claim follows by plugging the assumed lower bound on
$\max_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})|$ into Lemma 9.
As for the second claim, we have
$\bm{\nu}_{l,k,i}=\beta_{2}^{i}\bm{\nu}_{l,k,0}+(1-\beta_{2})(\partial_{l}f_{\tau_{k,i}}(\bm{w}_{k,i})^{2}+\cdots+\beta_{2}^{i-1}\partial_{l}f_{\tau_{k,1}}(\bm{w}_{k,1})^{2}).$
On the other hand, since $\forall j\in\\{0,\cdots,n-1\\}$
$\displaystyle|\partial_{l}f_{i}(\bm{w}_{k,j})|\leq$
$\displaystyle\max_{p\in[n]}|\partial_{l}f_{p}(\bm{w}_{k,0})|+\eta_{k}C_{1}\left(j(L_{0}+L_{1}\sqrt{D_{0}})+L_{1}\sqrt{D_{1}}\left(\sum_{p=0}^{j-1}\|\nabla
f(\bm{w}_{k,p})\|\right)\right)$ $\displaystyle\leq$
$\displaystyle\max_{p\in[n]}|\partial_{l}f_{p}(\bm{w}_{k,0})|+\eta_{k}C_{1}\left(n(L_{0}+L_{1}\sqrt{D_{0}})+L_{1}\sqrt{D_{1}}\left(\sum_{p=0}^{n-1}\|\nabla
f(\bm{w}_{k,p})\|\right)\right),$
we have
$\displaystyle\beta_{2}^{n-1}\bm{\nu}_{l,k,0}\leq\bm{\nu}_{l,k,i}$
$\displaystyle\leq$
$\displaystyle\beta_{2}^{i}\bm{\nu}_{l,k,0}+2(1-\beta_{2})\max_{p\in[n]}\partial_{l}f_{p}(\bm{w}_{k,0})^{2}(1+\cdots+\beta_{2}^{i-1})$
$\displaystyle+2(1-\beta_{2})(1+\cdots+\beta_{2}^{i-1})\eta_{k}^{2}C_{1}^{2}\left(n(L_{0}+L_{1}\sqrt{D_{0}})+L_{1}\sqrt{D_{1}}\left(\sum_{p=0}^{n-1}\|\nabla
f(\bm{w}_{k,p})\|\right)\right)^{2}$ $\displaystyle=$
$\displaystyle\beta_{2}^{i}\bm{\nu}_{l,k,0}+2(1-\beta_{2}^{i})\max_{p\in[n]}\partial_{l}f_{p}(\bm{w}_{k,0})^{2}+2(1-\beta_{2}^{i})\eta_{k}^{2}C_{1}^{2}\left(n(L_{0}+L_{1}\sqrt{D_{0}})+L_{1}\sqrt{D_{1}}\left(\sum_{p=0}^{n-1}\|\nabla
f(\bm{w}_{k,p})\|\right)\right)^{2}.$
Therefore, if Eq. (13) holds, we then have
$\displaystyle\bm{\nu}_{l,k,i}\leq$
$\displaystyle\beta_{2}^{i}\bm{\nu}_{l,k,0}+4(1-\beta_{2}^{i})\max_{p\in[n]}\partial_{l}f_{p}(\bm{w}_{k,0})^{2}$
$\displaystyle\leq$
$\displaystyle\beta_{2}^{i}\bm{\nu}_{l,k,0}+4(1-\beta_{2}^{i})\sum_{p\in[n]}\partial_{l}f_{p}(\bm{w}_{k,0})^{2}\leq\left(\beta_{2}^{i}+8n\frac{1-\beta_{2}^{i}}{\beta_{2}^{n}}\right)\bm{\nu}_{l,k,0}$
$\displaystyle\leq$
$\displaystyle\left(\beta_{2}^{n-1}+8n\frac{1-\beta_{2}^{n-1}}{\beta_{2}^{n}}\right)\bm{\nu}_{l,k,0}.$
Following the same routine, we have
$\beta_{2}\bm{\nu}_{l,k,-1}\leq\bm{\nu}_{l,k,0},$
and if Eq. (13) holds,
$\displaystyle\bm{\nu}_{l,k,-1}=$
$\displaystyle\frac{1}{\beta_{2}}\left(\bm{\nu}_{l,k,0}-(1-\beta_{2})\partial_{l}f_{\tau_{k,0}}(\bm{w}_{k,0})^{2}\right)\geq\frac{1}{\beta_{2}}\left(\bm{\nu}_{l,k,0}-(1-\beta_{2})\max_{p}\partial_{l}f_{p}(\bm{w}_{k,0})^{2}\right)$
$\displaystyle\geq$
$\displaystyle\bm{\nu}_{l,k,0}\frac{1}{\beta_{2}}\left(1-(1-\beta_{2})\frac{2n}{\beta_{2}^{n}}\right).$
The proof of the second claim is completed. ∎
###### Remark 5.
By the notations in Eq. (10), Eq. (13) can be translated into
$\displaystyle\max_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})|\geq$
$\displaystyle
C_{3}\eta_{k}+C_{4}\sum_{r=1}^{k-1}\sqrt{\beta_{2}}^{(r-1)n}\eta_{k-r}\sum_{j=0}^{n-1}\|\nabla
f(\bm{w}_{k-r,j})\|$
$\displaystyle+C_{4}n\sum_{r=1}^{k-1}\sqrt{\beta_{2}}^{(r-1)n}\eta_{k-r}+\eta_{k}C_{4}\left(\sum_{j=0}^{n-1}\|\nabla
f(\bm{w}_{k,j})\|\right).$ (14)
Furthermore, recalling the definition of $g(\beta_{2})$,
$\displaystyle
g(\beta_{2})\triangleq\max\left\\{\frac{1}{\sqrt{\beta_{2}}^{n-1}}-1,1-\frac{1}{\sqrt{\beta_{2}^{n-1}+8n\frac{1-\beta_{2}^{n-1}}{\beta_{2}^{n}}}},1-\sqrt{\beta_{2}},\sqrt{\frac{\beta_{2}}{\left(1-(1-\beta_{2})\frac{2n}{\beta_{2}^{n}}\right)}}-1\right\\},$
the conclusion of Corollary 5 can be translated as follows: if Eq. (14)
holds,
$\left|\frac{1}{\sqrt{\bm{\nu}_{l,k,i}}}-\frac{1}{\sqrt{\bm{\nu}_{l,k,0}}}\right|\leq
g(\beta_{2})\frac{1}{\sqrt{\bm{\nu}_{l,k,0}}},$
and
$\left|\frac{1}{\sqrt{\bm{\nu}_{l,k,-1}}}-\frac{1}{\sqrt{\bm{\nu}_{l,k,0}}}\right|\leq
g(\beta_{2})\frac{1}{\sqrt{\bm{\nu}_{l,k,0}}}.$
Based on whether Eq. (14) is fulfilled, we divide $[d]$ into
$\mathbb{L}_{large}^{k}$ and $\mathbb{L}_{small}^{k}$ ($\forall k\geq 1$),
which are respectively defined as
$\displaystyle\mathbb{L}_{large}^{k}=\\{l:l\in[d],\text{s.t. Eq. (\ref{eq:
translated_crit}) holds}\\},$
$\displaystyle\mathbb{L}_{small}^{k}=\\{l:l\in[d],\text{s.t. Eq. (\ref{eq:
translated_crit}) doesn't hold}\\}.$
The following lemma characterizes the property of $\mathbb{L}_{small}^{k}$.
###### Lemma 10.
Define
$\bm{u}_{k}\triangleq\frac{\bm{w}_{k,0}-\beta_{1}\bm{w}_{k,-1}}{1-\beta_{1}}$
(with $\bm{w}_{1,-1}\triangleq\bm{w}_{1,0}$). Then,
$\sum_{k=1}^{T}\left|\sum_{l\in\mathbb{L}_{small}^{k}}\partial_{l}f(\bm{w}_{k,0})(\bm{u}_{l,k+1}-\bm{u}_{l,k})\right|\leq
C_{2}\left(C_{5}\sum_{k=1}^{T}\eta_{k}^{2}\|\nabla f(\bm{w}_{k,0})\|+C_{6}\ln
T+C_{7}\right),$
where $C_{2}$, $C_{5}$, $C_{6}$, and $C_{7}$ are defined in Eq. (10).
###### Proof.
By directly applying the definition of $\mathbb{L}_{small}^{k}$ and Lemma 5,
we have
$\displaystyle\frac{1}{n}\left|\sum_{l\in\mathbb{L}_{small}^{k}}\partial_{l}f(\bm{w}_{k,0})(\bm{u}_{l,k+1}-\bm{u}_{l,k})\right|$
$\displaystyle\leq$ $\displaystyle
dC_{2}\eta_{k}\left(C_{3}\eta_{k}+C_{4}\sum_{r=1}^{k-1}\sqrt{\beta_{2}}^{(r-1)n}\eta_{k-r}\sum_{j=0}^{n-1}\|\nabla
f(\bm{w}_{k-r,j})\|\right.\left.+C_{4}n\sum_{r=1}^{k-1}\sqrt{\beta_{2}}^{(r-1)n}\eta_{k-r}+\eta_{k}C_{4}\left(\sum_{p=0}^{n-1}\|\nabla
f(\bm{w}_{k,p})\|\right)\right).$
Summing over $k$ from $1$ to $T$ then leads to
$\displaystyle\frac{1}{n}\sum_{k=1}^{T}\left|\sum_{l\in\mathbb{L}_{small}^{k}}\partial_{l}f(\bm{w}_{k,0})(\bm{u}_{l,k+1}-\bm{u}_{l,k})\right|$
$\displaystyle\leq$
$\displaystyle\sum_{k=1}^{T}dC_{2}C_{3}\eta^{2}_{k}+dC_{2}C_{4}\sum_{k=1}^{T}\eta_{k}\sum_{r=1}^{k-1}\sqrt{\beta_{2}}^{(r-1)n}\eta_{k-r}\sum_{j=0}^{n-1}\|\nabla
f(\bm{w}_{k-r,j})\|+C_{2}C_{4}n\sum_{k=1}^{T}\eta_{k}\sum_{r=1}^{k-1}\sqrt{\beta_{2}}^{(r-1)n}\eta_{k-r}$
$\displaystyle+C_{2}C_{4}\sum_{k=1}^{T}\eta_{k}^{2}\sum_{p=0}^{n-1}\|\nabla
f(\bm{w}_{k,p})\|$ $\displaystyle\leq$
$\displaystyle\sum_{k=1}^{T}dC_{2}C_{3}\eta^{2}_{k}+\frac{dC_{2}C_{4}}{1-\sqrt{\beta_{2}^{n}}}\sum_{k=1}^{T-1}\eta_{k}^{2}\sum_{j=0}^{n-1}\|\nabla
f(\bm{w}_{k,j})\|+\frac{C_{2}C_{4}n}{1-\sqrt{\beta_{2}^{n}}}\sum_{k=1}^{T-1}\eta_{k}^{2}+C_{2}C_{4}\sum_{k=1}^{T}\eta_{k}^{2}\sum_{p=0}^{n-1}\|\nabla
f(\bm{w}_{k,p})\|$ $\displaystyle\leq$
$\displaystyle\left(dC_{2}C_{3}+\frac{C_{2}C_{4}n}{1-\sqrt{\beta_{2}^{n}}}\right)\eta^{2}_{1}(1+\ln
T)+\left(C_{2}C_{4}+\frac{dC_{2}C_{4}}{1-\sqrt{\beta_{2}^{n}}}\right)\sum_{k=1}^{T}\eta_{k}^{2}\sum_{j=0}^{n-1}\|\nabla
f(\bm{w}_{k,j})\|,$
where in the second inequality we exchange the sum order. By Lemma 6, the
above inequality further leads to
$\displaystyle\sum_{k=1}^{T}\left|\sum_{l\in\mathbb{L}_{small}^{k}}\partial_{l}f(\bm{w}_{k,0})(\bm{u}_{l,k+1}-\bm{u}_{l,k})\right|$
$\displaystyle\leq$ $\displaystyle
n\left(C_{2}C_{4}+\frac{dC_{2}C_{4}}{1-\sqrt{\beta_{2}^{n}}}\right)\sum_{k=1}^{T}\eta_{k}^{2}\sum_{j=0}^{n-1}\left((1+n\sqrt{d}C_{1}\eta_{1}L_{1}\sqrt{n})\|\nabla
f(\bm{w}_{k,0})\|+\left(nL_{0}+L_{1}\sqrt{n}\sqrt{D_{0}}\right)n\sqrt{d}C_{1}\eta_{k}\right)$
$\displaystyle+n\left(dC_{2}C_{3}+\frac{C_{2}C_{4}n}{1-\sqrt{\beta_{2}^{n}}}\right)\eta_{1}^{2}(1+\ln
T)$ $\displaystyle\leq$ $\displaystyle
n^{2}(1+n\sqrt{d}C_{1}\eta_{1}L_{1}\sqrt{n})\left(C_{2}C_{4}+\frac{dC_{2}C_{4}}{1-\sqrt{\beta_{2}^{n}}}\right)\sum_{k=1}^{T}\eta_{k}^{2}\|\nabla
f(\bm{w}_{k,0})\|+\left(dC_{2}C_{3}+\frac{C_{2}C_{4}n}{1-\sqrt{\beta_{2}^{n}}}\right)\eta_{1}^{2}(1+\ln
T)$
$\displaystyle+n\left(C_{2}C_{4}+\frac{dC_{2}C_{4}}{1-\sqrt{\beta_{2}^{n}}}\right)\left(nL_{0}+L_{1}\sqrt{n}\sqrt{D_{0}}\right)n^{2}\sqrt{d}C_{1}\sum_{k=1}^{T}\eta_{k}^{3}$
$\displaystyle\leq$ $\displaystyle
n^{2}(1+n\sqrt{d}C_{1}\eta_{1}L_{1}\sqrt{n}\sqrt{D_{1}})\left(C_{2}C_{4}+\frac{dC_{2}C_{4}\sqrt{D_{1}}}{1-\sqrt{\beta_{2}^{n}}}\right)\sum_{k=1}^{T}\eta_{k}^{2}\|\nabla
f(\bm{w}_{k,0})\|+\left(dC_{2}C_{3}+\frac{C_{2}C_{4}n\sqrt{D_{1}}}{1-\sqrt{\beta_{2}^{n}}}\right)$
$\displaystyle\times\eta^{2}_{1}(1+\ln
T)+3n\left(C_{2}C_{4}+\frac{dC_{2}C_{4}}{1-\sqrt{\beta_{2}^{n}}}\right)\left(nL_{0}+L_{1}\sqrt{n}\sqrt{D_{0}}\right)n^{2}\sqrt{d}C_{1}\eta_{1}^{3}.$
(15)
By the notations in Eq. (10), the proof is completed. ∎
The next lemma characterizes the property of $\mathbb{L}^{k}_{large}$.
###### Lemma 11.
Define
$\bm{u}_{k}\triangleq\frac{\bm{w}_{k,0}-\beta_{1}\bm{w}_{k,-1}}{1-\beta_{1}}$
(with $\bm{w}_{1,-1}\triangleq\bm{w}_{1,0}$). We have
$\displaystyle\sum_{k=1}^{T}\sum_{l\in\mathbb{L}_{large}^{k}}\partial_{l}f(\bm{w}_{k,0})(\bm{u}_{l,k+1}-\bm{u}_{l,k})$
$\displaystyle\leq$
$\displaystyle-\sum_{k=1}^{T}\sum_{l\in[d]}\frac{\eta_{k}\partial_{l}f(\bm{w}_{k,0})^{2}}{2\max_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})|+\xi}$
$\displaystyle+\sum_{k=1}^{T}\sum_{l\in\mathbb{L}_{large}^{k}}\eta_{k}g(\beta_{2})\left(n-1+\frac{1+\beta_{1}}{1-\beta_{1}}\right)\frac{|\partial_{l}f(\bm{w}_{k,0})|}{\sqrt{\frac{\beta_{2}^{n}}{2n}}\max_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})|+\xi}\left(\max_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})|\right)$
$\displaystyle+\left((C_{8}+\frac{1}{2}C_{5})\sum_{k=1}^{T}\eta_{k}^{2}\|\nabla
f(\bm{w}_{k,0})\|+(C_{9}+\frac{1}{2}C_{6})\ln
T+(C_{10}+\frac{1}{2}C_{7})\right).$
###### Proof.
Compared to the proof of Lemma 10, the proof of this lemma is more
complicated. To begin with, we provide a decomposition of
$\bm{u}_{k+1}-\bm{u}_{k}$. According to the definition of $\bm{u}_{k}$, we
have
$\displaystyle\bm{u}_{k+1}-\bm{u}_{k}$ $\displaystyle=$
$\displaystyle\frac{(\bm{w}_{k+1,0}-\beta_{1}\bm{w}_{k+1,-1})-(\bm{w}_{k,0}-\beta_{1}\bm{w}_{k,-1})}{1-\beta_{1}}$
$\displaystyle=$
$\displaystyle\frac{(\bm{w}_{k+1,0}-\bm{w}_{k,0})-\beta_{1}(\bm{w}_{k+1,-1}-\bm{w}_{k,-1})}{1-\beta_{1}}$
$\displaystyle=$
$\displaystyle\frac{\sum_{i=0}^{n-1}(\bm{w}_{k,i+1}-\bm{w}_{k,i})-\beta_{1}\sum_{i=0}^{n-1}(\bm{w}_{k,i}-\bm{w}_{k,i-1})}{1-\beta_{1}}$
$\displaystyle=$
$\displaystyle\frac{(\bm{w}_{k+1,0}-\bm{w}_{k+1,-1})+(1-\beta_{1})\sum_{i=0}^{n-2}(\bm{w}_{k,i+1}-\bm{w}_{k,i})-\beta_{1}(\bm{w}_{k,0}-\bm{w}_{k,-1})}{1-\beta_{1}}$
$\displaystyle\overset{(\star)}{=}$
$\displaystyle-\frac{\frac{\eta_{k}}{\sqrt{\bm{\nu}_{k,n-1}}}\odot\bm{m}_{k,n-1}+(1-\beta_{1})\sum_{i=0}^{n-2}\frac{\eta_{k}}{\sqrt{\bm{\nu}_{k,i}}}\odot\bm{m}_{k,i}-\beta_{1}\frac{\eta_{k-1}}{\sqrt{\bm{\nu}_{k-1,n-1}}}\odot\bm{m}_{k-1,n-1}}{1-\beta_{1}}$
$\displaystyle=$
$\displaystyle-\frac{\eta_{k}}{\sqrt{\bm{\nu}_{k,0}}}\odot\frac{\bm{m}_{k,n-1}+(1-\beta_{1})\sum_{i=0}^{n-2}\bm{m}_{k,i}-\beta_{1}\bm{m}_{k-1,n-1}}{1-\beta_{1}}-\eta_{k}\left(\left(\frac{1}{\sqrt{\bm{\nu}_{k,n-1}}}-\frac{1}{\sqrt{\bm{\nu}_{k,0}}}\right)\odot\frac{\bm{m}_{k,n-1}}{1-\beta_{1}}\right.$
$\displaystyle~{}~{}~{}\left.+\sum_{i=0}^{n-2}\left(\frac{1}{\sqrt{\bm{\nu}_{k,i}}}-\frac{1}{\sqrt{\bm{\nu}_{k,0}}}\right)\odot\bm{m}_{k,i}-\frac{\beta_{1}}{1-\beta_{1}}\left(\frac{1}{\sqrt{\bm{\nu}_{k-1,n-1}}}-\frac{1}{\sqrt{\bm{\nu}_{k,0}}}\right)\odot{\bm{m}_{k-1,n-1}}\right)$
$\displaystyle~{}~{}~{}-\frac{\beta_{1}}{1-\beta_{1}}(\eta_{k-1}-\eta_{k})\frac{1}{\sqrt{\bm{\nu}_{k-1,n-1}}}\odot{\bm{m}_{k-1,n-1}}.$
(16)
Here equation ($\star$) is due to a direct application of the update rule of
$\bm{w}_{k,i}$. We then analyze the three terms above separately; namely, we
define
$\displaystyle
a^{1}_{l}\triangleq-\frac{\eta_{k}}{\sqrt{\bm{\nu}_{l,k,0}}}\frac{\bm{m}_{l,k,n-1}+(1-\beta_{1})\sum_{i=0}^{n-2}\bm{m}_{l,k,i}-\beta_{1}\bm{m}_{l,k-1,n-1}}{1-\beta_{1}}=-\frac{\eta_{k}}{\sqrt{\bm{\nu}_{l,k,0}}}\sum_{i=0}^{n-1}\partial_{l}f_{\tau_{k,i}}(\bm{w}_{k,i}),$
$\displaystyle
a^{2}_{l}\triangleq-\eta_{k}\left(\left(\frac{1}{\sqrt{\bm{\nu}_{l,k,n-1}}}-\frac{1}{\sqrt{\bm{\nu}_{l,k,0}}}\right)\frac{\bm{m}_{l,k,n-1}}{1-\beta_{1}}+\sum_{i=0}^{n-2}\left(\frac{1}{\sqrt{\bm{\nu}_{l,k,i}}}-\frac{1}{\sqrt{\bm{\nu}_{l,k,0}}}\right)\bm{m}_{l,k,i}-\frac{\beta_{1}}{1-\beta_{1}}\left(\frac{1}{\sqrt{\bm{\nu}_{l,k-1,n-1}}}-\frac{1}{\sqrt{\bm{\nu}_{l,k,0}}}\right){\bm{m}_{l,k-1,n-1}}\right),$
$\displaystyle
a^{3}_{l}\triangleq-\frac{\beta_{1}}{1-\beta_{1}}(\eta_{k-1}-\eta_{k})\frac{1}{\sqrt{\bm{\nu}_{l,k-1,n-1}}}{\bm{m}_{l,k-1,n-1}}.$
One can then easily observe that by Eq. (16),
$\sum_{l\in\mathbb{L}_{large}^{k}}\partial_{l}f(\bm{w}_{k,0})(\bm{u}_{l,k+1}-\bm{u}_{l,k})=\sum_{l\in\mathbb{L}_{large}^{k}}\partial_{l}f(\bm{w}_{k,0})a_{l}^{1}+\sum_{l\in\mathbb{L}_{large}^{k}}\partial_{l}f(\bm{w}_{k,0})a_{l}^{2}+\sum_{l\in\mathbb{L}_{large}^{k}}\partial_{l}f(\bm{w}_{k,0})a_{l}^{3}.$
① Tackling Term
$\sum_{l\in\mathbb{L}_{large}^{k}}\partial_{l}f(\bm{w}_{k,0})a_{l}^{1}$:
We have
$\displaystyle\sum_{l\in\mathbb{L}_{large}^{k}}\partial_{l}f(\bm{w}_{k,0})a_{l}^{1}$
$\displaystyle=$
$\displaystyle-\sum_{l\in\mathbb{L}_{large}^{k}}\frac{\eta_{k}}{\sqrt{\bm{\nu}_{l,k,0}}}\partial_{l}f(\bm{w}_{k,0})\left(\sum_{i=0}^{n-1}\partial_{l}f_{\tau_{k,i}}(\bm{w}_{k,0})\right)-\sum_{l\in\mathbb{L}_{large}^{k}}\frac{\eta_{k}}{\sqrt{\bm{\nu}_{l,k,0}}}\partial_{l}f(\bm{w}_{k,0})\left(\sum_{i=0}^{n-1}(\partial_{l}f_{\tau_{k,i}}(\bm{w}_{k,i})-\partial_{l}f_{\tau_{k,i}}(\bm{w}_{k,0}))\right)$
$\displaystyle=$
$\displaystyle-\sum_{l\in\mathbb{L}_{large}^{k}}\frac{\eta_{k}}{\sqrt{\bm{\nu}_{l,k,0}}}\partial_{l}f(\bm{w}_{k,0})^{2}-\sum_{l\in\mathbb{L}_{large}^{k}}\frac{\eta_{k}}{\sqrt{\bm{\nu}_{l,k,0}}}\partial_{l}f(\bm{w}_{k,0})\left(\sum_{i=0}^{n-1}(\partial_{l}f_{\tau_{k,i}}(\bm{w}_{k,i})-\partial_{l}f_{\tau_{k,i}}(\bm{w}_{k,0}))\right)$
$\displaystyle\overset{(\star)}{=}$
$\displaystyle-\sum_{l\in\mathbb{L}_{large}^{k}}\frac{\eta_{k}}{\sqrt{\bm{\nu}_{l,k,0}}}\partial_{l}f(\bm{w}_{k,0})^{2}+\mathcal{O}\left(\eta_{k}^{2}\right)+\mathcal{O}\left(\eta_{k}^{2}\|\nabla
f(\bm{w}_{k,0})\|\right),$
where Eq. ($\star$) is due to
$\displaystyle\left|\sum_{l\in\mathbb{L}_{large}^{k}}\frac{\eta_{k}}{\sqrt{\bm{\nu}_{l,k,0}}}\partial_{l}f(\bm{w}_{k,0})\left(\sum_{i=0}^{n-1}(\partial_{l}f_{\tau_{k,i}}(\bm{w}_{k,i})-\partial_{l}f_{\tau_{k,i}}(\bm{w}_{k,0}))\right)\right|$
$\displaystyle\overset{(\ast)}{\leq}$
$\displaystyle\eta_{k}\sqrt{\frac{2n^{2}}{\beta_{2}^{n}}}\left(\sum_{l\in\mathbb{L}_{large}^{k}}\sum_{i=0}^{n-1}|\partial_{l}f_{\tau_{k,i}}(\bm{w}_{k,i})-\partial_{l}f_{\tau_{k,i}}(\bm{w}_{k,0})|\right)$
$\displaystyle\leq$
$\displaystyle\eta_{k}\sqrt{\frac{2n^{2}}{\beta_{2}^{n}}}\left(\sqrt{d}\sum_{i=0}^{n-1}\|\nabla
f_{\tau_{k,i}}(\bm{w}_{k,i})-\nabla f_{\tau_{k,i}}(\bm{w}_{k,0})\|\right)$
$\displaystyle\overset{(\circ)}{\leq}$
$\displaystyle\eta_{k}\sqrt{\frac{2n^{2}}{\beta_{2}^{n}}}\sqrt{d}\sum_{i=0}^{n-1}(L_{0}+L_{1}\|\nabla
f_{\tau_{k,i}}(\bm{w}_{k,0})\|)\|\bm{w}_{k,i}-\bm{w}_{k,0}\|$
$\displaystyle\leq$
$\displaystyle\eta_{k}\sqrt{\frac{2n^{2}}{\beta_{2}^{n}}}\sqrt{d}(nL_{0}+L_{1}\sqrt{D_{1}}\sqrt{n}\|\nabla
f(\bm{w}_{k,0})\|+\sqrt{n}L_{1}\sqrt{D_{0}})n\sqrt{d}C_{1}\eta_{k}$
$\displaystyle\overset{(\bullet)}{\leq}$
$\displaystyle\sqrt{\frac{2n^{2}}{\beta_{2}^{n}}}d(n^{2}L_{0}+n\sqrt{n}L_{1}\sqrt{D_{0}})C_{1}\eta_{k}^{2}+\eta_{k}^{2}d\sqrt{\frac{2n^{2}}{\beta_{2}^{n}}}L_{1}\sqrt{D_{1}}n\sqrt{n}\|\nabla
f(\bm{w}_{k,0})\|.$
Here Eq. ($\ast$) follows from Corollary 5, Eq. ($\circ$) from the fact that each $f_{i}$ is
$(L_{0},L_{1})$-smooth, and Eq. ($\bullet$) from Lemma 4.
② Tackling Term
$\sum_{l\in\mathbb{L}_{large}^{k}}\partial_{l}f(\bm{w}_{k,0})a_{l}^{2}$:
For any $l\in\mathbb{L}_{large}^{k}$, we have
$\displaystyle|\partial_{l}f(\bm{w}_{k,0})a_{l}^{2}|$ $\displaystyle\leq$
$\displaystyle\eta_{k}|\partial_{l}f(\bm{w}_{k,0})|\left(\left|\frac{1}{\sqrt{\bm{\nu}_{l,k,n-1}}}-\frac{1}{\sqrt{\bm{\nu}_{l,k,0}}}\right|\frac{|\bm{m}_{l,k,n-1}|}{1-\beta_{1}}+\sum_{i=0}^{n-2}\left|\frac{1}{\sqrt{\bm{\nu}_{l,k,i}}}-\frac{1}{\sqrt{\bm{\nu}_{l,k,0}}}\right||\bm{m}_{l,k,i}|\right.$
$\displaystyle+\left.\frac{\beta_{1}}{1-\beta_{1}}\left|\frac{1}{\sqrt{\bm{\nu}_{l,k-1,n-1}}}-\frac{1}{\sqrt{\bm{\nu}_{l,k,0}}}\right|{|\bm{m}_{l,k-1,n-1}|}\right)$
$\displaystyle\overset{(\star)}{\leq}$
$\displaystyle\eta_{k}g(\beta_{2})\frac{|\partial_{l}f(\bm{w}_{k,0})|}{\sqrt{\bm{\nu}_{l,k,0}}}\left(\frac{|\bm{m}_{l,k,n-1}|}{1-\beta_{1}}+\sum_{i=0}^{n-2}|\bm{m}_{l,k,i}|+\frac{\beta_{1}}{1-\beta_{1}}{|\bm{m}_{l,k-1,n-1}|}\right)$
$\displaystyle\overset{(\ast)}{\leq}$
$\displaystyle\eta_{k}g(\beta_{2})\left(n-1+\frac{1+\beta_{1}}{1-\beta_{1}}\right)\frac{|\partial_{l}f(\bm{w}_{k,0})|}{\sqrt{\bm{\nu}_{l,k,0}}}\left(\max_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})|\right)$
$\displaystyle+\eta_{k}^{2}g(\beta_{2})\left(n-1+\frac{1+\beta_{1}}{1-\beta_{1}}\right)\frac{\sqrt{2}n}{\beta_{2}^{\frac{n}{2}}}\left(n+\frac{2\sqrt{2}\beta_{1}}{1-\beta_{1}}\right)C_{1}(L_{0}+L_{1}\sqrt{D_{0}})\sqrt{d}$
$\displaystyle+\eta_{k}^{2}g(\beta_{2})\left(n-1+\frac{1+\beta_{1}}{1-\beta_{1}}\right)\frac{\sqrt{2}n}{\beta_{2}^{\frac{n}{2}}}L_{1}C_{1}\sqrt{D_{1}}\sum_{j=0}^{n-1}\|\nabla
f(\bm{w}_{k,j})\|$
$\displaystyle+\eta_{k}g(\beta_{2})\left(n-1+\frac{1+\beta_{1}}{1-\beta_{1}}\right)\frac{\sqrt{2}n}{\beta_{2}^{\frac{n}{2}}}L_{1}C_{1}\sqrt{D_{1}}\sum_{t=1}^{k-1}\eta_{k-t}\sum_{j=0}^{n-1}\beta_{1}^{tn-1-j}\|\nabla
f(\bm{w}_{k-t,j})\|,$
where Inequality ($\star$) is due to Corollary 5, with $g(\beta_{2})$ defined in
Lemma 5, and Inequality $(\ast)$ is due to Lemma 8, by which we have,
$\forall i\in\\{-1,\cdots,n-1\\}$,
$\displaystyle|\bm{m}_{l,k,i}|\leq$
$\displaystyle\max_{i^{\prime}\in[n]}|\partial_{l}f_{i^{\prime}}(\bm{w}_{k,0})|+\left(n+\frac{2\sqrt{2}\beta_{1}}{1-\beta_{1}}\right)C_{1}(L_{0}+L_{1}\sqrt{D_{0}})\sqrt{d}\eta_{k}+L_{1}C_{1}\sqrt{D_{1}}\eta_{k}\sum_{j=0}^{n-1}\|\nabla
f(\bm{w}_{k,j})\|$
$\displaystyle+L_{1}C_{1}\sqrt{D_{1}}\sum_{t=1}^{k-1}\eta_{k-t}\sum_{j=0}^{n-1}\beta_{1}^{tn-1-j}\|\nabla
f(\bm{w}_{k-t,j})\|.$
Therefore, summing over $\mathbb{L}_{large}^{k}$ and $k$ leads to
$\displaystyle\sum_{k=1}^{T}\left|\sum_{l\in\mathbb{L}_{large}^{k}}\partial_{l}f(\bm{w}_{k,0})a_{l}^{2}\right|$
$\displaystyle\leq$
$\displaystyle\sum_{k=1}^{T}\sum_{l\in\mathbb{L}_{large}^{k}}\eta_{k}g(\beta_{2})\left(n-1+\frac{1+\beta_{1}}{1-\beta_{1}}\right)\frac{|\partial_{l}f(\bm{w}_{k,0})|}{\sqrt{\bm{\nu}_{l,k,0}}}\left(\max_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})|\right)$
$\displaystyle+\sum_{k=1}^{T}\eta_{k}^{2}g(\beta_{2})\left(n-1+\frac{1+\beta_{1}}{1-\beta_{1}}\right)\frac{\sqrt{2}n}{\beta_{2}^{\frac{n}{2}}}\left(n+\frac{2\sqrt{2}\beta_{1}}{1-\beta_{1}}\right)C_{1}(L_{0}+L_{1}\sqrt{D_{0}})d\sqrt{d}$
$\displaystyle+dg(\beta_{2})\left(n-1+\frac{1+\beta_{1}}{1-\beta_{1}}\right)\frac{\sqrt{2}n}{\beta_{2}^{\frac{n}{2}}}L_{1}C_{1}\sqrt{D_{1}}\sum_{k=1}^{T}\eta_{k}^{2}\sum_{j=0}^{n-1}\|\nabla
f(\bm{w}_{k,j})\|$
$\displaystyle+dg(\beta_{2})\left(n-1+\frac{1+\beta_{1}}{1-\beta_{1}}\right)\frac{\sqrt{2}n}{\beta_{2}^{\frac{n}{2}}}L_{1}C_{1}\sqrt{D_{1}}\sum_{k=1}^{T}\eta_{k}\sum_{t=1}^{k-1}\eta_{k-t}\sum_{j=0}^{n-1}\beta_{1}^{(t-1)n}\|\nabla
f(\bm{w}_{k-t,j})\|$ $\displaystyle\leq$
$\displaystyle\sum_{k=1}^{T}\sum_{l\in\mathbb{L}_{large}^{k}}\eta_{k}g(\beta_{2})\left(n-1+\frac{1+\beta_{1}}{1-\beta_{1}}\right)\frac{|\partial_{l}f(\bm{w}_{k,0})|}{\sqrt{\bm{\nu}_{l,k,0}}}\left(\max_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})|\right)$
$\displaystyle+g(\beta_{2})\left(n-1+\frac{1+\beta_{1}}{1-\beta_{1}}\right)\frac{\sqrt{2}n}{\beta_{2}^{\frac{n}{2}}}\left(n+\frac{2\sqrt{2}\beta_{1}}{1-\beta_{1}}\right)C_{1}(L_{0}+L_{1}\sqrt{D_{0}})d\sqrt{d}\eta_{1}(1+\ln
T)$
$\displaystyle+dg(\beta_{2})\left(n-1+\frac{1+\beta_{1}}{1-\beta_{1}}\right)\frac{\sqrt{2}n}{\beta_{2}^{\frac{n}{2}}}L_{1}C_{1}\sqrt{D_{1}}\left(1+\frac{1}{1-\beta_{2}^{n}}\right)\sum_{k=1}^{T}\eta_{k}^{2}\sum_{j=0}^{n-1}\|\nabla
f(\bm{w}_{k,j})\|$ $\displaystyle\overset{(\star)}{\leq}$
$\displaystyle\sum_{k=1}^{T}\sum_{l\in\mathbb{L}_{large}^{k}}\eta_{k}g(\beta_{2})\left(n-1+\frac{1+\beta_{1}}{1-\beta_{1}}\right)\frac{|\partial_{l}f(\bm{w}_{k,0})|}{\sqrt{\bm{\nu}_{l,k,0}}}\left(\max_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})|\right)$
$\displaystyle+g(\beta_{2})\left(n-1+\frac{1+\beta_{1}}{1-\beta_{1}}\right)\frac{\sqrt{2}n}{\beta_{2}^{\frac{n}{2}}}\left(n+\frac{2\sqrt{2}\beta_{1}}{1-\beta_{1}}\right)C_{1}(L_{0}+L_{1}\sqrt{D_{0}})d\sqrt{d}\eta_{1}(1+\ln
T)$
$\displaystyle+dg(\beta_{2})\left(n-1+\frac{1+\beta_{1}}{1-\beta_{1}}\right)\frac{\sqrt{2}n}{\beta_{2}^{\frac{n}{2}}}L_{1}C_{1}\sqrt{D_{1}}\left(1+\frac{1}{1-\beta_{2}^{n}}\right)$
$\displaystyle\cdot\sum_{k=1}^{T}\eta_{k}^{2}\sum_{j=0}^{n-1}\left((1+n\sqrt{d}C_{1}\eta_{1}L_{1}\sqrt{n}\sqrt{D_{1}})\|\nabla
f(\bm{w}_{k,0})\|+\left(nL_{0}+L_{1}\sqrt{n}\sqrt{D_{0}}\right)n\sqrt{d}C_{1}\eta_{k}\right)$
$\displaystyle\leq$
$\displaystyle\sum_{k=1}^{T}\sum_{l\in\mathbb{L}_{large}^{k}}\eta_{k}g(\beta_{2})\left(n-1+\frac{1+\beta_{1}}{1-\beta_{1}}\right)\frac{|\partial_{l}f(\bm{w}_{k,0})|}{\sqrt{\bm{\nu}_{l,k,0}}}\left(\max_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})|\right)$
$\displaystyle+g(\beta_{2})\left(n-1+\frac{1+\beta_{1}}{1-\beta_{1}}\right)\frac{\sqrt{2}n}{\beta_{2}^{\frac{n}{2}}}\left(n+\frac{2\sqrt{2}\beta_{1}}{1-\beta_{1}}\right)C_{1}(L_{0}+L_{1}\sqrt{D_{0}})d\sqrt{d}\eta^{2}_{1}(1+\ln
T)$
$\displaystyle+dg(\beta_{2})\left(n-1+\frac{1+\beta_{1}}{1-\beta_{1}}\right)\frac{\sqrt{2}n}{\beta_{2}^{\frac{n}{2}}}L_{1}C_{1}\sqrt{D_{1}}\left(1+\frac{1}{1-\beta_{2}^{n}}\right)(n+n^{\frac{5}{2}}\sqrt{d}C_{1}\eta_{1}L_{1}\sqrt{D_{1}})\sum_{k=1}^{T}\eta_{k}^{2}\|\nabla
f(\bm{w}_{k,0})\|$
$\displaystyle+3dg(\beta_{2})\left(n-1+\frac{1+\beta_{1}}{1-\beta_{1}}\right)\frac{\sqrt{2}n}{\beta_{2}^{\frac{n}{2}}}L_{1}C_{1}\sqrt{D_{1}}\left(1+\frac{1}{1-\beta_{2}^{n}}\right)n\left(nL_{0}+L_{1}\sqrt{n}\sqrt{D_{0}}\right)n\sqrt{d}C_{1}\eta_{1}^{3}.$
where Inequality $(\star)$ is due to Lemma 6.
③ Tackling Term
$\sum_{l\in\mathbb{L}_{large}^{k}}\partial_{l}f(\bm{w}_{k,0})a_{l}^{3}$:
For any $l\in\mathbb{L}_{large}^{k}$,
$\displaystyle|\partial_{l}f(\bm{w}_{k,0})a_{l}^{3}|\leq\frac{\beta_{1}}{1-\beta_{1}}|\eta_{k-1}-\eta_{k}|\frac{1}{\sqrt{\bm{\nu}_{l,k-1,n-1}}}|\bm{m}_{l,k-1,n-1}||\partial_{l}f(\bm{w}_{k,0})|$
$\displaystyle\leq$
$\displaystyle\frac{\beta_{1}\eta_{1}}{(1-\beta_{1})}\frac{1}{\sqrt{k}\sqrt{k-1}(\sqrt{k}+\sqrt{k-1})}C_{1}|\partial_{l}f(\bm{w}_{k,0})|$
$\displaystyle=$
$\displaystyle\frac{\beta_{1}\eta_{k}}{(1-\beta_{1})}\frac{1}{\sqrt{k-1}(\sqrt{k}+\sqrt{k-1})}C_{1}|\partial_{l}f(\bm{w}_{k,0})|.$
Summing over $k$ and $\mathbb{L}_{large}^{k}$ then leads to
$\displaystyle\sum_{k=1}^{T}\sum_{l\in\mathbb{L}_{large}^{k}}|\partial_{l}f(\bm{w}_{k,0})a_{l}^{3}|\leq\frac{\beta_{1}}{(1-\beta_{1})}\sum_{k=1}^{T}\sum_{l\in\mathbb{L}_{large}^{k}}\frac{\eta_{k}}{\sqrt{k-1}(\sqrt{k}+\sqrt{k-1})}C_{1}|\partial_{l}f(\bm{w}_{k,0})|$
$\displaystyle\leq$ $\displaystyle
2\frac{\beta_{1}}{(1-\beta_{1})\eta_{1}}\sqrt{d}C_{1}\sum_{k=1}^{T}\eta_{k}^{2}\|\nabla
f(\bm{w}_{k,0})\|.$
Putting ①, ②, and ③ together and applying the notations in Eq. (10), we then have
$\displaystyle\sum_{k=1}^{T}\sum_{l\in\mathbb{L}_{large}^{k}}\partial_{l}f(\bm{w}_{k,0})(\bm{u}_{l,k+1}-\bm{u}_{l,k})$
$\displaystyle\leq$
$\displaystyle-\sum_{k=1}^{T}\sum_{l\in\mathbb{L}_{large}^{k}}\frac{\eta_{k}}{\sqrt{\bm{\nu}_{l,k,0}}}\partial_{l}f(\bm{w}_{k,0})^{2}+\sum_{k=1}^{T}\sum_{l\in\mathbb{L}_{large}^{k}}\eta_{k}g(\beta_{2})\left(n-1+\frac{1+\beta_{1}}{1-\beta_{1}}\right)\frac{|\partial_{l}f(\bm{w}_{k,0})|}{\sqrt{\bm{\nu}_{l,k,0}}}\left(\max_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})|\right)$
$\displaystyle+C_{8}\sum_{k=1}^{T}\eta_{k}^{2}\|\nabla
f(\bm{w}_{k,0})\|+C_{9}\ln T+C_{10}.$ (17)
We then focus on the first two terms of the RHS of the above inequality.
Specifically, we have $\forall k\geq 1$,
$\displaystyle\sum_{l\in\mathbb{L}_{large}^{k}}\frac{\eta_{k}\partial_{l}f(\bm{w}_{k,0})^{2}}{\sqrt{\bm{\nu}_{l,k,0}}+\xi}-\sum_{l\in\mathbb{L}_{large}^{k}}\eta_{k}g(\beta_{2})\left(n-1+\frac{1+\beta_{1}}{1-\beta_{1}}\right)\frac{|\partial_{l}f(\bm{w}_{k,0})|}{\sqrt{\bm{\nu}_{l,k,0}}+\xi}\left(\max_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})|\right)$
$\displaystyle\overset{(\star)}{\geq}$
$\displaystyle\sum_{l\in\mathbb{L}_{large}^{k}}\frac{\eta_{k}\partial_{l}f(\bm{w}_{k,0})^{2}}{\sqrt{\bm{\nu}_{l,k,0}}+\xi}-\sum_{l\in\mathbb{L}_{large}^{k}}\eta_{k}g(\beta_{2})\left(n-1+\frac{1+\beta_{1}}{1-\beta_{1}}\right)\frac{|\partial_{l}f(\bm{w}_{k,0})|}{\sqrt{\frac{\beta_{2}^{n}}{2n}}\max_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})|+\xi}\left(\max_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})|\right)$
$\displaystyle\geq$
$\displaystyle\sum_{l\in\mathbb{L}_{large}^{k}}\frac{\eta_{k}\partial_{l}f(\bm{w}_{k,0})^{2}}{2\max_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})|+\xi}-\sum_{l\in\mathbb{L}_{large}^{k}}\eta_{k}g(\beta_{2})\left(n-1+\frac{1+\beta_{1}}{1-\beta_{1}}\right)\frac{|\partial_{l}f(\bm{w}_{k,0})|}{\sqrt{\frac{\beta_{2}^{n}}{2n}}\max_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})|+\xi}\left(\max_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})|\right)$
$\displaystyle\overset{(\circ)}{=}$
$\displaystyle\sum_{l\in[d]}\frac{\eta_{k}\partial_{l}f(\bm{w}_{k,0})^{2}}{2\max_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})|+\xi}-\sum_{l\in\mathbb{L}_{large}^{k}}\eta_{k}g(\beta_{2})\left(n-1+\frac{1+\beta_{1}}{1-\beta_{1}}\right)\frac{|\partial_{l}f(\bm{w}_{k,0})|}{\sqrt{\frac{\beta_{2}^{n}}{2n}}\max_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})|+\xi}\left(\max_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})|\right)$
$\displaystyle-\frac{nd\eta_{k}}{2}\left(C_{3}\eta_{k}+C_{4}\sum_{r=1}^{k-1}\sqrt{\beta_{2}}^{(r-1)n}\eta_{k-r}\sum_{j=0}^{n-1}\|\nabla
f(\bm{w}_{k-r,j})\|+C_{4}n\sum_{r=1}^{k-1}\sqrt{\beta_{2}}^{(r-1)n}\eta_{k-r}+\eta_{k}C_{4}\sum_{j=0}^{n-1}\|\nabla
f(\bm{w}_{k,j})\|\right),$
where Inequality $(\star)$ is due to Corollary 5 and Equality $(\circ)$ is due
to
$\displaystyle\sum_{l\in\mathbb{L}_{small}^{k}}\frac{\eta_{k}\partial_{l}f(\bm{w}_{k,0})^{2}}{2\max_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})|+\xi}\leq\frac{n}{2}\eta_{k}\sum_{l\in\mathbb{L}_{small}^{k}}\max_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})|$
$\displaystyle\leq$
$\displaystyle\frac{nd\eta_{k}}{2}\left(C_{3}\eta_{k}+C_{4}\sum_{r=1}^{k-1}\sqrt{\beta_{2}}^{(r-1)n}\eta_{k-r}\sum_{j=0}^{n-1}\|\nabla
f(\bm{w}_{k-r,j})\|+C_{4}n\sum_{r=1}^{k-1}\sqrt{\beta_{2}}^{(r-1)n}\eta_{k-r}+\eta_{k}C_{4}\sum_{j=0}^{n-1}\|\nabla
f(\bm{w}_{k,j})\|\right).$
Summing both sides of the above inequality over $k$ then leads to
$\displaystyle\sum_{k=1}^{T}\sum_{l\in\mathbb{L}_{large}^{k}}\frac{\eta_{k}\partial_{l}f(\bm{w}_{k,0})^{2}}{\sqrt{\bm{\nu}_{l,k,0}}+\xi}-\sum_{k=1}^{T}\sum_{l\in\mathbb{L}_{large}^{k}}\eta_{k}g(\beta_{2})\left(n-1+\frac{1+\beta_{1}}{1-\beta_{1}}\right)\frac{|\partial_{l}f(\bm{w}_{k,0})|}{\sqrt{\bm{\nu}_{l,k,0}}+\xi}\left(\max_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})|\right)$
$\displaystyle\geq$
$\displaystyle\sum_{k=1}^{T}\sum_{l\in[d]}\frac{\eta_{k}\partial_{l}f(\bm{w}_{k,0})^{2}}{2\max_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})|+\xi}$
$\displaystyle-\sum_{k=1}^{T}\sum_{l\in\mathbb{L}_{large}^{k}}\eta_{k}g(\beta_{2})\left(n-1+\frac{1+\beta_{1}}{1-\beta_{1}}\right)\frac{|\partial_{l}f(\bm{w}_{k,0})|}{\sqrt{\frac{\beta_{2}^{n}}{2n}}\max_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})|+\xi}\left(\max_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})|\right)$
$\displaystyle-\sum_{k=1}^{T}\frac{nd\eta_{k}}{2}\left(C_{3}\eta_{k}+C_{4}\sum_{r=1}^{k-1}\sqrt{\beta_{2}}^{(r-1)n}\eta_{k-r}\sum_{j=0}^{n-1}\|\nabla
f(\bm{w}_{k-r,j})\|+C_{4}n\sum_{r=1}^{k-1}\sqrt{\beta_{2}}^{(r-1)n}\eta_{k-r}+\eta_{k}C_{4}\sum_{j=0}^{n-1}\|\nabla
f(\bm{w}_{k,j})\|\right)$ $\displaystyle\geq$
$\displaystyle\sum_{k=1}^{T}\sum_{l\in[d]}\frac{\eta_{k}\partial_{l}f(\bm{w}_{k,0})^{2}}{2\max_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})|+\xi}$
$\displaystyle-\sum_{k=1}^{T}\sum_{l\in\mathbb{L}_{large}^{k}}\eta_{k}g(\beta_{2})\left(n-1+\frac{1+\beta_{1}}{1-\beta_{1}}\right)\frac{|\partial_{l}f(\bm{w}_{k,0})|}{\sqrt{\frac{\beta_{2}^{n}}{2n}}\max_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})|+\xi}\left(\max_{i\in[n]}|\partial_{l}f_{i}(\bm{w}_{k,0})|\right)$
$\displaystyle-\frac{1}{2}\left(C_{5}\sum_{k=1}^{T}\eta_{k}^{2}\|\nabla
f(\bm{w}_{k,0})\|+C_{6}\ln T+C_{7}\right).$
Applying the above inequality to Eq. (17) completes the proof. ∎
The following lemma will be useful when translating $\langle\nabla
f(\bm{w}_{k,0}),\frac{1}{\sqrt{\bm{\nu}_{k,0}}}\odot\nabla
f(\bm{w}_{k,0})\rangle$ to $\min\left\\{\frac{\|\nabla
f(\bm{w}_{k,0})\|}{\sqrt{D_{1}}},\frac{\|\nabla
f(\bm{w}_{k,0})\|^{2}}{\sqrt{D_{0}}}\right\\}$.
###### Lemma 12.
Let all conditions in Theorem 1 hold. Then either there exists an iteration
$k\in[T]$ such that
$\|\nabla f(\bm{w}_{k,0})\|\leq
2\sqrt{d}(2\sqrt{2}+1)\sqrt{D_{0}}g(\beta_{2})\left(n-1+\frac{1+\beta_{1}}{1-\beta_{1}}\right)\sqrt{\frac{2n}{\beta_{2}^{n}}},$
or for all iterations $k\in[1,T]$, we have that
|
11institutetext: ISTI-CNR, Pisa, Italy 22institutetext: University of Pisa,
Pisa, Italy 33institutetext: CNIT, Florence, Italy 44institutetext: Mercatorum
University, Rome, Italy
44email<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
# Adversarial Magnification to Deceive Deepfake Detection through Super
Resolution
Davide Alessandro Coccomini 1122 0000-0002-0755-6154
Roberto Caldelli 3344 0000-0003-3471-1196 Giuseppe Amato 11
0000-0003-0171-4315 Fabrizio Falchi 11 0000-0001-6258-5313 Claudio Gennaro 11
0000-0002-3715-149X
###### Abstract
Deepfake technology is rapidly advancing, posing significant challenges to the
detection of manipulated media content. In parallel, adversarial attack
techniques have been developed to fool deepfake detectors and make deepfakes
even more difficult to detect. This paper explores the
application of super resolution techniques as a possible adversarial attack in
deepfake detection. Through our experiments, we demonstrate that minimal
changes made by these methods in the visual appearance of images can have a
profound impact on the performance of deepfake detection systems. We propose a
novel attack using super resolution as a quick, black-box and effective method
to camouflage fake images and/or generate false alarms on pristine images. Our
results indicate that the usage of super resolution can significantly impair
the accuracy of deepfake detectors, thereby highlighting the vulnerability of
such systems to adversarial attacks. The code to reproduce our experiments is
available at: https://github.com/davide-coccomini/Adversarial-Magnification-
to-Deceive-Deepfake-Detection-through-Super-Resolution
###### Keywords:
Deepfake Detection, Super Resolution, Adversarial Attacks
## 1 Introduction
Manipulating content to spread misinformation and damage people's reputations
has never been easier than it is today. We are witnessing the unstoppable
evolution of what are known as deepfakes: counterfeit media contents that
often show people saying or doing things they never actually said or did,
distorting reality. Distinguishing pristine content from manipulated content
is extremely difficult. For this reason, various deepfake detectors have been
developed. These are, however, subject to various issues, such as the need to
stay up-to-date with the latest deepfake generation methods and the ability to
handle real-world situations. It is precisely in real-world
contexts that deepfake detection systems could be faced with targeted attacks
made to deceive them. Known as adversarial attacks, these are techniques that
introduce noise or adversarial patches, specifically crafted to deceive the
detector. Although they can also be very effective, these techniques may
require deep knowledge of the deepfake detector they are trying to fool. In
this paper, we attempt to exploit a Super Resolution (SR) technique, to
camouflage deepfake images in a quick and black-box manner (in the sense that
the attack is model-agnostic). Our approach causes a significant increase in
the False Negative Rate (fake samples classified as pristine) of up to $18\%$.
We also show that applying SR to pristine images can cause a drastic increase
in false alarms of up to $14\%$, highlighting an inadequacy of some deepfake
detectors that will likely become more apparent as these techniques continue
to proliferate.
## 2 Related Works
### 2.1 Deepfake Generation and Detection
The generation of deepfakes involves the use of techniques that manipulate
human faces to achieve realistic alterations in appearance or identity. Two
primary approaches are commonly employed: Variational AutoEncoders (VAEs) and
Generative Adversarial Networks (GANs). VAE-based methods utilize encoder-
decoder pairs to decompose and recompose distinct faces. On the other hand,
GAN-based methods use a discriminator to distinguish real and fake images,
paired with a generator that creates fake faces to fool the discriminator.
Notable deepfake generation methods include Face2Face[21] and FaceSwap[3]. As
deepfakes become more credible, there is a growing demand for systems capable
of detecting them. To address this problem, various deepfake detectors have
been developed. Some methods analyze deepfake videos by also considering
temporal information[10, 6, 24, 7], but most approaches focus on frame-based
classification, evaluating each video frame individually[9, 2], and can
therefore also handle still deepfake images. Competitions such as [12] and
[13] have been organized to stimulate progress on this task. The problem has
also been extended to the detection of synthetic images in general, as in
[8, 11, 4], increasing the variety of fake content considered.
### 2.2 Adversarial Attacks
Adversarial attacks, such as noise addition and adversarial patches, exploit
vulnerabilities in deepfake detectors to deceive them. Adversarial noise
introduces subtle perturbations, while adversarial patches overlap patterns to
trigger misclassification. The authors of [15] propose a framework called
FakeRetouch, which aims to reduce artifacts in deepfake images without
sacrificing image quality. By adding noise and using deep image filtering,
they achieve high fidelity to the original deepfake images reducing the
accuracy of deepfake detectors. In [14] the authors propose a statistical
consistency attack (StatAttack) against deepfake detectors by minimizing the
statistical differences between natural and deepfake images through the
addition of statistical-sensitive degradations.
### 2.3 Super Resolution
Super Resolution (SR) is a technique that aims to reconstruct a high-
resolution version of a low-resolution image by utilizing information from
multiple input images[5] or by using prior knowledge about the relationship
between high-resolution and low-resolution image pairs[23, 16]. One of the
main SR techniques is proposed in [18], where an Enhanced Deep Super
Resolution network (EDSR) is presented; it introduces some improvements to the
ResNet architecture for SR previously proposed in [16]. The authors remove
batch normalization layers to increase flexibility and reduce memory usage,
and propose residual scaling layers to stabilize the training procedure. The
model constructed with these modifications and pre-trained with a lower
upscaling factor was able to achieve good results in terms of convergence
speed and final performance.
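As an illustration of the two modifications just described, the following PyTorch-style sketch (our own, with illustrative hyperparameters; not code from [18]) shows an EDSR-style residual block with no batch normalization and a residual scaling factor.

```python
import torch.nn as nn

class EDSRResBlock(nn.Module):
    """Residual block in the EDSR style: no batch normalization,
    and the residual branch is scaled before the skip connection."""
    def __init__(self, channels: int = 64, res_scale: float = 0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        self.res_scale = res_scale

    def forward(self, x):
        return x + self.body(x) * self.res_scale
```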
## 3 The proposed attack
Figure 1: SR attack pipeline: pre-processing and attack phases. The face size
is reduced by a factor $K$, then restored to its original resolution using an
SR algorithm, and pasted back onto the source frame.
The proposed attack exploits SR techniques to modify a deepfake image and
camouflage it in the eyes of a deepfake detector. The goal of the attack is to
mislead the deepfake detector and increase its false negative rate. The SR
process, in its attempt to improve the resolution of an image, can smooth the
artifacts introduced by some deepfake generation techniques, thus undermining
the learning performed by the deepfake detection model. Figure 1 shows the
proposed framework for implementing the SR attack. Specifically, a pretrained
face detector (e.g., MTCNN[22]) is applied to each frame of the video to be
analyzed (or to each image, if the attack is applied to still images). This
step has been added to the pipeline for two main reasons. The first is related
to the goal of the attack itself: an attacker wants to manipulate as little of
the image as possible to avoid adding artifacts where they are not needed, and
applying SR to the whole frame may add artifacts to the background, which
would have the opposite effect. The second is that both deepfake detectors and
generators commonly focus only on the face, so it is very likely that the
deepfake detector under attack will analyze only the face and that the
artifacts to be removed are concentrated there. The extracted face has a
resolution that depends on factors such as the video resolution and the
distance from the camera. Since the goal of SR is to raise the resolution of
the image by a factor $K\in\mathbb{N}$, the face is first down-scaled by a
factor $1/K$ and then given as input to an SR model (e.g., EDSR[18]) to be
up-scaled by a factor $K$. The resulting face image has the same size as the
originally detected one and can therefore be pasted back into the source
image. Applying this method requires no knowledge of the deepfake detector
used for the final detection, so the proposed method can be considered a
black-box attack and can be applied against any deepfake detector and to
images manipulated with any deepfake generation method. Furthermore, the
attack can be carried out on deepfake content that has already been generated
and does not need to be integrated into the deepfake creation procedure.
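The following minimal Python sketch illustrates the attack pipeline just described. It is our own illustration under stated assumptions: `detect_face` stands in for a pretrained face detector such as MTCNN[22], and `sr_upscale` for a pretrained $\times K$ SR model such as EDSR[18]; neither function is taken from a released codebase.

```python
from PIL import Image

K = 2  # SR scale factor used in the paper's experiments

def sr_attack_frame(frame: Image.Image, detect_face, sr_upscale) -> Image.Image:
    """Apply the SR attack to a single frame.

    detect_face(frame) -> (left, top, right, bottom): assumed face detector.
    sr_upscale(img)    -> PIL image K times larger: assumed SR model.
    """
    left, top, right, bottom = detect_face(frame)
    face = frame.crop((left, top, right, bottom))
    w, h = face.size
    # Down-scale by 1/K so that SR restores the original resolution.
    small = face.resize((max(1, w // K), max(1, h // K)), Image.BICUBIC)
    restored = sr_upscale(small)
    # Guard against off-by-K rounding before pasting back.
    restored = restored.resize((w, h), Image.BICUBIC)
    attacked = frame.copy()
    attacked.paste(restored, (left, top))
    return attacked
```

A caller would apply `sr_attack_frame` to each extracted frame before feeding it to the detector under attack.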
## 4 Experiments
### 4.1 Dataset
Since we want to evaluate our attack on a variety of deepfake generation
methods, we chose the well-known FaceForensics++ (FF++)[19] dataset for our
experiments. The dataset consists of both pristine and manipulated videos
created using various deepfake generation methods, namely Deepfakes[1],
Face2Face[21], FaceShifter[17], FaceSwap[3], and NeuralTextures[20]. However,
as this dataset consists of videos and the proposed attack exploits single-
image SR, ten frames were randomly extracted from each video, on which face
detection was then carried out. A training set and a test set were created for
each deepfake generation method in FF++. Each training set consists of 14400
images, half of which were manipulated with one of the available methods. Each
test set consists of 2800 images, half of which are manipulated with the same
method (and subsequently processed with the proposed attack, depending on the
evaluation setup). A total of five training and test sets are therefore
available, all perfectly balanced between the two classes (pristine and fake).
To assign videos to the training or test set, we used the split made available
in [19].
### 4.2 Experimental Setup
To investigate the impact of SR on the performance of deepfake detectors, we
selected three architectures, namely Resnet50, Swin-Small and XceptionNet, and
trained them on faces extracted from FF++ to perform binary pristine/fake
classification. For each training, the model only sees pristine images and
fake ones manipulated with one of the available FF++ methods (SR is not
applied). All models are pretrained on ImageNet and were fine-tuned with a
learning rate of 0.01 for 30 epochs on an Nvidia Tesla T4. Testing considers
two setups: in the first, the models are tested with the SR attack applied to
both fake and pristine images; in the second, the pristine images are left
unattacked and only the fake ones are passed through the SR process. The face
is always extracted from each frame using a pretrained MTCNN[22]. The scale
factor used in our experiments is $K=2$, so the extracted face is resized by a
factor $1/K$ and then up-scaled through EDSR[18], restoring the original
resolution. After this process, the face is pasted back into the frame using
the coordinates obtained during face detection.
## 5 Results
### 5.1 Impact of Super Resolution on Deepfake Detection
Table 1: Evaluation on FF++ test set (half pristine and half fake). The SR
column indicates if the SR adversarial technique has been applied to the
images. Both pristine and fake images are attacked with SR.
Model | Forgery Method | SR | FNR $\downarrow$ | FPR $\downarrow$ | Recall $\uparrow$ | Precision $\uparrow$ | AUC $\uparrow$ | Accuracy $\uparrow$
---|---|---|---|---|---|---|---|---
Resnet50 | Deepfakes | $\times$ | 5.5 | 3.2 | 94.5 | 96.7 | 99.2 | 95.6
 | | $\checkmark$ | 6.9 | 10.1 | 93.1 | 90.2 | 97.7 | 91.5
 | Face2Face | $\times$ | 5.0 | 3.2 | 95.0 | 96.7 | 98.9 | 95.9
 | | $\checkmark$ | 14.4 | 4.7 | 85.6 | 94.8 | 97.0 | 90.5
 | FaceSwap | $\times$ | 6.4 | 1.9 | 93.6 | 98.0 | 99.1 | 95.9
 | | $\checkmark$ | 21.1 | 2.6 | 78.9 | 96.8 | 96.0 | 88.1
 | FaceShifter | $\times$ | 6.1 | 3.4 | 93.9 | 96.5 | 98.8 | 95.3
 | | $\checkmark$ | 24.8 | 3.3 | 75.2 | 95.8 | 96.8 | 86.0
 | NeuralTextures | $\times$ | 14.1 | 8.1 | 85.9 | 91.3 | 95.4 | 88.9
 | | $\checkmark$ | 14.4 | 16.9 | 85.6 | 83.5 | 92.1 | 84.4
Swin | Deepfakes | $\times$ | 5.9 | 3.6 | 94.1 | 96.3 | 99.1 | 95.3
 | | $\checkmark$ | 6.1 | 12.4 | 93.9 | 88.4 | 97.4 | 90.7
 | Face2Face | $\times$ | 6.3 | 3.3 | 93.7 | 96.6 | 98.9 | 95.2
 | | $\checkmark$ | 24.4 | 1.7 | 75.6 | 97.8 | 96.1 | 87.0
 | FaceSwap | $\times$ | 4.9 | 4.6 | 95.1 | 95.3 | 98.6 | 95.2
 | | $\checkmark$ | 21.9 | 5.3 | 78.1 | 93.7 | 93.9 | 86.4
 | FaceShifter | $\times$ | 7.2 | 4.1 | 92.8 | 95.8 | 98.7 | 94.4
 | | $\checkmark$ | 18.9 | 3.1 | 81.1 | 96.3 | 97.4 | 89.0
 | NeuralTextures | $\times$ | 12.9 | 12.8 | 87.1 | 87.2 | 94.9 | 87.1
 | | $\checkmark$ | 13.2 | 23.9 | 86.8 | 78.4 | 90.4 | 81.5
XceptionNet | Deepfakes | $\times$ | 5.3 | 2.6 | 94.7 | 97.4 | 99.3 | 96.1
 | | $\checkmark$ | 5.6 | 12.4 | 94.4 | 88.4 | 97.9 | 91.0
 | Face2Face | $\times$ | 9.6 | 3.3 | 90.4 | 96.5 | 98.4 | 93.6
 | | $\checkmark$ | 18.3 | 5.3 | 81.7 | 93.9 | 95.7 | 88.2
 | FaceSwap | $\times$ | 5.1 | 3.2 | 94.9 | 96.7 | 98.8 | 95.8
 | | $\checkmark$ | 15.8 | 4.9 | 84.2 | 94.5 | 96.6 | 89.6
 | FaceShifter | $\times$ | 7.1 | 3.9 | 92.9 | 96.0 | 98.8 | 94.5
 | | $\checkmark$ | 15.6 | 4.3 | 84.4 | 95.2 | 97.4 | 90.0
 | NeuralTextures | $\times$ | 13.1 | 7.2 | 86.9 | 92.3 | 95.9 | 89.8
 | | $\checkmark$ | 9.8 | 21.6 | 90.2 | 80.7 | 92.7 | 84.3
Table 1 shows the results obtained by the considered deep learning models on
the deepfake detection task on the FF++ test set, with and without the usage
of SR on both fake and pristine images. Looking at the accuracy across all the
considered methods, the application of SR leads to a notable drop in
performance in all cases, confirming the hypothesis that the SR process can
confuse the deepfake detectors and lead them to make errors. In more detail,
looking at the False Negative Rate (FNR) and False Positive Rate (FPR), all
models show a marked increase in at least one of the two when the SR attack is
applied. When the deepfake generation method is Deepfakes or NeuralTextures,
the impact on the FNR is less evident, but the detectors that are more robust
on the fake images fail on the pristine images attacked with SR, and we
observe a large increase in the FPR. The situation is exactly the opposite for
Face2Face, FaceSwap and FaceShifter, on which the models are more sensitive to
the fake images attacked with SR and thus show a substantial increase in FNR,
while only a slight change in FPR is registered. Increasing the FNR is the
main interest for an attacker, as it allows fake images to be camouflaged
against an automatic system that may be trying to filter them out. Conversely,
the increase in FPR in some cases highlights a serious problem in deepfake
detection systems: if SR became more widespread (e.g., on social media to
improve the final visual quality), such systems would end up mistaking
legitimate images for deepfakes, and an attacker could deliberately raise
false alarms. That the use of SR pushes all deepfake detection models into
error is also shown in Figure 2: in all cases, the AUCs obtained by the models
on SR images (dashed lines) are drastically lower than those obtained on
images to which the SR process has not been applied (solid lines).
Figure 2: ROC curves on the FF++ dataset for the three considered models: (a) Resnet50, (b) Swin, (c) XceptionNet.
Table 2: Evaluation on FF++ test set (half pristine and half fake). The SR
column indicates if the SR adversarial technique has been applied to the
images. Only the fake images are attacked with SR; the FPR is therefore
unchanged from the corresponding non-SR row and is marked with –.
Model | Forgery Method | SR | FNR $\downarrow$ | FPR $\downarrow$ | Recall $\uparrow$ | Precision $\uparrow$ | AUC $\uparrow$ | Accuracy $\uparrow$
---|---|---|---|---|---|---|---|---
Resnet50 | Deepfakes | $\times$ | 5.5 | 3.2 | 94.5 | 96.7 | 99.2 | 95.6
 | | $\checkmark$ | 6.9 | – | 93.1 | 96.7 | 98.9 | 95.0
 | Face2Face | $\times$ | 5.0 | 3.2 | 95.0 | 96.7 | 98.9 | 95.9
 | | $\checkmark$ | 14.4 | – | 85.6 | 96.4 | 97.6 | 91.2
 | FaceSwap | $\times$ | 6.4 | 1.9 | 93.6 | 98.0 | 99.1 | 95.9
 | | $\checkmark$ | 21.1 | – | 78.9 | 97.6 | 95.6 | 88.5
 | FaceShifter | $\times$ | 6.1 | 3.4 | 93.9 | 96.5 | 98.8 | 95.3
 | | $\checkmark$ | 24.8 | – | 75.2 | 95.7 | 95.2 | 85.9
 | NeuralTextures | $\times$ | 14.1 | 8.1 | 85.9 | 91.3 | 95.4 | 88.9
 | | $\checkmark$ | 14.4 | – | 85.6 | 91.3 | 94.7 | 88.7
Swin | Deepfakes | $\times$ | 5.9 | 3.6 | 94.1 | 96.3 | 99.1 | 95.3
 | | $\checkmark$ | 6.1 | – | 93.9 | 96.3 | 98.7 | 95.1
 | Face2Face | $\times$ | 6.3 | 3.3 | 93.7 | 96.6 | 98.9 | 95.2
 | | $\checkmark$ | 24.4 | – | 75.6 | 95.8 | 96.2 | 86.2
 | FaceSwap | $\times$ | 4.9 | 4.6 | 95.1 | 95.3 | 98.6 | 95.2
 | | $\checkmark$ | 21.9 | – | 78.1 | 94.4 | 94.1 | 86.7
 | FaceShifter | $\times$ | 7.2 | 4.1 | 92.8 | 95.8 | 98.7 | 94.4
 | | $\checkmark$ | 18.9 | – | 81.1 | 95.2 | 96.6 | 88.5
 | NeuralTextures | $\times$ | 12.9 | 12.8 | 87.1 | 87.2 | 94.9 | 87.1
 | | $\checkmark$ | 13.2 | – | 86.8 | 87.2 | 93.9 | 87.0
XceptionNet | Deepfakes | $\times$ | 5.3 | 2.6 | 94.7 | 97.4 | 99.3 | 96.1
 | | $\checkmark$ | 5.6 | – | 94.4 | 97.3 | 99.2 | 95.9
 | Face2Face | $\times$ | 9.6 | 3.3 | 90.4 | 96.5 | 98.4 | 93.6
 | | $\checkmark$ | 18.3 | – | 81.7 | 96.1 | 96.9 | 89.2
 | FaceSwap | $\times$ | 5.1 | 3.2 | 94.9 | 96.7 | 98.8 | 95.8
 | | $\checkmark$ | 15.8 | – | 84.2 | 96.3 | 97.1 | 90.5
 | FaceShifter | $\times$ | 7.1 | 3.9 | 92.9 | 96.0 | 98.8 | 94.5
 | | $\checkmark$ | 15.6 | – | 84.4 | 95.6 | 97.1 | 90.2
 | NeuralTextures | $\times$ | 13.1 | 7.2 | 86.9 | 92.3 | 95.9 | 89.8
 | | $\checkmark$ | 9.8 | – | 90.2 | 92.6 | 96.5 | 91.5
To evaluate deepfake detectors in a realistic context, an alternative test set
was considered in which pristine images are not subjected to the SR process.
In fact, an attacker has much more interest in generating false negatives than
false positives, so as to go undetected by automated systems. As the
experiments reported in Table 2 show, in this setup the accuracy decreases,
though more slightly, in almost all cases, and the detectors are more robust
to the attack for some deepfake generation methods. In more detail, the
Face2Face, FaceSwap and FaceShifter images processed with the SR attack are
very difficult to detect, probably because the artifacts that the detector
learnt to recognize during training are hidden by the SR process; this
translates into a higher FNR and a lower recall. In all cases, the FPR is not
affected by the SR attack, since the pristine images are not attacked in this
setup.
### 5.2 Visual Impact Analysis
When performing an SR attack on a fake image, it is important that the result
remains as indistinguishable from the original as possible to human eyes, both
to preserve its meaning and to avoid raising users' suspicion. To assess the
impact of our attack on image appearance, we compared the similarity of each image pair
(non-SR, SR) through two commonly used quality metrics, _Structural Similarity
Index (SSIM)_ and _Peak Signal-to-Noise Ratio (PSNR)_. The SSIM is calculated
as
$\text{SSIM}(x,y)=\frac{{(2\mu_{x}\mu_{y}+C_{1})(2\sigma_{xy}+C_{2})}}{{(\mu_{x}^{2}+\mu_{y}^{2}+C_{1})(\sigma_{x}^{2}+\sigma_{y}^{2}+C_{2})}}$,
where $x$ and $y$ are the two compared images, $\mu_{x}$ and $\mu_{y}$ are the
average values of $x$ and $y$ respectively, $\sigma_{x}$ and $\sigma_{y}$ are
the standard deviations, $\sigma_{xy}$ is the covariance between $x$ and $y$, and
$C_{1}$ and $C_{2}$ are two constants used for numerical stability. To calculate the
PSNR we used the formula
$\text{PSNR}(x,y)=10\cdot\log_{10}(\frac{{\text{MAX}^{2}}}{{\text{MSE}}(x,y)})$,
where $x$ and $y$ are the two compared images, MAX is the maximum possible
pixel value of the images and $\text{MSE}(x,y)$ is the Mean Squared Error
between the images.
The values obtained from each image pair were then averaged to measure, for
each category, the similarity between images attacked with SR and their non-SR
counterparts.
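As a concrete sketch of this measurement (our addition; it assumes scikit-image and image pairs loaded as equally shaped uint8 RGB arrays), the mean SSIM and PSNR can be computed as follows.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def mean_similarity(pairs):
    """pairs: list of (original, attacked) uint8 RGB arrays of equal shape."""
    ssims, psnrs = [], []
    for original, attacked in pairs:
        ssims.append(structural_similarity(original, attacked, channel_axis=-1))
        psnrs.append(peak_signal_noise_ratio(original, attacked, data_range=255))
    return float(np.mean(ssims)), float(np.mean(psnrs))
```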
Table 3: Evaluation of similarity between non-SR images and SR ones for each forgery method and for the pristine case. Forgery Method | SSIM $\uparrow$ | PSNR $\uparrow$ (dB)
---|---|---
_Pristine_ | 0.968 | 39.8
Deepfakes | 0.970 | 40.3
Face2Face | 0.968 | 39.8
FaceShifter | 0.973 | 40.9
FaceSwap | 0.967 | 39.7
NeuralTextures | 0.972 | 40.5
As Table 3 shows, the similarity between the SR images and the non-SR ones is
very high, with SSIM values around $0.97$ and PSNR around $40$ dB, indicating
strong similarity and minimal changes introduced by the SR process. We also
checked whether a correlation exists between the SSIM value and the variation
in the classifiers' error, i.e., whether a lower SSIM value is related to a
higher number of misclassifications during detection. In our experiments, for
all methods the correlation magnitude is lower than $0.1$, meaning that the
variation in the detectors' performance is related to the type of changes made
to the image rather than to their magnitude.
### 5.3 Qualitative Evaluation
To better understand the effect of the SR Attack on images, we visually
analyzed some examples of deepfakes (e.g. Face2Face and FaceSwap) correctly
detected by a Resnet50-based detector before the application of the attack but
misclassified after it.
Figure 3: Examples of fake images that are correctly detected by a
Resnet50-based deepfake detector but not detected when the SR attack is
applied.
These methods tend to introduce rather specific artifacts that, as visible in
Figure 3, are then smoothed by the SR. This makes the work of the deepfake
detector more difficult, as it has learnt to recognize such anomalies. As the
figure also shows, the visual difference is minimal, consistent with the
analysis conducted in Section 5.2, yet it is enough to make some artifacts
around the mouth (FaceSwap) or on the nose (Face2Face) disappear.
## 6 Conclusions
In this work, we examined the impact of applying SR to deepfake images in the
context of deepfake detection. According to our experiments, the use of these
techniques has a large impact on the performance of deepfake detectors,
drastically raising the FNR depending on the deepfake generation technique
used and the artifacts it introduces into the image. We also observed a
tendency for deepfake detectors trained on specific deepfake generation
methods to mistake pristine SR images for fakes, causing the FPR to rise
dramatically. In conclusion, the SR attack can serve as an effective black-box
attack on deepfake detection. In future work, we will explore the impact of
the detected face resolution on attack performance, evaluate more SR
techniques, and investigate whether using SR as data augmentation during
training could make detectors robust to this attack.
## Acknowledgments
This work was partially supported by the project SERICS (PE00000014) under the
NRRP MUR program funded by the EU - NGEU and by the H2020 project AI4Media (GA
n. 951911).
## References
* [1] Deepfakes, https://github.com/deepfakes/faceswap
* [2] DFDC 1st place solution, https://github.com/selimsef/dfdc_deepfake_challenge
* [3] Faceswap, https://github.com/MarekKowalski/FaceSwap/
* [4] Amoroso, R., Morelli, D., Cornia, M., Baraldi, L., Bimbo, A.D., Cucchiara, R.: Parents and children: Distinguishing multimodal deepfakes from natural images (2023)
* [5] Arefin, M.R., Michalski, V., St-Charles, P.L., Kalaitzis, A., Kim, S., Kahou, S.E., Bengio, Y.: Multi-image super-resolution for remote sensing using deep recurrent networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (2020)
* [6] Baxevanakis, S., Kordopatis-Zilos, G., Galopoulos, P., Apostolidis, L., Levacher, K., Schlicht, I.B., Teyssou, D., Kompatsiaris, I., Papadopoulos, S.: The mever deepfake detection service: Lessons learnt from developing and deploying in the wild. In: International Workshop on Multimedia AI against Disinformation (2022)
* [7] Caldelli, R., Galteri, L., Amerini, I., Del Bimbo, A.: Optical flow based cnn for detection of unlearnt deepfake manipulations. Pattern Recognition Letters (2021). https://doi.org/https://doi.org/10.1016/j.patrec.2021.03.005, https://www.sciencedirect.com/science/article/pii/S0167865521000842
* [8] Coccomini, D.A., Esuli, A., Falchi, F., Gennaro, C., Amato, G.: Detecting images generated by diffusers (2023). https://doi.org/10.48550/ARXIV.2303.05275
* [9] Coccomini, D.A., Messina, N., Gennaro, C., Falchi, F.: Combining efficientnet and vision transformers for video deepfake detection. In: Sclaroff, S., Distante, C., Leo, M., Farinella, G.M., Tombari, F. (eds.) Image Analysis and Processing – ICIAP 2022. Springer International Publishing (2022)
* [10] Coccomini, D.A., Zilos, G.K., Amato, G., Caldelli, R., Falchi, F., Papadopoulos, S., Gennaro, C.: Mintime: Multi-identity size-invariant video deepfake detection (2022). https://doi.org/10.48550/ARXIV.2211.10996
* [11] Dogoulis, P., Kordopatis-Zilos, G., Kompatsiaris, I., Papadopoulos, S.: Improving synthetically generated image detection in cross-concept settings. ACM (2023). https://doi.org/10.1145/3592572.3592846
* [12] Dolhansky, B., Bitton, J., Pflaum, B., Lu, J., Howes, R., Wang, M., Ferrer, C.C.: The deepfake detection challenge (dfdc) dataset. arXiv preprint (2020)
* [13] Guarnera, L., Giudice, O., Guarnera, F., Ortis, A., Puglisi, G., Paratore, A., Bui, L.M.Q., Fontani, M., Coccomini, D.A., Caldelli, R., Falchi, F., Gennaro, C., Messina, N., Amato, G., Perelli, G., Concas, S., Cuccu, C., Orrù, G., Marcialis, G.L., Battiato, S.: The face deepfake detection challenge. Journal of Imaging (2022). https://doi.org/10.3390/jimaging8100263
* [14] Hou, Y., Guo, Q., Huang, Y., Xie, X., Ma, L., Zhao, J.: Evading deepfake detectors via adversarial statistical consistency. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2023)
* [15] Huang, Y., Juefei-Xu, F., Guo, Q., Xie, X., Ma, L., Miao, W., Liu, Y., Pu, G.: Fakeretouch: Evading deepfakes detection via the guidance of deliberate noise. arXiv preprint (2020)
* [16] Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., et al.: Photo-realistic single image super-resolution using a generative adversarial network. In: Proceedings of the IEEE conference on computer vision and pattern recognition (2017)
* [17] Li, L., Bao, J., Yang, H., Chen, D., Wen, F.: Advancing high fidelity identity swapping for forgery detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (2020)
* [18] Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops (2017)
* [19] Rossler, A., Cozzolino, D., Verdoliva, L., Riess, C., Thies, J., Niessner, M.: Faceforensics++: Learning to detect manipulated facial images. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (2019)
* [20] Thies, J., Zollhöfer, M., Nießner, M.: Deferred neural rendering: Image synthesis using neural textures (2019). https://doi.org/10.1145/3306346.3323035
* [21] Thies, J., Zollhöfer, M., Stamminger, M., Theobalt, C., Nießner, M.: Face2face: Real-time face capture and reenactment of rgb videos. Commun. ACM (2018). https://doi.org/10.1145/3292039
* [22] Xiang, J., Zhu, G.: Joint face detection and facial expression recognition with mtcnn. In: 2017 4th International Conference on Information Science and Control Engineering (ICISCE) (2017). https://doi.org/10.1109/ICISCE.2017.95
* [23] Yang, W., Zhang, X., Tian, Y., Wang, W., Xue, J.H., Liao, Q.: Deep learning for single image super-resolution: A brief review. IEEE Transactions on Multimedia (2019). https://doi.org/10.1109/TMM.2019.2919431
* [24] Zheng, Y., Bao, J., Chen, D., Zeng, M., Wen, F.: Exploring temporal coherence for more general video face forgery detection. In: ICCV (2021). https://doi.org/10.1109/ICCV48922.2021.01477
|
# Conformal Intent Classification and Clarification for Fast and Accurate
Intent Recognition
Floris den Hengst
Vrije Universiteit Amsterdam
<EMAIL_ADDRESS>
&Ralf Wolter
ING Group NV
<EMAIL_ADDRESS>
Patrick Altmeyer
TU Delft
<EMAIL_ADDRESS>
&Arda Kaygan
ING Group NV
<EMAIL_ADDRESS>
###### Abstract
We present Conformal Intent Classification and Clarification (CICC), a
framework for fast and accurate intent classification for task-oriented
dialogue systems. The framework turns heuristic uncertainty scores of any
intent classifier into a clarification question that is guaranteed to contain
the true intent at a pre-defined confidence level. By disambiguating between a
small number of likely intents, the user query can be resolved quickly and
accurately. Additionally, we propose to augment the framework for out-of-scope
detection. In a comparative evaluation using seven intent recognition datasets
we find that CICC generates small clarification questions and is capable of
out-of-scope detection. CICC can help practitioners and researchers
substantially in improving the user experience of dialogue agents with
specific clarification questions.
## 1 Introduction
Intent classification (IC) is a crucial step in the selection of actions and
responses in task-oriented dialogue systems. To offer the best possible
experience with such systems, IC should accurately map user inputs to a
predefined set of intents. A widely known challenge of language in general,
and IC specifically, is that user utterances may be incomplete, erroneous, and
contain linguistic ambiguities.
Although IC is inherently challenging, a key strength of the conversational
setting is that disambiguation or _clarification_ questions (CQs) can be posed
(Purver et al., 2003; Alfieri et al., 2022). Posing the right CQ at the right
time results in a faster resolution of the user query, a more natural
conversation, and higher user satisfaction (van Zeelt et al., 2020; Keyvan and
Huang, 2022; Siro et al., 2022). CQs have been considered in the context of
information retrieval Zamani et al. (2020) but have received little attention
in the context of task-oriented dialogue.
Deciding when to ask a CQ and how to pose it are challenging tasks (DeVault
and Stone, 2007; Keyvan and Huang, 2022). First, it is not clear when the
system can safely proceed under the assumption that the true intent was
correctly identified. Second, it is not clear when the model is too uncertain
to formulate a CQ (Cavalin et al., 2020). Finally, it is unclear what the
exact information content of the clarification question should be.
We present Conformal Intent Classification and Clarification (CICC), a
framework for deciding when to ask a CQ, what its information content should
be, and how to formulate it. The framework uses conformal prediction to turn a
models’ predictive uncertainty into prediction sets that contain the true
intent at a predefined confidence level (Shafer and Vovk, 2008; Angelopoulos
et al., 2023). The approach is agnostic to the intent classifier, does not
require re-training of this model, guarantees that the true intent is in the
CQ, allows for rejecting the input as too ambiguous if the model is too
uncertain, has interpretable hyperparameters, generates clarification
questions that are small, and is amenable to the problem of detecting
out-of-scope inputs.
In a comparative evaluation with seven data sets and three IC models, we find
that CICC outperforms heuristic approaches to predictive uncertainty
quantification in all cases. The benefits of CICC are most prominent for
ambiguous inputs, which arise naturally in real-world dialogue settings
(Zamani et al., 2020; Larson et al., 2019).
## 2 Related Work
We discuss related work on ambiguity and uncertainty detection within IC and
CP with language models.
#### Clarification Questions
Various works acknowledge the problem of handling uncertainty in intent
classification and to address it with CQs. Dhole (2020) proposes a rule-based
approach for asking discriminative CQs. The approach is limited to CQs with
two intents, lacks a theoretical foundation, and provides no intuitive way of
balancing coverage with CQ size. Keyvan and Huang (2022) survey ambiguous
queries in the context of conversational search and list sources of ambiguity.
They mention that clarification questions should be short, specific, and based
on system uncertainty. We propose a principled approach to asking short and
specific questions based on uncertainty of any underlying intent classifier
for the purposes of task-oriented dialogue.
Alfieri et al. (2022) propose an approach for asking a CQ containing a fixed
top-$k$ most likely intents with intent-specific uncertainty thresholds. This
approach does not come with any theoretical guarantees and its hyperparameters
need to be tuned on an additional data set whereas our approach comes with
guarantees on coverage of the true intent and with intuitively interpretable
hyperparameters that can be tuned on the same calibration set. We do not
compare directly to this method but include top-$k$ selection in our
benchmark.
CQs have been studied in other domains, including information retrieval Zamani
et al. (2020), product description improvement Zhang and Zhu (2021), and open
question-answering Kuhn et al. (2023). In contrast to the task-specific domain
investigated in this work, these domains leave more room for asking generic
questions for clarification and do not easily allow for incorporating model
uncertainty. Furthermore, the proposed methods require ad hoc tuning of scores
based on heuristic metrics of model uncertainty, and do not provide ways to
directly balance model uncertainty with CQ size.
#### Uncertainty and out-of-scope detection
The out-of-scope detection task introduced by Larson et al. (2019) differs
from the task of handling model uncertainty and ambiguous inputs (Cavalin et
al., 2020; Yilmaz and Toraman, 2020; Zhan et al., 2021; Zhou et al., 2021).
However, predictive uncertainty is often used in addressing out-of-scope
detection, so we briefly discuss approaches that leverage model uncertainty
for out-of-scope detection here.
Various out-of-scope detection approaches train an intent classifier and tune
a decision boundary based on a measure of the classifier’s confidence Shu et
al. (2017); Lin and Xu (2019); Yan et al. (2020); Hendrycks et al. (2020).
Samples for which the predictive uncertainty of the model lies on one side of
the boundary are classified as out-of-scope. These approaches use the model’s heuristic uncertainty to decide whether an input is out-of-scope, whereas we first turn the model’s heuristic uncertainty into a prediction set with statistical guarantees and then use this prediction set to decide when and how to
formulate a clarification question. We additionally propose an adaptation of
the CICC framework for out-of-scope detection.
#### Conformal Prediction on NLP tasks
Conformal Prediction has been used in several NLP tasks, including sentiment
classification by Maltoudoglou et al. (2020), named-entity recognition by
Fisch et al. (2022) and paraphrase detection by Giovannotti and Gammerman
(2021). However, the application to intent classification, task-oriented
dialogue and the combination with CQs presented here is novel to our
knowledge.
## 3 Methodology
We address the problem of asking CQs in task-oriented dialogue systems in the
following way. We take a user utterance and a pre-trained intent classifier,
and then return an appropriate response based on the predictive uncertainty of
the model. Algorithm 1 lists these steps, and an example input is presented in
Figure 1. In this section we describe and detail the components of CICC. We
start by providing a background on conformal prediction.
Figure 1: The conformal intent classification and clarification interaction
loop.
### 3.1 Conformal Prediction
Conformal Prediction is a framework for creating statistically rigorous
prediction sets from a heuristic measure of predictive uncertainty of a
classifier (Shafer and Vovk, 2008; Angelopoulos et al., 2023). We here focus
on split conformal prediction as it does not require any retraining of the
underlying model, and refer to it simply as conformal prediction from here on
out.
For a classification task with classes $\mathcal{Y}=\\{1,\dots,K\\}$, a test input $X_{t}\in\mathcal{X}$ with label $Y_{t}\in\mathcal{Y}$, and a user-defined error level $\alpha\in\left[0,1\right)$, CP returns a set $\mathcal{C}(X_{t})\subseteq\mathcal{Y}$ for which the following holds Vovk et al. (1999), even with a finite number of samples:
$\mathbb{P}\left(Y_{t}\in\mathcal{C}(X_{t})\right)\geq 1-\alpha$ (1)
If, e.g., $\alpha=0.01$, the set $\mathcal{C}(X_{t})$ is therefore _guaranteed_ to contain the true $Y_{t}$ for at least 99% of test inputs.
Conformal prediction uses a heuristic measure of uncertainty of a pretrained
model and a modestly sized calibration set to generate prediction sets.
Formally, we assume a held-out calibration set $D=\\{(X_{i},Y_{i})\\}$ of size
$n$, a pre-trained classifier $\hat{f}:\mathcal{X}\to\mathbb{R}^{K}$, and a
nonconformity function $s:\mathcal{X}\times\mathcal{Y}\to\mathbb{R}$ that
returns heuristic uncertainty scores where larger values express higher
uncertainty. An example of a nonconformity function for a neural network
classifier is one minus the softmax output of the true class:
$s(X_{i},Y_{i}):=1-\hat{f}(X_{i})_{Y_{i}}.$ (2)
This score is high when the softmax score of the true class is low, i.e., when
the model is badly wrong.
The nonconformity function $s$ is evaluated on $D$ to generate a set of nonconformity scores $\mathcal{S}=\\{s(X_{i},Y_{i})\\}$. Next, a quantile $\hat{q}$ of the empirical distribution of $\mathcal{S}$ is determined so that the desired coverage ratio $(1-\alpha)$ is achieved. This can be done by choosing $\hat{q}$ as the $\lceil(n+1)(1-\alpha)\rceil/n$ empirical quantile of $\mathcal{S}$, where $\lceil\cdot\rceil$ denotes the ceiling function (essentially the $(1-\alpha)$ quantile with a minor finite-sample adjustment). Then, for a given test input $X_{t}$, all classes
$y\in\mathcal{Y}$ with high enough confidence are included in a prediction set
$\mathcal{C}(X_{t})$ :
$\mathcal{C}(X_{t}):=\\{y:s(X_{t},y)\leq\hat{q}\\}.$ (3)
This simple procedure guarantees that (1) holds, i.e., that the true $Y_{t}$ is in the set at the specified confidence $1-\alpha$. Note that no retraining or ensembling of classifiers is needed, that the procedure requires little compute, and that $D$ can be relatively small as long as it contains a fair number of examples for all classes and is exchangeable (distributed identically, but not necessarily independently, with the test data) Papadopoulos et al. (2002).
There are various implementations of conformal prediction with different nonconformity functions and performance characteristics. The simplest approach is known as _marginal_ conformal prediction and it uses the nonconformity function in (2). Marginal conformal prediction owes its name to adhering to the guarantee (1) marginalized over $\mathcal{X}$ and $\mathcal{Y}$, i.e. it satisfies the coverage requirement (1) on average, rather than, e.g., for a particular input $X_{t}$. Marginal CP can be
implemented following the steps described previously: (i) compute
nonconformity scores $S$ using (2), (ii) obtain $\hat{q}$ as described
previously, and (iii) construct a prediction set using (3) at test time. A
benefit of this approach is that it generates prediction sets with the
smallest possible prediction set size on average. A limitation is that its
prediction set sizes may not reflect hardness of the input Sadinle et al.
(2019).
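For concreteness, these three steps can be sketched in a few lines of Python; the names below are illustrative and the classifier's softmax outputs serve as the heuristic, so this is a minimal sketch rather than our actual implementation:

```python
import numpy as np

def calibrate(softmax_cal, labels_cal, alpha):
    """Steps (i)-(ii): nonconformity scores (2) on the calibration set,
    then the finite-sample-adjusted quantile q_hat."""
    n = len(labels_cal)
    scores = 1.0 - softmax_cal[np.arange(n), labels_cal]
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, level, method="higher")

def prediction_set(softmax_x, q_hat):
    """Step (iii): prediction set (3) for one test input."""
    return np.flatnonzero(1.0 - softmax_x <= q_hat)
```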
Alternatively, one can ensure conditional adherence to (1) with so-called
conditional or adaptive conformal predictors. A benefit of conditional
approaches is that higher model uncertainty results in larger prediction sets.
However, a downside is that these sets are expected to be larger on average
than those obtained with a marginal approach. Romano et al. (2020) introduce a conditional CP approach that consists of broadly the same steps as marginal CP but with a different nonconformity function $s$ and a different prediction set construction. First, we define a permutation $\pi(X)$ of $\\{1,\dots,K\\}$ that sorts $\hat{f}(X)$ in descending order. Conditional CP can then be defined as: (i) compute nonconformity scores by summing all predictor outputs $\hat{f}(X_{i})_{k}$ for $\\{k\in K\mid\hat{f}(X_{i})_{k}\geq\hat{f}(X_{i})_{Y_{i}}\\}$, (ii) obtain $\hat{q}$ as before, and (iii) include the top-ranked classes for a test input $X_{t}$:
$\mathcal{C}(X_{t}):=\\{\pi_{1}(X_{t}),\dots,\pi_{k}(X_{t})\\},$ (4)
where
$k=\text{sup}\left\\{k^{\prime}:\sum_{j=1}^{k^{\prime}}\hat{f}(X_{t})_{\pi_{j}(X_{t})}<\hat{q}\right\\}+1.$
(5)
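Under the same assumptions as the marginal sketch above (illustrative names, softmax outputs as the heuristic), the conditional construction can be sketched as:

```python
import numpy as np

def aps_calibrate(softmax_cal, labels_cal, alpha):
    """Step (i): score each example by the total mass of all classes ranked
    at least as high as the true class; step (ii): quantile as before."""
    n = len(labels_cal)
    true_p = softmax_cal[np.arange(n), labels_cal]
    scores = np.array([p[p >= tp].sum() for p, tp in zip(softmax_cal, true_p)])
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, level, method="higher")

def aps_prediction_set(softmax_x, q_hat):
    """Step (iii): include top-ranked classes per (4)-(5)."""
    order = np.argsort(-softmax_x)                    # permutation pi(X)
    cumulative = np.cumsum(softmax_x[order])
    k = int(np.searchsorted(cumulative, q_hat)) + 1   # k as in (5)
    return order[:k]
```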
Angelopoulos et al. (2021) introduce an approach with a term to regularize the
prediction set size: their approach is therefore known as Regularized Adaptive
Prediction Sets (RAPS). It effectively adds an increasing penalty to the
ranked model outputs in the first step of conditional CP in order to promote
smaller prediction sets where possible. Since the second and third step are
similar to conditional CP, its prediction sets still adhere to the coverage
guarantee (1).
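The penalty can be sketched as below; `lam` and `k_reg` are illustrative regularization hyperparameters, and the randomization of the original RAPS procedure is omitted for brevity:

```python
import numpy as np

def raps_score(softmax_x, label, lam=0.01, k_reg=5):
    """Conditional CP score for one calibration example plus an increasing
    penalty on the rank of the true class, which promotes smaller sets."""
    order = np.argsort(-softmax_x)
    rank = int(np.flatnonzero(order == label)[0]) + 1  # 1-based rank of true class
    cumulative = softmax_x[order[:rank]].sum()         # conditional CP score
    return cumulative + lam * max(0, rank - k_reg)
```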
In general, a suitable conformal prediction technique strikes the right
balance between three desiderata: (i) adhering to the coverage requirement in
(1), (ii) producing small prediction sets and (iii) adaptivity. Whereas the
former two can be measured easily, metrics for adaptivity require some more
care. Angelopoulos et al. (2021) introduce a general-purpose metric for
adaptivity. It is based on the coverage and referred to as the size-stratified
classification (SSC) score:
$\text{SSC}=\min_{b\in\\{1,\dots,K\\}}\frac{1}{|\mathcal{I}_{b}|}\sum_{i\in\mathcal{I}_{b}}\mathbbm{1}\left\\{Y_{i}\in\mathcal{C}\left(X_{i}\right)\right\\}$
(6)
for a classification task defined as above and
$\mathcal{I}_{b}\subset\\{1,\dots,n\\}$ the set of inputs with prediction set
size $b$, i.e.
$\mathcal{I}_{b}:=\left\\{i:|\mathcal{C}(X_{i})|=b\right\\}$.
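The SSC score is straightforward to compute from prediction sets and labels; a minimal sketch with illustrative names:

```python
import numpy as np

def ssc(pred_sets, labels):
    """Size-stratified classification score (6): the worst empirical
    coverage over groups of inputs sharing the same prediction-set size."""
    sizes = np.array([len(s) for s in pred_sets])
    hits = np.array([y in s for s, y in zip(pred_sets, labels)])
    return min(hits[sizes == b].mean() for b in np.unique(sizes))
```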
Within CICC, conformal prediction is applied to a pre-trained intent
classifier to create a set of intents that contains the true user intent at a
predefined confidence for any user utterance. The sets are then used in making
a decision on when to ask a clarification question and how to formulate it. We
continue to discuss when and how such questions are asked based on Algorithm 1
in the following section.
Algorithm 1 CICC algorithm
Input: utterance $X$, classifier $\hat{f}$, chat/voice-bot $c$, calibration
set $D$, generative LM $g$
Parameters: error rate $\alpha$, threshold $th$, ambiguity response $a$
Output: response $R$
1: set $\leftarrow$ conformal prediction$\left(\hat{f}(X),D,\alpha\right)$
2: if $|$set$|==1$ then
3: $R\leftarrow c($set.get()$)$. {bot response}
4: else if $|$set$|>th$ then
5: $R\leftarrow a$. {input too ambiguous}
6: else
7: $R\leftarrow g($set$,X)$ {clarification question}
8: end if
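In code, Algorithm 1 amounts to a short decision procedure; the sketch below uses placeholder callables for the calibrated conformal predictor, the downstream bot, and the CQ generator:

```python
def cicc_respond(x, conformal_set, bot, generate_cq, th, ambiguous_reply):
    """One pass of Algorithm 1 for a single user utterance x."""
    pred_set = conformal_set(x)          # ln 1: prediction set at level 1 - alpha
    if len(pred_set) == 1:               # ln 2-3: model is confident
        return bot(pred_set[0])
    if len(pred_set) > th:               # ln 4-5: input too ambiguous
        return ambiguous_reply
    return generate_cq(pred_set, x)      # ln 6-7: ask a clarification question
```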
### 3.2 When to Ask a Clarification Question
For a user utterance $X$, a pre-trained intent classifier $\hat{f}$ and a
nonconformity function $s$, we generate a prediction set that covers the true
user intent with confidence $1-\alpha$ (Algorithm 1, ln 1). If the set
contains a single intent, the model is confident that the true intent has been
detected and the dialogue can be handled as usual (ln 2-3).
If the set contains many intents, that is, more than a user-specified
threshold $th\in\mathbb{N}_{>0}$, then there is no reasonable ground for
formulating a clarification question. Instead, a generic request to rephrase
the question can be asked (ln 4-5), or a hand-over to a human operator could
be implemented here. In the remaining case, i.e. if the prediction set is of
reasonable size, a CQ is asked (ln 6-7).
CICC comes with two parameters to control when a CQ should be asked. Both have
clear semantics and can be interpreted intuitively. The first is the threshold
$th$ that controls when the input is too ambiguous to ask a CQ (Algorithm 1 ln
4-5). This parameter is set by the chatbot owner on the basis of best
practices in, and knowledge of chat- and voicebot interaction patterns. In
general, this number should remain small to reduce the cognitive load on
users. We advise setting this value to no more than seven Miller (1956); Plass et al. (2010).
The second parameter is the error rate $\alpha$. It controls the trade-off
between the prediction set size and how certain we want to be that the
prediction set covers the true intent. As $\alpha\to 0$, our confidence that
the true intent is included in the set grows, but so does the size of the
prediction set. Because conformal prediction is not compute intensive,
$\alpha$ can be set empirically. Thus, CICC provides a means of selecting
between _achievable_ trade-offs between prediction set sizes and error rates.
We continue to discuss how specific CQs are formulated in CICC.
### 3.3 Generating a Clarification Question
When a CQ is in order (ln 6-7 in Alg. 1), it needs to be formulated. We
propose to generate a CQ based on the original input $X$ and the prediction
set, as it is guaranteed to contain the true intent at a typically high level
of confidence. Because the alternatives in the CQ are the most likely intents
according to the model, and because the number of alternatives in the CQ
corresponds to the model’s uncertainty, asking a CQ provides a natural way of
communicating model uncertainty to the user while quickly determining the true
user intent.
CICC makes no assumptions about the approach for generating a CQ. Anything
from hardcoded questions, templating, or a generative LM can be used. However,
we recognize that the number of possible questions is large: it corresponds to the number of subsets of all $n$ intents of size at most $th$, excluding sets of size one and zero. Therefore, we opt to use a generative LM in our solution.
We prompt the LM to formulate a clarification question by giving it some
examples of clarification questions for a set of example intents to
disambiguate between. We additionally provide the original utterance $X$ to
enable the formulation of CQ relative to the original utterance. See Appendix
A for details.
### 3.4 Out-of-scope Detection
Ambiguity is a part of natural language and can lead to model uncertainty. Specific sources of uncertainty in intent recognition include inputs that are very short or very long, and imprecise or incomplete inputs. However, a
particularly interesting type of uncertainty stems from inputs that represent
intent classes that are not known at training time Zhan et al. (2021). These
inputs are referred to as out-of-scope (OOS) and detecting these inputs can be
seen as a binary classification task for which data sets with known OOS
samples have been developed.
CICC rejects inputs about which the model is too uncertain (Algorithm 1, ln 5)
and this naturally fits with the OOS detection task as follows: we can view a
rejection of an input as a classification of that input as OOS. Therefore,
although handling ambiguity in the model gracefully and detecting OOS inputs are separate challenges, vanilla CICC implements a form of OOS detection.
Additionally, the CICC framework can be leveraged for OOS detection if OOS
samples are known at calibration time. Specifically, we can optimize
parameters $\alpha$ and $th$ to maximize predictive performance expressed by
some suitable metric such as the F1-score on the calibration set. OOS samples
can be obtained from other intent recognition data sets in other domains. This
practice is described in detail by e.g. Zhan et al. (2021) under the name of
open-domain outliers. We refer to versions of CICC which have been optimized
for F1-score in this way as CICC-OOS.
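A sketch of this calibration-set search; the grids, the scoring function, and the names are illustrative:

```python
import numpy as np
from itertools import product
from sklearn.metrics import f1_score

def fit_cicc_oos(prediction_sets_fn, is_oos_cal,
                 alphas=np.linspace(0.001, 0.05, 20), ths=range(2, 11)):
    """Grid-search (alpha, th) maximizing F1 for OOS detection: an input is
    flagged OOS when its prediction set exceeds th (Algorithm 1, ln 5).
    prediction_sets_fn(alpha) returns prediction sets on the calibration set."""
    best = (-1.0, None, None)
    for alpha, th in product(alphas, ths):
        flagged = [len(s) > th for s in prediction_sets_fn(alpha)]
        score = f1_score(is_oos_cal, flagged)
        if score > best[0]:
            best = (score, alpha, th)
    return best  # (best F1, alpha, th)
```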
## 4 Experimental Setup
This section lists the experiments performed to comparatively evaluate CICC across seven data sets and on three IC models (code available at https://github.com/florisdenhengst/cicc).
Dataset | samples | intents
---|---|---
ACID Acharya and Fung (2020) | 22172 | 175
ATIS Hemphill et al. (1990) | 5871 | 26
B77 Casanueva et al. (2020) | 13083 | 77
B77-OOS | 16337 | 78
C150-IS Larson et al. (2019) | 18025 | 150
C150-OOS Larson et al. (2019) | 19025 | 151
HWU64 Liu et al. (2021) | 25716 | 64
IND | $\sim$20k | 61
MTOD (eng) Schuster et al. (2019) | 43323 | 12
Table 1: Characteristics of datasets used
#### Data
We evaluate CICC on six public intent recognition data sets in English and an
additional real-life industry data set (IND) from the banking domain in the
Dutch language. Table 1 shows the data sets and their main characteristics.
All data sets were split into train-calibration-test splits of proportions
0.6-0.2-0.2 with stratified sampling, except for the ATIS data set in which
stratified sampling is impossible due to the presence of intents with a single
sample. Random sampling was used for this data set instead. We use an in-scope
version (C150-IS) of the ‘unbalanced’ data set by Larson et al. (2019) in
which all out-of-scope samples have been removed.
For evaluation on out-of-scope (OOS) detection, we use two datasets: a version
of C150 with all OOS samples divided over the calibration and test splits, and
no OOS samples in the train split (C150-OOS), and a version of B77 with so-
called open-domain outliers in which samples from the ATIS dataset make up
half of the samples in the calibration and test splits to represent OOS inputs
(B77-OOS) Zhan et al. (2021).
#### Models
We employ fine-tuned BERT by Devlin et al. (2019) for all public data sets and
a custom model similar to BERT for the IND data set Alfieri et al. (2022). We
base the nonconformity scores on the softmax output in these settings. In
order to test performance on a commercial offering, we additionally evaluate
using DialogflowCX (DFCX) on the B77 data set (https://cloud.google.com/dialogflow/cx/docs). This commercial offering outputs heuristic certainty scores in the range $\left[0,100\right]$ for the top five most certain recognized intents. These outputs were normalized to sum to $1$; all other scores were set to $0$ to determine the nonconformity scores.
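A sketch of this mapping from DFCX output to nonconformity scores (names illustrative; `top5_ids` are the indices of the five returned intents):

```python
import numpy as np

def dfcx_nonconformity(top5_ids, top5_scores, num_intents):
    """Normalize DFCX's top-five scores in [0, 100] to sum to one, zero out
    the remaining intents, and apply the nonconformity function (2)."""
    p = np.zeros(num_intents)
    p[np.asarray(top5_ids)] = np.asarray(top5_scores, dtype=float)
    p /= p.sum()
    return 1.0 - p
```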
#### Baselines
In practice, CQs can be formulated using heuristics Alfieri et al. (2022). We compare CICC to the following baselines, which use the model’s heuristic uncertainty scores (a sketch of the selection rules follows the list):
* B1
select all intents with score $>1-\alpha$, select the top $k=5$ if this
selection is empty.
* B2
select all intents with a score $>1-\alpha$.
* B3
select the top $k=5$ intents.
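As referenced above, the three selection rules can be sketched on the softmax scores `p` (illustrative names):

```python
import numpy as np

def baseline_set(p, alpha, variant, k=5):
    """Heuristic baseline selections B1-B3."""
    above = np.flatnonzero(p > 1 - alpha)   # intents scoring above 1 - alpha
    top_k = np.argsort(-p)[:k]              # k most likely intents
    if variant == "B1":
        return above if len(above) > 0 else top_k
    if variant == "B2":
        return above
    return top_k                            # B3
```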
#### Metrics
We evaluate the approaches on a set of metrics that together accurately convey
the added benefit of asking a clarification question. We use the _size_ of the prediction set $\mathcal{C}(X_{i})$ and how often the input is rejected as too ambiguous for the model (Algorithm 1, ln 5). For a test set of size $n$:
$\text{Amb}:=\frac{1}{n}\sum_{i=1}^{n}\begin{cases}1&\text{if}~{}|\mathcal{C}(X_{i})|>th,\\\ 0&\text{otherwise}.\end{cases}$ (7)
First, we report how often the true intent is detected for the $m\leq n$ inputs that are not rejected (Algorithm 1, lns 3 and 7). This metric is known as coverage (cov) and can be seen as a generalisation of accuracy for set-valued predictions:
$\text{Cov}:=\frac{1}{m}\sum_{i=1}^{m}\mathbbm{1}_{\mathcal{C}(X_{i})}\left(Y_{i}\right).$ (8)
Second, we report the average size of the clarification questions for accepted
inputs (Algorithm 1, ln 7). This metric can be seen as an analogue to
precision for set-valued predictions:
$|\text{CQ}|=\frac{1}{m}\sum_{i=1}^{m}|\mathcal{C}(X_{i})|.$ (9)
Finally, we report the relative number of times the prediction set is of size
one
$\text{Single}:=\frac{1}{m}\sum_{i=1}^{m}\begin{cases}1&\text{if}~{}|\mathcal{C}(X_{i})|=1,\\\ 0&\text{otherwise,}\end{cases}$ (10)
in which case the dialogue can continue as usual (Algorithm 1, ln 3). We
additionally report the SSC as defined above in (6).
For out-of-scope detection we report the standard metrics F1-score and AUROC.
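All four set-based metrics can be computed in one pass over the test set; a sketch with illustrative names, treating inputs with sets larger than $th$ as rejected:

```python
import numpy as np

def evaluate(pred_sets, labels, th):
    """Compute Amb (7), Cov (8), |CQ| (9) and Single (10)."""
    sizes = np.array([len(s) for s in pred_sets])
    hits = np.array([y in s for s, y in zip(pred_sets, labels)])
    accepted = sizes <= th                    # not rejected as too ambiguous
    cq = accepted & (sizes > 1)               # cases in which a CQ is asked
    return {
        "Amb": float((~accepted).mean()),
        "Cov": float(hits[accepted].mean()),
        "|CQ|": float(sizes[cq].mean()) if cq.any() else float("nan"),
        "Single": float((sizes[accepted] == 1).mean()),
    }
```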
#### Parameters
We varied $\alpha$ and found the best settings empirically on the calibration
set. We report our key results for the best $\alpha$ and additionally
investigate the effect of varying $\alpha$.
We set the threshold $th$ at seven to avoid excessive cognitive load for users
for all experiments, except when using DFCX in which case we set $th$ to four
Miller (1956); Plass et al. (2010). The reason for this is that DFCX currently
only outputs non-zero scores for the top five intents. Hence, the set contains
all intents that have a non-zero confidence score with this setting.
We include the following conformal prediction approaches and select an
approach that produces the best empirical results in terms of coverage and CQ
size: marginal, conditional (also known as adaptive) Romano et al. (2020) and
RAPS Angelopoulos et al. (2021). Marginal conformal prediction was selected in all experiments; details can be found in Figure 2.
## 5 Results
Setting | $1-\alpha$ | $th$ | | Cov$\uparrow$ | Single$\uparrow$ | $|\text{CQ}|\downarrow$ | Amb
---|---|---|---|---|---|---|---
ACID | .98 | 7 | CICC | .98 | .87 | 3.01 | .03
| | | B1 | .98 | .88 | 5 | 0
| | | B2 | .95 | 1 | $-$ | 0
| | | B3 | .99 | 0 | 5 | 0
ATIS | .99 | 7 | CICC | .99 | .98 | 2.54 | 0
| | | B1 | .99 | .73 | 5 | 0
| | | B2 | .98 | 1.00 | - | 0
| | | B3 | 1.00 | 0 | 5 | 0
B77/BERT | .97 | 7 | CICC | .98 | .73 | 2.84 | .04
| | | B1 | .97 | .84 | 5 | 0
| | | B2 | .93 | 1 | $-$ | 0
| | | B3 | .98 | 0 | 5 | 0
B77/DFCX | .90 | 4 | CICC | .91 | .66 | 2.63 | .02
| | | B1 | .95 | .71 | 5 | .27
| | | B2 | .90 | .98 | 2.26 | 0
| | | B3 | .97 | 0 | 5 | 1
C150-ID | .99 | 7 | CICC | .99 | .97 | 2.66 | 0
| | | B1 | .99 | .82 | 5 | 0
| | | B2 | .98 | 1 | $-$ | 0
| | | B3 | 1 | 0 | 5 | 0
HWU64 | .95 | 7 | CICC | .95 | .82 | 2.81 | .01
| | | B1 | .97 | .70 | 5 | 0
| | | B2 | .90 | 1 | $-$ | 0
| | | B3 | .98 | 0 | 5 | 0
IND | .90 | 7 | CICC | .91 | .25 | 3.46 | .11
| | | B1 | .88 | .42 | 5 | 0
| | | B2 | .70 | 1 | $-$ | 0
| | | B3 | .91 | 0 | 5 | 0
MTOD | .99 | 7 | CICC | .99 | 1 | $-$ | 0
| | | B1 | 1 | .98 | 5 | 0
| | | B2 | .99 | 1 | $-$ | 0
| | | B3 | 1 | 0 | 5 | 0
Table 2: Test set results where underline indicates meeting coverage
requirement. Bold denotes best when meeting this requirement, omitted for last
column due to missing ground truth for ambiguous.
Table 2 lists the main results. The first column shows the coverage, i.e. the
percentage of test samples in which the ground truth is captured in the
prediction set. We see that only CICC and B3 adhere to the requirement of
coverage $\geq 1-\alpha$ in all settings. The second column shows the fraction
of test samples for which a single intent is detected. We see that CICC
outperforms the baselines that meet the coverage requirement in five out of
seven data sets.
The third column lists the average size of the CQ. We see that CICC yields the
smallest CQs and that the number of inputs deemed too ambiguous is relatively small for CICC. The last column denotes the relative number of inputs rejected as too ambiguous. CICC rejects a relatively low number
of inputs. Upon inspection, many of these inputs could be classified as
different intents based on the textual information alone (see Appendix B). For
the B77/DFCX setting, we see that B1 predicts a single output frequently, at
the cost of rejecting inputs as too ambiguous. This contrasts with CICC, which
rejects inputs much less frequently and instead asks a small CQ.
Dataset | Algorithm | 1-$\alpha$ | $th$ | F1$\uparrow$ | AUROC$\uparrow$
---|---|---|---|---|---
C150-OOS | CICC | .990 | 7 | .07 | .88
| CICC-OOS | .995 | 6 | .91 | .97
B77-OOS | CICC | .970 | 7 | .76 | .92
| CICC-OOS | .994 | 6 | .90 | .97
Table 3: Results for the OOS detection task.
Figure 2: Test set results for varying error rate $\alpha$.
We continue by looking at the results for OOS detection in Table 3. We find
that vanilla CICC does not perform well on OOS detection in comparison to
the specialized CICC-OOS variant. The specialized CICC-OOS favours a
relatively low $\alpha$ as this simultaneously forces the approach toward
large prediction sets for OOS samples and small prediction sets for in-sample
inputs. At the same time, using the CICC-OOS settings for parameters $\alpha$
and $th$ in the regular CICC interaction loop would result in relatively many
CQs of a relatively large size.
Next, we investigate how different conformal prediction approaches perform for
varying levels of $\alpha$ in Figure 2. The top figures show that all
conformal prediction approaches enable trading off set size with coverage, a
desirable property in the practice of intent classification. Looking at the
adaptivity (center figures), we see mixed results. A possible explanation for
this is in the general-purpose evaluation of adaptivity, which relies on the
minimum coverage across classes (see Eq. 6). The data sets used in our
experiments contain a relatively low number of examples for some classes and
these rare classes may have an outsized effect on the SSC metric. Looking at
the bottom figure for each data set, we see that all conformal prediction
approaches lie at or above the $x=y$ diagonal: conformal prediction always adheres to the coverage requirement, with the marginal approach yielding the smallest average set sizes.
## 6 Conclusion
We have proposed a framework for detecting and addressing uncertainty in
intent classification with conformal prediction. The framework empirically
determines when to ask a clarification question and how that question should
be formulated. The framework uses a moderately sized calibration set and comes
with intuitively interpretable parameters.
We have evaluated the framework in eight settings, and have found that the
framework strictly outperforms baselines across all metrics in six out of
eight cases and performs competitively in the other. The framework
additionally handles inputs that are too ambiguous for intent classification
naturally. We have additionally proposed and evaluated the usage of CICC for
out-of-scope detection and found that it is suitable for this task.
Finally, we believe that the framework opens promising avenues for future work, including the usage of intent groups for better adaptivity, an extension to Bayesian models to address data drift and unsupervised OOS detection with CICC (Fong and Holmes, 2021), the determination of conversation stopping rules based on subsequent requests to rephrase or clarify, and a combination with reinforcement learning for, e.g., personalization (Den Hengst et al., 2019, 2020). We believe that
CICC and/or conformal prediction may also prove useful in various other tasks,
including entity recognition, detecting label errors (Ying and Thomas, 2022)
and to empirically identify similar intents.
## Limitations
A limitation of the framework is that it relies on a user determining values
for the hyperparameters $\alpha$ and $th$. The former balances model certainty
with CQ size. Arguably, this trade-off has to be made in any approach and CICC
makes this an explicit choice between achievable trade-offs. The threshold
$th$ must be set not to reject too many inputs as too ambiguous while avoiding
information overload in the user. We advise setting it to no more than seven
based on established insights from cognitive science Miller (1956). However,
more research on the impact of CQ size on user satisfaction in various contexts
is in order. Another limitation is that the approach does not include a
mechanism for stopping the dialogue. We leave the investigation of stopping
criteria based on e.g. the number and size of CQs asked during the dialogue
for future work. Furthermore, this work did not thoroughly investigate the
quality of the CQs produced by the LLM. However, we view the CQ production
component as a pluggable component and therefore believe a full-scale
evaluation on this to be out-of-scope for this work. Additionally, using CICC
for OOS detection requires the presence of OOS labels. While these can be
obtained from other data sets using the practice of open-domain outliers Zhan
et al. (2021), fully unsupervised approaches based on, e.g., hierarchical Bayesian modeling, or parameter settings that yield good performance across data sets (as hinted at by Table 3), remain to be investigated. A final limitation is that we applied conformal
prediction to the softmax outputs of uncalibrated neural networks.
This makes results consistent across settings (including DFCX), but smaller
CQs may be achievable by applying Platt scaling prior to conformal prediction
calibration Platt et al. (1999).
## Acknowledgements
We thank Mark Jayson Doma and Jhon Cedric Arcilla for their help in obtaining
and understanding DialogflowCX model output. We kindly thank the reviewers for
their time and their useful comments, without which this work would not have
been possible in its current form.
## References
* Acharya and Fung (2020) Shailesh Acharya and Glenn Fung. 2020. Using optimal embeddings to learn new intents with few examples: An application in the insurance domain. In _KDD 2020 Workshop on Conversational Systems Towards Mainstream Adoption(KDD Converse 2020)_. CEUR-WS.org.
* Alfieri et al. (2022) Andrea Alfieri, Ralf Wolter, and Seyyed Hadi Hashemi. 2022. Intent disambiguation for task-oriented dialogue systems. In _Proceedings of the 31st ACM International Conference on Information & Knowledge Management_, pages 5079–5080.
* Angelopoulos et al. (2023) Anastasios N Angelopoulos, Stephen Bates, et al. 2023. Conformal prediction: A gentle introduction. _Foundations and Trends® in Machine Learning_ , 16(4):494–591.
* Angelopoulos et al. (2021) Anastasios Nikolas Angelopoulos, Stephen Bates, Michael Jordan, and Jitendra Malik. 2021. Uncertainty sets for image classifiers using conformal prediction. In _International Conference on Learning Representations_.
* Casanueva et al. (2020) Iñigo Casanueva, Tadas Temčinas, Daniela Gerz, Matthew Henderson, and Ivan Vulić. 2020. Efficient intent detection with dual sentence encoders. In _Proceedings of the 2nd Workshop on Natural Language Processing for Conversational AI_ , pages 38–45.
* Cavalin et al. (2020) Paulo Cavalin, Victor Henrique Alves Ribeiro, Ana Appel, and Claudio Pinhanez. 2020. Improving out-of-scope detection in intent classification by using embeddings of the word graph space of the classes. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 3952–3961.
* Den Hengst et al. (2020) Floris Den Hengst, Eoin Martino Grua, Ali el Hassouni, and Mark Hoogendoorn. 2020. Reinforcement learning for personalization: A systematic literature review. _Data Science_ , 3(2):107–147.
* Den Hengst et al. (2019) Floris Den Hengst, Mark Hoogendoorn, Frank Van Harmelen, and Joost Bosman. 2019. Reinforcement learning for personalized dialogue management. In _IEEE/WIC/ACM International Conference on Web Intelligence_ , pages 59–67.
* DeVault and Stone (2007) David DeVault and Matthew Stone. 2007. Managing ambiguities across utterances in dialogue. In _Proceedings of the 11th Workshop on the Semantics and Pragmatics of Dialogue (Decalog 2007)_ , pages 49–56.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
* Dhole (2020) Kaustubh D Dhole. 2020. Resolving intent ambiguities by retrieving discriminative clarifying questions. _arXiv preprint arXiv:2008.07559_.
* Fisch et al. (2022) Adam Fisch, Tal Schuster, Tommi Jaakkola, and Regina Barzilay. 2022. Conformal prediction sets with limited false positives. In _International Conference on Machine Learning_ , pages 6514–6532. PMLR.
* Fong and Holmes (2021) Edwin Fong and Chris C Holmes. 2021. Conformal bayesian computation. _Advances in Neural Information Processing Systems_ , 34:18268–18279.
* Giovannotti and Gammerman (2021) Patrizio Giovannotti and Alex Gammerman. 2021. Transformer-based conformal predictors for paraphrase detection. In _Conformal and Probabilistic Prediction and Applications_ , pages 243–265. PMLR.
* Hemphill et al. (1990) Charles T Hemphill, John J Godfrey, and George R Doddington. 1990. The atis spoken language systems pilot corpus. In _Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27, 1990_.
* Hendrycks et al. (2020) Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, Rishabh Krishnan, and Dawn Song. 2020. Pretrained transformers improve out-of-distribution robustness. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 2744–2751.
* Keyvan and Huang (2022) Kimiya Keyvan and Jimmy Xiangji Huang. 2022. How to approach ambiguous queries in conversational search: A survey of techniques, approaches, tools, and challenges. _ACM Computing Surveys_ , 55(6):1–40.
* Kuhn et al. (2023) Lorenz Kuhn, Yarin Gal, and Sebastian Farquhar. 2023. CLAM: Selective clarification for ambiguous questions with large language models. In _ICML Workshop Challenges of Deploying Generative AI_.
* Larson et al. (2019) Stefan Larson, Anish Mahendran, Joseph J. Peper, Christopher Clarke, Andrew Lee, Parker Hill, Jonathan K. Kummerfeld, Kevin Leach, Michael A. Laurenzano, Lingjia Tang, and Jason Mars. 2019. An evaluation dataset for intent classification and out-of-scope prediction. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 1311–1316, Hong Kong, China. Association for Computational Linguistics.
* Lin and Xu (2019) Ting-En Lin and Hua Xu. 2019. Deep unknown intent detection with margin loss. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 5491–5496.
* Liu et al. (2021) Xingkun Liu, Arash Eshghi, Pawel Swietojanski, and Verena Rieser. 2021. Benchmarking natural language understanding services for building conversational agents. In _Increasing Naturalness and Flexibility in Spoken Dialogue Interaction: 10th International Workshop on Spoken Dialogue Systems_ , pages 165–183. Springer.
* Maltoudoglou et al. (2020) Lysimachos Maltoudoglou, Andreas Paisios, and Harris Papadopoulos. 2020. Bert-based conformal predictor for sentiment analysis. In _Conformal and Probabilistic Prediction and Applications_ , pages 269–284. PMLR.
* Miller (1956) George A Miller. 1956. The magical number seven, plus or minus two: Some limits on our capacity for processing information. _Psychological review_ , 63(2):81.
* Papadopoulos et al. (2002) Harris Papadopoulos, Kostas Proedrou, Volodya Vovk, and Alex Gammerman. 2002. Inductive confidence machines for regression. In _Machine Learning: ECML 2002: 13th European Conference on Machine Learning Helsinki, Finland, August 19–23, 2002 Proceedings 13_ , pages 345–356. Springer.
* Plass et al. (2010) Jan L Plass, Roxana Moreno, and Roland Brünken, editors. 2010. _Cognitive load theory._ Cambridge University Press, New York, NY, US.
* Platt et al. (1999) John Platt et al. 1999. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. _Advances in large margin classifiers_ , 10(3):61–74.
* Purver et al. (2003) Matthew Purver, Jonathan Ginzburg, and Patrick Healey. 2003. On the means for clarification in dialogue. _Current and new directions in discourse and dialogue_ , pages 235–255.
* Romano et al. (2020) Yaniv Romano, Matteo Sesia, and Emmanuel Candes. 2020. Classification with valid and adaptive coverage. _Advances in Neural Information Processing Systems_ , 33:3581–3591.
* Sadinle et al. (2019) Mauricio Sadinle, Jing Lei, and Larry Wasserman. 2019. Least ambiguous set-valued classifiers with bounded error levels. _Journal of the American Statistical Association_ , 114(525):223–234.
* Schuster et al. (2019) Sebastian Schuster, Sonal Gupta, Rushin Shah, and Mike Lewis. 2019. Cross-lingual transfer learning for multilingual task oriented dialog. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 3795–3805.
* Shafer and Vovk (2008) Glenn Shafer and Vladimir Vovk. 2008. A tutorial on conformal prediction. _Journal of Machine Learning Research_ , 9(3).
* Shu et al. (2017) Lei Shu, Hu Xu, and Bing Liu. 2017. Doc: Deep open classification of text documents. In _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing_ , pages 2911–2916.
* Siro et al. (2022) Clemencia Siro, Mohammad Aliannejadi, and Maarten de Rijke. 2022. Understanding user satisfaction with task-oriented dialogue systems. In _Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval_ , pages 2018–2023.
* van Zeelt et al. (2020) Mickey van Zeelt, Floris den Hengst, and Seyyed Hadi Hashemi. 2020. Collecting high-quality dialogue user satisfaction ratings with third-party annotators. In _Proceedings of the 2020 Conference on Human Information Interaction and Retrieval_ , pages 363–367.
* Vovk et al. (1999) Volodya Vovk, Alexander Gammerman, and Craig Saunders. 1999. Machine-learning applications of algorithmic randomness. In _Proceedings of the Sixteenth International Conference on Machine Learning_ , pages 444–453.
* Yan et al. (2020) Guangfeng Yan, Lu Fan, Qimai Li, Han Liu, Xiaotong Zhang, Xiao-Ming Wu, and Albert YS Lam. 2020. Unknown intent detection using gaussian mixture model with an application to zero-shot intent classification. In _Proceedings of the 58th annual meeting of the association for computational linguistics_ , pages 1050–1060.
* Yilmaz and Toraman (2020) Eyup Halit Yilmaz and Cagri Toraman. 2020. Kloos: Kl divergence-based out-of-scope intent detection in human-to-machine conversations. In _Proceedings of the 43rd international ACM SIGIR conference on research and development in information retrieval_ , pages 2105–2108.
* Ying and Thomas (2022) Cecilia Ying and Stephen Thomas. 2022. Label errors in banking77. In _Proceedings of the Third Workshop on Insights from Negative Results in NLP_ , pages 139–143.
* Zamani et al. (2020) Hamed Zamani, Susan Dumais, Nick Craswell, Paul Bennett, and Gord Lueck. 2020. Generating clarifying questions for information retrieval. In _Proceedings of the web conference 2020_ , pages 418–428.
* Zhan et al. (2021) Li-Ming Zhan, Haowen Liang, Bo Liu, Lu Fan, Xiao-Ming Wu, and Albert YS Lam. 2021. Out-of-scope intent detection with self-supervision and discriminative training. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing_ , pages 3521–3532.
* Zhang and Zhu (2021) Zhiling Zhang and Kenny Zhu. 2021. Diverse and specific clarification question generation with keywords. In _Proceedings of the Web Conference 2021_ , pages 3501–3511.
* Zhou et al. (2021) Wenxuan Zhou, Fangyu Liu, and Muhao Chen. 2021. Contrastive out-of-distribution detection for pretrained transformers. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)_.
## Appendix A Appendix: Implementation Details
We used python v3.10.9 with packages numpy and pandas for data manipulation
and basic calculations, matplotlib to generate illustrations, mapie for
conformal prediction and reproduced these results in Julia and the package
conformalprediction.jl. We used the HuggingFace API for fine-tuning a version
of bert-base-uncased using the hyperparameters below. For an anonymized
version of the code and data see https://anonymous.4open.science/r/cicc-205A.
```
learning_rate = 4.00e-05
warmup_proportion = 0.1
train_batch_size = 32
eval_batch_size = 32
num_train_epochs = 5
```
### A.1 Generative Language Model
We use the eachadea/vicuna-7b-1.1 variant of the LLaMA model via the HuggingFace API for the experiments presented here. We here provide an example prompt:
Customers asked an ambiguous question. Complete each set with a disambiguation question.
Set 1: Customer Asked: ’The terminal I paid at wouldn’t take my card. Is something wrong?’
Option 1: ’card not working’
Option 2: ’card swallowed’
Disambiguation Question: ’I understand this was about you card. Was is swallowed or not working?’
**END**
Set 2:
Customer Asked: ’I have a problem with a transfer. It didn’t work. Can you tell me why?’
Option 1: ’declined transfer’
Option 2: ’failed transfer’
Disambiguation Question: ’I see you are having issues with your transfer. Was your transfer failed or declined?’
**END**
Set 3: Customer Asked: ’I transferred some money but it is not here yet’
Option 1: ’balance not updated after bank transfer’
Option 2: ’transfer not received by recipient’
Disambiguation Question:
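A sketch of how such a prompt can be completed with the HuggingFace API; the decoding parameters are illustrative, and `few_shot_prefix` stands for the two worked examples above:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="eachadea/vicuna-7b-1.1")

def generate_cq(utterance, intents, few_shot_prefix):
    """Build the few-shot prompt for one new set and return the completion."""
    prompt = (few_shot_prefix
              + f"Set 3: Customer Asked: '{utterance}'\n"
              + "".join(f"Option {i + 1}: '{intent}'\n"
                        for i, intent in enumerate(intents))
              + "Disambiguation Question:")
    out = generator(prompt, max_new_tokens=60, do_sample=False)
    return out[0]["generated_text"][len(prompt):].strip()
```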
More effort can be spent on prompt engineering, and more advanced generative LMs can be used, which we expect to improve user satisfaction with CICC.
Alternatively, simple text templates can be used. We consider the following
alternatives and list some of their expected benefits and downsides:
Templates
a simple template-based approach can be used in which the user is asked to differentiate between the identified intents. Benefits of templates include full control over the chatbot output, but downsides are that the CQs will be less varied, may sound less natural, and will not refer back to the user’s original utterance.
LM without user input
when using an LM, it is possible not to incorporate the user input $X$ in the prompt. This has the benefit of blocking any prompt injection, but the downside of possibly unnatural CQs due to the inability to refer to the user query.
LM with user input
by incorporating the user utterance into the LM prompt for CQ generation, the
CQ can refer back to the user’s phrasing and particular question, and
therefore be formulated in a possibly more natural way.
We believe that more research is warranted to identify which of these
approaches is most applicable in which cases, and how possible downsides of
these alternatives can be mitigated in practice.
## Appendix B Appendix: Sample ambiguous inputs
Tables 4 and 5 list inputs that are considered ambiguous by CICC in the B77 and
HWU64 data sets respectively. Some inputs could refer to multiple intents
whereas some other inputs could be considered out-of-scope.
# | Utterance | Label | Prediction Set
---|---|---|---
1 | what is the matter? | direct debit payment not recognised | activate my card, age limit, balance not updated after bank transfer, balance not updated after cheque or cash deposit, beneficiary not allowed, cancel transfer, card arrival, card delivery estimate, card not working, card swallowed, cash withdrawal not recognised, change pin, compromised card, contactless not working, country support, declined card payment, declined transfer, direct debit payment not recognised, exchange rate, failed transfer, get physical card, lost or stolen card, lost or stolen phone, pending card payment, pending cash withdrawal, pending transfer, pin blocked, Refund not showing up, reverted card payment?, terminate account, top up failed, top up reverted, transaction charged twice, transfer not received by recipient, transfer timing, unable to verify identity, why verify identity, wrong amount of cash received,
2 | Can I choose when my card is delivered? | card delivery estimate | activate my card, card about to expire, card acceptance, card arrival, card delivery estimate, change pin, contactless not working, country support, get physical card, getting spare card, getting virtual card, lost or stolen card, order physical card, supported cards and currencies, top up by bank transfer charge, top up by card charge, visa or mastercard
3 | My contanctless has stopped working | contactless not working | activate my card, apple pay or google pay, automatic top up, beneficiary not allowed, cancel transfer, card not working, card payment wrong exchange rate, contactless not working, declined card payment, disposable card limits, failed transfer, get disposable virtual card, get physical card, pending top up, pin blocked, top up failed, top up reverted, topping up by card, virtual card not working, visa or mastercard, wrong exchange rate for cash withdrawal
4 | I misplaced my card and I dont know where the last place is where I used the card last. Can you look at my account and tell me the last place I used the card? | lost or stolen card | activate my card, atm support, card acceptance, card linking, card swallowed, cash withdrawal not recognised, compromised card, lost or stolen card, lost or stolen phone, order physical card, pin blocked
5 | Is my card denied anywhere? | card acceptance | atm support, card acceptance, card not working, card payment fee charged, card swallowed, compromised card, contactless not working, declined card payment, lost or stolen card, lost or stolen phone, order physical card, unable to verify identity, visa or mastercard
Table 4: A sample of prediction sets on B77 of size greater than $th=7$ with marginal conformal prediction on BERT outputs. Plausible labels have been underlined.
# | Utterance | Label | Prediction Set
---|---|---|---
1 | olly | recommendation events | calendar set, general quirky, lists createoradd, music likeness, music query, play game, play music, play radio,
2 | this song is too good | music likeness | audio volume mute, general affirm, general commandstop, general joke, general negate, lists remove, music dislikeness, music likeness
3 | do i have to go to the gym | general quirky | calendar query, general quirky, lists query, recommendation events, recommendation locations, transport traffic, weather query
4 | silently adjust | audio volume mute | audio volume down, audio volume other, audio volume up, iot hue lightchange, iot hue lightdim, iot hue lightup, music settings
5 | how many times does it go | general quirky | datetime query, general quirky, lists query, qa factoid, qa maths, transport query, transport traffic
6 | sports head lines please | news query | calendar set, general quirky, iot hue lightchange, music likeness, news query, qa factoid, social post, weather query
7 | read that back | play audiobook | email addcontact, email query, email querycontact, email sendemail, general quirky, lists createoradd, music likeness, play audiobook, play music, social post,
8 | i don’t want to hear any more songs of that type | music dislikeness | audio volume mute, calendar remove, general commandstop, iot wemo off, lists remove, music dislikeness, music likeness
9 | check celebrity wiki | general quirky | email query, general quirky, lists query, news query, qa factoid, social post, social query
10 | Get all availables | lists query | email addcontact, email query, email querycontact, email sendemail, social post, social query, takeaway order,
11 | rating | music likeness | cooking recipe, general quirky, lists createoradd, lists query, music likeness, music query, qa definition, qa factoid,
12 | take me to mc donalds | transport query | play game, play podcasts, recommendation events, recommendation locations, recommendation movies, takeaway order, takeaway query
13 | search | qa factoid | email querycontact, general quirky, lists createoradd, lists query, music query, qa definition, qa factoid,
14 | unmute | audio volume up | audio volume down, audio volume mute, audio volume up, iot wemo off, music settings, play radio, transport query, transport traffic
15 | please unmute yourself | audio volume mute | alarm remove, audio volume down, audio volume mute, audio volume up, iot cleaning, iot wemo on, music settings, play game
16 | what’s the best day next week to go out for pizza | datetime query | calendar query, cooking recipe, general quirky, qa factoid, recommendation events, recommendation locations, takeaway query
17 | i need a manger | general quirky | calendar set, cooking recipe, general quirky, lists createoradd, music likeness, play game, qa definition, qa factoid, social post,
18 | assistant shuffle entire library | play music | iot cleaning, iot hue lightchange, lists createoradd, music settings, play audiobook, play game, play music
19 | put the disco lights on | iot hue lighton | alarm remove, iot cleaning, iot hue lightchange, iot hue lightoff, iot hue lighton, iot hue lightup, iot wemo on
20 | hello how are you today | general greet | general greet, general praise, general quirky, play radio, recommendation events, recommendation locations, recommendation movies
21 | where does tar work currently | email querycontact | cooking recipe, email querycontact, general quirky, lists query, qa definition, recommendation locations, takeaway query
22 | can you pull up jeff | email querycontact | general quirky, iot cleaning, news query, play game, play music, play radio, social post
23 | show me theatre where pirates of caribbean is playing | recommendation movies | general quirky, play game, play music, play podcasts, recommendation events, recommendation locations, recommendation movies
24 | spot | play podcasts | calendar set, music likeness, music query, play music, play radio, qa factoid, weather query,
25 | anything i need to know | general quirky | calendar query, calendar set, email query, email querycontact, general quirky, lists query, news query, social query
26 | give me some good sounds | play music | audio volume down, audio volume up, music likeness, music query, music settings, play music, play radio
27 | i don’t like it | music dislikeness | calendar remove, general commandstop, general dontcare, general joke, general negate, lists remove, music dislikeness, music likeness, social post
28 | what are you | general quirky | audio volume down, audio volume mute, audio volume up, music query, music settings, qa definition, recommendation locations, recommendation movies
29 | i don’t like this one | general quirky | audio volume mute, calendar remove, general joke, general negate, iot wemo off, lists remove, music dislikeness, social post
Table 5: Prediction sets on HWU64 of size greater than $th=7$ with marginal conformal prediction on BERT outputs. Plausible labels have been underlined.
## Appendix C Appendix: LLM results
We here present a random sample of CQs on B77 and C150.
Utterance | Prediction Set | LLM output
---|---|---
I hope you can help me. My account has been compromised from a different town that I haven’t been to. Can you please place a hold on my account until I can make it to the bank? | cash withdrawal not recognised, compromised card | I understand you are having issues with your account. Was your account compromised or your card not recognised for cash withdrawal?
Is my card denied anywhere? | card not working, declined card payment | I understand you are asking about your card. Was it declined or not working?
Money that I have transferred hasn’t arrived yet | balance not updated after bank transfer, transfer not received by recipient | I understand you are waiting for a transfer to be updated in your balance. Was the transfer not received by you or the recipient?
How long will the transfer take? | balance not updated after bank transfer,pending transfer, transfer timing | I understand you are asking about transfer timing. Was your transfer pending, not updated or not received?
What should I do with my stuck ATM? | card swallowed, declined cash withdrawal | I understand you have a stuck ATM. Was it your card that got stuck or was your cash withdrawal declined?
Table 6: Sample of LLM outputs on the B77 dataset where the user input has
been made part of the prompt.
Utterance | Prediction Set | LLM output
---|---|---
how many more miles can i drive on my current tank | gas, mpg | I understand you are asking about your car’s fuel. Are you asking about the number of miles you can drive on your current tank or the miles per gallon?
can i redeem my points for flights | redeem rewards, rewards balance | I understand you would like to redeem your points for flights. Are you looking to check your rewards balance or redeem your rewards?
are there any meetings between 2 and 5 pm | schedule meeting, meeting schedule | I understand you are looking for a schedule of meetings. Are you looking for a specific time range or a general schedule?
is the marathon on friday added to my calendar | calendar, calendar update | I understand you are referring to your calendar. Is it the marathon on friday that you are referring to or do you need an update on your calendar?
why didn’t my card work | expiration date, card declined | I understand you are having issues with your card. Is it because of the expiration date or was it declined?
Table 7: Sample of LLM outputs on the C150 dataset.
## Appendix D Appendix: Intent distributions
We here present the intent distributions for all public datasets across train,
calibration and test splits.
Figure 3: Intent distribution in ACID data set.
Figure 4: Intent distribution in ATIS data set.
Figure 5: Intent distribution in B77 data set.
Figure 6: Intent distribution in C150-IS data set.
Figure 7: Intent distribution in HWU64 data set.
Figure 8: Intent distribution in IND data set.
Figure 9: Intent distribution in MTOD data set.
## Appendix E Appendix: Unoptimized $\alpha$
This appendix contains results for an unoptimized $\alpha$ hyperparameter,
arbitrarily set at $.10$ and $.01$. We see that for most data sets, there is
no need to ask a clarification question as the model already achieves the
desired coverage. Much higher coverages (as in Table 2) are achievable for
these data sets. For some more challenging data sets such as C150, HWU64 and
IND, CICC yields small clarification questions while retaining a reasonably large number of prediction sets of size 1.
Setting | $1-\alpha$ | $th$ | | Cov$\uparrow$ | Single$\uparrow$ | $|\text{CQ}|\downarrow$ | Amb
---|---|---|---|---|---|---|---
ACID | .90 | 7 | CICC | .90 | .92 | $-$ | 0
| | | B1 | .97 | .93 | 5 | 0
| | | B2 | .95 | 1 | $-$ | 0
| | | B3 | .99 | 0 | 5 | 0
ATIS | .90 | 7 | CICC | .88 | .89 | $-$ | 0
| | | B1 | .99 | .93 | 5 | 0
| | | B2 | .98 | 1 | $-$ | 0
| | | B3 | 1 | 0 | 5 | 0
B77/BERT | .90 | 7 | CICC | .98 | .79 | 2.90 | .04
| | | B1 | .97 | .90 | 5 | 0
| | | B2 | .93 | 1 | $-$ | 0
| | | B3 | .99 | 0 | 5 | 0
B77/DFCX | .90 | 4 | CICC | .91 | .66 | 2.63 | .02
| | | B1 | .95 | .71 | 4.79 | .27
| | | B2 | .90 | .98 | 2.26 | 0
| | | B3 | .97 | 0 | 5 | 1
C150 | .90 | 7 | CICC | .99 | .97 | 2.66 | 0
| | | B1 | .99 | .82 | 5 | 0
| | | B2 | .98 | 1 | $-$ | 0
| | | B3 | 1 | 0 | 5 | 0
HWU64 | .90 | 7 | CICC | .90 | .97 | 2.00 | 0
| | | B1 | .96 | .79 | 5 | 0
| | | B2 | .90 | 1 | $-$ | 0
| | | B3 | .98 | 0 | 5 | 0
IND | .90 | 7 | CICC | .91 | .25 | 3.46 | .11
| | | B1 | .88 | .42 | 5 | 0
| | | B2 | .70 | 1 | $-$ | 0
| | | B3 | .91 | 0 | 5 | 0
MTOD | .90 | 7 | CICC | .90 | .90 | $-$ | 0
| | | B1 | .99 | .99 | 5 | 0
| | | B2 | .99 | 1 | $-$ | 0
| | | B3 | 1 | 0 | 5 | 0
Table 8: Test set results for $1-\alpha=.90$ where underline indicates meeting coverage requirement. Bold denotes best when meeting this requirement, omitted for last column due to missing ground truth for ambiguous.
Setting | $1-\alpha$ | $th$ | | Cov$\uparrow$ | Single$\uparrow$ | $|\text{CQ}|\downarrow$ | Amb
---|---|---|---|---|---|---|---
ACID | .99 | 7 | CICC | 1 | .77 | 3.00 | .10
| | | B1 | .98 | .85 | 5 | 0
| | | B2 | .95 | 1 | $-$ | 0
| | | B3 | .99 | 0 | 5 | 0
ATIS | .99 | 7 | CICC | .99 | .98 | 2.54 | 0
| | | B1 | .99 | .73 | 5 | 0
| | | B2 | .98 | 1 | $-$ | 0
| | | B3 | 1 | 0 | 5 | 0
B77/BERT | .99 | 7 | CICC | .98 | .79 | 2.90 | .04
| | | B1 | .97 | .90 | 5 | 0
| | | B2 | .93 | 1 | $-$ | 0
| | | B3 | .99 | 0 | 5 | 0
B77/DFCX | .99 | 4 | CICC | .97 | 0 | 5 | 1
| | | B1 | .97 | .05 | 5 | .95
| | | B2 | .90 | 1 | $-$ | 0
| | | B3 | .97 | 0 | 5 | 1
C150 | .99 | 7 | CICC | .99 | .97 | 2.66 | 0
| | | B1 | .99 | .82 | 5 | 0
| | | B2 | .98 | 1 | $-$ | 0
| | | B3 | 1 | 0 | 5 | 0
HWU64 | .99 | 7 | CICC | .99 | .25 | 3.39 | .28
| | | B1 | .98 | .05 | 5 | 0
| | | B2 | .90 | 1 | $-$ | 0
| | | B3 | .98 | 0 | 5 | 0
MTOD | .99 | 7 | CICC | .99 | 1 | $-$ | 0
| | | B1 | 1 | .98 | 5 | 0
| | | B2 | .99 | 1 | $-$ | 0
| | | B3 | 1 | 0 | 5 | 0
Table 9: Test set results for $1-\alpha=.99$ where underline indicates meeting
coverage requirement. Bold denotes best when meeting this requirement, omitted
for last column due to missing ground truth for ambiguous.
## Appendix F Appendix: Comparison results OOS detection
We here compare the results of OOS detection as reported by the baselines. Note that these results were generated on different splits of the data and, where applicable, possibly using different open-domain samples, and that a direct comparison between results is therefore invalid.
# Exotic Compact Objects: The Dark White Dwarf
Michael Ryan <EMAIL_ADDRESS>
Institute for Gravitation and the Cosmos, The Pennsylvania State University, University Park, PA 16802, USA; Department of Physics, The Pennsylvania State University, University Park, PA 16802, USA
David Radice <EMAIL_ADDRESS>
Institute for Gravitation and the Cosmos, The Pennsylvania State University, University Park, PA 16802, USA; Department of Physics, The Pennsylvania State University, University Park, PA 16802, USA; Department of Astronomy & Astrophysics, The Pennsylvania State University, University Park, PA 16802, USA
###### Abstract
Several dark matter models allow for the intriguing possibility of exotic
compact object formation. These objects might have unique characteristics that
set them apart from their baryonic counterparts. Furthermore, gravitational
wave observations of their mergers may provide the only direct window on a
potentially entirely hidden sector. Here we discuss dark white dwarfs,
starting with an overview of the microphysical model and analytic scaling
relations of macroscopic properties derived from the non-relativistic limit.
We use the full relativistic formalism to confirm these scaling relations and
demonstrate that dark white dwarfs, if they exist, would have masses and tidal
deformabilities that are very different from those of baryonic compact
objects. Further, and most importantly, we demonstrate that dark white dwarf
mergers would be detectable by current or planned gravitational observatories
across several orders of magnitude in the particle-mass parameter space.
Lastly, we find universal relations analogous to the compactness-Love and
binary Love relations in neutron star literature. Using these results, we show
that gravitational wave observations would constrain the properties of the
dark matter particles constituting these objects.
cosmology: theory – dark matter – compact objects
## I Introduction
Current dark matter search techniques focus on two primary channels: large-
scale structure constraints (e.g. [1, 2]) and direct and indirect detection
experiments (e.g. [3, 4, 5, 6, 7]). While these have constrained several of
the bulk properties of dark matter, i.e. that dark matter is cold,
particulate, and effectively collisionless on large scales, the current lack
of dark matter detection or production has not helped to narrow the space of
models. Likewise, the field of astro-particle indirect detection (e.g. [8, 9,
10]), while showing possible signals of interest [11], has also not yet
produced definitive results. On the other hand, the advent of gravitational wave
observations has opened a new window on the universe that could illuminate the
dark sector in a completely novel manner.
Several promising alternative dark matter models have a “complex” (two or more
massive particles) particle zoo, and dissipative interactions create the
potential for gravitationally-bound macroscopic structures. Many of these
models even form exotic compact objects, with both “dark” black holes (a
normal black hole formed from dark matter instead of baryonic matter) [12, 13,
14, 15, 16, 17, 18, 19] and dark (neutron) stars [20, 21, 22, 23] having been
proposed. The merger of these exotic compact objects with each other, or with
astrophysical compact objects could be revealed by gravitational wave
detectors, such as LIGO. Alternatively, dark matter capture by ordinary
compact objects could result in the formation of compact, dark matter cores in
their interior[24, 25, 26, 27, 28, 29], creating hybrid dark/baryonic objects
ranging from planetary to stellar masses.
While dark black holes and neutron stars are the obvious counterparts to
ordinary black holes and neutron stars, little attention has been paid to the
dark equivalent of the third member of the ordinary compact object trio: the
(dark) white dwarf (DWD). In the simplest sense, a white dwarf is a compact
object predominantly composed of degenerate light electrons and massive
nuclei. In white dwarfs, a balance is struck between gravitational and
(electron-dominated) fermion degeneracy pressure forces. This balance sets
many of their macroscopic properties, like the radius and compactness. Above a
certain mass (the well-known Chandrasekhar limit), this balance is broken and
no static configurations exist. Likewise, above a certain density (the onset
of neutronization) these objects again become dynamically unstable to
collapse. Here, we consider the dark matter analogs to white dwarfs. These
would be compact objects, predominantly composed of two or more fermion
species of dark matter, where the pressure support is primarily provided by
fermion degeneracy pressure. Importantly, in hidden, multi-particle dark
matter models that lack “nuclear” reactions, DWDs may be the only sub-
Chandrasekhar-mass compact objects that can form. Also note that, with this
definition, in the limit where the particle species contribute equally to the
pressure and mass (the single particle limit), these objects may be
macroscopically indistinguishable from dark neutron stars in models with weak
or no nuclear interactions (e.g. [30, 22]).
The literature on objects fitting the definition above is sparse, with few
articles discussing objects that fit all three criteria. For example, Narain
et al. [30] describe a general, particle-mass-independent framework for dark,
fermionic compact objects, assuming single species composition, even studying
the effects of a generic interaction term. Gross et al. [31] studied DWD-like
objects as a possible final state of collapsed dark quark pockets (assuming
multiple species, but identical masses) in an analogue of primordial black
holes, computing their mass, radii, and long-term stability with that method.
Using a less general approach, we extend these analyses to a two-particle,
fermionic gas with potentially varying masses, and include a brief examination
of additional binary properties, like the tidal deformability and potential
universal relations. Hippert et al. [21] mention the high plausibility of dark
white dwarf formation in the Twin Higgs mirror model, but focus on neutron
stars instead. Brandt et al. [32] consider the stability of a multi-species
model for what they refer to as a pion star, focusing on using lattice QCD
methods to obtain a precise equation of state. Meanwhile, the discussion of
dark planets and similar low-mass, multi-component objects ([24, 25, 27])
requires the mixing of ordinary matter with dark matter. Consequently, the DWD
space has remained largely unexplored.
We hasten to mention that, like most dark, non-primordial black hole and
neutron star models, these objects do not necessarily constitute the entirety
of dark matter. While the formation of these objects is outside the scope of
this work, we note that regardless of the pathway, the population must obey
the constraints imposed by microlensing and similar primordial black hole and
massive, compact halo object searches (e.g. [33, 34]). These searches impose
constraints on the fraction of dark matter contained in compact objects versus
all dark matter. For the masses considered here, this corresponds to less than
$\mathcal{O}(0.01)$ for subsolar mass DWDs, decreasing to
$\mathcal{O}(10^{-4})$ for objects above $10\text{\,}\mathrm{M_{\odot}}$.
Given that ordinary white dwarfs in the Milky Way correspond to a similar
$\mathcal{O}(0.01)$ fraction of the ordinary matter [35, 36], this constraint
seems reasonable for models that predict a dark astrophysical formation like
[21].
In the following sections we examine the properties of the most basic DWD
model following our definition above: two particle species forming a compact
object, with fermion degeneracy pressure providing the dominant support
against gravitational collapse. We start with a discussion of the basic
properties of DWD that can be inferred analytically in Section II, including
the calculation of an equation of state and scaling relations for the mass,
radius, and compactness in the non-relativistic limit. We then discuss the
results of the fully relativistic hydrostatic-equilibrium calculations across
the particle-mass parameter space, highlighting four example parameter cases
in Section III and examining several of the macroscopic attributes, potential
universal relations, and implications for DWD merger observations.
Importantly, we demonstrate that DWD mergers should be detectable by current
and planned gravitational wave observatories across much of the dark parameter
space and observations can be used to constrain the dark microphysics. Lastly,
we conclude in Section IV.
## II Analytic Scaling Relations
### II.1 Equation of State
We consider a simplified, cold compact object comprised of a cloud of
degenerate, fundamental fermionic particles, $L$, and $H$. The particles have
masses $m_{H}$ and $m_{L}$, defined such that $m_{H}\geq m_{L}$, analogous to
the Standard Model proton and electron. We will use the notation
$\displaystyle\mathcal{r}_{H}=\frac{m_{H}}{m_{p}},\;\;\text{ and
}\;\;\mathcal{r}_{L}=\frac{m_{L}}{m_{e}},$ (1)
throughout this and following sections, where $m_{p}$ and $m_{e}$ are the
proton and electron masses. We will assume approximately neutral “charge” in
bulk, i.e. an equal numbers of $L$ and $H$ particles.
The basic thermodynamic properties of such a cloud are well known ([37, 38]),
with the number density, pressure, and energy density of a single fermionic
particle $f$ given by:
$n_{f}=\frac{8\pi}{3h^{3}}p_{\rm
fermi}^{3}=\frac{x^{3}}{3\pi^{2}\lambda_{f}^{3}}$ (2) $\displaystyle P_{f}$
$\displaystyle=\frac{8\pi
m_{f}^{4}c^{5}}{3h^{3}}\int_{0}^{x}\frac{y^{4}}{(1+y^{2})^{1/2}}\text{d}y$
$\displaystyle=\frac{m_{f}c^{2}}{\lambda_{f}^{3}}\Bigl{[}\frac{1}{8\pi^{2}}\bigl{(}x(1+x^{2})^{1/2}\left(\frac{2}{3}x^{2}-1\right)$
$\displaystyle\qquad+\ln\left(x+(1+x^{2})^{1/2}\right)\bigr{)}\Bigr{]}$
$\displaystyle=\frac{m_{f}c^{2}}{\lambda_{f}^{3}}\phi(x)$ (3)
$\displaystyle\varepsilon_{f}$ $\displaystyle=\frac{8\pi
m_{f}^{4}c^{5}}{h^{3}}\int_{0}^{x}y^{2}\sqrt{y^{2}+1}\text{d}y$
$\displaystyle=\frac{m_{f}c^{2}}{\lambda_{f}^{3}}\Bigl{[}\frac{1}{8\pi^{2}}\bigl{(}x(1+x^{2})^{1/2}\left(1+2x^{2}\right)$
$\displaystyle\qquad-\ln\left(x+(1+x^{2})^{1/2}\right)\bigr{)}\Bigl{]}$
$\displaystyle=\frac{m_{f}c^{2}}{\lambda_{f}^{3}}\chi(x).$ (4)
Here, $\lambda_{f}=\hbar/(m_{f}c)$ is the Compton wavelength, $x$ is the
dimensionless Fermi momentum, defined using Equation (2), and
$y=(pc)/(m_{f}c^{2})$. The total pressure in a two-component gas is then just
$P_{\rm tot}=P_{H}+P_{L}$, while the total energy density is
$\varepsilon_{\rm tot}=\varepsilon_{H}+\varepsilon_{L}$.
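To make the two-component EoS concrete, the following sketch (our own illustration in Python, not code from any reference; the helper names are ours) evaluates $P_{\rm tot}$ and $\varepsilon_{\rm tot}$ in cgs units. Charge neutrality ($n_{H}=n_{L}$) together with Equation (2) fixes $x_{H}=x_{L}\,m_{L}/m_{H}$.

```python
import numpy as np

def phi(x):
    """Dimensionless pressure integral of Eq. (3)."""
    s = np.sqrt(1.0 + x**2)
    return (x * s * (2.0 * x**2 / 3.0 - 1.0) + np.log(x + s)) / (8.0 * np.pi**2)

def chi(x):
    """Dimensionless energy-density integral of Eq. (4)."""
    s = np.sqrt(1.0 + x**2)
    return (x * s * (1.0 + 2.0 * x**2) - np.log(x + s)) / (8.0 * np.pi**2)

def eos_two_fermion(x_L, m_L, m_H, hbar=1.0546e-27, c=2.9979e10):
    """Total pressure and energy density (cgs) of the two-component gas.
    Charge neutrality (n_H = n_L) with Eq. (2) implies x_H = x_L * m_L / m_H."""
    x_H = x_L * m_L / m_H
    lam_L, lam_H = hbar / (m_L * c), hbar / (m_H * c)
    P = m_L * c**2 / lam_L**3 * phi(x_L) + m_H * c**2 / lam_H**3 * phi(x_H)
    eps = m_L * c**2 / lam_L**3 * chi(x_L) + m_H * c**2 / lam_H**3 * chi(x_H)
    return P, eps
```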
We will assume that interparticle interactions contribute at most a small
correction to the energy density and will not include their effects here. This
follows the standard white dwarf model, where the electrostatic interaction
contributes the dominant correction, at least at high densities, with a
correction to the pressure on the order of the fine structure constant,
$P_{\rm correction}/P\approx 0.4\%$ for a hydrogen white dwarf [38]. We have
chosen this path to preserve the generalizability of these results, since the
type of interaction changes between dark matter models.
### II.2 Polytropic approximation
Since the full EoS is complicated, especially when considered across the
$m_{L}-m_{H}$ parameter space, it is convenient to expand the EoS as a power
series in $x$, keeping only the dominant term. Then the EoS can be
approximated as a polytropic function, $P(x)=Kx^{\Gamma/3}$, where $K$ and
$\Gamma$ are the polytropic constants. Commonly, this is rewritten in terms of
the rest mass density ($\rho_{0}=\sum m_{f}n_{f}\approx m_{H}n_{H}\approx
m_{H}n_{L}$) and the polytropic index $n$, as $P(\rho_{0})=K\rho_{0}^{1+1/n}$.
The polytropic approximation then falls into one of four limiting cases,
depending on whether the particles are highly relativistic ($x_{f}\gg 1$) or
non-relativistic ($x_{f}\ll 1$), and whether the particle masses are
substantially different ($m_{L}\ll m_{H}$) or similar ($m_{L}\approx m_{H}$).
Generally, the heavy particles are non-relativistic except in the similar-mass,
highly relativistic limit, and the similar-mass limit can also be thought of as
the single-particle limit, approximately obtainable with the replacement
$L\rightarrow H$. From Equation (2), the relativity condition can be written as
a condition on $\rho_{0}$, with the non-relativistic (light particle) limit as
$\rho_{0}\ll 10^{6}\,\mathrm{g\,cm^{-3}}\,\mathcal{r}_{L}^{3}\,\mathcal{r}_{H}$
and the highly relativistic limit
$\rho_{0}\gg 10^{6}\,\mathrm{g\,cm^{-3}}\,\mathcal{r}_{L}^{3}\,\mathcal{r}_{H}$.
Using the notation from above and defining $K_{\rm WD}(n)$ as the
$n$-dependent polytropic constant for the ordinary white dwarf, the four cases
can be written as
$P_{\rm
tot}\approx\begin{cases}\mathcal{r}_{L}^{-1}\,\mathcal{r}_{H}^{-5/3}K_{\rm
WD}(\frac{3}{2})\,\rho_{0}^{5/3}&(a)\\\ \mathcal{r}_{H}^{-4/3}K_{\rm
WD}(3)\,\rho_{0}^{4/3}&(b)\\\ 2\,\mathcal{r}_{H}^{-8/3}K_{\rm
WD}(\frac{3}{2})\,\rho_{0}^{5/3}&(c)\\\ 2\,\mathcal{r}_{H}^{-4/3}K_{\rm
WD}(3)\,\rho_{0}^{4/3}&(d)\end{cases},$ (5)
with $(a)$ being
$\rho_{0}\ll 10^{6}\,\mathrm{g\,cm^{-3}}\,\mathcal{r}_{L}^{3}\,\mathcal{r}_{H}$,
$m_{L}\ll m_{H}$; $(b)$ being
$\rho_{0}\gg 10^{6}\,\mathrm{g\,cm^{-3}}\,\mathcal{r}_{L}^{3}\,\mathcal{r}_{H}$,
$m_{L}\ll m_{H}$; $(c)$ being
$\rho_{0}\ll 10^{6}\,\mathrm{g\,cm^{-3}}\,\mathcal{r}_{H}^{4}$,
$m_{L}\approx m_{H}$; and $(d)$ being
$\rho_{0}\gg 10^{6}\,\mathrm{g\,cm^{-3}}\,\mathcal{r}_{H}^{4}$,
$m_{L}\approx m_{H}$. Of note, the $(a)$ and $(b)$ cases correspond to the
ordinary white dwarf when $\mathcal{r}_{H}=\mathcal{r}_{L}=1$.
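As a minimal illustration (ours; the function and argument names are chosen for this sketch), the case selection of Equation (5) can be encoded directly. The returned multiplier rescales the ordinary white dwarf constant $K_{\rm WD}(n)$:

```python
def polytropic_limit(rho0, rL, rH, similar_masses=False):
    """Return (n, multiplier) such that P ~ multiplier * K_WD(n) * rho0**(1 + 1/n),
    following the four limiting cases of Eq. (5); rho0 is in g/cm^3."""
    rho_crit = 1e6 * (rH**4 if similar_masses else rL**3 * rH)  # relativity threshold
    if not similar_masses:                     # m_L << m_H
        if rho0 < rho_crit:                    # case (a): non-relativistic
            return 1.5, rL**(-1.0) * rH**(-5.0 / 3.0)
        return 3.0, rH**(-4.0 / 3.0)           # case (b): highly relativistic
    if rho0 < rho_crit:                        # case (c): non-relativistic
        return 1.5, 2.0 * rH**(-8.0 / 3.0)
    return 3.0, 2.0 * rH**(-4.0 / 3.0)         # case (d): highly relativistic
```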
### II.3 Newtonian Hydrostatic Approximation
Next, we examine the parametric dependencies of the DWD mass, radius, and
compactness. This can be accomplished by solving the Newtonian hydrostatic
equilibrium equations. Defining $m(r)$ as the total mass contained within
radius $r$, $p(r)$ as the net outward pressure, and gravitational constant
$G$, we have
$\displaystyle\frac{dp}{dr}$ $\displaystyle=-\frac{Gm}{r^{2}}\rho(r),$ (6a)
$\displaystyle\frac{dm}{dr}$ $\displaystyle=4\pi r^{2}\rho(r).$ (6b)
With the inclusion of a polytropic EoS and the definitions,
$\rho_{c}=\rho(r=0)$, $\rho=\rho_{c}\theta^{n}$, $r=a\xi$, and
$a=[(n+1)K\rho_{c}^{1/n-1}/(4\pi G)]^{1/2}$, Equations (6a) and (6b) can be
combined into the Lane-Emden equation,
$\frac{1}{\xi^{2}}\frac{d}{d\xi}\xi^{2}\frac{d\theta}{d\xi}=-\theta^{n}.$ (7)
Note that the only remaining polytropic parameter is the index; this equation
is otherwise independent of the EoS and thus of the particle-mass dependencies. Numeric
integration of Equation (7) with the boundary conditions $\theta(0)=1$,
$\theta^{\prime}(0)=0$, gives the point $\theta(\xi_{1})=0$, which corresponds
to the surface of the object. Undoing the previous transformations provides
solutions for the final radius ($R$) and mass ($M$) of the DWD,
$\displaystyle R_{\rm DWD}$ $\displaystyle=\left(\frac{(n+1)K}{4\pi
G}\right)^{1/2}\rho_{c}^{(1-n)/2n}\xi_{1}$ $\displaystyle M_{\rm DWD}$
$\displaystyle=4\pi\left(\frac{(n+1)K}{4\pi
G}\right)^{3/2}\rho_{c}^{(3-n)/2n}\xi_{1}^{2}|\theta^{\prime}(\xi_{1})|$ . (8)
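The surface values $\xi_{1}$ and $|\theta^{\prime}(\xi_{1})|$ entering Equation (8) are straightforward to obtain numerically; a short SciPy sketch (ours, for illustration only):

```python
import numpy as np
from scipy.integrate import solve_ivp

def lane_emden(n):
    """Integrate Eq. (7) outward and return xi_1 and |theta'(xi_1)|."""
    def rhs(xi, u):
        theta, dtheta = u
        # theta'' = -theta^n - 2 theta'/xi; guard against theta < 0 near the surface
        return [dtheta, -np.sign(theta) * abs(theta)**n - 2.0 * dtheta / xi]
    surface = lambda xi, u: u[0]               # theta = 0 marks the stellar surface
    surface.terminal, surface.direction = True, -1
    sol = solve_ivp(rhs, [1e-6, 50.0], [1.0, 0.0], events=surface,
                    rtol=1e-10, atol=1e-12, dense_output=True)
    xi1 = sol.t_events[0][0]
    return xi1, abs(sol.sol(xi1)[1])

# n = 3/2 gives xi1 ~ 3.654 with xi1^2 |theta'(xi1)| ~ 2.714; n = 3 gives xi1 ~ 6.897.
```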
These equations can be rewritten in terms of the ordinary white dwarf mass and
radius using the density scaling $\rho_{\rm
DWD}=\mathcal{r}_{L}^{3}\,\mathcal{r}_{H}\rho_{WD}$ and polytropic constant
scaling
$\mathcal{r}_{K}=\frac{K_{\rm DWD}(n)}{K_{\rm WD}(n)},$ (9)
with $K_{\rm DWD}$ from Equation (5), giving
$\displaystyle R_{\rm DWD}$
$\displaystyle=\left(\mathcal{r}_{L}^{3}\mathcal{r}_{H}\right)^{(1-n)/2n}\mathcal{r}_{K}^{1/2}R_{\rm
WD}(\rho_{c})$ $\displaystyle=\mathcal{r}_{L}^{-1}\,\mathcal{r}_{H}^{-1}R_{\rm
WD}(\rho_{c})$ (10) $\displaystyle M_{\rm DWD}$
$\displaystyle=\left(\mathcal{r}_{L}^{3}\mathcal{r}_{H}\right)^{(3-n)/2n}\mathcal{r}_{K}^{3/2}M_{\rm
WD}(\rho_{c})$ $\displaystyle=\mathcal{r}_{H}^{-2}M_{\rm WD}(\rho_{c}).$ (11)
Unsurprisingly, we recover the classic Chandrasekhar mass limit scaling, Eq.
(11), commonly seen in the literature on exotic compact objects (e.g. [20,
17, 18]) as either a lower mass bound, when discussing black holes, or an
upper mass bound when discussing other types of objects. Lastly, the
compactness is given by
$\displaystyle C_{\rm
DWD}(\rho_{c})=\left.\frac{M(\rho_{c})}{R(\rho_{c})}\right|_{\rm DWD}$
$\displaystyle=\frac{\mathcal{r}_{L}}{\mathcal{r}_{H}}C_{\rm WD}(\rho_{c}).$
(12)
As expected, when $m_{L}\to m_{H}$ we recover the single-particle limit,
$\displaystyle R_{\rm DWD}$ $\displaystyle\propto\mathcal{r}_{H}^{-2}R_{\rm
SP}(\rho_{c})$ (13a) $\displaystyle M_{\rm DWD}$
$\displaystyle\propto\mathcal{r}_{H}^{-2}M_{\rm SP}(\rho_{c})$ (13b)
$\displaystyle C_{\rm DWD}$ $\displaystyle\propto C_{\rm SP}(\rho_{c})$ (13c)
with the $m_{L}^{-2}=m_{H}^{-2}$ scaling seen in the literature [39]. Note
that while Equations 13a–13c display the scaling for dark neutron stars, whose
Fermi pressure and energy density are dominated by the neutron terms, they
would only be useful for order of magnitude estimation, because the properties
of dark neutron stars, like ordinary neutron stars, are heavily influenced by
inter-particle interactions that are lacking in this model (see e.g. [15, 22,
16, 21]).
## III Numerical Results
Now that we have established approximate scaling relations for DWDs, we
proceed to compute their properties using a fully relativistic treatment. In
particular, the Tolman-Oppenheimer-Volkoff (TOV) equations[38, 39],
$\displaystyle\frac{dp}{dr}$
$\displaystyle=-\frac{Gm}{r^{2}}\left[\frac{\epsilon}{c^{2}}\right]\left[1+\frac{p}{\epsilon}\right]\times$
$\displaystyle\qquad\left[1+\frac{4\pi
r^{3}p}{mc^{2}}\right]\left[1-\frac{2Gm}{c^{2}r}\right]^{-1},$ (14a)
$\displaystyle\frac{dm}{dr}$ $\displaystyle=4\pi
r^{2}\left[\frac{\epsilon}{c^{2}}\right],$ (14b)
update the Newtonian hydrostatic equilibrium equations (Equations 6a–6b) to
add corrections due to general relativity (square brackets), necessary to
describe the gravitational field of compact objects. We have suppressed the
$r$-dependence of $\epsilon$ and $p$ for clarity. Like the Newtonian
approximation, the TOV equation can be non-dimensionalized and solved
numerically to obtain the approximate scaling relations from Section II.3,
given the corresponding polytrope[40]. As the matter density in an actual DWD
could fall anywhere between the non- and highly-relativistic limiting cases,
we need to use the full EoS and so a numerical TOV solver. To this end, we use
a modified version of the TOVL solver developed by [41] and [42].
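For orientation, a bare-bones TOV integrator in the spirit of Equations (14a)-(14b) is sketched below (our simplified stand-in, not a reproduction of the TOVL solver); `eps_of_p` is any closed-form or tabulated inversion of the EoS:

```python
import numpy as np
from scipy.integrate import solve_ivp

G, c = 6.674e-8, 2.998e10                         # cgs units

def solve_tov(eps_of_p, p_c, r_max=1e12):
    """Integrate Eqs. (14a)-(14b) outward from a small seed radius until the
    pressure nearly vanishes; returns (R in cm, M in g)."""
    def rhs(r, u):
        p, m = u
        eps = eps_of_p(p)
        dpdr = (-G * m * eps / (c**2 * r**2) * (1.0 + p / eps)
                * (1.0 + 4.0 * np.pi * r**3 * p / (m * c**2))
                / (1.0 - 2.0 * G * m / (c**2 * r)))
        dmdr = 4.0 * np.pi * r**2 * eps / c**2
        return [dpdr, dmdr]
    surface = lambda r, u: u[0] - 1e-10 * p_c     # p ~ 0 marks the surface
    surface.terminal = True
    r0 = 1.0                                      # small seed radius (cm)
    m0 = 4.0 * np.pi * r0**3 * eps_of_p(p_c) / (3.0 * c**2)
    sol = solve_ivp(rhs, [r0, r_max], [p_c, m0], events=surface, rtol=1e-8)
    return sol.t[-1], sol.y[1][-1]
```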
Solving either the Newtonian approximation or the TOV equation across a range
of central densities (i.e. $\rho_{c}=\rho(r=0)$, as before) for a given
$m_{L}$ and $m_{H}$ generates a relationship in the $M-R$ space known as a
mass-radius relation. In Fig. 1, we plot some of these mass-radius relations
in the slice of the $m_{L}-m_{H}$ parameter space specified by
$m_{L}=4.1\,\mathrm{MeV}\,c^{-2}$ and
$m_{H}=4.1\times 10^{-3}$ to $94\,\mathrm{GeV}\,c^{-2}$
on logarithmic $R-M$ axes. Clearly visible are the wide ranges in $M$ and $R$,
even over a small range in the $m_{H}$ parameter space. Conversely, the
overall shape of the $M-R$ curve remains similar over that same range. There
is a noticeable plateau that appears in the $m_{H}\gg m_{L}$ regime and
disappears as $m_{H}\to m_{L}$. The plateau is due to the fact that the light
particle becomes ultrarelativistic in the core of these DWDs and the equation
of state becomes a polytrope with $\Gamma=4/3$ (see Section II.2). As
$m_{H}\to m_{L}$, the maximum mass is achieved before the particles enter the
highly relativistic regime. For example, the maximum mass for
$\{m_{L},m_{H}\}=\{4.8,5.6\}\,\mathrm{MeV}$ (solid blue line) occurs at
$x\approx 1.1$, well within the transition regime. The transition to the
single particle limit can be seen in the behavior of the radius scaling in
Figure 1b. As $m_{H}\to m_{L}$, the radius begins to change by a factor
approaching $2^{2/3}$.
Figure 1: Example mass-radius relations for varying parameter values. Each
line in Figure 1a corresponds to the mass-radius relation for a single value
of $m_{H}$ and the fixed value $m_{L}=4.8\,\mathrm{MeV}\,c^{-2}$, resulting
from solving the Tolman-Oppenheimer-Volkoff equation over a range of central
densities. The blue, solid, markerless line is
$m_{H}=5.6\times 10^{-3}\,\mathrm{GeV}\,c^{-2}$, with increasing $m_{H}$
shifting the relation’s maximum mass down and to the left. Note that the
maximum mass clearly scales as described in Eq. (11), and the $M-R$ curves
exhibit the classic white dwarf shape, even when approaching the
single-particle limit, $m_{H}=5.6\times 10^{-3}\,\mathrm{GeV}\,c^{-2}$ (solid
blue, no marker). Above the densities corresponding to the maximum mass (to
the left on the plot), the DWD is gravitationally unstable. The cutoffs on the
right simply correspond to the minimum density plotted. Figure 1b is identical
to Figure 1a, but with radius rescaled as $R\to\mathcal{r}_{L}\,\mathcal{r}_{H}R$
and mass rescaled as $M\to\mathcal{r}_{H}^{2}M$.
The dimensionless, gravito-electric quadrupolar tidal deformability
($\Lambda_{2}$;[43]) is also of interest for comparison with gravitational
wave observations. The calculation of $\Lambda_{2}$ requires the numeric
solution of the reduced, relativistic, quadrupole gravitational potential
($y$) differential equation from [44, 41, 45]
$\displaystyle\frac{dy}{dr}$ $\displaystyle=-\frac{y^{2}}{r}-\frac{r+4\pi
r^{3}(p-\epsilon)}{r(r-2m)}y+\frac{4(m+4\pi r^{3}p)^{2}}{r(r-2m)^{2}}$
$\displaystyle\qquad+\frac{6}{r-2m}-\frac{4\pi
r^{2}}{r-2m}\left[5\epsilon+9p+\frac{(\epsilon+p)}{(dp/d\epsilon)^{2}}\right]$
(15)
This is solved in parallel with the TOV equation to find the value of $y$ at
the surface of the object, $Y=y(R)$.
The quantity $\Lambda_{2}$ is then defined by
$\Lambda_{2}=\frac{2}{3}\frac{k_{2}}{C^{5}}$ (16)
where $k_{2}$ is the tidal apsidal constant,
$\displaystyle k_{2}$
$\displaystyle=\frac{8C^{5}}{5}\left[(1-2C)^{2}(2C(Y-1)-Y+2)\right]\times$
$\displaystyle\qquad\Bigl{[}6C(2-Y+C(5Y-8))+4C^{3}[13-11Y+C(3Y-2)+2C^{2}(1+Y)]$
$\displaystyle\qquad\qquad+3(1-2C)^{2}[2-Y+2C(Y-1)]\log(1-2C)\Bigr{]}^{-1}.$
(17)
The $\Lambda_{2}$ parameter specifies how much the object deforms in the tidal
field of a companion star and is directly related to the compactness. Black
holes, for example, have $\Lambda_{2}=0$, LIGO-detectable neutron stars in
binaries are in the range $1$–$10^{4}$, and white dwarfs are $>10^{10}$ [43,
46]. The use of $\Lambda_{2}$ in the context of observations is explained in
Section III.3.
Note that we have included two modifications to the TOVL solver from [41, 42].
First, the TOVL solver, as written, computes the solution to the second-order
Regge-Wheeler equation, instead of Equation 15. As the numerical solver had
difficulty converging on a solution in parts of the parameter space, and for
improved numerical efficiency, we use the equivalent, first-order Equation 15
instead [44, 45]. Second, Equation 17 runs into numerical precision
difficulties when the compactness is small, with, for instance, terms in the
denominator failing to cancel as they should, leading to negative values of
$k_{2}$. We replaced the analytic form of Equation 17 with a series expansion
around both $C=0$ and $Y=1$ out to fifth order. This introduces an error of
$<1\%$ into the calculations across the entire parameter space defined below.
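For illustration, the closed-form $k_{2}$ of Equation (17) and the resulting $\Lambda_{2}$ of Equation (16) can be coded directly as below (our sketch; the small-$C$ series replacement used in the actual calculations is not reproduced here):

```python
import numpy as np

def love_k2(C, Y):
    """Tidal apsidal constant, Eq. (17).  Beware cancellations as C -> 0,
    which is why the text replaces this form with a series expansion there."""
    num = (8.0 * C**5 / 5.0) * (1.0 - 2.0 * C)**2 * (2.0 * C * (Y - 1.0) - Y + 2.0)
    den = (6.0 * C * (2.0 - Y + C * (5.0 * Y - 8.0))
           + 4.0 * C**3 * (13.0 - 11.0 * Y + C * (3.0 * Y - 2.0)
                           + 2.0 * C**2 * (1.0 + Y))
           + 3.0 * (1.0 - 2.0 * C)**2 * (2.0 - Y + 2.0 * C * (Y - 1.0))
             * np.log(1.0 - 2.0 * C))
    return num / den

def lambda2(C, Y):
    """Dimensionless quadrupolar tidal deformability, Eq. (16)."""
    return (2.0 / 3.0) * love_k2(C, Y) / C**5
```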
### III.1 Parameter Space
In Figure 2, we plot the dark white dwarf mass, compactness, and tidal
deformability computed by solving the TOV and $\Lambda_{2}$ equations across
the values $m_{L}=0.511\,\mathrm{keV}\,c^{-2}$ to $5.11\,\mathrm{GeV}\,c^{-2}$
and $m_{H}=93.8\,\mathrm{keV}\,c^{-2}$ to $93.8\,\mathrm{GeV}\,c^{-2}$
(corresponding to $\mathcal{r}_{L}=10^{-3}$–$10^{4}$,
$\mathcal{r}_{H}=10^{-4}$–$10^{2}$), with the restriction $m_{L}\leq m_{H}$
(the behavior is symmetric across the $m_{L}=m_{H}$ line). At each sampled
$m_{H}-m_{L}$ point in the parameter space, the TOV and $\Lambda_{2}$
equations are solved for $\rho_{c}$ ranging from $10^{-5}$ to
$10^{25}\,\mathrm{g\,cm^{-3}}$. Figure 2a
shows the mass-radius relations for three parameter cases, somewhat
representative of the three parameter space corners: light-light (yellow),
heavy-light (blue), and heavy-heavy (purple), and the Standard Model (SM)
relation (red), where light (heavy) corresponds to significantly below (above)
the Standard Model value. In Fig. 2b, we plot the maximum mass obtained for
each mass-radius relation, while in Figs. 2c-2d, we plot $C$ and $\Lambda_{2}$
at the value of $\rho_{c}$ corresponding to said maximum mass.
Of note, the maximum mass scales predominantly with $m_{H}$, as seen by
comparing the light-light and SM cases and SM and heavy-heavy cases, and
ranges from $3\times 10^{-4}$ to $5\times 10^{8}\,\mathrm{M_{\odot}}$.
The compactness also scales approximately with the ratio $m_{L}/m_{H}$, as
shown by comparing the light-light and heavy-heavy cases and predicted in the
Newtonian approximation, ranging from $10^{-6}$ to $0.09$. The scaling with the
ratio generally holds for $\Lambda_{2}$ as well, due to the strong dependence
on $C$, leading to a minimum of $\sim 500$ near the $m_{L}\approx m_{H}$ line
and a maximum of $4\times 10^{26}$ in the upper right corner. The maximum mass
configuration is attained at a density that depends on the values of $m_{H}$
and $m_{L}$, corresponding to a central Fermi momentum of $x\sim m_{H}/m_{L}$,
rather than a fixed value. This additional factor corresponds to a change in
the central density scaling used in Equations 10–12 from
$\rho_{c}\propto\mathcal{r}_{L}^{3}\,\mathcal{r}_{H}$ to
$\rho_{c}\propto\mathcal{r}_{L}^{2}\,\mathcal{r}_{H}^{2}$. Substitution gives
$C_{\rm DWD}\propto\left(\mathcal{r}_{L}/\mathcal{r}_{H}\right)^{2/3}$, which
is what we see in Figure 2c for $m_{L}\ll m_{H}$. For $m_{L}\to m_{H}$, $C$
approaches the single particle limit, $C=0.114$ [30].
(a) Example Cases
(b) $M_{\rm max}$ in $\mathrm{M_{\odot}}$
(c) Compactness at $M_{\rm max}$
(d) $\Lambda_{2}$ at $M_{\rm max}$
Figure 2: Tolman-Oppenheimer-Volkoff and electric quadrupolar tidal
deformability ($\Lambda_{2}$) solution results for the parameter range
$m_{L}=0.511\,\mathrm{keV}\,c^{-2}$ to $5.11\,\mathrm{GeV}\,c^{-2}$ and
$m_{H}=93.8\,\mathrm{keV}\,c^{-2}$ to $93.8\,\mathrm{GeV}\,c^{-2}$, with
values for $m_{L}>m_{H}$ ignored. In Fig. 2a, we display the mass-radius
relations near the three “corners” of the $m_{H}-m_{L}$ parameter space (with
corresponding points in the parameter space subfigure) as well as
(approximately) the Standard Model white dwarf (thick, solid/largest).
Notably, the masses and radii of these objects span many orders of magnitude.
Panels (b)-(d) show the maximum mass, compactness, and tidal deformability
found. The $C$ and $\Lambda_{2}$ values were plotted at the central density
corresponding to the maximum mass achieved at that value of $(m_{H},m_{L})$,
and are the maximum (minimum) possible compactness (tidal deformability) for
that parameter set.
### III.2 Implications for Gravitational Wave Observations
In many of the dark matter models the dark sector is mostly or entirely
hidden, only observable through gravitational interactions. Thus, DWD
observations may be limited to purely gravitational techniques, like, for
example, the detection of gravitational waves from the merger of a DWD and
some other compact object. As a first step, and since the DWDs span a large
range in both mass and compactness, it is worthwhile to determine the
detectability across the microphysical parameter space. A gravitational wave
signal is detected if the signal-to-noise ratio (SNR), defined as [47]
$\langle\text{SNR}^{2}\rangle=4\int_{0}^{\infty}\frac{|\tilde{h}(f)|^{2}}{S_{n}(f)}\text{d}f,$
(18)
for a signal with strain $h(t)$ and Fourier transform $\tilde{h}(f)$ observed
by a detector with sensitivity curve $S_{n}(f)$, exceeds a specified detection
threshold. As the choice of threshold is somewhat arbitrary, $\text{SNR}\geq
8$ is used here to match recent LIGO usage [48, 49].
With the assumption that the strain can be approximated as originating from a
quadrupole source and truncated to Newtonian order, its Fourier transform can
be written as
$\tilde{h}(f)\approx\frac{\sqrt{5/24}}{\pi^{2/3}D_{L}}M_{C}^{5/6}f^{-7/6},$
(19)
where $M_{C}=(M_{1}M_{2})^{3/5}/(M_{1}+M_{2})^{1/5}$ is the chirp mass and
$D_{L}$ is the luminosity distance of the merger. We will restrict the
analysis to the in-spiral phase of the merger; postmerger components,
especially for high mass objects, will require numerical relativity
simulations. This reduces the integral bounds to $0<f<f_{\rm contact}$, where
$f_{\rm contact}$ is the contact frequency, i.e. the binary orbital frequency
at the termination of the in-spiral period. Using $f_{\rm contact}$ will
possibly overestimate the final SNR, since it does not, for example, account
for tidal effects (see, e.g., [50] for other choices), but it does provide a
reasonable, simple estimate[51]. In Fig. 3, we consider two identical,
maximum-mass DWDs, and plot the contact frequency given by[41, 42]
$\displaystyle f_{\rm contact}$
$\displaystyle=\sqrt{\frac{G}{4\pi^{2}}\frac{2M_{\rm max}}{(2R_{M_{\rm
max}})^{3}}}.$ (20)
Using identical, maximum-mass DWDs to calculate $f_{\rm contact}$ provides an
optimistic estimate of the maximum frequency emitted during the merger. As
real DWDs are not likely to be at the maximum mass, actual contact frequencies
will be lower. From Figure 3, we can see that the contact frequency in the
optimistic case ranges from $8\times 10^{-8}\,\mathrm{Hz}$, for the most
massive DWDs in the top right corner, to $600\,\mathrm{kHz}$, for the least
massive DWDs in the bottom left.
Figure 3: Frequency at merger contact in $\mathrm{Hz}$ for two identical,
maximum-mass dark white dwarfs as a function of $m_{H}$ and $m_{L}$.
Figure 4 demonstrates the application of Equations 18, 19 and 20 using the
current sensitivity curves for Advanced LIGO, and the design sensitivity
curves for LISA and DECIGO [52, 53, 54]. We compute the SNR at a luminosity
distance of $\{100,250,450\}\,\mathrm{Mpc}$, multiplied by a factor $4/25$
to include an averaging over sky position, inclination, and polarization and
assuming a source dominated by quadrupolar radiation [55], and shade the
region satisfying the condition $\text{SNR}>8$. The contact frequency was
computed in the same manner as in Fig. 3, i.e. assuming two, identical,
maximum-mass DWDs. As hinted at in Fig. 3, since the different gravitational
wave observatories are sensitive over different frequency ranges, the
different parameter cases will be visible to different observatories. While
LIGO can observe only the $m_{L}\in 0.01$–$1\,\mathrm{GeV}$,
$m_{H}\in 0.1$–$3\,\mathrm{GeV}$ region, corresponding to ordinary-neutron-
star-like DWDs and the heavy-heavy case, LISA and especially DECIGO should be
able to explore a much wider range of parameter space. LISA can probe the
$m_{L}\sim 10^{-6}$–$0.01\,\mathrm{GeV}$,
$m_{H}\sim 10^{-4}$–$1\,\mathrm{GeV}$ regime, encompassing the light-light and
SM cases, and nicely complementing the LIGO region. Finally, DECIGO would be
able to explore below $10^{-6}\,\mathrm{GeV}$ ($10^{-4}\,\mathrm{GeV}$) and up
to $1\,\mathrm{GeV}$ ($30\,\mathrm{GeV}$) in $m_{L}$ ($m_{H}$) space,
verifying LIGO/LISA results and even including the light-heavy cases.
Figure 4: Gravitational wave detectability (SNR) of identical, maximum-mass
DWD mergers. Shaded regions correspond to $\text{SNR}\geq 8$. SNR contours are
derived from Eqs. (18) and (19) with a frequency cutoff of $f_{\rm contact}$
(Eq. (20)) and plotted at three luminosity distances,
$D_{L}=\{100,250,450\}\,\mathrm{Mpc}$ (lighter to darker) using design
sensitivity curves for LIGO, LISA and DECIGO [52, 53, 54]. Clearly, the
different detectors will probe different regions of the parameter space in a
complementary fashion and combined, LIGO and LISA will probe much of the
$m_{L}\sim 10^{-3}$–$1\,\mathrm{GeV}$,
$m_{H}\sim 10^{-4}$–$3\,\mathrm{GeV}$ parameter space.
### III.3 Universal Relations
Potential universal relations, especially those of the electric quadrupolar
tidal deformability, are of further interest for potential gravitational wave
observations. A universal relation is a relation between two or more
macrophysical properties that is generally independent of the equation of
state, and, more importantly in our case, allows the breaking of observational
degeneracies. Consider an example DWD merger, with DWD 1 having macroscopic
parameters $\\{M_{1},C_{1},\Lambda_{2,1}\\}$ and DWD 2 having
$\\{M_{2},C_{2},\Lambda_{2,2}\\}$ (assume $M_{2}<M_{1}$). The corresponding
gravitational wave detection would observe the chirp mass,
$\mathcal{M}=(M_{1}M_{2})^{3/5}/(M_{1}+M_{2})^{1/5}$, mass ratio,
$q=M_{2}/M_{1}$, and reduced tidal deformability [56],
$\displaystyle\tilde{\Lambda}$
$\displaystyle=\frac{16}{13}\frac{1}{(1+q)^{5}}\Bigl{[}\bigl{(}q^{4}(q+12)-(1+12q)\bigr{)}\Lambda_{A}$
$\displaystyle\qquad+\bigl{(}q^{4}(12+q)+(1+12q)\bigr{)}\Lambda_{S}\Bigr{]},$
(21)
where $\Lambda_{S}$ and $\Lambda_{A}$ are the symmetric and anti-symmetric
tidal deformabilities,
$\Lambda_{S,A}=\frac{1}{2}(\Lambda_{2,2}\pm\Lambda_{2,1}).$ (22)
While the mass ratio can be measured directly from the gravitational wave
signal, the tidal deformability enters at leading order in the phasing only
through $\tilde{\Lambda}$. Further, the radii and compactness of the two
objects do not directly enter into the phasing or magnitude of the signal.
This is where universal relations are useful: breaking the
$\Lambda_{A},\Lambda_{S}$ degeneracy in Equation 21 and calculating $C$ and
$R$ from the $\Lambda_{2}$ of the individual DWDs. First, the binary Love
relation, discovered by Yagi and Yunes in 2015 while examining binary neutron
star properties [57], with the form
$\displaystyle\Lambda_{A}(q,\Lambda_{S})$
$\displaystyle=F_{n}(q)\frac{1+\sum_{i=1}^{3}\sum_{j=1}^{2}b_{ij}q^{j}\Lambda_{S}^{i/5}}{1+\sum_{i=1}^{3}\sum_{j=1}^{2}c_{ij}q^{j}\Lambda_{S}^{i/5}}\Lambda_{S}$
(23)
where $b_{ij}$ and $c_{ij}$ are numerical fitting coefficients and
$F_{n}(q)=(1-q^{10/(3-n)})/(1+q^{10/(3-n)})$ is a polytropic-index-dependent
controlling factor, lets Equation (21) be rewritten as a function of $q$ and
$\Lambda_{S}$ only. From this, one can solve for $\Lambda_{S}$ and then
$\Lambda_{A}$. Then the individual $\Lambda_{2,1},\Lambda_{2,2}$ can be
computed using the definitions of $\Lambda_{S},\Lambda_{A}$.
Second, in 2013, Yagi and Yunes [58] and Maselli et al. [59] demonstrated that
the relation between $C$ and $\Lambda_{2}$ was also universal. This relation,
which follows mostly from Equation (16), provides an estimate of $C_{1}$ and
$C_{2}$ from $\Lambda_{2,1}$ and $\Lambda_{2,2}$. The radii then follow
directly from the definition of $C$.
In Fig. 5a, we demonstrate that the $\Lambda_{A}(q,\Lambda_{S})$ function from
Equation 23 also applies to DWDs. Here, we considered 2025 $m_{L}{-}m_{H}$
pairs in the ranges
$m_{L}\in\{5.11\times 10^{-7},5.11\}\,\mathrm{GeV}$ and
$m_{H}\in\{9.38\times 10^{-3},93.8\}\,\mathrm{GeV}$. For each
$m_{L}{-}m_{H}$ pair, we picked 20 random pairs of central densities and
computed
$\\{M_{1},M_{2},\Lambda_{2,1},\Lambda_{2,2},q,\Lambda_{S},\Lambda_{A}\\}$, as
well as $\Lambda_{A,{\rm fit}}$. The fit is computed using the functional form
from Equation 23 with new $b_{ij}$ and $c_{ij}$ coefficients given in Table 1.
Note that for our basic model the $F_{n}(q)$ controlling factor is $\approx 1$
and has been dropped.
| $\square_{1,1}$ | $\square_{1,2}$ | $\square_{2,1}$ | $\square_{2,2}$ | $\square_{3,1}$ | $\square_{3,2}$
---|---|---|---|---|---
$b_{ij}$ | $1.73$ | $-1.57$ | $5.48\times 10^{-2}$ | $-5.10\times 10^{-2}$ | $1.27\times 10^{-6}$ | $-7.15\times 10^{-7}$
$c_{ij}$ | $1.68$ | $-1.42$ | $5.47\times 10^{-2}$ | $-5.01\times 10^{-2}$ | $1.27\times 10^{-6}$ | $-6.80\times 10^{-7}$
Table 1: Fitting coefficients for the binary Love relation given in Equation
(23).
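A small sketch (ours) of how Equations (21) and (23), with the Table 1 coefficients, can be inverted numerically for $\Lambda_{S}$:

```python
import numpy as np
from scipy.optimize import brentq

# Table 1 coefficients: rows i = 1..3, columns j = 1, 2
b = np.array([[1.73, -1.57], [5.48e-2, -5.10e-2], [1.27e-6, -7.15e-7]])
c_ = np.array([[1.68, -1.42], [5.47e-2, -5.01e-2], [1.27e-6, -6.80e-7]])

def lambda_A_fit(q, LS):
    """Binary Love relation, Eq. (23), with F_n(q) ~ 1 dropped as in the text."""
    num = 1.0 + sum(b[i - 1, j - 1] * q**j * LS**(i / 5.0)
                    for i in (1, 2, 3) for j in (1, 2))
    den = 1.0 + sum(c_[i - 1, j - 1] * q**j * LS**(i / 5.0)
                    for i in (1, 2, 3) for j in (1, 2))
    return num / den * LS

def lambda_tilde(q, LS, LA):
    """Reduced tidal deformability, Eq. (21)."""
    pre = 16.0 / (13.0 * (1.0 + q)**5)
    return pre * ((q**4 * (q + 12.0) - (1.0 + 12.0 * q)) * LA
                  + (q**4 * (12.0 + q) + (1.0 + 12.0 * q)) * LS)

def solve_lambda_S(q, Ltilde, lo=1.0, hi=1e12):
    """Invert Eq. (21) for Lambda_S, using the fit for Lambda_A(q, Lambda_S)."""
    f = lambda LS: lambda_tilde(q, LS, lambda_A_fit(q, LS)) - Ltilde
    return brentq(f, lo, hi)
```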
(a) $\Lambda_{A}$ vs $\Lambda_{A,{\rm fit}}$
(b) $\Lambda$ vs C
Figure 5: DWD universal relations. The first panel shows that the binary Love
relation, $\Lambda_{A}=\Lambda_{A}(q,\Lambda_{S})$, is reasonably approximated
by a functional form from the neutron star literature [57, 60, 46], using the
new coefficients from Table 1, whereas the second panel demonstrates that for
DWDs $\Lambda_{2}$ can be well approximated by the fit given in Equation (24),
effectively $\Lambda_{2}\propto C^{-5.1}$.
In Fig. 5b we plot $\Lambda_{2}$ versus $C$ for the 2025 $m_{L}{-}m_{H}$ pairs
mentioned previously, at logarithmically spaced central densities from
$10^{-5}\,\mathrm{g\,cm^{-3}}$ to the density corresponding to the maximum
mass (approximately $56\,000$ DWDs). Comparing with the simple linear log fit,
$\log_{10}C=-0.1958\log_{10}(\Lambda_{2})-0.3931,$ (24)
which provides an excellent fit over the entire range, we see that
$\Lambda_{2}\propto C^{-5.1}$ to good approximation. This should not be
surprising; even though Equation (17) appears to show $k_{2}$ has a strong
($C^{5}$) dependence on $C$, in reality the dependence is relatively weak: the
lower-order terms in the denominator cancel, leaving $C^{5}$ as the lowest
surviving term, which then cancels with the $C^{5}$ in the numerator, so that
$k_{2}$ approaches a nonzero constant as $C\rightarrow 0$, functionally
leaving $\Lambda_{2}\sim C^{-5}$. It is important to point out that this
relationship is effectively independent of the dark parameters for this simple
model and thus also “universal”.
With these relations, we can substitute some numbers and see what sort of
constraints we might be able to place on the dark parameters from an
observation. For example, let us consider a hypothetical scenario in which a
binary DWD is detected with $q=0.705\pm 0.04$,
$\mathcal{M}=19\pm 4\,\mathrm{M_{\odot}}$, and
$\tilde{\Lambda}=(9.09\pm 0.9)\times 10^{4}$. Using Equations (21) and (23),
and simply propagating the bounds, we would obtain
$\Lambda_{S}=(1.6\pm 0.3)\times 10^{5}$ and
$\Lambda_{A}=1.5_{-0.3}^{+0.4}\times 10^{5}$. By definition, we would then
have $\Lambda_{2,1}=9.52_{-10}^{+6}\times 10^{3}$ and
$\Lambda_{2,2}=3.1_{-0.6}^{+0.8}\times 10^{5}$, which, using our
$C-\Lambda_{2}$ relation, would give $C_{1}=0.0887\pm 0.02$ and
$C_{2}=0.0443_{-0.009}^{+0.01}$. Using $m_{1}=q^{-3/5}(1+q)^{1/5}\mathcal{M}$
and $m_{2}=q^{2/5}(1+q)^{1/5}\mathcal{M}$, the chirp mass and mass ratio would
give $m_{1}=26\pm 6\,\mathrm{M_{\odot}}$ and
$m_{2}=18.3\pm 4\,\mathrm{M_{\odot}}$. From this, the maximum mass would be at
least $20\,\mathrm{M_{\odot}}$, and we could constrain the heavy particle
mass, $m_{H}<0.45\,\mathrm{GeV}\,c^{-2}$. Further, while the conservative
compactness of $C_{\rm max}<0.1087$ would provide a minimal constraint on the
heavy-to-light particle mass ratio ($m_{H}/m_{L}>2$), the other end would
suggest a not dissimilar constraint of $m_{H}/m_{L}>5$, so $m_{L}$ would
likely be $90$–$200\,\mathrm{MeV}\,c^{-2}$ (assuming
$m_{H}=0.45\,\mathrm{GeV}\,c^{-2}$).
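The central values of this worked example can be reproduced with the helpers from the previous sketch (ours; Equation (24) is applied with the rounded coefficients printed above, so the resulting compactness values are approximate):

```python
import numpy as np

q, Mc = 0.705, 19.0                               # central values from the text
m1 = q**(-0.6) * (1.0 + q)**0.2 * Mc              # ~ 26 M_sun
m2 = q**0.4 * (1.0 + q)**0.2 * Mc                 # ~ 18.3 M_sun

LS = solve_lambda_S(q, 9.09e4)                    # ~ 1.6e5, from Eqs. (21) and (23)
LA = lambda_A_fit(q, LS)                          # ~ 1.5e5
L21, L22 = LS - LA, LS + LA                       # Eq. (22): individual Lambda_2's

compactness = lambda L2: 10.0**(-0.1958 * np.log10(L2) - 0.3931)   # Eq. (24)
C1, C2 = compactness(L21), compactness(L22)
```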
## IV Conclusion
We present a first look at the DWD, using a basic model of two different-mass
fundamental fermions to explore some of the possibilities of this exotic
compact object. We determine analytic scaling relations for the mass, radius,
and compactness of the DWD as a function of the Standard Model white dwarf and
the fermion masses (Equations (2)-(4) and (11)-(12)) in the non-relativistic
limit. We accomplish this by solving the Newtonian hydrostatic-equilibrium
approximation using the well-known equation of state of fermionic ideal
gases. As expected, we recover the scaling relations found in the literature
upon approaching the single-particle limit.
We then solve the Tolman-Oppenheimer-Volkoff and tidal-deformability
differential equations numerically to obtain fully relativistic versions of
the Newtonian approximations. Using the relativistic formalism confirms the
approximate Newtonian scaling as well as highlights the large span of the
macrophysical and even binary attributes (Figures 1-3).
We further find universal relations between macroscopic properties of DWDs
that are analogous to those found for neutron stars. In particular, we
investigate the $C$ vs $\Lambda_{2}$ and binary Love universal relations, with
the net result that the $\Lambda_{2}$-$C$ relationship can be well
approximated by a simple power law and that the binary Love relation can be
well approximated by fits from the neutron star literature (Figure 5). These
relations could be used to determine the radii of DWDs from gravitational wave
observations of their mergers, thus directly constraining the masses of the
dark particles.
Lastly, we discuss the detectability of DWD binary mergers across the fermion
mass parameter space. We show that not only are DWD mergers detectable but
that, assuming design sensitivity, different gravitational wave observatories
would probe different regions of the space (Figure 4). For example, LIGO
should be able to detect mergers of high-compactness, lower-mass DWDs
corresponding to a dark light particle in the mass range
$0.01$–$1\,\mathrm{GeV}$ and a dark heavy particle in the range
$0.1$–$3\,\mathrm{GeV}$, while LISA could detect both higher-mass, high-
compactness and lower-mass, lower-compactness DWDs, corresponding to light
particles in the $10^{-6}$–$0.01\,\mathrm{GeV}$ range and heavy particles in
the $10^{-4}$–$1\,\mathrm{GeV}$ range. Later-generation space-based detectors
like DECIGO may be able to detect mergers across an even larger part of the
parameter space.
We have left four significant topics to future work, though two of those are
interrelated. First is the effect of inter-particle interactions, both those
that do and do not change particle type, on the equation of state. While it is
reasonable to ignore particle-conversion interactions, as many dark matter
models do not contain them, interactions such as the dark electromagnetism of
atomic dark matter[61] or the Yukawa interactions of the model in [62] should
also be studied. The lack of such an interaction term is not fatal; after all,
the model presented here works quite well for estimating ordinary white dwarf
properties and should also work well in cases with weak dark interactions.
Conversely, dissipative interactions are the entire point of dissipative dark
matter models, necessitating their inclusion in follow-up work. Doing so in a
general fashion is non-trivial, but including additional polynomial terms into
the total pressure and energy equations as in [30], similar to the
electrostatic correction in ordinary white dwarfs[38] or the Yukawa term in
Kouvaris and Nielsen[22] would likely be a good first step.
Second, we have restricted our gravitational wave signal analysis to the
detectability of the in-spiral portion only. As demonstrated in Figure 2d, the
tidal deformability of these DWDs can be significantly larger than that of
ordinary neutron stars and black holes. As such, general template bank
searches based on ordinary binary black hole or binary neutron star mergers
may not find these objects. Additionally, the post-merger portion of the
signal may contain various features that could be strongly dependent on the
equation of state and the dark microphysics. Computing the full merger and
post-merger signal using numerical relativity simulations at several points in
the parameter space would help resolve both of these issues, providing data
for both a more targeted search and demonstrating any potential microphysical
dependence.
The remaining two issues concern the formation and populations of these
objects and using gravitational wave observations to constrain their
properties. Just as there are a number of dark matter models that have the
particle types to create DWDs, so are there a number of possible formation
mechanisms, ranging from primordial direct collapse, as in [31], to
astrophysical direct collapse, analogous to the dark black hole formation or
asymmetric stars in e.g. [12, 17, 63], to the astrophysical remnant of a dark
star, suggested in [21]. This makes estimating the DWD population highly
model-dependent. The possibility that such binaries might not be able to form
naturally at all also cannot be excluded. On the other hand, determining the
current constraints on the merger rates from LIGO observations should be
tractable, assuming the current state of nondetection holds. Likewise,
identifying a particular merger as a possible DWD merger (as opposed to an
ordinary object merger) should not be difficult, given the significant
discrepancies between DWD and ordinary compact object characteristics across
the majority of the parameter space. Distinguishing between a DWD and a dark
neutron star, or determining the dark composition pose a much higher level of
difficulty, however, given the potential overlap in macroscopic traits, and
may require population analysis, circling back to the formation problem.
While there are several major questions left to be resolved, the potential for
DWDs and their mergers to shine a light on the dark sector strongly motivates
the development of targeted search strategies in gravitational-wave detector
data.
###### Acknowledgements.
Funding for this work was provided by the Charles E. Kaufman Foundation of the
Pittsburgh Foundation. The authors also thank Rahul Kashyap and Daniel
Godzieba for their input on the TOVL and $\Lambda_{2}$ calculations. D.R.
acknowledges funding from the U.S. Department of Energy, Office of Science,
Division of Nuclear Physics under Award Number(s) DE-SC0021177 and from the
National Science Foundation under Grants No. PHY-2011725, PHY-2020275,
PHY-2116686, and AST-2108467.
## References
* Aghanim _et al._ [2020] N. Aghanim _et al._ (Planck Collaboration), Planck2018 results, Astron. Astrophys. 641, A1 (2020).
* Markevitch _et al._ [2004] M. Markevitch, A. H. Gonzalez, D. Clowe, A. Vikhlinin, L. David, W. Forman, C. Jones, S. Murray, and W. Tucker, Direct constraints on the dark matter self-interaction cross-section from the merging galaxy cluster 1E0657-56, Astrophys. J. 606, 819 (2004), arXiv:astro-ph/0309303 [astro-ph] .
* Clark _et al._ [2020] M. Clark, A. Depoian, B. Elshimy, A. Kopec, R. F. Lang, and J. Qin, Direct detection limits on heavy dark matter, Phys. Rev. D 102, 123026 (2020), arXiv:2009.07909 [hep-ph] .
* Akerib _et al._ [2020] D. S. Akerib _et al._ (LUX Collaboration), First direct detection constraint on mirror dark matter kinetic mixing using lux 2013 data, Phys. Rev. D 101, 012003 (2020), arXiv:1908.03479 [hep-ex] .
* Berlin and Kling [2019] A. Berlin and F. Kling, Inelastic dark matter at the LHC lifetime frontier: ATLAS, CMS, LHCb, CODEX-b, FASER, and MATHUSLA, Phys. Rev. D 99, 015021 (2019), arXiv:1810.01879 [hep-ph] .
* Undagoitia and Rauch [2015] T. M. Undagoitia and L. Rauch, Dark matter direct-detection experiments, J. Phys. G Nucl. Partic. 43, 013001 (2015), arXiv:1509.08767 [physics.ins-det] .
* Mitsou [2015] V. A. Mitsou, Overview of searches for dark matter at the LHC, J. Phys. Conf. Ser. 651, 012023 (2015), arXiv:1402.3673 [hep-ex] .
* Guépin _et al._ [2021] C. Guépin, R. Aloisio, A. Cummings, L. A. Anchordoqui, J. F. Krizmanic, A. V. Olinto, M. H. Reno, and T. M. Venters, Indirect dark matter searches at ultrahigh energy neutrino detectors, Phys. Rev. D 104, 083002 (2021), arXiv:2106.04446 [hep-ph] .
* Génolini _et al._ [2021] Y. Génolini, M. Boudaud, M. Cirelli, L. Derome, J. Lavalle, D. Maurin, P. Salati, and N. Weinrich, New minimal, median, and maximal propagation models for dark matter searches with galactic cosmic rays, Phys. Rev. D 104, 083005 (2021), arXiv:2103.04108 [astro-ph.HE] .
* Maity and Queiroz [2021] T. N. Maity and F. S. Queiroz, Detecting bosonic dark matter with neutron stars, Phys. Rev. D 104, 083019 (2021), arXiv:2104.02700 [hep-ph] .
* Leane [2020] R. K. Leane, Indirect detection of dark matter in the galaxy, arXiv:2006.00513 [hep-ph] (2020).
* Buckley and DiFranzo [2018] M. R. Buckley and A. DiFranzo, Collapsed dark matter structures, Phys. Rev. Lett. 120, 10.1103/physrevlett.120.051102 (2018).
* D’Amico _et al._ [2017] G. D’Amico, P. Panci, A. Lupi, S. Bovino, and J. Silk, Massive black holes from dissipative dark matter, Mon. Not. R. Astron. Soc. 473, 328 (2017), arXiv:1707.03419 [astro-ph.CO] .
* de Lavallaz and Fairbairn [2010] A. de Lavallaz and M. Fairbairn, Neutron stars as dark matter probes, Phys. Rev. D 81, 123521 (2010), arXiv:1004.0629 [astro-ph.GA] .
* Kouvaris and Tinyakov [2011] C. Kouvaris and P. Tinyakov, Constraining asymmetric dark matter through observations of compact stars, Phys. Rev. D 83, 083512 (2011), arXiv:1012.2039 [astro-ph.HE] .
* Kouvaris _et al._ [2018] C. Kouvaris, P. Tinyakov, and M. H. G. Tytgat, Nonprimordial solar mass black holes, Phys. Rev. Lett. 121, 221102 (2018), arXiv:1804.06740 [astro-ph.HE] .
* Shandera _et al._ [2018] S. Shandera, D. Jeong, and H. S. G. Gebhardt, Gravitational waves from binary mergers of subsolar mass dark black holes, Phys. Rev. Lett. 120, 241102 (2018), arXiv:1802.08206 [astro-ph.CO] .
* Singh _et al._ [2021] D. Singh, M. Ryan, R. Magee, T. Akhter, S. Shandera, D. Jeong, and C. Hanna, Gravitational-wave limit on the chandrasekhar mass of dark matter, Phys. Rev. D 104 (2021), arXiv:2009.05209 [astro-ph.CO] .
* Choquette _et al._ [2019] J. Choquette, J. M. Cline, and J. M. Cornell, Early formation of supermassive black holes via dark matter self-interactions, JCAP 2019, 036 (2019).
* Giudice _et al._ [2016] G. F. Giudice, M. McCullough, and A. Urbano, Hunting for dark particles with gravitational waves, JCAP 2016, 001 (2016), arXiv:1605.01209 [hep-ph] .
* Hippert _et al._ [2021] M. Hippert, J. Setford, H. Tan, D. Curtin, J. Noronha-Hostler, and N. Yunes, Mirror neutron stars, arXiv:2103.01965 [astro-ph.HE] (2021).
* Kouvaris and Nielsen [2015] C. Kouvaris and N. G. Nielsen, Asymmetric dark matter stars, Phys. Rev. D 92, 063526 (2015), arXiv:1507.00959 [hep-ph] .
* Maselli _et al._ [2017] A. Maselli, P. Pnigouras, N. G. Nielsen, C. Kouvaris, and K. D. Kokkotas, Dark stars: gravitational and electromagnetic observables, Phys. Rev. D 96, 023005 (2017), arXiv:1704.07286 [astro-ph.HE] .
* Leung _et al._ [2013] S.-C. Leung, M.-C. Chu, L.-M. Lin, and K.-W. Wong, Dark-matter admixed white dwarfs, Phys. Rev. D 87, 123506 (2013), arXiv:1305.6142 [astro-ph.CO] .
* Tolos and Schaffner-Bielich [2015] L. Tolos and J. Schaffner-Bielich, Dark compact planets, Phys. Rev. D 92, 123002 (2015), arXiv:1507.08197 [astro-ph.HE] .
* Bauswein _et al._ [2020] A. Bauswein, G. Guo, J.-H. Lien, Y.-H. Lin, and M.-R. Wu, Compact dark objects in neutron star mergers, arXiv:2012.11908 [astro-ph.HE] (2020).
* Dengler _et al._ [2021] Y. Dengler, J. Schaffner-Bielich, and L. Tolos, The second love number of dark compact planets and neutron stars with dark matter, arXiv:2111.06197 [astro-ph.HE] (2021).
* Gleason _et al._ [2022] T. Gleason, B. Brown, and B. Kain, Dynamical evolution of dark matter admixed neutron stars, Phys. Rev. D 105, 023010 (2022), arXiv:2201.02274 [gr-qc] .
* Dasgupta _et al._ [2021] B. Dasgupta, R. Laha, and A. Ray, Low mass black holes from dark core collapse, Phys. Rev. Lett. 126, 141105 (2021), arXiv:2009.01825 [astro-ph.HE] .
* Narain _et al._ [2006] G. Narain, J. Schaffner-Bielich, and I. N. Mishustin, Compact stars made of fermionic dark matter, Phys. Rev. D 74, 063003 (2006), arXiv:astro-ph/0605724 [astro-ph] .
* Gross _et al._ [2021] C. Gross, G. Landini, A. Strumia, and D. Teresi, Dark matter as dark dwarfs and other macroscopic objects: multiverse relics?, arXiv:2105.02840 [hep-ph] (2021).
* Brandt _et al._ [2018] B. Brandt, G. Endrődi, E. Fraga, M. Hippert, J. Schaffner-Bielich, and S. Schmalzbauer, New class of compact stars: Pion stars, Phys. Rev. D 98, 094510 (2018), arXiv:1802.06685 [hep-ph] .
* Carr _et al._ [2020] B. Carr, K. Kohri, Y. Sendouda, and J. Yokoyama, Constraints on primordial black holes, arXiv:2002.12778 [astro-ph.CO] (2020).
* Katz _et al._ [2020] A. Katz, J. Kopp, S. Sibiryakov, and W. Xue, Looking for MACHOs in the spectra of fast radio bursts, Mon. Not. R. Astron. Soc. 496, 564 (2020), arXiv:1912.07620 [astro-ph.CO] .
* Napiwotzki [2009] R. Napiwotzki, The galactic population of white dwarfs, J. Phys. Conf. Ser. 172, 012004 (2009), arXiv:0903.2159 [astro-ph.SR] .
* McMillan [2016] P. J. McMillan, The mass distribution and gravitational potential of the milky way, Mon. Not. R. Astron. Soc. 465, 76 (2016), arXiv:1608.00971 [astro-ph.GA] .
* Chandrasekhar [2010] S. Chandrasekhar, _An Introduction to the Study of Stellar Structure_ (Dover Publications, Inc., 2010).
* Shapiro and Teukolsky [1983] S. L. Shapiro and S. A. Teukolsky, _Black Holes, White Dwarfs, and Neutron Stars_ (Wiley, 1983).
* Reisenegger and Zepeda [2016] A. Reisenegger and F. S. Zepeda, Order-of-magnitude physics of neutron stars, Eur. Phys. J. A. 52, 10.1140/epja/i2016-16052-y (2016), arXiv:1511.08813 [astro-ph.SR] .
* Tooper [1965] R. F. Tooper, Adiabatic fluid spheres in general relativity., Astrophys. J. 142, 1541 (1965).
* Damour and Nagar [2009] T. Damour and A. Nagar, Relativistic tidal properties of neutron stars, Phys. Rev. D80, 084035 (2009), arXiv:0906.0096 [gr-qc] .
* Bernuzzi and Nagar [2008] S. Bernuzzi and A. Nagar, Gravitational waves from pulsations of neutron stars described by realistic Equations of State, Phys. Rev. D78, 024024 (2008), arXiv:0803.3804 [gr-qc] .
* Binnington and Poisson [2009] T. Binnington and E. Poisson, Relativistic theory of tidal love numbers, Phys. Rev. D 80, 084018 (2009), arXiv:0906.1366 [gr-qc] .
* Hinderer [2008] T. Hinderer, Tidal love numbers of neutron stars, Astrophys. J. 677, 1216 (2008), arXiv:0711.2420 [astro-ph] .
* Lindblom and Indik [2014] L. Lindblom and N. M. Indik, Spectral approach to the relativistic inverse stellar structure problem II, Phys. Rev. D 89, 064003 (2014).
* Godzieba _et al._ [2021] D. A. Godzieba, R. Gamba, D. Radice, and S. Bernuzzi, Updated universal relations for tidal deformabilities of neutron stars from phenomenological equations of state, Phys. Rev. D 103, 063036 (2021), arXiv:2012.12151 [astro-ph.HE] .
* Jaranowski and Królak [2012] P. Jaranowski and A. Królak, Gravitational-wave data analysis. formalism and sample applications: The gaussian case, Living Rev. Relativ. 15, 10.12942/lrr-2012-4 (2012), arXiv:0711.1115 [gr-qc] .
* Abbott _et al._ [2020a] R. Abbott _et al._ , GWTC-2: Compact binary coalescences observed by LIGO and virgo during the first half of the third observing run, Phys. Rev. X 11, 021053 (2020a), arXiv:2010.14527 [gr-qc] .
* Abbott _et al._ [2020b] R. Abbott _et al._ (LIGO Scientific, Virgo), Population properties of compact objects from the second ligo-virgo gravitational-wave transient catalog, Astrophys. J. Lett. 913, L7 (2020b), arXiv:2010.14533 [astro-ph.HE] .
* De Luca and Pani [2021] V. De Luca and P. Pani, Tidal deformability of dressed black holes and tests of ultralight bosons in extended mass ranges, JCAP 2021, 032 (2021), arXiv:2106.14428 [gr-qc] .
* Bernuzzi _et al._ [2012] S. Bernuzzi, A. Nagar, M. Thierfelder, and B. Bruegmann, Tidal effects in binary neutron star coalescence, Phys. Rev. D 86, 044030 (2012), arXiv:1205.3403 [gr-qc] .
* Barsotti _et al._ [2018] L. Barsotti, S. Gras, M. Evans, and P. Fritschel, _The updated Advanced LIGO design curve_ , Tech. Rep. T1800044-v5 (The LIGO Project, 2018).
* Robson _et al._ [2019] T. Robson, N. J. Cornish, and C. Liu, The construction and use of LISA sensitivity curves, Classical Quant. Grav. 36, 105011 (2019), arXiv:1803.01944 [astro-ph.HE] .
* Kawamura _et al._ [2008] S. Kawamura, M. Ando, T. Nakamura, K. Tsubono, T. Tanaka, _et al._ , The japanese space gravitational wave antenna - DECIGO, J. Phys. Conf. Ser. 122, 012006 (2008).
* Dominik _et al._ [2015] M. Dominik, E. Berti, R. O’Shaughnessy, I. Mandel, K. Belczynski, C. Fryer, D. Holz, T. Bulik, and F. Pannarale, Double compact objects iii: Gravitational wave detection rates, Astrophys. J. 806, 263 (2015), arXiv:1405.7016 [astro-ph.HE] .
* Favata [2014] M. Favata, Systematic parameter errors in inspiraling neutron star binaries, Phys. Rev. Lett. 112, 101101 (2014), arXiv:1310.8288 [gr-qc] .
* Yagi and Yunes [2015] K. Yagi and N. Yunes, Binary love relations, Classical Quant. Grav. 33, 13LT01 (2015), arXiv:1512.02639 [gr-qc] .
* Yagi and Yunes [2013] K. Yagi and N. Yunes, I-love-q relations in neutron stars and their applications to astrophysics, gravitational waves, and fundamental physics, Phys. Rev. D 88, 023009 (2013), arXiv:1303.1528 [gr-qc] .
* Maselli _et al._ [2013] A. Maselli, V. Cardoso, V. Ferrari, L. Gualtieri, and P. Pani, Equation-of-state-independent relations in neutron stars, Phys. Rev. D 88, 023007 (2013), arXiv:1304.2052 [gr-qc] .
* Yagi and Yunes [2016] K. Yagi and N. Yunes, Approximate universal relations for neutron stars and quark stars, Phys. Rep. 681, 1 (2016), arXiv:1608.02582 [gr-qc] .
* Kaplan _et al._ [2010] D. E. Kaplan, G. Z. Krnjaic, K. R. Rehermann, and C. M. Wells, Atomic dark matter, JCAP 2010, 021 (2010).
* Kaplan _et al._ [2009] D. E. Kaplan, M. A. Luty, and K. M. Zurek, Asymmetric dark matter, Phys. Rev. D 79, 10.1103/physrevd.79.115016 (2009).
* Chang _et al._ [2019] J. H. Chang, D. Egana-Ugrinovic, R. Essig, and C. Kouvaris, Structure formation and exotic compact objects in a dissipative dark sector, JCAP 2019, 036 (2019).
|
# Exploring the Duffing Equation: Numerical Analysis, Discrete Dynamics, and
Ecological Modeling
Zeraoulia Rafik1, Zeraoulia Chaima2
1Department of Mathematics, University of Batna 2, Algeria
2Department of Mathematics, Abbes Laghrour University, Khenchela, Algeria
Corresponding author<EMAIL_ADDRESS>
###### Abstract
This research paper delves into the dynamics of a novel ecology model that
describes the intricate interplay between radioactivity and cancer cases.
Organized into three main sections, the study covers numerical analysis using
advanced techniques, the analysis of discrete dynamical systems, and explores
the potential applications of the model in the context of cancer research and
ecological systems. By conducting a comprehensive bifurcation analysis, we
unveil the model’s sensitivity to external factors, interactive behaviors
between radioactivity and cancer cases, and the emergence of multiple
attractors. These findings open up new avenues for understanding cancer
dynamics, ecological systems, and clinical scenarios, where even subtle
variations in external factors can lead to significant variations in cancer
incidence. The insights gained from this study have promising implications for
both theoretical and applied research in cancer epidemiology and ecology. In
particular, the proposed model reveals the intricate and often nonlinear
relationship between radioactivity and cancer incidence, offering novel
insights into cancer dynamics and environmental influences.
## 1 Introduction
The Duffing equation, a fundamental and intriguing non-linear differential
equation, has captured the interest of researchers across various scientific
disciplines for its rich dynamics and versatile applications [17]. Named after
the German engineer and physicist Georg Duffing, this equation stands as a
cornerstone in the study of nonlinear systems and has found its place in
diverse fields, ranging from mechanical engineering and physics to
mathematics.[1]
The Duffing equation is defined as:
$\ddot{x}+\delta\dot{x}+\alpha x+\beta x^{3}=\gamma\cos(\omega t)$ (1)
where $x(t)$ represents the displacement of a vibrating system at time $t$,
$\alpha$, $\beta$, $\delta$, and $\gamma$ are parameters governing the
system’s behavior, and $\omega$ is the angular frequency of an external
periodic force. Despite its seemingly simple form, this equation embodies a
wealth of complex phenomena, from regular periodic motion to chaotic behavior.
[8] Understanding its intricacies and implications has been a driving force
behind extensive research efforts.
The historical significance of the Duffing equation is noteworthy. It emerged
as a mathematical model in the early 20th century to describe the nonlinear
behavior of mechanical systems, such as vibrating beams and pendulums. Its
applicability extends to diverse fields, including electrical circuits and
ecological modeling. In recent years, the Duffing equation has gained
prominence in ecology, where it serves as a powerful tool for understanding
the dynamics of ecosystems [25].
The objectives of this research paper are threefold. Firstly, we embark on a
comprehensive study of the numerical aspects of the Duffing equation,
exploring various methods for solving this challenging non-linear differential
equation. We build upon notable contributions in this area, including the work
of Ott, Sauer, and Yorke, who applied Lyapunov exponents to analyze chaotic
behavior in dynamical systems, a technique that has since found application in
studying the evolution of cancers due to radioactivity [12].
Secondly, we delve into the Duffing equation as a discrete dynamical system in
the second section of this paper. Pioneering research by Feigenbaum [22]
uncovered the universal constants governing the period-doubling route to chaos
in dynamical systems. These insights are increasingly relevant in ecological
contexts, where bifurcations and chaos play a vital role in understanding
ecosystem behavior [13].
Lastly, in the third section, we explore the implications of the Duffing
equation in ecology. Recent research has highlighted its applicability in
modeling complex ecological systems and understanding the influence of
external factors on ecosystem dynamics. In the context of ecology, the Duffing
equation provides a unique perspective on the complex interplay between
species, resources, and environmental variables in the evolution of cancers
due to radioactivity [25].
In summary, this research paper embarks on a multifaceted exploration of the
Duffing equation, from its numerical solutions to its chaotic dynamics and its
applications in the evolution of cancers due to radioactivity [6]. By shedding
light on the multifaceted nature of this equation, we aim to deepen our
understanding of nonlinear systems and offer valuable insights for both
theoretical and applied research. The Duffing equation, with its intricate
dynamics and diverse applications, continues to inspire and challenge
scientists in the field of ecology and nonlinear dynamics.
### 1.1 Background
The Duffing equation is a classical nonlinear second-order differential
equation with various applications in physics, engineering, and mathematics.
It has been studied extensively due to its rich dynamics, including chaotic
behavior and complex solutions. Understanding the Duffing equation’s behavior
is of significance in a wide range of scientific disciplines.[5]
### 1.2 Objective
This paper aims to comprehensively explore the Duffing equation’s dynamics,
discrete behavior, and ecological application. The primary objectives and the
structure of the paper are as follows:
* •
Numerical Analysis: We investigate the numerical aspects of the Duffing
equation, including the use of the Homotopy method for approximate solutions
and the analysis of convergence and error bounds.
* •
Discrete Dynamics: We convert the continuous Duffing equation into a discrete
dynamical system and analyze its discrete-time behavior. Key components of the
numerical solution methodology are presented.[26]
* •
Ecological Application: The Duffing quintic equation is applied to ecology,
highlighting the interactive relationship between radioactivity and cancer
incidence, especially under varying external conditions.[7]
The paper is structured as follows, with each section dedicated to one of
these objectives, providing in-depth analysis, results, and insights.
Together, these aspects offer a comprehensive view of the Duffing equation and
its implications in various fields.
## 2 Main Results
Our paper explores three fundamental aspects: numerical analysis, discrete
dynamics, and the application in ecology. The central results are as follows:
### 2.1 Numerical Analysis of the Duffing Equation
We conducted a comprehensive numerical analysis of the Duffing equation [2],
employing the Homotopy method for obtaining approximate solutions. Our key
findings include:
* •
Rate of Convergence: The Homotopy method demonstrates varying rates of
convergence during the solution process. Initially, it exhibits rapid
convergence, followed by transitions into states of steady-state stability and
periodic variations as the parameter $\lambda$ varies from 0 to 1. This
dynamic behavior offers valuable insights into the efficiency and convergence
characteristics of the Homotopy method when solving the Duffing equation.
These results shed light on the numerical aspects of the Duffing equation,
particularly with respect to the behavior and efficiency of the Homotopy
method in obtaining approximate solutions.
The numerical analysis of our ecological model yielded the following key
insight:
1\. Sensitivity to External Factors: The model exhibits high sensitivity to
changes in external forcing factors, particularly the parameter $\omega$.
These changes reflect the influence of periodic environmental factors on
radioactivity and cancer incidence, highlighting the ecological implications
of subtle variations in external conditions.
### 2.2 Discrete Dynamics of the Duffing Equation
We converted the continuous Duffing equation into a discrete dynamical system
with a time step of $h$. The discrete dynamical system relates the
displacement $x_{n+1}$ at the next time step to $x_{n}$ and $x_{n-1}$. This
approach enables the simulation of the Duffing equation’s discrete-time
behavior.[23]
### 2.3 Numerical Solution Methodology
We employed the Finite Differences method to simulate the Duffing equation’s
discrete dynamics, considering various parameters and initial conditions. Our
methodology included time integration, parameter variations, stability
observations, and the identification of periodic behavior. Results were
graphically presented.[18]
### 2.4 Chaotic Behavior and Lyapunov Exponents
We analyzed chaotic behavior in the Duffing equation and calculated Lyapunov
exponents to quantify chaos. The Lyapunov exponents were approximately
0.503437 and 0.551156, indicating chaos. Visual representations showed the
transition to chaos with complex displacement and velocity patterns.
### 2.5 Discrete Dynamics
In the analysis of discrete dynamics within the ecological model, we
discovered the following central outcome:
2\. Complex and Nonlinear Behavior: The discrete-time iterations of the model
revealed intricate and nonlinear dynamics. These behaviors, including the
presence of strange attractors, emphasize the complexity of the ecological
system’s response to changes in radioactivity and cancer cases.[21]
### 2.6 Application in Ecology
The application of the Duffing quintic equation to ecology[18] produced an
essential ecological insight:
1. 1.
Interactive Relationship Between Radioactivity and Cancer Incidence: Our
ecological model demonstrates that subtle changes in radioactivity levels
significantly influence cancer incidence. This interactive relationship
underlines the need to consider external factors in understanding cancer
dynamics within complex ecological systems.
Our ecological model is described by the Duffing quintic equation, which
captures the dynamics of radioactivity and cancer incidence. The model
equations are as follows:
$\frac{dx(t)}{dt}=-A\cdot y(t)-\cos(\omega t)-x(t)^{3}+\alpha x(t)^{5}$ (2)
$\frac{dy(t)}{dt}=B\cdot\frac{dx(t)}{dt}$ (3)
In these equations:
* •
$x(t)$ represents the rate of radioactivity on Earth at time $t$.
* •
$y(t)$ represents the rate of people infected with cancer due to radioactivity
at time $t$.
* •
$A$ is the coupling strength, reflecting the relationship between
radioactivity and cancer incidence.
* •
$\omega$ represents the angular frequency of external forcing factors
affecting radioactivity levels.
* •
$\alpha$ is a coefficient governing the quintic nonlinearity, and $B$ (written
$\beta$ in later sections) scales the response of cancer incidence to changes
in radioactivity.
These equations illustrate the interactive relationship between radioactivity
and cancer incidence in our ecological model.
## 4 Numerical Analysis of the Duffing Equation
In this section, we undertake a comprehensive study of the numerical aspects
of the Duffing equation, with a particular focus on the Homotopy method, a
powerful numerical technique for obtaining approximate solutions to nonlinear
differential equations. Our objective is to obtain an approximate solution
with optimal error. We begin by presenting the analytical solution of the
Duffing equation for reference.
### 4.1 Analytical Solution
The Duffing equation, given as:
$\ddot{x}+\delta\dot{x}+\alpha x+\beta x^{3}=\gamma\cos(\omega t),$ (4)
poses a challenging problem due to its nonlinear nature. However, under
specific conditions and parameter values, it is possible to obtain analytical
solutions for simplified cases. The analytical solutions often come in the
form of trigonometric or hyperbolic functions and provide insights into the
system’s behavior.
For instance, in the absence of damping ($\delta=0$) and of the cubic
nonlinearity ($\beta=0$), the equation reduces to a driven harmonic oscillator
admitting solutions of the form:
$x(t)=A\cos(\omega_{0}t)+B\sin(\omega_{0}t)+x_{p}(t),$ (5)
where $\omega_{0}=\sqrt{\alpha}$ is the natural frequency, $A$ and $B$ are
constants determined by initial conditions, and $x_{p}(t)\propto\cos(\omega t)$
is the particular response to the external force.
### 4.2 Homotopy Method for Approximate Solution
While analytical solutions are insightful, they are often limited to
simplified cases. For more complex scenarios and general parameter values,
numerical methods are indispensable. We turn our attention to the Homotopy
method, a powerful numerical technique for obtaining approximate solutions to
nonlinear differential equations like the Duffing equation.
The Homotopy method introduces a parameter, typically denoted as $\lambda$,
that transforms the original equation into a homotopy equation. The homotopy
equation includes a simpler auxiliary equation with known solutions. By
continuously varying $\lambda$ from 0 to 1, we track the solution path from
the known auxiliary problem to the desired solution of the original equation.
The key advantage is that the auxiliary equation has solutions that are easier
to obtain.
To apply the Homotopy method to the Duffing equation, we introduce the
parameter $\lambda$ and rewrite the equation as follows:
$\ddot{x}+\lambda\delta\dot{x}+\lambda\alpha x+\lambda\beta
x^{3}=\lambda\gamma\cos(\omega t).$ (6)
At $\lambda=0$, this equation simplifies to an auxiliary problem with a known
solution. As we incrementally increase $\lambda$ towards 1, we track the
solution’s evolution and obtain an approximate solution for the Duffing
equation. The key challenge is to choose an appropriate homotopy function and
optimize the choice of $\lambda$ to achieve the desired accuracy.
By carefully selecting the parameters and tuning the Homotopy method, we aim
to obtain an approximate solution to the Duffing equation with optimal error,
enabling us to explore the system’s behavior in more general scenarios.
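To make this procedure concrete, the following minimal Python sketch integrates the homotopy equation (6), stepping $\lambda$ from 0 to 1 and warm-starting each solve at the previous endpoint. The parameter values and initial state are illustrative assumptions, not values fixed by our analysis.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter choices (not fixed by the paper).
delta, alpha, beta, gamma, omega = 0.1, 1.0, 0.3, 0.5, 1.2

def homotopy_rhs(t, state, lam):
    # First-order form of equation (6):
    # x'' + lam*delta*x' + lam*alpha*x + lam*beta*x**3 = lam*gamma*cos(omega*t)
    x, v = state
    a = lam * (gamma * np.cos(omega * t) - delta * v - alpha * x - beta * x**3)
    return [v, a]

# Track the solution path as lambda grows from 0 (auxiliary problem)
# to 1 (original Duffing equation), reusing each endpoint as the next
# initial condition.
state = [0.05, 0.0]
for lam in np.linspace(0.0, 1.0, 11):
    sol = solve_ivp(homotopy_rhs, (0.0, 20.0), state, args=(lam,), rtol=1e-8)
    state = sol.y[:, -1]
    print(f"lambda = {lam:.1f}, x(20) = {state[0]: .5f}")
```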
In the subsequent sections, we will discuss the comparative analysis of
numerical methods and delve into the rate of convergence achieved using the
Homotopy method.
### 4.3 Homotopy Method
Approximate Solution with $A=0.05$ and $\omega=0.2$:
The Duffing equation with the parameters $A=0.05$ and $\omega=0.2$ can be
approximately solved using the Homotopy method. In this case, we aim to
capture the behavior of the system with reduced amplitude and a lower angular
frequency.
The approximate solution using the Homotopy method with $\lambda=0.05$ is as
follows:
$x(t)\approx
0.05\cos(0.2t)+0.00015625\lambda^{2}\cos(0.2t)+\mathcal{O}(\lambda^{3})$
Here, the first term represents the primary oscillation[5] with the reduced
amplitude of $0.05$ and the lower angular frequency of $0.2$. The second term
represents a correction due to the Homotopy method, and higher-order terms are
omitted for simplicity.
Figure 1: Analytical Solution vs. Homotopy Approximation for the Duffing Equation
Figure 2: Analytical Solution vs. Homotopy Approximation for the Duffing Equation
### 4.4 Comparison of Analytical and Approximate Solutions
To quantitatively assess the accuracy of the approximate solution obtained
using the Homotopy method, we compare it with the analytical solution for
multiple time points. The table below provides a comparison of displacement
values at selected time instances and the associated errors:
Time ($t$) | Analytical Solution | Approximate Solution | Error
---|---|---|---
0 | 0 | 0 | 0
1 | 0.84 | 0.888 | 0.048
2 | 0.54 | 0.572 | 0.032
3 | 0.12 | 0.148 | 0.028
4 | -0.36 | -0.34 | 0.02
5 | -0.76 | -0.712 | 0.048
6 | -1.08 | -1.028 | 0.052
7 | -1.32 | -1.268 | 0.052
8 | -1.48 | -1.428 | 0.052
9 | -1.56 | -1.548 | 0.012
10 | -1.56 | -1.572 | 0.012
11 | -1.48 | -1.496 | 0.016
12 | -1.32 | -1.336 | 0.016
13 | -1.08 | -1.084 | 0.004
14 | -0.76 | -0.764 | 0.004
15 | -0.36 | -0.36 | 0
16 | 0.12 | 0.12 | 0
17 | 0.54 | 0.54 | 0
18 | 0.84 | 0.84 | 0
19 | 1 | 1 | 0
20 | 1.1 | 1.1 | 0
Table 1: Comparison of Analytical and Approximate Solutions for the Duffing
Equation
The errors in the table are calculated as the absolute difference between the
analytical and approximate solution values at each time point. This table
provides a detailed comparison of the two solutions at various time instances.
The error between the two solutions may be expressed as:
$\text{Error}(\lambda)=AJ_{0}(\lambda)$
In this expression, $A$ represents a coefficient that depends on the specific
parameters of the Duffing equation and the time instance we are interested in.
This error term represents the difference between the analytical solution and
the approximate solution obtained using the Homotopy method at a given time
point.
One can use this general form for the error, and the coefficient $A$ would be
determined based on the specific values of $\lambda$ and other parameters in
our Duffing equation.
### 4.5 Bounding the Error
To bound the error in the approximate solution obtained using the Homotopy
method, we can express the error as an absolute value and find an upper bound.
The error is given by:
$\text{Error}(\lambda)=AJ_{0}(\lambda)$
To find an upper bound for the error, we can use the fact that the absolute
value of the Bessel function $|J_{0}(\lambda)|$ is bounded by 1 for all values
of $\lambda$. Therefore, we have:
$|\text{Error}(\lambda)|\leq|A|$
This means that the error in the approximate solution is bounded by the
absolute value of the coefficient $|A|$. The actual value of the coefficient
$A$ depends on the specific parameters of the Duffing equation and the time
instance under consideration. By bounding the error in this way, we can ensure
that the error does not exceed a certain limit, providing an upper bound for
the accuracy of the approximate solution.
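As a quick numerical check of this bound, one can evaluate $|A\,J_{0}(\lambda)|$ over a grid of $\lambda$ values; the coefficient $A$ below is an illustrative placeholder (here, the largest tabulated error), not a value derived in the text.

```python
import numpy as np
from scipy.special import j0  # Bessel function of the first kind, order 0

A = 0.048  # illustrative coefficient
lams = np.linspace(0.0, 10.0, 1001)
errors = np.abs(A * j0(lams))

# Since |J0(lambda)| <= 1 everywhere, the error never exceeds |A|.
print("max |Error| on grid:", errors.max(), " bound |A|:", abs(A))
assert errors.max() <= abs(A) + 1e-12
```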
### 4.6 Rate of Convergence
In Section 5.6, we delve into the rate of convergence achieved using the
Homotopy method and analyze its implications for solving the Duffing
equation.
### 4.7 Plot Commentary
In the plot below, we visualize the approximate solution of the Duffing
equation with $A=0.05$ and $\omega=0.2$ using the Homotopy method with
$\lambda=0.05$. Several observations can be made:
* •
The primary oscillation (blue solid line) exhibits a reduced amplitude and a
lower angular frequency compared to the standard Duffing equation. This
reflects the effect of the parameters $A$ and $\omega$.
* •
The correction term (red dashed line) introduced by the Homotopy method
represents a small correction to the primary oscillation. It refines the
solution to better match the behavior of the system.
* •
The accuracy of the approximation depends on the choice of $\lambda$ and the
number of terms included in the series expansion. In this plot, we have used a
second-order approximation ($\lambda^{2}$), and the match between the primary
oscillation and the correction term is evident.
Figure 3: Approximate Solution for $A=0.05$ and $\omega=0.2$ using Homotopy Method ($\lambda=0.05$)
## 5 Comparison: Picard Iteration vs. Homotopy Approximation
In this section, we evaluate and compare the performance of two methods for
solving the Duffing equation: Picard Iteration and Homotopy Approximation. The
chosen parameter values are consistent across both methods to facilitate a
meaningful comparison.
### 5.1 Picard Iteration
* •
Green Thick Line: The green thick line represents the solution obtained using
Picard Iteration, a numerical method that iteratively refines an initial
guess. Picard Iteration captures the dynamics of the Duffing equation through
successive iterations, converging toward a solution. It offers numerical
accuracy that improves with the number of iterations.
### 5.2 Homotopy Approximation
* •
Red Dashed Line: The red dashed line represents the solution obtained using
Homotopy Approximation. Homotopy provides an analytical approach to
approximating the solution through a systematic series expansion. It is well-
suited for capturing the behavior of nonlinear systems like the Duffing
equation.
### 5.3 Comparison
Both methods produce reasonable approximations to the Duffing equation, albeit
through different approaches. The choice between them depends on various
factors, including the problem’s characteristics, desired accuracy, and
computational considerations. Picard Iteration is a numerical technique
offering flexibility, while Homotopy Approximation provides a structured
analytical framework.
### 5.4 Accuracy
The accuracy of both methods depends on parameter values and the specific
problem instance. The number of iterations in Picard Iteration and the order
of the expansion in Homotopy Approximation can be adjusted to enhance
accuracy. Further refinement can be applied when higher precision is required.
### 5.5 Performance
The performance of Picard Iteration may necessitate careful tuning of the
number of iterations and initial guesses. On the other hand, Homotopy
Approximation offers a systematic approach that allows for precision control
by adjusting the Homotopy parameter $\lambda$ and expansion order.
Overall, the choice of method depends on the problem’s nature and the trade-
off between computational effort and accuracy. Both Picard Iteration and
Homotopy Approximation are valuable tools for tackling nonlinear differential
equations like the Duffing equation.
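For concreteness, here is a minimal Picard-iteration sketch for the Duffing system in first-order form, $\dot{x}=v$, $\dot{v}=\gamma\cos(\omega t)-\delta v-\alpha x-\beta x^{3}$: each sweep replaces the current guess by the integral form of the equations. Parameter values and the time window are illustrative assumptions.

```python
import numpy as np

# Illustrative parameters (not fixed by the paper).
delta, alpha, beta, gamma, omega = 0.1, 1.0, 0.3, 0.5, 1.2
t = np.linspace(0.0, 4.0, 401)
x = np.full_like(t, 0.05)  # initial guess: the constant initial data
v = np.zeros_like(t)

def cumtrapz0(y, t):
    """Cumulative trapezoidal integral, zero at t[0]."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))
    return out

for k in range(50):  # Picard sweep: u_{k+1}(t) = u(0) + integral of F(s, u_k(s))
    a = gamma * np.cos(omega * t) - delta * v - alpha * x - beta * x**3
    x_new, v_new = 0.05 + cumtrapz0(v, t), 0.0 + cumtrapz0(a, t)
    if max(np.max(np.abs(x_new - x)), np.max(np.abs(v_new - v))) < 1e-10:
        break
    x, v = x_new, v_new

print(f"stopped after {k + 1} sweeps; x(4) = {x[-1]:.5f}")
```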
Figure 4: Comparison: Picard Iteration vs. Homotopy Approximation for the Duffing Equation
### 5.6 Rate of Convergence Analysis
To analyze the rate of convergence for the given numerical solutions of the
Duffing equation, we examine how the error changes from one time step to the
next. The rate of convergence at a specific time step, denoted as $N$, can be
calculated as:
$\text{Rate of Convergence}(N)=\frac{\text{Error}(N)}{\text{Error}(N+1)}$
In this analysis, we calculate this ratio at several time steps to assess the
behavior of the approximation method; a value greater than 1 means the error
shrinks from step $N$ to $N+1$, while a value below 1 means it grows.
#### 5.6.1 Rate of Convergence at $t=1$
$\text{Rate of Convergence}(1)=\frac{0.048}{0.032}\approx 1.5$
A rate of convergence greater than 1 at $t=1$ indicates that the error is
decreasing from one time step to the next, signifying convergence.
#### 5.6.2 Rate of Convergence at $t=6$
$\text{Rate of Convergence}(6)=\frac{0.052}{0.052}=1$
A rate of convergence equal to 1 at $t=6$ means that the error remains
constant, indicating that the approximation is consistent.
#### 5.6.3 Rate of Convergence at $t=10$
$\text{Rate of Convergence}(10)=\frac{0.012}{0.016}\approx 0.75$
A rate of convergence less than 1 at $t=10$ suggests that the error is
increasing from one step to the next, indicating local divergence.
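These ratios can be recomputed directly from the error column of Table 1; the short sketch below reproduces the three values discussed above.

```python
# Error values at integer times t = 0..15, read from Table 1.
errors = [0.0, 0.048, 0.032, 0.028, 0.02, 0.048, 0.052, 0.052,
          0.052, 0.012, 0.012, 0.016, 0.016, 0.004, 0.004, 0.0]

for n in (1, 6, 10):  # the three time steps discussed above
    rate = errors[n] / errors[n + 1]
    print(f"rate of convergence at t={n}: {rate:.3f}")
```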
This analysis is based on the numerical solutions provided in Table 1. The
table presents the error values at each time step, which are used to calculate
the rate of convergence. The rate of convergence plot for the Duffing
equation, as depicted in Figure 5, provides valuable insights into the
behavior of the approximate solution obtained using the Homotopy method within
the range of 0 to 200 time steps.
Figure 5: Rate of Convergence for the Duffing Equation
* •
Initial Rapid Convergence: In the early time steps (0 to approximately 50),
the rate of convergence is relatively high. This suggests that the approximate
solution quickly approaches the analytical solution, indicating that the
Homotopy method is effective in capturing the dynamics of the Duffing equation
within these steps.
* •
Steady State Convergence: After the initial rapid convergence, the rate
stabilizes. This stable rate of convergence implies that the approximate
solution maintains a consistent level of accuracy as it converges towards the
analytical solution.
* •
Periodic Variations: Throughout the entire range, there are periodic
oscillations in the rate of convergence. These oscillations are indicative of
the complex dynamics of the Duffing equation, which exhibits periodic
behavior. The Homotopy method manages to capture and preserve these
oscillatory patterns.
* •
Rate Fluctuations: The rate of convergence occasionally exhibits fluctuations,
suggesting moments of increased and decreased accuracy. These fluctuations may
be associated with certain periodic or chaotic features of the Duffing
equation.
* •
Long-Term Behavior: Beyond the range covered by the plot (up to 200 time
steps), it is evident that the rate of convergence may continue to oscillate
or stabilize, depending on the specific parameters and characteristics of the
Duffing equation.
Fixed point stability is a fundamental aspect of understanding the behavior of
dynamic systems, particularly in the context of differential equations. The
rate of convergence plot for the Duffing equation, as depicted in Figure 5,
provides insights into the stability of fixed points within the observed range
of 0 to 200 time steps.
Key observations regarding fixed point stability from the rate of convergence
plot are as follows:
* •
Initial Stability: In the early time steps (0 to approximately 50), the rate
of convergence exhibits a rapid increase, indicating the presence of stable
fixed points. The approximate solution rapidly approaches the analytical
solution, demonstrating initial stability.
* •
Steady-State Stability: Following the initial rapid convergence, the rate of
convergence stabilizes at a consistent level, reflective of steady-state
stability around fixed points. This phase suggests that the system maintains
stability over time.
* •
Periodic Variations: The periodic oscillations in the rate of convergence are
indicative of periodic behavior in the Duffing equation. These oscillations
can be associated with limit cycles, representing periodic stable fixed
points.
* •
Rate Fluctuations: Periodic fluctuations and occasional variations in the rate
of convergence suggest that the system may undergo bifurcations and
transitions. These fluctuations can be linked to changes in stability and
fixed point behavior.
* •
Long-Term Behavior: Beyond the range covered by the plot, the system’s
potential for long-term stability is implied. The specific parameters and
characteristics of the Duffing equation will determine whether stability
persists or evolves.
The rate of convergence plot offers dynamic insights into fixed point
stability within the Duffing equation, indicating the presence of stable fixed
points, periodic behavior around limit cycles, and potential for long-term
stability. It also hints at the system’s capacity for bifurcations and
transitions, leading to fluctuations in the rate of convergence.
## 6 Duffing Equation as a Discrete Dynamical System
In this section, we shift our focus to the Duffing equation
as a discrete dynamical system. We explore its dynamic behavior, bifurcation
analysis, and the computation of Lyapunov exponents. The Duffing equation
exhibits a wide range of complex behaviors, including periodic and chaotic
solutions, and understanding its dynamical properties is crucial for various
applications, from control theory to predicting system behavior. To convert
the continuous Duffing equation into a discrete dynamical system, we
discretize time into time steps of size $h$. Let $x_{n}$ represent the
displacement of the system at time step $n$ (i.e., $x_{n}=x(nh)$). Using a
forward difference for the first derivative and a central difference for the
second derivative, we approximate:
$\dot{x}_{n}\approx\frac{x_{n+1}-x_{n}}{h},\qquad\ddot{x}_{n}\approx\frac{x_{n+1}-2x_{n}+x_{n-1}}{h^{2}}$
Substituting these approximations into the continuous equation, we get the
discrete dynamical system:
$\frac{x_{n+1}-2x_{n}+x_{n-1}}{h^{2}}+\lambda\frac{x_{n+1}-x_{n}}{h}+\lambda\alpha
x_{n}+\lambda\beta x_{n}^{3}=\lambda\gamma\cos(\omega nh)$ (7)
This equation relates the displacement $x_{n+1}$ at the next time step to the
displacements $x_{n}$ and $x_{n-1}$ at the current and previous time steps,
respectively. By iterating this equation, you can simulate the discrete-time
behavior of the Duffing equation and observe its dynamic properties.
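A minimal sketch of this iteration, using the parameter values of Section 7 (where $\lambda$ plays the role of the damping coefficient) and zero initial data, might read as follows.

```python
import numpy as np

# Parameters as in Section 7's simulation.
h, lam, alpha, beta, gamma, omega = 0.01, 0.1, 0.005, 0.02, -0.04, 0.001
N = 5000
x = np.zeros(N)
x[0] = x[1] = 0.0  # zero initial displacement and velocity

for n in range(1, N - 1):
    # Solve equation (7) for x_{n+1} given x_n and x_{n-1}.
    rhs = lam * gamma * np.cos(omega * n * h)
    numer = (rhs + (2 * x[n] - x[n - 1]) / h**2 + lam * x[n] / h
             - lam * alpha * x[n] - lam * beta * x[n] ** 3)
    x[n + 1] = numer / (1.0 / h**2 + lam / h)

print("displacement after", N, "steps:", x[-1])
```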
## 7 Numerical Solution Methodology
In our investigation of the Duffing equation’s discrete dynamics, we employed
the Finite Differences method, a numerical approach to elucidate the system’s
behavior. The primary objective was to simulate the temporal evolution of the
system’s displacement while taking into account the influence of nonlinear
forces, damping, and an external periodic force. The following numerical
approach was employed to achieve the presented plot:
Time Integration with Finite Differences: We discretized the temporal domain,
dividing time into discrete steps with a fixed time step size (h). Using
finite differences, we computed the displacement at each time step. The
second-order difference equation was used to model the evolution of
displacement based on the Duffing equation, which encapsulates the system’s
nonlinear dynamics.
Parameter Values: The simulation was performed with the following parameter
values:
* •
Time step size ($h$): $0.01$
* •
Damping coefficient ($\lambda$): $0.1$
* •
Nonlinearity coefficient ($\alpha$): $0.005$
* •
Nonlinearity exponent ($\beta$): $0.02$
* •
External force amplitude ($\gamma$): $-0.04$
* •
External force frequency ($\omega$): $0.001$
Initial Conditions: Two initial conditions were defined to initialize the time
integration process. These conditions served as starting points for the
simulation. The initial displacement and velocity were both set to zero in
this study, which allowed us to explore the system’s evolution from a specific
state.
Stability and Boundedness: The simulation resulted in a plot of displacement
over time. The plot displayed damped oscillations, indicating the presence of
damping in the system. Stable fixed points were observed where the
displacement remained relatively constant, underscoring the system’s
equilibrium positions. Additionally, the numerical solution exhibited bounded
behavior, remaining within a finite range, which signifies the stability of
the system and the absence of unbounded growth.
Sensitivity to Gamma: It’s important to note that the value of the external
force amplitude (gamma) significantly impacts the behavior of the Duffing
equation. Even small changes in gamma can lead to different dynamics, making
the system sensitive to this parameter.
Periodicity and Predictability: The observed periodic behavior in the plot
implied that the system’s motion repeats at regular intervals. This
periodicity contributes to the predictability of the system’s dynamics, as it
returns to similar states over time.
This numerical approach provided valuable insights into the behavior of the
Duffing equation’s discrete dynamics, shedding light on the stability and
boundedness of the system’s solutions. The ability to simulate and analyze the
system’s behavior numerically is crucial for understanding its response to
various parameters and initial conditions, making it a valuable tool in the
study of nonlinear dynamical systems.
The results are depicted in Figure 6.
Figure 6: Displacement over Time
## 8 Chaotic Behavior and Lyapunov Exponents
In the following analysis of chaotic behavior and Lyapunov exponents, we
consider the following parameter values:
$A=0.2,\qquad t_{\text{max}}=800$
The Duffing equation, which describes the dynamics of the system, is given as
a system of two ordinary differential equations (ODEs):
$\dot{v}(t)=x(t)-x(t)^{3}-0.05v(t)+A\cos(1.1t)$
$\dot{x}(t)=v(t)$
with the initial conditions:
$x(0)=0,\qquad v(0)=0$
These parameters and dynamics are critical for understanding the chaotic
behavior and Lyapunov exponents presented in the following plot analysis.
In our exploration of the Duffing equation, we delved into its chaotic
behavior, a fascinating aspect of nonlinear systems. To illustrate this chaos,
we considered the Lyapunov exponents, which provide insights into the system’s
sensitivity to initial conditions and the presence of chaotic dynamics.[9]
The simulation underlying Figure 7 integrates the Duffing equation with
specific parameter values. The resulting plot showcases the system’s behavior
over time. Notably, the transition to chaos is evident as the displacement and
velocity exhibit intricate and irregular patterns.
Figure 7: Duffing Equation: Displacement and Velocity Over Time
Furthermore, we computed the Lyapunov exponents for this system, which are
approximately 0.503437 and 0.551156. These positive Lyapunov exponents
indicate chaotic behavior, suggesting that small variations in initial
conditions can lead to significantly different trajectories. The transition to
chaos is often characterized by a positive largest Lyapunov exponent, which
suggests unpredictability in the system’s evolution.[23]
Our findings suggest that the Duffing equation can indeed exhibit chaotic
dynamics, making it a captivating subject of study for researchers in
nonlinear dynamics. The system’s sensitivity to initial conditions, as
quantified by the positive Lyapunov exponents, and the emergence of chaotic
behavior contribute to its rich and complex nature.
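A simple way to estimate the largest Lyapunov exponent numerically is the two-trajectory (Benettin-type) method: evolve two nearby initial conditions, periodically renormalize their separation, and average the logarithmic growth. The sketch below applies it to the system above; it is one standard estimator, not necessarily the method used to obtain the exponents reported here.

```python
import numpy as np
from scipy.integrate import solve_ivp

A = 0.2
def duffing(t, s):
    x, v = s
    return [v, x - x**3 - 0.05 * v + A * np.cos(1.1 * t)]

d0, T, blocks = 1e-8, 1.0, 800  # separation, renormalization interval, blocks
s1 = np.array([0.0, 0.0])
s2 = s1 + np.array([d0, 0.0])
log_sum = 0.0
for k in range(blocks):
    t0 = k * T
    s1 = solve_ivp(duffing, (t0, t0 + T), s1, rtol=1e-9, atol=1e-12).y[:, -1]
    s2 = solve_ivp(duffing, (t0, t0 + T), s2, rtol=1e-9, atol=1e-12).y[:, -1]
    d = np.linalg.norm(s2 - s1)
    log_sum += np.log(d / d0)
    s2 = s1 + (s2 - s1) * (d0 / d)  # renormalize the separation vector

print("largest Lyapunov exponent estimate:", log_sum / (blocks * T))
```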
## 9 Chaotic Regime with Quintic Term
In this analysis, we have extended the Duffing equation with a quintic term,
resulting in a chaotic regime. The system is described by the following
equations:
$\dot{v}(t)=x(t)+0.3x(t)^{3}-0.05v(t)+A\cos(0.004t)-Ax(t)^{5}$
$\dot{x}(t)=v(t)$
with the initial conditions:
$x(0)=0,\qquad v(0)=0$
where $A=0.0025$.
The plot below illustrates the behavior of the system over time. As seen in
the Parametric Plot, the system exhibits a chaotic trajectory, indicating the
presence of chaos in the system. The Lyapunov exponents for this regime are
approximately 11.1 and a second value close to 0. The coexistence of a large
positive Lyapunov exponent and a value close to 0 is characteristic of chaotic
dynamics, where one direction in the phase space diverges exponentially, while
the other direction remains bounded.
Figure 8: Chaotic Regime with Quintic Term
This analysis demonstrates the sensitivity of the Duffing equation to the
inclusion of higher-order terms, leading to chaotic behavior.
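The trajectory in Figure 8 can be reproduced, up to integrator tolerances, by directly integrating the quintic system above and plotting $v(t)$ against $x(t)$; the time window below is an illustrative choice.

```python
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

A = 0.0025
def quintic(t, s):
    x, v = s
    return [v, x + 0.3 * x**3 - 0.05 * v + A * np.cos(0.004 * t) - A * x**5]

sol = solve_ivp(quintic, (0.0, 2000.0), [0.0, 0.0], max_step=0.05, rtol=1e-8)
plt.plot(sol.y[0], sol.y[1], lw=0.3)
plt.xlabel("$x(t)$"); plt.ylabel("$v(t)$")
plt.title("Parametric plot of the quintic Duffing regime")
plt.show()
```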
(a) Duffing Equation Behavior for A = 0.2
(b) Phase Space Plot for A = 0.2
Figure 9: Behavior of the Duffing equation for different values of A.
The plots illustrate the behavior of the Duffing equation for $A=0.2$, and the
following observations can be made:
* •
The time series plot (left) exhibits periodic and quasiperiodic behavior.
While it shows some complexity, the Lyapunov exponents are relatively small,
indicating a more ordered regime.
* •
The phase space plot (right) reveals a torus-like structure, indicating
quasiperiodic behavior.
Now, let’s explore the effect of decreasing the parameter A to $A=0.00025$:
(a) Duffing Equation Behavior for A = 0.00025
(b) Phase Space Plot for A = 0.00025
Figure 10: Behavior of the Duffing equation for different values of A
As A decreases, the system transitions to a highly chaotic regime. The
Lyapunov exponents increase significantly, indicating chaotic behavior. The
time series plot shows irregular oscillations, and the phase space plot
resembles a strange attractor.[19]
In summary, the Duffing equation exhibits a range of behaviors depending on
the value of A. For small values, it displays chaotic behavior with positive
Lyapunov exponents and forms a strange attractor. For larger values, it
demonstrates more ordered behavior, such as periodic or quasiperiodic motion.
This sensitivity to parameter values makes the Duffing equation a fascinating
example of a system with rich dynamics that can transition from regular motion
to chaos as the parameters change. We have also calculated Lyapunov exponents
for the dynamical system characterized by the equations:
$\dot{v}(t)=x(t)+0.3x(t)^{3}-0.05v(t)+A\cos(0.004t)-Ax(t)^{5}$
$\dot{x}(t)=v(t)$
The Lyapunov exponents are computed for various values of the parameter $A$
which is constrained to be less than 0.0001. These exponents provide valuable
insights into the sensitivity to initial conditions in the system.
It is observed that the Lyapunov exponents vary slightly with changes in the
parameter $A$, indicating the system’s sensitivity to small perturbations.
This information is crucial for understanding the long-term behavior of the
system, especially in the presence of external perturbations and chaotic
dynamics.
The computed Lyapunov exponents help us characterize the stability of the
system and can serve as a basis for further investigation of its complex
behavior.[14]
## 10 Application of Duffing Quintic Equation in Ecology
The Duffing quintic equation is a powerful mathematical model with wide-
ranging applications, including its use in the field of ecology to study
complex interactions within ecological systems. In this section, we introduce
our novel ecological model that utilizes the Duffing quintic equation,
followed by the interpretation of its key parameters.[14]
### 10.1 The Ecological Model
In our ecological model, we consider the interplay between radioactivity on
Earth and the rate of people infected with cancer due to this radioactivity.
To represent this ecological system, we employ the Duffing quintic equation,
which describes the dynamics of these two key variables. The model consists of
the following equations:
$\frac{dx(t)}{dt}=-A\cdot y(t)-\cos(\omega t)-x(t)^{3}+\alpha x(t)^{5}$ (8)
$\frac{dy(t)}{dt}=B\cdot\frac{dx(t)}{dt}$ (9)
Here, the variables are defined as follows:
* •
$x(t)$ represents the rate of radioactivity on Earth at time $t$.
* •
$y(t)$ represents the rate of people infected with cancer due to radioactivity
at time $t$.
* •
$A$ is the coupling strength, reflecting the relationship between
radioactivity and cancer incidence.
* •
$\omega$ represents the angular frequency of external forcing factors
affecting radioactivity levels.
* •
$\alpha$ is a coefficient governing the quintic nonlinearity, and $B$ (written
$\beta$ in later sections) scales the response of cancer incidence to changes
in radioactivity.
### 10.2 Interpretation of Parameters
The parameters in our ecological model carry specific ecological
interpretations:
* •
$x(t)$: This variable represents the rate of radioactivity on Earth, which
could signify the emission of ionizing radiation from various sources.
* •
$y(t)$: The rate of people infected with cancer due to radioactivity exposure,
reflecting the impact on human health.
* •
$A$: The coupling strength parameter, indicating the strength of the
connection between radioactivity levels and cancer incidence. A higher value
of $A$ suggests a more pronounced relationship.
* •
$\omega$: The angular frequency parameter, signifying the periodicity of
external factors affecting radioactivity. Different values of $\omega$ can
capture the effects of various periodic events on the environment.
* •
$\alpha$ and $B$: Coefficients that shape the nonlinear dynamics of the
system ($B$ appears as $\beta$ in later sections). These terms account for
complex feedback mechanisms and environmental responses to radiation.
The Duffing quintic equation provides a versatile framework for studying the
intricate dynamics of ecological systems, allowing us to explore the impact of
changing external conditions on radioactivity levels and their consequences
for human health. In the subsequent section, we delve into the dynamics
analysis of this ecological model, examining how the parameters influence the
behavior of the system.[29]
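As a concrete illustration, the model (8)-(9) can be integrated numerically. The sketch below uses the Case 1 parameter values introduced in Section 11.1 ($A=0.2$, $B=\beta=0.001$, $\omega=0.06$, $\alpha=-0.0005$, $x(0)=0.08$, $y(0)=0.07$); the integration horizon is an illustrative choice.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Case 1 parameters from Section 11.1 (B is the coupling written beta there).
A, B, omega, alpha = 0.2, 0.001, 0.06, -0.0005

def ecology(t, s):
    x, y = s  # x: radioactivity rate, y: cancer-incidence rate
    dxdt = -A * y - np.cos(omega * t) - x**3 + alpha * x**5
    return [dxdt, B * dxdt]

sol = solve_ivp(ecology, (0.0, 500.0), [0.08, 0.07], max_step=0.1, rtol=1e-8)
print("final radioactivity x:", sol.y[0, -1])
print("final cancer rate   y:", sol.y[1, -1])
```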
## 11 Analysis of the Ecological Model
In this section, we conduct an in-depth analysis of our ecological model,
which employs the Duffing quintic equation to study the interactions between
radioactivity on Earth (”x”) and the rate of people infected with cancer due
to this radioactivity (”y”). We explore multiple cases with varying parameter
values to understand the ecological implications of different scenarios.
### 11.1 Case 1: Parameters for Complex Oscillations
In our first case, we use the following parameter values:
* •
Coupling Strength ($A=0.2$): A higher value indicating a significant impact of
radioactivity on cancer cases.
* •
Effect of ”x” on ”y” ($\beta=0.001$): A relatively weak influence of
radioactivity on cancer incidence.
* •
External Forcing Frequency ($\omega=0.06$): The angular frequency representing
periodic external forces.
* •
Nonlinear Effects ($\alpha=-0.0005$): A negative coefficient introducing
nonlinearity to the system.
* •
Initial Conditions ($x(0)=0.08$, $y(0)=0.07$): Starting conditions of the
ecological system.
#### 11.1.1 Population Dynamics
The plot of the ecological model dynamics (Figure 11) reveals several key
observations:
* •
Radioactivity Oscillations: The population ”x” exhibits complex oscillations,
influenced by the cosine term and nonlinear quintic term in the Duffing
equation. Radioactivity levels fluctuate over time.
* •
Weak Impact of Radioactivity on Cancer Cases: The relatively weak influence of
radioactivity on cancer cases is indicated by the small $\beta$ value. Changes
in radioactivity have a limited impact on cancer rates.
* •
Periodic Behavior: The presence of periodic oscillations in population ”x”
suggests a response to external forcing factors with an angular frequency of
$\omega=0.06$.
* •
Nonlinear Dynamics: Oscillations and potential chaos in population ”x” result
from nonlinear terms in the equation.
Figure 11: Dynamics of the ecological model for $\omega=0.06$
#### 11.1.2 Interactions and Implications
The complex interactions in this case demonstrate the intricate relationship
between radioactivity and cancer incidence. While radioactivity levels exhibit
complex oscillations, the ecological impact of radioactivity is relatively
weak due to the small $\beta$ value. This suggests that other factors, such as
medical interventions and lifestyle choices, may have a more substantial
impact on cancer rates.
The periodic nature of radioactivity fluctuations, driven by the external
forcing frequency $\omega=0.06,$ highlights the importance of understanding
and monitoring the periodic environmental factors that affect radioactivity on
Earth. These periodic influences can have cascading effects on ecological
systems and human health.
### 11.2 Case 2: Parameters for Strong Impact
In our second case, we consider the following parameter values:
* •
Coupling Strength ($A=0.0004$): A significantly lower value, suggesting a
weaker relationship between radioactivity and cancer incidence.
* •
Effect of ”x” on ”y” ($\beta=0.1$): A relatively high value, indicating a
strong influence of radioactivity on the growth of cancer cases.
* •
External Forcing Frequency ($\omega=0.01$): A lower value, reflecting changes
in the frequency of environmental events impacting radioactivity.
* •
Nonlinear Effects ($\alpha=0.005$): A positive coefficient introducing
nonlinearity to the system.
* •
Initial Conditions ($x(0)=0.8$, $y(0)=0.9$): Starting conditions of the
ecological system for this case.
#### 11.2.1 Population Dynamics
The plot of the ecological model dynamics (Figure 12) illustrates the
following observations:
* •
Steady State and Limited Oscillations: Population ”x” exhibits a more stable
behavior with limited oscillations due to the lower $\omega$ value.
* •
Strong Impact of Radioactivity on Cancer Cases: The higher $\beta$ value
implies that changes in radioactivity have a more substantial effect on cancer
cases. Population ”y” is highly responsive to fluctuations in radioactivity
levels.
* •
Nonlinear Dynamics: The positive $\alpha$ coefficient introduces nonlinearity
into the system, leading to more complex and potentially chaotic interactions
between radioactivity and cancer incidence.
Figure 12: Dynamics of the ecological model for $\omega=0.01$
#### 11.2.2 Interactions and Implications
In this case, the ecological model exhibits different interactions between
radioactivity and cancer incidence. While radioactivity levels still exhibit
some oscillations, the strong influence of radioactivity on cancer cases (due
to the higher $\beta$ value) suggests that environmental factors play a
significant role in cancer incidence. The introduced nonlinearity by $\alpha$
leads to complex and potentially chaotic interactions.
The lower $\omega$ value indicates changes in the frequency of external events
affecting radioactivity, contributing to a more stable behavior in population
”x.”
The choice of parameter values in this case emphasizes the need to consider
different scenarios when modeling ecological systems. Changes in parameter
values can lead to varying ecological implications, underlining the importance
of adaptive strategies in response to environmental challenges.
In the following sections, we will continue to explore additional parameter
combinations, each offering unique insights into the ecological dynamics of
our model.
## 12 Bifurcation Analysis of a New Ecology Model
In this section, we present a bifurcation analysis [31] of our newly proposed
ecology model, which describes the dynamics of radioactivity ($x$) and cancer
cases ($y$). The model is governed by the following set of differential
equations:
$\frac{dx(t)}{dt}=-0.095\cdot y(t)-\cos(\omega t)-x(t)^{3}-0.0000056\,x(t)^{5}$ (10)
$\frac{dy(t)}{dt}=2.00056\cdot\frac{dx(t)}{dt}$ (11)
Here, we have used the following parameter values:
$A=-0.095,\qquad\beta=2.00056,\qquad t_{\text{max}}=10000,\qquad\alpha=-0.0000056$
Figure 13: Bifurcation Diagram of $x$ and $y$ for $\omega\in(0.000025,0.003)$
We performed numerical simulations[10] of the model over a range of $\omega$
values, specifically in the interval $(0.000025,0.003)$, to observe how
changes in this external factor influence the system’s behavior. The results
of our analysis are summarized in the bifurcation plot shown in Figure 13.
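A sketch of the scan behind Figure 13, implementing equations (10)-(11) directly and recording the terminal values for each $\omega$, could look as follows. The initial conditions, which are not specified here, are taken as $x(0)=y(0)=0$ for illustration, and a coarse $\omega$ grid is used because the full scan to $t_{\text{max}}=10000$ is computationally heavy.

```python
import numpy as np
from scipy.integrate import solve_ivp

t_max = 10000.0

def model(t, s, omega):
    x, y = s
    dxdt = -0.095 * y - np.cos(omega * t) - x**3 - 0.0000056 * x**5
    return [dxdt, 2.00056 * dxdt]

results = []
for w in np.linspace(2.5e-5, 3.0e-3, 60):  # omega grid over (0.000025, 0.003)
    sol = solve_ivp(model, (0.0, t_max), [0.0, 0.0], args=(w,), rtol=1e-6)
    results.append((w, sol.y[0, -1], sol.y[1, -1]))  # (omega, x(t_max), y(t_max))

for w, xf, yf in results[:5]:
    print(f"omega={w:.6f}  x(t_max)={xf: .4f}  y(t_max)={yf: .4f}")
```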
In Figure 13, we depict the bifurcation diagram, illustrating the values of
$x(t_{\text{max}})$ and $y(t_{\text{max}})$ at the end of each simulation as a
function of $\omega$. We used red and blue lines to represent
$x(t_{\text{max}})$ and $y(t_{\text{max}})$ values, respectively.
The bifurcation plot shown in Figure 13 is considered Case 1 of our analysis.
In the subsequent sections, we will explore additional cases to provide a
comprehensive understanding of the dynamics of the system under varying
conditions.
From the bifurcation diagram of Case 1, we observe several key behaviors and
interactions between radioactivity ($x$) and cancer cases ($y$):
* •
Bistability: In certain regions of $\omega$, we notice bistable behavior. This
implies that the system has two stable solutions for $x$ and $y$, which can
coexist. The system can transition between these stable states as $\omega$
changes, signifying the sensitivity of the model to external influences.[20]
* •
Limit Cycle: Beyond a critical value of $\omega$, we observe the emergence of
limit cycles. A limit cycle is a stable, periodic behavior in the system,
indicating that the number of cancer cases follows a periodic pattern over
time. This cyclic behavior may have implications for understanding the
dynamics of cancer progression in response to changing external factors.
* •
Complex Dynamics: For certain ranges of $\omega$, the behavior of the system
becomes highly complex. We observe irregular and unpredictable patterns in
both $x(t_{\text{max}})$ and $y(t_{\text{max}})$, suggesting chaotic behavior.
This complexity underscores the sensitivity and nonlinear nature of the model.
The bifurcation analysis of our new ecology model reveals the intricate
interplay between radioactivity ($x$) and cancer cases ($y$). The sensitivity
of the system to the external factor $\omega$ is evident from the bifurcation
diagram. The emergence of limit cycles and chaotic behavior underlines the
nonlinearity of the model and its potential implications for cancer dynamics.
### 12.1 Case 2: Bifurcation Analysis with $\omega\in(0.0000237,0.00281)$
In Case 2, we conducted a comprehensive bifurcation analysis of the model
while exploring a range of $\omega$ values within the interval
$(0.0000237,0.00281)$. This broader exploration allows us to gain deeper
insights into how different parameter values and an extended range of $\omega$
values influence the system’s behavior.
For this case, we used the following parameter values:
$A=-0.000095,\qquad\beta=-3.00056,\qquad t_{\text{max}}=10000,\qquad\alpha=-0.6$
Our analysis led to intriguing observations, as depicted in the bifurcation
plot shown in Figure 14.
Figure 14: Bifurcation Diagram of $x$ and $y$ for Case 2 with
$\omega\in(0.0000237,0.00281)$.
In Figure 14, we illustrate the bifurcation diagram that portrays the values
of $x(t_{\text{max}})$ and $y(t_{\text{max}})$ at the conclusion of each
simulation as a function of $\omega$. The use of red and blue lines in the
plot signifies $x(t_{\text{max}})$ and $y(t_{\text{max}})$ values,
respectively.
#### 12.1.1 Limit Cycles and Strange Attractors
In the analysis of Case 2, we observed intriguing dynamic behaviors that
warrant further discussion:
* •
Limit Cycles: As we explored a range of $\omega$ values, we identified the
emergence of limit cycles within certain parameter regimes. A limit cycle
signifies a stable, periodic behavior in the system. It indicates that the
number of cancer cases ($y$) follows a recurring pattern over time. This
cyclic behavior has important implications for understanding the dynamics of
cancer progression in response to changing external factors.
* •
Complex Dynamics and Strange Attractors: Beyond specific values of $\omega$,
the system exhibited highly complex behaviors. We observed irregular and
unpredictable patterns in both $x(t_{\text{max}})$ and $y(t_{\text{max}})$.
Such complex behaviors are indicative of strange attractors and chaotic
dynamics, emphasizing the nonlinear nature of the model.
#### 12.1.2 Interactive Relationship Between Radioactivity ($x$) and Cancer
Cases ($y$)
The bifurcation analysis in Case 2 unveils the intricate interplay between
radioactivity ($x$) and cancer cases ($y$) while considering the influence of
the external factor $\omega$. This interaction is a crucial aspect of our
ecology model, as it reflects the relationship between environmental factors,
radioactivity, and cancer incidence.
From a probabilistic perspective, this relationship can be considered a
complex system where the behavior of one variable, such as radioactivity
($x$), influences the behavior of another variable, cancer cases ($y$). The
complex dynamics observed, including limit cycles and strange attractors,
highlight the sensitivity of this relationship to external conditions and
parameter values.[15]
Understanding the interactive relationship between radioactivity and cancer
cases is essential in the context of cancer research. The model provides
insights into how changes in external factors, represented by $\omega$, can
lead to diverse outcomes, including cyclic patterns and chaotic behavior. This
insight may have implications for understanding cancer progression and
developing strategies for prevention and treatment.
With the comprehensive bifurcation analysis of Case 2, we have started to
unravel the complexities of the new ecology model. Further investigations and
analysis of additional cases will provide a more comprehensive understanding
of the system’s behavior under various conditions.
### 12.2 Case 3: Bifurcation Analysis with
$\omega\in(0.000000000000237,0.0000000008)$
In Case 3, we extend our exploration to an even broader range of $\omega$
values, spanning from $0.000000000000237$ to $0.0000000008$. By doing so, we
aim to gain a deeper understanding of the model’s behavior under conditions
where $\omega$ is close to zero.
For this case, the following parameter values were used:
$A=-0.000095,\qquad\beta=-3.00056,\qquad t_{\text{max}}=10000,\qquad\alpha=-0.0006$
The results of our bifurcation analysis are visualized in Figure 15.
Figure 15: Bifurcation Diagram of $x$ and $y$ for Case 3 with
$\omega\in(0.000000000000237,0.0000000008)$.
Figure 15 presents the bifurcation diagram showing the values of
$x(t_{\text{max}})$ and $y(t_{\text{max}})$ at the conclusion of each
simulation as a function of $\omega$. Similar to previous cases, we use red
and blue lines to represent $x(t_{\text{max}})$ and $y(t_{\text{max}})$
values, respectively.
#### 12.2.1 Understanding the Behavior
In Case 3, our exploration near $\omega\approx 0$ leads to several noteworthy
observations:
* •
Sensitivity to Small Changes in $\omega$: As we approach $\omega\approx 0$,
the system exhibits high sensitivity to even minor changes in $\omega$. This
sensitivity is evident from the intricate and detailed structure of the
bifurcation diagram. Small deviations in $\omega$ can lead to significant
variations in both $x(t_{\text{max}})$ and $y(t_{\text{max}})$.
* •
Interactive Relationship Between $x$ and $y$: The bifurcation analysis in Case
3 further highlights the interactive relationship between radioactivity ($x$)
and cancer cases ($y$). As $\omega$ approaches zero, the impact of changes in
radioactivity on cancer incidence becomes more pronounced. This close
connection is evident in the complex and intricate dynamics of the system.
* •
Multiple Attractors: In this case, we observe the presence of multiple
attractors for both $x(t_{\text{max}})$ and $y(t_{\text{max}})$. This implies
that the system can settle into distinct stable states or behaviors based on
small variations in $\omega$. Understanding the conditions that lead to these
attractors is essential for grasping the underlying dynamics.
Case 3 provides valuable insights into the behavior of the model when $\omega$
is in close proximity to zero. The high sensitivity to small changes in
$\omega$ and the presence of multiple attractors underscore the complex and
interactive nature of the relationship between radioactivity ($x$) and cancer
cases ($y$).
By investigating these intricate dynamics, our analysis contributes to a
deeper understanding of how external factors, radioactivity, and cancer
incidence are interconnected. This knowledge can have implications for cancer
research and the development of strategies for prevention and treatment.
Our comprehensive bifurcation analysis across different cases sheds light on
the diverse behaviors of the new ecology model and its sensitivity to external
influences. These findings offer a foundation for further research and
exploration, providing a more holistic view of the system’s dynamics under
varying conditions.
### 12.3 Concluding Remarks
Our comprehensive bifurcation analysis of the new ecology model reveals a rich
tapestry of behaviors and interactions between radioactivity ($x$) and cancer
cases ($y$). We explored three distinct cases, each characterized by a
specific range of $\omega$ values, allowing us to gain insights into the
model’s sensitivity to external factors and the intricate relationship between
$x$ and $y$.
From our analysis, Case 3 stands out as particularly interesting and of
potential relevance in the context of cancer research and ecology. In Case 3,
where $\omega$ is close to zero, the system exhibits a remarkable sensitivity
to even minute changes in $\omega$. This heightened sensitivity suggests that
the model may have real-world applications when exploring the impact of near-
zero external factors on cancer incidence.
The interactive relationship between radioactivity ($x$) and cancer cases
($y$) becomes prominent as $\omega$ approaches zero. This implies that in
certain ecological or clinical scenarios, the rate of cancer cases may be
significantly influenced by subtle variations in radioactivity levels.
Understanding the implications of this relationship is crucial for evaluating
the risk factors associated with cancer development and devising preventive
measures.
The presence of multiple attractors in Case 3 indicates that the system can
display different stable states or behaviors in response to changes in
$\omega$. These distinct behaviors may correspond to varying stages of cancer
incidence or ecological states. Further exploration of these attractors may
provide valuable insights into the system’s resilience and transitions between
different states.
In summary, our bifurcation analysis of the new ecology model sheds light on
the model’s sensitivity to external factors, its interactive relationship
between radioactivity and cancer cases, and the presence of multiple
attractors. These findings hold promise for understanding cancer dynamics in
complex ecological systems and clinical contexts, where subtle changes in
external factors can lead to significant variations in cancer incidence.
In the concluding sections, we summarize the mathematical findings of the
model and their implications for ecological and medical research. Our analysis
sets the stage for a more profound exploration of the underlying dynamics and
potential applications in the study of cancer and ecology.
## 13 Conclusion
In this study, we have undertaken a comprehensive investigation into the
Duffing equation, examining its behavior as a continuous and discrete
dynamical system, its application in the field of ecology, and the
significance of the obtained results. The main findings and their implications
can be summarized as follows:
### 13.1 Numerical Analysis of the Duffing Equation
Our exploration of the continuous Duffing equation revealed not only its
analytical solutions but also the utility of the Homotopy method for obtaining
approximate solutions. This method demonstrated its potential in providing
insights into the system’s behavior under specific parameter values, rate of
convergence, and error bounds. Our results suggest the possibility of long-term
stability in the Duffing equation, offering a promising avenue for future
research in dynamical systems [18].
### 13.2 Discrete Dynamics and Chaotic Behavior
We successfully converted the continuous Duffing equation into a discrete
dynamical system, paving the way for a deeper understanding of its behavior
over time. Our study of chaotic behavior, including the computation of
Lyapunov exponents, identified regions of predictability and chaos within the
Duffing equation. Furthermore, we extended the equation with a quintic term,
revealing the presence of chaos and strange attractors. The sensitivity of the
Duffing equation to changes in parameters further emphasizes its potential for
studying diverse dynamical behaviors [16].
### 13.3 Application in Ecology
Our novel ecological model, built upon the Duffing quintic equation, yielded
essential insights into the interactive relationship between radioactivity and
cancer incidence. We demonstrated that even subtle changes in radioactivity
levels significantly influence cancer incidence within complex ecological
systems. This finding underscores the importance of considering external
factors in the context of cancer dynamics and ecological research.
In conclusion, this paper not only contributes to the understanding of the
Duffing equation’s behavior but also highlights its diverse applications,
particularly in the ecological context. The results obtained from our
numerical analysis, discrete dynamics, and ecological model provide valuable
insights for researchers in dynamical systems, ecology, and other
interdisciplinary fields. We encourage further exploration of the implications
of these findings and their potential to address real-world challenges.
## 14 Future Research
The findings and insights presented in this study open up several promising
avenues for future research. Building on the main results obtained in our
investigation, we suggest the following directions for further exploration:
### 14.1 Enhanced Numerical Methods
The Homotopy method has shown its effectiveness in providing approximate
solutions to the Duffing equation. Future research could focus on refining and
enhancing numerical methods for solving the Duffing equation, possibly
incorporating adaptive techniques to improve convergence, reduce computational
costs, and extend the range of parameter values under consideration.
### 14.2 Complex Dynamics and Predictability
The study of chaotic behavior in the Duffing equation has uncovered regions of
predictability and chaos. Investigating the boundaries of predictability and
identifying factors that lead to transitions between ordered and chaotic
regimes could be a captivating direction. Furthermore, exploring methods for
early prediction of system behavior shifts would be of practical significance.
### 14.3 Ecological Implications
The application of the Duffing quintic equation in ecology has revealed the
strong interactive relationship between radioactivity and cancer incidence.
Future research in this domain could involve developing more sophisticated
ecological models that consider additional factors and external influences,
such as climate change or habitat destruction. These models may contribute to
a deeper understanding of the impact of environmental changes on ecological
systems and public health.
### 14.4 Interdisciplinary Approaches
The Duffing equation has exhibited its versatility by offering insights in
multiple fields, from numerical analysis to ecology. Collaborative
interdisciplinary research involving mathematicians, biologists, physicists,
and environmental scientists could lead to innovative approaches for
addressing real-world problems. Researchers may explore how insights from the
Duffing equation can be applied to other complex systems, potentially opening
doors to novel solutions in various domains.
### 14.5 Experimental Validation
Translating the theoretical findings into practical applications often
involves experimental validation [30]. Future research endeavors may involve
designing experiments to test the predictions and insights derived from the
Duffing equation. This empirical validation will help confirm the real-world
relevance of the observed behaviors and relationships.
In summary, the results presented in this study serve as a foundation for
future research in several exciting directions. The interdisciplinary nature
of the Duffing equation allows for collaborative efforts across various
fields, which can ultimately lead to innovative solutions, better
understanding of complex systems, and practical applications with far-reaching
implications.
## 15 Data Availability
The data and code used in this research paper are available for public access.
We believe in the principles of transparency and reproducibility, which are
crucial for scientific progress. To access the data, code, and supplementary
materials related to this study, please visit the following repository:
GitHub Repository: https://github.com/topics/duffing-equation?l=mathematica
We encourage researchers, scholars, and anyone interested to explore,
validate, and build upon our findings. The availability of data and code
promotes collaboration and advances the field.
## 16 Motivation
The primary motivation behind this research stems from the profound impact of
chaos theory and its applications in modern science. Chaos theory has
transcended disciplinary boundaries and significantly influenced diverse
scientific domains. We draw inspiration from the work of Zeraoulia [25], which
underscores the power of chaos theory in modern science.
Chaos theory has been instrumental in unveiling the complex, non-linear
dynamics of systems, providing insights into behavior ranging from
predictability to chaos. Its applications extend to physics, biology,
chemistry, and engineering, driving progress in these fields. The work by
Zeraoulia [25] highlights the relevance of chaos theory in modern scientific
endeavors and serves as a guiding force for this paper’s exploration.
Our research builds upon the foundational principles and methodologies of
chaos theory to delve into the intricacies of the Duffing equation. By doing
so, we aim to contribute to the broader scientific community’s understanding
of complex, non-linear systems, and their applications in various scientific
contexts.
Incorporating data availability and drawing motivation from the comprehensive
reference provided by Zeraoulia [25], we strive to advance the boundaries of
knowledge and inspire further research in the realm of chaos theory and its
modern science applications.
## Conflict of Interest
The authors declare that there is no conflict of interest regarding the
publication of this paper. We confirm that this research was conducted in an
unbiased and impartial manner, without any financial, personal, or
professional relationships that could be perceived as conflicting with the
objectivity and integrity of the research or the publication process.
## 17 Acknowledgments
The completion of this research paper was a collaborative effort, and I wish
to express my heartfelt appreciation to my co-author, Sister Chaima Zeraoulia.
Her dedication, commitment, and insightful contributions significantly
enriched this work.
Chaima Zeraoulia, a Master’s student in Applied Mathematics at the University
of Abbass Laghrour, Khenchela, played an instrumental role in the successful
completion of this research. Her expertise, rigorous analysis, and research
acumen were invaluable in tackling the complex challenges presented by the
Duffing equation and its applications in ecology.
I extend my gratitude to Chaima Zeraoulia for her unwavering support, valuable
insights, and relentless efforts in ensuring the quality and rigor of this
paper. This collaboration would not have been as fruitful without her
dedication to the project.
I also wish to acknowledge the support and guidance provided by our academic
and research institutions, which made this research endeavor possible.
Together, we have contributed to the advancement of knowledge in the field of
nonlinear dynamics, chaos theory, and its applications in modern science. I
look forward to future collaborations and the continued pursuit of scientific
exploration.
Thank you, Chaima Zeraoulia, for your remarkable contributions and dedication
to this research.
## References
* [1] Meirovitch, L. (1986). Analytical Methods in Vibrations. Macmillan Publishing Company.
* [2] Kamiński, M., Corigliano, A. Numerical solution of the Duffing equation with random coefficients. Meccanica 50, 1841–1853 (2015). https://doi.org/10.1007/s11012-015-0133-0
* [3] Alexander, J.C. (1979) ‘Numerical continuation methods and bifurcation’, in H.O. Peitgen, H.O. Walther, (eds.), Functional differential equations and approximations of fixed points, Springer, Berlin, pp. 1-15.
* [4] Allgower, E.L., Georg, K. (1980) ‘Simplicial and continuation methods for approximating fixed points and solutions to systems of equations’, SIAM Review 22, 28-85.
* [5] Nayfeh, A. H., and Mook, D. T. (1979). Nonlinear Oscillations. Wiley-Interscience.
* [6] Strogatz, S. H. (2018). Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering. CRC Press.
* [7] Skinner, J.E., Molnar, M., Vybiral, T. et al. Application of chaos theory to biology and medicine. Integrative Physiological and Behavioral Science 27, 39–53 (1992). https://doi.org/10.1007/BF02691091
* [8] Moon, F. C. (2008). Chaotic Vibrations: An Introduction for Applied Scientists and Engineers. Wiley.
* [9] Jordan, D. W., and Smith, P. (2007). Nonlinear Ordinary Differential Equations: An Introduction for Scientists and Engineers. Oxford University Press.
* [10] Nusse, H. E., and Yorke, J. A. (1992). Dynamics: Numerical Explorations. Springer.
* [11] Kantz, H., and Schreiber, T. (2003). Nonlinear Time Series Analysis. Cambridge University Press.
* [12] Ott, E. (2002). Chaos in Dynamical Systems. Cambridge University Press.
* [13] Kuznetsov, Y. A. (2004). Elements of Applied Bifurcation Theory. Springer.
* [14] Wolf, A., Swift, J. B., Swinney, H. L., and Vastano, J. A. (1985). Determining Lyapunov Exponents from a Time Series. Physica D: Nonlinear Phenomena, 16(3), 285-317.
* [15] Holger Kantz and Thomas Schreiber. (2003). Nonlinear Time Series Analysis. Cambridge University Press.
* [16] Uhlmann, M. (1996). Chaos-based Cryptosystems: A Brief Overview. In Security and Watermarking of Multimedia Contents II (Vol. 2720, pp. 232-243). International Society for Optics and Photonics.
* [17] Kocarev, L., and Stojanovski, T. (2001). Cryptography Based on Nonlinear Dynamics. Philosophical Transactions of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences, 359(1781), 1869-1886.
* [18] Moon, F. C. (2000). Chaotic Vibrations: An Introduction for Applied Scientists and Engineers. Wiley-VCH.
* [19] Baker, G. (1990). Chaotic Dynamics: An Introduction. Cambridge University Press.
* [20] Shilnikov, L. P. (2001). Methods of Qualitative Theory in Nonlinear Dynamics. World Scientific.
* [21] Guckenheimer, J., and Holmes, P. (1983). Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields. Springer.
* [22] Feigenbaum, M. J. (1978). Quantitative universality for a class of nonlinear transformations. Journal of Statistical Physics, 19(1), 25-52.
* [23] Ott, E., Sauer, T., and Yorke, J. A. (Eds.) (1994). Coping with Chaos: Analysis of Chaotic Data and the Exploitation of Chaotic Systems. Wiley.
* [24] Pecora, L. M., and Carroll, T. L. (1990). Synchronization in chaotic systems. Physical Review Letters, 64(8), 821-824.
* [25] Zeraoulia, E. (2012). Models and Applications of Chaos Theory in Modern Sciences. CRC Press.
* [26] Brogliato, B., and Tanwani, A. (2020). Dynamical systems coupled with monotone set-valued operators: Formalisms, applications, well-posedness, and stability. SIAM Review, 62(1), 3-129. https://doi.org/10.1137/18M1234795
* [27] Shilnikov, L. P., A Case of the Existence of a Denumerable Set of Periodic Motions, Soviet Math. Dokl., 1965, vol. 6, pp. 163–166; see also: Dokl. Akad. Nauk SSSR, 1965, vol. 160, pp. 558-561.
* [28] Chow, Shui Nee, Hale, J. K., and Mallet-Paret, J., An Example of Bifurcation to Homoclinic Orbits, J. Differential Equations, 1980, vol. 37, no. 3, pp. 351–373.
* [29] Drubi, F., Ibáñez, S., and Rodríguez, J. Á., Coupling Leads to Chaos, J. Differential Equations, 2007, vol. 239, no. 2, pp. 371–385.
* [30] Shilnikov, L. P., Existence of a Countable Set of Periodic Motions in a Four-Dimensional Space in an Extended Neighborhood of a saddle-focus, Soviet Math. Dokl., 1967, vol. 8, no. 1, pp. 54–58; see also: Dokl. Akad. Nauk SSSR, 1967, vol. 172, no. 1, pp. 54-57.
* [31] Buffoni, B., Champneys, A. R., and Toland, J. F., Bifurcation and Coalescence of a Plethora of Homoclinic Orbits for a Hamiltonian System, J. Dynam. Differential Equations, 1996, vol. 8, no. 2, pp. 221–279.
combinations of TX parameters, the haystack fraction will simply be
$\Phi_{Det}=N_{Det\\!-\\!survey}$ (A20)
An update to the Haystack Boundary table from (Wright et al., 2018b) appears
in Table 6 and summarizes the necessary search dimensions. The distance
parameter $d_{H\\!-\\!MAX}$ covers all distances from 0 to $\infty$ and no
longer needs to be specified, as $P_{D}(d)\rightarrow 0$ beyond $d_{MAX}$.
Note that frequency drift rate has been added, per Siemion et al. (2013) and
Sheikh et al. (2019). The TX dwell time $T_{TXdwell}$ has also been added. The
limits shown are hypothesized, and might be narrowed considerably with more
analysis of algorithms and link budgets. The fact that variables are
intertwined (correlated) makes the haystack model somewhat more complex.
Table 6: Summary of Revised Haystack Dimensions Dimension | Symbol | Lower Bound | Upper Bound | Comments
---|---|---|---|---
Volume parameters | V | | |
Distance | $d_{H\\!-\\!MAX}$ | 0 | $\infty$ | Detection range $d_{MAX}$ is determined by RX and TX parameters. Assume uniform a priori distribution over all space, so TX density only depends on $\rho_{STAR}.$
Solid angle coverage | $\Omega$ | 0 | $4\pi$ | RX scan should be over full $4\pi$ in WFS, but limited to $\Omega_{FOV}$ for any one observation. $\Omega_{FOV}$ will be a function of frequency. Distinct from $\Omega_{total}=N_{obs}\Omega_{FOV}$
Transmit Parameters | $\Gamma_{TX}$ | | |
Effective Isotropic Radiated Power | EIRP | $10^{13}\,$W | | Leave as a reference value $EIRP_{0}=10^{13}\,$W, so EIRP is left out of the haystack integration. Could alternatively consider a truncated power law distribution.
TX center frequency | $\nu$ | 10 MHz | 115 GHz | FFOM peaks over 300-700 MHz, and declines rapidly above 2 GHz. Assume a priori distribution is strongly frequency-dependent, favoring lower frequencies.
TX Bandwidth | $BW_{TX}$ | 1 Hz | 20 MHz | Narrow-band ($<\\!10$ kHz) is the focus of current analysis, but wideband pulse train waveforms are possible. Form of the a priori distribution is unclear, but certainly is not uniform.
TX Dwell Time | $T_{TXdwell}$ | 1 minute | 1 hour | Amount of time spent in one TX look direction. Active transmission time is $T_{TX}=\delta_{TX}\,T_{TXdwell}$, where $\delta_{TX}$ is the TX duty cycle.
Repetition Period | $T_{rep}$, $T_{TXscan}$ | Continuous | 1 year | TX scans over $4\pi$ in time $T_{TXscan}$ in model for WFS. Continuous reception is unlikely due to average TX power limitations. EIRP, $T_{TXscan}$, and $T_{TXdwell}$ will be correlated. Could also use $N_{TXscan}=T_{TXscan}/T_{TXdwell}$ as a measure of repetition period.
Polarization fraction | $\eta_{pol}$ | 0 | 1 | 0=unpolarized, 1=completely polarized. Equals 1 for most modern receivers (Stokes I, full coverage), so this can generally be ignored in the analysis.
Normalized drift rate | $\dot{\nu}_{norm}$ | $-200$ nHz | $200$ nHz | $\dot{\nu}_{norm}=\dot{\nu}/\nu$. Doppler drift rate is due to TX-RX relative accelerations. The cause is astrophysical. Possibly compensated by ET.
### A.5 Conclusions on Haystack Functions
To sum up:
* •
Issues with the Wright haystack functions (A1) to (A5) were identified,
suggesting that the haystack fraction is unreliable as currently defined.
* •
A new paradigm was proposed which weights a desired FOM measure according to
the a priori joint pdf of the TX parameters. The prior pdfs may be interpreted
as an “ET behavior model” and specify which combinations of TX parameters are
likely or unlikely. This reduces much of the haystack integration to an
expected-value operation over the TX parameters. This approach leads to
reasonable scaling and removes TX parameter units from the haystack value. The
haystack can be readily interpreted as a volume integral of the FOM measure,
summed over all observations and averaged over the ensemble of TX parameters.
* •
A haystack function $N_{Det\\!-\\!survey}$ was defined for a SETI effort based
on the expected number of ET detections that the survey should generate
with certain assumed benchmark parameters like EIRP and $P_{civTX}$. We define
the reference ideal detection count to be unity for any set of TX parameters
(though of course we would like more), so the search fraction $\Phi_{Det}$ is
simply $N_{Det\\!-\\!survey}$.
More work is needed to evaluate the revised haystack fraction of past SETI
surveys. The hardest part may be establishing reasonable a priori pdfs over
the full range of possible TX parameters. There may be many arguments that
produce very different but justifiable pdfs. As a first step, it may be useful
to evaluate haystacks that apply to a limited range of TX parameters, e.g.
narrow-band signals with a limited range of TX bandwidths, to gain experience
before considering the full TX parameter space.
## Appendix B SETI Search with Limited Computation
We have defined the detection rate DPY assuming a set of spatial and waveform
parameters. Spatial parameters include range $d$ and steer direction
($\alpha,\delta$), where $\alpha$ = Right Ascension and $\delta$ = Declination.
The set of transmit parameters is suggested in Table 6, but for simplicity let
us assume $\Gamma_{TX}$ is limited to frequency $\nu$ and frequency rate
$\dot{\nu}$, as is typically done in narrow-band SETI searches. Ideally the
search process will span the range of these parameters. Given hardware
limitations and finite computation, the search ranges must be limited, and it
will take analysis and expert opinion to decide the best search dimensions and
limits.
To guide the choice of search ranges, consider the expected number of
detections $\overline{N_{Det}}$ for a survey with $N_{obs}$ observations over
all RX sites. Per (A16):
$\overline{N_{Det}}=P_{civTX}\,\sum_{i=1}^{N_{obs}}\;E_{[\Gamma_{TX}]}\left[\frac{T_{TXdwell}\,N_{STAR\\!-\\!obs}(i;EIRP_{0},\gamma_{RXi},\gamma_{TX})}{T_{TXscan}}\right]$
(B1)
The search is over discrete bins in frequency and frequency rate, so the
expected value is done discretely over two levels of summation:
$\overline{N_{Det}}=\frac{T_{TXdwell}\,P_{civTX}}{T_{TXscan}}\sum_{i=1}^{N_{obs}}\;\sum_{j=1}^{N_{F}}\;\sum_{k=1}^{N_{R}}N_{STAR}(\alpha_{i},\delta_{i},\nu_{j},\dot{\nu}_{k})\,p_{1}(\nu_{j})\,p_{2}(\nu_{j},\dot{\nu}_{k})$
(B2)
$p_{1}$ and $p_{2}$ are discrete a priori probabilities associated with
frequency bin $j$ and frequency-rate bin $k$. Assume these are related to
prior distributions as follows:
$p_{1}(\nu_{j})=f_{1}(\nu_{j})\;\Delta\nu$, with $f_{1}(\nu)$ the a priori
frequency density;
$p_{2}(\nu_{j},\dot{\nu}_{k})=f_{2}(\dot{\nu}_{k}/\nu_{j})\;\Delta\dot{\nu}$,
with $f_{2}(\dot{\nu}_{norm})$ the a priori normalized frequency-rate density,
$\dot{\nu}_{norm}=\dot{\nu}/\nu$,
where $\Delta\nu$ is the frequency bin width and
$\Delta\dot{\nu}\approx\Delta\nu/\tau$ (Siemion et al., 2013) is the
frequency-rate bin width. We assume that $\nu_{j}$ and $\dot{\nu}_{norm-k}$
(which is related to astrophysical accelerations (Sheikh et al., 2019)) are
mutually independent. It follows that:
$\overline{N_{Det}}=\frac{P_{civTX}\,AFOM\,AvgFOM^{3/2}\,EIRP^{3/2}}{3\,N_{TXscan}\,(8\pi
k_{B})^{3/2}}\sum_{i=1}^{N_{obs}}\rho_{STAR}(\alpha_{i},\delta_{i})\sum_{j=1}^{N_{F}}FFOM(\alpha_{i},\delta_{i},\nu_{j})\,f_{1}(\nu_{j})\sum_{k=1}^{N_{R}}f_{2}(\dot{\nu}_{k}/\nu_{j})\,\Delta\nu\,\Delta\dot{\nu}$
(B3)
We will have $N_{obs}N_{F}N_{R}$ space/frequency/frequency-rate bins to
compute over $T_{obs}$, and perhaps $N_{F}N_{R}$ frequency/frequency-rate bins
to compute in real time. The question: how can one best allocate coverage to
maximize $\overline{N_{Det}}$ within a computation budget? Clearly this is a
trade space; we need to examine each bin contribution and choose coverage so
as to maximize the above summation. Some notes:
* •
We should choose our observation sequence $(\alpha_{i},\delta_{i})$ based on
the $T_{sky}$-adjusted star density, as reflected by star density-FFOM product
$[\rho_{STAR}(\alpha_{i},\delta_{i})\,FFOM(\alpha_{i},\delta_{i},\nu_{j})]$.
(Note that FFOM is a function of $T_{sys}$ which is in turn a function of
$T_{sky}(\alpha_{i},\delta_{i})$.) The density
$\rho_{STAR}(\alpha_{i},\delta_{i})$ should be a representative value near
$d_{MAX}$.
* •
Assume $FFOM(\alpha_{i},\delta_{i},\nu_{j})$=$FFOM(\nu_{j})$ for now. The
frequency bin contribution is $FFOM(\nu_{j})\,f_{1}(\nu_{j})$, so FFOM is
scaled by the a priori frequency distribution $f_{1}(\nu)$. We have never
observed ET, so we can only hypothesize $f_{1}(\nu)$. Several schools of
thought:
* –
FFOM strategy: If one believes $f_{1}(\nu)$=$constant$, then we should choose
$N_{F}$ bins which best span the FFOM($\nu$) peak.
* –
Water Hole strategy: If $f_{1}(\nu)$ is bandpass near the “Water Hole”
(1420-1662 MHz), then FFOM($\nu$) is largely irrelevant, and we should choose
bins near those frequencies.
* –
Big straddle strategy: We could choose to cover from $\sim$100 MHz to the
water hole as a compromise. FFOM may vary over an order of magnitude in this
case, or less with aperture arrays depending on how station beamforming is
done.
* –
Otherwise, we could evaluate functions like a truncated power law distribution
($f_{1}(\nu)=k\nu^{\beta}$ over a certain band, with k a normalizing constant)
and examine the $FFOM(\nu_{j})\,f_{1}(\nu_{j})$ product. Since $\beta$ would
presumably need to be greater than 2 (implying a very strong bias toward high
frequencies) to equalize the downslope of FFOM above 2 GHz, it may be hard to
justify high frequencies based on detection rate arguments. We presume that ET
has concluded this also.
* •
The frequency-rate bin contribution is $f_{2}(\dot{\nu_{k}}/\nu_{j})$. There
are two differing strategies for choosing $N_{R}$:
* –
Span the expected rates due to astrophysics: A list of astrophysical phenomena
which would cause TX-RX relative accelerations and the resulting maximum drift
rates was explored by Sheikh et al. (2019), and subsequent work may estimate
the resulting $f_{2}(\dot{\nu}_{norm})$ distribution. One might guess this
will be a Gaussian distribution centered on Earth’s frequency rate which will
be truncated by the choice of $N_{R}$. We need to choose $N_{R}$ so the tail
areas are minimal. (Implicit in the earlier DPY derivation was that $N_{R}$
will be chosen large enough so as to cover virtually all possible cases.)
* –
Assume that ET will adjust frequency rate: If a civilization is sufficiently
sophisticated to be conducting Active SETI, it may choose to de-chirp its
transmission so that it will be received at a low drift rate in an appropriate
galactic frame of reference. This would mean that $N_{R}$ could be much lower
and search would be simplified.
* •
Since the frequency rate effect is proportional to frequency, we might best
choose $N_{R}$ to be proportional to frequency:
$N_{R}$=$N_{R}(\nu)\approx(\nu/\nu_{0})\,N_{R}(\nu_{0})$. This would imply a
trapezoidal search region in $\nu$ and $\dot{\nu}$ instead of a rectangular
search region. $\overline{N_{Det}}$ might be further enhanced by choosing an
arbitrary region shape, defined by ranking the bin contributions and taking
the best set that fits within the computation budget (a minimal greedy sketch
of this selection appears after this list).
* •
The selection of $\Delta\nu$ affects $N_{F}N_{R}$ dramatically. At a budgeted
level of computation, we may be willing to trade SNR (as influenced by
$AvgFOM(\Delta\nu)$) for more frequency or frequency-rate coverage so as to
maximize $\overline{N_{Det}}$.
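To make the ranking idea referenced above concrete, here is a minimal greedy sketch; the arrays contrib and cost and the per-bin cost model are illustrative assumptions, not quantities from the text.

```python
import numpy as np

def select_bins(contrib, cost, budget):
    """Greedy selection of (frequency, frequency-rate) bins within a compute
    budget. contrib[j, k] is the bin's term in the (B3) summation; cost[j, k]
    is its hypothetical compute cost."""
    order = np.argsort(-(contrib / cost).ravel())   # best value-per-cost first
    chosen, spent = [], 0.0
    for idx in order:
        j, k = np.unravel_index(idx, contrib.shape)
        if spent + cost[j, k] <= budget:
            chosen.append((j, k))
            spent += cost[j, k]
    return chosen   # an arbitrary-shaped search region in (nu, nu-dot)
```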
We can see from the above discussion that there can be multiple legitimate
arguments for the choice ET might make regarding transmit parameters. Each
will significantly affect prior distributions and therefore the optimum search
parameters.
We may also note that if ET has been conducting SETI for a long time (as might
be expected before an Active SETI effort were to begin), ET would recognize
classes of signals that would be easier to detect and distinguish from
astrophysical signals. This might favor very narrow-band signals that would
not normally occur in the natural world, and discourage broad-band signals
that might be confused with diffuse emitters, pulsars or fast radio bursts.
The prior densities for transmit bandwidth would be biased accordingly.
Likewise, if ET understands that lower frequencies should have a higher
detection rate according to FFOM, one might reasonably assume that ET would
favor lower frequencies, and the frequency prior density would be biased to
reflect this. Therefore, without better information, our own intuition may be
the best guide to ET’s likely choices for transmit parameters.
## Appendix C Additional References
References about systems listed in Tables 7 and 8 may be found below.
Table 7: Radio Astronomy System References Name | Reference
---|---
Parkes Observatory | Hobbs et al. (2020)
Green Bank Telescope | https://www.gb.nrao.edu/scienceDocs/GBTpg.pdf
Parkes Multibeam | Staveley-Smith et al. (1996), Chippendale et al. (2016)
GBT FLAG PAF | Rajwade et al. (2017), Pingel & Pisano (2018)
Allen Telescope Array | Welch et al. (2009)
Giant Metrewave RT | Gupta et al. (2017)
Arecibo Telescope | https://www.naic.edu/ao/scientist-user-portal/astronomy/astronomer-guide
Jansky Very Large Array | https://library.nrao.edu/evla.shtml, https://library.nrao.edu/public/memos/evla/EVLAM_195.pdf
FAST 500m Aperture | Nan et al. (2011)
Westerbork APERTIF | Oosterloo et al. (2010)
MeerKAT 64 | Lehmensiek & Theron (2014), Jonas (2018)
ASKAP | https://www.atnf.csiro.au/projects/askap/specs.html, SKAO (2016), McConnell et al. (2020)
MeerKAT Extension 84 | Hurter & Kotzé (2020)
CHORD | Vanderlinde et al. (2019)
Next Generation VLA | Selina (2019)
SKA1 Mid (197 Dish) | Dewdney (2015), McPherson et al. (2018), Braun et al. (2019), SKAO (2016)
L-Band Array of Small Arrays | Lynch et al. (2018)
Next Gen Arecibo Telescope | Roshi (2021)
DSA-2000 | Hallinan et al. (2019)
SKA2 Mid Dish | Braun et al. (2019)
SKA2 Mid MFAA | Torchinsky et al. (2017), Gunst et al. (2020)
CHIME | Amiri et al. (2018)
HIRAX | Newburgh et al. (2016)
Murchison Widefield Array 2 | Wayth et al. (2018), Tingay et al. (2013), Tremblay & Tingay (2020)
SKA1-Low | Dewdney (2015), McPherson et al. (2018), Braun et al. (2019)
SKA2-Low | Braun et al. (2019)
Table 8: EIRP References System | Reference
---|---
GPS | Steigenberger et al. (2019); Wang et al. (2018)
FM Radio | https://www.fcc.gov/media/radio/fm-station-classes
DTV | https://data.fcc.gov/download/incentive-auctions/OET-69/Baseline_Data_and_Maps_2013July.pdf
ASR-9 ATC Radar | Weber (2000)
Intelsat | Intelsat (2007), https://www.intelsat.com/fleetmaps/
Arecibo Radar | Taylor et al. (2016); Hagen (2001), http://www.naic.edu/aisr/sas/transmitter/trans-home.html,
| http://www.naic.edu/~nolan/radar/
|
Further author information: (Send correspondence to M.G.)
M.G.: E-mail: <EMAIL_ADDRESS>
# An End-to-end Deep Learning Approach for Landmark Detection and Matching in
Medical Images
Monika Grewal Life Sciences & Health Research Group, Centrum Wiskunde &
Informatica, P.O. Box 94079, 1090 GB Amsterdam, The Netherlands Timo M. Deist
Life Sciences & Health Research Group, Centrum Wiskunde & Informatica, P.O.
Box 94079, 1090 GB Amsterdam, The Netherlands Jan Wiersma Department of
Radiation Oncology, Amsterdam UMC, University of Amsterdam, P.O. Box 22660,
1100 DD Amsterdam, The Netherlands Peter A. N. Bosman Life Sciences & Health
Research Group, Centrum Wiskunde & Informatica, P.O. Box 94079, 1090 GB
Amsterdam, The Netherlands Faculty of Electrical Engineering, Mathematics and
Computer Science, Delft University of Technology, P.O. Box 5, 2600 AA Delft,
The Netherlands Tanja Alderliesten
###### Abstract
Anatomical landmark correspondences in medical images can provide additional
guidance information for the alignment of two images, which, in turn, is
crucial for many medical applications. However, manual landmark annotation is
labor-intensive. Therefore, we propose an end-to-end deep learning approach to
automatically detect landmark correspondences in pairs of two-dimensional (2D)
images. Our approach consists of a Siamese neural network, which is trained to
identify salient locations in images as landmarks and predict matching
probabilities for landmark pairs from two different images. We trained our
approach on 2D transverse slices from 168 lower abdominal Computed Tomography
(CT) scans. We tested the approach on 22,206 pairs of 2D slices with varying
levels of intensity, affine, and elastic transformations. The proposed
approach finds an average of 639, 466, and 370 landmark matches per image pair
for intensity, affine, and elastic transformations, respectively, with spatial
matching errors of at most 1 mm. Further, more than 99% of the landmark pairs
are within a spatial matching error of 2 mm, 4 mm, and 8 mm for image pairs
with intensity, affine, and elastic transformations, respectively. To
investigate the utility of our developed approach in a clinical setting, we
also tested our approach on pairs of transverse slices selected from follow-up
CT scans of three patients. Visual inspection of the results revealed landmark
matches in both bony anatomical regions as well as in soft tissues lacking
prominent intensity gradients.
###### keywords:
end-to-end, landmark detection, CT, deep learning, deformable image
registration
## 1 INTRODUCTION
Deformable Image Registration (DIR) can be extremely valuable in work-flows
related to image-guided diagnostics and treatment planning. However, DIR in
medical imaging can be challenging due to large anatomical variations between
images. This is particularly the case in the lower abdomen, where internal
structures can undergo large deformations between two scans of a patient due
to physical conditions like presence of gas pockets and bladder filling. Such
scenarios are particularly challenging for intensity based registration, as
there are many local optima to overcome. Landmark correspondences between
images can provide additional guidance information to the DIR methods[1, 2]
and increase the probability of finding the right transformation by adding
landmark matches as an additional constraint or objective in the optimization.
Since the manual annotation of anatomical landmarks is labor-intensive and
requires expertise, developing methods for finding landmark correspondences
automatically has great potential benefits.
The existing methods[3, 4, 5, 6, 7] for obtaining landmark correspondences in
medical images are based on large and time-consuming pipelines that involve
identifying landmark locations followed by matching local feature
descriptors[8] within a restricted neighborhood. These methods rely upon
multiple pre- and post-processing steps, multi-resolution search, and manual
checking to achieve robustness; each step adding more heuristics and empirical
hyperparameters to an already complex pipeline. Further, existing methods for
landmark detection that restrict the definition of landmarks to certain
intensity gradient patterns specific to the underlying data set or anatomical
region may not be easily adaptable to other contexts [9]. Generalizing the
definition of landmarks and reducing the number of heuristics would allow for
faster adaptation of automated methods for different clinical settings. In
addition, faster execution times for landmark detection and matching could
benefit their clinical application.
Recently, deep Convolutional Neural Networks (CNNs) have shown promising
results for classification and segmentation tasks in medical imaging due to
their capability of learning discriminant feature descriptors from raw images
[10, 11, 12]. There exist a few deep learning approaches for finding landmarks
in medical images [13, 14]. However, in these approaches a neural network is
trained in a supervised manner to learn a small number of manually annotated
landmarks. It is to be noted that a high density of landmark correspondences
is desirable to effectively provide additional guidance to the DIR methods. In
a supervised setting, it means annotating thousands of landmarks per CT scan,
which is intractable in terms of required manual efforts. On the other hand,
many deep learning approaches have been developed for automatically finding
object landmarks in natural images [15, 16, 17, 18] that do not require manual
annotations. Some of these approaches focus on discovering a limited number of
landmarks in an image dataset, whereas others either fine-tune a pre-trained
network or make use of incremental training in a self-supervised fashion.
Our proposed approach builds on the above-mentioned approaches developed for
natural images and is tailored to the specific requirements of medical images.
We propose a two-headed Siamese neural network that, given a pair of images,
simultaneously predicts the landmarks and their feature descriptors for each
image. These are then sent to another module
to predict their matching probabilities. We train the neural network from
scratch and gradients are back-propagated from end-to-end. To the best of our
knowledge, this is the first endeavour to develop an end-to-end deep learning
approach for finding landmark correspondences in medical images. Our approach
has the following distinct advantages compared to existing methods for finding
landmark correspondences:
* ·
Our approach is end-to-end deep learning based; therefore, the need for data
pre- and post-processing during inference is avoided. In addition, the
proposed approach is faster at run-time and has fewer hyperparameters than
traditional approaches.
* ·
We do not impose any prior on the definition of a landmark in an image.
Instead, we train the network in a way that the landmarks represent salient
regions in the image that can be found repeatedly despite potential intensity
variations, and deformations.
* ·
The proposed approach does not require manual annotations for training and
learns from data in a self-supervised manner.
* ·
Our approach improves over the existing approaches for natural images by
avoiding the need for pre-training, or incremental fine-tuning of the neural
network.
## 2 MATERIALS AND METHODS
### 2.1 Data
In total 222 lower abdominal Computed Tomography (CT) scans of female patients
acquired for radiation treatment planning purposes were retrospectively
included: 168 scans (24,923 two-dimensional (2D) slices) were used for
training and 54 scans (7,402 2D slices) were used for testing. For a separate
set of three patients, one original scan along with a follow-up CT scan was
included. The scans of these three patients were used for testing the approach
in a clinical setting. All CT scans had an in-plane resolution from 0.91 mm
$\times$ 0.91 mm to 1.31 mm $\times$ 1.31 mm. All the 2D slices were resampled
to 1 mm $\times$ 1 mm in-plane resolution.
### 2.2 Approach
Figure 1: Schematic representation of our approach. The weights are shared
between two branches of the Siamese neural network. The transformation is
required only during training for calculating the ground truths. Abbreviations
of the data input and output at various stages follow the description in the
text.
In Figure 1, the different modules of our approach are illustrated along with
the data flow between them. Our approach comprises a Siamese architecture
consisting of CNN branches with shared weights. The outputs of the CNN
branches are sent to a module named Sampling Layer followed by another module
named Feature Descriptor Matching Module. The network takes two images $I_{1}$
and $I_{2}$ as inputs and predicts $K_{1}$ and $K_{2}$ landmarks in $I_{1}$
and $I_{2}$, respectively. In addition, the network predicts matching
probabilities ($\hat{c}_{i,j}$) for each landmark $i\in\\{1,2,...,K_{1}\\}$ in
$I_{1}$ to a landmark $j\in\\{1,2,...,K_{2}\\}$ in $I_{2}$. In the following
paragraphs, a description of each module is provided.
#### 2.2.1 CNN branches
The CNN branches of the Siamese neural network have shared weights and consist
of an encoder-decoder type network similar to the U-Net[10] architecture. The
only difference from the original implementation is that the number of
convolutional filters in each layer is reduced by a factor of four to avoid
overfitting. The implemented architecture contains 16, 32, 64, 128, and 256
convolutional filters in successive downsampling blocks respectively. The CNN
branches give two outputs for each input image: a landmark probability map,
and feature descriptors. The landmark probability map is computed at the end
of the upsampling path after applying the sigmoid non-linearity and the
feature descriptors are computed by concatenation of feature maps from the
last two downsampling blocks. The feature maps from different downsampling
blocks intrinsically allow for feature matching at multiple resolutions and
abstraction levels.
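For concreteness, a minimal PyTorch sketch of one CNN branch follows. It reproduces the stated filter counts (16 through 256) and the two outputs, but layer details such as kernel sizes, the upsampling mode, and the assumption that image sides are divisible by 16 are ours, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())

class Branch(nn.Module):
    """One Siamese branch: a slim U-Net with 16-256 filters, returning a
    landmark probability map and multi-resolution feature descriptors."""
    def __init__(self, widths=(16, 32, 64, 128, 256)):
        super().__init__()
        self.enc = nn.ModuleList([block(1 if i == 0 else widths[i - 1], w)
                                  for i, w in enumerate(widths)])
        self.dec = nn.ModuleList([block(widths[i] + widths[i - 1], widths[i - 1])
                                  for i in range(len(widths) - 1, 0, -1)])
        self.head = nn.Conv2d(widths[0], 1, 1)

    def forward(self, x):                     # x: (B, 1, H, W), H, W % 16 == 0
        feats = []
        for i, enc in enumerate(self.enc):
            x = enc(x)
            feats.append(x)
            if i < len(self.enc) - 1:
                x = F.max_pool2d(x, 2)
        for dec, skip in zip(self.dec, feats[-2::-1]):
            x = dec(torch.cat([F.interpolate(x, scale_factor=2.0), skip], dim=1))
        prob = torch.sigmoid(self.head(x))    # landmark probability map
        size = prob.shape[2:]                 # descriptors: last two encoder blocks
        desc = torch.cat([F.interpolate(feats[-1], size),
                          F.interpolate(feats[-2], size)], dim=1)
        return prob, desc
```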
#### 2.2.2 Sampling Layer
The sampling layer is a parameter-free module of the network. It performs the
following tasks:
1. 1.
It samples $K_{1}$ and $K_{2}$ landmark locations in $I_{1}$ and $I_{2}$,
respectively, which correspond to the highest probability score locations in
the predicted landmark probability maps.
2. 2.
It extracts predicted landmark probabilities $\hat{p}^{I_{1}}_{i}$, and
$\hat{p}^{I_{2}}_{j}$ corresponding to $K_{1}$ and $K_{2}$ locations in
landmark probability maps of image $I_{1}$ and $I_{2}$.
3. 3.
It extracts feature descriptors $f^{I_{1}}_{i}$ and $f^{I_{2}}_{j}$
corresponding to the sampled landmark locations in $I_{1}$ and $I_{2}$,
respectively, and creates feature descriptor pairs
$(f^{I_{1}}_{i},f^{I_{2}}_{j})$ for each $i\in\\{1,2,...,K_{1}\\}$ and
$j\in\\{1,2,...,K_{2}\\}$.
4. 4.
During training, it generates the ground truths for landmark probabilities and
feature descriptor matching probabilities on-the-fly as mentioned in Georgakis
et al [17]. Briefly, the sampled landmark locations of $I_{2}$ are projected
onto $I_{1}$ based on the known transformation between the images. A landmark
location $i$ in $I_{1}$ is decided to be matching to a landmark location $j$
in $I_{2}$ if the Euclidean distance between $i$ and the projection of $j$ on
image $I_{1}$ is less than a predefined pixel threshold ($thresh_{pixels}$).
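A minimal sketch of step 4, assuming the known transformation is available as a callable T that maps $I_{2}$ pixel coordinates into the $I_{1}$ frame:

```python
import numpy as np

def ground_truth_matches(kp1, kp2, T, thresh_pixels=2):
    """kp1: (K1, 2), kp2: (K2, 2) pixel coordinates; T maps I2 coordinates
    into the I1 frame. Returns binary ground-truth match labels (K1, K2)."""
    proj = T(kp2)                                          # (K2, 2) in I1
    d = np.linalg.norm(kp1[:, None, :] - proj[None, :, :], axis=-1)
    return (d < thresh_pixels).astype(np.float32)
```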
#### 2.2.3 Feature Descriptor Matching Module
All the feature descriptor pairs $(f^{I_{1}}_{i},f^{I_{2}}_{j})$ are fed to
the feature descriptor matching module. The feature descriptor matching module
consists of a single fully connected layer that predicts the matching
probability for each feature descriptor pair.
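As a sketch, assuming descriptor pairs are combined by concatenation (our choice; the pairing operation is not specified in the text):

```python
import torch
import torch.nn as nn

class MatchingModule(nn.Module):
    """Single fully connected layer predicting a matching probability for
    each (concatenated) feature descriptor pair."""
    def __init__(self, d):
        super().__init__()
        self.fc = nn.Linear(2 * d, 1)

    def forward(self, f_i, f_j):                  # each of shape (N, d)
        pair = torch.cat([f_i, f_j], dim=-1)
        return torch.sigmoid(self.fc(pair)).squeeze(-1)
```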
### 2.3 Training
Training image pairs were generated on-the-fly by sampling a reference image
randomly and generating the target image by transforming the reference image
with a known transformation (randomly simulated brightness or contrast jitter,
rotation, scaling, shearing, or elastic transformation). During training, the
ground truths for landmark probabilities and feature descriptor matching
probabilities are generated in the sampling layer as described above. We
trained the network by minimizing a multi-task loss defined as follows:
$Loss={LandmarkProbabilityLoss}_{I_{1}}+{LandmarkProbabilityLoss}_{I_{2}}+{DescriptorMatchingLoss}$
(1)
The $LandmarkProbabilityLoss_{I_{n}}$ for the probabilities of landmarks in
image $I_{n},n\in\\{1,2\\}$ is defined as:
${LandmarkProbabilityLoss}_{I_{n}}=\frac{1}{K_{n}}\sum_{i=1}^{K_{n}}\left((1-\hat{p}^{I_{n}}_{i})+CrossEntropy(\hat{p}^{I_{n}}_{i},p^{I_{n}}_{i})\right)$
(2)
where $CrossEntropy$ is the cross entropy loss between predicted landmark
probabilities $\hat{p}^{I_{n}}_{i}$ and ground truths $p^{I_{n}}_{i}$. The
term $(1-\hat{p}^{I_{n}}_{i})$ in (2) encourages high probability scores at
all the sampled landmark locations, whereas the cross entropy loss term forces
low probability scores at the landmark locations that do not have a
correspondence in the other image. As a consequence, the network is forced to
predict high landmark probabilities only at the salient locations that have
correspondence in the other image as well.
Hinge loss is widely used for learning descriptors that discriminate
between matching and non-matching landmark pairs. We observed that a positive
margin for the matching pairs in the hinge loss encourages the network to
focus on hard positive examples (i.e., non-trivial landmark matches).
Therefore, we defined $DescriptorMatchingLoss$ (equation 3) as a linear
combination of hinge loss with a positive margin $m_{pos}$ on the L2-norm of
feature descriptor pairs and cross entropy loss on matching probabilities
predicted by the feature descriptor matching module.
$\begin{split}DescriptorMatchingLoss&=\sum_{i=1,j=1}^{K_{1},K_{2}}\left(\frac{c_{i,j}max(0,||f^{I_{1}}_{i}-f^{I_{2}}_{j}||^{2}-m_{pos})}{K_{pos}}\right.\\\
&+\frac{(1-c_{i,j})max(0,m_{neg}-||f^{I_{1}}_{i}-f^{I_{2}}_{j}||^{2})}{K_{neg}}\\\
&+\left.\frac{WeightedCrossEntropy(\hat{c}_{i,j},c_{i,j})}{(K_{pos}+K_{neg})}\right)\end{split}$
(3)
where $\hat{c}_{i,j}$, and $c_{i,j}$ are the predicted and the ground truth
matching probabilities, respectively, for the feature descriptor pair
$(f^{I_{1}}_{i},f^{I_{2}}_{j})$; $K_{pos}$ and $K_{neg}$ are the number of
matching (positive class) and non-matching (negative class) feature descriptor
pairs; $m_{pos}$ and $m_{neg}$ are the margins for the L2-norm of matching and
non-matching feature descriptor pairs. $WeightedCrossEntropy$ is the binary
cross entropy loss where the loss corresponding to positive class is weighted
by the frequency of negative examples and vice versa. The gradients are back-
propagated from end-to-end as indicated by the dashed arrows in Figure 1.
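A sketch of equation (3) in PyTorch follows; the tensor shapes and the guards against empty classes are our assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def descriptor_matching_loss(f1, f2, c_hat, c, m_pos=0.1, m_neg=1.0):
    """f1: (K1, D), f2: (K2, D) descriptors; c_hat, c: (K1, K2) predicted and
    binary ground-truth matching probabilities."""
    d2 = ((f1[:, None, :] - f2[None, :, :]) ** 2).sum(-1)    # squared L2 norm
    k_pos = c.sum().clamp(min=1.0)
    k_neg = (1 - c).sum().clamp(min=1.0)
    hinge = (c * F.relu(d2 - m_pos)).sum() / k_pos \
          + ((1 - c) * F.relu(m_neg - d2)).sum() / k_neg
    # positive terms weighted by the frequency of negatives, and vice versa
    w = c * k_neg / (k_pos + k_neg) + (1 - c) * k_pos / (k_pos + k_neg)
    bce = F.binary_cross_entropy(c_hat, c, weight=w, reduction='sum')
    return hinge + bce / (k_pos + k_neg)
```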
### 2.4 Constraining Landmark Locations
A naive implementation of the approach may find all the landmarks clustered in
a single anatomical region, which is not desirable. Therefore, to learn
landmarks in all anatomical regions during training, we sample the landmarks
on a coarse grid in the sampling layer, i.e., in each $8\times 8$ pixel
section of the grid, only one landmark location with the maximum landmark
probability is sampled.
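This grid sampling can be sketched with max pooling over the probability map; using pooling indices is an implementation convenience we assume here, not necessarily the authors' choice.

```python
import torch
import torch.nn.functional as F

def grid_sample_landmarks(prob, cell=8):
    """prob: (1, 1, H, W) landmark probability map; returns (N, 2) pixel
    coordinates, one candidate (the cell maximum) per cell x cell section."""
    _, idx = F.max_pool2d(prob, cell, return_indices=True)
    flat = idx.flatten()
    ys = torch.div(flat, prob.shape[-1], rounding_mode='floor')
    xs = flat % prob.shape[-1]
    return torch.stack([ys, xs], dim=1)   # keep top-K of these during training
```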
Another challenge in the CT scan imaging data comes from a large number of
pixels belonging to the background. Traditionally, the image is cropped to the
center to avoid prediction of landmarks in the background or on the patient
table. However, this strategy requires an additional pre-processing step
during inference. To avoid this, we computed a valid mask for each image,
which contained the value 1 at the location of body pixels and 0 elsewhere.
The valid mask was generated by image binarization using intensity
thresholding and removing small connected components in the binarized image.
The network is trained to predict high landmark probabilities as well as
feature descriptor matching probabilities only in the matching locations that
correspond to a value of 1 in the valid mask. This allows the network to learn
a content-based prior on the landmark locations and avoids the need for image
pre-processing during inference.
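A minimal valid-mask sketch, assuming scikit-image; the HU threshold and component-size cutoff are illustrative, as the exact values are not given here.

```python
import numpy as np
from skimage import measure, morphology

def valid_mask(ct_slice, hu_thresh=-300, min_size=1000):
    """Return 1 at body pixels and 0 elsewhere; threshold and size cutoff
    are illustrative values."""
    body = ct_slice > hu_thresh                    # intensity thresholding
    labels = measure.label(body)                   # connected components
    body = morphology.remove_small_objects(labels, min_size=min_size) > 0
    return body.astype(np.uint8)
```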
### 2.5 Inference
During inference, only the locations in $I_{1}$ and $I_{2}$ with landmark
probabilities above a threshold ($thresh_{landmark}$) are considered. Further,
landmark pairs from different images are only matched if their matching is
inverse consistent. Suppose, locations $i\in\\{1,..,K_{1}\\}$ in $I_{1}$ and
locations $j\in\\{1,..,K_{2}\\}$ in $I_{2}$ have landmark probabilities above
$thresh_{landmark}$. A pair $(i^{\ast},j^{\ast})$ is considered matching if
there is no other pair $(i^{\ast},j^{\prime})$ where
$j^{\prime}\in\\{1,..,K_{2}\\}$ or $(i^{\prime},j^{\ast})$ where
$i^{\prime}\in\\{1,..,K_{1}\\}$ with higher descriptor matching probabilities
or lower L2-norms for their feature descriptor pairs
$(f_{i^{\ast}}^{I_{1}},f_{j^{\prime}}^{I_{2}})$ or
$(f_{i^{\prime}}^{I_{1}},f_{j^{\ast}}^{I_{2}})$.
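The inverse-consistency check can be sketched as a mutual-nearest-neighbor filter on descriptor L2 norms (the predicted matching probabilities could be used analogously):

```python
import numpy as np

def mutual_matches(f1, f2):
    """f1: (K1, D), f2: (K2, D) descriptors of landmarks above
    thresh_landmark. Keep (i, j) only if each is the other's best match."""
    d = np.linalg.norm(f1[:, None, :] - f2[None, :, :], axis=-1)   # (K1, K2)
    best12 = d.argmin(axis=1)   # best j for each i
    best21 = d.argmin(axis=0)   # best i for each j
    return [(i, j) for i, j in enumerate(best12) if best21[j] == i]
```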
### 2.6 Implementation Details
We implemented our approach using PyTorch[19]. We trained the network for 50
epochs using the Adam[20] optimizer with learning rate $10^{-3}$ and a weight
decay of $10^{-4}$. The training was done with a batchsize of 4 and took 28
GPU (NVIDIA GeForce RTX 2080 Ti) hours. To allow for batching, a constant $K$
(set to 400) landmarks were sampled from all the images. The threshold for
Euclidean distance while generating the ground truth ($thresh_{pixels}$) was 2
pixels. The margin for the L2-norm of matching feature descriptors ($m_{pos}$)
was set to 0.1 and the margin for the L2-norm of non-matching pairs
($m_{neg}$) was set to 1. During inference, $thresh_{landmark}$ = 0.5 was
used.
The empirical values for the hyperparameters were decided based on experience
in preliminary experiments. For example, the number of landmarks to be
sampled during training ($K$) was decided such that the entire image was
covered with sufficient landmark density, which was inspected visually.
Similarly, the decision for $thresh_{pixels}$ was motivated by the fact that a
threshold less than 2 pixels did not yield any matching landmarks in the first
few iterations of the training and hence the network could not be trained. We
initially trained the network with default values of $m_{pos}$, and $m_{neg}$
($m_{pos}=0$, and $m_{neg}=1$). However, we noticed on the validation set that
all the predicted landmark pairs were clustered in regions of no deformation.
To avoid this behaviour, we trained the network with $m_{pos}=0.1$ and
$m_{pos}=0.2$ so that the gradients were not affected by the hinge loss
corresponding to easy landmark matches. The final results are reported
corresponding to the run with $m_{pos}=0.1$ as it had a better trade off
between number of landmarks per image pair and difficulty of landmark
locations. The value of $thresh_{landmark}$ was chosen to give the best trade
off between the number of landmarks per image pair and the spatial matching
error on the validation set.
## 3 Experiments
### 3.1 Baseline
Scale Invariant Feature Transform (SIFT[21]) based keypoint detectors and
feature descriptors are prevalent approaches used in both natural image
analysis as well as in medical image analysis [6]. Therefore, we used the
OpenCV[22] implementation of SIFT as the baseline approach for comparison. We
used two matching strategies for SIFT: a) brute-force matching with inverse
consistency (similar to our approach, we refer to this approach as SIFT-
InverseConsistency), b) brute-force matching with ratio test (as described in
the original paper[21], we refer to this approach as SIFT-RatioTest). Default
values provided in the OpenCV implementation were used for all other
hyperparameters.
### 3.2 Datasets
The performance is evaluated on two test sets. First, for quantitative
evaluation, we transformed all 7,402 testing images from 54 CT scans with
three different types of transformations corresponding to intensity (jitter in
pixel intensities = $\pm 20\%$ maximum intensity), affine (pixel displacement:
median = 29 mm, Inter Quartile Range (IQR) = 14 mm - 51 mm), and elastic
transformations (pixel displacement: median = 12 mm, IQR = 9 mm - 15 mm),
respectively. Elastic transformations were generated by deforming the original
image according to a deformation vector field representing randomly-generated
2D Gaussian deformations. The extent of transformations was decided such that
the intensity variations and the displacement of pixels represented the
typical variations in thoracic and abdominal CT scan images [23, 24]. This
resulted in three sets of 7,402 2D image pairs (total 22,206 pairs).
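For illustration, a random 2D Gaussian deformation of the kind described can be generated as follows; the smoothing scale and displacement magnitude are assumptions rather than the paper's exact parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_transform(img, alpha=300.0, sigma=20.0, rng=None):
    """Deform img with a smoothed random displacement field; alpha scales the
    displacement magnitude and sigma the Gaussian smoothing (illustrative)."""
    rng = rng or np.random.default_rng()
    h, w = img.shape
    dx = gaussian_filter(rng.standard_normal((h, w)), sigma) * alpha
    dy = gaussian_filter(rng.standard_normal((h, w)), sigma) * alpha
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
    return map_coordinates(img, [ys + dy, xs + dx], order=1, mode='nearest')
```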
Second, to test the generalizability of our approach in a clinical setting,
image pairs were taken from two CT scans of the same patient but acquired on
different days. The two scans were aligned with each other using affine
registration in the SimpleITK [25] package. This process was repeated for
three patients.
### 3.3 Evaluation
For quantitative evaluation, we projected the predicted landmarks in the
target images to the reference images and calculated the Euclidean distance to
their corresponding matches in the reference images. We report the cumulative
distribution of landmark pairs with respect to the Euclidean distance between
them.
The performance of our approach on clinical data was assessed visually. We
show the predicted results on four transverse slices belonging to different
anatomical regions. To visually trace the predicted correspondences of
landmarks, the colors of the landmarks in both the images vary according to
their location in the original CT slice. Similarly colored dots between slices
from original and follow-up image represent matched landmarks.
## 4 RESULTS
Table 1: Description of predicted landmark matches. Median number of landmark matches per image pair with Inter Quartile Range (IQR) in parentheses are provided together with the spatial matching error. The entries in bold represent the best value among all approaches. Transformations | Intensity | Affine | Elastic
---|---|---|---
No. of landmarks | Proposed Approach | 639 (547 - 729) | 466 (391 - 555) | 370 (293 - 452)
SIFT - InverseConsistency | 711 (594 - 862) | 610 (509 - 749) | 542 (450 - 670)
SIFT - RatioTest | 698 (578 - 849) | 520 (426 - 663) | 418 (330 - 541)
Spatial matching error (mm) | Proposed Approach | 0.0 (0.0 - 0.0) | 1.0 (0.0 - 1.4) | 1.0 (1.0 - 1.4)
SIFT - InverseConsistency | 1.0 (1.0 - 1.4) | 1.0 (1.0 - 1.4) | 1.0 (1.0 - 2.0)
SIFT - RatioTest | 1.0 (1.0 - 1.4) | 1.0 (1.0 - 1.4) | 1.0 (1.0 - 1.4)
The inference time of our approach per 2D image pair is within 10 seconds on a
modern CPU without any parallelization. On the GPU the inference time is
$\sim$20 milliseconds. The model predicted a median of 639 (IQR = 547 - 729),
466 (IQR = 391 - 555), and 370 (IQR = 293 - 452) landmark matches per image
pair for intensity, affine, and elastic transformations, respectively.
### 4.1 Simulated Transformations
Table 1 describes the number of landmark matches per image pair and the
spatial matching error for both our approach and the two variants of SIFT.
Though our approach finds fewer landmarks per image pair than the two
variants of SIFT, the predicted landmarks have smaller spatial matching error
than the SIFT variants. Further, Figure 2 shows the cumulative distribution of
landmark pairs with respect to the Euclidean distance between them. All the
approaches are able to find more than 90% of landmark matches within 2 mm
error for intensity transformations. Predicting landmark correspondences under
affine and elastic transformations is considerably more difficult; this can
also be seen in the worse performance of all approaches. However, our approach
is still able to find more than 99% of landmark matches within a spatial
matching error of 4 mm and 8 mm, respectively for affine and elastic
transformations. In contrast, a noticeable percentage (about 2% for affine
transformations and 3% for elastic transformations) of landmarks detected by
SIFT-RatioTest are wrongly matched with landmarks from far-apart regions (more
than 64 mm). It should be noted that if landmark matches with such high
inaccuracies are used for providing guidance to a registration method, it may
have a deteriorating effect on the registration if the optimizer is not
sufficiently regularized.
Figure 2: Cumulative distribution of landmarks. The cross-hairs in (b) and
(c) correspond to the percentile of landmarks in SIFT-RatioTest at 64 mm.
For visual comparison, the landmark correspondences in pairs of original and
elastic transformed images are shown in Figure 3 (rows a-b) for our approach
as well as for SIFT. As can be seen, the cases of mismatch in predictions from
our approach (i.e., the number of landmarks in transformed slices not
following the color gradient in the original slice) are rather scarce in
comparison to the baseline approaches. Another interesting point to note is
the difference in the landmark locations from our approach and the two
baseline approaches. Since SIFT is designed to predict landmarks at locations
of local extrema, the landmark matches are concentrated on the edges in the
images. Our approach, however, predicts matches in soft tissue regions as
well. Further inspection reveals that our approach predicts a considerable
number of landmark matches even in the deformed regions in contrast to the
baseline approaches. The capability to establish landmark correspondences in
the soft tissues and deformed regions is important because DIR methods can
especially benefit from guidance information in these regions.
### 4.2 Clinical Transformations
Rows c-f in Figure 3 show landmark correspondences in pairs of transverse
slices corresponding to the lower abdominal region in the original and follow-
up CT for our approach as well as for SIFT. As can be seen, the original and
follow-up slices have large differences in local appearance of structures
owing to contrast agent, bladder filling, presence or absence of gas pockets,
which was not part of the training procedure. It is notable that the model is
able to find considerable landmark matches in image pairs despite these
changes in local appearance. Moreover, the spatial matching error of landmarks
seems similar to that of images with simulated transformations, in contrast to
the baseline approach SIFT-InverseConsistency. Further, SIFT-RatioTest
predicts fewer mismatched landmarks compared to SIFT-InverseConsistency, but
this is achieved at the cost of a large decrease in the number of landmark
matches per image pair.
Figure 3: Landmark correspondences for pairs of different transverse slices
in abdominal CT scans. The landmark correspondences predicted by our approach
are shown in comparison with two variants of SIFT. Rows (a-b) show predictions
on pairs of original (left) and elastic transformed (right) slices. Rows (c-f)
show transverse slices taken from different anatomical regions. The slices in
the original CT (left) are matched with a similar slice from a follow-up CT
scan (right) by affine registration.
## 5 DISCUSSION AND CONCLUSIONS
With the motivation of providing additional guidance information for DIR
methods in medical imaging, we developed an end-to-end deep learning approach
for the detection and matching of landmarks in an image pair. To the best of
our knowledge, this is the first approach that simultaneously learns landmark
locations as well as feature descriptors for establishing landmark
correspondences in medical imaging. While the final version of this manuscript
was being prepared, we came across a study on retinal images [26] whose
semi-supervised approach to landmark detection with a UNet architecture is
partly similar to ours. However, our approach learns not only the landmark
locations, but also the feature descriptors and the feature matching, such
that the entire pipeline for finding landmark correspondences can be replaced
by a neural network. Therefore, our approach can be seen as an essential
extension of the mentioned approach.
Our proposed approach does not require any expert annotation or prior
knowledge regarding the appearance of landmarks in the learning process.
Instead, it learns landmarks based on their distinctiveness in feature space
despite local transformations. Such a definition of landmarks is generic
enough to be applicable to any type of image, and sufficient for the
underlying application of establishing correspondences between image pairs.
Further, in
contrast to the traditional unsupervised approaches for landmark detection in
medical imaging, the proposed approach does not require any pre- or post-
processing steps, and has fewer hyperparameters.
The main challenge for intensity-based DIR methods is to overcome local optima
caused by multiple low-contrast regions in the image, which result in image
folding and unrealistic transformations in the registered image. It can be
speculated that the availability of landmark correspondences in low-contrast
image regions may prove beneficial for DIR methods. Moreover, uniform coverage
of the entire image is desirable for improved performance. Upon
visual inspection of the landmarks predicted by our approach, we observed that
our approach finds landmark correspondences not only in bony anatomical
regions but also in soft tissue regions lacking intensity gradients. Moreover,
a considerable density of landmarks (approximately 400 landmarks per image
pair) was observed despite the presence of intensity, affine, or elastic
transformations. Based on these observations, we are optimistic about the
potential added value of our approach to the DIR methods.
We validated our approach on images with simulated intensity, affine, and
elastic transformations. The quantitative results show low spatial matching
error of the landmarks predicted by our approach. Additionally, the results on
clinical data demonstrate the generalization capability of our approach. We
compared the performance of our approach with two variants of the widely used
SIFT keypoint detection approach. Our approach not only outperforms the
SIFT-based approaches in terms of matching error under simulated
transformations, but also finds more accurate matches in the clinical data. As
such, the results look quite promising. However, the current approach is
developed for 2D images, i.e., it overlooks the possibility of out-of-plane
correspondences between two CT scans, which are quite likely, especially in
lower abdominal regions. Extending the approach to 3D is therefore imperative
in order to investigate its benefits in providing additional guidance
information to DIR methods.
## 6 ACKNOWLEDGEMENTS
This research is part of the Open Technology Programme (project number 15586),
which is financed by the Dutch Research Council (NWO), Elekta, and Xomnia.
Further, the work is co-funded by the public-private partnership allowance for
top consortia for knowledge and innovation (TKIs) from the Ministry of
Economic Affairs.
## References
* [1] Alderliesten, T., Bosman, P. A. N., and Bel, A., “Getting the most out of additional guidance information in deformable image registration by leveraging multi-objective optimization,” in [Medical Imaging 2015: Image Processing ], Proc. SPIE 9413, 94131R, International Society for Optics and Photonics (2015).
* [2] Han, D., Gao, Y., Wu, G., Yap, P.-T., and Shen, D., “Robust anatomical landmark detection with application to MR brain image registration,” Comput. Med. Imaging Graph. 46, 277–290 (2015).
* [3] Yang, D., Zhang, M., Chang, X., Fu, Y., Liu, S., Li, H. H., Mutic, S., and Duan, Y., “A method to detect landmark pairs accurately between intra-patient volumetric medical images,” Med. Phys. 44(11), 5859–5872 (2017).
* [4] Werner, R., Duscha, C., Schmidt-Richberg, A., Ehrhardt, J., and Handels, H., “Assessing accuracy of non-linear registration in 4D image data using automatically detected landmark correspondences,” in [Medical Imaging 2013: Image Processing ], Proc. SPIE 8669, 86690Z, International Society for Optics and Photonics (2013).
* [5] Rühaak, J., Polzin, T., Heldmann, S., Simpson, I. J. A., Handels, H., Modersitzki, J., and Heinrich, M. P., “Estimation of large motion in lung CT by integrating regularized keypoint correspondences into dense deformable registration,” IEEE Trans. Med. Imaging 36(8), 1746–1757 (2017).
* [6] Ghassabi, Z., Shanbehzadeh, J., Sedaghat, A., and Fatemizadeh, E., “An efficient approach for robust multimodal retinal image registration based on UR-SIFT features and PIIFD descriptors,” EURASIP J. Image Video Process. 2013(1), 25 (2013).
* [7] Chen, J., Tian, J., Lee, N., Zheng, J., Smith, R. T., and Laine, A. F., “A partial intensity invariant feature descriptor for multimodal retinal image registration,” IEEE Trans. Biomed. Eng. 57(7), 1707–1718 (2010).
* [8] Guo, Y., Bennamoun, M., Sohel, F., Lu, M., Wan, J., and Kwok, N. M., “A comprehensive performance evaluation of 3D local feature descriptors,” Int. J. Comput. Vis. 116(1), 66–89 (2016).
* [9] Hervella, Á. S., Rouco, J., Novo, J., and Ortega, M., “Multimodal registration of retinal images using domain-specific landmarks and vessel enhancement,” Procedia Comput. Sci. 126, 97–104 (2018).
* [10] Ronneberger, O., Fischer, P., and Brox, T., “U-Net: Convolutional networks for biomedical image segmentation,” in [International Conference on Medical Image Computing and Computer-Assisted Intervention ], 234–241, Springer (2015).
* [11] Gulshan, V., Peng, L., Coram, M., Stumpe, M. C., Wu, D., Narayanaswamy, A., Venugopalan, S., Widner, K., Madams, T., Cuadros, J., Kim, R., Raman, R., Nelson, P. C., Mega, J. L., and Webster, D. R., “Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs,” JAMA 316(22), 2402–2410 (2016).
* [12] Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., and Thrun, S., “Dermatologist-level classification of skin cancer with deep neural networks,” Nature 542(7639), 115 (2017).
* [13] Bier, B., Unberath, M., Zaech, J.-N., Fotouhi, J., Armand, M., Osgood, G., Navab, N., and Maier, A., “X-ray-transform invariant anatomical landmark detection for pelvic trauma surgery,” in [International Conference on Medical Image Computing and Computer-Assisted Intervention ], 55–63, Springer (2018).
* [14] Tuysuzoglu, A., Tan, J., Eissa, K., Kiraly, A. P., Diallo, M., and Kamen, A., “Deep adversarial context-aware landmark detection for ultrasound imaging,” in [International Conference on Medical Image Computing and Computer-Assisted Intervention ], 151–158, Springer (2018).
* [15] Thewlis, J., Bilen, H., and Vedaldi, A., “Unsupervised learning of object landmarks by factorized spatial embeddings,” in [The IEEE International Conference on Computer Vision ], 5916–5925 (2017).
* [16] Zhang, Y., Guo, Y., Jin, Y., Luo, Y., He, Z., and Lee, H., “Unsupervised discovery of object landmarks as structural representations,” in [Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition ], 2694–2703 (2018).
* [17] Georgakis, G., Karanam, S., Wu, Z., Ernst, J., and Košecká, J., “End-to-end learning of keypoint detector and descriptor for pose invariant 3D matching,” in [Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition ], 1965–1973 (2018).
* [18] DeTone, D., Malisiewicz, T., and Rabinovich, A., “Superpoint: Self-supervised interest point detection and description,” in [Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops ], 224–236 (2018).
* [19] Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., and Lerer, A., “Automatic differentiation in PyTorch,” in [Advances in Neural Information Processing Systems-W ], (2017).
* [20] Kingma, D. P. and Ba, J., “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980 (2014).
* [21] Lowe, D. G., “Distinctive image features from scale-invariant keypoints,” Int. J. Comput. Vis. 60(2), 91–110 (2004).
* [22] Bradski, G., “The OpenCV library,” Dr. Dobb’s J. 25, 120–125 (2000).
* [23] Vásquez Osorio, E. M., Hoogeman, M. S., Méndez Romero, A., Wielopolski, P., Zolnay, A., and Heijmen, B. J. M., “Accurate CT/MR vessel-guided nonrigid registration of largely deformed livers,” Med. Phys. 39(5), 2463–2477 (2012).
* [24] Polzin, T., Rühaak, J., Werner, R., Strehlow, J., Heldmann, S., Handels, H., and Modersitzki, J., “Combining automatic landmark detection and variational methods for lung CT registration,” in [Fifth International Workshop on Pulmonary Image Analysis ], 85–96 (2013).
* [25] Lowekamp, B. C., Chen, D. T., Ibáñez, L., and Blezek, D., “The design of SimpleITK,” Frontiers in Neuroinformatics 7, 45 (2013).
* [26] Truong, P., Apostolopoulos, S., Mosinska, A., Stucky, S., Ciller, C., and Zanet, S. D., “Glampoints: Greedily learned accurate match points,” in [Proceedings of the IEEE International Conference on Computer Vision ], 10732–10741 (2019).
# Supervised Hypergraph Reconstruction
Yanbang Wang, Cornell University, Ithaca, New York, USA (<EMAIL_ADDRESS>) and
Jon Kleinberg, Cornell University, Ithaca, New York, USA (<EMAIL_ADDRESS>)
###### Abstract.
We study an issue commonly seen in graph data analysis: many real-world
complex systems involving high-order interactions are best encoded by
hypergraphs; however, their datasets often end up being published or studied
only in the form of their projections (with dyadic edges). To understand this
issue, we first establish a theoretical framework to characterize this issue’s
implications and worst-case scenarios. The analysis motivates our formulation
of the new task, supervised hypergraph reconstruction: reconstructing a real-
world hypergraph from its projected graph, with the help of some existing
knowledge of the application domain.
To reconstruct hypergraph data, we start by analyzing hyperedge distributions
in the projection, based on which we create a framework containing two
modules: (1) to handle the enormous search space of potential hyperedges, we
design a sampling strategy with efficacy guarantees that significantly narrows
the space to a smaller set of candidates; (2) to identify hyperedges from the
candidates, we further design a hyperedge classifier in two well-working
variants that capture structural features in the projection. Extensive
experiments validate our claims, approach, and extensions. Remarkably, our
approach outperforms all baselines by an order of magnitude in accuracy on
hard datasets. Our code and data can be downloaded from bit.ly/SHyRe.
hypergraphs, projection, reconstruction, supervised learning
## 1. Introduction
Graphs are a mathematical formalism that can describe many real-world complex
systems by recording which pairs of entities in the system are connected,
using the language of nodes and edges. Hypergraphs take this idea further by
extending the concept of edges from pairwise relations to sets of arbitrary
sizes, and thus admit a more expressive form of encoding for multilateral
relationships.
The long-standing problem that this paper addresses is that many datasets that
should have been encoded by hypergraphs ended up being released or studied
almost exclusively in graph format. There are many examples of this
phenomenon: co-authorship networks (Newman, 2004; Sarigöl et al., 2014) where
an edge encodes two authors’ collaboration on the same paper, social
interaction networks (Madan et al., 2011; Klimt and Yang, 2004) where an edge
encodes two persons’ interaction in a conversation or email, and protein-
protein interaction networks (Safari-Alighiarloo et al., 2014) where an edge
encodes two proteins’ co-occurrence in one biological process. Co-authorships,
conversations, and bio-processes all typically involve multiple authors,
people, and proteins.
Clearly, pairwise relations (i.e. graphs) contain less information than the
original hypergraphs they come from. Replacing hypergraphs with graphs is
known to bias how people perceive (Wolf et al., 2016), predict (Arya et al.,
2020), and exploit (Arya and Worring, 2018) real-world interconnected systems.
Despite the drawbacks, there are many real-world scenarios where the crucial
underlying hypergraphs are dropped and only projected graphs get studied and
released. These scenarios come in two cases:
* •
Unobservable: In some key scenarios, the available technology for data
collection can only detect pairwise relations. This is most common in social
science. For example, in (Madan et al., 2011; Ozella et al., 2021; Dai et al.,
2020), sensing methodologies to record physical proximity can be used to build
networks of face-to-face interaction: an interaction between two people is
recorded by thresholding distances between their body-worn sensors. There is
no way to directly record the multiple participants in each conversation by
sensors only. Recovering multi-party events in a purely decentralized, peer-
to-peer network system of communication is an important application.
* •
Unpublished: Even when they are technically observable, in practice the source
hypergraph datasets of many studies are never released or published. For
example, many of the most impactful studies analyzing coauthorships do not
make available a hypergraph version of their dataset (Newman, 2004; Sarigöl et
al., 2014). Many popular graph learning benchmarks, including arXiv-hepth
(Leskovec et al., 2005), ogbl-collab (Hu et al., 2020), and ogbn-product (Hu
et al., 2020), also do not provide their hypergraph originals. Yet the
underlying, unobserved hypergraphs contain important information about the
domain.
There do exist ways to recover a hypergraph, or at least a part of it, from
graphs in these domains. Traditionally, such a recovery process involves
laborious manual work, and only one hypergraph can be recovered at a time,
i.e. the recovery is not generalizable to similar hypergraphs and is hard to
carry out at scale. To recover unobservable social interactions, social
scientists use surveys that ask students to recall all the people they talked
to in each daily conversation (Ozella et al., 2021); to obtain unpublished
hypergraphs, researchers go to great lengths to either replicate tedious data
preprocessing or trace back to the data publishers. Measures like these
require a considerable amount of time, effort, and luck.
Figure 1. The supervised hypergraph reconstruction task. $\mathcal{H}_{0}$ and
$\mathcal{H}_{1}$ belong to the same application domain. Given
$\mathcal{H}_{0}$ (and its projection $G_{0}$), can we reconstruct
$\mathcal{H}_{1}$ from its projection $G_{1}$?
All the scenarios above share a common problem abstraction, as follows. There
is an underlying hypergraph $\mathcal{H}_{1}$ that we cannot directly observe,
and instead we only have access to its projected graph (or, projection)
$G_{1}$, which we define to be the graph on the same node set as the
hypergraph, and with two nodes connected in $G_{1}$ if and only if they belong
to a shared hyperedge in $\mathcal{H}_{1}$. Our goal is to reconstruct
$\mathcal{H}_{1}$ as accurately as possible, given knowledge of $G_{1}$. The
“query” half in Fig.1 illustrates this problem.
We will later discuss a training phase where a known hypergraph from the same
domain is provided. For broadest applicability, we focus on a hard version of
the problem by assuming that the edges of the projected graph $G_{1}$ are
unweighted; they just say whether two nodes appear in at least one hyperedge
together. In Appendix C.2 we will also discuss an easier version of the
problem where $G_{1}$ has weighted edges, recording the number of different
hyperedges that each pair of nodes belongs to.
Our goal towards the reconstruction is two-fold: (1) the reconstructed
hypergraph can benefit data analysis and prediction modeling by providing
efficiently-stored higher-order information for convenient usage; (2) the
reconstruction process necessitates deeper understanding of hyperedge
structure, especially (i) how hyperedges overlap each other in real-world
datasets, and (ii) how different overlap patterns may obscure different
amounts of high-order information once projected. Such understanding can help
data contributors make more informed choices when they consider the trade-offs
of projecting higher-order structure out of their data.
The challenge. The biggest challenge in hypergraph reconstruction is that a
given projected graph will generally admit an enormous number of possible
hypergraphs. In simple cases, a hypergraph can be easily reconstructed if most
of its hyperedges are disjoint, simply by treating every maximal clique as a
hyperedge. However, this is rarely the case with real-world datasets, in
which multiple hyperedges overlap each other, making some especially hard to
detect. In Sec. 4, we formalize these difficult patterns of overlap and
quantify their theoretical implications. We will also see that these patterns
are ubiquitous in real-world datasets. Therefore, it is extremely hard to
reconstruct a hypergraph arising from an arbitrary application domain without
knowing anything about what hypergraphs look like in that domain.
Previous work. Due to the technical challenge, very little work has been done
on this problem, despite its significant value for applications. In graph
mining, the closest tasks are hyperedge prediction and community detection.
Hyperedge prediction (Yadati et al., 2020; Xu et al., 2013; Benson et al.,
2018) is a fundamentally different problem, in that the input to it is a
hypergraph rather than its projected graph.
Additionally, methods for this problem typically only identify hyperedges from
a given set of candidates, rather than the large implicit spaces of all
possible hyperedges. Community detection, on the other hand, is based on
looking for densely connected subsets of a graph under various definitions,
but not for hyperedges. Both tasks are very different from our goal of
searching hyperedges over the projection.
Besides the above, (Young et al., 2021) is by far the closest related work.
However, it aims to use the fewest hyperedges to cover an arbitrary graph
(see its Fig.7), as a way to explain the “local density, global sparsity” of
some networks. Their work attempts parsimonious explanations of data rather
than recovery of ground truth. Consequently, their “principle of parsimony”
does not give high accuracy on many real-world hypergraphs, where there are
significant hyperedge overlaps. We will see in Sec. 6 that all the methods
above yield significantly lower performance than our approach.
Using supervised signal. To help reconstruction in the presence of these forms
of uncertainty, we propose the usage of training data: another hypergraph from
the same application domain. Shown in Fig.1, our effort to reconstruct
hypergraph $\mathcal{H}_{1}$ now involves two inputs: $\mathcal{H}_{1}$’s
projection $G_{1}$, and an additional hypergraph $\mathcal{H}_{0}$ (along with
its projection $G_{0}$) from the same or similar distribution as
$\mathcal{H}_{1}$.
The training data $\mathcal{H}_{0}$ is not only necessary but also reasonable
to have. In practice, $\mathcal{H}_{0}$ often refers to manually collected
data (as described earlier in this section), including surveys of participants
(for social science applications), neuron groups labeled by experts for
another common ancestral species or homologous organ (for biological or
neuroscience applications), or a similar hypergraph obtained from another data
publisher (e.g., for coauthorships, papers from a different database or a
different year slice).
For now, we assume $\mathcal{H}_{0}$ contains a comparable number of
hyperedges to $\mathcal{H}_{1}$. In Sec. 6.4 we generalize this to a semi-
supervised setting where $\mathcal{H}_{0}$ is downsampled to a smaller
fraction for training, and a transfer setting where $\mathcal{H}_{0}$ comes
from a different distribution than $\mathcal{H}_{1}$. As is standard in
supervised learning, we consider it realistic to assume some knowledge about
our reconstruction goal when searching through the enormous space of
candidates.
That said, inductiveness is enforced in our setting, meaning that two nodes
with the same node ID in $\mathcal{H}_{0}$ and $\mathcal{H}_{1}$ are
unrelated entities. It also means that we are not satisfied with solving one
particular test case but also care about generalizability. More formally,
using $\mathcal{H}_{0}$, we should be able to reconstruct not only
$\mathcal{H}_{1}$ but also other hypergraphs from the same domain as
$\mathcal{H}_{1}$. Therefore, simply memorizing $\mathcal{H}_{0}$’s hyperedges
does not work.
With the help of training data, hypergraph reconstruction, which was
previously almost impossible, now becomes more feasible in principle.
Nevertheless, it still retains considerable challenges that we must address in
designing solutions.
Our approach. We start from the fact that every clique in the projection
qualifies as a candidate hyperedge. There are two critical steps to accomplish
in order to get hyperedges from the cliques: (1) we cannot afford to traverse
all cliques (since there can be too many), so a shortlisting procedure is
necessary to significantly shrink the pool while retaining as many hyperedges
as possible; (2) after obtaining the shortlist, we need to go through cliques
in the shortlist and figure out which ones are real hyperedges.
It is extremely hard to accomplish any step above from the projected graph
alone, so we leverage the supervised signal, $\mathcal{H}_{0}$:
* •
For shortlisting, we design a sampling strategy based on a key observation of
the consistency in hyperedge distribution between $\mathcal{H}_{0}$ and
$\mathcal{H}_{1}$. We then optimize parameters of our strategy on
$\mathcal{H}_{0}$, and use the optimized strategy to downsample the cliques of
$\mathcal{H}_{1}$.
* •
For identifying hyperedges, we design two variants of a hyperedge classifier.
We train the classifier on cliques of $\mathcal{H}_{0}$, and then use it to
identify hyperedges from cliques of $\mathcal{H}_{1}$.
Contributions. Our main contribution is three-fold:
* •
We identify the recovery of higher-order structure as an important and
pervasive issue commonly seen in graph data analysis. To understand the issue,
we establish a topological analysis framework to characterize its implications
and worst-case scenarios. The analysis motivates our formulation of a new
task: supervised hypergraph reconstruction.
* •
We observe important structural properties of hyperedge distributions from
projected graphs. Based on our observations, we design a new reconstruction
framework, which contains a sampling strategy with theoretical guarantees, and
a hyperedge classifier in two well-working variants.
* •
We conduct extensive experiments that validate our claims, approach, and
extensions. We adapt 7 baselines and compare them with our approach on 8 real-
world datasets. Our approach outperforms all baselines, and by orders of
magnitude on hard datasets.
## 2. Preliminaries
Hypergraph. A hypergraph $\mathcal{H}$ is a tuple $(V,\mathcal{E})$, where $V$
is a finite set of nodes, and $\mathcal{E}=\\{E_{1},E_{2},...,E_{m}\\}$ is a
set of sets with $E_{i}\subseteq V$ for all $1\leq i\leq m$. For the purpose
of reconstruction, we assume the hyperedges are distinct, i.e. $\forall 1\leq
i,j\leq m,\;E_{i}\neq E_{j}$.
Projected Graph. $\mathcal{H}$’s projected graph (projection, Gaiman Graph
(Ebbinghaus and Flum, 1999)), $G$, is a graph with the same node set $V$, and
(undirected) edge set $\mathcal{E}^{\prime}$, i.e.
$G=(V,\mathcal{E}^{\prime})$, where two nodes are joined by an edge in
$\mathcal{E^{\prime}}$ if and only if they belong to a common hyperedge in
$\mathcal{E}$. That is,
$\mathcal{E}^{\prime}=\\{(v_{i},v_{j})|v_{i},v_{j}\in E,E\in\mathcal{E}\\}$
Maximal Cliques. A clique $C$ is a fully connected subgraph. We slightly abuse
$C$ to also denote the set of nodes in the clique. A maximal clique is a
clique that cannot become larger by including more nodes. The maximal clique
algorithm returns all maximal cliques $\mathcal{M}$ in a graph (Bron and
Kerbosch, 1973); its time complexity is linear in $|\mathcal{M}|$ (Tomita et
al., 2006). A maximum clique is the largest maximal clique in a graph.
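To make these definitions concrete, here is a minimal sketch that builds a projected graph from a toy hypergraph and enumerates its maximal cliques. The choice of networkx is ours for illustration; any implementation of Bron-Kerbosch would serve.

```python
from itertools import combinations
import networkx as nx

# A toy hypergraph as a list of hyperedges (node sets).
hyperedges = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {2, 3, 5}]

# Projected (Gaiman) graph: two nodes are joined iff they share a hyperedge.
G = nx.Graph()
for E in hyperedges:
    G.add_edges_from(combinations(E, 2))

# All maximal cliques M of the projection (Bron-Kerbosch).
M = [set(C) for C in nx.find_cliques(G)]
print(M)
```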
## 3. Task Definition
We define supervised hypergraph reconstruction as the following:
* •
Input: projected graph $G_{1}$, hypergraph $\mathcal{H}_{0}$;
* •
(Expected) output: hypergraph $\mathcal{H}_{1}$;
* •
$\mathcal{H}_{0}$ and $\mathcal{H}_{1}$ belong to the same application domain,
but their node indices are not aligned, i.e. the learning is inductive.
* •
Evaluation: following (Young et al., 2021), we use the Jaccard Score as the main
metric for evaluating reconstruction accuracy:
$\textbf{Jaccard
Score}=\frac{|\mathcal{E}_{1}\cap\mathcal{R}_{1}|}{|\mathcal{E}_{1}\cup\mathcal{R}_{1}|}$
where $\mathcal{E}_{1}$ is the set of true hyperedges and $\mathcal{R}_{1}$ is
the set of reconstructed hyperedges.
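A hedged sketch of this metric, hashing each hyperedge as a frozenset so that intersections and unions over collections of node sets are well defined:

```python
def jaccard_score(true_hyperedges, reconstructed_hyperedges):
    """Jaccard Score of Sec. 3: |E1 & R1| / |E1 | R1|."""
    E1 = {frozenset(E) for E in true_hyperedges}
    R1 = {frozenset(R) for R in reconstructed_hyperedges}
    return len(E1 & R1) / len(E1 | R1)

# Example: 2 of 3 true hyperedges recovered plus 1 spurious one -> 2/4 = 0.5
print(jaccard_score([{1, 2, 3}, {3, 4}, {4, 5}],
                    [{1, 2, 3}, {3, 4}, {2, 5}]))
```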
## 4. Topological Analysis
We start with a topological analysis to understand our task from a theoretical
perspective. In particular, we seek to answer two questions: Q1. what
topological properties make hypergraphs easy to reconstruct (so that the
projection does not eliminate too much high-order information)? Q2. in the
extreme case, how much information is lost due to the projection? By
characterizing the notion of “ease” and “difficulty” in reconstruction, our
analysis will help data contributors and researchers make more informed choices
in dealing with high-order interactions.
In principle, any clique in a projection qualifies as a candidate for a true
hyperedge. Therefore, towards perfect reconstruction, we should consider
$\mathcal{U}$, the universe of all cliques in $G$, including single nodes. To
enumerate $\mathcal{U}$, a helpful view is the union of the power sets of the
maximal cliques of $G$. (Given a set $S$, its power set $\mathcal{P}(S)$ is
defined as the set of all subsets of $S$; for example, $\{A,B,C\}$’s power set
is $\{\{A,B,C\},\{A,B\},\{B,C\},\{A,C\},\{A\},\{B\},\{C\},\emptyset\}$.)
Mathematically,
$\mathcal{U}=\bigcup_{C\in\mathcal{M}}\mathcal{P}(C)\setminus\emptyset$
In that sense, the maximal clique algorithm is a critical first step for
hypergraph reconstruction, as applied by (Young et al., 2021) to initialize
the state of its MCMC solver. In the extreme case, one can intuitively see
that if the hyperedges of $\mathcal{H}$ are mostly disjoint from each other,
the maximal cliques in $\mathcal{H}$’s projection must be highly aligned with
the hyperedges of $\mathcal{H}$. Conversely, it is impossible to find all
hyperedges without considering all maximal cliques. Therefore, the accuracy of
reconstruction by the maximal clique algorithm is a good quantifier for how
“difficult” it is to reconstruct a hypergraph.
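As a reference point, the sketch below enumerates $\mathcal{U}$ exactly as the formula above prescribes. It is only practical on small graphs, since a single size-$n$ maximal clique contributes up to $2^{n}-1$ cliques; avoiding this blow-up is precisely what the clique sampler of Sec. 5.3 is for.

```python
from itertools import combinations
import networkx as nx

def clique_universe(G):
    """U = union of the power sets of all maximal cliques, minus the
    empty set. Exponential in the maximum clique size; toy use only."""
    universe = set()
    for C in nx.find_cliques(G):
        for k in range(1, len(C) + 1):
            universe.update(frozenset(S) for S in combinations(C, k))
    return universe
```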
###### Definition 1.
A hypergraph $\mathcal{H}=(V,\mathcal{E})$ is Sperner if for every hyperedge
$E\in\mathcal{E}$ there does not exist a hyperedge $E^{\prime}\in\mathcal{E}$
$s.t.$ $E\subset E^{\prime}$.
###### Definition 2.
A hypergraph $\mathcal{H}=(V,\mathcal{E})$ is conformal if all the maximal
cliques in its projection $G$ are its hyperedges, i.e.
$\mathcal{M}\subseteq\mathcal{E}$. (Note that in database theory the important
notions of $\alpha$-acyclicity (Beeri et al., 1983) and GYO reducibility
(Ebbinghaus and Flum, 1999) extend conformity by further requiring the
projected graph to be chordal; the interested reader is referred to the
references for more details.)
###### Theorem 4.1.
The maximal cliques of $G$ are precisely all hyperedges of $\mathcal{H}$, i.e.
$\mathcal{M}=\mathcal{E}$, if and only if $\mathcal{H}$ is both Sperner and
conformal.
Theorem 4.1 gives the two strict conditions that $\mathcal{H}$ must satisfy in
order to be “easy” to reconstruct. The Sperner definition is self-explanatory:
it can be summarized as “no pair of nested hyperedges”. In contrast, the
conformal property is less clear. The following theorem interprets the
conformal property through its equivalence to the “uncovered triangle”
property.
###### Theorem 4.2.
A hypergraph $\mathcal{H}=(V,\mathcal{E})$ is conformal iff for every three
hyperedges there always exists some hyperedge $E$ such that all pairwise
intersections of the three hyperedges are subsets of $E$, i.e.:
$\forall E_{i},E_{j},E_{q}\in\mathcal{E},\;\exists\,E\in\mathcal{E}\;s.t.\;(E_{i}\cap E_{j})\cup(E_{j}\cap E_{q})\cup(E_{q}\cap E_{i})\subseteq E$
The intuition behind Theorem 4.2 is that a conformal hypergraph cannot have
three hyperedges forming a “triangle” whose three corners are “uncovered”. The
theorem gives a nontrivial interpretation of Def. 2 by eliminating the notion
of cliques or maximal cliques: it shows how to check a hypergraph’s conformity
purely from its hyperedge patterns, with no need to compute any maximal
cliques, which offers a more intuitive way to visually understand conformity.
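Theorem 4.2 also yields a direct, if brute-force, conformity test: enumerate triples of hyperedges and check whether some hyperedge covers all three pairwise intersections. A minimal sketch, quartic in the number of hyperedges and intended for intuition rather than scale:

```python
from itertools import combinations

def is_conformal(hyperedges):
    """Conformity check via Theorem 4.2 (O(m^4) brute force)."""
    E = [set(e) for e in hyperedges]
    for Ei, Ej, Eq in combinations(E, 3):
        corners = (Ei & Ej) | (Ej & Eq) | (Eq & Ei)
        if not any(corners <= F for F in E):
            return False  # found an "uncovered triangle"
    return True

# The uncovered-triangle pattern of Fig.2 is non-conformal:
print(is_conformal([{1, 2, 3}, {3, 4, 5}, {5, 6, 1}]))  # False
```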
Based on Def. 1 and 2, we further define the two types of errors that the
maximal cliques algorithm makes:
###### Definition 3.
There exist two types of errors if we treat all maximal cliques as true
hyperedges: an error is defined as Error I (Error II) if it occurs because of
the hypergraph’s violation of Def. 1 (Def. 2).
Fig.2 illustrates patterns of the two errors and their relationship with other
important concepts in our task. Note that Error I and II here are different
from the well-known Type I (false positive) and Type II (false negative)
errors in statistics. In fact, Error I’s are hyperedges that nest inside some
other hyperedges, so they are indeed false negatives; Error II’s can be either
false positives or false negatives. For example, in Fig.2’s “Error II
Pattern”: $\{v_{1},v_{3},v_{5}\}$ is a false positive and $\{v_{1},v_{5}\}$ is
a false negative.
Figure 2. The upper half shows patterns of the two types of errors made by
the maximal clique algorithm. In the Error I pattern, $E_{2}$ will never be
found (false negative); in the Error II pattern, corners of the “triangle” are
“uncovered”, so the max clique $\\{v_{1},v_{3},v_{5}\\}$ is not a hyperedge
(false positive), meanwhile $\\{v_{1},v_{5}\\}$ is missed (false negative).
The bottom half shows the errors’ relationship with $\mathcal{E}$ and
$\mathcal{M}$. If a hyperedge can’t be reconstructed due to both errors, we
count it as Error I.
Dataset | $|\mathcal{E}|$ | $|\mathcal{E}^{\prime}|$ | $|\mathcal{M}|$ | Error I | Error II
---|---|---|---|---|---
DBLP (Benson et al., 2018) | 197,067 | 194,598 | 166,571 | 2.02% | 18.9%
Enron (Benson et al., 2018) | 756 | 300 | 362 | 42.5% | 53.3%
Foursquare (Young et al., 2021) | 1,019 | 874 | 8,135 | 1.74% | 88.6%
Hosts-Virus (Young et al., 2021) | 218 | 126 | 361 | 19.5% | 58.1%
H. School (Benson et al., 2018) | 3,909 | 2,864 | 3,279 | 14.9% | 82.7%
Table 1. $\mathcal{E}$ is the set of hyperedges; $\mathcal{E}^{\prime}$ is the
set of hyperedges not nested in any other hyperedges; $\mathcal{M}$ is the set
of maximal cliques in $G$. Error I, II result from the violation of conformal
and Sperner properties, respectively. Error I
$=\frac{|\mathcal{E}\backslash\mathcal{E}^{\prime}|}{|\mathcal{E}\cup\mathcal{M}|}$,
Error II
$=\frac{|\mathcal{M}\backslash\mathcal{E}^{\prime}|+|\mathcal{E}^{\prime}\backslash\mathcal{M}|}{|\mathcal{E}\cup\mathcal{M}|}$.
Because both patterns in Fig.2 can be easily realized, real-world hypergraphs
can easily deviate from either or both of the two properties. The degree of
deviation is closely associated with the error rate of the maximal clique
algorithm in reconstruction. A hypergraph that badly violates the Sperner
property can have many pairs of “nested hyperedges”; common examples include
hypergraphs of email correspondence and social interactions (see Table 1,
Error I column). In the extreme case, the hyperedge set consists of one large
size-$n$ hyperedge plus all of its nonempty proper subsets, i.e. $2^{n}-1$
hyperedges in total, of which only the largest is recovered as a maximal
clique. Therefore, the worst-case accuracy of the maximal clique algorithm on
a non-Sperner but conformal hypergraph is $1/(2^{n}-1)$.
That said, one may argue that there are still many real-world hypergraphs that
are Sperner or almost Sperner. It turns out that the worst case of violating
the conformal property can also be disastrous:
###### Theorem 4.3.
Let $\mathcal{H}=(V,\mathcal{E})$ be Sperner with $m=|\mathcal{E}|$, and
$p^{(\mathcal{H})}$ the accuracy of maximal clique algorithm for
reconstructing $\mathcal{H}$, then
$\min_{\mathcal{H}}p^{(\mathcal{H})}\leq 2^{-{m-1\choose[m/2]-1}}\ll 2^{-m}$
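To see how quickly this bound collapses, take $m=10$ as an example:
$\min_{\mathcal{H}}p^{(\mathcal{H})}\leq 2^{-{9\choose 4}}=2^{-126}\ll 2^{-10}$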
While the worst case rarely arises in practice, real-world hypergraphs often
create many maximal cliques that are not hyperedges due to the easy
construction of Error II patterns.
Summary. With the topological analysis, we obtain a clearer picture of the
types of hypergraphs that are easy/hard to reconstruct. The easy construction
of the two error patterns indicates that reconstruction errors can occur
extensively in real-world datasets. It is extremely hard to fix these errors
by just looking at the projection, which necessitates the use of a training
hypergraph. How do we make the best use of the training hypergraph to help
with reconstruction? We elaborate on this in the next section.
## 5. Reconstruction Framework
### 5.1. Overview
Figure 3. Our reconstruction framework takes 4 steps: (1) the clique sampler
is optimized on $G_{0}$ and $\mathcal{H}_{0}$; (2) the clique sampler samples
candidates from $G_{0}$ and $G_{1}$, and passes the result to the hyperedge
classifier; (3) the hyperedge classifier extracts features of candidates from
$G_{0}$ and trains on them; (4) the hyperedge classifier extracts features of
candidates from $G_{1}$ and identifies hyperedges.
We have established that a hypergraph is hard to reconstruct purely based on
its projection if it contains many hyperedge overlaps of the two patterns. In
this section, we solve this by leveraging supervised signals. The high-level
idea is that we use a clique sampler to narrow the search space of hyperedges
(clique space in Fig.2), and then use a hyperedge classifier to identify
hyperedges from the narrowed space. Both modules are optimized/trained on the
training data.
Fig.3 gives a more detailed 4-step view, briefly explained in the caption. The
steps will be elaborated in the following subsections: Sec. 5.2 introduces an
important observation that underpins the clique sampler; Sec. 5.3 details the
clique sampler and its optimization; Sec. 5.4 explains the principles and
designs for the hyperedge classifier.
### 5.2. $\rho(n,k)$-consistency
$\rho(n,k)$-consistency describes the consistency that we observe in the
hyperedge distribution with regard to maximal cliques among hypergraphs from
the same application domain, e.g. $\mathcal{H}_{0}$ and $\mathcal{H}_{1}$. It
is an important property that we leverage in supervised reconstruction.
Given a hypergraph $\mathcal{H}=(V,\mathcal{E})$, its projection $G$, and its
maximal cliques $\mathcal{M}$, we use $\rho(n,k)$ to denote the probability
that we find a (unique) hyperedge by randomly sampling a size-$k$ subset from
a random size-$n$ maximal clique. An $(n,k)$ pair is called valid if $1\leq
k\leq n\leq N$, with $N$ the size of the maximum clique. $\rho(n,k)$ can be
estimated empirically via the unbiased estimator $\hat{\rho}(n,k)$ defined as:
$\hat{\rho}(n,k)=\frac{|\mathcal{E}_{n,k}|}{|\mathcal{Q}_{n,k}|}$
where
$\mathcal{E}_{n,k}=\{S\in\mathcal{E}\;|\;S\subseteq C,\,|S|=k,\,C\in\mathcal{M},\,|C|=n\}$
$\mathcal{Q}_{n,k}=\{(S,C)\;|\;S\subseteq C,\,|S|=k,\,C\in\mathcal{M},\,|C|=n\}$
$\mathcal{E}_{n,k}$ denotes the set of size-$k$ hyperedges found in size-$n$
maximal cliques. $\mathcal{Q}_{n,k}$ denotes all possible ways to sample a
size-$n$ maximal clique and then a size-$k$ subset (i.e. a $k$-clique) from
that maximal clique. $|\mathcal{Q}_{n,k}|$ simplifies to
$|\{C\,|\,C\in\mathcal{M},|C|=n\}|{n\choose k}$.
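For small graphs, $\hat{\rho}(n,k)$ can be computed directly from these definitions. The naive sketch below enumerates the subsets of each maximal clique; the paper's own computation avoids this enumeration by exploiting hypergraph sparsity (see the Complexity paragraph below), so treat this purely as a readable reference.

```python
from collections import defaultdict
from itertools import combinations
from math import comb
import networkx as nx

def estimate_rho(hyperedges, G):
    """rho_hat(n, k) = |E_{n,k}| / |Q_{n,k}| for all valid (n, k).
    Naive enumeration; exponential in the maximum clique size."""
    E = {frozenset(e) for e in hyperedges}
    Q_size = defaultdict(int)   # |Q_{n,k}|
    E_nk = defaultdict(set)     # E_{n,k}: distinct hyperedges per cell
    for C in nx.find_cliques(G):
        n = len(C)
        for k in range(1, n + 1):
            Q_size[(n, k)] += comb(n, k)
            for S in combinations(C, k):
                if frozenset(S) in E:
                    E_nk[(n, k)].add(frozenset(S))
    return {nk: len(E_nk[nk]) / Q_size[nk] for nk in Q_size}
```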
Our key observation is that if two hypergraphs, e.g. $\mathcal{H}_{0}$ and
$\mathcal{H}_{1}$, are generated from the same source, they admit similar
distributions of $\rho(n,k)$, which we call $\rho(n,k)$-consistency. Fig.4(a)
uses heatmaps to visualize $\rho(n,k)$-consistency on a famous communication
dataset, Enron (Benson et al., 2018), where $\mathcal{H}_{0}$ and
$\mathcal{H}_{1}$ are split based on the median timestamp of the emails
(hyperedges). $n$ is plotted on the $y$-axis, $k$ on the $x$-axis. Fig.4(b)
further plots $\rho(n,k)$ heatmaps for five other datasets in a similar
manner. From both figures, we observe that the distributions of $\rho(n,k)$
exhibit good consistency between the training and query splits of the same
dataset; in contrast, the distributions of $\rho(n,k)$ across datasets differ
considerably. This visual observation can be further confirmed by quantitative
measures (see Fig.14 in the Appendix).
Also, notice that the second column and the diagonal are darkest, implying
that the ${n\choose k}$ term in $|\mathcal{Q}_{n,k}|$ (the denominator) cannot
dominate the distribution. More concretely, ${n\choose k}$ reaches its minimum
when $k=n$ or $k=1$ and grows exponentially as $k$ approaches $n/2$, and this
is true regardless of the data; here, the quotient peaking at $k=2$ means that
the term $|\mathcal{E}_{n,k}|$, which reflects the data, plays a numerically
meaningful role. By examining many datasets (see Fig.4(b)), we find that the
$\rho(n,k)$ distribution can vary a lot, but there is always great consistency
between training hypergraphs and query hypergraphs from the same application
domain.
Figure 4. (a) $\rho(n,k)$-consistency on dataset Enron (Benson et al., 2018),
where each node is an email address, each hyperedge is an email.
$\mathcal{H}_{0}$ and $\mathcal{H}_{1}$ are obtained by splitting all emails
by the median timestamp. (b) $\rho(n,k)$-consistency on more datasets. Notice
that heatmaps in the same column are similar, while those in the same row
differ considerably.
Complexity. The cost of computing $\rho(n,k)$ involves two parts: (a)
computing $\mathcal{M}$; (b) computing $\mathcal{E}_{n,k}$ for all valid
$(n,k)$.
(a)’s complexity is $O(|\mathcal{M}|)$, as mentioned in Sec. 2. Though in the
worst case $|\mathcal{M}|$ can be exponential in $|V|$, in practice we often
observe $|\mathcal{M}|$ on the same order of magnitude as $|\mathcal{E}|$ (see
Table 1), which is an interesting phenomenon.
(b) requires matching size-$n$ maximal cliques with size-$k$ hyperedges for
each valid $(n,k)$ pair. The key is that in real-world data, the average
number of hyperedges incident to a node is usually a constant independent of
the growth of $|V|$ or $|\mathcal{M}|$ (see also $\bar{d}(V)$ in Table 2),
known as the “sparsity of hypergraphs” (Kook et al., 2020). This property
greatly reduces the (average) size of the search space for all size-$k$
hyperedges in a size-$n$ maximal clique from $|\mathcal{E}|$ to
$n\bar{d}(V)$. Since both $n$ and $\bar{d}(V)$ are typically under $50$ in
practice and neither grows with $|\mathcal{M}|$, (b)’s complexity can still be
viewed as $O(|\mathcal{M}|)$. Therefore, the total complexity of computing
$\rho(n,k)$ is $O(|\mathcal{M}|)$. We extend this result with more empirical
evidence in Sec. 6.3.
### 5.3. Clique Sampler
Given a query graph, we usually cannot afford to take all its cliques as
candidates for hyperedges: although we just mentioned that the number of
maximal cliques $|\mathcal{M}|$ is often manageable, the number of cliques
$|\mathcal{U}|$ can far exceed handling capacity, as one size-$n$ maximal
clique produces $2^{n}-1$ cliques. Therefore, we create a clique sampler.
Assuming we have a limited sampling budget $\beta$, our goal is to collect as
many hyperedges as possible by sampling $\beta$ cliques from $\mathcal{U}$.
Any hyperedge missed by our clique sampler loses the chance to be identified
by the hyperedge classifier later, so this step is crucial. Later we show in
Table 6 that good performance can be achieved with $\beta$’s that are orders
of magnitude smaller than $|\mathcal{U}|$.
As mentioned, by looking at just the query $G_{1}$ we cannot locate hyperedges
in the enormous search space of all cliques in $G_{1}$. Fortunately, we can
obtain hints from $G_{0}$ and $\mathcal{H}_{0}$. The idea is that we use
$G_{0}$ and $\mathcal{H}_{0}$ to optimize a clique sampler that verifiably
collects a good number of hyperedges. The optimization process, as marked in
Fig.3 step 1, can be viewed as a procedure that learns knowledge about where
to sample. Then in $G_{1}$, we use the optimized clique sampler to sample
cliques, as shown in Fig.3 step 2. We require that the sampler take the
following form:
* ($\star$)
For each valid $(n,k)$, we sample a total of $r_{n,k}|\mathcal{Q}_{n,k}|$
size-$k$ subsets (i.e. $k$-cliques) from size-$n$ maximal cliques in the query
graph, subject to the sampling budget:
$\sum_{n,k}{r_{n,k}|\mathcal{Q}_{n,k}|}=\beta$
$r_{n,k}\in[0,1]$ is the sampling ratio of the $(n,k)$ cell, and
$|\mathcal{Q}_{n,k}|$ is the size of the $(n,k)$ cell’s sample space in
$G_{0}$. To instantiate a sampler, an $r_{n,k}$ must be specified for every
valid $(n,k)$ cell. How do we determine the $r_{n,k}$’s? We optimize them
towards collecting the most training hyperedges from $G_{0}$, with the
objective:
$\{r_{n,k}\}=\operatorname*{argmax}_{\{r_{n,k}\}}\;\mathbb{E}^{c}\Big[\bigcup_{(n,k)}r_{n,k}\odot\mathcal{E}_{n,k}\Big]$
$\odot$ is a set sampling operator that returns a uniformly downsampled subset
of $\mathcal{E}_{n,k}$ at downsampling rate $r_{n,k}$; it is essentially a
generator of random finite sets (Mullane et al., 2011) (see Appendix A.4 for
more discussion). $\mathbb{E}^{c}[\cdot]$ returns the expected cardinality.
Given the maximization objective and $\rho(n,k)$-consistency, the optimized
sampler should also collect a good number of hyperedges when generalized to
$G_{1}$. Sec. 6.5 validates this claim empirically.
Optimization. To collect more hyperedges from $G_{0}$, a heuristic is to
allocate all the budget to the darkest cells of the training data’s heatmap
(Fig.4(a)-left), where hyperedges are most densely populated. However, a
caveat is that the sets of hyperedges $\mathcal{E}_{n,k}$ in the $(n,k)$ cells
are not disjoint if the cells lie in the same column: a size-$k$ clique can
appear in multiple maximal cliques of different sizes. Therefore, taking the
darkest cells may not yield the best result. In fact, optimizing the objective
above involves maximizing a monotone submodular function under a budget
constraint, which is NP-hard.
In light of this, we design the greedy Alg. 1 to approximate the optimal
solution with a worst-case guarantee. It takes four inputs: the sampling
budget $\beta$, the size of the maximum clique $N$, and $\mathcal{E}_{n,k}$,
$\mathcal{Q}_{n,k}$ for all $1\leq k\leq n\leq N$. Lines 2-7 initialize state
variables; lines 8-16 run the greedy selection iteratively.
Algorithm 1 Optimize Clique Sampler
1: Input: $\beta$; $N$; $\mathcal{E}_{n,k}$, $\mathcal{Q}_{n,k}$ for all $1\leq k\leq N$, $k\leq n\leq N$
2: for $k=1$ to $N$ do $\triangleright$ traverse $k$ to initialize state variables
3:   $\Gamma_{k}\leftarrow\emptyset$ $\triangleright$ union of $\mathcal{E}_{n,k}$'s picked from column $k$
4:   $\omega_{k}\leftarrow\{k,k+1,...,N\}$ $\triangleright$ available column-$k$ cells
5:   $r_{i,k}\leftarrow 0$ for $i\in\omega_{k}$ $\triangleright$ sampling ratios for column-$k$ cells
6:   $\Delta_{k},n_{k}\leftarrow$ UPDATE($k$, $\omega_{k}$, $\Gamma_{k}$, $\mathcal{E}_{\cdot,k}$, $\mathcal{Q}_{\cdot,k}$)
7: end for
8: while $\beta>0$ do $\triangleright$ the greedy selection starts
9:   $k\leftarrow\operatorname{argmax}_{i}\Delta_{i}$ $\triangleright$ selects the next best $k$
10:  $r_{n_{k},k}\leftarrow\min\{1,\beta/|\mathcal{Q}_{n_{k},k}|\}$ $\triangleright$ sets the cell's sampling ratio
11:  $\Gamma_{k}\leftarrow\Gamma_{k}\cup\mathcal{E}_{n_{k},k}$ $\triangleright$ updates state variables
12:  $\omega_{k}\leftarrow\omega_{k}\backslash\{n_{k}\}$
13:  $\beta\leftarrow\beta-|\mathcal{Q}_{n_{k},k}|$
14:  $\Delta_{k},n_{k}\leftarrow$ UPDATE($k$, $\omega_{k}$, $\Gamma_{k}$, $\mathcal{E}_{\cdot,k}$, $\mathcal{Q}_{\cdot,k}$)
15:  if $\max_{k}\Delta_{k}=0$ then break $\triangleright$ breaks if all cells sampled
16: end while
17: return $r_{n,k}$ for all $1\leq k\leq N$, $k\leq n\leq N$
Subroutine 1 UPDATE
1: Input: $k$; $\omega_{k}$; $\Gamma_{k}$; $\mathcal{E}_{\cdot,k}$; $\mathcal{Q}_{\cdot,k}$
2: if $\omega_{k}\neq\emptyset$ then
3:   $\Delta^{\prime}\leftarrow\max_{n\in\omega_{k}}\frac{|\Gamma_{k}\cup\mathcal{E}_{n,k}|-|\Gamma_{k}|}{|\mathcal{Q}_{n,k}|}$
4:   $n^{\prime}\leftarrow\operatorname{argmax}_{n\in\omega_{k}}\frac{|\Gamma_{k}\cup\mathcal{E}_{n,k}|-|\Gamma_{k}|}{|\mathcal{Q}_{n,k}|}$
5: else
6:   $\Delta^{\prime}\leftarrow 0$; $n^{\prime}\leftarrow 0$
7: end if
8: return $\Delta^{\prime}$, $n^{\prime}$
The initialization is done column-wise. In each column $k$, $\Gamma_{k}$
stores the union of all $\mathcal{E}_{n,k}$ that have been selected from
column $k$ so far; $\omega_{k}$ stores the row indices (i.e. $n$’s) of all
available cells in column $k$, i.e. cells that have not been selected;
$r_{i,k}$ is the sampling ratio of each valid cell; line 6 calls the
subroutine UPDATE to compute $\Delta_{k}$, the best sampling efficiency among
all the available cells, and $n_{k}$, the row index of that most efficient
cell.
Lines 8-16 execute the greedy selection. In each iteration, we greedily take
the next most efficient cell among (the best of) all the columns, store the
decision in the corresponding $r$, and update ($\Gamma$, $\omega$,
$\Delta_{k}$, $n_{k}$), with $k$ the column index of the cell selected in the
current iteration. Notice that only column $k$ needs to be updated, because
$\mathcal{E}_{n,k}$’s with different $k$’s are independent. Finally, the
greedy selection stops when the budget is exhausted or all cells have been
traversed. For a more intuitive view, please refer to Sec. 6.5, where we run
ablations on this algorithm and visualize the iterations. As a side bonus, we
will also see in Sec. 6.3 that the greedy selection induces more diverse
hyperedge sizes in reconstruction.
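For readers who prefer executable code to the pseudocode above, here is a compact Python sketch of Alg. 1 together with its UPDATE subroutine. The input conventions (dictionaries keyed by $(n,k)$) are our own assumption, not an interface prescribed by the paper.

```python
def optimize_clique_sampler(beta, N, E_nk, Q_size):
    """Greedy sketch of Alg. 1. E_nk[(n,k)]: set of size-k hyperedges in
    size-n maximal cliques; Q_size[(n,k)]: |Q_{n,k}|. Returns r_{n,k}."""
    r = {nk: 0.0 for nk in Q_size}
    gamma, avail = {}, {}   # per-column state: picked hyperedges, open rows
    for (n, k) in Q_size:
        gamma.setdefault(k, set())
        avail.setdefault(k, set()).add(n)

    def update(k):          # subroutine UPDATE: best cell left in column k
        best_eff, best_n = 0.0, None
        for n in avail[k]:
            gain = len(gamma[k] | E_nk.get((n, k), set())) - len(gamma[k])
            eff = gain / Q_size[(n, k)]
            if eff > best_eff:
                best_eff, best_n = eff, n
        return best_eff, best_n

    delta = {k: update(k) for k in gamma}
    while beta > 0:
        k = max(delta, key=lambda c: delta[c][0])   # next best column
        eff, n = delta[k]
        if eff == 0:                                # all useful cells sampled
            break
        r[(n, k)] = min(1.0, beta / Q_size[(n, k)])
        gamma[k] |= E_nk.get((n, k), set())
        avail[k].remove(n)
        beta -= Q_size[(n, k)]
        delta[k] = update(k)                        # only column k changes
    return r
```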
###### Theorem 5.1.
Let $q$ be the expected number of hyperedges in $\mathcal{H}_{0}$ drawn by the
clique sampler optimized by Alg. 1; let $q^{*}$ be the expected number of
hyperedges in $\mathcal{H}_{0}$ drawn by the best-possible clique sampler,
with the same $\beta$. Then,
$q>(1-\frac{1}{e})q^{*}\approx 0.63q^{*}$
Theorem 5.1 gives an important quality guarantee on Alg. 1 by comparing its
result with the best-possible clique sampler. The actual $\frac{q}{q^{*}}$ can
often be much better than $0.63$. Besides, notice that Alg. 1 leaves at most
one $(n,k)$ cell partially sampled. Is that a good design? In fact, it can be
proved that there is always a best clique sampler that leaves at most one
$(n,k)$ cell partially sampled: otherwise, we could always find two partially
sampled cells and relocate budget from one to the other to achieve a higher
$q$.
Relating to Error I & II. The effectiveness of our clique sampler can also be
understood from the perspective of reducing Error I and II. Taking Fig.4(a) as
an example: by learning which non-diagonal cells to sample, the clique sampler
essentially reduces Error I as well as the false negative part of Error II; by
learning which diagonal cells to sample, it further reduces the false positive
part of Error II.
Relating to Standard Submodular Optimization. Our clique sampler takes a
greedy form similar to the standard algorithm for submodular optimization.
However, our clique sampler actually solves a more challenging problem, whose
optimality guarantee is also harder to prove. Please see Appendix A.5 for more
details.
Precision-Recall Tradeoff. For each dataset, $\beta$ must be specified by
humans. What is the best $\beta$? Clearly, a larger $\beta$ yields a larger
$q$, and thus a higher recall $\frac{q}{|\mathcal{E}|}$ among our samples. On
the other hand, a larger $\beta$ also means a lower precision
$\frac{q}{\beta}$, as sparser regions get sampled. An overly low
$\frac{q}{\beta}$ harms sample quality and may jeopardize the training later.
Such a tradeoff calls for careful calibration of $\beta$. We empirically found
it often good to search $\beta$ in a range that makes
$\frac{q}{|\mathcal{E}|}\in[0.6,0.95]$. More tuning details are in Appendix C.
Complexity. The bottleneck of Alg. 1 is UPDATE. In each iteration after a $k$
is picked, UPDATE recomputes
$(|\Gamma_{k}\cup\mathcal{E}_{n,k}|-|\Gamma_{k}|)$ for all $n\in\omega_{k}$,
which is $O(\frac{|\mathcal{E}|}{N})$. Empirically, we found the number of
iterations under the best $\beta$ to always be $O(N)$, where $N$ is the size
of the maximum clique and mostly falls in $[10,40]$ (see Fig.4(b)). Therefore,
in expectation we would traverse
$O(N)\cdot O(\frac{|\mathcal{E}|}{N})=O(|\mathcal{E}|)$ hyperedges if
$|\mathcal{E}_{n,k}|$ is distributed evenly among different $k$’s. In the
worst case, where the $|\mathcal{E}_{n,k}|$’s are severely skewed, this
degenerates to $O(N|\mathcal{E}|)$.
The optimized sampler samples $\beta$ cliques from $G_{0}$, and a similar
number of cliques from $G_{1}$. Later in Fig.11, we empirically confirm that
the two sample sizes are indeed similar.
### 5.4. Hyperedge Classifier
A hyperedge classifier is a binary classification model that takes a target
clique in the projection as input and outputs a 0/1 label indicating whether
the target clique is a hyperedge. We train the hyperedge classifier on
$(\mathcal{H}_{0},G_{0})$, where we have ground-truth labels (step 3, Fig.3),
and then use it to identify hyperedges from $G_{1}$ (step 4, Fig.3). To serve
this purpose, a hyperedge classifier should contain two parts: (1) a feature
extractor that extracts expressive features for characterizing a target clique
as a subgraph, and (2) a binary classification model that takes an extracted
feature and outputs a 0/1 label. (2) is a standard task, so we use an MLP with
one layer of $h=100$ hidden neurons. (1) requires more careful design. Next,
we first discuss (1)’s design principles, then provide two realizations that
we found to work well. Our hyperedge classifier addresses Error I and II
together by fitting to the distribution of hyperedges with structural features
of the clique candidates as input.
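The classification half is deliberately simple. Below is a hedged sketch with scikit-learn (our library choice; the paper only specifies an MLP with one hidden layer of $h=100$ neurons), where X0, y0, and X1 are illustrative placeholder names:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# X0: feature vectors of candidate cliques sampled from G0 (Sec. 5.4.2/5.4.3);
# y0: 1 if the candidate is a hyperedge of H0, else 0. Dummy data here.
rng = np.random.default_rng(0)
X0 = rng.random((1000, 52))                  # e.g. 52 clique-motif features
y0 = rng.integers(0, 2, 1000)

clf = MLPClassifier(hidden_layer_sizes=(100,), max_iter=500)  # h = 100
clf.fit(X0, y0)                              # step 3: train on (H0, G0)
# step 4: predicted = clf.predict(X1)        # identify hyperedges in G1
```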
#### 5.4.1. Design Principles
Central to the design of a good feature extractor is a question: what type of
information should it capture about a target clique in the projection? In our
task setup, there are two constraints on usable information, which ensure the
broadest applicability of our approach: (1) we do not assume nodes to be
attributed or edges to have multiplicity; (2) we focus on the inductive case
where node identities are not aligned across training and query (so positional
embeddings like DeepWalk (Perozzi et al., 2014) are unhelpful).
It thus becomes clearer that we are left with the structural features of and
around the target clique. Put differently, the classifier needs to well
characterize the connectivity properties of the local ego graph centered at
the target clique. As we will see in the experiments, using structural features in
a supervised manner boosts reconstruction accuracy by an order of magnitude on
difficult datasets.
In that regard, any structural learning model that can characterize such
structural features well can potentially serve as the classifier, and there
are plenty of them operating on individual nodes (Henderson et al., 2012; Li
et al., 2020; Xu et al., 2018). However, what we do not yet know is which
structural features of a clique (and its surroundings) are important, and how
they can be understood by humans in the clique’s context. Therefore, we
provide the following two feature extractors that accomplish the task via
interpretable features. It is worth mentioning, though, that they are not the
only choices.
#### 5.4.2. “Count”-based Feature Extractor
Many powerful graph structural learning models use different notions of
“count” to characterize the local connectivity patterns. For example, GNNs
typically use node degrees as initial features when node attributes are
unavailable; the Weisfeiler-Lehman Test also updates a node’s color based on
the count of different colors in its neighborhood.
In the context of a target clique as a subgraph structure in a projection, the
notion of “count” can be especially rich in meaning: a target clique can be
characterized by the count of its [own nodes / neighboring nodes / neighboring
edges / attached maximal cliques] in many different ways. Therefore, we create
a total of 8 types of generalized count-based features, detailed in Appendix
B. Despite being conceptually simple, these count features work surprisingly
well and can be easily interpreted (see more in Supplement C).
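To give a flavor of these features, here is a hypothetical sketch of four count-style descriptors of a target clique; the paper's actual 8 feature types are listed in its Appendix B, so these are illustrative stand-ins rather than the exact set.

```python
def count_features(G, target_clique, maximal_cliques):
    """Illustrative count features for a target clique in a projection G
    (a networkx graph). Not the paper's exact 8 features."""
    C = set(target_clique)
    neighbors = set().union(*(set(G[v]) for v in C)) - C
    return [
        len(C),                                          # own nodes
        len(neighbors),                                  # neighboring nodes
        sum(G.degree(v) for v in C),                     # degree mass of C
        sum(1 for M in maximal_cliques if C <= set(M)),  # attached maximal cliques
    ]
```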
#### 5.4.3. Clique-motif-based Feature Extractor
Figure 5. 13 “clique motifs” that encode rich connectivity patterns around a
target clique $C$. Each clique motif is formed by 1 or 2 nodes in the target
clique $+$ 1 or 2 maximal cliques that contain the nodes.
As a second attempt, we use maximal cliques as an intermediate that bridges
the projection and the hyperedges. The rationale is that the maximal cliques
of a projection can essentially be viewed as a preliminary guess of the
projection’s higher-order structures. Such a guess not only encompasses all of
the projection’s information, but also in itself constitutes a partially
curated form of higher-order structures (which are indicative of the real
higher-order structures, i.e. hyperedges). Meanwhile, the synergy between
maximal cliques and the nodes in the target clique also exhibits rich
connectivity patterns, which generalize the notion of a motif, as shown in
Fig.5.
Fig.5 enumerates all 13 connectivity patterns between 1 or 2 nodes of the
target clique and 1 or 2 maximal cliques that contain the target clique’s
nodes. For simplicity, we require that maximal cliques be at most one hop away
from the target clique. We call these connectivity patterns clique motifs.
Compared to count features, clique motifs are a more principled and systematic
way to extract structural properties. A clique motif has two types of
components (node and maximal clique) and three types of relationships
(node-node, node-maximal clique, and maximal clique-maximal clique); their
concrete meanings are explained in the legend. We say that a clique motif is
attached to a target clique $C$ if the clique motif contains at least one node
of $C$. We further use $\Phi_{i}^{(C)}$ to denote the set of type-$i$ clique
motifs attached to $C$, with $1\leq i\leq 13$.
Given a target clique $C$, how do we use the clique motifs attached to $C$ to
characterize the structures around $C$? We define $C$’s structural feature as
a concatenation of 13 vectors: $[u_{1}^{(C)};u_{2}^{(C)};\ldots;u_{13}^{(C)}]$,
where $u_{i}^{(C)}$ is a vector of descriptive statistics that summarize the
vectorized distribution of type-$i$ clique motifs attached to $C$.
Mathematically:
$u_{i}^{(C)}=\textsc{Summarize}(P_{i}^{(C)})$
$P_{i}^{(C)}=\begin{cases}[\textsc{Count}(C,i,\{v\})\text{ for }v\in C],&\text{if }1\leq i\leq 3;\\ [\textsc{Count}(C,i,\{v_{1},v_{2}\})\text{ for }v_{1},v_{2}\in C],&\text{if }4\leq i\leq 13.\end{cases}$
$P_{i}^{(C)}$ is a vectorized distribution in the form of an array of counts
regarding $i$ and $C$, with
$\textsc{Count}(C,i,\chi)=|\{\phi\in\Phi_{i}^{(C)}\,|\,\chi\subseteq\phi\}|$.
Finally, $\textsc{Summarize}(P_{i}^{(C)})$ is a function that takes in a
vectorized distribution $P_{i}^{(C)}$ and outputs a vector of statistical
descriptors. Here we simply define it to comprise 4 basic statistical
descriptors:
$\textsc{Summarize}(P_{i}^{(C)})=[\text{mean}(P_{i}^{(C)}),\,\text{std}(P_{i}^{(C)}),\,\text{min}(P_{i}^{(C)}),\,\text{max}(P_{i}^{(C)})]$
As we have 13 clique motifs, this amounts to 52 structural features. At a
high level, clique motifs extend the well-tested motif methods on graphs to
hypergraph projections with clique structures, and capture structural
properties in a more principled manner than count features.
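To make the pipeline concrete, below is a minimal Python sketch of the
$\textsc{Count}$/$\textsc{Summarize}$ feature extraction. It is an
illustration, not the released implementation; the helper
`motifs_attached(i)`, which enumerates the type-$i$ clique motifs attached to
$C$ as node sets, is an assumption.

```python
import numpy as np
from itertools import combinations

def summarize(counts):
    """The 4 basic descriptors of a vectorized distribution (empty -> zeros)."""
    a = np.asarray(counts, dtype=float)
    if a.size == 0:
        return [0.0, 0.0, 0.0, 0.0]
    return [a.mean(), a.std(), a.min(), a.max()]

def motif_features(C, motifs_attached):
    """Concatenate u_1, ..., u_13 for a target clique C (13 x 4 = 52 numbers).

    `motifs_attached(i)` is a hypothetical helper returning Phi_i^(C),
    the type-i clique motifs attached to C, each as a set of node ids.
    """
    feats = []
    for i in range(1, 14):
        phi = motifs_attached(i)
        if i <= 3:   # motifs anchored on a single node of C
            P = [sum(1 for m in phi if v in m) for v in C]
        else:        # motifs anchored on a pair of nodes of C
            P = [sum(1 for m in phi if {v1, v2} <= set(m))
                 for v1, v2 in combinations(C, 2)]
        feats.extend(summarize(P))
    return feats
```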
## 6\. Experiment
Dataset | $|V|$ | $|\mathcal{E}|$ | $\mu(\mathcal{E})$ | $\sigma(\mathcal{E})$ | $\bar{d}(V)$ | $|\mathcal{M}|$
---|---|---|---|---|---|---
Enron (Benson et al., 2018) | 142 recipients | 756 emails | 3.0 | 2.0 | 16.0 | 362
DBLP (Benson et al., 2018) | 319,916 authors | 197,067 papers | 3.0 | 1.7 | 1.83 | 166,571
P. School (Benson et al., 2018) | 242 students | 6,352 chats | 2.4 | 0.6 | 63.5 | 15,017
H. School (Benson et al., 2018) | 327 students | 3,909 chats | 2.3 | 0.5 | 27.8 | 3,279
Foursquare (Young et al., 2021) | 2,334 restaurants | 1,019 footprints | 6.4 | 6.5 | 2.80 | 8,135
Hosts-Virus (Young et al., 2021) | 466 hosts | 218 viruses | 5.6 | 9.0 | 2.60 | 361
Directors (Young et al., 2021) | 522 directors | 102 boards | 5.4 | 2.2 | 1.05 | 102
Crimes (Young et al., 2021) | 510 victims | 256 homicides | 3.0 | 2.3 | 1.48 | 207
Table 2. Summary of the datasets (query split). $\mu(\mathcal{E})$ and
$\sigma(\mathcal{E})$ denote the mean and standard deviation of the hyperedge
size distribution. $\bar{d}(V)$: average degree of nodes (w.r.t. hyperedges).
$|\mathcal{M}|$: number of maximal cliques.
How well does our approach work in practice? How do we make sense of the
reconstruction result? Is our approach robust to scarcity or distribution
shift of training data? These questions are systematically studied in our
experiments.
We introduce the settings in Sec. 6.1, followed by performance comparisons of
all methods in Sec. 6.2. In Sec. 6.3 we evaluate the reconstruction along more
dimensions and check what exactly gets recovered. In Sec. 6.4, we evaluate
reconstruction in the semi-supervised setting and the transfer setting, two
important extensions involving label scarcity and distribution shift,
respectively. Sec. 6.5 reports ablation studies that show the effectiveness of
our clique sampler.
Reproducibility: Our code and data are published at bit.ly/SHyRe.
### 6.1. Experimental Settings
Baselines. For comparison we adapt 7 methods originally proposed in four
different task domains. They are chosen for (1) their relevance to our task,
and/or (2) their state-of-the-art status within their own task domain. The
baselines and their task domains are:
* •
Community Detection: Demon (Coscia et al., 2012), CFinder (Palla et al., 2005)
* •
Clique Decomposition: Max Clique (Bron and Kerbosch, 1973), Edge Clique Cover
(Conte et al., 2016)
* •
Hyperedge Prediction: Hyper-SAGNN (Zhang et al., 2019), CMM (Zhang et al.,
2018);
* •
Probabilistic Models: Bayesian-MDL (Young et al., 2021).
Since these methods are evaluated on our new task, adaptation is necessary,
and adaptation decisions are made to keep the comparison as fair as possible.
For example, both our method and Bayesian-MDL use a maximal clique algorithm
(i.e., the Max Clique baseline) as a preprocessing step. Our evaluation ensures
that all three of them use the same implementation, as introduced in (Bron and
Kerbosch, 1973). See Appendix C for more details on selection criteria and
adaptation.
Data Preparation. We use 8 real-world datasets containing different high-order
relationships: email correspondence, paper coauthorship, social interactions,
biological groups, shared restaurants, etc. They feature significantly varying
numbers of nodes and hyperedges, distributions of hyperedge sizes, etc., as
summarized in Table 2.
To generate a training set and a query set, we follow two common standards to
split the collection of hyperedges in each dataset: (1) For datasets that come
in natural segments, such as DBLP and Enron whose hyperedges are timestamped,
we follow their segments so that training and query contain two disjoint and
roughly equal-sized sets of hyperedges. For DBLP, we construct
$\mathcal{H}_{0}$ from year 2011 and $\mathcal{H}_{1}$ from year 2010; for
Enron, we use 02/27/2001, 23:59 as a median timestamp to split all emails into
$\mathcal{H}_{0}$ (first half) and $\mathcal{H}_{1}$ (second half). (2) For
all the other datasets that lack natural segments, we randomly split the set
of hyperedges into halves. To enforce inductiveness, we randomly re-index node
IDs in each split. Finally, we project $\mathcal{H}_{0}$ and $\mathcal{H}_{1}$
to get $G_{0}$ and $G_{1}$ respectively.
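For concreteness, the clique-expansion projection used above can be computed
as in the following sketch. The function name `project` is ours, not from the
released code; hyperedges are assumed to be iterables of node ids.

```python
import itertools
import networkx as nx

def project(hyperedges):
    """Clique-expansion projection: every hyperedge becomes a clique."""
    G = nx.Graph()
    for E in hyperedges:
        G.add_nodes_from(E)  # keep nodes of singleton hyperedges
        G.add_edges_from(itertools.combinations(E, 2))
    return G

# Usage on a split: G0, G1 = project(H0), project(H1)
```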
Beyond the two common standards, we conduct semi-supervised reconstruction and
transfer reconstruction in Sec. 6.4.
Training Configuration. For models requiring backpropagation, we use the
cross-entropy loss and optimize using Adam for $2000$ epochs with a learning
rate of $0.0001$. Models with randomized modules are repeated 10 times with
different seeds. See Appendix C for more tuning details.
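As an illustration of this configuration, the sketch below trains a stand-in
classifier under the stated hyperparameters. The architecture is not fixed in
the text, so a small MLP over the 52 motif features is assumed here purely for
concreteness.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the hyperedge classifier: a small MLP over the
# 52 motif features. Only the training hyperparameters (Adam, lr=0.0001,
# 2000 epochs, cross entropy) come from the paper.
model = nn.Sequential(nn.Linear(52, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=0.0001)
loss_fn = nn.CrossEntropyLoss()

def train(X, y, epochs=2000):
    """X: (n, 52) float tensor of features; y: (n,) long tensor of 0/1 labels."""
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
```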
### 6.2. Quality of Reconstruction
| DBLP | Enron | P.School | H.School | Foursquare | Hosts-Virus | Directors | Crimes
---|---|---|---|---|---|---|---|---
CFinder (Palla et al., 2005) | 11.35 | 0.45 | 0.00 | 0.00 | 0.39 | 5.02 | 41.18 | 6.86
Demon (Coscia et al., 2012) | - | 2.35 | 0.09 | 2.97 | 16.51 | 7.28 | 90.48 | 63.81
Maximal Clique (Bron and Kerbosch, 1973) | 79.13 | 4.19 | 0.09 | 2.38 | 9.62 | 22.41 | 100.0 | 78.76
Clique Covering (Conte et al., 2016) | 73.15 | 6.61 | 1.95 | 6.89 | 79.89 | 41.00 | 100.0 | 75.78
Hyper-SAGNN (Zhang et al., 2019) | 0.12$\pm$0.01 | 0.19$\pm$0.17 | 12.13$\pm$0.33 | 8.89$\pm$0.15 | 0.01$\pm$0.01 | 7.35$\pm$0.48 | 1.94$\pm$1.12 | 0.86$\pm$ 0.17
CMM (Zhang et al., 2018) | 0.11$\pm$0.04 | 0.48$\pm$0.06 | 14.26$\pm$0.92 | 4.27$\pm$0.46 | 0.00$\pm$0.00 | 6.44$\pm$ 0.82 | 2.55$\pm$0.62 | 0.57$\pm$0.29
Bayesian-MDL (Young et al., 2021) | 73.08$\pm$0.00 | 4.57$\pm$0.07 | 0.18$\pm$0.01 | 3.58$\pm$0.03 | 69.93$\pm$0.59 | 40.24$\pm$0.12 | 100.0$\pm$0.00 | 74.91$\pm$0.11
SHyRe-count | 81.18$\pm$0.02 | 13.50$\pm$0.32 | 42.60$\pm$0.61 | 54.56$\pm$0.10 | 74.56$\pm$0.32 | 48.85$\pm$0.11 | 100.0$\pm$0.00 | 79.18$\pm$0.42
SHyRe-motif | 81.19$\pm$0.02 | 16.02$\pm$0.35 | 43.06$\pm$0.77 | 54.39$\pm$0.25 | 71.88$\pm$0.28 | 45.16$\pm$0.55 | 100.0$\pm$0.00 | 79.27$\pm$0.40
Table 3. Comparison of different methods on hypergraph reconstruction,
performance measured in Jaccard Similarity (%). Standard deviation is dropped
where the method is deterministic or randomization has no effect on
performance. “-” means the algorithm did not stop in 72 hours.
We name our approach SHyRe (Supervised Hypergraph Reconstruction). Table 3
shows the quality of reconstruction measured by the Jaccard score (see Sec. 3).
The two variants of SHyRe strongly outperform all baselines on most datasets
(7/8). The improvement is most significant on hard datasets such as P. School,
H. School, and Enron. We attribute the success to our reconstruction framework
making the best use of supervised signals: we derive insights from our
topological analysis and use them to efficiently probe high-quality hyperedge
candidates and to design good classifiers.
The two variants perform similarly. SHyRe-motif wins by a very small margin if
we count the number of datasets on which one variant achieves the best score.
We further study the two variants and find that they both capture a strong
feature (more in 6). Among the baselines, Clique Covering and Bayesian-MDL
work relatively well. Still, the “principle of parsimony” (Young et al., 2021)
suffers on dense hypergraphs if we interpret the results in light of Table 2.
Fig. 6 further visualizes fine-grained performance measured by the partitioned
errors defined in Def. 6.2. Further explanations are included in the caption.
We can see that SHyRe variants reduce Error I and Error II significantly more
than the baselines do.
### 6.3. Further Evaluations of Reconstruction
Beyond Jaccard score, we seek to further understand (a) whether our
reconstructions preserve important properties of the original hypergraphs, (b)
the running time of our reconstructions, and, more curiously, (c) what exactly
gets recovered. To this end, we study the property space, the asymptotic
running time, and an actual visualization of the reconstructions.
Property Space. The procedure works as follows: we first characterize the
property space of the reconstruction using the middle four columns of Table 2:
$[|\mathcal{E}|,\mu(\mathcal{E}),\sigma(\mathcal{E}),\bar{d}(V)]$. $|V|$ is
excluded from the property vector as it is known from the input. Then for each
(dataset, method) combination, we analyze the reconstruction and obtain a
unique property vector. We use PCA to project all (normalized) property
vectors into 2D space, visualized in Fig.7.
In Fig. 7, colors encode datasets, and marker styles encode methods. Compared
with the baselines, SHyRe variants (○ and $\times$) produce reconstructions
more similar to the ground truth ($\blacksquare$). The reasons are two-fold:
(1) SHyRe variants usually achieve better accuracy, which encourages a more
aligned property space; (2) as a bonus of our greedy Alg. 1, in each iteration
it tends to pick a cell from a different column. This is because cells in the
same column have diminishing returns due to the overlap of
$\mathcal{E}_{n,k}$ with the same $k$, whereas cells in different columns
remain unaffected as they contain hyperedges of different sizes. This
inclination toward diverse hyperedge sizes reduces the chance of a skewed size
distribution.
We also observe that markers of the same color are generally clustered,
meaning that most baselines work to some extent despite occasionally low
accuracy. Fig. 7 also embeds a histogram showing the size distribution of the
reconstructed hyperedges on DBLP. The distribution obtained by SHyRe aligns
decently with the ground truth, especially for large hyperedges. Some errors
are made at sizes 1 and 2, which are mostly the nested cases of Fig. 2.
Figure 6. Partitioned performance of all methods on all datasets. Recall that
Error I and Error II are mistakes made by Max Clique (Def. 6.2). Other methods
may make mistakes that Max Clique does not make, which are counted as “Other
Error”. We can see that SHyRe reduces Error I and II more than the other
baselines.
Figure 7. 2D embeddings of statistical properties of reconstructed
hypergraphs. Different colors correspond to different datasets; different
markers correspond to different methods. Notice that reconstructions of SHyRe
are closest to ground truth ($\blacksquare$) on most datasets.
Running Time. We claim in Sec. 5 that the clique sampler’s complexity is close
to $O(|\mathcal{M}|)+O(|\mathcal{E}|)=O(|\mathcal{M}|)$ in practice. Here we
substantiate this claim with an asymptotic running time analysis. Both the
clique sampler (Steps 1, 2 in Fig. 3) and the hyperedge classifier (Steps 3, 4
in Fig. 3) are tested. For $p\in[30,100]$, we randomly sample $p\%$ of the
hyperedges from DBLP and record the CPU time for running both modules of
SHyRe-motif. The result is plotted in Fig. 8. It shows that both the total CPU
time and the number of maximal cliques are roughly linear in the data usage
(size), which verifies our claim. We provide additional statistics on all
methods’ time complexity in Fig. 15 in the Appendix.
Figure 8. Asymptotic running time analysis of SHyRe-motif on DBLP. The bar
plot is aligned with the left y-axis, and the line plot with the right. Notice
that both the total CPU time and the number of maximal cliques are roughly
linear in the data usage (size).
Visualizing the Reconstruction. Fig. 9 visualizes a portion of the actual
hypergraph reconstructed by SHyRe-motif on the DBLP dataset. We include more
explanation and analysis in the caption.
Figure 9. A small part of the hypergraph reconstructed by SHyRe-motif on the
DBLP dataset. Projection $G_{1}$ is drawn in black. The shaded ellipses are
the reconstructed hyperedges. Those with green dashed outlines are difficult
to reconstruct in the absence of the training data. Notice that
$\\{3,12,2,0,13,1\\}$ is a maximal clique (in this local graph).
### 6.4. Label Scarcity and Distribution Shift
So far, we have assumed that our training split consists of a comparable
number of hyperedges as the query split. However, sometimes we may only have
access to a small subset of hyperedges or a different but similar dataset.
Will the scarcity of training data or the distribution shift become a problem?
We conduct mini-studies to find out more about reconstruction based on semi-
supervised learning a well as transfer learning.
We choose three datasets of different reconstruction difficulties: DBLP,
Hosts-Virus, and Enron. For each, we randomly drop 80% of the hyperedges in
the training split. Our framework is trained and tuned the same way as in the
main experiment.
Table 4 shows the results. The accuracy of the reconstruction is negatively
affected on all datasets. However, SHyRe trained on 20% of the data still
outperforms the best baseline trained on the full data, meaning that it
remains functional in a semi-supervised setting. Comparing column-wise, the
influence is negligible on easy datasets, small on moderately hard datasets,
and large on very hard datasets.
For transfer learning, we train on DBLP 2011 and test SHyRe’s performance on
various other DBLP slices as well as the Microsoft Academic Graph (MAG), a
different bibliographical database similar to DBLP. Fig. 10 shows the results.
We can see that SHyRe remains relatively robust to the distribution shift.
Figure 10. Performance of transfer learning: trained on DBLP 2011 and tested on various other coauthorship datasets.
 | DBLP | Hosts-Virus | Enron
---|---|---|---
Best Baseline (full) | 79.13 | 41.00 | 6.61
SHyRe-motif (full) | 81.19$\pm$0.02 | 45.16$\pm$0.55 | 16.02$\pm$0.35
SHyRe-count (20%) | 81.17$\pm$0.01 | 44.02$\pm$0.39 | 6.43$\pm$0.18
SHyRe-motif (20%) | 81.17$\pm$0.01 | 44.48$\pm$0.21 | 10.56$\pm$0.92
Table 4. Performance of semi-supervised reconstruction using 20% of the
training hyperedges, measured in Jaccard Similarity.
### 6.5. Ablation Study on Clique Sampler
In Sec. 5.3 we extensively discuss how to optimize the clique sampler. One
might argue that this optimization appears complex: can we adopt a simpler
sampling heuristic that works independently of the notion of the $r_{n,k}$’s?
We investigate this via an ablation study.
We test three sampling heuristics that might replace the clique sampler.
(1) “random”: we randomly sample $\beta$ cliques from the projection as
candidates; since strict uniformity is hard to achieve (khorvash, 2009), we
approximate it by growing a clique from a random node and stopping the growth
when the clique reaches a random size (see the sketch after this paragraph).
(2) “small”: we randomly sample $\beta$ cliques of sizes 1 and 2 (i.e., nodes
and edges). (3) “head & tail”: we randomly sample $\beta$ cliques from all
cliques of sizes 1 and 2 as well as the maximal cliques.
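A minimal sketch of the “random” heuristic, assuming a networkx graph; this is
our own illustration of the grow-and-stop procedure, not the released code:

```python
import random
import networkx as nx

def random_clique(G, max_size=10):
    """Grow a clique from a random node of G; stop at a random target size.

    Approximates (non-uniformly) random clique sampling: at each step we may
    only add nodes adjacent to every current clique member.
    """
    v = random.choice(list(G.nodes))
    clique = [v]
    target = random.randint(1, max_size)
    cand = set(G.neighbors(v))  # nodes adjacent to all current members
    while len(clique) < target and cand:
        u = random.choice(list(cand))
        clique.append(u)
        cand &= set(G.neighbors(u))
    return clique
```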
Fig. 11 compares the efficacy of the sampling stage on the Enron dataset. It
shows that our clique sampler significantly outperforms all the heuristics and
thus cannot be replaced. Also, the close alignment between the training curve
and the query curve means that our clique sampler generalizes well. We further
report reconstruction performance on 3 datasets in Table 7, which confirms
this point.
Figure 11. Our clique sampler and ablation studies. Each marker is an
iteration in Alg. 1. The alignment between the training curve and the query
curve shows that our clique sampler generalizes well.
## 7\. Additional Related Work
There are three lines of work pertinent to the reconstruction task.
Edge Clique Covering aims to find a minimal set of cliques that covers all of
a graph’s edges. The clique-based projection makes this task highly relevant.
(Erdös et al., 1966) proves that any graph can be covered by at most
$\lfloor|V|^{2}/4\rfloor$ cliques. (Conte et al., 2016) finds a fast heuristic
for approximating the solution. (Young et al., 2021) creates a probabilistic
framework to redefine and solve the task. However, this line of work shares
the “principle of parsimony”, which is often impractical on real-world
datasets.
Hyperedge Prediction aims to identify missing hyperedges of an incomplete
hypergraph from a pool of given candidates. Existing work focuses on
characterizing a node set’s structural features. The methods span proximity
measures (Benson et al., 2018), deep learning (Li et al., 2020; Zhang et al.,
2019), and matrix factorization (Zhang et al., 2018). Despite the relevance,
the task has a very different setting and focus from ours, as mentioned in
Sec. 1.
Community Detection finds node clusters within which the edge density is much
higher than outside. Existing work roughly falls into two categories according
to the community type: disjoint (Que et al., [n. d.]; Traag et al., 2019) and
overlapping (Coscia et al., 2012; Palla et al., 2005). As mentioned, however,
the notion of “relative density” is not compatible with our focus on cliques.
## 8\. Conclusion
We propose the supervised hypergraph reconstruction task to effectively
understand and compensate for the loss of high-order information common in
graph data analysis. Our well-motivated reconstruction framework consists of a
clique sampler and a hyperedge classifier. Its success is substantiated by
extensive experiments. For future work, our setting can be extended in many
meaningful directions. For example, how can we improve the reconstruction if
we have node attributes or edge multiplicity? What if a hyperedge is projected
not into a clique but into a “star” or a “line”? These studies will
substantially benefit graph analysis and scientific discovery.
## References
* (1)
* Arya et al. (2020) Devanshu Arya, Deepak K Gupta, Stevan Rudinac, and Marcel Worring. 2020. Hypersage: Generalizing inductive representation learning on hypergraphs. _arXiv preprint arXiv:2010.04558_ (2020).
* Arya and Worring (2018) Devanshu Arya and Marcel Worring. 2018. Exploiting relational information in social networks using geometric deep learning on hypergraphs. In _Proceedings of the 2018 ACM on International Conference on Multimedia Retrieval_. 117–125.
* Beeri et al. (1983) Catriel Beeri, Ronald Fagin, David Maier, and Mihalis Yannakakis. 1983. On the desirability of acyclic database schemes. _Journal of the ACM (JACM)_ 30, 3 (1983).
* Benson et al. (2018) Austin R Benson, Rediet Abebe, Michael T Schaub, Ali Jadbabaie, and Jon Kleinberg. 2018. Simplicial closure and higher-order link prediction. _Proceedings of the National Academy of Sciences_ 115, 48 (2018), E11221–E11230.
* Berge (1973) Claude Berge. 1973\. Graphs and hypergraphs. (1973).
* Berge and Duchet (1975) Claude Berge and Pierre Duchet. 1975. A generalization of Gilmore’s theorem. _Recent advances in graph theory_ (1975), 49–55.
* Bron and Kerbosch (1973) Coen Bron and Joep Kerbosch. 1973. Algorithm 457: finding all cliques of an undirected graph. _Commun. ACM_ 16, 9 (1973), 575–577.
* Brouwer et al. (2013) Andries E Brouwer, CF Mills, WH Mills, and A Verbeek. 2013\. Counting families of mutually intersecting sets. _the electronic journal of combinatorics_ (2013), P8–P8.
* Conte et al. (2016) Alessio Conte, Roberto Grossi, and Andrea Marino. 2016\. Clique covering of large real-world networks. In _31st ACM Symposium on Applied Computing_. 1134–1139.
* Coscia et al. (2012) Michele Coscia, Giulio Rossetti, Fosca Giannotti, and Dino Pedreschi. 2012. Demon: a local-first discovery method for overlapping communities. In _Proceedings of the 18th ACM SIGKDD international conference_. 615–623.
* Dai et al. (2020) Sicheng Dai, Hélène Bouchet, Aurélie Nardy, Eric Fleury, Jean-Pierre Chevrot, and Márton Karsai. 2020. Temporal social network reconstruction using wireless proximity sensors: model selection and consequences. _EPJ Data Science_ (2020).
* Ebbinghaus and Flum (1999) Heinz-Dieter Ebbinghaus and Jörg Flum. 1999. _Finite model theory_. Springer Science & Business Media.
* Erdös et al. (1966) Paul Erdös, Adolph W Goodman, and Louis Pósa. 1966\. The representation of a graph by set intersections. _Canadian Journal of Mathematics_ 18 (1966), 106–112.
* Henderson et al. (2012) Keith Henderson, Brian Gallagher, Tina Eliassi-Rad, Hanghang Tong, Sugato Basu, Leman Akoglu, Danai Koutra, Christos Faloutsos, and Lei Li. 2012. Rolx: structural role extraction & mining in large graphs. In _Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining_.
* Hu et al. (2020) Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. 2020\. Open graph benchmark: Datasets for machine learning on graphs. _Advances in neural information processing systems_ 33 (2020), 22118–22133.
* khorvash (2009) Massih khorvash. 2009\. _On uniform sampling of cliques_. Ph. D. Dissertation. University of British Columbia. https://doi.org/10.14288/1.0051700
* Klimt and Yang (2004) Bryan Klimt and Yiming Yang. 2004. Introducing the Enron corpus.. In _CEAS_.
* Kook et al. (2020) Yunbum Kook, Jihoon Ko, and Kijung Shin. 2020. Evolution of real-world hypergraphs: Patterns and models without oracles. In _2020 IEEE International Conference on Data Mining (ICDM)_. IEEE, 272–281.
* Leskovec et al. (2005) Jure Leskovec, Jon Kleinberg, and Christos Faloutsos. 2005\. Graphs over time: densification laws, shrinking diameters and possible explanations. In _SIGKDD_.
* Li et al. (2020) Pan Li, Yanbang Wang, Hongwei Wang, and Jure Leskovec. 2020\. Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning. _Advances in Neural Information Processing Systems_ 33 (2020).
* Madan et al. (2011) Anmol Madan, Manuel Cebrian, Sai Moturu, Katayoun Farrahi, et al. 2011\. Sensing the “health state” of a community. _IEEE Pervasive Computing_ 11, 4 (2011).
* Mullane et al. (2011) John Mullane, Ba-Ngu Vo, Martin D Adams, and Ba-Tuong Vo. 2011\. A random-finite-set approach to Bayesian SLAM. _IEEE transactions on robotics_ 27, 2 (2011).
* Newman (2004) Mark EJ Newman. 2004\. Coauthorship networks and patterns of scientific collaboration. _Proceedings of the national academy of sciences_ 101, suppl 1 (2004).
* Ozella et al. (2021) Laura Ozella, Daniela Paolotti, Guilherme Lichand, Jorge P Rodríguez, Simon Haenni, John Phuka, Onicio B Leal-Neto, and Ciro Cattuto. 2021. Using wearable proximity sensors to characterize social contact patterns in a village of rural Malawi. _EPJ Data Science_ 10, 1 (2021), 46.
* Palla et al. (2005) Gergely Palla, Imre Derényi, Illés Farkas, and Tamás Vicsek. 2005. Uncovering the overlapping community structure of complex networks in nature and society. _nature_ 435, 7043 (2005), 814–818.
* Perozzi et al. (2014) Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. 2014\. Deepwalk: Online learning of social representations. In _Proceedings of the 20th ACM SIGKDD conference_.
* Que et al. ([n. d.]) Xinyu Que, Fabio Checconi, Fabrizio Petrini, and John A Gunnels. [n. d.]. Scalable community detection with the louvain algorithm. In _2015 IEEE IPDPS_. IEEE.
* Safari-Alighiarloo et al. (2014) Nahid Safari-Alighiarloo, Mohammad Taghizadeh, Mostafa Rezaei-Tavirani, Bahram Goliaei, and Asghar Peyvandi. 2014. Protein-protein interaction networks and complex diseases. _Gastroenterology and Hepatology_ (2014), 17.
* Sarigöl et al. (2014) Emre Sarigöl, René Pfitzner, Ingo Scholtes, Antonios Garas, and Frank Schweitzer. 2014. Predicting scientific success based on coauthorship networks. _EPJ Data Science_ 3 (2014), 1–16.
* Tomita et al. (2006) Etsuji Tomita, Akira Tanaka, and Haruhisa Takahashi. 2006\. The worst-case time complexity for generating all maximal cliques and computational experiments. _Theoretical computer science_ 363, 1 (2006), 28–42.
* Traag et al. (2019) Vincent A Traag, Ludo Waltman, and Nees Jan Van Eck. 2019\. From Louvain to Leiden: guaranteeing well-connected communities. _Scientific reports_ 9, 1 (2019).
* Wolf et al. (2016) Michael M Wolf, Alicia M Klinvex, and Daniel M Dunlavy. 2016\. Advantages to modeling relational data using hypergraphs versus graphs. In _2016 IEEE High Performance Extreme Computing Conference (HPEC)_. IEEE, 1–7.
* Xu et al. (2018) Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. 2018\. How powerful are graph neural networks? _arXiv preprint arXiv:1810.00826_ (2018).
* Xu et al. (2013) Ye Xu, Dan Rockmore, and Adam M. Kleinbaum. 2013. Hyperlink Prediction in Hypernetworks Using Latent Social Features. In _Discovery Science_.
* Yadati et al. (2020) Naganand Yadati, Vikram Nitin, Madhav Nimishakavi, Prateek Yadav, Anand Louis, and Partha Talukdar. 2020. _NHP: Neural Hypergraph Link Prediction_. New York, NY, USA, 1705–1714.
* Young et al. (2021) Jean-Gabriel Young, Giovanni Petri, and Tiago P Peixoto. 2021\. Hypergraph reconstruction from network data. _Communications Physics_ 4, 1 (2021), 1–11.
* Zhang et al. (2018) Muhan Zhang, Zhicheng Cui, Shali Jiang, and Yixin Chen. 2018\. Beyond link prediction: Predicting hyperlinks in adjacency space. In _AAAI 2018_.
* Zhang et al. (2019) Ruochi Zhang, Yuesong Zou, and Jian Ma. 2019. Hyper-SAGNN: a self-attention based graph neural network for hypergraphs. In _ICLR 2019_.
## Appendix A Proofs
### A.1. Proof of Theorem 4.1
###### Proof.
From Def. 2, it suffices to show that every $C$ that is not a maximal clique
is not in $\mathcal{E}$. This holds because such a $C$ has to be a proper
subset of some maximal clique $C^{\prime}\in\mathcal{E}$, and since $H$ is
Sperner, $C$ cannot be a hyperedge. ∎
### A.2. Proof of Theorem 4.2
###### Proof.
The “if” direction: Suppose that $\mathcal{H}$ is not conformal. According to
Def. 2, there exists a maximal clique $C\notin\mathcal{E}$. Clearly, every $C$
with $|C|\leq 2$ satisfies $C\in\mathcal{E}$, so consider a
$C\notin\mathcal{E}$ with $|C|\geq 3$, and pick any two nodes $v_{1},v_{2}\in C$.
Because they are connected, $v_{1},v_{2}$ must be in some hyperedge $E_{i}$.
Now pick a third node $v_{3}\notin E_{i}$. Likewise, there exists some
different $E_{j}$ such that $v_{1},v_{3}\in E_{j}$, and some different $E_{q}$
such that $v_{2},v_{3}\in E_{q}$. Notice that $E_{j}\neq E_{q}$, because
otherwise the three nodes would be in the same hyperedge. Now we have
$\\{v_{1},v_{2},v_{3}\\}\subseteq(E_{i}\cap E_{j})\cup(E_{j}\cap
E_{q})\cup(E_{q}\cap E_{i})$. Because $\\{v_{1},v_{2},v_{3}\\}$ is not
contained in any single hyperedge, $(E_{i}\cap E_{j})\cup(E_{j}\cap
E_{q})\cup(E_{q}\cap E_{i})$ is also not contained in any single hyperedge.
The “only if” direction: Because every two of the three intersections share a
common hyperedge, their union is a clique. This clique must be contained in
some hyperedge, because otherwise the maximal clique containing it would not
be contained in any hyperedge. ∎
Alternatively, there is a less intuitive proof that builds on results from
existing work via a detour: it can be proved that $\mathcal{H}$ being conformal
is equivalent to its dual $\mathcal{H}^{\prime}$ being Helly (Berge, 1973).
According to an equivalent characterization of the Helly property mentioned in
(Berge and Duchet, 1975), for every set $A$ of 3 nodes in
$\mathcal{H}^{\prime}$, the intersection of the edges $E_{i}$ with
$|E_{i}\cap A|\geq 2$ is non-empty. Upon a dual transformation, this result
can be translated into the statement of Theorem 4.2. We refer the interested
reader to the original text.
### A.3. Proof of Theorem 4.3
Given a set $X=\\{1,2,...,m\\}$ and a hypergraph
$\mathcal{H}=(V,\\{E_{1},...E_{m}\\})$, we define $f:2^{X}\rightarrow 2^{V}$
to be:
$f(S)=\left(\bigcap_{i\in S}E_{i}\right)\bigcap\left(\bigcap_{i\in X\backslash
S}\bar{E_{i}}\right),\;S\subseteq X$
where $\bar{E_{i}}=V\backslash E_{i}$.
###### Lemma A.1.
$\\{f(S)|S\in 2^{X}\\}$ is a partition of $V$.
###### Proof.
Clearly, $f(S_{i})\cap f(S_{j})=\emptyset$ for all $S_{i}\neq S_{j}$, so the
elements of $\\{f(S)|S\in 2^{X}\\}$ are disjoint. Meanwhile, for every node
$v\in V$, we can construct an $S=\\{i|v\in E_{i}\\}$ so that $v\in f(S)$.
Therefore, the union of all elements of $\\{f(S)|S\in 2^{X}\\}$ spans $V$. ∎
Because Lemma A.1 holds, for any $v\in V$ we can define the inverse function
$f^{-1}(v)=S\Leftrightarrow v\in f(S)$. Here $f^{-1}$ is a signature function
that represents a node of $V$ by a subset of $X$, whose physical meaning is
the set of indices of the hyperedges of $\mathcal{H}$ that contain the node.
###### Lemma A.2.
If $S_{1}\cap S_{2}\neq\emptyset$ for $S_{1},S_{2}\subseteq X$, then for every
$v_{1}\in f(S_{1})$ and $v_{2}\in f(S_{2})$, $(v_{1},v_{2})$ is an edge in
$\mathcal{H}$’s projection $G$. Conversely, if $(v_{1},v_{2})$ is an edge in
$G$, then $f^{-1}(v_{1})\cap f^{-1}(v_{2})\neq\emptyset$.
###### Proof.
According to the definition of $f(S)$, $\forall i\in S_{1}\cap
S_{2},\;v_{1},v_{2}\in E_{i}$. Appearing in the same hyperedge means that they
are connected in $G$, so the first part is proved. If $(v_{1},v_{2})$ is an
edge in $G$, there exists an $E_{i}$ that contains both nodes, so
$f^{-1}(v_{1})\cap f^{-1}(v_{2})\supseteq\\{i\\}\neq\emptyset$. ∎
An intersecting family $\mathcal{F}$ is a set of non-empty sets with non-empty
pairwise intersections, i.e., $S_{i}\cap S_{j}\neq\emptyset$ for all
$S_{i},S_{j}\in\mathcal{F}$. Given a set $X$, a maximal intersecting family of
subsets of $X$ is an intersecting family $\mathcal{F}$ that satisfies two
additional conditions: (1) each element of $\mathcal{F}$ is a subset of $X$;
(2) no other subset of $X$ can be added to $\mathcal{F}$ while keeping it
intersecting.
###### Lemma A.3.
Given a hypergraph $\mathcal{H}=(V,\\{E_{1},E_{2},...,E_{m}\\})$, its
projection $G$, and $X=\\{1,2,...,m\\}$, the two statements below are true:
* •
If a node set $C\subseteq V$ is a maximal clique in $G$, then
$\\{f^{-1}(v)|v\in C\\}$ is a maximal intersecting family of subsets of $X$.
* •
Reversely, if $\mathcal{F}$ is a maximal intersecting family of subsets of
$X$, then $\cup_{S\in\mathcal{F}}f(S)$ is a maximal clique in $G$.
###### Proof.
For the first statement, clearly $\forall v\in V,\;f^{-1}(v)\subseteq X$.
Because $C$ is a clique, every pair of nodes in $C$ is an edge in $G$.
According to Lemma A.2, $\forall v_{1},v_{2}\in C,\;f^{-1}(v_{1})\cap
f^{-1}(v_{2})\neq\emptyset$. Finally, because $C$ is maximal, there does not
exist a node $v\in V$ that can be added to $C$. Equivalently there does not
exist a $S=f^{-1}(v)$ that can be added to $f^{-1}(C)$. Therefore, $f^{-1}(C)$
is maximal.
For the second statement, because $\mathcal{F}$ is an intersecting family,
$\forall S_{1},S_{2}\in\mathcal{F},\;S_{1}\cap S_{2}\neq\emptyset$. According
to Lemma A.2, every pair $v_{1},v_{2}\in\cup_{S\in\mathcal{F}}f(S)$ forms an
edge in $G$, so $\cup_{S\in\mathcal{F}}f(S)$ is a clique. Also, no other node
$v$ can be added to it: otherwise $\\{f^{-1}(v)\\}\cup\mathcal{F}$ would still
be an intersecting family while $f^{-1}(v)\notin\mathcal{F}$, making
$\mathcal{F}$ strictly larger, a contradiction. Therefore,
$\cup_{S\in\mathcal{F}}f(S)$ is a maximal clique.
∎
Lemma A.3 shows that there is a bijective mapping between maximal cliques and
maximal intersecting families of subsets. Given $\mathcal{H}$, $G$, and $X$,
counting the former is equivalent to counting the latter. The result is
denoted as $\lambda(m)$ in the main text. Lemma 2.1 of (Brouwer et al., 2013)
gives a lower bound: $\lambda(m)\geq 2^{\binom{m-1}{\lfloor m/2\rfloor-1}}$.
### A.4. Proof of Theorem 5.1
We start with some definitions. A random finite set, or RFS, is a random
variable whose value is a finite set. Given an RFS $A$, we use
$\mathcal{S}(A)$ to denote $A$’s sample space; for a set $a\in\mathcal{S}(A)$
we use $P_{A}(a)$ to denote the probability that $A$ takes the value $a$. One
way to generate an RFS is via the set sampling operator $\odot$ on two
operands $r$ and $X$, where $r\in[0,1]$ and $X$ is a finite set: $r\odot X$ is
an RFS obtained by uniformly sampling elements from $X$ at sampling rate $r$,
i.e., each element $x\in X$ is kept with probability $r$. Notice that a finite
set $X$ can itself be viewed as an RFS taking a single value. We now
generalize two operations, union and difference, to RFS as follows:
* •
Union $A\cup B$:
$\displaystyle\mathcal{S}(A\cup B)=\\{x\mid x=a\cup b,\ a\in\mathcal{S}(A),\ b\in\mathcal{S}(B)\\},\qquad P_{A\cup B}(x)=\sum_{x=a\cup b,\ a\in\mathcal{S}(A),\ b\in\mathcal{S}(B)}P_{A}(a)P_{B}(b)$
* •
Difference $A\backslash B$:
$\displaystyle\mathcal{S}(A\backslash B)=\\{x\mid x=a\backslash b,\ a\in\mathcal{S}(A),\ b\in\mathcal{S}(B)\\},\qquad P_{A\backslash B}(x)=\sum_{x=a\backslash b,\ a\in\mathcal{S}(A),\ b\in\mathcal{S}(B)}P_{A}(a)P_{B}(b)$
The expected cardinality of an RFS $A$ is denoted by $\mathbb{E}^{c}$, with
$\mathbb{E}^{c}[A]=\sum_{a\in\mathcal{S}(A)}P_{A}(a)\,|a|$. With these
definitions ready, the following propositions hold for RFS $A$ and $B$:
1. (i)
$\mathbb{E}^{c}[A\cup B]=\mathbb{E}^{c}[B\cup A]$;
2. (ii)
$\mathbb{E}^{c}[A\cup B]=\mathbb{E}^{c}[A\backslash B]+\mathbb{E}^{c}[B]$;
3. (iii)
$\mathbb{E}^{c}[A\cup B]\geq\mathbb{E}^{c}[A]$, $\mathbb{E}^{c}[A\cup
B]\geq\mathbb{E}^{c}[B]$;
4. (iv)
$\mathbb{E}^{c}[(r\odot X)\backslash Y]=r|X\backslash
Y|=r\,\mathbb{E}^{c}[X\backslash Y]$ ($X$, $Y$ are both sets).
###### Lemma A.4.
At iteration $(i+1)$, when Alg. 1 samples a cell $(n,k)$ (line 10), it reduces
the gap between $q^{*}$ and the expected number of hyperedges already
collected, $q_{i}$, by a fraction of at least
$\frac{r_{i+1}|\mathcal{Q}_{i+1}|}{\beta}$:
$\frac{q^{*}-q_{i+1}}{q^{*}-q_{i}}\leq 1-\frac{r_{i+1}|\mathcal{Q}_{i+1}|}{\beta}$
###### Proof.
$\displaystyle q^{*}-q_{i}$
$\displaystyle=\;\mathbb{E}^{c}[\cup_{j=1}^{z}r^{*}_{j}\odot\mathcal{E}_{j}]-\mathbb{E}^{c}[\cup_{j=1}^{i}r_{j}\odot\mathcal{E}_{j}]$ (Thm. 5.1 setup)
$\displaystyle\leq\;\mathbb{E}^{c}[(\cup_{j=1}^{z}r^{*}_{j}\odot\mathcal{E}_{j})\cup(\cup_{j=1}^{i}r_{j}\odot\mathcal{E}_{j})]-\mathbb{E}^{c}[\cup_{j=1}^{i}r_{j}\odot\mathcal{E}_{j}]$ (Prop. iii)
$\displaystyle=\;\mathbb{E}^{c}[(\cup_{j=1}^{z}r^{*}_{j}\odot\mathcal{E}_{j})\backslash(\cup_{j=1}^{i}r_{j}\odot\mathcal{E}_{j})]$ (Prop. ii)
$\displaystyle=\;\sum_{t=1}^{z}\mathbb{E}^{c}[(r^{*}_{t}\odot\mathcal{E}_{t})\backslash((\cup_{j=1}^{t-1}r^{*}_{j}\odot\mathcal{E}_{j})\cup(\cup_{j=1}^{i}r_{j}\odot\mathcal{E}_{j}))]$ (Prop. ii)
$\displaystyle\leq\;\sum_{t=1}^{z}\mathbb{E}^{c}[(r^{*}_{t}\odot\mathcal{E}_{t})\backslash(\cup_{j=1}^{i}r_{j}\odot\mathcal{E}_{j})]$ (Prop. iii)
$\displaystyle=\;\sum_{t=1}^{z}r^{*}_{t}\,\mathbb{E}^{c}[\mathcal{E}_{t}\backslash(\cup_{j=1}^{i}r_{j}\odot\mathcal{E}_{j})]$ (Prop. iv)
$\displaystyle=\;\sum_{t=1}^{z}r^{*}_{t}|\mathcal{Q}_{t}|\frac{\mathbb{E}^{c}[\mathcal{E}_{t}\backslash(\cup_{j=1}^{i}r_{j}\odot\mathcal{E}_{j})]}{|\mathcal{Q}_{t}|}$
$\displaystyle\leq\;\sum_{t=1}^{z}r^{*}_{t}|\mathcal{Q}_{t}|\frac{\mathbb{E}^{c}[\mathcal{E}_{i+1}\backslash(\cup_{j=1}^{i}r_{j}\odot\mathcal{E}_{j})]}{|\mathcal{Q}_{i+1}|}$ (Alg. 1, line 9)
$\displaystyle=\;\beta\cdot\frac{r_{i+1}\,\mathbb{E}^{c}[\mathcal{E}_{i+1}\backslash(\cup_{j=1}^{i}r_{j}\odot\mathcal{E}_{j})]}{r_{i+1}|\mathcal{Q}_{i+1}|}$ (Def. of $\beta$)
$\displaystyle=\;\frac{\beta}{r_{i+1}|\mathcal{Q}_{i+1}|}\,\mathbb{E}^{c}[(r_{i+1}\odot\mathcal{E}_{i+1})\backslash(\cup_{j=1}^{i}r_{j}\odot\mathcal{E}_{j})]$ (Prop. iv)
$\displaystyle=\;\frac{\beta}{r_{i+1}|\mathcal{Q}_{i+1}|}(q_{i+1}-q_{i})$ (Thm. 5.1 setup)
Therefore, $\frac{q^{*}-q_{i+1}}{q^{*}-q_{i}}\leq
1-\frac{r_{i+1}|\mathcal{Q}_{i+1}|}{\beta}$. ∎
Now, according to our budget constraint, we have
$\displaystyle\sum_{n,k}\Big(1-\frac{r_{n,k}|\mathcal{Q}_{n,k}|}{\beta}\Big)=z-1,$
where $z$ is the total number of $(n,k)$ pairs with $r_{n,k}>0$, which is a
constant. Finally, since a product of $z$ nonnegative terms with a fixed sum
$z-1$ is maximized when all terms are equal (by the AM–GM inequality), we have
$\frac{q^{*}-q}{q^{*}}=\prod_{i=0}^{z-1}\frac{q^{*}-q_{i+1}}{q^{*}-q_{i}}\leq\prod_{n,k}\Big(1-\frac{r_{n,k}|\mathcal{Q}_{n,k}|}{\beta}\Big)\leq\Big(1-\frac{1}{z}\Big)^{z}<\frac{1}{e}.$
Therefore $q>(1-\frac{1}{e})\,q^{*}$.
### A.5. Comparing Our Clique Sampler and Standard Submodular Optimization
The proof in A.4 suggests two distinctions between our clique sampler and the
standard greedy algorithm for submodular optimization.
* •
The standard greedy algorithm runs deterministically on a set function whose
form is already known. In comparison, our clique sampler runs on a function
defined over Random Finite Sets (RFS) whose form can only be statistically
estimated from the data.
* •
The standard submodular optimization problem forbids picking a set
fractionally. Our problem allows fractional sampling from an RFS (i.e.
$r_{n,k}\in[0,1]$).
As a result, as seen in A.4, it is much harder to prove the optimality of our
clique sampler than to prove it for the greedy algorithm in standard
submodular optimization.
## Appendix B Count features
We define a target clique $C=\\{v_{1},v_{2},...,v_{|C|}\\}$. The 8 features
are:
1. (1)
size of the clique: $|C|$;
2. (2)
avg. node degree: $\frac{1}{|C|}\sum_{v\in C}d(v)$;
3. (3)
avg. node degree (recursive): $\frac{1}{|C|}\sum_{v\in
C}\frac{1}{|\mathcal{N}(v)|}\sum_{v^{\prime}\in\mathcal{N}(v)}d(v^{\prime})$;
4. (4)
avg. node degree w.r.t. max cliques: $\frac{1}{|C|}\sum_{v\in
C}|\\{M\in\mathcal{M}|v\in M\\}|$;
5. (5)
avg. edge degree w.r.t. max cliques: $\frac{1}{|C|}\sum_{v_{1},v_{2}\in
C}|\\{M\in\mathcal{M}\mid v_{1},v_{2}\in M\\}|$;
6. (6)
binarized “edge degree” (w.r.t. max cliques): $\prod_{v_{1},v_{2}\in
C}\mathbbm{1}\left[\,|\\{M\in\mathcal{M}\mid v_{1},v_{2}\in
M\\}|>1\,\right]$, i.e., whether every edge of $C$ is covered by at least two
maximal cliques;
7. (7)
avg. clustering coefficient: $\frac{1}{|C|}\sum_{v\in C}cc(v)$, where $cc(v)$
is the clustering coefficient of node $v$ in the projection;
8. (8)
avg. size of encompassing maximal cliques:
$\frac{1}{|\mathcal{M}^{C}|}\sum_{M\in\mathcal{M}^{C}}|M|$, where
$\mathcal{M}^{C}=\\{M\in\mathcal{M}|C\subseteq M\\}$;
Notice that avg. clustering coefficient is essentially a normalized count of
the edges between direct neighbors.
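For illustration, a minimal networkx-based sketch of a few of these count
features is given below; it is our own rendering of the definitions above, not
the released code, and for efficiency the maximal cliques would normally be
precomputed once per projection.

```python
import networkx as nx
from itertools import combinations

def count_features(G, C, max_cliques=None):
    """A subset of the 8 count features for a target clique C in projection G."""
    M = max_cliques if max_cliques is not None else list(nx.find_cliques(G))
    f = {}
    f["size"] = len(C)                                          # feature (1)
    f["avg_deg"] = sum(G.degree(v) for v in C) / len(C)         # feature (2)
    f["avg_node_deg_maxcliques"] = sum(                         # feature (4)
        sum(1 for m in M if v in m) for v in C) / len(C)
    # feature (6): every edge of C covered by >= 2 maximal cliques
    f["bin_edge_deg"] = int(all(
        sum(1 for m in M if v1 in m and v2 in m) > 1
        for v1, v2 in combinations(C, 2)))
    cc = nx.clustering(G, C)                                    # feature (7)
    f["avg_clustering"] = sum(cc.values()) / len(C)
    enc = [m for m in M if set(C) <= set(m)]                    # feature (8)
    f["avg_enc_maxclique_size"] = (sum(len(m) for m in enc) / len(enc)
                                   if enc else 0.0)
    return f
```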
Feature Rankings. We study the relative importance of the 8 features via an
ablation study. For each dataset, we ablate the 8 features one at a time,
record the performance drops, and use those values to rank the 8 features. We
repeat this for all datasets, obtaining the 8 features’ average rankings,
shown in Fig. 12. More important features have smaller ranking numbers. The
most important feature is the binarized edge degree. It is an indicator of
“whether each edge in the target clique has been covered by at least two
maximal cliques”.
Figure 12. Average rankings of the 8 count features. More important features
have smaller ranking numbers.
## Appendix C Experiments
### C.1. Setup
Machine Specs. All experiments including model training are run on Intel Xeon
Gold 6254 CPU @ 3.15GHz with 1.6TB Memory.
Datasets. The first 4 datasets in Tab.2 are from (Benson et al., 2018); the
rest are from (Young et al., 2021). All source links and data are in our
supplementary materials.
Tuning $\beta$. We find the best $\beta$ by training our model on $90\%$ of
the training data and evaluating on the remaining $10\%$. The best values are
reported in our code instructions.
Baseline Selection and Adaptation. For community detection, the criteria are:
(1) the number of communities must be found automatically; (2) the output must
be overlapping communities. Based on these, we choose the two most
representative methods. We tested Demon and found that it always works best
with min community size $=1$ and $\epsilon=1$. To adapt CFinder, we search for
the best $k$ between the $[0.1,0.5]$ quantiles of the hyperedge sizes on
$\mathcal{H}_{0}$. For hyperedge prediction, we require that the methods not
rely on hypergraphs for prediction and use only the projection. Based on that,
we use the two recent state-of-the-art methods (Zhang et al., 2018, 2019) with
their default hyperparameters for training. For Bayesian-MDL we use its
official library in graph-tool with default hyperparameters. We implemented
the best heuristic in (Conte et al., 2016) for clique covering.
### C.2. Task Extension: Using Edge Multiplicities
Throughout this work, we do not assume that the projected graph has edge
multiplicities. Relying on edge multiplicities addresses a simpler version of
the problem which might limit its applicability. That said, some applications
may come with edge multiplicity information, and it is important to understand
what is possible in this more tractable case. Here we provide an effective
unsupervised method as a foundation for further work.
From Sec. 2, the multiplicity of an edge $(u,v)$ is the number of hyperedges
containing both $u$ and $v$. It is not hard to show that knowledge of the edge
multiplicities does not suffice for perfect reconstruction, so we still must
choose from among a set of available cliques to form hyperedges. In doing this
with multiplicity information, we need to ensure that the cliques we select
add up to the given edge multiplicities. We do this by repeatedly finding
maximal cliques, removing them, and reducing the multiplicities of their edges
by 1. We find that an effective heuristic is to select maximal cliques that
have large size and small average edge multiplicity (combining the two
criteria, for example, using a weighted sum); see the sketch below.
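A minimal sketch of this peeling heuristic, assuming a networkx projection and
an edge-multiplicity map; the scoring weight `w` is a hypothetical knob of our
illustration, not a value from the paper:

```python
import networkx as nx
from itertools import combinations

def peel_by_multiplicity(G, mult, w=1.0):
    """Greedy baseline: repeatedly emit the maximal clique maximizing
    (size - w * average edge multiplicity) as a hyperedge, then decrement
    the multiplicities of its edges, removing edges that reach zero.
    `mult` maps frozenset({u, v}) -> remaining multiplicity of edge (u, v)."""
    G = G.copy()
    mult = dict(mult)
    hyperedges = []
    while G.number_of_edges() > 0:
        cliques = [c for c in nx.find_cliques(G) if len(c) > 1]
        def avg_mult(c):
            pairs = list(combinations(c, 2))
            return sum(mult[frozenset(p)] for p in pairs) / len(pairs)
        best = max(cliques, key=lambda c: len(c) - w * avg_mult(c))
        hyperedges.append(best)
        for p in combinations(best, 2):
            mult[frozenset(p)] -= 1
            if mult[frozenset(p)] == 0:
                G.remove_edge(*p)
    return hyperedges
```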
Table 5 gives the performance on the datasets we study. With edge
multiplicities, our unsupervised baseline outperforms all methods not using
edge multiplicities on most datasets, showing the power of this additional
information. The performance, however, is still far from perfect, and we leave
the study of this interesting extension to future work.
DBLP | Enron | P.School | H.School
---|---|---|---
82.75 (+1.56) | 19.79 (+3.77) | 10.46 (-32.60) | 19.30 (-35.56)
Foursquare | Hosts-Virus | Directors | Crimes
83.91 (+10.35) | 67.86 (+19.01) | 100.0 (+0.00) | 80.47 (+1.20)
Table 5. Performance of the proposed baseline using available edge
multiplicities. In parentheses we report the increment over the
best-performing method not using edge multiplicities (cf. Table 3).
### C.3. Storage Comparison
As mentioned in the Introduction, a side bonus of having a reconstructed
hypergraph rather than a projected graph is that the former typically requires
much less storage space. As a mini-study, we compare the storage of each
hypergraph, its projected graph, and its reconstruction generated by
SHyRe-count. We use the unified data structure of a nested array to represent
the list of edges/hyperedges, with each node indexed by an int64. Fig. 13
visualizes the results. We see that the reconstructions take 1 to 2 orders of
magnitude less storage space than the projected graphs and are closest to the
originals.
Figure 13. Comparing the storage of original hypergraphs, projected graphs and
reconstructed hypergraphs. Each hypergraph/graph is stored as a nested array
with int64 node indexes. Over the 8 datasets on average, a reconstructed
hypergraph takes only $4.6\%$ the storage of a projected graph.
### C.4. More Results
Figure 14. Pairwise distance between the $\rho(n,k)$ distributions of all datasets, using the mean squared difference of all cell values (after alignment). The distance matrix obtained is shown above. The diagonal cell is the darkest in its row (or column).
Figure 15. CPU time comparison. Notice that Bayesian-MDL is written in C++, CMM in Matlab, and all other methods in Python.
DBLP | Enron | P.School | H.School
---|---|---|---
$1.0E6$, ($\gg 6.0E10$) | $1.0E3$, $6.9E4$ | $3.5E5$, $2.6E6$ | $6.0E4$, $8.4E5$
Foursquare | Hosts-Virus | Directors | Crimes
$2.0E4$, ($\gg 1.1E12$) | $6.0E3$, ($\gg 2.2E12$) | $800$, $4.5E5$ | $1.0E3$, $2.9E5$
Table 6. Optimal clique sampling number $\beta$ and total number of cliques $|\mathcal{U}|$. “$\gg$” appears if the dataset contains too many cliques to be enumerated by our machine in 24 hours, in which case a conservative lower bound is estimated instead.
 | DBLP | Hosts-Virus | Enron
---|---|---|---
original (SHyRe-motif) | 81.19$\pm$0.02 | 45.16$\pm$0.55 | 16.02$\pm$0.35
ablation: “random” | 0.17$\pm$0.00 | 0.00$\pm$0.00 | 0.54$\pm$0.49
ablation: “small” | 1.12$\pm$0.52 | 1.38$\pm$0.70 | 8.57$\pm$0.89
ablation: “head & tail” | 27.42$\pm$0.54 | 29.92$\pm$0.54 | 11.99$\pm$0.10
Table 7. Ablation study: comparing the performance obtained by replacing the
clique sampler with simpler heuristics for sampling.
# Imaging reconstruction method on X-ray data of CMOS polarimeter combined
with coded aperture
Tsubasa Tamba Hirokazu Odaka Taihei Watanabe Toshiya Iwata Tomoaki Kasuga
Atsushi Tanimoto Satoshi Takashima Masahiro Ichihashi Hiromasa Suzuki Aya
Bamba Japan Aerospace Exploration Agency, Institute of Space and
Astronautical Science, 3-1-1, Yoshino-dai, Chuo-ku, Sagamihara, Kanagawa
252-5210, Japan Department of Physics, Faculty of Science, The University of
Tokyo, 7-3-1, Hongo, Bunkyo-ku, Tokyo 113-0033, Japan Department of Earth and
Space Science, Osaka University, 1-1 Machikaneyama-cho, Toyonaka, Osaka
560-0043, Japan Kavli IPMU, The University of Tokyo, Kashiwa 113-0033, Japan
Graduate School of Science and Engineering, Kagoshima University, 1-21-24,
Korimoto, Kagoshima, Kagoshima 890-0065, Japan Research Center for Early
Universe, Faculty of Science, The University of Tokyo, 7-3-1, Hongo, Bunkyo-
ku, Tokyo 113-0033, Japan Trans-Scale Quantum Science Institute, The
University of Tokyo, Tokyo 113-0033, Japan
###### Abstract
X-ray polarization is a powerful tool for unveiling the anisotropic
characteristics of high-energy celestial objects. We present a novel imaging
reconstruction method designed for hard X-ray polarimeters employing a Si CMOS
sensor and coded apertures, which function as a photoelectron tracker and
imaging optics, respectively. Faced with challenges posed by substantial
artifacts and background noise in the coded aperture imaging associated with
the conventional balanced correlation method, we adopt the Expectation-
Maximization (EM) algorithm as the foundation of our imaging reconstruction
method. The newly developed imaging reconstruction method is validated with
imaging polarimetry and a series of X-ray beam experiments. The method
demonstrates the capability to accurately reproduce an extended source
comprising multiple segments with distinct polarization degrees. Comparative
analysis exhibits a significant enhancement in imaging reconstruction accuracy
compared to the balanced correlation method, with the background noise levels
reduced to 17%. The outcomes of this study enhance the feasibility of CubeSat
imaging polarimetry missions in the hard X-ray band, as the combination of Si
CMOS sensors and coded apertures is a promising approach for realizing them.
###### keywords:
polarimetry , hard X-rays , CMOS imaging sensor , coded aperture imaging ,
CubeSat , EM algorithm
## 1 Introduction
X-ray polarization is a distinctive method for capturing unprecedented images
of high-energy celestial objects. It carries essential information about
anisotropic physical phenomena, such as the magnetic field structure
responsible for synchrotron radiation, the emission geometry of Compton
scattering, and the properties of the strong gravitational field around
compact objects. Recent advancements in X-ray polarization studies have been
marked by the launch of the Imaging X-ray Polarimeter Explorer (IXPE; [1, 2])
in 2021. This mission, which covers the soft X-ray band of 2–8 keV, has
substantially broadened our capabilities in this field (e.g., Crab Nebula
observation: [3]). While IXPE focuses on the soft X-ray band through
photoabsorption, the Soft Gamma-ray Detector (SGD) onboard Hitomi [4] and the
Polarized Gamma-ray Observer+ (PoGO+; [5]) utilize Compton scattering to detect
X-ray polarization in the hard X-ray band. Notably, both missions successfully
detected polarization from the Crab Nebula in the higher end of the hard X-ray
band: 60–160 keV for SGD [6] and 20–160 keV for PoGO+ [7]. However, a
significant observational gap exists in the lower end of the hard X-ray band
due to the absence of established observational techniques, specifically in
the energy range of 10–30 keV. This energy range is particularly important
because non-thermal components, carrying anisotropic information, become
dominant over unpolarized thermal emission above $\sim 10\;{\rm keV}$. In
addition, the abundance of photon counts in this band, compared to higher
energy bands, provides a rich dataset for detailed analysis.
To address this observational gap, a 6U CubeSat mission named the Coded
Imaging Polarimeter of High Energy Radiation (cipher; [8]) is under
development. It aims to conduct imaging polarimetry in the 10–30 keV band and
obtain the polarization map for bright extended sources such as the Crab
Nebula. This imaging polarimeter is characterized by a micro-pixel
($\sim{\rm\mu m}$ pixel pitch) Si CMOS sensor and coded apertures. The micro-
pixel CMOS sensor serves as a sophisticated photoelectron tracker, enabling
the determination of the polarization angle of incident photons. Given the
spatial constraints of the CubeSat mission, coded apertures provide the
solution for imaging capability, eliminating the need for X-ray mirrors. The
CMOS sensor has demonstrated polarization detection capabilities in the 6–30
keV range [9, 10], meeting the fundamental requirements of cipher. The imaging
reconstruction of the coded apertures is also established [11], employing a
conventional reconstruction technique called the “balanced correlation method”
[12]. However, this method introduces significant artifacts and noise levels,
which hinder effective imaging polarimetry.
Recognizing the limitation of the balanced correlation method, this paper
presents a novel imaging reconstruction method for polarimetry utilizing the
Expectation-Maximization (EM) algorithm [13]. It is a statistical approach to
efficiently derive maximum likelihood estimates, and has been applied to
imaging reconstruction techniques extensively (e.g., [14, 15, 16, 17, 18]).
This method holds the potential to achieve imaging polarimetry with enhanced
precision, with reduced artifacts and noise levels. The remainder of this
paper is organized as follows. We first provide the formulation of the EM
algorithm that is applied to imaging polarimetry, and describe the
experimental setup to examine the new method in Section 2. The data processing
and results of imaging reconstruction are presented in Section 3. We then
discuss the precision of our new imaging reconstruction method by comparing it
with the conventional balanced correlation method in Section 4. Conclusions
are presented in Section 5.
## 2 Imaging reconstruction method
### 2.1 Application of EM algorithm to imaging polarimetry
We developed a novel reconstruction method for the coded aperture imaging
based on the EM algorithm [13]. This statistical approach involves estimating
the maximum likelihood from incomplete experimental data through iterative
cycles of Expectation (E-) and Maximization (M-) steps, continuously updating
the estimation. The $l$-th E-step and M-step in the imaging reconstruction
analysis are defined as follows:
$\displaystyle{\rm(E-step)}\;\;\;\;\;\tilde{D}_{v}^{(l)}$ $\displaystyle=$
$\displaystyle\sum_{u}p(v\,|\,u)\tilde{S}_{u}^{(l)},$ (1)
$\displaystyle{\rm(M-step)}\;\;\;\;\;\tilde{S}_{u}^{(l+1)}$ $\displaystyle=$
$\displaystyle\frac{1}{\sum_{v^{\prime}}D_{v^{\prime}}}\sum_{v}D_{v}\frac{p(v\,|\,u)\tilde{S}_{u}^{(l)}}{\tilde{D}_{v}^{(l)}},$
(2)
where $l$ increments by 1 at each iteration [14]. In these equations,
$\tilde{S}_{u}^{(l)}$, $\tilde{D}_{v}^{(l)}$, and $D_{v}$ represent the
estimated incident source distribution, the estimated event distribution on
the detector, and the actual event distribution derived from the experimental
data, respectively. The term $p(v\,|\,u)$ denotes the posterior probability
that a photon from the $u$-th sky region is detected in the $v$-th detector
region.
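As a concrete illustration, a minimal numpy sketch of one EM iteration of
Eqs. (1)-(2) is given below. The matrix `P` holding $p(v\,|\,u)$ and the small
constant `eps` (guarding against division by zero) are our own conventions,
not part of the formulation.

```python
import numpy as np

def em_iteration(S, D, P, eps=1e-12):
    """One EM update of Eqs. (1)-(2).

    S : (n_sky,)        current source estimate S~^(l)
    D : (n_det,)        measured detector counts D_v
    P : (n_det, n_sky)  matrix of posterior probabilities p(v|u)
    """
    D_est = P @ S                         # E-step, Eq. (1)
    ratio = D / np.maximum(D_est, eps)    # guard against division by zero
    S_new = S * (P.T @ ratio) / D.sum()   # M-step, Eq. (2)
    return S_new, D_est
```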
The method is applied to the imaging polarimeter with a CMOS sensor and coded
apertures. The sky region is defined as a two-dimensional angular
distribution, $\bm{u}=(\theta_{x},\;\theta_{y})$, with respect to the optical
axis, while the detector region is defined as a two-dimensional pixel
distribution, $\bm{v}=(d_{x},\;d_{y})$. The polarization angle and
polarization degree need to be assigned to each sky region. Since the CMOS
sensor is sensitive only to the horizontal and vertical directions, we focused
on two Stokes parameters, $I$ and $Q$, ignoring the effect of $U$. These
Stokes parameters are linked to the experimental data through the branching
ratio in double-pixel events, which is, the ratio between horizontally-
elongated (H-type) and vertically-elongated (V-type) events [8]. Equations (1)
and (2) can be expressed as
$\displaystyle\tilde{D}_{\bm{v},\,t}^{(l)}$ $\displaystyle=$
$\displaystyle\sum_{\bm{u}}p(\bm{v}\,|\,\bm{u})\tilde{S}_{\bm{u},\,t}^{(l)}$
(3) $\displaystyle\tilde{S}_{\bm{u},\,t}^{(l+1)}$ $\displaystyle=$
$\displaystyle\frac{1}{\sum_{\bm{v}^{\prime}}D_{\bm{v}^{\prime},\,t}}\sum_{\bm{v}}D_{\bm{v},\,t}\frac{p(\bm{v}\,|\,\bm{u})\tilde{S}_{\bm{u},\,t}^{(l)}}{\tilde{D}_{\bm{v},\,t}^{(l)}},$
(4)
where $t$ is a two-valued variable representing $t={\rm H-type}$ or $t={\rm
V-type}$. The Stokes parameters $I$ and $Q$ can be calculated using
$\displaystyle\begin{pmatrix}I_{\bm{u}}^{(l)}\\\ Q_{\bm{u}}^{(l)}\\\
\end{pmatrix}=\begin{pmatrix}1&1\\\ 1/m&-1/m\\\
\end{pmatrix}\begin{pmatrix}S_{\bm{u},\,{\rm H-type}}^{(l)}\\\
S_{\bm{u},\,{\rm V-type}}^{(l)}\\\ \end{pmatrix},$ (5)
where $m$ denotes the modulation factor of the CMOS sensor. The posterior
probability $p(\bm{v}\,|\,\bm{u})$ is calculated from the coded aperture
pattern:
$\displaystyle p(\bm{v}\,|\,\bm{u})\propto\begin{cases}1&(\bm{u}\;{\rm
to}\;\bm{v}\;{\rm pass\;through\;aperture})\\\ \tau&(\bm{u}\;{\rm
to}\;\bm{v}\;{\rm pass\;through\;mask})\\\ 0&{\rm(otherwise)},\end{cases}$ (6)
where $\tau$ is the transmittance of the coded aperture.
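Putting Eqs. (3)-(6) together, the sketch below runs the EM update
independently for the H-type and V-type event maps and then converts the
results to Stokes maps via Eq. (5). It is a schematic of the procedure, not
the flight software; the names and the uniform initialization (cf. Sec. 3.2)
are our conventions.

```python
import numpy as np

def reconstruct_stokes(D_H, D_V, P, m, n_iter=100, eps=1e-12):
    """EM of Eqs. (3)-(4) for t in {H, V}, then Stokes I, Q via Eq. (5).

    D_H, D_V : (n_det,) detector counts for H-type / V-type events
    P        : (n_det, n_sky) p(v|u) built from the aperture pattern, Eq. (6)
    m        : modulation factor of the CMOS sensor
    """
    n_sky = P.shape[1]
    S = {"H": np.full(n_sky, D_H.sum() / n_sky),   # uniform initialization
         "V": np.full(n_sky, D_V.sum() / n_sky)}
    for _ in range(n_iter):
        for t, D in (("H", D_H), ("V", D_V)):
            D_est = P @ S[t]                                   # E-step
            S[t] = S[t] * (P.T @ (D / np.maximum(D_est, eps))) / D.sum()
    I = S["H"] + S["V"]            # Stokes I, Eq. (5)
    Q = (S["H"] - S["V"]) / m      # Stokes Q, Eq. (5)
    return I, Q
```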
### 2.2 Beam experiment
X-ray beam experiments were conducted at BL20B2 in SPring-8, a synchrotron
radiation facility in Hyogo, Japan, to evaluate the imaging-polarimetry
capability of the system. The beamline emits polarized monochromatic X-rays
with a polarization degree of nearly 100% (e.g., [9]). The left panel of
Figure 1 shows the experimental setup at the beamline. We utilized the
GMAX0505RF sensor, manufactured by GPixel Inc., as the polarization detector.
This Si CMOS sensor, with a pixel pitch of $2.5\;{\rm\mu m}$, was originally
designed for visible-light and near-infrared imaging but is also sensitive to
X-ray polarization [10]. The sensor was positioned at the point farthest from
the beam source. Upstream of it, a $25\;{\rm cm}$ collimator was
mounted to collimate incident X-rays. On the upstream side of the collimator,
a $0.1\;{\rm mm}$ thick board made of SUS304 was attached. This board features
numerous holes, functioning as eight independent coded apertures with distinct
random patterns. The collimator ensures that the shadows of these patterns do
not overlap at the detector plane. Each coded aperture comprises a
$64\times 64$ pattern with a pitch size of $35\;{\rm\mu m}$, covering
$896\times 896$ pixels on the detector plane (Figure 1 middle). The system has
a field of view and angular resolution of $0.5^{\circ}\times 0.5^{\circ}$ and
$29^{\prime\prime}$, respectively. The orientation of the collimator
($\theta_{x}$ and $\theta_{y}$) is adjustable using a rotation stage,
facilitating the control of the incident X-ray direction. Moreover, the entire
system can rotate around the beam axis, allowing for the adjustment of the
polarization angle detected by the CMOS sensor.
A comprehensive sky scan was conducted by manipulating the rotation stage, and
an image for an extended source was created by merging the acquired data. The
data points for the imaging scan experiments are illustrated in the right
panel of Figure 1, covering a $420^{\prime\prime}\times 420^{\prime\prime}$
plane with a pitch size of $15^{\prime\prime}$. A total of $29\times 29=841$
scan points were utilized. Table 1 provides the details on the datasets
employed in this paper. To verify the hard X-ray imaging polarimetry, we
exposed the system to $16.0\;{\rm keV}$ polarized X-rays with polarization
angles of both $0^{\circ}$ and $90^{\circ}$. This beam energy was selected by
balancing between the polarization detection capability and the quantum
efficiency of the sensor. Lower-energy photons would result in low sensitivity
for polarization detection because they are more likely to be recorded as
single-pixel events. Conversely, higher-energy photons lead to poor
statistical quantity due to their smaller cross section with the Si CMOS
sensor. The incident beam was attenuated by a $120\;{\rm\mu m}$ Cu filter to
avoid pile-up. Each dataset has 10 frames per scan point, resulting in a total
of 8410 frames. The initial four datasets in Table 1 were merged to generate
the “$0^{\circ}$ polarization data”, while the latter four were merged to
generate the “$90^{\circ}$ polarization data”.
Figure 1: (left) Imaging polarimetry setup at the SPring-8 beamline. (middle) Picture of the coded apertures. (right) Sky region subject to the imaging scan experiment.

Table 1: List of datasets.

Start time (JST) | Beam energy (keV) | Polarization angle (degree) | Frame exposure (ms) | Number of frames
---|---|---|---|---
2021-11-03 18:50 | 16.0 | 0 | 600 | 8410
2021-11-03 20:40 | 16.0 | 0 | 600 | 8410
2021-11-03 22:42 | 16.0 | 0 | 600 | 8410
2021-11-04 00:22 | 16.0 | 0 | 600 | 8410
2021-11-04 03:42 | 16.0 | 90 | 600 | 8410
2021-11-04 05:23 | 16.0 | 90 | 600 | 8410
2021-11-04 07:00 | 16.0 | 90 | 600 | 8410
2021-11-04 08:54 | 16.0 | 90 | 600 | 8410
## 3 Results
### 3.1 Data processing
We performed standard data processing on the acquired data using ComptonSoft
[19, 20, 21], which included pedestal subtraction, bad pixel exclusion, event
extraction, and event classification. Subsequent to the data processing,
double-pixel events exhibiting either horizontal (H-type) or vertical (V-type)
elongation were specifically selected. The polarization angle and polarization
degree can be determined by measuring the ratio between the two types of
events. The numbers of H-type/V-type events were 9485530/8008517 for
$0^{\circ}$ polarization and 7622987/9741000 for $90^{\circ}$ polarization.
The modulation factor was calculated from these values to be $m=0.1033\pm
0.0001$ for $16.0\;{\rm keV}$ polarized photons (for more details of the
modulation factor, see [10]).
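As a consistency check, the quoted value of $m$ can be reproduced from these event counts by averaging the raw modulations of the two fully polarized datasets, which cancels any intrinsic H/V asymmetry of the sensor. This is one plausible estimator; the exact procedure is described in [10].

```python
# H-type / V-type event counts quoted above
nh0, nv0 = 9485530, 8008517      # 0 deg polarization data
nh90, nv90 = 7622987, 9741000    # 90 deg polarization data

r0 = (nh0 - nv0) / (nh0 + nv0)        # raw modulation at 0 deg
r90 = (nh90 - nv90) / (nh90 + nv90)   # raw modulation at 90 deg

m = (r0 - r90) / 2     # average over the two polarization angles
print(f"m = {m:.4f}")  # -> m = 0.1032, consistent with 0.1033 +/- 0.0001
```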
In addition to the $0^{\circ}$ and $90^{\circ}$ polarization data, a new
dataset named “imaging polarimetry test data” with various polarization
degrees depending on incident directions was generated. This dataset can
easily be generated by appropriately blending the $0^{\circ}$ and $90^{\circ}$
polarization data; for instance, an unpolarized source can be simulated by
mixing equal proportions of the $0^{\circ}$ and $90^{\circ}$ polarization
data. To evaluate the imaging polarimetry capability, we divided the entire
scan region into four sub-regions and assigned distinct polarization degrees
($Q/I$) to them. Figure 2 illustrates the definition of the segmentation. We
assigned $Q/I=0.4$, $-1.0$ ($90^{\circ}$ polarization), $1.0$ ($0^{\circ}$
polarization), and $0.0$ (unpolarized) to Regions 1, 2, 3, and 4,
respectively, where the intensity $I$ was spatially uniform. The first value
($Q/I=0.4$) was assigned to examine a “realistic” polarization degree for a
celestial object, while the latter three values ($Q/I=-1.0$, $1.0$, $0.0$)
were included to examine extreme cases.
Figure 2: Definition of the imaging polarimetry test data (see Section 3.1).
### 3.2 Imaging polarimetry
The EM algorithm was applied to the reconstruction of the polarization map
from the encoded images on the detector plane. We set
$\tilde{S}_{\bm{u},\,t}^{(0)}$ as a spatially uniform distribution, and
iterated through the expectation and maximization steps using equations (3)
and (4). The reconstruction was simultaneously applied to all eight coded
apertures, with the transmittance of the apertures set to $\tau=0.02$
(corresponding to $0.1\;{\rm mm}$ SUS304 for $16\;{\rm keV}$ photons). The
encoded image on the detector plane was binned into $4\times 4$ pixel blocks due to limited memory resources, but this did not affect the results, as the angular resolution is primarily determined by the aperture pitch. The
reconstructed images were generated with an image pixel size of
$20^{\prime\prime}\times 20^{\prime\prime}$, which is slightly smaller than
the angular resolution of the system. The EM steps were iterated until
$l=1500$, ensuring a sufficient convergence. The reconstructed images were
obtained after the vignetting correction was applied.
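Equations (3) and (4) are given earlier in the paper and are not repeated here; the update is of the standard multiplicative ML-EM (Richardson-Lucy) form for coded-aperture deconvolution. The following dense-matrix sketch assumes that form, with the response matrix built from equation (6); it is an illustration, not the production implementation.

```python
import numpy as np

def em_reconstruct(d, p, n_iter=1500):
    """ML-EM sketch (assumed Richardson-Lucy form; cf. eqs. (3)-(4)).

    d : observed counts per detector pixel, shape (n_det,)
    p : response matrix p(v|u) from eq. (6), shape (n_det, n_sky);
        every sky pixel is assumed to have nonzero total sensitivity
    """
    eps = p.sum(axis=0)                            # sensitivity per sky pixel
    s = np.full(p.shape[1], d.sum() / p.shape[1])  # uniform initial map
    for _ in range(n_iter):
        expected = p @ s                           # predicted detector counts
        ratio = np.divide(d, expected, out=np.zeros_like(expected),
                          where=expected > 0)
        s = s * (p.T @ ratio) / eps                # multiplicative EM update
    return s
```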
As a simple case, we first conducted the imaging reconstruction on the
$0^{\circ}$ polarization data, where $I$ and $Q$ distributions should ideally
be identical. The upper panels of Figure 3 display the reconstructed images of
$I$, $Q$, and $Q/I$. The uniformly distributed images of $I$ and $Q$ were
successfully reconstructed, consistent with the actual scan region of
$420^{\prime\prime}\times 420^{\prime\prime}$. The polarization degrees also
yield a convincing value of $Q/I=1$ throughout the entire image. It is
important to note that the $Q/I$ image indicates horizontal polarization of
incident photons even beyond the scan region. This artifact arises because a
portion of the detected photons are inaccurately projected to the outer
region. Consequently, careful examination of both $I$ and $Q/I$ images is
necessary, and polarization degrees should not be relied on in regions where
the photon intensity $I$ is faint. We also tried various patterns of uniformly
distributed $Q/I$ values (0.5, 0, -0.5, -1.0) and confirmed that the
reconstructed images successfully reproduced the uniformly distributed
polarization degrees.
Subsequently, we applied the imaging reconstruction to the imaging polarimetry
test data, which is described in Section 3.1 and defined in Figure 2. The
lower panels of Figure 3 display the reconstructed images. A uniform distribution of $I$ extending over $420^{\prime\prime}\times 420^{\prime\prime}$, similar to the $0^{\circ}$ polarization data, was observed, whereas the image of $Q$ was totally different, being clearly divided into four segments. Each of the four regions corresponds to the definition in Figure 2.
The reconstructed image of $Q/I$ successfully reflects the distinct
polarization degrees of the four regions, displaying $\sim 0.4$, $\sim-1.0$,
$\sim 1.0$, and $\sim 0.0$ for Regions 1, 2, 3, and 4, respectively.
In Figure 4, a detailed evaluation of the imaging reconstruction results is
presented. The left panel shows the distribution of $I/I_{\rm max}$ for the
four regions, where $I_{\rm max}$ corresponds to the maximum value of the
entire image. The four regions display comparable distributions, indicating
that the spatially uniform distribution of $I$ is successfully reproduced. The
right panel exhibits the distribution of the polarization degree $Q/I$ for the
four regions. The four regions display peaks at the predetermined polarization
degrees and are significantly distinguishable from each other. Still, non-
negligible uncertainties of $\Delta(Q/I)\sim 0.2$ remain due to systematic
uncertainties stemming from potentially non-uniform modulation factors across
the detector plane and biased projections caused by randomized patterns of the
coded apertures. While poorer statistics would result in worse reproducibility, this imaging analysis benefits from an average of $\sim 40000$ photon counts contributing to each imaging pixel.
Figure 3: The upper panels show the reconstructed images from $0^{\circ}$
polarization data, while the lower panels show those from imaging polarimetry
test data (see text). Reconstructed images include
$I(\theta_{x},\;\theta_{y})$ (left), $Q(\theta_{x},\;\theta_{y})$ (middle),
and polarization degree $Q/I$ (right). Figure 4: Evaluation of imaging
reconstruction on the imaging polarimetry test data. The left panel shows the
histogram of source intensity plotted for the Regions 1–4. The right panel
displays the histogram of polarization degree plotted for Regions 1–4.
## 4 Discussion
In Section 3, our formulation of the imaging reconstruction method using the
EM algorithm was examined, and the polarization map was successfully
reconstructed from the imaging polarimetry test data. In this section, the
accuracy of the imaging reconstruction was evaluated by comparing our method
with the conventional balanced correlation method [12]. The balanced
correlation method is a prominent decoding method for coded apertures,
especially suitable for Uniformly Redundant Array (URA; [22, 23]), but it also
works for random pattern arrays used in our system (for details of the
balanced correlation method in our system, see Section 2 of [11]).
The left panel of Figure 5 shows the reconstructed $I$ image of the
$0^{\circ}$ polarization data using the balanced correlation method. Despite
the incident flux being spatially uniform along the scan region, it exhibits a
significant imbalance with unpredictably intense signals in the upper left
region. The non-source region also displays large fluctuations with a noisy
background in the upper left region. These artifacts and noisy background
signals arise from the spatially non-uniform distribution of the apertures,
which could back-project more signals to the upper left region of the sky. In
the right panel of Figure 5, the intensity distributions of the source and
background regions are presented, comparing the results between the EM
algorithm and the balanced correlation method. The EM algorithm exhibits sharp
distributions in both source and background regions, with the two
distributions clearly separated. On the other hand, the balanced correlation
method has flat distributions for the source and background regions, and these
two distributions are largely overlapped. This difference indicates fewer
artifacts and lower noise levels in the EM algorithm compared to the balanced
correlation method.
The noise levels can be evaluated by
$\displaystyle L_{\rm noise}=\frac{\sigma_{\rm bkg}}{\mu_{\rm src}},$ (7)
where $\sigma_{\rm bkg}$ denotes the standard deviation in the background
region, and $\mu_{\rm src}$ is the averaged intensity in the source region.
Here, the source region represents a collection of the image pixels within the
central $420^{\prime\prime}\times 420^{\prime\prime}$ area while the
background region denotes a collection of the image pixels outside this area.
We obtain $L_{\rm noise}=0.13$ for the EM algorithm and $L_{\rm noise}=0.75$
for the balanced correlation method, which means that our new imaging
reconstruction method reduces the noise level to $17\%$ of the conventional
method. The significant artifacts and noise levels in the balanced correlation
method had prevented us from creating any efficient polarization map, but the
improved method using the EM algorithm has successfully realized it with
sufficiently reduced artifacts and noise levels, as depicted in Figures 3 and
4.
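Equation (7) translates directly into code. A minimal sketch, assuming a boolean mask that selects the central $420^{\prime\prime}\times 420^{\prime\prime}$ source region of a reconstructed map:

```python
import numpy as np

def noise_level(image, src_mask):
    """L_noise = sigma_bkg / mu_src as in eq. (7).

    image    : 2-D reconstructed intensity map I(theta_x, theta_y)
    src_mask : boolean array, True inside the central source region
    """
    mu_src = image[src_mask].mean()      # mean intensity over the source
    sigma_bkg = image[~src_mask].std()   # standard deviation outside it
    return sigma_bkg / mu_src
```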
Several aspects still require refinement for the application of this system
and analysis method to imaging polarimetry. Firstly, while we assumed a
uniform modulation factor within the CMOS sensor, this assumption needs
careful examination. Deviations of $\sim 0.01$ from the adopted original value
($m=0.1033$) could occur when focusing on specific limited areas of the
sensor. A better understanding of the dependence of the modulation factor on the sensor pixels would mitigate systematic uncertainties in imaging polarimetry. Secondly, the biased random patterns of the coded apertures also
require scrutiny. As observed in the left panel of Figure 5, generated by the
balanced correlation method, these coded apertures tend to project more
photons to the upper left region of the reconstructed image, potentially impacting the accuracy of the imaging reconstruction in the case of the EM algorithm. The development of more sophisticated coded aperture patterns is still underway. Furthermore, the performance of imaging polarimetry is influenced by
the shape of the polarization distribution. While in this study, the
boundaries between different polarization profiles are located around the
center of the field of view, we also confirmed that the reconstruction
accuracy diminishes when these boundaries are located farther from the center.
This is due to reduced sensitivity of the coded apertures in these regions.
These considerations are beyond the scope of this paper and will be addressed
in future discussions.
Figure 5: (left) Reconstructed $I$ image of $0^{\circ}$ polarization data
using the balanced correlation method. (right) Intensity distributions of
source (src) and background (bkg) regions compared between the EM algorithm
(labeled as EM) and the balanced correlation method (labeled as BC).
## 5 Conclusions
We developed a new imaging reconstruction method for hard X-ray imaging
polarimetry employing a combination of a CMOS sensor and coded apertures.
Motivated by the significant artifacts and background noise levels associated
with the conventional balanced correlation method, we introduced the EM
algorithm as the foundation of our new imaging reconstruction method. The
effectiveness of the newly developed method was confirmed through X-ray beam
experiments at SPring-8, where the imaging polarimeter was exposed to the
X-ray beam and captured an extended source by a comprehensive scan of the sky.
The method exhibited remarkable capabilities in accurately reproducing an
extended source comprising multiple segments characterized by distinct
polarization degrees. Specifically, it successfully reproduced four regions
with $Q/I=-1.0$, 0.0, 0.4, 1.0. Our developed imaging reconstruction method
achieved a significant reduction in artifacts and noise levels compared to the
balanced correlation method. The background noise level experienced a
significant reduction to $17\%$. The outcomes of this study demonstrate
sufficient feasibility for the hard X-ray imaging polarimetry utilizing the
combination of CMOS sensors and coded apertures. This approach represents one
of the most promising ways to achieve CubeSat missions on hard X-ray
polarimetry.
## Acknowledgment
We thank the anonymous referees, who improved this work with their valuable
comments. We appreciate the helpful technical support by M. Hoshino and K. Uesugi at SPring-8. This research is supported by the Japan Society for the
Promotion of Science (JSPS) KAKENHI Grant No. 20J20050, 23KJ2214 (TT),
18H05861, 19H01906, 19H05185, 22H00128, 22K18277 (HO), 20J00119 (AT), and
23KJ0780 (TI). The synchrotron radiation experiments were performed at BL20B2
in SPring-8 with the approval of the Japan Synchrotron Radiation Research
Institute (JASRI) (Proposal No. 2019B1369, 2020A1343, 2021B1542, and
2022B1477). This research is also supported by Society for Promotion of Space
Science (TT).
## References
* [1] M. C. Weisskopf, et al., The Imaging X-ray Polarimetry Explorer (IXPE), in: J.-W. A. den Herder, T. Takahashi, M. Bautz (Eds.), Space Telescopes and Instrumentation 2016: Ultraviolet to Gamma Ray, Vol. 9905 of Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, 2016, p. 990517. doi:10.1117/12.2235240.
* [2] M. C. Weisskopf, et al., The Imaging X-Ray Polarimetry Explorer (IXPE): Pre-Launch, Journal of Astronomical Telescopes, Instruments, and Systems 8 (2) (2022) 026002. arXiv:2112.01269, doi:10.1117/1.JATIS.8.2.026002.
* [3] N. Bucciantini, et al., Simultaneous space and phase resolved X-ray polarimetry of the Crab pulsar and nebula, Nature Astronomy 7 (2023) 602–610. arXiv:2207.05573, doi:10.1038/s41550-023-01936-8.
* [4] H. Tajima, et al., Design and performance of Soft Gamma-ray Detector onboard the Hitomi (ASTRO-H) satellite, Journal of Astronomical Telescopes, Instruments, and Systems 4 (2018) 021411. doi:10.1117/1.JATIS.4.2.021411.
* [5] M. Chauvin, et al., Calibration and performance studies of the balloon-borne hard X-ray polarimeter PoGO+, Nuclear Instruments and Methods in Physics Research A 859 (2017) 125–133. arXiv:1703.07627, doi:10.1016/j.nima.2017.03.027.
* [6] Hitomi Collaboration, et al., Detection of polarized gamma-ray emission from the Crab nebula with the Hitomi Soft Gamma-ray Detector, PASJ 70 (6) (2018) 113. arXiv:1810.00704, doi:10.1093/pasj/psy118.
* [7] M. Chauvin, et al., Shedding new light on the Crab with polarized X-rays, Scientific Reports 7 (2017) 7816. arXiv:1706.09203, doi:10.1038/s41598-017-07390-7.
* [8] H. Odaka, et al., Concept of a CubeSat-based hard x-ray imaging polarimeter: cipher, in: Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 11444 of Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, 2020, p. 114445V. doi:10.1117/12.2560615.
* [9] K. Asakura, et al., X-ray imaging polarimetry with a 2.5-$\mu$m pixel CMOS sensor for visible light at room temperature, Journal of Astronomical Telescopes, Instruments, and Systems 5 (2019) 035002. arXiv:1906.00012, doi:10.1117/1.JATIS.5.3.035002.
* [10] T. Iwata, K. Hagino, H. Odaka, T. Tamba, M. Ichihashi, T. Kato, K. Ishiwata, H. Kuramoto, H. Matsuhashi, S. Arai, T. Minami, S. Takashima, A. Bamba, Development of the X-ray polarimeter using CMOS imager: polarization sensitivity of a $1.5~{}{\rm\mu m}$ pixel CMOS sensor, arXiv e-prints (2024) arXiv:2405.18907.
* [11] T. Kasuga, et al., Artifact-less coded aperture imaging in the x-ray band with multiple different random patterns, Journal of Astronomical Telescopes, Instruments, and Systems 6 (2020) 035002. arXiv:2007.15278, doi:10.1117/1.JATIS.6.3.035002.
* [12] E. E. Fenimore, T. M. Cannon, Coded aperture imaging with uniformly redundant arrays, Applied Optics 17 (1978) 337–347. doi:10.1364/AO.17.000337.
* [13] A. Dempster, N. Laird, D. Rubin, Maximum Likelihood from Incomplete Data Via the EM Algorithm, Journal of the Royal Statistical Society: Series B 39 (1977) 1. doi:10.1111/j.2517-6161.1977.tb01600.x.
* [14] S. Ikeda, H. Odaka, M. Uemura, T. Takahashi, S. Watanabe, S. Takeda, Bin mode estimation methods for Compton camera imaging, Nuclear Instruments and Methods in Physics Research A 760 (2014) 46–56. arXiv:1312.4291, doi:10.1016/j.nima.2014.05.081.
* [15] A. J. Reader, et al., One-pass list-mode EM algorithm for high-resolution 3-D PET image reconstruction into large arrays, IEEE Transactions on Nuclear Science 49 (3) (2002) 693–699. doi:10.1109/TNS.2002.1039550.
* [16] J. A. Fessler, A. O. Hero, Penalized maximum-likelihood image reconstruction using space-alternating generalized EM algorithms, IEEE Transactions on Image Processing 4 (10) (1995) 1417–1429. doi:10.1109/83.465106.
* [17] E. F. Maher, N. M. Laird, EM algorithm reconstruction of particle size distributions from diffusion battery data, Journal of Aerosol Science 16 (6) (1985) 557–570. doi:10.1016/0021-8502(85)90007-2.
* [18] M. Zanetti, F. Bovolo, L. Bruzzone, Rayleigh-Rice Mixture Parameter Estimation via EM Algorithm for Change Detection in Multispectral Images, IEEE Transactions on Image Processing 24 (12) (2015) 5004–5016. doi:10.1109/TIP.2015.2474710.
* [19] H. Odaka, et al., Development of an integrated response generator for Si/CdTe semiconductor Compton cameras, Nuclear Instruments and Methods in Physics Research A 624 (2) (2010) 303–309. doi:10.1016/j.nima.2009.11.052.
* [20] H. Suzuki, et al., Development of the detector simulation framework for the Wideband Hybrid X-ray Imager onboard FORCE, Nuclear Instruments and Methods in Physics Research A 979 (2020) 164433. arXiv:2007.07919, doi:10.1016/j.nima.2020.164433.
* [21] T. Tamba, et al., Simulation-based spectral analysis of X-ray CCD data affected by photon pile-up, PASJ 74 (2) (2022) 364–383. arXiv:2112.14176, doi:10.1093/pasj/psab131.
* [22] E. E. Fenimore, T. M. Cannon, Uniformly redundant arrays: digital reconstruction methods, Applied Optics 20 (10) (1981) 1858–1865. doi:10.1364/AO.20.001858.
* [23] S. R. Gottesman, E. E. Fenimore, New family of binary arrays for coded aperture imaging, Applied Optics 28 (20) (1989) 4344–4352. doi:10.1364/AO.28.004344.
# Optimality of the Johnson-Lindenstrauss Dimensionality Reduction for
Practical Measures
Yair Bartal
Hebrew University. Supported in part by a grant from the Israeli Science Foundation (1817/17).<EMAIL_ADDRESS>
Ora Nova Fandina
Aarhus University.<EMAIL_ADDRESS>
Kasper Green Larsen
Aarhus University.<EMAIL_ADDRESS>
It is well known that the Johnson-Lindenstrauss dimensionality reduction
method is optimal for worst case distortion. While in practice many other
methods and heuristics are used, not much is known in terms of bounds on their
performance. The question of whether the JL method is optimal for practical
measures of distortion was recently raised in [8] (NeurIPS’19). They provided
upper bounds on its quality for a wide range of practical measures and showed
that indeed these are best possible in many cases. Yet, some of the most
important cases, including the fundamental case of average distortion were
left open. In particular, they show that the JL transform has $1+\epsilon$
average distortion for embedding into $k$-dimensional Euclidean space, where
$k=O(1/{\epsilon}^{2})$, and for more general $q$-norms of distortion,
$k=O(\max\\{1/{\epsilon}^{2},q/{\epsilon}\\})$, whereas tight lower bounds
were established only for large values of $q$ via reduction to the worst case.
In this paper we prove that these bounds are best possible for any
dimensionality reduction method, for any $1\leq q\leq
O(\frac{\log(2{\epsilon}^{2}n)}{{\epsilon}})$ and
$\epsilon\geq\frac{1}{\sqrt{n}}$, where $n$ is the size of the subset of
Euclidean space.
Our results also imply that the JL method is optimal for various distortion
measures commonly used in practice, such as stress, energy and relative error.
We prove that if any of these measures is bounded by ${\epsilon}$ then
$k=\Omega(1/{\epsilon}^{2})$, for any $\epsilon\geq\frac{1}{\sqrt{n}}$,
matching the upper bounds of [8] and extending their tightness results to the full range of moments.
Our results may indicate that the JL dimensionality reduction method should be
considered more often in practical applications, and the bounds we provide for its quality should serve as a measure for comparison when evaluating the performance of other methods and heuristics.
## 1 Introduction
Dimensionality reduction is a key tool in numerous fields of data analysis,
commonly used as a compression scheme to enable reliable and efficient
computation. In metric dimensionality reduction subsets of high-dimensional
spaces are embedded into a low-dimensional space, attempting to preserve
metric structure of the input. There is a large body of theoretical and
applied research on such methods spanning a wide range of application areas
such as online algorithms, computer vision, network design, machine learning,
to name a few.
Metric embedding has been extensively studied by mathematicians and computer
scientists over the past few decades (see [18, 25, 19] for surveys). In
addition to the beautiful theory, a plethora of original and elegant
techniques have been developed and successfully applied in various fields of
algorithmic research, e.g., clustering, nearest-neighbor search, and distance oracles. See
[27, 18, 34] for exposition of some applications.
The vast majority of these methods have been designed to optimize the worst-
case distance error incurred by the embedding. For metric spaces $(X,d_{X})$ and $(Y,d_{Y})$, an injective map $f:X\to Y$ is an embedding. It has (a worst-case)
distortion $\alpha\geq 1$ if there is a positive constant $c$ satisfying for
all $u\neq v\in X$, $d_{Y}(f(u),f(v))\leq c\cdot d_{X}(u,v)\leq\alpha\cdot
d_{Y}(f(u),f(v))$. A cornerstone result in metric dimensionality reduction is
the celebrated Johnson-Lindenstrauss lemma [21]. It states that any $n$-point
subset of Euclidean space can be embedded, via a linear transform, into a
$O(\log n/\epsilon^{2})$-dimensional subspace with $1+\epsilon$ distortion. It
has been recently shown to be optimal in [24] and in [6] (improving upon [5]).
Furthermore, it was shown in [26] that there are Euclidean pointsets in
$\mathbb{R}^{d}$ for which any embedding into $k$-dimensions must have
$n^{\Omega(1/k)}$ distortion, effectively ruling out dimensionality reduction
into a constant number of dimensions with a constant worst-case distortion.
Metric embedding, and in particular dimensionality reduction, has also gained significant attention in the applied community. Practitioners have frequently employed classic tools of metric embedding as well as designed new
techniques to cope with high-dimensional data. A large number of
dimensionality reduction heuristics have been developed for a variety of
practical settings, e.g., [33, 28, 7, 36]. However, most of these heuristics
have not been rigorously analyzed in terms of the incurred error. Recent
papers [11] and [8] initiate the formal study of practically oriented analysis
of metric embedding.
#### Practical distortion measures
In contrast to the worst-case distortion, the quality of a practically motivated embedding is often determined by its average performance over all pairs, where the error per pair is measured as an additive error, a multiplicative error, or a combination of both. There is a huge body of applied research investigating
such notions of quality. For the list of citations and a more detailed account
on the theoretical and practical importance of average distortion measures see
[8].
In this paper we consider the most basic notions of average distortion commonly used in practical applications, which we define in the following. The moment of distortion was defined in [4] and has been studied in various papers since.
###### Definition 1.1 ($\ell_{q}$-distortion).
For $u\neq v\in X$ let $expans_{f}(u,v)=d_{Y}(f(u),f(v))/d_{X}(u,v)$ and
$contract_{f}(u,v)=d_{X}(u,v)/d_{Y}(f(u),f(v))$. Let
$dist_{f}(u,v)=\max\\{expans_{f}(u,v),contract_{f}(u,v)\\}$. For any $q\geq 1$
the $q$-th moment of distortion is defined by
$\mathop{\mbox{$\ell_{q}$-$\mathit{dist}$}}(f)=\left(\frac{1}{{\binom{\lvert
X\rvert}{2}}}\sum_{u\neq v\in X}(dist_{f}(u,v))^{q}\right)^{1/q}.$
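For concreteness, Definition 1.1 can be evaluated directly when $f$ is given by paired rows of two matrices. The following is a minimal sketch for Euclidean inputs:

```python
import numpy as np
from scipy.spatial.distance import pdist

def lq_distortion(X, FX, q=1.0):
    """l_q-dist(f) of Definition 1.1, where f maps row X[i] to row FX[i]."""
    d = pdist(X)     # original pairwise distances d_X(u, v)
    dh = pdist(FX)   # embedded pairwise distances d_Y(f(u), f(v))
    expans = dh / d
    dist = np.maximum(expans, 1.0 / expans)  # max(expansion, contraction)
    return np.mean(dist ** q) ** (1.0 / q)
```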
Additive average distortion measures are often used when a nearly isometric
embedding is expected. Such notions as energy, stress and relative error
measure (REM) are common in various statistics-related applications. For a map $f:X\to Y$ and a pair $u\neq v\in X$, let $d_{uv}:=d_{X}(u,v)$ and $\hat{d}_{uv}:=d_{Y}(f(u),f(v))$. For all $q\geq 1$:
###### Definition 1.2 (Additive measures).
${Energy}_{q}(f)=\left(\frac{1}{{\binom{\lvert X\rvert}{2}}}\sum_{u\neq v\in X}\left(\frac{\lvert\hat{d}_{uv}-d_{uv}\rvert}{d_{uv}}\right)^{q}\right)^{\frac{1}{q}}=\left(\frac{1}{{\binom{\lvert X\rvert}{2}}}\sum_{u\neq v\in X}\big{\lvert}expans_{f}(u,v)-1\big{\rvert}^{q}\right)^{\frac{1}{q}}.$
${Stress}_{q}(f)=\left(\frac{\sum_{u\neq v\in
X}|\hat{d}_{uv}-d_{uv}|^{q}}{\sum_{u\neq v\in
X}(d_{uv})^{q}}\right)^{\frac{1}{q}},\;\;\;{Stress^{*}}_{q}(f)=\left(\frac{\sum_{u\neq
v\in X}|\hat{d}_{uv}-d_{uv}|^{q}}{\sum_{u\neq v\in
X}(\hat{d}_{uv})^{q}}\right)^{\frac{1}{q}}.$
${\mathop{\mbox{$REM$}}}_{q}(f)={\left(\frac{1}{{\binom{\lvert
X\rvert}{2}}}\sum_{u\neq v\in
X}\left(\frac{|\hat{d}_{uv}-d_{uv}|}{\min\\{\hat{d}_{uv},d_{uv}\\}}\right)^{q}\right)}^{\frac{1}{q}}.$
It was proved in [8] that
###### Claim 1.1.
For all $q\geq 1$,
$\mathop{\mbox{$\ell_{q}$-$\mathit{dist}$}}(f)-1\geq{\mathop{\mbox{$REM$}}}_{q}(f)\geq{\mathop{\mbox{$Energy$}}}_{q}(f)$.
Finally, [12] defined $\sigma$-distortion and showed it to be particularly
useful in machine learning applications. For $r\geq 1$, let
$\mathop{\mbox{$\ell_{r}$-$\mathit{expans}$}}(f)=({\binom{n}{2}}^{-1}\sum_{u\neq v}({\mathop{\mbox{$expans$}}}_{f}(u,v))^{r})^{1/r}$.
###### Definition 1.3 ($\sigma$-distortion).
$\sigma_{q,r}(f)=\left(\frac{1}{\binom{\lvert X\rvert}{2}}\sum_{u\neq v\in X}\left|\frac{{\mathop{\mbox{$expans$}}}_{f}(u,v)}{\mathop{\mbox{$\ell_{r}$-$\mathit{expans}$}}(f)}-1\right|^{q}\right)^{1/q}.$
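These additive measures, together with the $\sigma$-distortion for $r=q$, admit equally direct implementations (same conventions as the sketch above):

```python
import numpy as np
from scipy.spatial.distance import pdist

def additive_measures(X, FX, q=1.0):
    """Energy_q, Stress_q, REM_q and sigma_{q,q} of Definitions 1.2-1.3."""
    d, dh = pdist(X), pdist(FX)
    expans = dh / d
    energy = np.mean(np.abs(expans - 1.0) ** q) ** (1.0 / q)
    stress = (np.sum(np.abs(dh - d) ** q) / np.sum(d ** q)) ** (1.0 / q)
    rem = np.mean((np.abs(dh - d) / np.minimum(dh, d)) ** q) ** (1.0 / q)
    lr_expans = np.mean(expans ** q) ** (1.0 / q)  # l_r-expans with r = q
    sigma = np.mean(np.abs(expans / lr_expans - 1.0) ** q) ** (1.0 / q)
    return energy, stress, rem, sigma
```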
In [8] the authors rigorously analyzed dimensionality reduction for the above
distortion measures. The central question they studied is: What dimensionality
reduction method is optimal for these quality measures, and what are the optimal bounds achievable? In particular, is the Johnson-Lindenstrauss (JL) transform also optimal for the average quality criteria?
Their analysis of the Gaussian implementation of the JL embedding [20] shows
that any Euclidean subset can be embedded with $1+\epsilon$ average distortion
($\mathop{\mbox{$\ell_{1}$-$\mathit{dist}$}}$) into $k=O(1/\epsilon^{2})$
dimensions. And for more general case of the $q$-moment of distortion, the
dimension is $k=O(\max\\{1/\epsilon^{2},q/\epsilon\\})$. However, tight lower
bounds were proved only for large values of $q$, leaving the question of
optimality of the most important case of small $q$, and particularly the most
basic case of $q=1$, unresolved.
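For reference, the Gaussian implementation of the JL transform they analyze is a single random matrix multiplication. A small sketch (using lq_distortion from above) illustrating the $k=O(1/\epsilon^{2})$ regime for average distortion on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, eps = 1000, 200, 0.1
X = rng.standard_normal((n, d))               # synthetic Euclidean point set
k = int(np.ceil(1 / eps ** 2))                # note: no log(n) factor
G = rng.standard_normal((d, k)) / np.sqrt(k)  # Gaussian JL matrix
print(lq_distortion(X, X @ G, q=1.0))         # typically about 1 + O(eps)
```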
For the additive average measures (stress, energy and others) they prove that
a bound of $\epsilon$ can be achieved in dimension $k=O(q/\epsilon^{2})$. They
showed a tight lower bound on the required dimension only for $q\geq 2$,
leaving the basic case of $q=1$ also unresolved.
In this paper, we prove that indeed the Johnson-Lindenstrauss bounds are best
possible for any dimensionality reduction for the full range of $q\geq 1$, for
all the average distortion measures defined in this paper. We believe that
besides theoretical contribution this statement may have important
implications for practical considerations. In particular, it may affect the
way the JL method is viewed and used in practice, and the bounds we give may
serve a basis for comparison for other methods and heuristics.
#### Our results
We show that the guarantees given by the Gaussian random projection
dimensionality reduction are the best possible for the average distortion
measures. In particular, we prove
###### Theorem 1.2.
Given any integer $n$ and $\Omega(\frac{1}{\sqrt{n}})<\epsilon<1$, there
exists a $\Theta(n)$-point subset of Euclidean space such that any embedding
$f$ of it into $\ell_{2}^{k}$ with
$\mathop{\mbox{$\ell_{1}$-$\mathit{dist}$}}(f)\leq 1+\epsilon$ requires
$k=\Omega(1/\epsilon^{2})$.
For the more general case of large values of $q$, we show
###### Theorem 1.3.
Given any integer $n$, $\Omega(\frac{1}{\sqrt{n}})<\epsilon<1$, and $1\leq q\leq O(\log(\epsilon^{2}n)/\epsilon)$, there exists a $\Theta(n)$-point subset
of Euclidean space such that any embedding of it into $\ell_{2}^{k}$ with
$\ell_{q}$-distortion at most $1+\epsilon$ requires $k=\Omega(q/\epsilon)$.
As $\ell_{q}$-distortion is monotonically increasing as a function of $q$, the
theorems imply the lower bound of
$k=\Omega\left(\max\left\\{1/\epsilon^{2},q/\epsilon\right\\}\right)$.
For the additive distortion measures we prove the following theorem:
###### Theorem 1.4.
Given any integer $n$ and $\Omega(\frac{1}{\sqrt{n}})<\epsilon<1$, there
exists a $\Theta(n)$-point subset of Euclidean space such that any embedding
of it into $\ell_{2}^{k}$ with any of $Energy_{1}$, $Stress_{1}$,
$Stress^{*}_{1}$, $REM_{1}$ or $\sigma$-distortion bounded above by $\epsilon$
requires $k=\Omega(1/\epsilon^{2})$.
Our main proof is of the lower bound for $Energy_{1}$ measure, which we show
to imply the bound in Theorem 1.2 and for all measures in Theorem 1.4, with
some small modifications for the stress measures. Furthermore, since all
additive measures are monotonically increasing with $q$ the bounds hold for
all $q\geq 1$. Therefore Theorems 1.2 and 1.3 together provide a tight bound
of $\Omega(\max\\{1/\epsilon^{2},q/\epsilon\\})$ for the
$\ell_{q}$-distortion. Additionally combined with the lower bounds of [8] for
$q\geq 2$, Theorem 1.4 provides a tight bound of $\Omega(q/\epsilon^{2})$ for
all additive measures.
#### Techniques
The proofs of the lower bounds in all the theorems are based on a counting argument, as in the lower bound for the worst-case distortion proven in [24].
We extend the framework of [24] to the entire range of $q$ moments of
distortion, including the average distortion. As in the original proof we show
that there exists a large family $\mathcal{P}$ of metric spaces that are quite
different from each other so that if one can embed all of these into a
Euclidean space with a small average distortion the resulting image spaces are
different too. This implies that if the target dimension $k$ is too small
there is not enough space to accommodate all the different metric spaces from
the family.
Let us first describe the framework of [24]. (The description is based on combining the methods of [24, 6], and can also be viewed as our $q$-moments bound with $q=\Theta(\log(\epsilon^{2}n)/\epsilon)$.) The main idea is to construct a large family of $n$-point subspaces ${\rm
I}\subseteq\ell_{2}^{\Theta(n)}$ so that each subspace in the family can be
uniquely encoded using a small number of bits, assuming that each ${\rm I}$
can be embedded with $1+\epsilon$ worst-case distortion in $\ell_{2}^{k}$. The
sets they construct are such that the information on the inner products
between all the points in ${\rm I}$, even if distorted by an additive error of
$O(\epsilon)$, enables full reconstruction of the points in the set. In
particular, each ${\rm I}$ consists of a zero vector together with the
standard basis vectors ${\rm E}$ and an additional set of vectors denoted by
${\rm Y}$. The set ${\rm Y}$ is defined in such a way that $\langle
y,e\rangle\in\\{0,c\epsilon\\}$, for a constant $c>1$, for all $(y,e)\in{\rm
Y}\times{\rm E}$. The authors then show that a $1+\epsilon$ distortion
embedding $f$ of ${\rm I}$ must map all the points into the ball of radius $2$
while preserving all the inner products up to an additive error
$\Theta(\epsilon)$, which enables to recover the vectors in ${\rm Y}$. The
next step is to show that all image points can be encoded using a small number
of bits, while preserving the inner product information up to an
$\Theta(\epsilon)$ additive error. This can be achieved by carefully
discretizing the ball, and applying a map $\tilde{f}$ mapping every point to
its discrete image approximation so that $\langle
f(v),f(u)\rangle=\langle{\tilde{f}(v)},{\tilde{f}(u)}\rangle\pm\Theta(\epsilon)$.
For this purpose one may use the method of [6], who showed (the original proof of [24] uses a different elegant discretization argument) that randomly rounding the image points to the points of a small enough grid will preserve
all the pairwise inner products within $\Theta(\epsilon)$ additive error with
constant probability, and this in turn allows to derive a short binary
encoding for each input point. This implies the lower bound on
$k=\Omega(\log(\epsilon^{2}n)/\epsilon^{2})$, for
$\epsilon=\Omega(1/\sqrt{n})$.
Let us now explain the challenges in applying this method to the case of
bounded average distortion and $q$-moments. Assuming $f:{\rm
I}\to\ell_{2}^{k}$ has $1+\epsilon$ average distortion neither implies that
all images are in a ball of constant radius nor that $f$ preserves all
pairwise inner products. The bounded average distortion also does not
guarantee the existence of a large subset of ${\rm I}$ with the properties
above. We suggest the following approach to overcoming these issues. First, we add to ${\rm I}$ a large number of “copies” of the $0$ vector, which enables us to argue that a large subset ${\rm\hat{I}}\subseteq{\rm I}$ will be mapped into a
constant radius ball, such that the average additive distortion is
$\Theta(\epsilon)$. The next difficulty is that if the images would be rounded
to the points in a grid using a mapping which would preserve all pairwise
inner products with $\Theta(\epsilon)$ additive error, then the resulting grid
would be too large, which would’t allow a sufficiently short encoding. We
therefore round the images to a grid with $\Theta(\epsilon)$ additive
approximation to the average of the inner products of ${\rm\hat{I}}$ and thus
reduce the size of the grid (and the encoding). The next step is showing that
the above guarantees imply the existence of a large enough subset of pairs
${\cal Z}\subseteq{\binom{{\rm I}}{2}}$ of special structure, which allows to
encode the entire set ${\rm I}$ with a few bits even if only the partial
information about the inner products within ${\cal Z}$ is approximately
preserved. In particular, we show that there is a large subset ${\cal
Y}^{G}\subseteq Y$ such that for each point $y\in{\cal Y}^{G}$ there is a
large enough subset ${\cal E}_{y}\subseteq E$ such that all pairwise inner
products $\langle y,e\rangle$, where $y\in{\cal Y}^{G}$ and $e\in{\cal
E}_{y}$, are additively preserved up to $\Theta(\epsilon)$ in the grid
embedding, and therefore all the discretized images of these points have short
binary encoding. The last step is to argue that this subset is sufficiently
large so the knowledge of its approximate inner products possesses enough
information in order to recover the entire point set ${\rm I}$ from our small
size encoding. As this set still covers only a constant fraction of the pairs,
and encoding the rest of the points is far more costly, this forces the
dimension and number of points in our instance to be set to
$d=\Theta(n)=\Theta(1/\epsilon^{2})$, implying a lower bound of
$k=\Omega(1/\epsilon^{2})$. Finally, we prove that this can extend to
arbitrary large subspaces via metric composition techniques. To extend these
ideas to the general case of $q$-moments of distortion we prove that similar
additive approximation distortion bounds hold with high probability of at
least $1-e^{-\Theta(\epsilon q)}$. This means that a smaller fraction of the
pairs require a more costly encoding, and allows us to set
$d=\Theta(n)=\Theta(1/\epsilon^{2})\cdot e^{\Theta(\epsilon q)}$, implying a
lower bound of $k=\Omega(q/\epsilon)$.
#### Related work
The study of “beyond the worst-case” distortion analysis of metric embedding was initiated in [22] by introducing partial and scaling distortions. This generated a rich line of follow-up work, [1, 4, 2] to name a few. The
notions of average distortion and $\ell_{q}$-distortion were introduced in [4]
who gave bounds on embedding general metrics in normed spaces. Other notions
of refined distortion analysis considered in the literature include such
notions as Ramsey type embeddings [9], local distortion embeddings [3],
terminal and prioritized distortion [15, 14], and recent works on distortion of the $q$-moments [29, 30, 23]. (The notion in these papers, also studied in [4, 8], computes the ratio between the average (or $q$-moments) of the new distances to that of the original distances, in contrast to the average distortion (or $q$-moments of distortion) measure in Definition 1.1, which measures the average (or $q$-moments) of the pairwise distortions.)
In the applied community, various notions of average distortion are frequently used to infer the quality of heuristic methods [17, 16, 32, 13, 31, 35, 10]. However, the only work rigorously analyzing these notions that we are aware of is that of [8]. They proved lower bounds of $k=\Omega(1/\epsilon)$ for the average ($1$-norm) version of all the additive measures, and for the average distortion measure ($\ell_{1}$-distortion), which we improve here to the tight
$\Omega(1/\epsilon^{2})$ bound. For $q\geq 2$ they gave tight bounds of
$\Omega(q/\epsilon^{2})$ for all additive measures. For
$\mathop{\mbox{$\ell_{q}$-$\mathit{dist}$}}$ they have shown that for
$q=\Omega(\log(1/\epsilon)/\epsilon)$ the tight bound of
$k=\Omega(q/\epsilon)$ follows from the black-box reduction to the lower bound
on the worst case distortion.
## 2 Lower bound for average distortion and additive measures
In this section we prove Theorems 1.2 and Theorem 1.4. Using Claim 1.1, we may
focus on proving the lower bound for $\mathop{\mbox{$Energy$}}_{1}(f)$ in
order to obtain similar lower bounds for $REM_{1}(f)$ and
$\mathop{\mbox{$\ell_{1}$-$\mathit{dist}$}}(f)$. In Appendix B we show how to
change this proof in order to obtain lower bound on $Stress_{1}(f)$, and
further show that the lower bounds for all the additive measures follow from
the lower bounds on ${\rm Energy}$ and ${\rm Stress}$.
We present here the proof of the existence of a bad metric space of size
$\hat{n}=\Theta(1/\epsilon^{2})$ and show in Appendix A how to extend it for
metric spaces of arbitrary size ${n}\geq\hat{n}$.
We construct a large family $\mathcal{P}$ of metric spaces, such that each
${\rm I}\in\mathcal{P}$ can be completely recovered by computing the inner
products between the points in ${\rm I}$. For a given $\epsilon>0$, let
$l=\lceil\frac{1}{\gamma^{2}\epsilon^{2}}\rceil$, for some large constant
$\gamma>1$ to be determined later. We will prove
$k\geq\frac{c}{\gamma^{2}}\frac{1}{\epsilon^{2}}$, for $c<1$, and so we may
assume w.l.o.g. that $\epsilon\leq 1/\gamma$, otherwise the statement
trivially holds. We will construct point sets ${\rm I}\subset\ell_{2}^{d}$,
where $d=2l$, each ${\rm I}$ of size $3d=6l=\Theta(1/\epsilon^{2})$.
Define a set $O=\\{o_{j}\\}_{j=1}^{d}$ of $d$ arbitrary near zero vectors in
$\ell_{2}^{d}$, i.e., a set of $d$ different vectors such that for all
$o_{j}\in O$, $\left\|o_{j}\right\|_{2}\leq\epsilon/100$. Let
${E}=\\{e_{1},e_{2},\ldots,e_{d}\\}$ denote the vectors of the standard basis
of $\mathbb{R}^{d}$. For a set $S$ of $l$ indices from $\\{1,2,\ldots,d\\}$,
we define $y_{S}=\frac{1}{\sqrt{l}}\sum_{j\in S}e_{j}$. For a sequence of $d$
index sets (possibly with repetitions) $S_{1},S_{2},\ldots,S_{d}$, let ${\rm
Y}[S_{1},\ldots,S_{d}]=\\{y_{S_{1}},\ldots,y_{S_{d}}\\}$. Each point set ${\rm
I}[S_{1},\ldots,S_{d}]\in\mathcal{P}$ is defined as the union of the sets defined above (we will omit $[S_{1},\ldots,S_{d}]$ from the notation for a fixed choice of the sets), i.e., ${\rm I}[S_{1},\ldots,S_{d}]=O\cup{E}\cup{\rm Y}[S_{1},\ldots,S_{d}]$. The size of the family is
$\lvert\mathcal{P}\rvert={\binom{d}{l}}^{d}$. Note that each ${\rm
I}\in\mathcal{P}$ is contained in $B_{2}(1)$, the unit ball of $\ell_{2}^{d}$,
and has diameter $diam({\rm I})=\sqrt{2}$. Additionally, for all $e_{j}\in E$
and $y_{S}\in Y$ the value of the inner product $\langle e_{j},y_{S}\rangle$
determines whether $e_{j}\in{\rm span}\\{e_{i}|i\in S\\}$. In particular, if
$\langle e_{j},y_{S}\rangle=0$ then $j\not\in S$, and if $\langle
e_{j},y_{S}\rangle=1/\sqrt{l}\geq(1/2)\gamma\epsilon$ then $j\in S$.
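The construction is easy to instantiate; a sketch with small illustrative parameters (the proof takes $d=2l=\Theta(1/\epsilon^{2})$ and ranges over all ${\binom{d}{l}}^{d}$ choices of index sets):

```python
import numpy as np

def hard_instance(d, l, index_sets, eps=0.01, seed=0):
    """Build I = O u E u Y: d near-zero vectors, the standard basis,
    and y_S = l**-0.5 * sum_{j in S} e_j for each index set S."""
    rng = np.random.default_rng(seed)
    O = rng.standard_normal((d, d))
    O *= (eps / 200) / np.linalg.norm(O, axis=1, keepdims=True)  # ||o_j|| <= eps/100
    E = np.eye(d)
    Y = np.stack([E[list(S)].sum(axis=0) / np.sqrt(l) for S in index_sets])
    return np.vstack([O, E, Y])

# <e_j, y_S> equals 1/sqrt(l) if j is in S, and 0 otherwise:
I = hard_instance(d=8, l=4, index_sets=[(0, 1, 2, 3), (2, 3, 4, 5)])
```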
Assume that for each ${\rm I}\in\mathcal{P}$ there is an embedding $f:{\rm
I}\to\ell_{2}^{k}$, with $\mathop{\mbox{$Energy$}}_{1}(f)\leq\epsilon$. We
prove that this implies that $k=\Omega(1/\epsilon^{2})$. The strategy is to
produce a unique binary encoding of each ${\rm I}$ in the family based on the
embedding $f$. Let ${\rm length(I)}$ denote the length of the encoding for
each ${\rm I}$, we will show that ${\rm length(I)}\lesssim l^{2}+l\cdot
k\log(\frac{1}{\epsilon k})$. Since the encoding defines an injective map from
$\mathcal{P}$ to $\\{0,1\\}^{\rm length(I)}$, the number of different sets
that can be recovered by decoding is at most $2^{\rm length(I)}$. Now, because
$\lvert\mathcal{P}\rvert=\binom{d}{l}^{d}\geq 2^{2l^{2}}$, we get that $k\log(\frac{1}{\epsilon k})\gtrsim l$ and show that this implies the bound $k=\Omega(l)$.
We are now set to describe the encoding for each ${\rm I}$ and to bound its
length. First, in the following lemma, we show that there exists a large
subset ${\rm\hat{I}}\subseteq{\rm I}$ that is mapped by $f$ into a ball of a
constant radius in $k$-dimensional space and that the average of the errors in
the inner products incurred by $f$ on the subset ${\rm\hat{I}}$ is bounded by
$\Theta(\epsilon)$.
###### Lemma 2.1.
For any ${\rm I}\in\mathcal{P}$ let $f:{\rm I}\to\ell_{2}^{k}$ be an embedding
with $\mathop{\mbox{$Energy$}}_{1}(f)\leq\epsilon$, with $\epsilon\leq 1/36$.
Let $0<\alpha\leq 1/16$ be a parameter. There is a subset $\hat{{\rm
I}}\subseteq{\rm I}$ of size $\big{\lvert}\hat{{\rm
I}}\big{\rvert}\geq(1-\alpha)\lvert{\rm I}\rvert$ such that $f(\hat{{\rm
I}})\subset B_{2}\left(1+\frac{3.01\epsilon}{\alpha}\right)$, and
$\frac{1}{|{\binom{\hat{{\rm I}}}{2}}|}\sum_{(u,v)\in{\binom{\hat{{\rm
I}}}{2}}}\big{\lvert}\langle f(u),f(v)\rangle-\langle
u,v\rangle\big{\rvert}\leq(10+\frac{1}{2\alpha})\epsilon$.
###### Proof.
By assumption we have that the following condition holds:
${\mathop{\mbox{$Energy$}}}_{1}(f)=\frac{1}{\big{\lvert}\binom{I}{2}\big{\rvert}}\sum_{(u,v)\in{\binom{I}{2}}}{\big{\lvert}{\mathop{\mbox{$expans$}}}_{f}(u,v)-1\big{\rvert}}\leq\epsilon.$
(2.0.1)
This bound implies that
$\displaystyle\frac{1}{\lvert O\rvert(\lvert I\rvert-1)}\sum_{o_{j}\in
O}\sum_{v\in I,v\neq
o_{j}}{\big{\lvert}{\mathop{\mbox{$expans$}}}_{f}(o_{j},v)-1\big{\rvert}}$
$\displaystyle\leq$ $\displaystyle\frac{1}{\lvert O\rvert(\lvert
I\rvert-1)}\sum_{u\neq v\in
I}{\big{\lvert}{\mathop{\mbox{$expans$}}}_{f}(u,v)-1\big{\rvert}}$
$\displaystyle\leq$ $\displaystyle\frac{3d(3d-1)}{d(3d-1)}\epsilon=3\epsilon.$
From which follows that
$\min_{o_{j}\in O}\frac{1}{|I|-1}\sum_{v\in I,v\neq
o_{j}}{\big{\lvert}{\mathop{\mbox{$expans$}}}_{f}(o_{j},v)-1\big{\rvert}}\leq
3\epsilon.$ (2.0.2)
Let $\hat{o}\in O$ denote the point at which the minimum is obtained. We
assume without loss of generality that $f(\hat{o})=0$. Let $\hat{I}$ be the
set of all $v\in I$ such that
${\big{\lvert}{\mathop{\mbox{$expans$}}}_{f}(\hat{o},v)-1\big{\rvert}}\leq\frac{3\epsilon}{\alpha}$.
By Markov’s inequality,
$\lvert\hat{I}\rvert\geq(1-\alpha)\big{\lvert}I\big{\rvert}$. We have that for
all $v\in{\hat{\rm I}}$,
$\lvert{\mathop{\mbox{$expans$}}}_{f}(v,\hat{o})-1\rvert=\lvert\frac{\left\|f(v)\right\|_{2}}{\left\|v-\hat{o}\right\|_{2}}-1\rvert\leq\frac{3\epsilon}{\alpha}$,
and using
$\left\|v-\hat{o}\right\|_{2}\leq\left\|v\right\|_{2}+\left\|\hat{o}\right\|_{2}\leq
1+\epsilon/100$, so that
$\left\|f(v)\right\|_{2}\leq(1+\frac{3\epsilon}{\alpha})(1+\epsilon/100)\leq
1+\frac{3.002\epsilon}{\alpha}$, implying that $f(v)\in
B_{2}\left(1+\frac{3.01\epsilon}{\alpha}\right)$.
For all $(u,v)\in{\binom{\hat{I}}{2}}$ we have:
$\displaystyle\big{\lvert}\langle f(u),f(v)\rangle-\langle
u,v\rangle\big{\rvert}$ $\displaystyle\leq$
$\displaystyle\frac{1}{2}\left[\big{\lvert}\left\|f(u)\right\|_{2}^{2}-\left\|u\right\|_{2}^{2}\big{\rvert}+\big{\lvert}\left\|f(v)\right\|_{2}^{2}-\left\|v\right\|_{2}^{2}\big{\rvert}\right]$
$\displaystyle+$
$\displaystyle\frac{1}{2}\left[\big{\lvert}\left\|f(u)-f(v)\right\|_{2}^{2}-\left\|u-v\right\|_{2}^{2}\big{\rvert}\right].$
We can bound each term as follows:
$\displaystyle\big{\lvert}\left\|f(u)\right\|_{2}^{2}-\left\|u\right\|_{2}^{2}\big{\rvert}=$
$\displaystyle=$
$\displaystyle\lvert\left\|f(u)-f(\hat{o})\right\|_{2}^{2}-\left\|u-\hat{o}\right\|_{2}^{2}+\left\|u-\hat{o}\right\|_{2}^{2}-\left\|u\right\|_{2}^{2}\rvert$
$\displaystyle\leq$
$\displaystyle\lvert\left\|f(u)-f(\hat{o})\right\|_{2}-\left\|u-\hat{o}\right\|_{2}\rvert\cdot\left(\left\|f(u)-f(\hat{o})\right\|_{2}+\left\|u-\hat{o}\right\|_{2}\right)$
$\displaystyle+$
$\displaystyle\lvert\left\|u-\hat{o}\right\|_{2}-\left\|u\right\|_{2}\rvert\cdot(\left\|u-\hat{o}\right\|_{2}+\left\|u\right\|_{2})$
$\displaystyle\leq$
$\displaystyle\left\|u-\hat{o}\right\|_{2}\cdot\lvert{\mathop{\mbox{$expans$}}}_{f}(u,\hat{o})-1\rvert\cdot(\left\|f(u)\right\|_{2}+\left\|u-\hat{o}\right\|_{2})+\left\|\hat{o}\right\|_{2}\cdot(\left\|u-\hat{o}\right\|_{2}+\left\|u\right\|_{2})$
$\displaystyle\leq$
$\displaystyle\left(1+\frac{\epsilon}{100}\right)\lvert{\mathop{\mbox{$expans$}}}_{f}(u,\hat{o})-1\rvert\left(1+\frac{3.002\epsilon}{\alpha}+1+\frac{\epsilon}{100}\right)+\frac{\epsilon}{100}\cdot\left(2+\frac{\epsilon}{100}\right)$
$\displaystyle\leq$
$\displaystyle\left(2+\frac{3.01\epsilon}{\alpha}\right)\lvert{\mathop{\mbox{$expans$}}}_{f}(u,\hat{o})-1\rvert+\frac{\epsilon}{40}\leq\left(2+\frac{1}{9\alpha}\right)\lvert{\mathop{\mbox{$expans$}}}_{f}(u,\hat{o})-1\rvert+\frac{\epsilon}{40},$
where we have used $\left\|\hat{o}\right\|\leq\epsilon/100$,
$\left\|u-\hat{o}\right\|_{2}\leq\left\|u\right\|_{2}+\left\|\hat{o}\right\|_{2}\leq
1+\epsilon/100$, and the bound on the norms of the embedding within $\hat{I}$.
Additionally, we have that
$\displaystyle\big{\lvert}\left\|f(u)-f(v)\right\|_{2}^{2}-\left\|u-v\right\|_{2}^{2}\big{\rvert}=$
$\displaystyle=$
$\displaystyle\lvert\left\|f(u)-f(v)\right\|_{2}-\left\|u-v\right\|_{2}\rvert(\left\|f(u)-f(v)\right\|_{2}+\left\|u-v\right\|_{2})$
$\displaystyle\leq$
$\displaystyle\left\|u-v\right\|_{2}\lvert{\mathop{\mbox{$expans$}}}_{f}(u,v)-1\rvert(\left\|f(u)\right\|_{2}+\left\|f(v)\right\|_{2}+\left\|u-v\right\|_{2})$
$\displaystyle\leq$
$\displaystyle\sqrt{2}\left(2\left(1+\frac{3.002\epsilon}{\alpha}\right)+\sqrt{2}\right)\lvert{\mathop{\mbox{$expans$}}}_{f}(u,v)-1\rvert\leq\left(5+\frac{1}{4\alpha}\right)\lvert{\mathop{\mbox{$expans$}}}_{f}(u,v)-1\rvert,$
where the second to last inequality holds since $\left\|u-v\right\|_{2}\leq
diam(I)=\sqrt{2}$. It follows that:
$\displaystyle\frac{1}{|{\binom{\hat{I}}{2}}|}\sum_{(u,v)\in{\binom{\hat{I}}{2}}}\big{\lvert}\langle
f(u),f(v)\rangle-\langle u,v\rangle\big{\rvert}\leq$ $\displaystyle\leq$
$\displaystyle\left(2+\frac{1}{9\alpha}\right)\cdot\frac{1}{|{\binom{\hat{I}}{2}}|}\left(\frac{|\hat{I}|-1}{2}\right)\sum_{u\in\hat{I},u\neq\hat{o}}\lvert{\mathop{\mbox{$expans$}}}_{f}(u,\hat{o})-1\rvert$
$\displaystyle+$
$\displaystyle\frac{1}{2}\left(5+\frac{1}{4\alpha}\right)\cdot\frac{1}{|{\binom{\hat{I}}{2}}|}\sum_{(u,v)\in{\binom{\hat{I}}{2}}}\lvert{\mathop{\mbox{$expans$}}}_{f}(u,v)-1\rvert+\frac{\epsilon}{40}.$
By definition of $\hat{I}$, and using (2.0.2) we have that
$\displaystyle\frac{1}{|{\binom{\hat{I}}{2}}|}\left(\frac{|\hat{I}|-1}{2}\right)\sum_{u\in\hat{I},u\neq\hat{o}}\lvert{\mathop{\mbox{$expans$}}}_{f}(u,\hat{o})-1\rvert$
$\displaystyle=$
$\displaystyle\frac{1}{|\hat{I}|}\sum_{u\in\hat{I},u\neq\hat{o}}\lvert{\mathop{\mbox{$expans$}}}_{f}(u,\hat{o})-1\rvert$
$\displaystyle\leq$ $\displaystyle\frac{1}{|I|}\sum_{u\in
I,u\neq\hat{o}}\lvert{\mathop{\mbox{$expans$}}}_{f}(u,\hat{o})-1\rvert\leq
3\epsilon.$
Therefore (2) yields that
$\displaystyle\frac{1}{|{\binom{\hat{I}}{2}}|}\sum_{(u,v)\in{\binom{\hat{I}}{2}}}\big{\lvert}\langle
f(u),f(v)\rangle-\langle u,v\rangle\big{\rvert}\leq$ $\displaystyle\leq$
$\displaystyle\left(2+\frac{1}{9\alpha}\right)\cdot
3\epsilon+\frac{1}{2}\left(5+\frac{1}{4\alpha}\right)\cdot\frac{1}{|{\binom{\hat{I}}{2}}|}\sum_{(u,v)\in{\binom{\hat{I}}{2}}}\lvert{\mathop{\mbox{$expans$}}}_{f}(u,v)-1\rvert+\frac{\epsilon}{40}$
$\displaystyle\leq$
$\displaystyle\frac{1}{2}\left(5+\frac{1}{4\alpha}\right)\cdot\frac{1}{|{\binom{\hat{I}}{2}}|}\sum_{(u,v)\in{\binom{\hat{I}}{2}}}\lvert{\mathop{\mbox{$expans$}}}_{f}(u,v)-1\rvert+\left(7+\frac{1}{3\alpha}\right)\epsilon.$
Now, we have that
$\displaystyle\frac{1}{|{\binom{\hat{I}}{2}}|}\sum_{(u,v)\in{\binom{\hat{I}}{2}}}\lvert({\mathop{\mbox{$expans$}}}_{f}(u,v))-1\rvert\leq\frac{6}{5}\frac{1}{|{\binom{I}{2}}|}\sum_{(u,v)\in{\binom{I}{2}}}\lvert({\mathop{\mbox{$expans$}}}_{f}(u,v))-1\rvert\leq\frac{6}{5}\epsilon,$
using $|\hat{I}|\geq(1-\alpha)|I|$ with $\alpha\leq 1/16$, so that $|{\binom{\hat{I}}{2}}|\geq(1-\frac{1}{3(1-\alpha)d})(1-\alpha)^{2}\cdot|{\binom{I}{2}}|\geq\frac{5}{6}|{\binom{I}{2}}|$, and applying (2.0.1). Finally, we obtain
$\displaystyle\frac{1}{|{\binom{\hat{I}}{2}}|}\sum_{(u,v)\in{\binom{\hat{I}}{2}}}\big{\lvert}\langle
f(u),f(v)\rangle-\langle
u,v\rangle\big{\rvert}\leq\frac{6}{5}\cdot\frac{1}{2}\left(5+\frac{1}{4\alpha}\right)\epsilon+\left(7+\frac{1}{3\alpha}\right)\epsilon\leq\left(10+\frac{1}{2\alpha}\right)\epsilon.$
∎
We have shown thus far that for the large subset ${\hat{\rm I}}$ of the set ${\rm I}$, the inner products between the images agree, on average, with the inner products between the original points up to an additive $\Theta(\epsilon)$. Moreover, all the images of ${\hat{\rm I}}$ are in a constant radius ball. We next show that rounding these images to the (randomly chosen) points of a sufficiently fine grid will not change the inner products too much on average, implying that instead of encoding the original images ${f(\hat{{\rm I}})}$ we can encode their rounded counterparts. To show this, we use the randomized rounding technique proposed in [6].
###### Lemma 2.2.
Let $X\subset\ell_{2}^{k}$ such that $X\subset B_{2}(r)$. For
$\delta<r/\sqrt{k}$ let $G_{\delta}\subseteq B_{2}(r)$ denote the intersection
of the $\delta$-grid with $B_{2}(r)$. There is a mapping $g:X\to G_{\delta}$
such that
$\frac{1}{|{\binom{X}{2}}|}\sum_{(u,v)\in{\binom{X}{2}}}\lvert\langle
g(u),g(v)\rangle-\langle u,v\rangle\rvert\leq 3\delta r$, and the points of
the grid can be represented using $L_{G_{\delta}}=k\log(4r/(\delta\sqrt{k}))$
bits.
###### Proof.
For each point $v\in X$, randomly and independently match a point $\tilde{v}=g(v)$ on the grid by rounding each of its coordinates $v_{i}$ to one of the closest integral multiples of $\delta$ in such a way that $E[\tilde{v}_{i}]=v_{i}$. This distribution is given by assigning
$\left\lceil\frac{v_{i}}{\delta}\right\rceil\delta$ with probability
$p=\left(\frac{v_{i}}{\delta}-\left\lfloor\frac{v_{i}}{\delta}\right\rfloor\right)$,
and assigning $\left\lfloor\frac{v_{i}}{\delta}\right\rfloor\delta$ with
probability $1-p$. For any $u\neq v\in X$ we have:
$\displaystyle{\rm E}\left[\lvert\langle\tilde{u},\tilde{v}\rangle-\langle
u,v\rangle\rvert\right]$ $\displaystyle\leq$ $\displaystyle{\rm
E}\left[\lvert\langle\tilde{u}-u,v\rangle\rvert\right]+{\rm
E}\left[\lvert\langle\tilde{u},\tilde{v}-v\rangle\rvert\right]$
$\displaystyle\leq$ $\displaystyle\left({\rm
E}\left[(\langle\tilde{u}-u,v\rangle)^{2}\right]\right)^{1/2}+\left({\rm
E}\left[(\langle\tilde{u},\tilde{v}-v\rangle)^{2}\right]\right)^{1/2},$
where the last inequality is by Jensen’s. We bound each term separately.
$\displaystyle{\rm E}[({\langle\tilde{u}-u,v\rangle})^{2}]={\rm
E}\left[\left({\sum_{i=1}^{k}{(\tilde{u}_{i}-u_{i})v_{i}}}\right)^{2}\right]=$
$\displaystyle=$ $\displaystyle\sum_{i=1}^{k}v_{i}^{2}\;{\rm
E}\left[({\tilde{u}_{i}-u_{i}})^{2}\right]+2\sum_{1\leq i\neq j\leq
k}v_{i}v_{j}{\rm E}[\tilde{u_{i}}-u_{i}]{\rm
E}[\tilde{u_{j}}-u_{j}]\leq\delta^{2}\left\|v\right\|_{2}^{2}$
since $\lvert\tilde{u}_{i}-u_{i}\rvert\leq\delta$ and
$E[\tilde{u}_{i}]=u_{i}$. Similarly, for the second term we have
$\displaystyle{\rm E}\left[({\langle\tilde{u},\tilde{v}-v\rangle})^{2}\right]$
$\displaystyle={\rm
E}\left[\left({\sum_{i=1}^{k}{\tilde{u}_{i}(\tilde{v}_{i}-v_{i})}}\right)^{2}\right]\leq\sum_{i=1}^{k}{\rm
E}\left[{\tilde{u}_{i}}^{2}\right]{\rm
E}\left[({\tilde{v}_{i}-v_{i}})^{2}\right]$ (2.0.4)
$\displaystyle+2\sum_{1\leq i\neq j\leq k}{\rm
E}[\tilde{u}_{i}\tilde{u_{j}}(\tilde{v}_{i}-v_{i})]{\rm
E}[\tilde{v}_{j}-v_{j}]\leq\delta^{2}\sum_{i=1}^{k}{\rm
E}[\tilde{u}_{i}^{2}],$
because the random variables $\tilde{u}_{i}$ and $\tilde{v}_{i}$ are
independent. We also have that
$\sum_{i=1}^{k}{\rm E}[\tilde{u}_{i}^{2}]=\sum_{i=1}^{k}{\rm
E}[(u_{i}+(\tilde{u}_{i}-u_{i}))^{2}]=\sum_{i=1}^{k}\left(u_{i}^{2}+2u_{i}{\rm
E}[\tilde{u}_{i}-u_{i}]+{\rm
E}[(\tilde{u}_{i}-u_{i})^{2}]\right)\leq\left\|u\right\|_{2}^{2}+\delta^{2}k.$
Therefore, putting it all together, ${\rm
E}\left[\lvert\langle\tilde{u},\tilde{v}\rangle-\langle
u,v\rangle\rvert\right]\leq\delta r+\delta(r^{2}+\delta^{2}k)^{1/2}\leq
2\delta r+\delta^{2}\sqrt{k}\leq 3\delta r$.
The bound on the average difference in inner product in the lemma follows by
the linearity of expectation, and the implied existence of a mapping with
bound at most the expectation. The upper bound on the representation of the
grid points was essentially given in [6]: The $i$th coordinate of a point $x$
on the grid is given by a sign and an absolute value $n_{i}\delta$, where
$0\leq n_{i}\leq r/\delta$ are integers satisfying $\sum_{1\leq i\leq k}n_{i}^{2}\leq(r/\delta)^{2}$. Hence each point can be represented by the signs together with the binary representations of the $n_{i}$'s, of total size at most $\sum_{i=1}^{k}(\log(n_{i})+1)$, which is maximized when all the $n_{i}^{2}$'s are equal, giving the bound of $k\log(4r/(\delta\sqrt{k}))$.∎
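A sketch of this rounding scheme, with each coordinate moved to an adjacent multiple of $\delta$ with probabilities chosen so that $E[\tilde{v}_{i}]=v_{i}$:

```python
import numpy as np

def randomized_round(X, delta, rng=None):
    """Unbiased rounding of each coordinate to a multiple of delta."""
    rng = np.random.default_rng() if rng is None else rng
    lo = np.floor(X / delta)        # index of the lower grid multiple
    p = X / delta - lo              # P[round up] = fractional part
    up = rng.random(X.shape) < p    # Bernoulli(p) per coordinate
    return (lo + up) * delta        # E[output] = X entrywise
```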
Combining the lemmas we obtain:
###### Corollary 2.1.
For any $I\in\mathcal{P}$ let $f:I\to\ell_{2}^{k}$ be an embedding with
$\mathop{\mbox{$Energy$}}_{1}(f)\leq\epsilon$, with $\epsilon\leq 1/36$. Let
$0<\alpha\leq 1/16$ be a parameter. There is a subset $\hat{I}\subseteq I$ of
size $\big{\lvert}\hat{I}\big{\rvert}\geq(1-\alpha)\lvert I\rvert$ such that
there is a set $G\subset\ell_{2}^{k}$ and a map $g:\hat{I}\to G$ such that
$\frac{1}{\big{\lvert}\binom{\hat{I}}{2}\big{\rvert}}\sum_{(u,v)\in{\binom{\hat{I}}{2}}}\big{\lvert}\langle
g(f(u)),g(f(v))\rangle-\langle
u,v\rangle\big{\rvert}\leq\left(13+\frac{0.76}{\alpha}\right)\epsilon,$
and the points in $G$ can be uniquely represented by binary strings of length
at most $L_{G}=k\log(4r/(\epsilon\sqrt{k}))$ bits, where
$r<1+0.09\frac{1}{\alpha}$.
###### Proof.
The corollary follows by applying Lemma 2.1 followed by Lemma 2.2 with
$X=\hat{I}$ and $\delta=\epsilon$. Note that we may assume that
$\epsilon=\delta<1/\sqrt{k}<r/\sqrt{k}$, as otherwise we are done. ∎
We are now ready to obtain the main technical consequence which will imply the
lower bound. It shows that a special subset of pairs from ${\rm I}$ has all its inner products preserved up to an additive $\Theta(\epsilon)$, and that the corresponding points can be encoded using only a few bits per point.
###### Corollary 2.2.
For any $I\in\mathcal{P}$ let $f:I\to\ell_{2}^{k}$ be an embedding with
$\mathop{\mbox{$Energy$}}_{1}(f)\leq\epsilon$, with $\epsilon\leq 1/36$. Let
$0<\alpha\leq 1/16$ and $\beta>0$ be parameters. There is a subset of points
$G$ that satisfies the following: there is a subset
${\mathcal{Y}^{G}}\subseteq Y$ of size
$\big{\lvert}\mathcal{Y}^{G}\big{\rvert}\geq(1-3\alpha-\frac{3}{\sqrt{2}}\beta)\lvert
Y\rvert$ such that for each $y\in\mathcal{Y}^{G}$ there is a subset
$\mathcal{E}^{G}_{y}\subseteq E$ of size
$\big{\lvert}\mathcal{E}^{G}_{y}\big{\rvert}\geq(1-3\alpha-\frac{3}{\sqrt{2}}\beta)\big{\lvert}E\big{\rvert}$
such that for all pairs $(y,e)\in\mathcal{Y}^{G}\times\mathcal{E}^{G}_{y}$ we
have: $\big{\lvert}\langle g(f(y)),g(f(e))\rangle-\langle
y,e\rangle\big{\rvert}\leq\frac{1}{\beta^{2}}\left(13+\frac{0.76}{\alpha}\right)\epsilon$,
where $g:\mathcal{Y}^{G}\cup\\{\mathcal{E}^{G}_{y}\\}_{y\in\mathcal{Y^{G}}}\to
G$. Moreover, the points in $G$ can be uniquely represented by binary strings
of length at most $L_{G}=k\log(4r/(\epsilon\sqrt{k}))$ bits, where
$r<1+0.09\frac{1}{\alpha}$.
###### Proof.
Applying Corollary 2.1 together with Markov's inequality, at most a $\beta^{2}$ fraction of the pairs $(u,v)\in{\binom{\hat{{\rm I}}}{2}}$ satisfy $\big{\lvert}\langle g(f(u)),g(f(v))\rangle-\langle u,v\rangle\big{\rvert}>\frac{1}{\beta^{2}}\left(13+\frac{0.76}{\alpha}\right)\epsilon$. It follows that the number of pairs in $Y\times E$ that are in ${\binom{\hat{I}}{2}}$ and have this property is at most $\beta^{2}\cdot\frac{3d(3d-1)}{2}\leq\frac{9}{2}\beta^{2}\cdot d^{2}$. Therefore there can be at most $\frac{3}{\sqrt{2}}\beta d$ points $u\in Y$ for which there are more than $\frac{3}{\sqrt{2}}\beta d$ points $v\in E$ with this property. Since there are at most $3\alpha d$ points in each of $Y$ and $E$ which may not be in $\hat{I}$, we obtain the stated bounds on the sizes of $\lvert\mathcal{Y}^{G}\rvert$ and $\lvert\mathcal{E}^{G}_{y}\rvert$. ∎
In the next subsection we show that preserving such special and partial information about the inner products in ${\rm I}$ suffices to uniquely encode the whole instance with a small number of bits.
### 2.1 Encoding algorithm
Let $t=8$. We set $\alpha={1}/({12t})$, $\beta={1}/({\sqrt{2}t})$, which
implies that $r\leq 10$. Therefore, by Corollary 2.2, we can find a subset
$G\subseteq B_{2}(10)$, and a mapping $g:f(I)\to G$, and a subset
$\mathcal{Y}^{G}\subseteq Y$, with
$\lvert\mathcal{Y}^{G}\rvert\geq\left(1-\frac{1}{t}\right)\lvert Y\rvert$,
where for all $y\in\mathcal{Y}^{G}$ we can find a subset
$\mathcal{E}_{y}^{G}\subseteq E$ with
$\lvert\mathcal{E}_{y}^{G}\rvert\geq\left(1-\frac{1}{t}\right)\lvert E\rvert$,
such that for all pairs $(e,y)\in\mathcal{Y}^{G}\times\mathcal{E}_{y}^{G}$ the inner products satisfy $\big{\lvert}\langle g(f(y)),g(f(e))\rangle-\langle y,e\rangle\big{\rvert}\leq 12000\epsilon$. Moreover, each point in $G$ can be
uniquely encoded using at most $L_{G}=k\log(40/(\epsilon\sqrt{k}))$ bits.
We first encode all the points $Y\setminus\mathcal{Y}^{G}$. For each $y_{S}\in
Y\setminus\mathcal{Y}^{G}$ we explicitly write down a bit for each $e_{i}\in
E$ indicating whether $e_{i}\in S$. This requires $d$ bits for each $y_{S}$
and in total at most $\left(\frac{1}{t}\right)d^{2}$ bits for the subset
$Y\setminus\mathcal{Y}^{G}$. The next step is to encode all the points in
$\\{\mathcal{E}^{G}_{y}\\}_{y\in\mathcal{Y}^{G}}$ in a way that will enable us to recover all the vectors in the set together with their indices. We can do that
by writing an ordered list containing $d$ strings (one for each vector in the
set $E$, according to its order). Each string is of length $L_{G}$ bits, where
each point $e_{i}\in\\{\mathcal{E}^{G}_{y}\\}_{y\in\mathcal{Y}^{G}}$ is
encoded by its representation in $G$, i.e., $g(f(e_{i}))$, and the rest of the points (if any) are encoded by zeros. This gives an encoding of total
length $dL_{G}$ bits.
Now we can encode the points in $\mathcal{Y}^{G}$. Each
$y_{S}\in\mathcal{Y}^{G}$ is encoded by the encoding of $g(f(y_{S}))$ using
$L_{G}$ bits, and in addition we add the encoding of the set of indices of the
points in $E\setminus\mathcal{E}^{G}_{y_{S}}$, using at most
$\log{\binom{d}{(1/t)d}}\leq(1/t)d\log(et)$ bits. Note that this information
is not enough in order to recover the vector $y_{S}$, as we can’t deduce
whether $i\in S$ for $e_{i}\in E\setminus\mathcal{E}^{G}_{y_{S}}$. So we add
this information explicitly, by writing down whether $i\in S$ for each
$e_{i}\in E\setminus\mathcal{E}^{G}_{y_{S}}$, using at most $(1/t)d$ bits. In
total, it takes $L_{G}+(1/t)d\log(et)+(1/t)d$ bits per point in
$\mathcal{Y}^{G}$.
Therefore, each instance ${\rm I}\in\mathcal{P}$ can be encoded using at most
$(1/t)d^{2}+dL_{G}+\lvert\mathcal{Y}^{G}\rvert\cdot(L_{G}+d(1/t)\log(et)+(1/t)d)\leq(1/t)d^{2}(2+\log(et))+2dL_{G}$
bits, since $\lvert\mathcal{Y}^{G}\rvert\leq d$. For our choice of $t=8$, this
is at most $\frac{7}{8}d^{2}+2dL_{G}$.
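To make the bit accounting concrete, here is a small sketch of ours (the concrete values of $d$, $k$, $\epsilon$ are illustrative only) that tallies the three parts of the encoding and checks the $\frac{7}{8}d^{2}+2dL_{G}$ budget for $t=8$:

```python
import math

def encoding_bits(d, k, eps, t=8):
    # Upper bound on the encoding length of one instance (base-2 logs).
    L_G = k * math.log2(40 / (eps * math.sqrt(k)))     # bits per grid point in G
    naive = (1 / t) * d * d                            # points in Y \ Y^G, d bits each
    e_list = d * L_G                                   # ordered list of E-images in G
    per_y = L_G + (1 / t) * d * math.log2(math.e * t) + (1 / t) * d
    return naive + e_list + d * per_y                  # using |Y^G| <= d

d, k, eps = 512, 64, 1e-2                              # illustrative parameters only
L_G = k * math.log2(40 / (eps * math.sqrt(k)))
print(encoding_bits(d, k, eps) <= (7 / 8) * d * d + 2 * d * L_G)   # prints True
```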
### 2.2 Decoding algorithm
To recover the instance ${\rm I}$ from the encoding it is enough to recover the vectors $Y$, since the sets of points $O$ and $E$ are the same in every ${\rm I}$. We first recover the set $Y\setminus\mathcal{Y}^{G}$ in a straightforward
way from its naive encoding.
To recover a point $y_{S}\in\mathcal{Y}^{G}$ we need to know for each
$e_{i}\in E$ whether $i\in S$. An important implication of Corollary 2.2 is
that given $g(f(e_{i}))$ and $g(f(y_{S}))$ of any pair
$(e_{i},y_{S})\in\mathcal{Y}^{G}\times\mathcal{E}_{y_{S}}^{G}$, we can decide
whether $i\in S$ by computing $\langle g(f(e_{i})),g(f(y_{S}))\rangle$. Recall
that if $i\not\in S$ then $\langle e_{i},y_{S}\rangle=0$, and if $i\in S$ then
$\langle e_{i},y_{S}\rangle\geq(1/2)\gamma\epsilon$. Therefore, by setting
$\gamma=48001$ we have that if $\langle g(f(e_{i})),g(f(y_{S}))\rangle\leq
12000\epsilon$, then $i\not\in S$, and $i\in S$ otherwise. We can recover each
$g(f(y_{S}))$ for $y_{S}\in\mathcal{Y}^{G}$ from its binary representation.
Next, we recover the set of indices of the points in
$E\setminus\mathcal{E}^{G}_{y_{S}}$, from which we deduce the set of indices
of the points $e_{i}\in\mathcal{E}^{G}_{y_{S}}$. This gives the information
about the set $\\{g(f(e_{i}))\\}_{e_{i}\in\mathcal{E}^{G}_{y_{S}}}$. At this
stage we have all the necessary information to compute the inner products
$\langle g(f(y_{S})),g(f(e_{i}))\rangle$ for all the pairs $y_{S}$ and $e_{i}$
that enable us to correctly decide whether $i\in S$. Finally, for the remaining points $e\in E\setminus\mathcal{E}^{G}_{y_{S}}$ we have the naive encoding, which explicitly states whether $e$ is a part of $y_{S}$.
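The membership test at the heart of the decoder is a single threshold comparison on the rounded inner products; a minimal sketch of ours (the function name is illustrative):

```python
def in_S(g_fy, g_fe, eps):
    # If i is not in S then <e_i, y_S> = 0, while if i is in S then
    # <e_i, y_S> >= (1/2) * gamma * eps > 24000 * eps for gamma = 48001; the
    # rounded inner product is within 12000 * eps of the true one, so a
    # threshold at 12000 * eps separates the two cases.
    return sum(a * b for a, b in zip(g_fy, g_fe)) > 12000 * eps
```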
### 2.3 Deducing the lower bound
From the counting argument, the maximal number of different sets that can be
recovered from the encoding of length at most $\rho=\frac{7}{8}d^{2}+2dL_{G}$
is at most $2^{\rho}$. This implies
$\frac{7}{8}d^{2}+2dL_{G}\geq\log\lvert\mathcal{P}\rvert$. On the other hand,
the size of the family is $\lvert\mathcal{P}\rvert={\binom{d}{l}}^{d}$. Recall
that we have set $d=2l$ so we have that
$\lvert\mathcal{P}\rvert\geq{\binom{2l}{l}}^{2l}\geq\left(2^{(2l-1)}/\sqrt{l}\right)^{2l}\geq 2^{4l^{2}-2l-l\log l}\geq 2^{3.9l^{2}}$, where the last estimate follows from
our assumption on $\epsilon$. Therefore, $\frac{7}{2}l^{2}+4lL_{G}\geq
3.9l^{2}$, implying $L_{G}\geq(1/10)l$, where
$L_{G}=k\log(40/(\epsilon\sqrt{k}))=\frac{1}{2}k\log\left(16(\frac{10}{\epsilon})^{2}\frac{1}{k}\right)$.
This implies that
$k\log\left(16\left(\frac{10}{\epsilon}\right)^{2}\frac{1}{k}\right)\geq(1/5)l\geq
1/(5\gamma^{2}\cdot\epsilon^{2})$. Setting
$x=k\cdot(5\gamma^{2}\cdot\epsilon^{2})$ we have that
$1\leq x\log\left(\frac{0.5}{x}\cdot
2^{14}\gamma^{2}\right)=x\log(0.5/x)+x\log\left(2^{14}\gamma^{2}\right)\leq
1/2+2x(7+\log\gamma),$
where in the last inequality we used $x\log(0.5/x)\leq 0.5/(e\ln 2)<1/2$ for all $x>0$. This implies the desired lower bound on the dimension: $k\geq$
1/(20\gamma^{2}(7+\log\gamma)\cdot\epsilon^{2})$.
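For completeness, here is the elementary estimate invoked in the last step (a short verification we add; it is not spelled out above): for $h(x)=x\log(0.5/x)$ we have $h^{\prime}(x)=\log(0.5/x)-\log e$, which vanishes at $x=1/(2e)$, so that
$\max_{x>0}x\log(0.5/x)=\frac{1}{2e}\log e=\frac{0.5}{e\ln 2}\approx 0.265<\frac{1}{2}\,.$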
## 3 Lower bounds for $q$-moments of distortion
In this section we prove Theorem 1.3 which provides a lower bound for
$q$-moments of distortion. Similarly to the proof for $\ell_{1}$-distortion in Section 2, we first prove the theorem for a metric space of fixed size $\hat{n}=O(1/\epsilon^{2})\cdot e^{O(\epsilon q)}$; the result extends to metric spaces of size $\Theta(n)$ for any $n$ by the following variation of a lemma proved in [8]:
###### Lemma 3.1.
Let $(X,d_{X})$ be a metric space of size $\lvert X\rvert=n>1$, and let
$(Y,d_{Y})$ be a metric space. Assume that for any embedding $f:X\to Y$ it
holds that $\mathop{\mbox{$\ell_{q}$-$\mathit{dist}$}}(f)\geq 1+\epsilon$. For
any ${n}\geq\hat{n}$ there is a metric space $Z$ of size $\lvert
Z\rvert=\Theta({n})$ such that any embedding $F:Z\to Y$ has
$\mathop{\mbox{$\ell_{q}$-$\mathit{dist}$}}(F)\geq 1+\epsilon/2$. Moreover, if
$X$ is a Euclidean subset then there is an embedding from $Z$ into a finite
dimensional Euclidean space with distortion $1+\delta$ for any $\delta>0$.
For simplicity we may assume w.l.o.g. that $q\geq\frac{3}{\epsilon}$; otherwise the theorem follows from Theorem 1.2 by the monotonicity of the $\ell_{q}$-distortion. The proof strategy has exactly the same structure as the proof in Section 2; however, the sets $I$ are constructed using different parameters. For a given $\epsilon>0$, let
$l=\lceil\frac{1}{\gamma^{2}\epsilon^{2}}\rceil$ be an integer for some large
constant $\gamma>1$ to be determined later. We construct point sets ${\rm
I}\subset\ell_{2}^{d}$, where $d=l\tau$, $\tau=e^{\epsilon q}$, and $|{\rm
I}|=3d$. Assume that for all ${\rm I}\in\mathcal{P}$ there is an embedding
$f:I\to\ell_{2}^{k}$, with $\mathop{\mbox{$\ell_{q}$-$\mathit{dist}$}}(f)\leq
1+\epsilon$. We show that this implies that $k=\Omega(q/\epsilon)$.
As before, the strategy is to produce a unique binary encoding of ${\rm I}$ of length ${\rm length(I)}$. We will obtain that $\lvert\mathcal{P}\rvert=\binom{d}{l}^{d}\geq(d/l)^{ld}$, which gives ${\rm length(I)}\geq dl\log(d/l)=dl\log(\tau)$. We will show that this implies the bound $k\geq\Omega(l\log(\tau))=\Omega(1/\epsilon^{2}\cdot\epsilon q)=\Omega(q/\epsilon)$.
As in the proof of Theorem 1.2, we can assume w.l.o.g. that $\epsilon\leq
1/\gamma$, which by the choice of $\gamma$ later on implies $\epsilon<1/36$.
###### Lemma 3.2.
For any $I\in\mathcal{P}$ let $f:I\to\ell_{2}^{k}$ be an embedding with
$\mathop{\mbox{$\ell_{q}$-$\mathit{dist}$}}(f)\leq 1+\epsilon$, for
$\epsilon<1/36$. There is a subset $\hat{I}\subseteq I$ of size
$\big{\lvert}\hat{I}\big{\rvert}\geq(1-3/\tau^{4})\lvert I\rvert$ such that
$f(\hat{I})\subset B_{2}\left(1+6.02\epsilon\right)$, and for $1-2/\tau^{4}$
fraction of the pairs $(u,v)\in{\binom{\hat{I}}{2}}$ it holds that
$\big{\lvert}\langle f(u),f(v)\rangle-\langle u,v\rangle\big{\rvert}\leq
32\epsilon$.
###### Proof.
By assumption we have
$\left(\mathop{\mbox{$\ell_{q}$-$\mathit{dist}$}}(f)\right)^{q}=\frac{1}{\big{\lvert}{\binom{I}{2}}\big{\rvert}}\sum_{(u,v)\in{\binom{I}{2}}}{\left({\mathop{\mbox{$dist$}}}_{f}(u,v)\right)^{q}}\leq(1+\epsilon)^{q}$.
By the Markov inequality, at least a $1-1/\tau^{4}$ fraction of the pairs $(u,v)\in{\binom{I}{2}}$ satisfy $(\mathop{\mbox{$dist$}}_{f}(u,v))^{q}\leq\tau^{4}(1+\epsilon)^{q}\leq(1+\epsilon)^{q}\cdot e^{4\epsilon q}$, implying that $\mathop{\mbox{$dist$}}_{f}(u,v)\leq 1+6\epsilon$. Therefore,
$\big{\lvert}{\mathop{\mbox{$expans$}}}_{f}(u,v)-1\big{\rvert}\leq\max\\{{\mathop{\mbox{$expans$}}}_{f}(u,v)-1,1/{\mathop{\mbox{$expans$}}}_{f}(u,v)-1\\}={\mathop{\mbox{$dist$}}}_{f}(u,v)-1\leq
6\epsilon.$
For every $o_{j}\in O$, let $F_{j}$ be the set of points $v\in
I\setminus\\{o_{j}\\}$ such that
${\big{\lvert}{\mathop{\mbox{$expans$}}}_{f}(o_{j},v)-1\big{\rvert}}>6\epsilon$.
Then the total number of pairs $(u,v)\in{\binom{I}{2}}$ with the property that
${\big{\lvert}{\mathop{\mbox{$expans$}}}_{f}(u,v)-1\big{\rvert}}>6\epsilon$ is
at least $\sum_{j=1}^{d}|F_{j}|/2$, implying that there must be a point
$\hat{o}=o_{j^{*}}\in O$ such that
$|F_{j*}|\leq\frac{1}{\tau^{4}}\cdot\frac{3d(3d-1)}{d}\leq\frac{3}{\tau^{4}}(3d-1)$.
Define $\hat{I}=I\setminus F_{j^{*}}$ to be the complement of this set, so that $|\hat{I}|\geq(1-\frac{3}{\tau^{4}})|I|$. We assume without loss of generality
that $f(\hat{o})=0$. Let $\hat{O}=O\cap\hat{I}$. We have that
$\lvert{\mathop{\mbox{$expans$}}}_{f}(v,\hat{o})-1\rvert=\lvert\frac{\left\|f(v)\right\|_{2}}{\left\|v-\hat{o}\right\|_{2}}-1\rvert\leq 6\epsilon$ for all $v\in\hat{I}$; using $\left\|v-\hat{o}\right\|_{2}\leq\left\|v\right\|_{2}+\left\|\hat{o}\right\|_{2}\leq 1+\epsilon/100$ we get $\left\|f(v)\right\|_{2}\leq(1+6\epsilon)(1+\epsilon/100)\leq 1+6.02\epsilon$, implying that $f(v)\in B_{2}\left(1+6.02\epsilon\right)$.
Denote by $\hat{G}$ the set of pairs $(u,v)\in{\binom{\hat{I}}{2}}$ satisfying
that ${\big{\lvert}{\mathop{\mbox{$expans$}}}_{f}(u,v)-1\big{\rvert}}\leq
6\epsilon$. To bound the fraction of these pairs from below, we can first
bound $|\hat{I}|\geq(1-\frac{3}{\tau^{4}})|I|\geq\frac{5}{2}d$ and
$|\hat{I}|-1\geq 2d$, using that $\tau>3$ by our assumption on $q$. Therefore,
we have that the fraction of pairs
$(u,v)\in{\binom{\hat{I}}{2}}\setminus\hat{G}$ is at most
$\frac{1}{\tau^{4}}\cdot\frac{3d(3d-1)}{|\hat{I}|(|\hat{I}|-1)}\leq\frac{1}{\tau^{4}}\cdot\frac{9}{5}\leq\frac{2}{\tau^{4}}.$
Finally, to estimate the absolute difference in inner products over the set of
pairs $\hat{G}$ we recall some of the estimates from the proof of Section 2.
For all $(u,v)\in\hat{G}$ we have:
$\displaystyle\big{\lvert}\langle f(u),f(v)\rangle-\langle
u,v\rangle\big{\rvert}$ $\displaystyle\leq$
$\displaystyle\frac{1}{2}\left[\big{\lvert}\left\|f(u)\right\|_{2}^{2}-\left\|u\right\|_{2}^{2}\big{\rvert}+\big{\lvert}\left\|f(v)\right\|_{2}^{2}-\left\|v\right\|_{2}^{2}\big{\rvert}\right]$
$\displaystyle+$
$\displaystyle\frac{1}{2}\left[\big{\lvert}\left\|f(u)-f(v)\right\|_{2}^{2}-\left\|u-v\right\|_{2}^{2}\big{\rvert}\right].$
We can bound each term as follows:
$\displaystyle\big{\lvert}\left\|f(u)\right\|_{2}^{2}-\left\|u\right\|_{2}^{2}\big{\rvert}$
$\displaystyle=$
$\displaystyle\lvert\left\|f(u)-f(\hat{o})\right\|_{2}^{2}-\left\|u-\hat{o}\right\|_{2}^{2}+\left\|u-\hat{o}\right\|_{2}^{2}-\left\|u\right\|_{2}^{2}\rvert$
$\displaystyle\leq$
$\displaystyle\lvert\left\|f(u)-f(\hat{o})\right\|_{2}-\left\|u-\hat{o}\right\|_{2}\rvert\cdot(\left\|f(u)-f(\hat{o})\right\|_{2}+\left\|u-\hat{o}\right\|_{2})$
$\displaystyle+$
$\displaystyle\lvert\left\|u-\hat{o}\right\|_{2}-\left\|u\right\|_{2}\rvert\cdot(\left\|u-\hat{o}\right\|_{2}+\left\|u\right\|_{2})$
$\displaystyle\leq$
$\displaystyle\left\|u-\hat{o}\right\|_{2}\lvert{\mathop{\mbox{$expans$}}}_{f}(u,\hat{o})-1\rvert\cdot(\left\|f(u)\right\|_{2}+\left\|u-\hat{o}\right\|_{2})+\left\|\hat{o}\right\|_{2}\cdot(\left\|u-\hat{o}\right\|_{2}+\left\|u\right\|_{2})$
$\displaystyle\leq$
$\displaystyle\left(1+\frac{\epsilon}{100}\right)\lvert{\mathop{\mbox{$expans$}}}_{f}(u,\hat{o})-1\rvert\left(1+6.02\epsilon+1+\frac{\epsilon}{100}\right)+\frac{\epsilon}{100}\cdot\left(2+\frac{\epsilon}{100}\right)$
$\displaystyle\leq$
$\displaystyle\left(2+6.06\epsilon\right)\lvert{\mathop{\mbox{$expans$}}}_{f}(u,\hat{o})-1\rvert+\frac{\epsilon}{40}\leq\left(2+6.06\epsilon\right)\cdot
6\epsilon+\frac{\epsilon}{40}\leq 14\epsilon,$
where we have used $\left\|\hat{o}\right\|\leq\epsilon/100$,
$\left\|u-\hat{o}\right\|_{2}\leq\left\|u\right\|_{2}+\left\|\hat{o}\right\|_{2}\leq
1+\epsilon/100$, the bound on the norms of the embedding within $\hat{I}$, and
the property of pairs in $\hat{G}$. Additionally, we have that
$\displaystyle\big{\lvert}\left\|f(u)-f(v)\right\|_{2}^{2}-\left\|u-v\right\|_{2}^{2}\big{\rvert}$
$\displaystyle=$
$\displaystyle\lvert\left\|f(u)-f(v)\right\|_{2}-\left\|u-v\right\|_{2}\rvert\cdot(\left\|f(u)-f(v)\right\|_{2}+\left\|u-v\right\|_{2})$
$\displaystyle\leq$
$\displaystyle\left\|u-v\right\|_{2}\lvert{\mathop{\mbox{$expans$}}}_{f}(u,v)-1\rvert\cdot(\left\|f(u)\right\|_{2}+\left\|f(v)\right\|_{2}+\left\|u-v\right\|_{2})$
$\displaystyle\leq$
$\displaystyle\sqrt{2}\left(2\left(1+6.02\epsilon\right)+\sqrt{2}\right)\lvert{\mathop{\mbox{$expans$}}}_{f}(u,v)-1\rvert\leq
6\lvert{\mathop{\mbox{$expans$}}}_{f}(u,v)-1\rvert\leq 36\epsilon,$
since $\left\|u-v\right\|_{2}\leq\mathrm{diam}(I)=\sqrt{2}$, and the last step follows using the property of pairs in $\hat{G}$. We conclude that for all
$(u,v)\in\hat{G}$: $\lvert\langle f(u),f(v)\rangle-\langle
u,v\rangle\rvert\leq\frac{1}{2}\left(2\cdot
14\epsilon+36\epsilon\right)=32\epsilon$. ∎
As before, the goal is to encode the images of the embedding using a sufficiently small number of bits, by rounding them to the points of a grid of the Euclidean ball via the randomized rounding technique of [6], so as to preserve the inner product gap. The following lemma bounds the probability that this procedure fails.
###### Lemma 3.3.
Let $X\subset\ell_{2}^{k}$ such that $X\subset B_{2}(r)$. For $\delta\leq
r/\sqrt{k}$ let $G_{\delta}\subseteq B_{2}(r)$ denote the intersection of the
$\delta$-grid with $B_{2}(r)$. There is a mapping $g:X\to G_{\delta}$ such
that for any $\eta\geq 1$, there is a $1-4e^{-\eta^{2}}$ fraction of the pairs
$(u,v)\in{\binom{X}{2}}$ such that $\lvert\langle g(u),g(v)\rangle-\langle
u,v\rangle\rvert\leq 3\sqrt{2}\eta\delta r$, and the points of the grid can be
represented using $L_{G_{\delta}}=k\log(4r/(\delta\sqrt{k}))$ bits.
###### Proof.
For each point $v\in X$ randomly and independently match a point $\tilde{v}$ on the grid by rounding each of its coordinates $v_{i}$ to one of the closest integral multiples of $\delta$, in such a way that ${\rm E}[\tilde{v}_{i}]=v_{i}$. For any $u\neq v\in X$ we have:
$\lvert\langle\tilde{u},\tilde{v}\rangle-\langle
u,v\rangle\rvert\leq\lvert\langle\tilde{u}-u,v\rangle\rvert+\lvert\langle\tilde{u},\tilde{v}-v\rangle\rvert$.
Now, ${\rm E}[\langle\tilde{u}-u,v\rangle]=\sum_{i=1}^{k}{{\rm
E}[\tilde{u}_{i}-u_{i}]v_{i}}=0$ and ${\rm
E}[\langle\tilde{u},\tilde{v}-v\rangle]=\sum_{i=1}^{k}{{\rm
E}[\tilde{u}_{i}]{\rm E}[\tilde{v}_{i}-v_{i}]}=0$. Next, we wish to make use
of the Hoeffding bound. We therefore bound each of the terms $|(\tilde{u}_{i}-u_{i})v_{i}|\leq\delta|v_{i}|$, so that $\sum_{i=1}^{k}\delta^{2}v_{i}^{2}\leq\delta^{2}r^{2}$, and $|\tilde{u}_{i}(\tilde{v}_{i}-v_{i})|\leq\delta(|u_{i}|+\delta)$, so that
$\sum_{i=1}^{k}\delta^{2}(|u_{i}|+\delta)^{2}=\delta^{2}\sum_{i=1}^{k}(u_{i}^{2}+2\delta|u_{i}|+\delta^{2})\leq\delta^{2}(r^{2}+2\delta\left\|u\right\|_{1}+\delta^{2}k)\leq\delta^{2}(r^{2}+2\delta r\sqrt{k}+\delta^{2}k)\leq 4\delta^{2}r^{2}.$
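For concreteness, the form of Hoeffding's inequality we invoke (a standard version for independent zero-mean summands $X_{i}$ with $|X_{i}|\leq c_{i}$; we state it here for completeness) is
$\Pr\left[\Big{\lvert}\sum_{i=1}^{k}X_{i}\Big{\rvert}>s\right]\leq 2e^{-s^{2}/(2\sum_{i=1}^{k}c_{i}^{2})}\,,$
applied with $s=\sqrt{2}\eta\delta r$, $\sum_{i}c_{i}^{2}\leq\delta^{2}r^{2}$ for the first sum, and with $s=2\sqrt{2}\eta\delta r$, $\sum_{i}c_{i}^{2}\leq 4\delta^{2}r^{2}$ for the second.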
Applying the Hoeffding bound we get that
$\Pr[\lvert\langle\tilde{u}-u,v\rangle\rvert>\sqrt{2}\eta\delta r]\leq
2e^{-\eta^{2}}$
and $\Pr[\lvert\langle\tilde{u},\tilde{v}-v\rangle\rvert>2\sqrt{2}\eta\delta
r]\leq 2e^{-\eta^{2}}$, and therefore
$\Pr[\lvert\langle\tilde{u},\tilde{v}\rangle-\langle
u,v\rangle\rvert>3\sqrt{2}\eta\delta r]\leq 4e^{-\eta^{2}}.$
This probability also bounds the expected fraction of pairs with this property, so there must exist an embedding into the grid for which the bound stated in the lemma holds. The bound on the representation size is the same as in Lemma 2.2.
∎
Combining the lemmas we obtain:
###### Corollary 3.1.
For any $I\in\mathcal{P}$ let $f:I\to\ell_{2}^{k}$ be an embedding with
$\mathop{\mbox{$\ell_{q}$-$\mathit{dist}$}}(f)\leq 1+\epsilon$, with
$\epsilon\leq 1/36$. There is a subset $\hat{I}\subseteq I$ of size
$\big{\lvert}\hat{I}\big{\rvert}\geq(1-3/\tau^{4})\lvert I\rvert$ such that
for a fraction of at least $1-6/\tau^{4}$ of the pairs
$(u,v)\in{\binom{\hat{I}}{2}}$ it holds that: $\big{\lvert}\langle
g(f(u)),g(f(v))\rangle-\langle u,v\rangle\big{\rvert}\leq 42\epsilon$, where
$g:\hat{I}\to G$. Moreover, the points in $G$ can be uniquely represented by
binary strings of length at most $L_{G}=k\log(5\sqrt{q/(\epsilon k)})$ bits.
###### Proof.
The corollary follows by applying Lemma 3.2 followed by Lemma 3.3 with
$X=\hat{I}$ with $\delta=2\sqrt{\epsilon/q}$ and $\eta=\sqrt{\ln(\tau)}$. Note
that we may assume that indeed
$2\sqrt{\epsilon/q}=\delta<1/\sqrt{k}<r/\sqrt{k}$, since otherwise we are
done. Therefore, the increase in the absolute difference of the inner products
due to the grid embedding is at most: $3\sqrt{2}\eta\delta
r=6r\sqrt{2\ln(\tau)\epsilon/q}=6r\sqrt{2(\epsilon q)\epsilon/q}\leq
10\epsilon$. The bound on representation of the grid follows from Lemma 3.3:
$L_{G}=k\log(4r/(\delta\sqrt{k}))=k\log(4r\sqrt{q/(\epsilon k)})\leq
k\log(5\sqrt{q/(\epsilon k)})$. ∎
We are ready to obtain the main technical consequence which will imply the
lower bound:
###### Corollary 3.2.
For any $I\in\mathcal{P}$ let $f:I\to\ell_{2}^{k}$ be an embedding with
$\mathop{\mbox{$\ell_{q}$-$\mathit{dist}$}}(f)\leq 1+\epsilon$, with
$\epsilon\leq 1/36$. There is a subset of points $G$ that satisfies the
following: there is a subset ${\mathcal{Y}^{G}}\subseteq Y$ of size
$\big{\lvert}\mathcal{Y}^{G}\big{\rvert}\geq(1-6/\tau^{2})\lvert Y\rvert$ such
that for each $y\in\mathcal{Y}^{G}$ there is a subset
$\mathcal{E}^{G}_{y}\subseteq E$ of size
$\big{\lvert}\mathcal{E}^{G}_{y}\big{\rvert}\geq(1-6/\tau^{2})\big{\lvert}E\big{\rvert}$
such that for all pairs $(y,e)\in\mathcal{Y}^{G}\times\mathcal{E}^{G}_{y}$ we
have: $\big{\lvert}\langle g(f(y)),g(f(e))\rangle-\langle
y,e\rangle\big{\rvert}\leq 42\epsilon$, where
$g:\mathcal{Y}^{G}\cup\\{\mathcal{E}^{G}_{y}\\}_{y\in\mathcal{Y^{G}}}\to G$.
Moreover, the points in $G$ can be uniquely represented by binary strings of
length at most $L_{G}=k\log(5\sqrt{q/(\epsilon k)})$ bits.
###### Proof.
Applying Corollary 3.1 we have that at most a $6/\tau^{4}$ fraction of the pairs $(u,v)\in{\binom{\hat{I}}{2}}$ satisfy $\big{\lvert}\langle g(f(u)),g(f(v))\rangle-\langle u,v\rangle\big{\rvert}>42\epsilon$. It follows
that the number of pairs in $Y\times E$ that are in ${\binom{\hat{I}}{2}}$ and
have this property is at most
$\frac{6}{\tau^{4}}\cdot\frac{3d(3d-1)}{2}\leq\frac{27}{\tau^{4}}\cdot d^{2}$.
Therefore there can be at most $\frac{3\sqrt{3}}{\tau^{2}}\cdot d$ points in
$u\in Y$ such that there are more than $\frac{3\sqrt{3}}{\tau^{2}}d$ points in
$v\in E$ with this property. Since there are at most $\frac{3}{\tau^{4}}\cdot d<\frac{0.5}{\tau^{2}}\cdot d$ points in each of $Y$ and $E$ which may not be in $\hat{I}$, we obtain the stated bounds on the sizes of
$\lvert\mathcal{Y}^{G}\rvert$ and $\lvert\mathcal{E}^{G}_{y}\rvert$. ∎
As before, we proceed to show that the whole instance ${\rm I}$ can be encoded using a sufficiently small number of bits.
### 3.1 Encoding and decoding
For a set ${\rm I}\in\mathcal{P}$ let $f:{\rm I}\to\ell_{2}^{k}$ be an
embedding with $\mathop{\mbox{$\ell_{q}$-$\mathit{dist}$}}(f)=1+\epsilon$,
where $\Omega\left(\frac{1}{\sqrt{n}}\right)\leq\epsilon<1/36$, and
$q=O(\log(\epsilon^{2}n)/\epsilon)$. Let $t=\tau^{2}/6$. Therefore, by
Corollary 3.2, we can find a subset $G\subseteq B_{2}(2)$, and a mapping
$g:f(I)\to G$, and a subset $\mathcal{Y}^{G}\subseteq Y$, with
$\lvert\mathcal{Y}^{G}\rvert\geq\left(1-\frac{1}{t}\right)\lvert Y\rvert$,
where for all $y\in\mathcal{Y}^{G}$ we can find a subset
$\mathcal{E}_{y}^{G}\subseteq E$ with
$\lvert\mathcal{E}_{y}^{G}\rvert\geq\left(1-\frac{1}{t}\right)\lvert E\rvert$,
such that for all pairs $(e,y)\in\mathcal{Y}^{G}\times\mathcal{E}_{y}^{G}$ the inner products satisfy $\big{\lvert}\langle g(f(y)),g(f(e))\rangle-\langle y,e\rangle\big{\rvert}\leq 42\epsilon$. Moreover, each point in $G$ can be
uniquely encoded using at most $L_{G}=k\log(5\sqrt{q/(\epsilon k)})$ bits.
The encoding is done according to the description in Section 2.1, so we
similarly obtain the following bound on the bit length of the encoding:
$(1/t)d^{2}(2+\log(et))+2dL_{G}$.
The decoding works in the same way as before for an appropriate choice of
$\gamma=169$.
### 3.2 Deducing the lower bound
In this subsection we show that $k=\Omega(q/\epsilon)$, proving the desired
lower bound for the case of $n=3d=O(1/\epsilon^{2})\cdot e^{O(\epsilon q)}$.
From the counting argument, the maximal number of different sets that can be
recovered from the encoding of length at most
$\rho=(1/t)d^{2}(2+\log(et))+2dL_{G}$ is at most $2^{\rho}$. This implies
$(1/t)d^{2}(2+\log(et))+2dL_{G}\geq\log\lvert\mathcal{P}\rvert.$
On the other hand, the size of the family is
$\lvert\mathcal{P}\rvert={\binom{d}{l}}^{d}\geq(d/l)^{ld}=\tau^{ld}$, so that
$\log(\lvert\mathcal{P}\rvert)=ld\log(\tau)$. We therefore derive the
following inequality
$(1/t)d^{2}(2+\log(et))+2dL_{G}\geq ld\log(\tau)\Rightarrow
L_{G}\geq(1/4)l\log(\tau),$
as
$(1/t)d(2+\log(et))\leq d(2\log(\tau)+4)/\tau^{2}\leq
d/(2\tau)\log(\tau)=l\log(\tau)/2,$
using that $\log(\tau)>4$.
Recall that $L_{G}=k\log(5\sqrt{q/(\epsilon
k)})=\frac{1}{2}k\log\left(25(q/(\epsilon k))\right)$. This implies that
$k\log\left(25\left(\frac{q}{\epsilon k}\right)\right)\geq(1/2)l\log(\tau)\geq
1/(2\gamma^{2}\cdot\epsilon^{2})\cdot\epsilon q=1/(2\gamma^{2})\cdot
q/\epsilon.$
Setting $x=k\cdot(2\gamma^{2}\cdot\epsilon/q)$ we have that
$1\leq x\log\left(\frac{0.5}{x}\cdot
100\gamma^{2}\right)=x\log(0.5/x)+x\log\left(100\gamma^{2}\right)\leq
1/2+2x\log(10\gamma),$
where in the last inequality we used $x\log(0.5/x)\leq 0.5/(e\ln 2)<1/2$ for all $x>0$. This implies the desired lower bound on the dimension: $k\geq$
1/(8\gamma^{2}\log(10\gamma))\cdot q/\epsilon$.
## References
* [1] Ittai Abraham, Yair Bartal, T-H. Hubert Chan, Kedar Dhamdhere Dhamdhere, Anupam Gupta, Jon Kleinberg, Ofer Neiman, and Aleksandrs Slivkins. Metric embeddings with relaxed guarantees. In Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science, FOCS ’05, page 83–100, USA, 2005. IEEE Computer Society. doi:10.1109/SFCS.2005.51.
* [2] Ittai Abraham, Yair Bartal, and Ofer Neiman. Embedding metrics into ultrametrics and graphs into spanning trees with constant average distortion. In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’07, page 502–511, USA, 2007. Society for Industrial and Applied Mathematics.
* [3] Ittai Abraham, Yair Bartal, and Ofer Neiman. Embedding metrics into ultrametrics and graphs into spanning trees with constant average distortion. In Proceedings of the 18th annual ACM-SIAM symposium on Discrete algorithms, SODA ’07, pages 502–511, Philadelphia, PA, USA, 2007. Society for Industrial and Applied Mathematics. URL: http://portal.acm.org/citation.cfm?id=1283383.1283437.
* [4] Ittai Abraham, Yair Bartal, and Ofer Neiman. Advances in metric embedding theory. Advances in Mathematics, 228(6):3026–3126, 2011. URL: http://www.sciencedirect.com/science/article/pii/S000187081100288X, doi:10.1016/j.aim.2011.08.003.
* [5] Noga Alon. Perturbed identity matrices have high rank: Proof and applications. Combinatorics, Probability & Computing, 18(1-2):3–15, 2009. URL: http://dx.doi.org/10.1017/S0963548307008917, doi:10.1017/S0963548307008917.
* [6] Noga Alon and Bo’az Klartag. Optimal compression of approximate inner products and dimension reduction. In 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), pages 639–650, 2017. doi:10.1109/FOCS.2017.65.
* [7] Ehsan Amid and Manfred K. Warmuth. Trimap: Large-scale dimensionality reduction using triplets, 2019. arXiv:1910.00204.
* [8] Yair Bartal, Nova Fandina, and Ofer Neiman. Dimensionality reduction: theoretical perspective on practical measures. In Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 10576–10588, 2019. URL: https://proceedings.neurips.cc/paper/2019/hash/94f4ede62112b790c91d5e64fdb09cb8-Abstract.html.
* [9] Yair Bartal, Nathan Linial, Manor Mendel, and Assaf Naor. On metric ramsey-type phenomena. Annals of Mathematics, 162(2):643–709, 2005. URL: http://www.jstor.org/stable/20159927.
* [10] A. Censi and D. Scaramuzza. Calibration by correlation using metric embedding from nonmetric similarities. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(10):2357–2370, Oct. 2013. URL: doi.ieeecomputersociety.org/10.1109/TPAMI.2013.34, doi:10.1109/TPAMI.2013.34.
* [11] Leena Chennuru Vankadara and Ulrike von Luxburg. Measures of distortion for machine learning. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 4891–4900. Curran Associates, Inc., 2018.
* [12] Leena Chennuru Vankadara and Ulrike von Luxburg. Measures of distortion for machine learning. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018.
* [13] Russ Cox, Frank Dabek, Frans Kaashoek, Jinyang Li, and Robert Morris. Practical, distributed network coordinates. SIGCOMM Comput. Commun. Rev., 34(1):113–118, January 2004. URL: http://doi.acm.org/10.1145/972374.972394, doi:10.1145/972374.972394.
* [14] Michael Elkin, Arnold Filtser, and Ofer Neiman. Prioritized metric structures and embedding. In Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing, STOC ’15, page 489–498, New York, NY, USA, 2015. Association for Computing Machinery. doi:10.1145/2746539.2746623.
* [15] Michael Elkin, Arnold Filtser, and Ofer Neiman. Terminal Embeddings. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2015), volume 40 of Leibniz International Proceedings in Informatics (LIPIcs), pages 242–264, 2015.
* [16] Patrick J. F. Groenen, Rudolf Mathar, and Willem J. Heiser. The majorization approach to multidimensional scaling for minkowski distances. Journal of Classification, 12(1):3–19, 1995.
* [17] W. J Heiser. Multidimensional scaling with least absolute residuals. In In H. H. Bock (Ed.) Classification and related methods, pages 455–462. Amsterdam: NorthHolland, 1988a.
* [18] P. Indyk. Algorithmic applications of low-distortion geometric embeddings. In Proceedings of the 42nd IEEE Symposium on Foundations of Computer Science, FOCS ’01, page 10, USA, 2001. IEEE Computer Society.
* [19] Piotr Indyk and Jiri Matousek. Low-distortion embeddings of finite metric spaces. In in Handbook of Discrete and Computational Geometry, pages 177–196. CRC Press, 2004.
* [20] Piotr Indyk and Rajeev Motwani. Approximate nearest neighbors: Towards removing the curse of dimensionality. In Proceedings of the Thirtieth Annual ACM Symposium on Theory of Computing, STOC ’98, pages 604–613, New York, NY, USA, 1998. ACM. URL: http://doi.acm.org/10.1145/276698.276876, doi:10.1145/276698.276876.
* [21] William B. Johnson and Joram Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. In Conference in modern analysis and probability (New Haven, Conn., 1982), pages 189–206. American Mathematical Society, Providence, RI, 1984.
* [22] Jon Kleinberg, Aleksandrs Slivkins, and Tom Wexler. Triangulation and embedding using small sets of beacons. J. ACM, 56(6):32:1–32:37, September 2009. URL: http://doi.acm.org/10.1145/1568318.1568322, doi:10.1145/1568318.1568322.
* [23] Deepanshu Kush, Aleksandar Nikolov, and Haohua Tang. Near neighbor search via efficient average distortion embeddings. In 37th International Symposium on Computational Geometry, SoCG 2021, June 7-11, 2021, Buffalo, NY, USA (Virtual Conference), pages 50:1–50:14, 2021. doi:10.4230/LIPIcs.SoCG.2021.50.
* [24] Kasper Green Larsen and Jelani Nelson. Optimality of the johnson-lindenstrauss lemma. In 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), pages 633–638, 2017. doi:10.1109/FOCS.2017.64.
* [25] N. Linial. Finite metric spaces- combinatorics, geometry and algorithms. In Proceedings of the ICM, 2002.
* [26] Jiří Matoušek. Bi-Lipschitz embeddings into low-dimensional Euclidean spaces. Commentat. Math. Univ. Carol., 31(3):589–600, 1990.
* [27] Jiří Matoušek. Lectures on Discrete Geometry. Springer-Verlag New York, Inc., Secaucus, NJ, USA, 2002.
* [28] Leland McInnes, John Healy, Nathaniel Saul, and Lukas Großberger. Umap: Uniform manifold approximation and projection. Journal of Open Source Software, 3(29):861, 2018. doi:10.21105/joss.00861.
* [29] Assaf Naor. Comparison of metric spectral gaps. Analysis and Geometry in Metric Spaces, 2:2:1–52, 2014.
* [30] Assaf Naor. An average John theorem. Geometry and Topology, 25(4):1631 – 1717, 2021. doi:10.2140/gt.2021.25.1631.
* [31] Puneet Sharma, Zhichen Xu, Sujata Banerjee, and Sung-Ju Lee. Estimating network proximity and latency. Computer Communication Review, 36(3):39–50, 2006. URL: http://doi.acm.org/10.1145/1140086.1140092, doi:10.1145/1140086.1140092.
* [32] Yuval Shavitt and Tomer Tankel. Big-bang simulation for embedding network distances in euclidean space. IEEE/ACM Trans. Netw., 12(6):993–1006, December 2004. URL: http://dx.doi.org/10.1109/TNET.2004.838597, doi:10.1109/TNET.2004.838597.
* [33] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of Machine Learning Research, 9(86):2579–2605, 2008. URL: http://jmlr.org/papers/v9/vandermaaten08a.html.
* [34] Santosh Srinivas Vempala. The random projection method, volume 65 of DIMACS series in discrete mathematics and theoretical computer science. Providence, R.I. American Mathematical Society, 2004. URL: http://opac.inria.fr/record=b1101689.
* [35] J. Fernando Vera, Willem J. Heiser, and Alex Murillo. Global optimization in any minkowski metric: A permutation-translation simulated annealing algorithm for multidimensional scaling. J. Classif., 24(2):277–301, September 2007.
* [36] Yingfan Wang, Haiyang Huang, Cynthia Rudin, and Yaron Shaposhnik. Understanding how dimension reduction tools work: An empirical approach to deciphering t-sne, umap, trimap, and pacmap for data visualization, 2020. arXiv:2012.04456.
## Appendix A Metric spaces of an arbitrary size
In order to extend the lower bound for the input metrics of an arbitrary size
$n\geq\hat{n}=\Theta(1/\epsilon^{2})$, we use the notion of the metric
composition proposed in [9]. Given a metric space $X$, we compose it with
another metric space $Y$ of size ${n}/\lvert X\rvert$ by substituting each
point in $X$ with a copy of $Y$. The first observation is that in such
composition pairs of the points from different copies constitute a constant
fraction of all the points in the space. The second observation is that,
loosely speaking, the average error over these pairs is, up to a constant, the
average of average errors over different ”copies” of $X$ in the composition.
The following lemma is a variant of a lemma that appeared in [8]:
###### Lemma A.1.
Let $(X,d_{X})$ be a metric space of size $\lvert X\rvert=\hat{n}>1$, and let
$(Y,d_{Y})$ be a metric space. Assume that $\alpha>0$ is such that for any
embedding $f:X\to Y$ it holds that
$\mathop{\mbox{$Energy$}}_{q}(f)\geq\alpha$. For any ${n}\geq\hat{n}$ there is
a metric space $Z$ of size $\lvert Z\rvert=\Theta({n})$ such that any
embedding $F:Z\to Y$ has $\mathop{\mbox{$Energy$}}_{q}(F)\geq\alpha/2$.
Moreover, if $X$ is a Euclidean subset then there is an embedding from $Z$
into a finite dimensional Euclidean space with distortion $1+\delta$ for any
$\delta>0$.
We prove the lemma here for completeness. We start with the definition of the composition of metric spaces given in [9]:
###### Definition A.1.
Let $(S,d_{S})$, $(T,d_{T})$ be finite metric spaces. For any $\beta\geq 1/2$,
the $\beta$-composition of $S$ with $T$, denoted by $Z=S_{\beta}[T]$, is a
metric space of size $|Z|=|S|\cdot|T|$ constructed in the following way. Each
point $u\in S$ is substituted with a copy of the metric space $T$, denoted by
$T^{(u)}$. Let $u,v\in S$, and $z_{i}\neq z_{j}\in Z$, such that $z_{i}\in
T^{(u)}$, and $z_{j}\in T^{(v)}$, then
$d_{Z}(z_{i},z_{j})=\begin{cases}\frac{1}{\gamma}\frac{1}{\beta}\cdot
d_{T}(z_{i},z_{j}),&u=v\\\ d_{S}(u,v),&u\neq v\\\ \end{cases}$
where $\gamma=\frac{\max_{t\neq t^{\prime}\in
T}\\{d_{T}(t,t^{\prime})\\}}{\min_{s\neq s^{\prime}\in
S}\\{d_{S}(s,s^{\prime})\\}}$.
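In matrix form, the definition amounts to a simple block construction. The following is a minimal sketch of ours (assuming symmetric distance matrices with zero diagonal; names are illustrative):

```python
import numpy as np

def beta_composition(DS, DT, beta):
    # Distance matrix of Z = S_beta[T] from the distance matrices of S and T.
    nS, nT = len(DS), len(DT)
    gamma = DT[DT > 0].max() / DS[DS > 0].min()   # diam(T) / min distance in S
    DZ = np.kron(DS, np.ones((nT, nT)))           # cross-copy blocks: d_S(u, v)
    for u in range(nS):                           # within-copy blocks: scaled T
        DZ[u * nT:(u + 1) * nT, u * nT:(u + 1) * nT] = DT / (gamma * beta)
    return DZ
```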
###### Proof.
Given any ${n}\geq\hat{n}$ let $m=\lceil\frac{n}{\hat{n}}\rceil$, and let $T$
be any $m$-point metric space. For any $\beta\geq 1/2$ let $Z$ be the
$\beta$-metric composition of $X$ with $T$ (note that the choice of $T$ is
arbitrary), and let $N=|Z|=\hat{n}m=\Theta({n})$. Let $F:Z\rightarrow Y$ be
any embedding, and consider the set of pairs $B\subseteq{\binom{Z}{2}}$,
$B=\\{(z_{i},z_{j})|z_{i}\in T^{(u)},z_{j}\in T^{(v)},\forall u\neq v\in
X\\}$. Then, $|B|=m^{2}\cdot{\binom{\hat{n}}{2}}$. Let $q\geq 1$, and note
that for all $z_{i}\neq z_{j}\in Z$ it holds that
$\lvert{\mathop{\mbox{$expans$}}}_{F}(z_{i},z_{j})-1\rvert\geq 0$. Then
${({\mathop{\mbox{$Energy$}}}_{q}(F))}^{q}\geq\frac{1}{\binom{N}{2}}\sum\limits_{z_{i}\neq z_{j}\in B}{\lvert{\mathop{\mbox{$expans$}}}_{F}(z_{i},z_{j})-1\rvert}^{q}\geq\frac{1}{2}\cdot\frac{1}{m^{2}\binom{\hat{n}}{2}}\sum\limits_{z_{i}\neq z_{j}\in B}{\lvert{\mathop{\mbox{$expans$}}}_{F}(z_{i},z_{j})-1\rvert}^{q}.$
Let $\mathcal{X}$ denote the family of all possible $\hat{n}$-point subsets ${\bar{X}}\subset Z$, where each point of ${\bar{X}}$ is chosen from exactly one of the copies $T^{(x_{1})},T^{(x_{2})},\ldots,T^{(x_{\hat{n}})}$. Namely,
each ${\bar{X}}$ is a metric space isometric to $X$. Let $F|_{{\bar{X}}}$
denote the embedding $F$ restricted to the points of ${\bar{X}}$. The size of
the family $|\mathcal{X}|=m^{\hat{n}}$, and it holds that
$\displaystyle\frac{1}{m^{\hat{n}}}\sum\limits_{{\bar{X}}\in\mathcal{X}}{({\mathop{\mbox{$Energy$}}}_{q}(F|_{{\bar{X}}}))}^{q}$
$\displaystyle=\frac{1}{m^{\hat{n}}}\sum\limits_{{\bar{X}}\in\mathcal{X}}\frac{1}{\binom{\hat{n}}{2}}\sum\limits_{u\neq v\in{\bar{X}}}{\lvert{\mathop{\mbox{$expans$}}}_{F}(u,v)-1\rvert}^{q}$
$\displaystyle=\frac{1}{m^{\hat{n}}}\frac{1}{\binom{\hat{n}}{2}}\sum\limits_{z_{i}\neq z_{j}\in B}m^{\hat{n}-2}\cdot{\lvert{\mathop{\mbox{$expans$}}}_{F}(z_{i},z_{j})-1\rvert}^{q}$
$\displaystyle=\frac{1}{\binom{\hat{n}}{2}m^{2}}\cdot\sum\limits_{z_{i}\neq z_{j}\in B}{\lvert{\mathop{\mbox{$expans$}}}_{F}(z_{i},z_{j})-1\rvert}^{q}.$
By the assumption it holds that ${({\mathop{\mbox{$Energy$}}}_{q}(F|_{{\bar{X}}}))}^{q}\geq\alpha^{q}$ for every ${\bar{X}}\in\mathcal{X}$; combining the last two displays yields ${({\mathop{\mbox{$Energy$}}}_{q}(F))}^{q}\geq\alpha^{q}/2$, and hence $\mathop{\mbox{$Energy$}}_{q}(F)\geq\alpha/2^{1/q}\geq\alpha/2$.
Note that the bound on the Energy does not depend on the value of $\beta$. It was shown in [9] (Proposition $2.12$) that if $X$ and $T$ are both Euclidean subsets, then their composition $Z=X_{\beta}[T]$ embeds into a finite dimensional Euclidean space with distortion $1+\delta$, for $\beta=O\left(1/\delta\right)$. This completes the proof of the lemma. ∎
The lemma implies that in order to obtain a family of metric spaces of any size $\Theta(n)$, it is enough to compose the metric spaces ${\rm I}$ in the family $\mathcal{P}$, of size $\lvert I\rvert=6l=\hat{n}$, with, for example, an equilateral metric space on $\lceil n/(6l)\rceil$ points.
## Appendix B More additive distortion measures
In this section we prove Theorem 1.4 for the additive distortion measures. We
will use some of the observations about the basic relations between the
measures made in [8]:
###### Claim B.1.
For an embedding $f:X\to Y$, for any $r\geq 1$, there is an embedding
$f^{\prime}:X\to Y$ such that
$\sigma_{1,r}(f)=\mathop{\mbox{$Energy$}}_{1}(f^{\prime})$.
###### Claim B.2.
For an embedding $f:X\to Y$ there is an embedding $f^{\prime}:X\to Y$ such
that ${\rm Stress}_{1}(f^{\prime})\leq 4\cdot{\rm Stress^{*}}_{1}(f)$.
Together with Claim 1.1, these imply that
###### Corollary B.1.
In order to show a lower bound for $\ell_{1}$-distortion,
${\mathop{\mbox{$REM$}}_{1}}$ and $\sigma_{1,r}$ it is enough to lower bound
${\mathop{\mbox{$Energy$}}}_{1}$. In order to show a lower bound for ${\rm
Stress^{*}}_{1}$ it is enough to lower bound ${\rm Stress}_{1}$.
We are ready now to prove Theorem 1.4. We restate it here for convenience:
###### Theorem B.3.
Given any integer $n$ and $\Omega(\frac{1}{\sqrt{n}})<\epsilon<1$, there
exists a $\Theta(n)$-point subset of Euclidean space such that any embedding
of it into $\ell_{2}^{k}$ with any of $\mathop{\mbox{$Energy$}}_{1}$,
$Stress_{1}$, $Stress^{*}_{1}$, $REM_{1}$ or $\sigma$-distortion bounded above
by $\epsilon$ requires $k=\Omega(1/\epsilon^{2})$.
###### Proof.
We already proved the theorem for ${\mathop{\mbox{$Energy$}}}_{1}$; therefore, by Corollary B.1, it remains to prove it for ${\rm Stress}_{1}$. First, note that for any embedding $f:X\to Y$ it holds that
$Stress_{1}(f)=\frac{\sum_{u\neq v\in
X}|\hat{d}_{uv}-d_{uv}|}{S[X]}=\frac{1}{S[X]/{\binom{\lvert
X\rvert}{2}}}\frac{\sum_{u\neq v\in
X}d_{uv}\lvert{\mathop{\mbox{$expans$}}}_{f}(u,v)-1\rvert}{\binom{\lvert
X\rvert}{2}},$
where $S[X]=\sum_{u\neq v\in X}d_{uv}$. We define
$\overline{Stress}_{1}(f):=\frac{\sum_{u\neq v\in
X}d_{uv}\lvert{\mathop{\mbox{$expans$}}}_{f}(u,v)-1\rvert}{\binom{\lvert
X\rvert}{2}}.$
Observe that if $X$ is such that $S[X]/{\binom{\lvert X\rvert}{2}}\leq c$ for a constant $c>0$, then $Stress_{1}(f)=\Omega(\overline{Stress}_{1}(f))$.
Therefore, it is enough to show that there is a metric space (of an arbitrary
size $\Theta(n)$) with at most constant average distance on which the lower
bound is obtained for $\overline{Stress}_{1}$. We note that the composition
Lemma A.1 also works for constructing arbitrary size metrics for
$\overline{Stress}$ notion.
Therefore, we show that there is a metric space ${\rm I}$ of size
$\Theta(1/\epsilon^{2})$ such that if any embedding $f:I\to\ell_{2}^{k}$ has
$\overline{Stress}_{1}(f)\leq\epsilon$ then $k=\Omega(1/\epsilon^{2})$. In
addition, ${\rm I}$ is such that $S[I]/{\binom{\lvert
I\rvert}{2}}\leq\sqrt{2}$. Since a metric space obtained by the composition in
Claim A.1 has the same diameter as the base space (${\rm I}$ in our case),
this will complete the proof of the lower bound on an arbitrary size example,
and its embedding in Euclidean space would increase this bound further by at
most an extra $1+\delta$ factor, for arbitrary $\delta>0$.
We argue that a slight variation on the proof in Section 2 for
${\mathop{\mbox{$Energy$}}}_{1}$ works for ${\overline{Stress}_{1}}$ as well.
The initial assumption (2.0.1) should be changed accordingly to:
${\overline{Stress}}_{1}(f)=\frac{1}{\big{\lvert}\binom{I}{2}\big{\rvert}}\sum_{(u,v)\in\binom{I}{2}}\left\|u-v\right\|_{2}{\big{\lvert}{\mathop{\mbox{$expans$}}}_{f}(u,v)-1\big{\rvert}}\leq\epsilon.$
(B.0.1)
So condition (2.0.2) defining $\hat{o}\in O$ becomes: $\hat{o}$ is a point satisfying
$\frac{1}{|I|-1}\sum_{v\in
I,v\neq\hat{o}}\left\|v-\hat{o}\right\|_{2}{\big{\lvert}{\mathop{\mbox{$expans$}}}_{f}(\hat{o},v)-1\big{\rvert}}\leq
3\epsilon.$ (B.0.2)
As before, assume without loss of generality that $f(\hat{o})=0$. Let
$\hat{I}$ be the set of all $v\in I$ such that
$\left\|v-\hat{o}\right\|_{2}{\big{\lvert}{\mathop{\mbox{$expans$}}}_{f}(\hat{o},v)-1\big{\rvert}}\leq\frac{3\epsilon}{\alpha}$.
By Markov’s inequality,
$\lvert\hat{I}\rvert\geq(1-\alpha)\big{\lvert}I\big{\rvert}$. We have that for
all $v\in{\hat{\rm I}}$,
$\left\|v-\hat{o}\right\|_{2}\lvert{\mathop{\mbox{$expans$}}}_{f}(v,\hat{o})-1\rvert=\lvert\left\|f(v)\right\|_{2}-\left\|v-\hat{o}\right\|_{2}\rvert\leq\frac{3\epsilon}{\alpha}$,
and using
$\left\|v-\hat{o}\right\|_{2}\leq\left\|v\right\|_{2}+\left\|\hat{o}\right\|_{2}\leq
1+\epsilon/100$, so that $\left\|f(v)\right\|_{2}\leq 1+\epsilon/100+\frac{3\epsilon}{\alpha}\leq 1+\frac{3.01\epsilon}{\alpha}$, implying that $f(v)\in B_{2}\left(1+\frac{3.01\epsilon}{\alpha}\right)$.
The main change is in the estimations made in Lemma 2.1 using that for all
$(u,v)\in\binom{\hat{I}}{2}$:
$\displaystyle\big{\lvert}\langle f(u),f(v)\rangle-\langle
u,v\rangle\big{\rvert}$ $\displaystyle\leq$
$\displaystyle\frac{1}{2}\left[\big{\lvert}\left\|f(u)\right\|_{2}^{2}-\left\|u\right\|_{2}^{2}\big{\rvert}+\big{\lvert}\left\|f(v)\right\|_{2}^{2}-\left\|v\right\|_{2}^{2}\big{\rvert}\right]$
$\displaystyle+$
$\displaystyle\frac{1}{2}\left[\big{\lvert}\left\|f(u)-f(v)\right\|_{2}^{2}-\left\|u-v\right\|_{2}^{2}\big{\rvert}\right].$
Using the bounds we have shown for each term we get:
$\displaystyle\big{\lvert}\left\|f(u)\right\|_{2}^{2}-\left\|u\right\|_{2}^{2}\big{\rvert}$
$\displaystyle\leq$
$\displaystyle\left\|u-\hat{o}\right\|_{2}\lvert{\mathop{\mbox{$expans$}}}_{f}(u,\hat{o})-1\rvert(\left\|f(u)\right\|_{2}+\left\|u-\hat{o}\right\|_{2})+\left\|\hat{o}\right\|_{2}(\left\|u-\hat{o}\right\|_{2}+\left\|u\right\|_{2})$
$\displaystyle\leq$
$\displaystyle\left(2+\frac{1}{9\alpha}\right)\left\|u-\hat{o}\right\|_{2}\lvert{\mathop{\mbox{$expans$}}}_{f}(u,\hat{o})-1\rvert+\frac{\epsilon}{40}.$
Similarly,
$\displaystyle\big{\lvert}\left\|f(u)-f(v)\right\|_{2}^{2}-\left\|u-v\right\|_{2}^{2}\big{\rvert}$
$\displaystyle\leq$
$\displaystyle\left\|u-v\right\|_{2}\lvert{\mathop{\mbox{$expans$}}}_{f}(u,v)-1\rvert(\left\|f(u)\right\|_{2}+\left\|f(v)\right\|_{2}+\left\|u-v\right\|_{2})$
$\displaystyle\leq$
$\displaystyle\left(2\left(1+\frac{3.01\epsilon}{\alpha}\right)+\sqrt{2}\right)\left\|u-v\right\|_{2}\lvert{\mathop{\mbox{$expans$}}}_{f}(u,v)-1\rvert$
$\displaystyle\leq$
$\displaystyle\left(5+\frac{1}{4\alpha}\right)\left\|u-v\right\|_{2}\lvert{\mathop{\mbox{$expans$}}}_{f}(u,v)-1\rvert.$
The rest of the proof carries over exactly as in Lemma 2.1, with the terms of the form $\lvert{\mathop{\mbox{$expans$}}}_{f}(u,v)-1\rvert$ replaced by $\left\|u-v\right\|_{2}\lvert{\mathop{\mbox{$expans$}}}_{f}(u,v)-1\rvert$.
Recall that each ${\rm I}\in\mathcal{P}$ has ${\rm diam}({\rm I})\leq\sqrt{2}$, so that this bound applies to $S[I]/{\binom{\lvert I\rvert}{2}}$ as well, which completes the proof of the theorem.
∎
# Dynamics of large oscillator populations with random interactions
Arkady Pikovsky Institute of Physics and Astronomy, University of Potsdam,
Karl-Liebknecht-Str. 24/25, 14476 Potsdam-Golm, Germany Lev A. Smirnov
Department of Control Theory, Lobachevsky State University of Nizhny Novgorod,
23 Gagarin Avenue, Nizhny Novgorod 603022, Russia
###### Abstract
We explore large populations of phase oscillators interacting via random
coupling functions. Two types of coupling terms, the Kuramoto-Daido coupling
and the Winfree coupling, are considered. Under the assumption of statistical
independence of the phases and the couplings, we derive reduced averaged
equations with effective non-random coupling terms. As a particular example,
we study interactions that have the same shape but possess random coupling
strengths and random phase shifts. While randomness in coupling strengths just
renormalizes the interaction, a distribution of the phase shifts in coupling
reshapes the coupling function.
oscillatory ensembles, synchronization, disorder, random systems
> Populations of globally coupled oscillators appear in different fields of
> physics, engineering, and life sciences. In many situations, there is
> disorder in the coupling, and the coupling terms are not identical but vary,
> for example, due to different phase shifts and coupling strengths. We show
> that for large ensembles, one can effectively average over the disorder and
> describe the synchronization transition via simpler averaged equations.
## I Introduction
Collective synchronization in oscillator populations has attracted much interest in recent decades [1, 2, 3, 4]. While the phenomenon is well understood in a
regular situation, the influence of disorder remains a subject of intensive
current studies. The disordered case is relevant for many applications,
especially in neuroscience, where in the description of the correlated
activity of neurons [5, 6, 7], one can hardly assume the neurons themselves to
be identical and the coupling between them to be uniform.
Randomness in the form of quenched (time-independent) disorder can appear in
different forms. Already in the original paper by Kuramoto [2], a distribution
of the natural frequencies of the oscillators was taken into account. In further studies, effects of pairwise disorder in couplings have been described by a random matrix of coupling strengths [8, 9, 10]. In this paper, we consider
another type of disorder in interactions, which is maximal in some sense: we
assume that all the pairwise coupling terms are different, taken from some
random distribution of random functions. In particular, the coupling functions
can have different shapes: some possess just one harmonic of the phases or of
the phase differences, and some can be more complex with many harmonics. Such
a setup accounts for a maximal heterogeneity of coupled units.
In the numerical examples in this paper, we restrict ourselves to a subclass where the coupling functions have the same shape but differ due to random coupling strengths and random phase shifts in the coupling [11]. Phase shifts in the
coupling lead to frustration, which can be understood as follows.
Synchronization is always possible for two coupled oscillators with different
phase shifts in the coupling terms. In this synchronous regime, the
frequencies of the oscillators coincide, but the phases are not equal; there
is some difference in the phases dependent on the phase shifts in the
coupling. If many oscillators interact with different phase shifts, optimal
phase relations in pairs may become conflicting, and the main problem is how
this frustration affects global synchrony. In a recent short communication
[12], we argued that in the thermodynamic limit of many coupled oscillators,
one can perform averaging over random phase shifts, which results in a new
effective coupling function. This paper extends this approach to different
setups and illustrates it with numerical simulations.
The paper is organized as follows. In Section II, we formulate different
setups for the phase dynamics of coupled oscillators and rotators. In Section
III we present our main theoretical result about reducing the disordered
situation to an effective regular averaged coupling function. A particular
situation where randomness is in the coupling strengths and in the phase
shifts is considered in Section IV. The validity of this reduction is
illustrated by numerical examples in Section V. In Section VI, we show how to
treat systems with partial disorder, where random phase shifts are attributed
to a driving or to a driven unit. We conclude with a discussion in Section
VII.
## II Different regular setups for coupled phase oscillators
This section describes several popular models of interacting phase oscillators without randomness in the coupling. For our purposes, it is important to separate the individual dynamics from the coupling terms, which we denote “c.t.”.
### II.1 Winfree and Kuramoto-Daido coupling terms
The “standard” model describes the individual phase dynamics of an oscillator
as rotations with a natural frequency $\omega_{k}$, possibly with individual
noises $\sigma\xi_{k}(t)$:
$\dot{\varphi}_{k}=\omega_{k}+\sigma\xi_{k}(t)+\text{c.t.}\;.$ (1)
The model (1) is most popular because it can be directly derived for generic
coupled oscillators. However, in the literature, several similar particular
setups have been discussed. The model of coupled active rotators [13, 14, 15]
includes a non-uniform rotation of the variable $\varphi_{k}$ (which is thus
not interpreted as the oscillator phase):
$\dot{\varphi}_{k}=\omega_{k}-\nu\sin\varphi_{k}+\sigma\xi_{k}(t)+\text{c.t.}\,.$
(2)
Another case is where $\varphi_{k}$ is an angle variable (or the difference of
the phases of macroscopic wave functions for Josephson junctions) with a
second-order in time dynamics [16, 17, 18, 19] (recently, this model became
popular in the context of modeling power grids [20, 21]):
$\mu\ddot{\varphi}_{k}+\dot{\varphi}_{k}=\omega_{k}+\sigma\xi_{k}(t)+\text{c.t.}\,.$
(3)
Next, we specify the coupling terms “c.t.” according to the Winfree and the
Kuramoto-Daido approaches. In the Winfree model, the action on the oscillator
$k$ from the oscillator $j$ is proportional to the product
$S(\varphi_{k})Q(\varphi_{j})$, where $S(\varphi_{k})$ is the phase
sensitivity function of the unit $k$, and $Q(\varphi_{j})$ describes the force
with which the element $j$ is acting. The latter is usually renormalized
according to the number $N$ of interacting oscillators to have a well-defined
thermodynamic limit. The full Winfree-type coupling term reads [1, 22, 4]
$\frac{1}{N}S(\varphi_{k})\sum_{j}Q(\varphi_{j})\,.$ (4)
The Winfree model can be directly derived from the original equations
governing the oscillator dynamics, in the first order in the small parameter
describing the coupling strength [22, 4].
Further usage of this small parameter allows for a reduction of the Winfree
model (4) to the Kuramoto-Daido model. Because the coupling is weak, the phase
dynamics is represented by a fast rotation with frequency $\omega_{k}$ and a
slow variation due to the coupling. Because only the slow dynamics is
responsible for large deviations of the phases, one can perform averaging over
fast oscillations, keeping only the components with slow phase dependencies in
the coupling term. If the frequencies of the oscillators are nearly equal,
then the slow combinations of the phases are $\sim(\varphi_{j}-\varphi_{k})$.
Thus, keeping only slow terms yields the coupling term in the Kuramoto-Daido
form
$\frac{1}{N}\sum_{j}F(\varphi_{j}-\varphi_{k})\,.$ (5)
### II.2 Formulation in terms of Kuramoto-Daido order parameters
Here, we show how the Kuramoto-Daido and the Winfree coupling functions can be
reformulated in terms of the Kuramoto-Daido order parameters. The latter are
defined as
$Z_{m}=\left\langle
e^{\mathrm{i}m\varphi_{j}}\right\rangle=\frac{1}{N}\sum_{j}e^{\mathrm{i}m\varphi_{j}}\,.$
(6)
We start with the Kuramoto-Daido-type coupling (5) and represent the
$2\pi$-periodic coupling function $F(x)$ as a Fourier series
$F(x)=\sum_{m}f_{m}e^{\mathrm{i}mx}\,,\qquad
f_{m}=\\!\frac{1}{2\pi}\int_{0}^{2\pi}\\!\\!dx\,F(x)e^{-\mathrm{i}mx}\,.$ (7)
Substituting this in the coupling term in (5) we obtain, using expression (6),
$\frac{1}{N}\sum_{j}F(\varphi_{j}-\varphi_{k})=\sum_{m}f_{m}e^{-\mathrm{i}m\varphi_{k}}\frac{1}{N}\sum_{j}e^{\mathrm{i}m\varphi_{j}}=\sum_{m}f_{m}e^{-\mathrm{i}m\varphi_{k}}Z_{m}\;.$
(8)
Similarly, for the Winfree case (4) we can represent the coupling functions
$Q(x)$ and $S(x)$ as Fourier series
$\begin{gathered}S(x)\\!=\\!\sum_{m}s_{m}e^{\mathrm{i}mx}\,,\qquad
s_{m}\\!=\\!\frac{1}{2\pi}\int_{0}^{2\pi}\\!\\!dx\,S(x)e^{-\mathrm{i}mx}\,,\\\
Q(x)=\sum_{l}q_{l}\,e^{\mathrm{i}lx}\,,\qquad
q_{l}=\frac{1}{2\pi}\int_{0}^{2\pi}\\!\\!dx\,Q(x)\,e^{-\mathrm{i}lx}\,.\end{gathered}$
(9)
Substitution in Eq. (4) allows for a representation of the coupling term as
$\frac{1}{N}\sum_{j}\sum_{m}s_{m}e^{\mathrm{i}m\varphi_{k}}\sum_{l}q_{l}e^{\mathrm{i}l\varphi_{j}}\\!=\\!\sum_{m}\sum_{l}s_{m}e^{\mathrm{i}m\varphi_{k}}\,q_{l}\left[\frac{1}{N}\sum_{j}e^{\mathrm{i}l\varphi_{j}}\right]\\!=\\!\sum_{m}s_{m}e^{\mathrm{i}m\varphi_{k}}\sum_{l}q_{l}Z_{l}\;.$
(10)
Below, we will use expressions (8) and (10) as “templates” for identifying the
effective coupling functions in the case of random interactions.
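As a concrete illustration of template (8), the coupling terms of all $N$ oscillators can be evaluated from a few order parameters instead of the $N^{2}$ pairwise terms. The following is a sketch of ours (not code from this paper; names are illustrative):

```python
import numpy as np

def coupling_terms(phi, f):
    # Kuramoto-Daido coupling term (8) for all oscillators at once.
    # phi: array of N phases; f: dict {m: f_m} of Fourier modes of F.
    ct = np.zeros(len(phi), dtype=complex)
    for m, fm in f.items():
        Z_m = np.mean(np.exp(1j * m * phi))       # Kuramoto-Daido order parameter (6)
        ct += fm * np.exp(-1j * m * phi) * Z_m    # f_m e^{-i m phi_k} Z_m
    return ct.real

# Classical Kuramoto coupling F(x) = sin(x), i.e., f_{1} = 1/(2i), f_{-1} = -1/(2i).
phi = np.random.default_rng(1).uniform(0, 2 * np.pi, 1000)
ct = coupling_terms(phi, {1: 1 / 2j, -1: -1 / 2j})
```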
## III General random coupling functions
We start with the maximally disordered setup, where all pairwise coupling
functions are random. We consider first the Kuramoto-Daido-type coupling (5)
and rewrite it assuming that all the coupling terms are generally different
$\frac{1}{N}\sum_{j}F_{jk}(\varphi_{j}-\varphi_{k})\,.$ (11)
We use a Fourier representation analogous to (7):
$F_{jk}(x)=\sum_{m}f_{m,jk}e^{\mathrm{i}mx}\;,\qquad
f_{m,jk}=\frac{1}{2\pi}\int_{0}^{2\pi}dxF_{jk}(x)e^{-\mathrm{i}mx}\,.$ (12)
Here, we treat the complex Fourier coefficients $f_{m,jk}$ as random numbers
with some distribution. The randomness in coupling is a quenched (i.e., time-
independent) disorder.
We now perform the averaging of the coupling (11). First, we substitute (12)
in (11)
$\frac{1}{N}\sum_{j}\sum_{m}f_{m,jk}e^{\mathrm{i}(m\varphi_{j}-m\varphi_{k})}=\sum_{m}e^{-\mathrm{i}m\varphi_{k}}\left[\frac{1}{N}\sum_{j}f_{m,jk}e^{\mathrm{i}m\varphi_{j}}\right].$
(13)
We identify the term in the square brackets as the population average $\left\langle f_{m,jk}e^{\mathrm{i}m\varphi_{j}}\right\rangle$. Next, we assume statistical independence of the phases and the Fourier coefficients $f_{m,jk}$. We expect this independence to be valid for a large population, where many different couplings influence each phase. This assumption allows us to represent this average as $\left\langle f_{m,jk}\right\rangle\left\langle e^{\mathrm{i}m\varphi_{j}}\right\rangle=\left\langle f_{m,jk}\right\rangle Z_{m}$. As a result, we obtain the coupling term
$\sum_{m}\left\langle f_{m,jk}\right\rangle
e^{-\mathrm{i}m\varphi_{k}}Z_{m}\,.$ (14)
Comparing this expression with (8), we conclude that the coupling is described
with an effective deterministic coupling function, Fourier modes of which are
just $\left\langle f_{m,jk}\right\rangle$.
In the case of the Winfree-type coupling (4), general randomness means that
all the phase sensitivity functions $S_{jk}(x)$ and forcing functions
$Q_{jk}(x)$ are different:
$\frac{1}{N}\sum_{j}S_{jk}(\varphi_{k})Q_{jk}(\varphi_{j})\;.$ (15)
Again, we represent these functions via random complex Fourier coefficients
$s_{m,jk}$ and $q_{l,jk}$
$\begin{gathered}S_{jk}(x)\\!=\\!\sum_{m}s_{m,jk}e^{\mathrm{i}mx}\,,\qquad
s_{m,jk}\\!=\\!\frac{1}{2\pi}\int_{0}^{2\pi}\\!\\!dx\,S_{jk}(x)e^{-\mathrm{i}mx}\,,\\\
Q_{jk}(x)=\sum_{l}q_{l,jk}\,e^{\mathrm{i}lx}\,,\qquad
q_{l,jk}=\frac{1}{2\pi}\int_{0}^{2\pi}\\!\\!dx\,Q_{jk}(x)\,e^{-\mathrm{i}lx}\,.\end{gathered}$
(16)
Substituting (16) in (15) and assuming statistical independence of the phases
and the Fourier coefficients, we arrive at an effective deterministic coupling
$\sum_{m}\left\langle s_{m,jk}\right\rangle
e^{\mathrm{i}m\varphi_{k}}\sum_{l}\left\langle q_{l,jk}\right\rangle Z_{l}\;.$
(17)
Comparison of this expression with (10) shows that the Fourier modes of the
effective phase sensitivity function and of the forcing are just the averaged
random Fourier modes.
The last remark is that because the Fourier transform is a linear operation,
averaging the Fourier coefficients is the same as averaging the functions.
Thus, the effective averaged coupling function in the Kuramoto-Daido case (11)
is
$\frac{1}{N}\sum_{j}F_{jk}(\varphi_{j}-\varphi_{k})\;\Rightarrow\;\frac{1}{N}\sum_{j}\mathcal{F}(\varphi_{j}-\varphi_{k})=\frac{1}{N}\sum_{j}\left\langle
F_{jk}(\varphi_{j}-\varphi_{k})\right\rangle\;.$ (18)
For the random Winfree coupling (15), we have
$\begin{gathered}\frac{1}{N}\sum_{j}S_{jk}(\varphi_{k})Q_{jk}(\varphi_{j})\;\Rightarrow\;\mathcal{S}(\varphi_{k})\frac{1}{N}\sum_{j}\mathcal{Q}(\varphi_{j})\;,\\\
\mathcal{S}(\varphi_{k})=\left\langle
S_{jk}(\varphi_{k})\right\rangle,\quad\mathcal{Q}(\varphi_{j})=\left\langle
Q_{jk}(\varphi_{j})\right\rangle\;.\end{gathered}$ (19)
## IV Randomness in coupling strengths and in the phase shifts
The case of general randomness of interactions, presented in Section III,
includes a situation where different interactions have different shapes. For
example, some oscillators can be coupled via the first harmonic coupling
function, while others are coupled with the second harmonic coupling function.
A particular situation is one where all the shapes are the same, but the
interactions differ in their phase shifts and the coupling strengths. For the
Kuramoto-Daido couplings (11), this means that
$F_{jk}(\varphi_{j}-\varphi_{k})=A_{jk}F(\varphi_{j}-\varphi_{k}-\alpha_{jk})\,.$
(20)
Here, random numbers $A_{jk}$ describe different random coupling strengths,
and $\alpha_{jk}$ are random phase shifts. For simplicity of presentation, we
assume statistical independence of $A_{jk}$ and $\alpha_{jk}$. Then, the
random coupling (20) is described by two distributions. We denote the
distribution of the coupling strengths $U(A)$ and the distribution of the
phase shifts $G(\alpha)$. This latter distribution is defined on a circle and
can be represented by the Fourier series:
$G(\alpha)=\frac{1}{2\pi}\sum_{m}g_{m}e^{\mathrm{i}m\alpha}\,,\quad
g_{m}=\\!\int_{0}^{2\pi}\\!\\!d\alpha\,G(\alpha)e^{-\mathrm{i}m\alpha}\,.$
(21)
Such pairwise phase shifts naturally appear if there are time delays for the
pairwise couplings. As has been demonstrated in Ref. 23, a phase shift
$\alpha$ is determined in the leading order in coupling strength as follows:
$\alpha=\omega\tau$, where $\tau$ is the signal propagation time and $\omega$
is the characteristic frequency (e.g., a median frequency if the natural
frequencies are different). Remarkably, phase shifts like in (20) naturally
appear in the description of power grids (cf. Eq. (3) in Ref. 21).
Applying general expression (18) to this particular case, we obtain the
effective coupling function as
$\mathcal{F}(x)=\left\langle
A\right\rangle\int_{0}^{2\pi}F(x-\alpha)G(\alpha)\;d\alpha\,,$ (22)
where $\left\langle A\right\rangle=\int dA\,A\,U(A)$. One can see that the
randomness of coupling strengths renormalizes the total coupling strength, but
does not influence the shape of the coupling function. In contradistinction,
the randomness of the phase shifts changes the form of the coupling function
because of the convolution operator in (22). The best way to see this is to
return to the Fourier representation, where the effect of the randomness in
the phase shifts reduces to a factorization of the Fourier modes
$f_{m}\;\Rightarrow\;f_{m}g_{m}\;.$ (23)
One can see that some modes in the coupling function can even disappear if the
corresponding factors $g_{m}$ vanish. We will explore such cases in Section V
below.
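The factorization (23) can be checked numerically: convolving a multi-harmonic coupling function with a one-mode phase-shift density multiplies its Fourier modes by $g_{m}$ and suppresses all other harmonics. A minimal sketch follows; the coupling function from (29) and the one-mode density of the form (28) with $M=2$ are used as illustrative inputs, and the grid size is arbitrary.

```python
# Fourier-mode factorization of Eqs. (22)-(23): effective modes are f_m * g_m.
import numpy as np

L = 4096
x = np.linspace(0.0, 2.0 * np.pi, L, endpoint=False)
F = np.sin(x) + 2.0 * np.sin(2.0 * x) + 3.0 * np.sin(3.0 * x)
M = 2
G = (1.0 + np.cos(M * x)) / (2.0 * np.pi)   # one-mode density, cf. (28)

f_m = np.fft.fft(F) / L                     # series coefficients of F, Eq. (7)
g_m = np.fft.fft(G) * (2.0 * np.pi / L)     # approximates the integral in (21)
F_eff = np.fft.ifft(f_m * g_m * L).real     # effective coupling function

# Only the M-th harmonic survives: g_0 = 1, g_M = 0.5, other modes vanish
print(np.round(np.abs(g_m[:5]), 3))         # -> [1. 0. 0.5 0. 0.]
print(np.round(np.max(np.abs(F_eff)), 3))   # -> 1.0: only sin(2x) survives
```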
In the context of the Winfree model (4), one can also restrict randomness to
coupling strengths and phase shifts so that the shapes of the phase
sensitivity function and of the forcing remain the same. We have, in general,
two phase shifts $\alpha_{jk}$ and $\beta_{jk}$, entering the phase
sensitivity function and the driving term, respectively, and coupling
strengths $A_{jk}$ (for simplicity of presentation we consider all the three
random quantities as statistically independent):
$\frac{1}{N}\sum_{j}A_{jk}S(\varphi_{k}-\alpha_{jk})Q(\varphi_{j}-\beta_{jk})\,.$
(24)
Of course, one of these phase shifts may be absent in a particular situation.
The distribution $B(\beta)$ of phase shifts $\beta$ is represented similarly
to (21):
$B(\beta)=\frac{1}{2\pi}\sum_{m}b_{m}e^{\mathrm{i}m\beta},\quad
b_{m}=\int_{0}^{2\pi}d\beta\,B(\beta)e^{-\mathrm{i}m\beta}\,.$ (25)
Now, we apply general expressions (19) to the case of randomness in phase
shifts and coupling strengths and obtain:
$\begin{gathered}\mathcal{S}(x)=\left\langle A\right\rangle\left\langle
S(x-\alpha_{jk})\right\rangle=\left\langle
A\right\rangle\int_{0}^{2\pi}S(x-\alpha)G(\alpha)\;d\alpha\;,\\\
\mathcal{Q}(y)=\left\langle
Q(y-\beta_{jk})\right\rangle=\int_{0}^{2\pi}Q(y-\beta)B(\beta)\;d\beta\;.\end{gathered}$
(26)
Here, we, somewhat arbitrarily, attributed the average coupling strength to
the phase sensitivity function. Again, like in the case of the Kuramoto-Daido-
type coupling, the coupling strengths do not influence the shape of the
functions, while the distributions of the phase shifts do reshape them. In
terms of Fourier modes, one has a factorization by the modes of the
distributions:
$s_{m}\;\Rightarrow\;s_{m}g_{m}\;,\qquad q_{l}\;\Rightarrow\;q_{l}b_{l}\;.$
(27)
## V Numerical examples
The theory above predicts that in the presence of random phase shifts in the
coupling, the effective coupling function is the convolution of the original
one with the phase shift distribution density. In terms of the Fourier modes,
one has a product of the modes of the original coupling function with the
Fourier modes of the distribution density of the phase shifts. The most
prominent effect appears if the original coupling function is rather complex
(has several harmonics), but the distribution of the phase shifts is simple,
possessing just one harmonic. For example, we take the density of the phase
shifts $G(\alpha)$ in the form ($M$ is an integer)
$G(\alpha)=\frac{1}{2\pi}(1+\cos M\alpha)\;.$ (28)
In the Fourier representation, this corresponds to one nontrivial Fourier mode
$g_{M}=0.5$. Correspondingly, only the $M$-th harmonic is present in the
effective coupling function with such a distribution of the phase shifts.
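For reproducing such simulations, phase shifts with the density (28) can be drawn by rejection sampling against the uniform density (whose ratio to (28) is bounded by 2). A small self-contained sketch; the sample size and seed are arbitrary choices.

```python
# Rejection sampling of phase shifts from G(a) = (1 + cos(M a)) / (2*pi).
import numpy as np

def sample_shifts(M, size, rng):
    """Draw shifts on [0, 2*pi) with density (1 + cos(M a)) / (2*pi)."""
    out = np.empty(size)
    n = 0
    while n < size:
        a = rng.uniform(0.0, 2.0 * np.pi, size)
        u = rng.uniform(0.0, 2.0, size)      # envelope: density ratio <= 2
        acc = a[u < 1.0 + np.cos(M * a)]     # accepted proposals
        take = min(size - n, acc.size)
        out[n:n + take] = acc[:take]
        n += take
    return out

rng = np.random.default_rng(1)
alpha = sample_shifts(M=1, size=100_000, rng=rng)
# The empirical first Fourier mode should approach g_1 = 0.5
print(np.mean(np.exp(-1j * alpha)))          # ~ 0.5 + 0j
```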
Figure 1: Estimations of the phase probability densities via histograms (200
bins are used) in model (29), (28) with $M=1$ (red lines), $M=2$ (green
lines), and $M=3$ (blue lines). These distributions are almost the same for
different system sizes $N$.
In Fig. 1, we show the distributions of the phases in globally coupled
populations of noisy identical phase oscillators ($\omega_{k}=0$) with the
Kuramoto-Daido coupling
$\dot{\varphi}_{k}=\sigma\xi_{k}(t)+\frac{1}{N}\sum_{j}F(\varphi_{j}-\varphi_{k}-\alpha_{jk}),\quad
F(x)=\sin(x)+2\sin(2x)+3\sin(3x)\;.$ (29)
We show the results for $\sigma=0.12$ and three distributions of type (28)
with $M=1,2,3$. One can see that in these cases, the densities possess the
corresponding $(2\pi/M)$-periodicities as functions of $\varphi$, confirming
that the phase shifts result in the effective one-mode coupling.
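A minimal Euler-Maruyama sketch of model (29) is given below. The population size, time step, and run length are illustrative choices, much smaller than in production runs, and we assume the same noise normalization as stated for Eq. (33), i.e. increment variance $2\sigma^{2}\,dt$ per step.

```python
# Euler-Maruyama integration of model (29) with random pairwise phase shifts
# alpha_jk drawn from the one-mode density (28); small N, for illustration.
import numpy as np

rng = np.random.default_rng(2)
N, sigma, dt, steps, M = 128, 0.12, 0.02, 5_000, 1

def F(x):
    return np.sin(x) + 2 * np.sin(2 * x) + 3 * np.sin(3 * x)

def sample_shifts(shape):
    # rejection sampling from G(a) = (1 + cos(M a)) / (2*pi)
    a = rng.uniform(0, 2 * np.pi, shape)
    bad = rng.uniform(0, 2, shape) >= 1 + np.cos(M * a)
    while bad.any():
        a[bad] = rng.uniform(0, 2 * np.pi, bad.sum())
        bad[bad] = rng.uniform(0, 2, bad.sum()) >= 1 + np.cos(M * a[bad])
    return a

alpha = sample_shifts((N, N))                # alpha[j, k], quenched disorder
phi = rng.uniform(0, 2 * np.pi, N)
for _ in range(steps):
    # coupling term (1/N) sum_j F(phi_j - phi_k - alpha_jk), row index k
    drive = F(phi[None, :] - phi[:, None] - alpha.T).mean(axis=1)
    # noise with autocorrelation 2*delta(t): increment variance 2*sigma^2*dt
    phi += dt * drive + sigma * np.sqrt(2 * dt) * rng.standard_normal(N)

print(abs(np.mean(np.exp(1j * phi))))        # |Z_1|: partial synchrony expected
```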
Figure 2: Dashed and dotted lines (order parameters
$\left\langle\left|Z_{1}\right|\right\rangle=\left\langle\left|\left\langle
e^{\mathrm{i}\varphi_{j}}\right\rangle\right|\right\rangle$ and
$\left\langle\left|Z_{2}\right|\right\rangle=\left\langle\left|\left\langle
e^{2\mathrm{i}\varphi_{j}}\right\rangle\right|\right\rangle$, respectively):
the behavior of an ensemble of noisy identical oscillators with Kuramoto-Daido
coupling in the form (31) with $K=1$ in dependence on the noise intensity
$\sigma^{2}$ in the absence of phase shifts in coupling. Markers: order
parameter $\left\langle\left|Z_{1}\right|\right\rangle$ in simulation of a
disordered population with the phase shifts $\alpha_{jk}$ sampled according to
(28) with $M=1$ ($N=500$: red pluses; $N=1000$: cyan circles, $N=2000$: brown
squares). Here and below, we also apply time averaging when calculating
complex mean fields to smooth out small-magnitude, non-regular fluctuations
caused by noise and finite-size effects. Solid green line: theoretical
prediction of the first order parameter $\left|Z_{1}\right|$ in the reduced
model according to Ref. 18. Figure 3: Panel (a): order parameter
$\left\langle\left|Z_{1}\right|\right\rangle$ for ensembles with random phase
shifts $\alpha_{jk}$ taken from the one-mode distribution (28) with $M=1$,
random natural frequencies $\omega_{k}$ drawn from the Cauchy distribution
(30) with $\Delta=1$, and coupling function (31). Dashed line: theoretical
prediction of the order parameter according to Ref. 2:
$|Z_{1}|=\sqrt{(K-4)/K}$. Panels (b, c): time-averaged
absolute values of the second and third circular cumulants
$\kappa_{2}\\!=\\!\left\langle\left|Z_{2}-Z_{1}^{2}\right|\right\rangle$ and
$\kappa_{3}\\!=\\!\left\langle\left|Z_{3}-3Z_{2}Z_{1}+2Z_{1}^{3}\right|\right\rangle$,
respectively; their small values confirm that the distribution of the phases
is a wrapped Cauchy distribution.
A more quantitative test can be performed if an exact analytical solution
exists for the reduced system. Such solutions are known for the Kuramoto-Daido
system with only the first harmonic in the coupling function, in the cases of
noisy identical oscillators [18] and for deterministic oscillators with a
Cauchy distribution of natural frequencies [2]:
$W(\omega)=W_{C}(\omega-\Omega),\qquad
W_{C}(\varpi)=\frac{\Delta}{\pi\left(\varpi^{2}+\Delta^{2}\right)}\,,$
(30)
where $\Omega$ is the mean value, and $\Delta$ is the parameter governing the
width of the distribution. In the next two examples, we fix $M=1$ in (28) and
take a two-harmonic original coupling function
$F(x)=K\bigl{(}\sin(x)+4\sin(2x)\bigr{)}\,.$ (31)
The effective coupling function is thus $\mathcal{F}(x)\\!=\\!0.5\,K\sin(x)$,
so that the analytical results mentioned are applicable. Figure 2 shows the
case of noisy oscillators. Figure 3 shows the case of deterministic
oscillators with the Cauchy distribution of the natural frequencies (the width
parameter of the Cauchy distribution is $\Delta\\!=\\!1$). In both cases, the
dynamics of a system with random phase shifts nicely corresponds to the
analytically predicted behavior of the reduced system. For the deterministic
case, there is an additional (to the comparison of the order parameter with
the theoretical prediction) indicator for the validity of the effective
coupling. As it follows from the Ott-Antonsen [24] solution of the problem,
the distribution of the phases in the thermodynamic limit $N\to\infty$ is a
wrapped Cauchy distribution. The most direct check of this prediction is the
calculation of the higher circular cumulants of the distribution [25], which
should vanish for the wrapped Cauchy distribution. We show the absolute values
of the second and the third cumulants in Fig. 3. Their values are indeed small
compared to the first cumulant (which is the Kuramoto order parameter
$Z_{1}$), and this smallness even improves as the size of the population
grows.
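The circular-cumulant check is straightforward to implement from the definitions quoted in the caption of Fig. 3. The following helper also verifies, on a synthetic wrapped Cauchy sample with an arbitrarily chosen width, that $\kappa_{2}$ and $\kappa_{3}$ indeed vanish.

```python
# Circular cumulants kappa_2 = Z_2 - Z_1^2, kappa_3 = Z_3 - 3 Z_2 Z_1 + 2 Z_1^3,
# which vanish for a wrapped Cauchy distribution (Z_m = e^{-m*gamma}).
import numpy as np

def circular_cumulants(phi):
    Z = [np.mean(np.exp(1j * m * phi)) for m in (1, 2, 3)]
    k1 = Z[0]
    k2 = Z[1] - Z[0] ** 2
    k3 = Z[2] - 3.0 * Z[1] * Z[0] + 2.0 * Z[0] ** 3
    return k1, k2, k3

rng = np.random.default_rng(3)
gamma = 0.5                                   # width parameter (arbitrary)
phi = np.mod(rng.standard_cauchy(200_000) * gamma, 2.0 * np.pi)  # wrapped Cauchy
k1, k2, k3 = circular_cumulants(phi)
print(abs(k1), abs(k2), abs(k3))              # |k2|, |k3| ~ 0
```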
We illustrate the case of disorder in the coupling strengths in Fig. 4. As
can already be seen in the case of pure phase-shift disorder (Fig. 3), one
needs large system sizes $N$ to get close to the theoretical prediction. This
effect is even more pronounced when both the phase shifts and the coupling
strengths are random. Therefore, for the same setup as presented in Fig. 3,
we performed calculations for a selected coupling strength $K\\!=\\!0.625$,
for which the theory predicts $|Z_{1}|\\!=\\!0.6$. We added disorder in the
coupling strengths in such a way that $\left\langle
A_{jk}\right\rangle\\!=\\!1$, and followed the deviations of the obtained
values of $\left\langle|Z_{1}|\right\rangle$ from the theoretical prediction.
As follows from Fig. 4, the deviations decrease with $N$ roughly as
$\sim\\!N^{-1}$, although they are rather pronounced for small
$N\lesssim\\!1000$. We tested two distributions of $A_{jk}$: one uniform in
the interval $0\leq A_{jk}\\!\leq\\!2$, and one bimodal, where $A_{jk}$ takes
the values $0$ and $2$ with probability $1/2$. In the latter case, the
finite-size deviations from the theoretical prediction are stronger.
Figure 4: Deviations from the theoretical prediction of the main order
parameter vs. ensemble size $N$. The same coupling function and phase disorder
as in Fig. 3 were used but with an additional disorder in coupling strengths.
Red boxes: uniform distribution of $A_{jk}$ in the interval $0\leq
A_{jk}\\!\leq\\!2$; blue circles: $A_{jk}$ takes values $0$ or $2$ with
probability $1/2$. The black dotted line shows the law $\\!\sim\\!N^{-1}$.
Figure 5: Oscillatory partially synchronized regime in the noiseless case of
the Kuramoto-Daido model with two components of the symmetric bimodal
distribution having the Cauchy shapes:
$W(\omega)=\bigl(W_{C}(\omega+\Omega)+W_{C}(\omega-\Omega)\bigr)/2$,
where $\Omega=0.25$ and $\Delta=0.125$. Bold blue solid lines: the dynamics of
the order parameter $\left\langle|Z_{1}|\right\rangle$ in the systems of
$N=64\times 10^{3}$ oscillators with three coupling functions: panel (a):
$F(x)=1.25\sin(x)+0.75\sin(3x)$, panel (b):
$F(x)=1.25\sin(x)+2.25\sin(2x)$, and panel (c):
$F(x)=1.25\sin(x)+2.25\sin(3x)$. Thin red dashed curves: the dynamics
corresponding to a limit cycle in the ordinary differential equations for the
two subgroup order parameters, which can be obtained for the Kuramoto model
with the effective coupling $\mathcal{F}(x)=0.625\sin(x)$ using the Ott-
Antonsen approach in the thermodynamic limit.
Our following example is a system with a bimodal distribution of natural
frequencies [26, 27, 28, 29]. Now, the corresponding dynamics may indeed be
more complicated than for a unimodal distribution, with a region of
bistability between asynchronous and synchronous states. A partially
synchronous state may be characterized either by a constant or oscillating
order parameter. We consider the noiseless case of the Kuramoto-Daido model
with several harmonics in the coupling function (7) and with two components of
the symmetric bimodal distribution $W(\omega)$ having the Cauchy shapes, i.e.
$W(\omega)=\bigl(W_{C}(\omega+\Omega)+W_{C}(\omega-\Omega)\bigr)/2$.
According to our analysis, with the random phase shifts $\alpha_{jk}$ drawn
from the one-mode distribution (28) with $M=1$, the model under consideration
can be simplified to a system with an effective coupling function
$\mathcal{F}(x)\sim\sin(x)$ that contains only the first harmonic. The
dynamics of the latter oscillator systems with a symmetric bimodal
distribution was studied in detail [26, 27, 28, 29]. In particular, in Ref.
27, the possible dynamical regimes and the bifurcations between them were
comprehensively analyzed using macroscopic approaches based on the Ott-
Antonsen ansatz.
In Fig. 5, we show the oscillatory regimes for the Kuramoto-Daido models with
three different coupling functions, each having two harmonics. However, the
effective averaged dynamics is the same, which is clearly seen in the Figure.
It is worth mentioning that for good quantitative (not just qualitative)
agreement, the number $N$ of ensemble units should be quite large; we used
$N=64\times 10^{3}$.
Figure 6: Behavior of the first order parameter
$\left\langle\left|Z_{1}\right|\right\rangle$ for an ensemble of
$N\\!=\\!12\\!\times\\!10^{3}$ noisy rotators (3) with equal natural
frequencies ($\omega_{k}=\Omega$) and coupling function (31) with $K\\!=\\!1$
in dependence on $\sqrt{\sigma}$ for different values of the moment of inertia
$\mu$: $(\mathrm{a})\,\mu=0.02$, $(\mathrm{b})\,\mu=0.1$, and
$(\mathrm{c})\,\mu=0.5$. Green squares and blue circles are simulations
without and with phase shifts (taken from a distribution (28) with $M=1$),
respectively. Solid red lines show the theoretical prediction for this order
parameter according to Eq. (32).
Next, we present simulations of the rotator model (3). We consider the case of
noisy oscillators with equal natural frequencies and coupling function (31).
We consider random phase shifts $\alpha_{jk}$ distributed according to (28)
with $M=1$. Thus, the effective coupling function is
$\mathcal{F}(x)=0.5\,K\sin(x)$. For such coupling, the analytical expression
for the order parameter in dependence on the noise intensity $\sigma^{2}$ can
be written in the parametric (parameter $R$) form [18]
$|Z_{1}|=\frac{2\pi RI_{0}(R)I_{1}(R)}{2\pi RI_{0}^{2}(R)+\mu
KI_{1}(R)},\quad\sigma^{2}=\frac{K|Z_{1}|}{2R}\;.$ (32)
Here, $I_{0}(R)$ and $I_{1}(R)$ are the principal branches of the modified
Bessel functions of the first kind with orders $0$ and $1$, respectively.
Relation (32) is valid for small $\mu$, to first order in this parameter.
Note that for vanishing $\mu$ we recover the equations determining the solid
green line in Fig. 2. In Fig. 6, we compare the dynamics of a noisy ensemble
of rotators with and without random phase shifts. One can see that the effect
of the moment of inertia $\mu$ on the coherence level of the established state
is relatively small.
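The theoretical curves in Fig. 6 can be tabulated directly from the parametric relation (32); a short sketch using SciPy's modified Bessel functions follows, where the values of $K$, $\mu$ and the grid of the parameter $R$ are arbitrary illustrative choices.

```python
# Sweep the parameter R of Eq. (32) and tabulate (sigma^2, |Z_1|) pairs.
import numpy as np
from scipy.special import iv   # modified Bessel functions of the first kind

K, mu = 1.0, 0.1
R = np.linspace(1e-3, 30.0, 600)
I0, I1 = iv(0, R), iv(1, R)
Z1 = 2 * np.pi * R * I0 * I1 / (2 * np.pi * R * I0**2 + mu * K * I1)
sigma2 = K * Z1 / (2 * R)

for i in range(0, 600, 150):
    print(f"sigma^2 = {sigma2[i]:.4f}   |Z1| = {Z1[i]:.4f}")
```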
## VI Reduction in the case of global random phase shifts
The theory above was based on the assumption of independence of the phases and
the phase shifts, which is plausible if there are many different phase shifts
for a given oscillator, like for random pairwise phase shifts. Here, we
consider a slightly different setup, where phase shifts $\alpha_{jk}$ are not
attributed to connections $j\to k$, but separately to the driven or to the
driving system; we call such a situation “global random phase shifts”. Thus,
these phase shifts have one index instead of two. In the case of global phase
shifts, the averaging requires additional justification. Such an analysis has
been performed in Ref. 12 for the Kuramoto-Daido setup; here, we present a
similar consideration for the Winfree model with noise.
We consider noisy oscillators with global random phase shifts in coupling
$\dot{\varphi}_{k}=\sigma\xi_{k}(t)+\frac{1}{N}S(\varphi_{k}-\alpha_{k})\sum_{j}Q(\varphi_{j}-\beta_{j})\;,$
(33)
where $\xi_{k}(t)$ are independent white Gaussian forces with zero mean
$\left\langle\xi\right\rangle=0$, and auto-correlation
$\left\langle\xi_{k}(t^{\prime})\,\xi_{j}(t+t^{\prime})\right\rangle\\!=\\!2\delta_{jk}\delta(t)$.
The phase shifts $\alpha_{k}$ are attributed to the driven systems; they
characterize the phase shifts in the phase sensitivity function independently
of the driving. On the other hand, the phase shifts $\beta_{j}$ are attributed
to the driving units; they characterize the force functions. Remarkably, by a
change of variables $\theta_{k}=\varphi_{k}-\alpha_{k}$, one can transfer both
shifts to the driving unit:
$\dot{\theta}_{k}=\sigma\xi_{k}(t)+\frac{1}{N}S(\theta_{k})\sum_{j}Q(\theta_{j}+\alpha_{j}-\beta_{j})$
(34)
We thus denote $\gamma_{j}=\beta_{j}-\alpha_{j}$ and consider it as a single
combined global random phase shift with probability density
$\mathit{\Gamma}(\gamma)$.
In the thermodynamic limit $N\to\infty$, we describe the population with the
probability density $P(\theta,t|\gamma)$, which generally can depend on the
phase shift $\gamma$. The Fokker-Planck equation for this density reads
$\frac{\partial P}{\partial
t}+\frac{\partial}{\partial\theta}\Bigl(S(\theta)\bar{Q}(t)P\Bigr)=\sigma^{2}\frac{\partial^{2}P}{\partial\theta^{2}}\;,$
(35)
where
$\bar{Q}(t)=\\!\int_{0}^{2\pi}\\!\\!d\theta\int_{0}^{2\pi}\\!\\!d\gamma\,Q(\theta-\gamma)\mathit{\Gamma}(\gamma)P(\theta,t|\gamma)$
(36)
contains the averaging over the phases and the phase shifts. The crucial
observation is that, although $P(\theta,t|\gamma)$ can potentially depend on
$\gamma$, for example, an initial condition at time $t=0$ can contain a
$\gamma$-dependence, the equation for $P(\theta,t|\gamma)$ does not contain
$\gamma$. Thus, if this equation, for any prescribed time-dependent function
$\bar{Q}(t)$, has a unique attracting solution $P(\theta,t)$ which does not
depend on the initial conditions, then the statistical dependence of the
phases on the phase shits $\gamma$ disappears, even if it was presented
initially. The property to have a unique solution is sometimes called “Global
Asymptotic Stability”. It is rather natural for the dissipative parabolic
equation (35) on the circle. Although we found only proofs for time-
independent $\bar{Q}$ in the literature, see Refs. 30, 31, it appears that
such a proof can be extended to the nonstationary case too [32]. For a master
equation, the global asymptotic stability has been established in Ref. 33.
Taking into account that $P(\theta,t|\gamma)\to P(\theta,t)$ at large times,
we insert the $\gamma$-independent density in Eq. (36) and obtain
$\bar{Q}=\\!\int_{0}^{2\pi}\\!\\!d\theta\,P(\theta,t)\mathcal{Q}(\theta),\qquad\mathcal{Q}(\theta)=\\!\int_{0}^{2\pi}\\!\\!d\gamma\,\mathit{\Gamma}(\gamma)Q(\theta-\gamma)\,.$
(37)
We see that the forcing function reduces to an effective phase-shift-
independent forcing function $\mathcal{Q}(\theta)$, which is the convolution
of the original coupling function and the distribution of the phase shifts.
This result parallels expression (19). A similar expression holds for the
Kuramoto-Daido coupling (cf. Ref. 12).
Figure 7: The average values of the order parameter
$\left\langle|Z_{1}|\right\rangle$ vs the parameter $K$ of the coupling
function $F(x)=K\bigl{(}\sin(x)-1.5\sin(2x)+0.5\cos(2x)\bigr{)}$ for two
values of the width parameter $\Delta$ of the Cauchy distribution (30) of the
frequencies $\omega_{k}$: (a) $\Delta=0.2$ and (b) $\Delta=0.4$. Green boxes:
without phase shifts; blue circles: with phase shifts. The red lines show the
theoretical prediction for the order parameter $\left|Z_{1}\right|$ according
to Ref. 18. System size $N=48\times 10^{4}$.
In the example of Fig. 7, we consider the coupled rotators (3) (inertia
parameter $\mu\\!=\\!0.1$) and include both a Cauchy distribution of natural
frequencies and noise with amplitude $\sigma\\!=\\!0.05$. Here, we take the
global phase shifts according to $F(\varphi_{j}-\varphi_{k}-\alpha_{j})$. In
Fig. 7, we choose the coupling function in the form
$F(x)=K\bigl{(}\sin(x)-1.5\sin(2x)+0.5\cos(2x)\bigr{)}$ and present the
average values of the order parameter $\left\langle|Z_{1}|\right\rangle$ for
simulations without phase shifts and for phase shifts distributed according to
(28) with $M=1$. One can see that the latter data nicely fit the theoretical
prediction of Ref. 18.
## VII Conclusion
We first summarize the results of this paper. We have considered different
models of globally coupled phase oscillators and rotators. In the case of a
“maximal disorder”, all the coupling functions are distinct and random,
sampled from some distribution. Based on the assumption of independence of the
phases and the coupling functions in the thermodynamic limit, we derived the
averaged equations for the phases, where effective deterministic coupling
functions enter. A more detailed consideration was devoted to the case where
the shapes of the random coupling functions are the same, but the amplitudes
and the phase shifts are random. Then, the effective functions are
renormalized convolutions of the original coupling functions and the
distribution densities of the phase shifts. In the Fourier representation, the
Fourier modes of the coupling functions are multiplied by Fourier modes of the
distribution densities of the phase shifts. This means that the effective
averaged coupling is “simpler” than the original one. In particular, if the
distribution of the phase shifts possesses just one Fourier mode, the
effective coupling function will possess only this mode, too. This property
allows us to check the validity of the approach numerically because, for the
one-mode coupling function, there are theoretical predictions for the behavior
of the order parameters.
A special case is the maximally frustrating one, where the averaged coupling
function vanishes. For disorder in the coupling strengths, this happens in
the Daido model [34]; in the case of random phase shifts, it occurs when the
phase shifts are distributed uniformly in the range $0\leq\alpha_{jk}\leq 2\pi$.
Our theory predicts that in the thermodynamic limit the interaction
vanishes. However, certain synchronization phenomena can still be observed in
finite ensembles with random phase shifts, as demonstrated recently in Ref.
35. We also expect that for other distributions there can be pronounced
deviations from the averaged behavior for finite ensembles. This issue
definitely deserves further exploration.
###### Acknowledgements.
L.A.S. acknowledges support from the Russian Science Foundation (grant no.
22-12-00348) and the Ministry of Science and Education of Russian Federation
(project no. FSWR-2024-0005).
## Data availability
All numerical experiments are described in the paper and can be reproduced
without additional information.
## References
* Winfree [1967] A. T. Winfree, “Biological rhythms and the behavior of populations of coupled oscillators,” J. Theor. Biol. 16, 15–42 (1967).
* Kuramoto [1975] Y. Kuramoto, “Self-entrainment of a population of coupled nonlinear oscillators,” in _International Symposium on Mathematical Problems in Theoretical Physics_ , edited by H. Araki (Springer Lecture Notes Phys., v. 39, New York, 1975) p. 420.
* Strogatz and Stewart [1993] S. H. Strogatz and I. Stewart, “Coupled oscillators and biological synchronization,” Scientific American , 68–75 (1993).
* Pikovsky, Rosenblum, and Kurths [2001] A. Pikovsky, M. Rosenblum, and J. Kurths, _Synchronization. A Universal Concept in Nonlinear Sciences._ (Cambridge University Press, Cambridge, 2001).
* Glass and Mackey [1988] L. Glass and M. C. Mackey, _From Clocks to Chaos: The Rhythms of Life._ (Princeton Univ. Press, Princeton, NJ, 1988).
* Buzsáki [2006] G. Buzsáki, _Rhythms of the brain_ (Oxford UP, Oxford, 2006).
* Daffertshofer and Pietras [2020] A. Daffertshofer and B. Pietras, “Phase synchronization in neural systems,” Synergetics , 221–233 (2020).
* Kalloniatis [2010] A. C. Kalloniatis, “From incoherence to synchronicity in the network Kuramoto model,” Physical Review E 82, 066202 (2010).
* Chiba, Medvedev, and Mizuhara [2018] H. Chiba, G. S. Medvedev, and M. S. Mizuhara, “Bifurcations in the Kuramoto model on graphs,” Chaos: An Interdisciplinary Journal of Nonlinear Science 28 (2018).
* Juhász, Kelling, and Ódor [2019] R. Juhász, J. Kelling, and G. Ódor, “Critical dynamics of the Kuramoto model on sparse random networks,” Journal of Statistical Mechanics: Theory and Experiment 2019, 053403 (2019).
* Park, Rhee, and Choi [1998] K. Park, S. W. Rhee, and M. Y. Choi, “Glass synchronization in the network of oscillators with random phase shifts,” Phys. Rev. E 57, 5030–5035 (1998).
* Smirnov and Pikovsky [2024] L. A. Smirnov and A. Pikovsky, “Dynamics of oscillator populations globally coupled with distributed phase shifts,” Phys. Rev. Lett. 132, 107401 (2024).
* Shinomoto and Kuramoto [1986] S. Shinomoto and Y. Kuramoto, “Phase transitions in active rotator systems,” Prog. Theor. Phys. 75, 1105–1110 (1986).
* Sakaguchi, Shinomoto, and Kuramoto [1988] H. Sakaguchi, S. Shinomoto, and Y. Kuramoto, “Phase transitions and their bifurcation analysis in a large population of active rotators with mean-field coupling,” Prog. Theor. Phys. 79, 600–607 (1988).
* Klinshov _et al._ [2021] V. V. Klinshov, S. Y. Kirillov, V. I. Nekorkin, and M. Wolfrum, “Noise-induced dynamical regimes in a system of globally coupled excitable units,” Chaos: An Interdisciplinary Journal of Nonlinear Science 31 (2021).
* Tanaka, Lichtenberg, and Oishi [1997] H. Tanaka, A. Lichtenberg, and S. Oishi, “First order phase transition resulting from finite inertia in coupled oscillator systems,” Phys. Rev. Lett. 78, 2104–2107 (1997).
* Hong _et al._ [1999] H. Hong, M. Y. Choi, J. Yi, and K.-S. Soh, “Inertia effects on periodic synchronization in a system of coupled oscillators,” Phys. Rev. E 59, 353–363 (1999).
* Munyaev _et al._ [2020] V. O. Munyaev, L. A. Smirnov, V. A. Kostin, G. V. Osipov, and A. Pikovsky, “Analytical approach to synchronous states of globally coupled noisy rotators,” New Journal of Physics 22, 023036 (2020).
* Munyayev _et al._ [2023] V. O. Munyayev, M. I. Bolotov, L. A. Smirnov, G. V. Osipov, and I. Belykh, “Cyclops states in repulsive kuramoto networks: The role of higher-order coupling,” Phys. Rev. Lett. 130, 107201 (2023).
* Filatrella, Nielsen, and Pedersen [2008] G. Filatrella, A. H. Nielsen, and N. F. Pedersen, “Analysis of a power grid using a Kuramoto-like model,” Eur. Phys. J. B 61, 485–491 (2008).
* Dorfler and Bullo [2012] F. Dorfler and F. Bullo, “Synchronization and transient stability in power networks and nonuniform Kuramoto oscillators,” SIAM Journal on Control and Optimization 50, 1616–1642 (2012).
* Kuramoto [1984] Y. Kuramoto, _Chemical Oscillations, Waves and Turbulence_ (Springer, Berlin, 1984).
* Izhikevich [1998] E. M. Izhikevich, “Phase models with explicit time delays,” Phys. Rev. E 58, 905–908 (1998).
* Ott and Antonsen [2008] E. Ott and T. M. Antonsen, “Low dimensional behavior of large systems of globally coupled oscillators,” CHAOS 18, 037113 (2008).
* Tyulkina _et al._ [2018] I. V. Tyulkina, D. S. Goldobin, L. S. Klimenko, and A. Pikovsky, “Dynamics of noisy oscillator populations beyond the Ott-Antonsen ansatz,” Phys. Rev. Lett. 120, 264101 (2018).
* Bonilla, Vicente, and Spigler [1998] L. L. Bonilla, C. J. P. Vicente, and R. Spigler, “Time-periodic phases in populations of nonlinearly coupled oscillators with bimodal frequency distributions,” Physica D: Nonlinear Phenomena 113, 79 – 97 (1998).
* Martens _et al._ [2009] E. A. Martens, E. Barreto, S. H. Strogatz, E. Ott, P. So, and T. M. Antonsen, “Exact results for the Kuramoto model with a bimodal frequency distribution,” Phys. Rev. E 79, 026204 (2009).
* Campa [2020] A. Campa, “Phase diagram of noisy systems of coupled oscillators with a bimodal frequency distribution,” Journal of Physics A: Mathematical and Theoretical 53, 154001 (2020).
* Kostin _et al._ [2023] V. A. Kostin, V. O. Munyaev, G. V. Osipov, and L. A. Smirnov, “Synchronization transitions and sensitivity to asymmetry in the bimodal Kuramoto systems with Cauchy noise,” Chaos: An Interdisciplinary Journal of Nonlinear Science 33, 083155 (2023).
* Gardiner [1996] C. W. Gardiner, _Handbook of Stochastic Methods_ (Springer, Berlin, 1996).
* Calogero [2012] S. Calogero, “Exponential convergence to equilibrium for kinetic Fokker-Planck equations,” Comm. Part. Diff. Eqs. 37, 1357–1390 (2012).
* [32] S. Zelik (private communication, 2023).
* Earnshaw and Keener [2010] B. A. Earnshaw and J. P. Keener, “Global asymptotic stability of solutions of nonautonomous master equations,” SIAM Journal on Applied Dynamical Systems 9, 220–237 (2010).
* Daido [1992] H. Daido, “Quasientrainment and slow relaxation in a population of oscillators with random and frustrated interactions,” Phys. Rev. Lett. 68, 1073–1076 (1992).
* Pikovsky and Bagnoli [2024] A. Pikovsky and F. Bagnoli, “Dynamics of oscillator populations with disorder in the coupling phase shifts,” New Journal of Physics 26, 023054 (2024).
# Voronoi diagrams for the distributed sensor network system data processing
Almagul Kondybayeva and Giovanna Di Marzo
Centre Universitaire d’Informatique, Universite de Geneve, Carouge 1212,
Switzerland<EMAIL_ADDRESS>
###### Abstract
This article presents a computational model for the spatial addressing of
sensors in a dynamically changing real-time Internet of Things system. The
model is based on Voronoi diagrams as the basic data structure. Problem:
correct data addressing without time delays in real-time processing, and
database indexing in distributed storage. Goal: to develop a real-time
processing model for object location identification within the Voronoi
diagram data structure. Relevance: research and development on contemporary
issues of convergence (the limit $N$ up to which the model exhibits algorithm
convergence), time delay, correct indexing in database transactions, and
addressing over wireless radio frequencies in distributed sensor systems. The
solution proposes the Voronoi diagram, a computational geometry data
structure, together with a sweeping-curve algorithm. The methods comprise the
following steps: simulation of the dynamically changing agent system using a
set of points based on contemporary paths of public transport routes and bike
and vehicle circulation; 3D map rendering on the SITG Canton of Geneva map;
and Voronoi diagram calculation with Fortune’s algorithm. Results: this work
presents 2D static and dynamic Fortune’s realizations of the Voronoi
diagrams, and describes the architecture of a model for constructing a
distributed sensor system based on Voronoi diagrams. Scope: geographic charts
showing the distribution of parameters, determination of the nearest points
to an object, spatial arrangement of objects in nature, solving the problem
of finding all nearest neighbors of an object, and database indexing and
search transactions based on quadrant trees. Conclusions: the research shows
the great potential of new data structures based on a class of computational
geometry structures and sweeping curves in spatially distributed dimensions.
###### keywords:
sensor networks, computational geometry, internet of things, Voronoi diagrams,
spatial join, spatial index, sweeping curves, Morton code, Lebesgue curve,
search operations, distributed database, publisher subscriber
###### MSC:
[2010] 68-06, 68P05
††journal: Future Generation Computer Systems
## 1 Introduction
Historically, a noteworthy application of the Voronoi diagram, or Thiessen
polygons, emerged in the work of John Snow, widely regarded as the father of
modern epidemiology, who studied the cholera outbreak that plagued London in
1854. At that time, neither the etiology (i.e., the cause of occurrence) nor
the mode of transmission of this disease was precisely known; there were
disputes about two possibilities: infection through contact with the patient,
his clothes and/or property, and the theory that the disease spread through
atmospheric phenomena such as wind. Snow, using a geographical method,
revealed that the cause of the disease was the consumption of water
contaminated with feces. To do this, he mapped the distribution of cholera
deaths. Then he studied the location of drinking water sources in the city
and identified the Voronoi regions for each of them. After calculating the
distance between the residence of each victim and the nearest pumping
station, he concluded that the area most severely affected by cholera
corresponds to the Voronoi region associated with the pumping station on
Broad Street, as 73 of 83 victims died there. After the handle of this pump
was removed, the outbreak of cholera was extinguished. Figure 1 represents
that map.
Figure 1: The area most severely affected by cholera
### Systematization of the main problems
The IT architecture combines two seemingly incompatible things: on the one
hand, a large number of peripheral devices with low computing power, low
power consumption, and high speed of reaction to events, subject to delays in
signal transmission; on the other hand, cloud servers with high computing
power for processing large amounts of data, storing and classifying them,
often with elements of machine intelligence and analytics. These two worlds
use completely different principles of construction and internal architecture [1].
### Problem identification
There are several levels of integration at which the Internet of Things and
edge technologies pose challenges. The first level is the integration of IoT
and Edge with the underlying IT systems deployed in manufacturing, financial,
engineering and other areas. Many of these basic systems are obsolete. If they
do not have APIs that enable IoT integration, then batch ETL (extract,
transform, load) software may be required to load data into these systems.
The second area of challenge is the IoT itself. Many IoT devices are built
by independent vendors. They use their own proprietary OS. This makes it
difficult to ”mix and match” different IoT devices in a single Edge
architecture. The area of IoT security and compliance is still evolving.
Meanwhile, organizations’ IT professionals can now ask potential IoT vendors
about what is already available for heterogeneous integration and whether they
plan to provide interoperability in future products.
When it comes to integrating legacy systems, ETL is the one integration method
that can help if APIs for the systems are not available. Another alternative
is to write an API, but it takes a long time. The good news is that most
legacy vendors are aware of the upcoming IoT wave and are already developing
their own APIs if they haven’t already. IT departments should check with major
system vendors to find out what their plans for IoT integration are. Once the
data volume grows significantly, it turns out that building the spatial index
itself can be problematic: either the algorithm goes beyond its definition,
or the build time becomes unacceptably long, or the search is ineffective. As
soon as the spatial data begins to change intensively, the spatial index can
begin to degrade; rebuilding the index helps, but not for long. The ETL layer
is the level of collection, processing and storage of data. The back-end ETL
(extract, transform, and load) is the third ETL operation: the first was in
the periphery, the second in the gateway. The back-end ETL collects data
from all peripherals and gateways and is responsible for the following
operations:
* 1.
Collection of information,
* 2.
Bringing information to standard form,
* 3.
Saving information for future use,
* 4.
Information lifecycle management including archiving and destruction,
* 5.
Notifying other services when new data arrives.
The Figure 2 represents The typical ETL layer in the IoT sensor network
systems.
Figure 2: The typical ETL layer in the IoT sensor network systems
To organize spatial search in multidimensional indices, this work proposes to
find a logarithmic way of organizing the computational model. In connection
with the problems that arise with R-trees [2] and quadrant trees [3], this
work settled on the development of algorithms based on self-similar functions
[4] isomorphic to binary trees [5]. Self-similar functions [6] are used to
organize multidimensional indexing and searching.
### Basic mathematical model
The basic mathematical model represents the architecture of the sensor
network system, based on a publisher/subscriber architecture with data
circulation over the Voronoi diagram data structure [7]. Storage operations
(load) are intended for storing, sorting and subsequent retrieval of
information.
cases. If the data does not have a strict schema (table columns), then it is
stored in NoSQL databases. However, if the data can be systematized with a
fixed schema, then SQL database types are used. The latter, in turn, have 2
types - OLTP (Online Transactional Processing) and OLAP (Online Analytic
Processing). As the name suggests, the first type is more suitable for the ETL
process itself - writing new values to the database, while the second is
more convenient for searching and analyzing data. Therefore, often after
loading the OLTP database, in the background, the data is copied to OLAP.
There are situations when it is not convenient or possible to store data in
databases, for example, in the form of a record. This data is written to the
data bucket, and the metadata of records is stored in databases. To reduce
storage costs, obsolete data is archived or deleted. And the last component of
this layer is the internal notification of the presence of new stored data for
presentation to clients and for analysis services [8]. This work proposes the
NoSQL approach for data processing based on Voronoi diagrams representation of
the set of the objects in the system within the construction of the spacial
indexation in NoSQL databases. Every sensor has its own unique identifier ID
corresponding to its IP address. The approximate model of the traffic
exchange from the sensor to the server is the following (1):
$\gamma=\frac{\rho\bar{t}}{2(1-\rho)}\frac{\sigma_{a}^{2}+\sigma_{s}^{2}}{\bar{t}^{2}}\frac{\bar{t}^{2}+\sigma_{s}^{2}}{\bar{a}^{2}+\sigma_{s}^{2}}+\bar{t}$
(1)
where $\sigma_{a}^{2},\sigma_{s}^{2}$ are the variances of the time interval
between packets and of the service time, $\bar{a}$ is the average interval
between packets, $\bar{t}$ is the average service time, and $\rho$ is the
system's load.
The flow of messages entering the messaging queue from publishers in the
system follows a Poisson distribution (2):
$\varrho(x)=\frac{e^{-\lambda}\lambda^{x}}{x!}$ (2)
where $\varrho(x)$ is the probability of receiving $x$ incoming signals per
unit of time, $x$ is the number of requests per unit of time, $\lambda$ is
the average number of requests per unit of time (the arrival rate), and
$e\approx 2.7183$ is the base of the natural logarithm.
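The following toy evaluation of Eqs. (1) and (2) illustrates how the quantities combine; all numerical parameter values below are assumed for illustration, not measured.

```python
# Toy evaluation of the traffic model: mean delay gamma from Eq. (1) and
# Poisson arrival probabilities from Eq. (2).
import math

rho, t_bar = 0.7, 1.0                        # load and mean service time
a_bar = t_bar / rho                          # mean inter-packet interval
var_a, var_s = 0.5, 0.3                      # assumed variances

gamma = (rho * t_bar / (2 * (1 - rho))
         * (var_a + var_s) / t_bar**2
         * (t_bar**2 + var_s) / (a_bar**2 + var_s)
         + t_bar)
print(f"approximate delay gamma = {gamma:.3f}")

lam = 4.0                                    # mean arrivals per unit time
for x in range(8):
    p = math.exp(-lam) * lam**x / math.factorial(x)
    print(x, round(p, 4))
```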
## 2 Methodology
The key specificity of the Lebesgue curve [9] lies in how a search query is
split into subqueries; a code sketch of the key machinery follows the lists
below.
* 1.
first we find the starting extent, which is the minimum rectangle that
includes the search extent and contains one contiguous range of key values,
* 2.
calculate the key values for the lower left and upper right points of the
search extent (KMin, KMax),
* 3.
find a common binary prefix (from high order to low order) for KMin, KMax,
* 4.
zeroing all the digits behind the prefix, we get SMin; filling them with ones,
we get SMax,
* 5.
do the inverse transformation of keys to coordinates and find the corners of
the starting extent.
In the case of the Hilbert curve, by the way, the lower left corner of the
starting extent does not necessarily come from SMin, one needs to choose the
minimum value. The same with the upper right corner. The starting extent can
be noticeably larger than the search extent, if one is not unlucky, it will
turn out to be the maximum extent of the layer (empty prefix). For the
Z-curve, optimization can be done and the starting extent can be equated to
the search extent. This is possible due to the peculiarity of the z-curve -
for any rectangle, its lower left corner gives the minimum key value, and the
upper right corner gives the maximum (in the rectangle). Moreover, such a
starting extent can contain more than one range of values, but this is further
removed by filtering. One pushes the starting extent onto the subquery stack;
then, until the stack is empty, one pops the top element. If it does not
overlap with the search extent, one discards it and skips the iteration; this
happens when the starting extent is larger than the search extent and the
excess needs to be ignored. Furthermore, if the minimum point of the subquery
is within the search extent, one queries the index by the value of this
point. As a result, one gets two values: the ”first key”, greater than or
equal to the desired one, and the ”last key” on the physical (leaf) page
where the ”first key” lies:
* 1.
if the “first key” is greater than the maximum value of the current subquery,
one should ignore the current subquery, then skip the iteration,
* 2.
if the entire subquery is inside the search extent, one reads it out by
traversing the index from the minimum to the maximum value, then ends the
iteration,
* 3.
if the “last key” is higher than the maximum value of the current subquery,
then all the data of the current subquery is on this page; one needs to read
the page to the end, filter out the unnecessary entries, and end the
iteration,
* 4.
in the separate case where the “last key” equals the maximum value of the
current subquery, one processes it separately and traverses forward,
splitting the current subquery,
* 5.
one appends 0 and 1 to its prefix to get two new prefixes,
* 6.
then one fills the remainder of the key with 0 or 1 to get the minimum and
maximum values of the new subqueries,
* 7.
one pushes them onto the stack, first the one with the appended 1, then the
one with 0 (this is for unidirectional reading of the index).
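As promised above, here is a self-contained sketch of this key machinery, assuming 16-bit coordinates: Morton (Z-order) key construction, decoding, and the common-prefix computation of the starting extent (SMin, SMax). The function names are ours, for illustration only.

```python
# Z-order (Lebesgue/Morton) keys and the common-prefix starting extent.
BITS = 16

def morton_key(x: int, y: int) -> int:
    """Interleave the bits of x and y (x in even positions)."""
    key = 0
    for i in range(BITS):
        key |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
    return key

def morton_decode(key: int):
    x = y = 0
    for i in range(BITS):
        x |= ((key >> (2 * i)) & 1) << i
        y |= ((key >> (2 * i + 1)) & 1) << i
    return x, y

def starting_extent(lo, hi):
    """Zero the suffix after the common prefix for SMin; fill with ones for SMax."""
    kmin, kmax = morton_key(*lo), morton_key(*hi)
    suffix = (kmin ^ kmax).bit_length()      # bits after the shared prefix
    smin = kmin >> suffix << suffix          # zero all digits behind the prefix
    smax = smin | ((1 << suffix) - 1)        # ... or fill them with ones
    return morton_decode(smin), morton_decode(smax)

# corners of the starting extent for a small search rectangle
print(starting_extent((5, 9), (7, 12)))
```

Note that for the Z-curve the lower left corner of any rectangle gives the minimum key and the upper right corner the maximum, which is exactly the property exploited in the optimization above.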
The Figure 3 represents the proposed transactional model for the ETL level.
Figure 3: The proposed transactional model for the ETL level
### Results
The results represent the first-step realization of the developed
transactional ETL model based on Voronoi diagrams and Lebesgue curves and
consist of an implementation of the Voronoi diagrams in 2D using C++, Ubuntu
OS and OpenGL, in two versions: a static version and a version dynamically
recalculated in time. Figure 4 represents the static realization of the
Voronoi diagrams.
Figure 4: The static realization of the Voronoi diagrams in 2D
Figures 5 and 6 represent the dynamic realization of the Voronoi diagrams.
Figure 5: The dynamic realization of the Voronoi diagrams in 2D Figure 6: The
dynamic realization of the Voronoi diagrams in 2D
Moreover, the results include the transactional model based on the
publisher/subscriber architecture; Figure 3 represents the proposed
transactional model for the ETL level. The static Voronoi diagram
implementation is available in the repository "the static Voronoi Diagrams
realization", and the dynamic one in the repository "the dynamic Voronoi
Diagrams realization".
## 3 Research
The research potential of the work lies in the possibility of using the
algorithms for building maps that plot the density of parameters. The
potential lies in displaying density maps for the necessary parameters (for
example, the dynamics of animal migration, the assessment of geological
resources in different regions, the dynamics of the extinction of wild animal
species, the dynamics of the disappearance of forests, traffic congestion,
the consumption of housing and communal resources, the sensitivity of
crystals on a surface, the chemical properties of materials, the physical
properties of materials, etc.) in loci/zones of the same density.
The research potential of the work also lies in the possibility of applying
algorithms for constructing multidimensional spatial indexes to optimize
search queries over a growing volume of data.
### Applications
The proposed model can be used in various applications in sensor network
systems:
* 1.
for optimizing search operations in databases of multidimensional spatial geo
systems,
* 2.
for search operations in distributed transactions such as blockchain,
* 3.
for identification of the objects closest to a given object.
### Implementation examples
The class of Lebesgue curves (Morton codes) [10], [11] can be found in the
following database management systems:
* 1.
AWS database Amazon Aurora [12],
* 2.
Amazon RDS for MySQL [12].
## 4 Discussion
Due to the recursive structure of the Voronoi diagrams and Lebesgue curves,
there is a possibility to use recursive defragmentation of the iterations in
both algorithms. Due to the spatial adaptiveness of the Voronoi diagrams,
this work proposes to explore the adaptiveness properties and time costs of
these algorithms in real-time systems. The estimated time for the Voronoi
diagram calculation is $O(N\log{N})$, where $N$ is the number of sites
(points). The estimated time of the Lebesgue-curve indexation is also
$O(N\log{N})$, where $N$ is the number of indexed points.
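For experimentation, a Voronoi partition and nearest-sensor queries can be obtained with off-the-shelf $O(N\log{N})$ tools. A minimal sketch using SciPy follows (SciPy wraps Qhull rather than a hand-written Fortune sweep, and the sensor positions below are random stand-ins).

```python
# Voronoi cells of a sensor set plus nearest-sensor ("which cell?") queries.
import numpy as np
from scipy.spatial import Voronoi, cKDTree

rng = np.random.default_rng(0)
sensors = rng.uniform(0.0, 1.0, (50, 2))     # hypothetical sensor positions
vor = Voronoi(sensors)                       # cells, ridges, vertices

tree = cKDTree(sensors)                      # k-d tree for point location
query = np.array([[0.5, 0.5], [0.1, 0.9]])
dist, idx = tree.query(query)
print(idx)                                   # indices of the owning sensors
```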
## References
* [1] D. Nimratz, Iot architecture - a first look under the hood, Electronic resource (Aug. 2018).
URL https://habr.com/ru/post/420173/
* [2] A. Guttman, R-trees: A dynamic index structure for spatial searching, SIGMOD Rec. 14 (2) (1984) 47–57. doi:10.1145/971697.602266.
URL https://doi.org/10.1145/971697.602266
* [3] B. Muratshin, Hilbert, lebesgue … and emptiness, Electronic resource (Aug. 2019).
URL https://habr.com/ru/users/zzeng/
* [4] B. Muratshin, Hilbert curve vs z-order, Electronic resource (Oct. 2017).
URL https://habr.com/ru/post/340100/
* [5] B. Muratshin, About z-order and r-tree, Electronic resource (Jan. 2017).
URL https://habr.com/ru/post/319096/
* [6] B. Moon, H. V. Jagadish, C. Faloutsos, J. H. Saltz, Analysis of the clustering properties of the Hilbert space-filling curve, IEEE Transactions on Knowledge and Data Engineering 13 (1) (2001) 124–141.
URL https://drum.lib.umd.edu/handle/1903/804
* [7] L. Guibas, J. Stolfi, Primitives for the manipulation of general subdivisions and the computation of Voronoi diagrams, ACM Trans. Graph. 4 (2) (1985) 74–123. doi:10.1145/282918.282923.
URL https://doi.org/10.1145/282918.282923
* [8] D. Nimratz, Iot architecture, Electronic resource (Jun. 2019).
URL https://habr.com/ru/post/455377/
* [9] A. Esculier, Lebesgue curve, lebesguesche kurve, Electronic resource (2006).
URL https://mathcurve.com/fractals/lebesgue/lebesgue.shtml
* [10] G. M. Morton, A computer Oriented Geodetic Data Base; and a New Technique in File Sequencing, Technical report, IBM Ltd., 1966.
* [11] Wikipedia, Z-order curve, Electronic public library article (Jan. 2022).
URL https://en.wikipedia.org/wiki/Z-order_curve
* [12] S. Chandrasekaran, Amazon aurora under the hood: indexing geospatial data using z-order curves, AWS Database Blog (Jan. 2018).
URL https://aws.amazon.com/blogs/database/amazon-aurora-under-the-hood-
indexing-geospatial-data-using-z-order-curves/
# First-principles molecular dynamics simulation of liquid indium
Yu. D. Fomin (corresponding author)<EMAIL_ADDRESS>Vereshchagin Institute
of High Pressure Physics, Russian Academy of Sciences, Kaluzhskoe shosse, 14,
Troitsk, Moscow, 108840 Russia Moscow Institute of Physics and Technology
(National Research University), 9 Institutskiy Lane, Dolgoprudny, Moscow
region, 141701, Russia
###### Abstract
We report an ab-initio simulation of liquid indium in a wide range of
pressures and temperatures. We calculate the equation of state and the
thermal expansion and compressibility coefficients. The structure of the
radial distribution functions and structure factors. The results are compared
with available experimental data.
###### pacs:
61.20.Gy, 61.20.Ne, 64.60.Kw
## I Introduction
Liquid indium is an important metal in many fields of industry. It can be used
as a coating to reduce the friction coefficient, as a solder, as an additive
to glasses to modify their optical properties, etc. It has also attracted the
attention of researchers for many years. Since the 1960s, several important
works have studied the structure of liquid indium at different temperatures
str-1 ; str-2 ; str-3 . Most of them, however, considered the system at
ambient pressure only. At the same time, the melting curve of indium has been
measured up to the rather high pressure of 10.5 GPa dudley (see also mc for
the melting curve up to about 1 GPa).
Based on the results of Refs. dudley and mc , the authors of Ref. shen
studied the structure and the density of liquid indium up to the melting line
at $T=710$ K. Using X-ray techniques, they measured the density of molten
indium at several pressures up to the melting point, together with the
structure factors. They also determined the coordination number by
integrating the radial distribution functions $g(r)$. From their results one
can see that the coordination number monotonically increases with pressure.
Another X-ray study of liquid indium is reported in Ref. mudry . In this
study, experiments were performed at ambient pressure and a set of
temperatures. The authors report that the coordination number behaves
non-monotonically and conclude that a liquid-liquid phase transition takes
place.
In Ref. shen1 the molar volume of liquid indium at $T=710$ K was measured up
to $P=8.5$ GPa. Later, these data were fitted to the Birch-Murnaghan equation
in Ref. liq-in . Based on these fits, the authors calculated the bulk modulus
and the thermal expansion coefficient of liquid indium along the $T=710$ K
isotherm.
In the present work we perform ab-initio molecular dynamics calculations of
liquid indium. We calculate the equation of state of the system, its bulk
modulus and thermal expansion coefficient. We also compute the structure
factors. All quantities are compared with the experimental data where it is
possible.
## II System and methods
In the present study we investigate a system of 100 atoms of indium in a
cubic box with periodic boundary conditions. The system is simulated by means
of the ab-initio molecular dynamics method as implemented in the VASP
simulation package. The projector augmented wave method is used for the
treatment of the electronic structure. The energy cut-off is set to 96 eV.
The time step is set to 3 fs. Only the $\Gamma$ point is taken into account
in the ${\bf k}$-space calculations.
A large set of densities and temperatures is considered. The densities vary
from $\rho_{min}=3.88$ to $\rho_{max}=19.06$ $g/cm^{3}$. At each density we
first simulated the high-temperature system at $T_{max}=1000$ K. The last
configuration of this simulation was then used as the initial one for
simulations of the system at temperatures from $T=300$ up to $T=900$ K with a
step of $100$ K. Each simulation lasted for 10000 steps, i.e., 30 ps. The set
of simulated points is shown on the phase diagram in Fig. 1.
Figure 1: Points where simulations were performed on the phase diagram. The
curve ’Dudley et al.’ shows the results from Ref. dudley , the curve
’McDaniel’ the results from Ref. mc , ’Dudley et al. S’ a fit of the results
of Ref. dudley to the Simon equation (Eq. (2) of Ref. dudley ), and TW the
points of the present work.
## III Results and discussion
At the first step, we calculate the equation of state of liquid indium and
compare it to the experimental data from Ref. liq-in .
Fig. 2 presents a comparison of the data of this work with the experimental
results from liq-in . The experimental data are taken at $710$ K, while in
the present work $T=700$ K is used. We believe that this difference is
sufficiently small not to affect the comparison.
The difference between the experimental and simulation data does not exceed
$1.12\%$. Based on this, we conclude that the results of the simulations are
reliable.
Figure 2: Comparison of experimental data at $T=710$ K from Ref. liq-in and
the data from ab-initio simulation of this work at $T=700$ K.
In order to evaluate the equation of state of liquid indium on a more solid
basis, we fit it to a polynomial function
$P(\rho,T)=\sum_{n,m}a_{n,m}\rho^{n}T^{m},$ (1)
where $\rho$ is the density in $g/cm^{3}$, $T$ is the temperature in Kelvins,
$a_{n,m}$ are fitting coefficients, and the exponents $n$ and $m$ are chosen
in such a way that $n+m\leq 5$.
Fig. 3 shows a comparison of the equation-of-state data from the ab-initio
simulations and from the fit to Eq. (1). One can see that the quality of the
fit is excellent. Below we use this fitted equation of state for the
calculation of the response functions $B_{T}=\rho\left(\frac{\partial
P}{\partial\rho}\right)_{T}$ and
$\alpha_{P}=-\frac{1}{\rho}\left(\frac{\partial\rho}{\partial
T}\right)_{P}$. Similarly, a fit of the energy is used for the calculation of
the heat capacity of the system.
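A hedged sketch of this fitting procedure follows: ordinary least squares over the monomials $\rho^{n}T^{m}$ with $n+m\leq 5$, followed by numerical derivatives of the fitted surface for $B_{T}$ and $\alpha_{P}$. The data below are synthetic stand-ins for the ab-initio $(\rho,T,P)$ points.

```python
# Polynomial EOS fit, Eq. (1), and the response functions derived from it.
import numpy as np

terms = [(n, m) for n in range(6) for m in range(6) if n + m <= 5]

def design(rho, T):
    return np.column_stack([rho**n * T**m for n, m in terms])

rng = np.random.default_rng(0)
rho = rng.uniform(4.0, 19.0, 300)
T = rng.uniform(300.0, 1000.0, 300)
P = 2.0 * rho**2 - 0.01 * rho * T            # toy stand-in equation of state

a, *_ = np.linalg.lstsq(design(rho, T), P, rcond=None)

def P_fit(rho, T):
    return design(np.atleast_1d(rho), np.atleast_1d(T)) @ a

def dP_drho(rho, T, h=1e-4):
    return (P_fit(rho + h, T) - P_fit(rho - h, T)) / (2 * h)

def dP_dT(rho, T, h=1e-2):
    return (P_fit(rho, T + h) - P_fit(rho, T - h)) / (2 * h)

rho0, T0 = 8.0, 700.0
B_T = rho0 * dP_drho(rho0, T0)
# along an isobar dP = 0, so (d rho / d T)_P = -P_T / P_rho, hence
# alpha_P = -(1/rho)(d rho/d T)_P = P_T / (rho * P_rho)
alpha_P = (dP_dT(rho0, T0) / dP_drho(rho0, T0)) / rho0
print(B_T, alpha_P)
```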
n | m | $a_{n,m}$
---|---|---
0 | 0 | -305.57333657145500
1 | 0 | 238.70598753111230
2 | 0 | -63.833351593789722
3 | 0 | 6.5433130551773946
4 | 0 | -0.24008639734032150
5 | 0 | 3.9155839840238878E-003
0 | 1 | -0.17373675573617220
1 | 1 | 1.4237276981756433E-002
2 | 1 | 8.0437749691983397E-003
3 | 1 | -5.7392554358567341E-004
4 | 1 | 1.5599905130353485E-005
2 | 2 | 5.8408548738952959E-004
1 | 2 | -1.3516570420262682E-004
2 | 2 | -1.5393806829344470E-006
3 | 2 | -3.4804293612014242E-008
0 | 3 | -5.1700880554506057E-007
1 | 3 | 1.7197816344291688E-007
2 | 3 | 1.3138966034853920E-009
0 | 4 | -6.8034449601794833E-011
1 | 4 | -7.7188558042335661E-011
0 | 5 | 1.8898204983398251E-013
Table 1: Coefficients of Eq. (1). The resulting pressure is in kbar (0.1 GPa). The dimensions of the coefficients $a_{n,m}$ are chosen in such a way that $a_{n,m}\rho^{n}T^{m}$ is in kbar.
Figure 3: Comparison of the data from ab initio molecular dynamics and the fit to Eq. (1). The data for $T=1000$ K are shifted upward by 100 GPa in order to avoid overlaps.
Fig. 4 (a) and (b) show a comparison of the data of the present calculations with the experimental data from Ref. liq-in for the bulk modulus and the thermal expansion coefficient. In the case of the bulk modulus the curves are very close; however, the slope of the calculated curve is slightly larger. The largest discrepancy takes place at low pressure and reaches about $20\%$.
The agreement between simulation and experiment becomes worse in the case of the thermal expansion coefficient. The experimental values are smaller than the calculated ones, and the discrepancy increases with pressure, reaching about $40\%$ at $P=8$ GPa.
Figure 4: (a) Comparison of bulk modulus $B_{T}$ at $T=700$ K from ab-initio
simulation of the present work (AMD) and from experiments of Ref. liq-in . (b)
The same for the thermal expansion coefficient. (c) The same for isochoric
heat capacity $c_{V}$ (in units of $k_{B}$). The straight line shows
$c_{V}=2k_{B}$ level.
In order to characterize the structure of the liquid we calculated the radial distribution functions $g(r)$. Fig. 5 shows $g(r)$ for a set of pressures along the isotherm $T=700$ K. According to Ref. liq-in the melting point of indium at $T=710$ K is $P_{m}=7.77$ GPa. From Fig. 5 (a) one can see that even at a pressure as high as $P=26.70$ GPa the system preserves a liquid-like structure, which should be related to the formation of a metastable liquid in the ab initio simulations. Some signs of crystallinity can be observed at pressures as high as $P=50.75$ GPa, which is much higher than the experimental melting pressure.
Figure 5: (a) Radial distribution functions of liquid indium along the $T=700$ K isotherm for pressures from $P=-0.96$ GPa up to 26.60 GPa. (b) The same for higher pressures.
X-ray experiments measure the structure factor $S({\bf k})$, which is related to the radial distribution function by a Fourier transform. Fig. 6 shows the structure factor of the system obtained in the present work in comparison with the experimental ones from Refs. shen; mudry. The structure factors of this work are obtained by a numerical Fourier transform of $g(r)$. The results for $k<1.6$ $\AA^{-1}$ should be attributed to numerical errors and are discarded from consideration.
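A minimal sketch of such a transform is given below. It assumes $g(r)$ is tabulated on a uniform radial grid and uses the standard isotropic relation $S(k)=1+4\pi\rho_{n}\int_{0}^{R}r^{2}\left[g(r)-1\right]\frac{\sin kr}{kr}\,dr$, where $\rho_{n}$ is the number density; the truncation of the integral at the maximal grid radius $R$ is the usual source of the low-$k$ errors mentioned above.

```python
import numpy as np

def structure_factor(r, g, rho_n, k):
    """Isotropic S(k) from a tabulated g(r) via a numerical Fourier transform.

    r     : radial grid in Angstrom (assumed uniform)
    g     : g(r) values on that grid
    rho_n : number density, atoms per Angstrom^3
    k     : array of wave numbers, 1/Angstrom
    """
    k = np.atleast_1d(k)
    # np.sinc(x) = sin(pi x)/(pi x), hence sin(kr)/(kr) = np.sinc(kr/pi)
    integrand = r**2 * (g - 1.0) * np.sinc(k[:, None] * r / np.pi)
    return 1.0 + 4.0 * np.pi * rho_n * np.trapz(integrand, r, axis=1)
```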
One can see that the experimental structure factors demonstrate a very slow decay of the right branch of the first peak. In contrast, the first peak of $S({\bf k})$ in our calculations looks more symmetric and 'Gauss-like'. However, the overall agreement is sufficiently good.
Figure 6: Comparison of the structure factors obtained in the present work with the experimental ones. The curves are shifted upward to avoid overlaps. Pressures are given next to the curves. The curve at $P=0$ GPa and $T=690$ K is from Ref. mudry. The curves at $P=1.0$, $2.0$, $3.1$, $5.1$ and $6.3$ GPa are from Ref. shen. The curves labeled 'MD' after the pressure are our calculations.
Although crystallization is not visible in the radial distribution functions, it is more pronounced in the dynamical properties of the system, characterized by the mean square displacement. Fig. 7 (a) shows the mean square displacement along the $T=700$ K isotherm. One can see that some mobility of the atoms is preserved up to the pressure $P=12.28$ GPa and disappears at higher pressures. Fig. 7 (b) demonstrates the diffusion coefficient along the same isotherm. One can see that it monotonically decreases with pressure and vanishes at $P=16.82$ GPa.
Figure 7: (a) Mean square displacement of liquid indium at a set of pressures
at $T=700$ K. (b) Diffusion coefficient of liquid indium along the same
isotherm.
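For reference, the diffusion coefficient in Fig. 7 (b) can be extracted from the mean square displacement via the Einstein relation $D=\lim_{t\to\infty}\langle\Delta r^{2}(t)\rangle/6t$. The sketch below is a minimal version assuming unwrapped trajectories; a production analysis would in addition average over multiple time origins.

```python
import numpy as np

def msd(positions):
    """Mean square displacement from unwrapped positions.

    positions : array of shape (n_frames, n_atoms, 3), in Angstrom
    Returns MSD(t) for lag times measured from the first frame.
    """
    disp = positions - positions[0]            # displacement from frame 0
    return (disp**2).sum(axis=2).mean(axis=1)  # average over atoms

def diffusion_coefficient(t, msd_vals, fit_from=0.5):
    """Einstein relation: D = slope(MSD vs t) / 6, fitted on the tail."""
    i0 = int(fit_from * len(t))
    slope, _ = np.polyfit(t[i0:], msd_vals[i0:], 1)
    return slope / 6.0
```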
This work was carried out using computing resources of the federal collective usage center “Complex for simulation and data processing for mega-science facilities” at NRC “Kurchatov Institute”, http://ckp.nrcki.ru, and supercomputers at the Joint Supercomputer Center of the Russian Academy of Sciences (JSCC RAS). The work was supported by the Council of the President of the Russian Federation for State Support of Young Scientists (Grant MD-6103.2021.1.2).
## References
* (1) H. Ocken and C. N. J. Wagner, Temperature Dependence of the Structure of Liquid Indium, Phys. Rev. 149, 122 (1966)
* (2) H. Ruppersberg and K. H. Winterberg, Structure factor and resistivity of liquid indium at temperatures between 165 and 665 °C
* (3) M. L. Harris, R. L. Collier, R. W. Gruebel and T. O. Callaway, The direct pair correlation function of liquid indium, Physics Letters 65A, 244 (1978).
* (4) J. D. Dudley and H. T. Hall, Experimental Fusion Curves of Indium and Tin to 105 000 Atmospheres, Phys. Rev. 118, 1211 (1960)
* (5) M. L. McDaniel, S. E. Babb Jr., and G. J. Scott, Melting Curves of Five Metals under High Pressure, J. Chem. Phys. 37, 822 (1962)
* (6) G. Shen, N. Sata, N. Taberlet, M. Newville, M. L. Rivers, and St. R. Sutton, Melting studies of indium: determination of the structure and density of melts at high pressures and high temperatures, J. Phys.: Condens. Matter 14, 10533–10540 (2002)
* (7) S. Mudry, I. Shtablavyi, and U. Liudkevych, The relation between structure changes and thermal expansion in liquid indium, Phys. and Chem. of Liquids 55, 254-263 (2017)
* (8) G. Shen, N. Sata, M. Newville, M. L. Rivers, and S. R. Sutton, Molar volumes of molten indium at high pressures measured in a diamond anvil cell, Applied Physics Letters 81, 1411 (2002)
* (9) H. Li, Yo. Sun, and M. Li, Equation of state of liquid Indium under high pressure, AIP Advances 5, 097163 (2015).
# Distance-Based Propagation for
Efficient Knowledge Graph Reasoning
Harry Shomer1 Yao Ma2 Juanhui Li1 Bo Wu3
Charu C. Aggarwal4 Jiliang Tang1
1 Michigan State University 2 Rensselaer Polytechnic Institute 3 Colorado
School of Mines
4 IBM T. J. Watson Research Center
{shomerha, lijuanh1<EMAIL_ADDRESS><EMAIL_ADDRESS>
<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
Knowledge graph completion (KGC) aims to predict unseen edges in knowledge
graphs (KGs), resulting in the discovery of new facts. A new class of methods has been proposed to tackle this problem by aggregating path information. These methods have shown tremendous ability in the task of KGC. However, they are plagued by efficiency issues. Though there are a few recent attempts to address this through learnable path pruning, they often sacrifice performance to gain efficiency. In this work, we identify two intrinsic
limitations of these methods that affect the efficiency and representation
quality. To address the limitations, we introduce a new method, TAGNet, which
is able to efficiently propagate information. This is achieved by only
aggregating paths in a fixed window for each source-target pair. We
demonstrate that the complexity of TAGNet is independent of the number of
layers. Extensive experiments demonstrate that TAGNet can cut down on the
number of propagated messages by as much as $90\%$ while achieving competitive performance on multiple KG datasets. The code is available at https://github.com/HarryShomer/TAGNet.
## 1 Introduction
Knowledge graphs (KGs) encode facts via edges in a graph. Because of this, one
can view the task of predicting unknown edges (i.e. link prediction) as
analogous to uncovering new facts. This task is referred to as knowledge graph
completion (KGC) and has attracted a bevy of research over the past decade
Bordes et al. (2013); Trouillon et al. (2016); Schlichtkrull et al. (2018);
Zhu et al. (2021). Most work has focused on learning quality representations
for all nodes (i.e. entities) and edge types (i.e. relations) in the graph to
facilitate KGC.
Recently, methods Zhu et al. (2021); Sadeghian et al. (2019); Zhang and Yao
(2022), have been introduced that move away from the embedding-based approach
and focus instead on learning directly from path-based information. One recent
GNN-based method, NBFNet Zhu et al. (2021), draws inspiration from the
Bellman-Ford algorithm by computing path information through dynamic
programming. By doing so, it learns pairwise embeddings between all node pairs
in an inductive fashion. It achieves state-of-the-art performance in both the
transductive and inductive KGC settings. In this work, we refer to such
methods as path-based GNNs. However, a downside of path-based GNNs is their inefficiency, which limits their applicability to large real-world graphs and inhibits their ability to propagate deeply in the graph. Two
recent methods have been proposed to address the inefficiency problem, i.e.,
$\text{A}^{*}\text{Net}$ Zhu et al. (2022) and AdaProp Zhang et al. (2023), by
only propagating to a subset of nodes every iteration. However, they still
tend to propagate unnecessary and redundant messages.
For path-based GNNs, only the source node is initialized with a non-zero
message at the beginning of the propagation process. Such models often run a
total of $T$ layers, where, in each layer, all nodes aggregate messages from
their neighboring edges. We identify that this design is inefficient by making
the following two observations. (1) Empty Messages: In the propagation
process, a node only obtains non-empty messages when the number of propagation
layers is $\geq$ the shortest path distance between the source and the node.
This means that a large number of nodes far from the source node only
aggregate “empty” messages in the early propagation layers. Nonetheless, path-based GNN models such as NBFNet propagate these unnecessary “empty” messages in these early propagation layers. (2) Redundant Messages: To ensure that path information from the source reaches distant nodes, the number of layers $T$
needs to be sufficiently large. However, a large $T$ induces the propagation
of redundant messages for those nodes that are close to the source node.
Intuitively, short paths contain more significant information than long ones
Katz (1953). The “close” nodes typically aggregate enough information from
shorter paths in the early propagation layers. Propagating messages for longer
paths in later layers for “close” nodes does not provide significant
information and needlessly adds to the complexity. More details on these two
observations are provided in Section 3.1.
To address these limitations and make the propagation process more efficient,
we aim to develop an algorithm that limits the propagation of “empty” and
“redundant” messages. In particular, we propose a new method, TAGNet (TruncAted propaGation Network). TAGNet only aggregates paths in a fixed window
for each source-target pair, which can be considered a form of path pruning.
Our contributions can be summarized as follows:
* •
We propose a new path-based GNN, TAGNet, which customizes the amount of path-
pruning for each source-target node pair.
* •
We demonstrate that the complexity of TAGNet is independent of the number of
layers, allowing for efficient deep propagation.
* •
Extensive experiments demonstrate that TAGNet reduces the number of aggregated
messages by up to $90\%$ while matching or even slightly outperforming NBFNet
on multiple KG benchmarks.
## 2 Preliminary
In this section, we first introduce the notation used throughout the paper. We
then introduce the path formulation from Zhu et al. (2021), the generalized
Bellman-Ford algorithm Baras and Theodorakopoulos (2010), and NBFNet Zhu et
al. (2021).
### 2.1 Notations
We denote a KG as $\mathcal{G}=\\{\mathcal{V},\mathcal{R},\mathcal{E}\\}$ with
entities $\mathcal{V}$, relations $\mathcal{R}$, and edges $\mathcal{E}$. An
edge is denoted as a triple of the form $(s,q,o)$, where $s$ is the subject, $q$ the query relation, and $o$ the object. The goal of KGC is to infer the missing entity for an incomplete fact $(s,q,?)$. In such a problem, we refer to the entity $s$ as the source node and any possible answer $?$ as the target node. Lastly, we denote the shortest path distance between nodes $s$ and $o$ as $\text{dist}(s,o)$. We assume an edge weight of 1, since KGs typically do not contain edge weights.
### 2.2 Path Formulation
Zhu et al. (2021) introduce a general path formulation for determining the
existence of an edge $(s,q,o)$. They consider doing so by aggregating all
paths between $s$ and $o$, conditional on the query $q$. We denote the maximum
path length as $T$ (in their paper they set $T=\infty$), $P_{s,o}^{t}$
represents all paths of length $t$ connecting nodes $s$ and $o$, and
$\mathbf{w}_{q}(e_{i})$ is the representation of an edge $e_{i}$ conditional
on the relation $q$. The representation of an edge $(s,q,o)$ is given by
$\mathbf{h}_{q}(s,o)$:
$\mathbf{h}_{q}(s,o)=\bigoplus_{t=1}^{T}\bigoplus_{p\in
P_{s,o}^{t}}\bigotimes_{i=1}^{\lvert p\rvert}\mathbf{w}_{q}(e_{i}).$ (1)
Zhu et al. (2021) show that this formulation can capture many existing graph
algorithms including the Katz index Katz (1953), Personalized PageRank Page et
al. (1999) and others.
(a) Example of propagating for three iterations with $\delta{=}1$.
(b) Update status of nodes when $\delta{=}1$.
Figure 1: Example of our algorithm when $\delta=1$. We note that an undirected
blue edge indicates that both nodes aggregate each other. A directed edge
indicates that only the head node aggregates the tail node. E.g., at iteration
2 node 2 aggregates node 1, however node 1 doesn’t aggregate node 2.
### 2.3 Generalized Bellman-Ford
Due to the exponential relationship between path length and the number of paths, calculating Eq. (1) for large $T$ is infeasible. As such, Zhu et al. (2021) instead model Eq. (1) via the generalized Bellman-Ford algorithm Baras and Theodorakopoulos (2010), which recursively computes such path information in a more efficient manner. It is formulated as:
$\mathbf{h}_{q}^{(0)}(s,o)=\mathbf{1}_{q}(s=o),$ (2)
$\mathbf{h}_{q}^{(t)}(s,o)=\bigg{(}\bigoplus_{(x,r,o)\in\mathcal{E}(o)}\mathbf{h}_{q}^{(t-1)}(s,x)\otimes\mathbf{w}_{q}(x,r,o)\bigg{)}\oplus\mathbf{h}_{q}^{(0)}(s,o),$ (3)
where $\mathcal{E}(o)$ represents all edges with $o$ as the object entity,
i.e., $(*,*,o)$. Zhu et al. (2021) prove that $T$ iterations of the generalized Bellman-Ford algorithm are equivalent to Eq. (1) with a maximum path length of $T$.
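To make the recursion concrete, a minimal sketch is given below. It instantiates the abstract operators $(\oplus,\otimes)$ with the ordinary (sum, product) over scalar edge weights, which recovers a Katz-style weighted walk count rather than the learned operators introduced next; the graph representation is an assumption for illustration.

```python
from collections import defaultdict

def generalized_bellman_ford(edges, w, source, num_nodes, T):
    """Scalar instance of Eqs. (2)-(3) with sum as oplus and product as otimes.

    edges : list of (x, r, o) triples
    w     : dict mapping (x, r, o) -> scalar edge weight w_q(x, r, o)
    Returns h[o] = sum over walks of length <= T from source to o of the
    product of edge weights along the walk.
    """
    in_edges = defaultdict(list)
    for x, r, o in edges:
        in_edges[o].append((x, r, o))

    h = [1.0 if o == source else 0.0 for o in range(num_nodes)]  # Eq. (2)
    for _ in range(T):
        new_h = []
        for o in range(num_nodes):
            agg = sum(h[x] * w[(x, r, o2)] for x, r, o2 in in_edges[o])
            boundary = 1.0 if o == source else 0.0  # the h^(0) term in Eq. (3)
            new_h.append(agg + boundary)
        h = new_h
    return h
```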
### 2.4 NBFNet
Zhu et al. (2021) extend Eq. (2) via the inclusion of learnable parameters.
$\mathbf{w}_{q}(x,r,o)$ is replaced with a learnable embedding
$\mathbf{w}_{q}(r)$ for each relation $r$. A linear transformation is further
included in the aggregation. It is formulated as the following where for
convenience we set $\mathbf{h}_{q}^{(t)}(s,o)=\mathbf{h}_{o}^{(t)}$ and
$\mathbf{w}_{q}(x,r,o)=\mathbf{w}_{q}(r)$:
$\mathbf{h}_{o}^{(0)}=\text{INDICATOR}(s,o,q),$ (4)
$\mathbf{h}_{o}^{(t)}=\text{AGG}\Big{(}\big\\{\text{MSG}(\mathbf{h}_{x}^{(t-1)},\mathbf{w}_{q}(r))\>|\>(x,r,o)\in\mathcal{E}(o)\big\\}\cup\\{\mathbf{h}_{o}^{(0)}\\}\Big{)}.$
The representation of the source node $\mathbf{h}_{s}^{(0)}$ is initialized to
a learnt embedding, $\mathbf{q}_{r}$, corresponding to the query relation $r$.
For all other nodes $(o\neq s)$, they learn a separate initial embedding.
However, in practice, they simply initialize the other nodes to the zero vector.
For the AGG function they consider the sum, max, min and PNA operations. For
the MSG function they consider the TransE Bordes et al. (2013), DistMult Yang
et al. (2015), and RotatE Sun et al. (2019) operators. The final
representation is passed to a score function $f$ which is modeled via an MLP.
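A minimal sketch of one such layer is given below. For brevity it uses sum aggregation and the DistMult message function rather than PNA, and the tensor layout is an assumption; it is not the reference NBFNet implementation.

```python
import torch
import torch.nn as nn

class NBFNetLayer(nn.Module):
    """One propagation layer in the spirit of Eq. (4): sum-AGG, DistMult-MSG."""

    def __init__(self, num_relations, dim):
        super().__init__()
        self.rel = nn.Embedding(num_relations, dim)  # learnable w_q(r)
        self.linear = nn.Linear(dim, dim)            # transform after aggregation

    def forward(self, h, edge_index, edge_type, h0):
        # h, h0: (num_nodes, dim) pair representations for a fixed source s
        # edge_index: (2, num_edges) with rows (x, o); edge_type: (num_edges,)
        x, o = edge_index
        msg = h[x] * self.rel(edge_type)                 # DistMult message
        agg = torch.zeros_like(h).index_add_(0, o, msg)  # sum over in-edges
        return self.linear(agg + h0)                     # include boundary h^(0)
```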
## 3 The Proposed Framework
In this section, we propose a new approach to improve the efficiency of path-
based GNN models. Inspired by two observations in Section 3.1, we propose a simple but effective distance-based pruning strategy. We then introduce a
truncated version of the generalized Bellman-Ford algorithm that achieves the
goal of our proposed pruning strategy. Finally, we describe a neural network
model based on the truncated Bellman-Ford.
### 3.1 Motivation
In this subsection, we discuss the motivation behind our framework design. In
particular, we suggest that the inefficiency of path-based GNNs is mainly due
to two observations: (1) the aggregation of many empty messages and (2) the
proliferation of redundant messages when the number of layers is large. Next,
we detail our observations and how they inspire us to design a more efficient
method. Observation #1: Empty Messages. Most path-based GNNs aggregate empty
messages that do not contain any path information. This has the effect of
increasing the model complexity without any obvious benefit. We provide an
illustrative example. In Figure 1(a), during the first iteration, node $7$
will try to aggregate path information from node $6$. However, all node
representations, outside of the source, are initialized to zero ("empty
messages"). Hence, a non-informative “empty message” will be passed to node
$7$ from node $6$. In fact, in the first iteration, only the $1$-hop neighbors of the source aggregate non-empty messages, which contain information on paths of length 1. Only after two iterations will node $6$ contain path
information from the source. Therefore aggregating any messages before the
third iteration will not lead to any path information for node $7$. However,
both NBFNet Zhu et al. (2021) and $\text{A}^{*}\text{Net}$ Zhu et al. (2022)
will aggregate such messages, leading to increased complexity without any gain
in additional path information. This observation suggests that a node $o$ of
distance $\text{dist}(s,o)$ from the source can only aggregate path
information from iteration $t=\text{dist}(s,o)$ onwards. Observation #2:
Redundant Messages. Due to their design, path-based GNNs with $T$ layers can
only learn representations for nodes within $T$ hops of the source node.
However, since the time complexity of all existing methods is proportional to
the number of layers, learning representations for nodes far from the source
(i.e., distant nodes) can be very inefficient. In particular, as we discussed
in Section 1, this mainly afflicts target nodes closer to the source. Again,
we utilize Figure 1(a) for illustration. In the first two iterations the node
4 aggregates two paths including (source, 4) and (source, 3, 4). These paths
provide significant information between the source and 4. Comparatively, in
the $6$-th iteration node $4$ aggregates paths222Strictly, these walks are not
paths, as they contain repeated nodes and edges. In this paper, we follow the
convention of the path-based GNN papers to loosely call them paths. of length
6, which reach further nodes and return to node $4$. Since these paths already
contain information present in shorter paths, little information is gained by
aggregating them. Our empirical study in Section 4.3 also verifies that
aggregating paths of longer length relative to the target node have little to
no positive effect on performance.
These two observations suggest that the efficiency of path-based GNN methods
is low when there are nodes of diverse distances to the source. We verify this
by analyzing the distance distribution for all test samples on the WN18RR
Dettmers et al. (2018) dataset. For each sample we calculate the shortest path
distance between both nodes and plot the distribution of the distances over
all samples. The results are shown in Figure 2. We note that around $25\%$ of
samples have a shortest distance $\geq 5$. To aggregate information for these
distant nodes, it is necessary to set $T$ to $\geq 5$. In this case, nodes of
larger distance will propagate empty messages for the first few iterations
(Observation 1). Furthermore, about $35\%$ of the samples have a shortest
distance of $1$. Such samples will aggregate redundant messages after a few
iterations (Observation 2). Our Design Goal: The key to improving the
efficiency of path-based GNNs is to modify their aggregation scheme. In
particular, based on the aggregation scheme of path-based GNNs, all target
nodes are aggregating paths with lengths ranging from $1$ to $T$. Such paths
contain many empty and redundant messages. To reduce the aggregation of those
non-informative messages, we propose to customize the aggregations for each
target node. Specifically, for close nodes, we do not aggregate long paths as
they are redundant. For distant nodes, we do not aggregate short paths as they
are empty. As such, we customize the aggregation process for each target node
according to its distance from the source. Based on this intuition, we
reformulate the path formulation, Eq. (1), as follows.
$\mathbf{x}_{q}(s,o)=\bigoplus_{t=\text{dist}(s,o)}^{\text{dist}(s,o)+\delta}\bigoplus_{p\in
P_{s,o}^{t}}\bigotimes_{i=1}^{\lvert p\rvert}w(e_{i}),$ (5)
where $\delta\geq 0$ is an offset. The parameter $\delta$ can be considered as
a form of path pruning as it controls the paths we aggregate relative to the
shortest path distance. For example, when $\delta=0$, it only aggregates those
paths of the shortest distance for all node pairs. Empirical observations in
Section 4.3 validate our use of pruning based on an offset $\delta$.
Due to the high complexity of Eq. (5), it is not practical to directly
calculate it. Hence, based on the generalized Bellman-Ford algorithm Baras and
Theodorakopoulos (2010), we propose a truncated version of the Bellman-Ford
algorithm for calculating Eq. (5) in a more efficient fashion.
Figure 2: Test Distance Distribution for WN18RR
### 3.2 Truncated Bellman-Ford
From our design goal, we are interested in capturing all paths of length
$\text{dist}(s,o)\leq l\leq\text{dist}(s,o)+\delta$. To achieve this goal, for
node $o$, we begin aggregating at iteration $t=\text{dist}(s,o)$ and stop
aggregation after iteration $t=\text{dist}(s,o)+\delta$. This helps avoid
aggregating empty messages before $\text{dist}(s,o)$-th iteration and
redundant messages after $\text{dist}(s,o)+\delta$ iterations. However, during
the iterations between $\text{dist}(s,o)$ and $\text{dist}(s,o)+\delta$, there
are still potential empty messages. For example, any node $v$ with the
shortest distance to source larger than $\text{dist}(s,o)+\delta$ always
contains empty messages during these iterations. Hence, to further avoid
aggregating many empty messages, we only allow aggregation from a subset of
the neighboring nodes of $o$. More formally, we formulate the above intuition
into the following constrained edge set $\mathcal{C}(s,o,t)$ through which
node $o$ aggregates information at iteration $t$.
$\mathcal{C}(s,o,t)=\begin{cases}\emptyset,&\text{if }t<\text{dist}(s,o)\text{ or }t>\text{dist}(s,o)+\delta,\\\ \\{(v,r,o)\in\mathcal{E}(o)\>|\>\text{dist}(s,v)<\text{dist}(s,o)+\delta\\},&\text{otherwise.}\end{cases}$ (6)
Based on this constrained set of edges for node $o$, we update the generalized Bellman-Ford algorithm (Eq. 2) as follows, where $\mathcal{C}=\mathcal{C}(s,o,t)$:
$\mathbf{x}_{q}^{(t)}(s,o)=\bigg{(}\bigoplus_{(v,r,o)\in\mathcal{C}}\mathbf{x}_{q}^{(t-1)}(s,v)\otimes\mathbf{w}_{q}(v,r,o)\bigg{)}\oplus\mathbf{x}_{q}^{(0)}(s,o).$ (7)
The following theorem shows that the aggregation scheme proposed in Eq. (3.2)
results in aggregation of the correct paths as described in Eq. (5).
###### Theorem 1.
Given a source node $s$, query $q$, and target node $o$, the final
representation, $\mathbf{x}_{q}^{F}(s,o)$ only aggregates all path
representations whose path length is between $\text{dist}(s,o)$ and
$\text{dist}(s,o)+\delta$ for all $o\in V$. It therefore contains all
information present in Eq. (5) such that,
$\mathbf{x}_{q}^{F}(s,o)=\bigoplus_{t=\text{dist}(s,o)}^{\text{dist}(s,o)+\delta}\bigoplus_{p\in
P_{s,o}^{t}}\bigotimes_{i=1}^{\lvert p\rvert}w(e_{i}).$ (8)
The detailed proof of Theorem 1 is provided in Appendix A. This design has the
following advantages. (1) We don’t begin aggregating messages until layer
$t=\text{dist}(s,o)$. This helps avoid the aggregation of many empty messages
for nodes far from the source. (2) We stop aggregating messages at layer
$t=\text{dist}(s,o)+\delta$. This ensures that for close nodes we don’t
aggregate many redundant messages. Furthermore, it ensures that we will always
aggregate paths of $\delta+1$ different lengths for all target nodes
regardless of their distance from the source. (3) In Section B.2, we
demonstrate that the complexity of this design is independent of the number of
layers, allowing for deep propagation.
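A minimal sketch of this truncated propagation is given below. It reuses the scalar (sum, product) instantiation from Section 2.3, computes the distance constraints with a breadth-first search, and is meant only to illustrate Eqs. (6) and (7); the data structures are assumptions.

```python
from collections import defaultdict, deque

def bfs_dist(edges, source, num_nodes):
    """Shortest-path distances from the source, assuming unit edge weights."""
    adj = defaultdict(list)
    for x, _, o in edges:
        adj[x].append(o)
    dist = [float("inf")] * num_nodes
    dist[source] = 0
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if dist[v] == float("inf"):
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def truncated_bellman_ford(edges, w, source, num_nodes, delta):
    """Scalar instance of Eq. (7): node o updates only while
    dist(s,o) <= t <= dist(s,o)+delta, using the allowed edges of Eq. (6)."""
    dist = bfs_dist(edges, source, num_nodes)
    in_edges = defaultdict(list)
    for x, r, o in edges:
        in_edges[o].append((x, r, o))

    x_val = [1.0 if o == source else 0.0 for o in range(num_nodes)]
    T = max(d for d in dist if d != float("inf")) + delta
    for t in range(1, T + 1):
        new_x = list(x_val)  # nodes outside their window keep their value
        for o in range(num_nodes):
            if not (dist[o] <= t <= dist[o] + delta):
                continue  # node constraint: skip empty/redundant iterations
            allowed = [(v, r, o2) for v, r, o2 in in_edges[o]
                       if dist[v] < dist[o] + delta]  # edge constraint
            boundary = 1.0 if o == source else 0.0
            new_x[o] = sum(x_val[v] * w[(v, r, o2)]
                           for v, r, o2 in allowed) + boundary
        x_val = new_x
    return x_val
```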
An Illustrative Example. We give an example of the effect of the constraints on
propagation in Figure 1 where $s=\text{source}$. Figure 1(a) shows the
involved nodes and edges over three iterations when $\delta=1$. We observe
that only a portion of the nodes and edges are involved at any one iteration.
For example, at iteration 1 only the 1-hop neighbors and the edges connecting
them to the source are involved. This is because they are the only nodes and
edges able to receive any path information at that stage. Figure 1(b) details
the update status of nodes by distance from the source node. We note how, as the iteration increases, the set of updated nodes shifts to the right in groups of two. Furthermore, since we only run three iterations, the 4+ hop neighbors never update, as there is no available path information for them until iteration 4.
### 3.3 Degree Messages
An effect of pruning paths, especially with low $\delta$, is that it can lead
to very few messages being aggregated. This is especially true for smaller or
sparser graphs. One consequence of few messages being aggregated is that it
can make it difficult for a node to discern the properties of its neighborhood
(e.g. degree). We give an example of node 4 in Figure 1. For each of the first
2 iterations, it only aggregates messages from 2 of its 4 neighbors. As such, it never aggregates messages from all its neighbors at the same iteration. This can lead to a failure of node $4$ to properly discern its degree, as the
number of non-empty messages in each iteration is only a portion of the
overall degree. Since the degree is known to be an important factor in link
prediction Newman (2001); Adamic and Adar (2003), we want to preserve the
degree information for all nodes.
In order to preserve the degree information for each node, we consider
encoding the degree via the use of pseudo messages. Specifically, we want to
add enough messages such that the total number of messages aggregated for a
node $o$ is equivalent to its degree. We refer to such messages as degree
messages. Going back to our example in Figure 1, for node $4$ at iteration $1$
and $2$ we would add $2$ degree messages so that the total number of messages
is 4. Formally, we denote the degree of a node $o$ as $b_{o}$. The number of
messages to add at iteration $t$ is given by $\rho_{o}=b_{o}-\lvert
C(s,o,t)\rvert$.
For the value of the messages, we learn a separate embedding denoted as
$\mathbf{x}_{\text{deg}}^{(t)}$ that is the same across all nodes. Since the
value of each message is the same we can avoid explicitly aggregating each
degree message individually. Instead, we just aggregate one message that is
equal to the number of degree messages multiplied by the degree embedding,
$\mathbf{x}_{\text{deg}}^{(t)}(s,o)=\rho_{o}\cdot\mathbf{x}_{\text{deg}}^{(t)},$
(9)
where $\mathbf{x}_{\text{deg}}^{(t)}(s,o)$ is the value of the degree message
for node $o$ at iteration $t$. This edge is then added to the set of messages
to be aggregated, $C(s,o,t)$. Since this is equivalent to computing and
aggregating only one edge, it has no effect on the model complexity.
Experimental results in Section 4.4 validate the effectiveness of degree
messages.
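A minimal sketch of this computation is given below, assuming sum aggregation for simplicity; the tensor shapes are assumptions.

```python
import torch

def add_degree_message(agg_sum, num_msgs, degree, deg_embed):
    """Eq. (9): add rho_o = b_o - |C(s,o,t)| copies of the shared degree
    embedding as a single scaled pseudo message per node.

    agg_sum  : (num_nodes, dim) sum of the real messages for each node
    num_msgs : (num_nodes,) number of real messages |C(s,o,t)| per node
    degree   : (num_nodes,) node degrees b_o
    deg_embed: (dim,) learned degree embedding x_deg^(t), shared by all nodes
    """
    rho = (degree - num_msgs).clamp(min=0).unsqueeze(1).float()
    return agg_sum + rho * deg_embed  # one scaled message, no per-edge cost
```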
### 3.4 GNN Formulation
We follow similar conventions to NBFNet when converting Eq. (6) and Eq. (3.2)
to a GNN. We denote the embedding of a source node $s$ and arbitrary target
node $o$ as $\mathbf{x}_{q}(s,o)$. We further represent the indicator query
embeddings as $\mathbf{x}_{q}$ and the layer-wise relation embeddings as
$\mathbf{x}_{r}^{(t)}$.
We utilize the INDICATOR function described in Section 2.4, PNA Corso et al.
(2020) for the AGGREGATE function, and DistMult Yang et al. (2015) for the MSG
function. The probability of a link existing between a source-target pair is
determined via a score function $f$. Both the final representation of the pair
and the query embedding are given as input. The output of $f$ is then passed
to a sigmoid to produce a probability,
$p(s,o)=\sigma\left(f\left(\mathbf{x}_{q}^{\text{F}}(s,o),\,\mathbf{x}_{q}\right)\right),$
(10)
where $\mathbf{x}_{q}^{\text{F}}(s,o)$ is the final pair representation. The
full algorithm is detailed in Appendix B.1. We run a total of $T$ layers. We further show in Appendix B.2 that the time complexity is independent of the number of layers. This enables TAGNet to propagate for more layers than
existing path-based GNNs.
Furthermore, due to its general design, TAGNet can also be integrated with
other efficiency-minded methods like $\text{A}^{*}$Net. This is described in
more detail in Appendix B.3. Extensive experiments in Sections 4.1 and 4.2
also demonstrate that combining both methods can significantly reduce the
number of messages propagated by $\text{A}^{*}$Net without sacrificing
performance.
### 3.5 Target-Specific $\delta$
A drawback of our current design is that we assume a single offset $\delta$
for all possible node pairs. However, for some pairs we may want to propagate for more or fewer iterations. For example, in Figure 1 we may only want
to consider $\delta=0$ for the target node $2$ due to the limited number of
paths connecting it to the source. However for node $4$, which is concentrated
in a denser portion of the subgraph, we may want to consider a higher value of
$\delta$ such as 1 or 2 to capture more path information. We next detail our
method for achieving this.
Table 1: Transductive results. Best results are in bold and the 2nd best underlined.
Method Type | Method | FB15k-237 MRR | Hits@1 | Hits@10 | WN18RR MRR | Hits@1 | Hits@10
---|---|---|---|---|---|---|---
Embeddings | TransE | 0.294 | - | 0.465 | 0.226 | - | 0.501
| DistMult | 0.241 | 0.155 | 0.419 | 0.43 | 0.39 | 0.49
| ComplEx | 0.247 | 0.158 | 0.428 | 0.44 | 0.41 | 0.51
GNNs | R-GCN | 0.273 | 0.182 | 0.456 | 0.402 | 0.345 | 0.494
| CompGCN | 0.355 | 0.264 | 0.535 | 0.479 | 0.443 | 0.546
Path-Based | DRUM | 0.343 | 0.255 | 0.516 | 0.486 | 0.425 | 0.586
| RED-GNN | 0.374 | 0.283 | 0.558 | 0.533 | 0.485 | 0.624
| AdaProp | 0.392 | 0.309 | 0.555 | 0.553 | 0.502 | 0.652
| NBFNet | 0.415 | 0.321 | 0.599 | 0.551 | 0.497 | 0.666
| $\text{A}^{*}$Net | 0.414 | 0.324 | 0.592 | 0.547 | 0.490 | 0.658
TAGNet | + $\text{A}^{*}$Net | 0.409 | 0.323 | 0.577 | 0.555 | 0.502 | 0.657
| Fixed $\delta$ | 0.421 | 0.328 | 0.602 | 0.562 | 0.509 | 0.667
| Specific $\delta$ | 0.417 | 0.328 | 0.592 | 0.565 | 0.513 | 0.667
Table 2: Inductive results (evaluated with Hits@10). Our results are averaged over 5 runs.
Method | FB15k-237 v1 | v2 | v3 | v4 | WN18RR v1 | v2 | v3 | v4
---|---|---|---|---|---|---|---|---
NeuralLP | 0.468 | 0.586 | 0.571 | 0.593 | 0.772 | 0.749 | 0.476 | 0.706
DRUM | 0.474 | 0.595 | 0.571 | 0.593 | 0.777 | 0.747 | 0.477 | 0.702
GraIL | 0.429 | 0.424 | 0.424 | 0.389 | 0.760 | 0.776 | 0.409 | 0.687
RED-GNN | 0.483 | 0.629 | 0.603 | 0.621 | 0.799 | 0.780 | 0.524 | 0.721
AdaProp | 0.470 | 0.651 | 0.620 | 0.614 | 0.798 | 0.836 | 0.582 | 0.732
NBFNet | 0.607 | 0.704 | 0.667 | 0.668 | 0.826 | 0.798 | 0.568 | 0.694
$\text{A}^{*}\text{Net}$ | 0.535 | 0.638 | 0.610 | 0.630 | 0.810 | 0.803 | 0.544 | 0.743
TAGNet + $\text{A}^{*}$Net | 0.541 | 0.646 | 0.604 | 0.623 | 0.813 | 0.805 | 0.535 | 0.745
TAGNet (fixed $\delta$) | 0.596 | 0.700 | 0.677 | 0.666 | 0.816 | 0.796 | 0.534 | 0.734
TAGNet (specific $\delta$) | 0.596 | 0.698 | 0.675 | 0.661 | 0.818 | 0.803 | 0.544 | 0.737
#### 3.5.1 Target-Specific $\delta$ via Attention
A target-specific $\delta$ can be attained by realizing the connection between
the hidden representations and the value of $\delta$. Let’s denote the value
of the hyperparameter $\delta$ as $\hat{\delta}$. For a source-target node
pair $(s,o)$, we only aggregate paths from length $\text{dist}(s,o)$ to
$\text{dist}(s,o)+\hat{\delta}$. At iteration $t=\text{dist}(s,o)$ we
aggregate paths of length $\text{dist}(s,o)$ and at iteration
$t=\text{dist}(s,o)+1$ only those paths of length $\text{dist}(s,o)+1$, and so
on until $t=\text{dist}(s,o)+\hat{\delta}$. The set of hidden representations
for a node pair is as follows where for convenience we represent
$\mathbf{x}_{q}(s,o)$ as $\mathbf{x}_{(s,o)}$:
$\text{Hiddens}(s,o)=\left[\mathbf{x}_{(s,o)}^{\text{dist}(s,o)},\cdots,\mathbf{x}_{(s,o)}^{(\text{dist}(s,o)+\hat{\delta})}\right].$
(11)
The first hidden representation only contains paths of shortest length and
therefore corresponds to $\delta=0$. Since the paths accumulate over hidden
representations via a self-loop, $\mathbf{x}_{(s,o)}^{(\text{dist}(s,o)+1)}$
contains all paths of length $\text{dist}(s,o)$ and $\text{dist}(s,o)+1$,
corresponding to $\delta=1$. As such, the final hidden representation is
equivalent to $\delta=\hat{\delta}$. Therefore, choosing a target-specific
$\delta$ is achieved by selecting one of the hidden representations as the
final representation.
We utilize attention to determine which value of $\delta$ is best for a
specific target node. This is formulated as the following:
$\mathbf{x}_{(s,o)}^{\text{F}}=\sum_{\delta=0}^{\hat{\delta}}\alpha_{(s,o)}^{\delta}\mathbf{x}_{(s,o)}^{(\text{dist}(s,o)+\delta)},$
(12)
where $\alpha_{(s,o)}^{\delta}$ is the corresponding attention weight for the
hidden representation $\mathbf{x}_{(s,o)}^{(\text{dist}(s,o)+\delta)}$. For
each possible value of $\delta$, $\alpha_{(s,o)}^{\delta}$ is given by:
$\tilde{\alpha}_{(s,o)}^{\delta}=g\left(\mathbf{x}_{(s,o)}^{(\text{dist}(s,o)+\delta)},\mathbf{x}_{q}\right),\qquad\alpha_{(s,o)}^{\delta}=\text{Softmax}(\tilde{\alpha}_{(s,o)}^{\delta}).$
We model $g$ as an MLP that takes both the hidden representation and the query
embedding as input. Taking inspiration from $\text{A}^{*}\text{Net}$ Zhu et
al. (2022), we conjecture that a well-learned score function can help
determine which representations are better than others. As such, we further
consider modeling $g$ as its own function or having it share parameters with
the score function $f$, Eq. (10). Lastly, we show in Appendix B.2 that the
time complexity is unchanged when using a target-specific $\delta$.
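A minimal sketch of this attention over the $\hat{\delta}+1$ candidate representations is given below; the architecture of the scorer $g$ and the tensor shapes are assumptions.

```python
import torch
import torch.nn as nn

class DeltaAttention(nn.Module):
    """Select a target-specific delta by attending over the hidden
    representations x^{(dist)}, ..., x^{(dist+delta_hat)} (Eqs. (11)-(12))."""

    def __init__(self, dim):
        super().__init__()
        # g scores a (hidden, query) pair; it may instead share parameters
        # with the score function f, as discussed above
        self.g = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                               nn.Linear(dim, 1))

    def forward(self, hiddens, query):
        # hiddens: (num_nodes, delta_hat + 1, dim); query: (dim,)
        q = query.expand(hiddens.shape[0], hiddens.shape[1], -1)
        scores = self.g(torch.cat([hiddens, q], dim=-1)).squeeze(-1)
        alpha = torch.softmax(scores, dim=-1)  # Eq. (12) attention weights
        return (alpha.unsqueeze(-1) * hiddens).sum(dim=1)  # final representation
```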
## 4 Experiment
In this section, we evaluate the effectiveness of our proposed framework on
KGC under both the transductive and inductive settings. We also empirically
analyze the efficiency and conduct ablation studies on each component. The
experimental details are listed in Appendix C. We note that for a fair
comparison between path-based GNNs, we run each model using 6 layers and a
hidden dimension of 32 as is done in both Zhu et al. (2021) and Zhu et al.
(2022). Please see Appendix C.2 for more details.
### 4.1 Effectiveness of TAGNet
In this subsection, we present the results of TAGNet compared with baselines
on both transductive and inductive settings. We further detail the results
when combining TAGNet with $\text{A}^{*}$Net. Transductive Setting: The
results on the transductive setting are shown in Table 1. We observe that
TAGNet achieves strong performance with just a fixed $\delta$. In particular,
it outperforms $\text{A}^{*}\text{Net}$ and AdaProp on most metrics. Also, compared to NBFNet, which does not utilize pruning, TAGNet achieves comparable or even stronger performance. This indicates that the proposed pruning strategy mostly removes redundant aggregations that do not impair the model's effectiveness. Inductive Setting: Table 2 shows the results on the inductive
setting. TAGNet achieves strong performance on both datasets. In particular,
it achieves comparable performance to the non-pruning version of NBFNet.
Furthermore, TAGNet significantly outperforms $\text{A}^{*}\text{Net}$ and
AdaProp on the FB15k-237 splits, demonstrating the advantage of the proposed
pruning strategy. TAGNet + $\text{A}^{*}$Net: We further test combining the
pruning strategy of both TAGNet and $\text{A}^{*}$Net together (see Appendix
B.3 for more details). Compared to $\text{A}^{*}$Net, we observe that
TAGNet+$\text{A}^{*}$Net achieves comparable if not better performance under
all settings despite aggregating much fewer messages (see subsection 4.2).
This suggests that the pruning strategy in $\text{A}^{*}$Net fails to prune many irrelevant paths, allowing TAGNet to complement it.
### 4.2 Efficiency of TAGNet
In this subsection, we empirically evaluate the efficiency of our model
against NBFNet. Specifically, we compare the mean number of messages
aggregated per sample during training.
Figure 3 shows the % decrease in the number of messages of TAGNet as compared
to NBFNet. All models are fit with 6 layers. We observe two trends. The first
is that both FB15k-237 datasets follow a similar relationship that is close to
what’s expected of the worst-case complexity detailed in Appendix B.2. On the
other hand, the WN18RR datasets pass far fewer messages, hovering above $90\%$ for all $\delta$. This is likely because WN18RR is a very sparse graph, which gives TAGNet plenty of opportunities to prune paths.
Figure 3: % Decrease in NBFNet Messages
We further compare the efficiency of just $\text{A}^{*}$Net and
$\text{A}^{*}$Net + TAGNet. As before, we calculate the total number of
messages passed for both methods. We fix $\delta=2$. Table 3 shows the % decrease in the number of messages when utilizing both techniques compared to just $\text{A}^{*}$Net. We observe a large reduction in both the inductive and transductive settings. Since the performance of $\text{A}^{*}$Net + TAGNet is on par with just $\text{A}^{*}$Net, this suggests that $\text{A}^{*}$Net fails to prune many unneeded messages that do not improve performance. Furthermore, we find that the
reduction in the number of messages becomes more pronounced with more layers,
suggesting that TAGNet is even more useful when deep propagation is necessary.
Table 3: % Decrease in # messages for $\text{A}^{*}$Net vs. $\text{A}^{*}$Net + TAGNet.
Dataset | 6 Layers | 7 Layers | 8 Layers
---|---|---|---
FB15k-237 | 39% | 51% | 59%
FB15k-237 v1 | 30% | 44% | 66%
WN18RR | 10% | 17% | 26%
WN18RR v1 | 25% | 37% | 46%
### 4.3 Effect of $\delta$
In this subsection, we evaluate the effect of the offset $\delta$ on TAGNet
test performance (w/o the target-specific setting). We fix the number of
layers at $6$ and vary $\delta$ from 0 to 5. We report results for both the
transductive and inductive settings in Figures 4 and 5, respectively. For the
inductive setting, we chose version v1 of both datasets as the representative
datasets. For both transductive datasets, we find that the performance
plateaus at $\delta=2$. A similar trend is observed for FB15k-237 v1. Interestingly, for WN18RR v1, the performance is constant when varying $\delta$. This suggests that for some datasets almost all of the important information is concentrated in paths of the shortest length.
Figure 4: Performance varying $\delta$ on Transductive setting. Figure 5:
Performance varying $\delta$ on Inductive setting.
### 4.4 Effect of Degree Messages
We demonstrate the effect of the degree messages described in Section 3.3.
Table 4 shows the performance of TAGNet when trained with and without degree
messages. We report the performance on all of the inductive splits for both
FB15k-237 and WN18RR. Interestingly, we observe that while there is a
consistent gain on FB15k-237, it often hurts performance on WN18RR. This may
imply that preserving the degree information of each node is more important on
FB15k-237 than WN18RR.
Table 4: Effect of degree messages on the inductive splits.
Dataset | Split | w/o Msgs | with Msgs
---|---|---|---
FB15k-237 | V1 | 0.594 | 0.596
| V2 | 0.684 | 0.698
| V3 | 0.653 | 0.675
| V4 | 0.648 | 0.661
WN18RR | V1 | 0.815 | 0.818
| V2 | 0.803 | 0.781
| V3 | 0.544 | 0.465
| V4 | 0.737 | 0.718
## 5 Related Work
We give a brief overview of different types of KGC methods. (1) Embedding-
Based Methods: Such methods are concerned with modeling the interactions of
entity and relation embeddings. TransE Bordes et al. (2013) models each fact
as translation in the embedding space while DistMult Yang et al. (2015) scores
each fact via a bilinear diagonal function. ComplEx Trouillon et al. (2016)
extends DistMult by further modeling the embeddings in the complex space.
Lastly, Nodepiece Galkin et al. (2021) attempts to improve the efficiency of
embedding-based KGC methods by representing each entity embedding as a
combination of a smaller set of subword embeddings. Since this method concerns
embedding-based techniques, it is orthogonal to our work. (2) GNN-Based
Methods: GNN methods extend traditional GNNs by further considering the
relational information. CompGCN Vashishth et al. (2019) encodes each message
as a combination of neighboring entity-relation pairs via the use of
compositional function. RGCN Schlichtkrull et al. (2018) instead considers a
relation-specific transformation matrix to integrate the relation information.
(3) Path-Based Methods: Path-based methods attempt to leverage the path
information connecting two entities to perform KGC. NeuralLP Yang et al.
(2017) and DRUM Sadeghian et al. (2019) learn to weight different paths by
utilizing logical rules. More recently, NBFNet Zhu et al. (2021) considers
path information by learning a parameterized version of the Bellman-Ford
algorithm. A similar framework, RED-GNN Zhang and Yao (2022) also attempts to
take advantage of dynamic programming to aggregate path information. Both
$\text{A}^{*}\text{Net}$ Zhu et al. (2022) and AdaProp Zhang et al. (2023) attempt to improve upon the efficiency of the previous methods by learning which nodes to propagate to.
## 6 Conclusion
In this paper we identify two intrinsic limitations of path-based GNNs that
affect the efficiency and representation quality. We tackle these issues by
introducing a new method, TAGNet, which is able to efficiently propagate path
information. This is realized by only aggregating paths in a fixed window for
each source-target pair. We demonstrate that the complexity of TAGNet is
independent of the number of layers. For future work, we plan on exploring
methods to capture path information without having to perform a separate round
of propagation for every individual source node.
## Limitations
Our work has a couple of limitations. One is that our study is limited to knowledge graph completion, which excludes non-relational link prediction tasks. Future work can ascertain the effectiveness of TAGNet on other types of
link prediction. Second, all path-based GNNs still require propagating from
each source-relation pair individually. This can pose a significant bottleneck
when many samples need to be tested. We plan on exploring methods to capture
path information without having to propagate for each individual pair.
## References
* Adamic and Adar (2003) Lada A Adamic and Eytan Adar. 2003. Friends and neighbors on the web. _Social networks_ , 25(3):211–230.
* Baras and Theodorakopoulos (2010) John S Baras and George Theodorakopoulos. 2010. Path problems in networks. _Synthesis Lectures on Communication Networks_ , 3(1):1–77.
* Bordes et al. (2013) Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. _Advances in neural information processing systems_ , 26.
* Corso et al. (2020) Gabriele Corso, Luca Cavalleri, Dominique Beaini, Pietro Liò, and Petar Veličković. 2020. Principal neighbourhood aggregation for graph nets. _Advances in Neural Information Processing Systems_ , 33:13260–13271.
* Dettmers et al. (2018) Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. 2018. Convolutional 2d knowledge graph embeddings. In _Proceedings of the AAAI conference on artificial intelligence_ , volume 32.
* Galkin et al. (2021) Mikhail Galkin, Etienne Denis, Jiapeng Wu, and William L Hamilton. 2021. Nodepiece: Compositional and parameter-efficient representations of large knowledge graphs. In _International Conference on Learning Representations_.
* Katz (1953) Leo Katz. 1953. A new status index derived from sociometric analysis. _Psychometrika_ , 18(1):39–43.
* Kingma and Ba (2014) Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_.
* Mai et al. (2021) Sijie Mai, Shuangjia Zheng, Yuedong Yang, and Haifeng Hu. 2021. Communicative message passing for inductive relation reasoning. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 35, pages 4294–4302.
* Newman (2001) Mark EJ Newman. 2001. Clustering and preferential attachment in growing networks. _Physical review E_ , 64(2):025102.
* Nguyen et al. (2018) Tu Dinh Nguyen, Dat Quoc Nguyen, Dinh Phung, et al. 2018. A novel embedding model for knowledge base completion based on convolutional neural network. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)_ , pages 327–333.
* Page et al. (1999) Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1999. The pagerank citation ranking: Bringing order to the web. Technical report, Stanford InfoLab.
* Paszke et al. (2019) Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, _Advances in Neural Information Processing Systems 32_ , pages 8024–8035. Curran Associates, Inc.
* Sadeghian et al. (2019) Ali Sadeghian, Mohammadreza Armandpour, Patrick Ding, and Daisy Zhe Wang. 2019. Drum: End-to-end differentiable rule mining on knowledge graphs. _Advances in Neural Information Processing Systems_ , 32.
* Safavi and Koutra (2020) Tara Safavi and Danai Koutra. 2020. Codex: A comprehensive knowledge graph completion benchmark. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 8328–8350.
* Schlichtkrull et al. (2018) Michael Schlichtkrull, Thomas N Kipf, Peter Bloem, Rianne Van Den Berg, Ivan Titov, and Max Welling. 2018. Modeling relational data with graph convolutional networks. In _European semantic web conference_ , pages 593–607. Springer.
* Sun et al. (2019) Zhiqing Sun, Zhi-Hong Deng, Jian-Yun Nie, and Jian Tang. 2019. Rotate: Knowledge graph embedding by relational rotation in complex space. In _International Conference on Learning Representations_.
* Teru et al. (2020) Komal Teru, Etienne Denis, and Will Hamilton. 2020. Inductive relation prediction by subgraph reasoning. In _International Conference on Machine Learning_ , pages 9448–9457. PMLR.
* Toutanova and Chen (2015) Kristina Toutanova and Danqi Chen. 2015. Observed versus latent features for knowledge base and text inference. In _Proceedings of the 3rd workshop on continuous vector space models and their compositionality_ , pages 57–66.
* Trouillon et al. (2016) Théo Trouillon, Johannes Welbl, Sebastian Riedel, Éric Gaussier, and Guillaume Bouchard. 2016. Complex embeddings for simple link prediction. In _International conference on machine learning_ , pages 2071–2080. PMLR.
* Vashishth et al. (2019) Shikhar Vashishth, Soumya Sanyal, Vikram Nitin, and Partha Talukdar. 2019. Composition-based multi-relational graph convolutional networks. In _International Conference on Learning Representations_.
* Xiong et al. (2017) Wenhan Xiong, Thien Hoang, and William Yang Wang. 2017. Deeppath: A reinforcement learning method for knowledge graph reasoning. In _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing_ , pages 564–573.
* Yang et al. (2015) Bishan Yang, Scott Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding entities and relations for learning and inference in knowledge bases. In _Proceedings of the International Conference on Learning Representations (ICLR) 2015_.
* Yang et al. (2017) Fan Yang, Zhilin Yang, and William W Cohen. 2017. Differentiable learning of logical rules for knowledge base reasoning. _Advances in neural information processing systems_ , 30.
* Zhang and Yao (2022) Yongqi Zhang and Quanming Yao. 2022. Knowledge graph reasoning with relational digraph. In _Proceedings of the ACM Web Conference 2022_ , pages 912–924.
* Zhang et al. (2023) Yongqi Zhang, Zhanke Zhou, Quanming Yao, Xiaowen Chu, and Bo Han. 2023. Adaprop: Learning adaptive propagation for graph neural network based knowledge graph reasoning. In _KDD_.
* Zhu et al. (2022) Zhaocheng Zhu, Xinyu Yuan, Louis-Pascal Xhonneux, Ming Zhang, Maxime Gazeau, and Jian Tang. 2022. Learning to efficiently propagate for reasoning on knowledge graphs. _arXiv preprint arXiv:2206.04798_.
* Zhu et al. (2021) Zhaocheng Zhu, Zuobai Zhang, Louis-Pascal Xhonneux, and Jian Tang. 2021. Neural bellman-ford networks: A general graph neural network framework for link prediction. _Advances in Neural Information Processing Systems_ , 34:29476–29490.
## Appendix A Proof Details of Theorem 1
We prove Theorem 1 via induction on the path length $l$. We denote all nodes a
distance $l$ from the source node $s$ as $V_{s}^{l}$. The path length offset
is represented by $\delta$. Lastly, for convenience we split the constraints
in Eq. (6) into two: a node constraint and an edge constraint. We formulate it
as the following, where $\text{NodeC}_{\delta}(s,o,t)$ represents the node constraint and $\text{EdgeC}_{\delta}(s,o,u)$ the edge constraint:
$\text{NodeC}_{\delta}(s,o,t):\;t-\delta\leq\text{dist}(s,o)\leq t,$ (13)
$\text{EdgeC}_{\delta}(s,o,u):\;\text{dist}(s,u)<\text{dist}(s,o)+\delta.$ (14)
Base Case ($l$=1): We want to show for all $l=1$ hop neighbors of $s$, $o\in
V_{s}^{1}$, their final representation $x_{q}^{F}(s,o)$ aggregates all path
representations in the range $[0,1+\delta]$. To be true, the embedding
$x_{q}^{F}(s,o)$ must satisfy two conditions:
1. 1.
Condition 1: The final embedding $x_{q}^{F}(s,o)$, contains all paths
representations of length less than or equal to $1+\delta$ between $s$ and
$o$.
2. 2.
Condition 2: The final embedding $x_{q}^{F}(s,o)$ contains no other path
information.
Condition 1: For it to be true, a node $o\in V_{s}^{1}$ must aggregate all
edges of the form $(u,r,o)$ where $u$ belongs to the set:
$U_{s,o}^{(0,\delta)}=\\{u\>\>|\>\>(u,r,o)\in\mathcal{E}_{o},\>u\in\\{V_{s}^{0},V_{s}^{1},\cdots,V_{s}^{\delta}\\}\\},$
(15)
where $\mathcal{E}_{o}$ represents all edges where $o$ is the target node.
It’s intuitive that all paths starting at $s$ of length $\in[0,\delta+1]$ must
pass through the nodes in the set $U_{s,o}^{(0,\delta)}$ in order to reach
$o$. We prove in Theorem 2 that $o$ will aggregate all nodes in the set
$U_{s,o}^{(0,\delta)}$.
Condition 2: We want to demonstrate that the representation of node $o$
aggregates no other path information such that
$x_{q}^{(\delta+1)}(s,o)=x_{q}^{F}(s,o)$. This is true as per the node
constraint (Eq. (13)) the representation of a node $o$ stops updating after
iteration $k=1+\delta$.
Inductive Step: We assume that for all m-hop neighbors of $s$, $o\in
V_{s}^{m}$, their final representation $x_{q}^{F}(s,o)$ aggregates all path
representations of length between $[m,m+\delta]$. This is achieved by a node
$o$ aggregating all edges $(u,r,o)$ where $u$ belongs to the set:
$U_{s,o}^{(m-1,m-1+\delta)}=\\{u\>\>|\>\>(u,r,o)\in\mathcal{E}_{o},\>u\in\\{V_{s}^{m-1},\cdots,V_{s}^{m-1+\delta}\\}\\},$
(16)
as all such paths must pass through these nodes. We note that this implies
that:
* •
The set of nodes $U_{s,o}^{(m-1,m-1+\delta)}$ must themselves only contain all
path representations of lengths $[m-1,m-1+\delta]$ when aggregated by $o\in
V_{s}^{m}$.
* •
The set of nodes $U_{s,o}^{(m-1,m-1+\delta)}$ must obtain such path
information by iteration $k=m-1+\delta$. This must be true as per the node
constraint $o$ will last update at iteration $k=m+\delta$.
We now want to show for all $(m+1)$ hop neighbors of $s$, $o\in V_{s}^{m+1}$,
their final representation $x_{q}^{F}(s,o)$ aggregates all path representations of length between $[m+1,m+1+\delta]$. This requires showing that $x_{q}^{F}(s,o)$ (1) contains all path representations of length between
$[m+1,m+1+\delta]$ between $s$ and $o$ and (2) it contains no other path
information.
Condition 1: For $o\in V_{s}^{m+1}$ to aggregate all paths of length between
$m+1$ and $m+1+\delta$, their representation must aggregate all edges
$(u,r,o)$ where $u$ belongs to the set:
$U_{s,o}^{(m,m+\delta)}=\\{u\>\>|\>\>(u,r,o)\in\mathcal{E}_{o},\>u\in\\{V_{s}^{m},\cdots,V_{s}^{m+\delta}\\}\\}.$
(17)
Such edges are aggregated by $o\in V_{s}^{m+1}$ via the edge constraint.
Furthermore,
* •
From the inductive step we know that nodes
$U_{s,o}^{(m-1,m-1+\delta)}=U_{s,o}^{(m,m+\delta)}\setminus V_{s}^{m+\delta}$
have already aggregated all path representations of lengths $[m-1,m-1+\delta]$
by iteration $k=m+\delta$.
* •
From both constraints we know that $\forall u\in V_{s}^{m+\delta}$ will only
contain all path representations of length $m+\delta$ (i.e. shortest path) by
iteration $k=m+\delta$.
As such, after aggregating the nodes in the set $U_{s,o}^{(m,m+\delta)}$, the representation $x_{q}^{(m+\delta)}(s,u)$ will contain all path representations of length between $m$ and $m+\delta$. Per the node constraint, $\forall
o\in V_{s}^{m+1}$ last update at iteration $k=m+1+\delta$. Therefore by
aggregating $U_{s,o}^{(m,m+\delta)}$ at iteration $k=m+1+\delta$, the
representation $x_{q}^{(m+1+\delta)}(s,o)$ will contain all path
representations between length $m+1$ and $m+1+\delta$.
Condition 2: Lastly, we want to show that $\forall o\in V_{s}^{m+1}$ the final
representation $x_{q}^{F}(s,o)$ will only contain path representations of
length $m+1$ to $m+1+\delta$. This is true as per the node constraint the
representation of a node $o\in V_{s}^{m+1}$ last updates at iteration
$k=m+1+\delta$. Therefore $x_{q}^{(m+1+\delta)}(s,o)=x_{q}^{F}(s,o)$. As such,
the final representation only aggregates paths of length between $m+1$ and
$m+1+\delta$.
###### Theorem 2.
We are given a source node $s$, query $q$, and target node $o$ which is a
1-hop neighbor of $s$. The final representation of a 1-hop neighbor $o$,
$\mathbf{x}_{q}^{F}(s,o)$, will at minimum aggregate all path representations
whose path length is between $1$ and $1+\delta$. It therefore at least
contains the path information,
$\eta=\bigoplus_{l=1}^{1+\delta}\bigoplus_{p\in
P_{s,o}^{l}}\bigotimes_{i=1}^{\lvert p\rvert}w(e_{i}).$ (18)
This is equivalent to stating that $o$ will aggregate all nodes in the
following set by iteration $k=1+\delta$,
$U_{s,o}^{(0,\delta)}=\\{u\>\>|\>\>(u,r,o)\in\mathcal{E}_{o},\>u\in\\{V_{s}^{0},V_{s}^{1},\cdots,V_{s}^{\delta}\\}\\}.$
(19)
We prove this theorem via induction on the layer iteration $k$ in Algorithm 1 (denoted there as $l$).
Base Case ($k$=1): We want to first show that after one iteration, the
representation of a 1-hop neighbor $x_{q}^{1}(s,o)$ aggregates all paths of
length 1 from the source. This is achieved by $x_{q}^{1}(s,o)$ aggregating all
edges connecting $o$ to $s$, i.e. $(s,r,o)$. Such edges are aggregated by $o$
as both the edge and node constraints are satisfied:
$\displaystyle\text{EdgeC}_{\delta}(s,o,s)=0<1+\delta,$ (20)
$\displaystyle\text{NodeC}_{\delta}(s,o,1)=1-\delta\leq 1\leq 1.$ (21)
Inductive Step: We assume that at some iteration $k=n$, s.t. $n<1+\delta$, the
representation $x_{q}^{n}(s,o)$ for $o\in V_{s}^{1}$ aggregates all path
representations up to a length $n$ from the source. This is achieved by
aggregating all edges that contain nodes in the set:
$U_{s,o}^{(0,n-1)}=\\{u\>\>|\>\>(u,r,o)\in\mathcal{E}_{o},\>u\in\\{V_{s}^{0},V_{s}^{1},\cdots,V_{s}^{n-1}\\}\\}.$
(22)
Since we assume that $x_{q}^{n}(s,o)$ contains all path representations up to length $n$, it follows that $\forall u\in U_{s,o}^{(0,n-1)}$ the corresponding representation $x_{q}^{n}(s,u)$ must also contain all paths up to length $n-1$. As such, by aggregating $U_{s,o}^{(0,n-1)}$, node $o$ extends the length of each path by 1.
We want to prove that at iteration $k=n+1$, the representation
$x_{q}^{(n+1)}(s,o)$ aggregates all path representations up to a length $n+1$
from the source. This is achieved by aggregating all edges that contain the
nodes in the set:
$U_{s,o}^{(0,n)}=\\{u\>\>|\>\>(u,r,o)\in\mathcal{E}_{o},\>u\in\\{V_{s}^{0},V_{s}^{1},\cdots,V_{s}^{n}\\}\\}.$
(23)
Per the inductive step, the representations
$x_{q}^{n}(s,o)$ for all $o\in V_{s}^{n}$ contain all path representations up
to length $n$. Furthermore, we noted that at iteration $k=n$ the
representation of each node in the set $U_{s,o}^{(0,n-1)}$ must also contain
all path representations up to length $n-1$. Since
$U_{s,o}^{(0,n)}=U_{s,o}^{(0,n-1)}\cup V_{s}^{n}$, the nodes in
$U_{s,o}^{(0,n)}$ contain all path representations up to length $n$. Thereby,
when $x_{q}^{(n+1)}(s,o)$ aggregates the nodes in $U_{s,o}^{(0,n)}$, it
aggregates all path representations up to length $n+1$. A node $o\in
V_{s}^{1}$ will aggregate such nodes at iteration $k=n+1$, as both
constraints are satisfied.
This proves by induction that for $o\in V_{s}^{1}$, their representation
$x_{q}^{(1+\delta)}(s,o)$ aggregates all path representations of length less
than or equal to $1+\delta$.
## Appendix B Further Details on TAGNet
### B.1 TAGNet Algorithm
The algorithm for TAGNet, with a fixed $\delta$, is presented in Algorithm 1.
Algorithm 1 TAGNet Algorithm (fixed $\delta$)
1: Input: source node $s$, query relation $q$, max number of layers $T$, embeddings $\mathbf{x}$, offset $\delta$, Agg-Degree (whether to include degree messages)
2: Initialize: $\mathbf{x}_{(s,o)}^{(0)}=\mathbf{0},\;\forall o\in\mathcal{V}$; $\mathbf{x}_{(s,s)}^{(0)}=\mathbf{x}_{q}$
3: for $t=1\ldots T$ do
4:  for $o\in\mathcal{V}$ do
5:   if $t-\delta\leq\text{dist}(s,o)\leq t$ then
6:    $\mathcal{C}(s,o,t)=\\{(u,r,o)\in\mathcal{E}(o)\>|\>\text{dist}(s,u)<\text{dist}(s,o)+\delta\\}$
7:    Msgs $=\\{\mathbf{x}_{(s,u)}^{(t-1)}\odot\mathbf{x}_{r}^{(t)}\>|\>(u,r,o)\in\mathcal{C}(s,o,t)\\}$
8:    if Agg-Degree then
9:     $\rho_{o}=b_{o}-\lvert\text{Msgs}\rvert$
10:     Msgs $=\text{Msgs}\cup\\{\rho_{o}\cdot\mathbf{x}_{\text{deg}}^{(t)}\\}$
11:    end if
12:    $\mathbf{x}_{(s,o)}^{(t)}=\text{Aggregate}\\{\text{Msgs}\\}$
13:   end if
14:  end for
15: end for
16: return $\mathbf{x}_{(s,o)}^{(\text{dist}(s,o)+\delta)}$ for all $o\in\mathcal{V}$
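To make the control flow of Algorithm 1 concrete, the following is a minimal, dependency-free Python sketch of the propagation loop. It is illustrative only: embeddings are scalars, the composition $\odot$ is ordinary multiplication, Aggregate is a plain sum, and degree messages are omitted; the names `bfs_dist`, `tagnet_fixed_delta`, `x_q`, and `rel_emb` are ours, not part of the released implementation.

```python
from collections import defaultdict, deque

def bfs_dist(edges, s):
    """Shortest-path distance from source s via BFS (linear time)."""
    adj = defaultdict(list)
    for u, _, v in edges:
        adj[u].append(v)
    dist, queue = {s: 0}, deque([s])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def tagnet_fixed_delta(edges, s, x_q, rel_emb, delta, T):
    """One query's propagation under the node/edge constraints of Algorithm 1."""
    dist = bfs_dist(edges, s)
    in_edges = defaultdict(list)
    for u, r, v in edges:
        in_edges[v].append((u, r))
    x = defaultdict(float)
    x[s] = x_q  # boundary condition: only the source starts non-zero
    for t in range(1, T + 1):
        new_x = defaultdict(float, x)
        for o in dist:
            # node constraint: o is only updated while t - delta <= dist(s,o) <= t
            if not (t - delta <= dist[o] <= t):
                continue
            # edge constraint: only aggregate senders u with dist(s,u) < dist(s,o) + delta
            msgs = [x[u] * rel_emb[r]
                    for u, r in in_edges[o]
                    if u in dist and dist[u] < dist[o] + delta]
            if msgs:
                new_x[o] = sum(msgs)  # "Aggregate" is a plain sum here
        x = new_x
    # for nodes with dist(s,o) + delta <= T, x[o] equals the returned
    # representation x^{(dist(s,o)+delta)}(s,o), since later layers skip o
    return dict(x)
```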
### B.2 Time Complexity Analysis
Per the constraints in Eq. (6), each node can be updated at most $\delta+1$
times and each edge can be aggregated at most $\delta+1$ times. The shortest
path distance from a source node $s$ to all other nodes can be calculated in
linear time via a breadth-first search. The worst-case complexity for the
standard version of TAGNet is therefore:
$O\left((\delta+1)\cdot\left(\lvert V\rvert d^{2}+\lvert E\rvert
d\right)\right).$ (24)
Of note is that the worst-case complexity is independent of the number of
layers. This allows for much deeper propagation.
We further discuss the complexity when utilizing degree messages and a target-
specific $\delta$. As noted in Section 3.3, the inclusion of degree messages
is equivalent to aggregating one additional edge per iteration; as such, it
does not affect the model complexity. Furthermore, when utilizing a target-
specific $\delta$, an additional $(\delta+1)\cdot d^{2}$ operations are needed
to calculate the attention scores. This is equivalent to updating each node
one additional time and therefore also has no effect on the model complexity.
### B.3 TAGNet + $\text{A}^{*}$Net
We further experiment with combining the pruning strategy of both
$\text{A}^{*}$Net and TAGNet. This is achieved by taking the intersection of
the edge sets produced by both methods for a node pair $(s,o)$ at iteration
$t$, since we only want to aggregate an edge if it is pruned by neither
method. For TAGNet, the edge set $\mathcal{C}(s,o,t)$ is defined as in
Eq. (6). We further denote the edge set for $\text{A}^{*}$Net as
$\mathcal{A}(s,o,t)$. Adapting Eq. (3.2) we arrive at:
$\displaystyle\mathbf{x}_{q}^{(t)}(s,o)=\left(\bigoplus_{(v,r,o)\in\mathcal{C}(s,o,t)\cap\mathcal{A}(s,o,t)}\mathbf{x}_{q}^{(t-1)}(s,v)\otimes\mathbf{w}_{q}(v,r,o)\right)\oplus\mathbf{x}_{q}^{(0)}(s,o).$
(25)
The performance and efficiency when combining both methods are detailed in
Sections 4.1 and 4.2, respectively. Lastly, we note that we do not consider
combining with the pruning strategy of AdaProp Zhang et al. (2023) due to its
strong similarity with that of $\text{A}^{*}$Net.
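As a small illustration of Eq. (25), the combined pruning reduces to an intersection of the two per-iteration edge sets; the helper below is a hypothetical sketch, not the released implementation.

```python
def combined_edge_set(tagnet_edges, astar_edges):
    """Edges kept for a node pair (s, o) at iteration t under Eq. (25):
    an edge is aggregated only if it survives the pruning of BOTH methods."""
    return set(tagnet_edges) & set(astar_edges)

# Example: if TAGNet keeps {(u1, r1, o), (u2, r2, o)} and A*Net keeps
# {(u1, r1, o)}, only (u1, r1, o) is aggregated.
```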
## Appendix C Experimental Settings
### C.1 Datasets
We conduct experiments on both the transductive and inductive settings. For
the transductive setting, we consider FB15K-237 Toutanova and Chen (2015) and
WN18RR Dettmers et al. (2018). For the inductive setting, where the train and
test entities are disjoint, we consider the splits generated by Teru et al.
(2020) from both FB15K-237 and WN18RR. Of note is that we omit the NELL-995
Xiong et al. (2017) dataset from both sets of our experiments. This is due to
concerns raised by Safavi and Koutra (2020), where they argue that most of the
triples in NELL-995 are either meaningless or trivial. The statistics for all
the transductive and inductive datasets are given in Tables 5 and 6,
respectively.
Table 5: Statistics for Transductive Datasets.
Statistic | FB15K-237 | WN18RR
---|---|---
#Entities | 14,541 | 40,943
#Relations | 237 | 11
#Train | 272,115 | 86,835
#Validation | 17,535 | 3,034
#Test | 20,466 | 3,134
Table 6: Statistics for Inductive Datasets. For each split, #Entities, #Query, and #Fact are listed for the train, validation, and test graphs.
Dataset | Split | #Relations | Train #Entities | Train #Query | Train #Fact | Valid #Entities | Valid #Query | Valid #Fact | Test #Entities | Test #Query | Test #Fact
---|---|---|---|---|---|---|---|---|---|---|---
FB15k-237 | v1 | 180 | 1,594 | 4,245 | 4,245 | 1,594 | 489 | 4,245 | 1,093 | 205 | 1,993
FB15k-237 | v2 | 200 | 2,608 | 9,739 | 9,739 | 2,608 | 1,166 | 9,739 | 1,660 | 478 | 4,145
FB15k-237 | v3 | 215 | 3,668 | 17,986 | 17,986 | 3,668 | 2,194 | 17,986 | 2,501 | 865 | 7,406
FB15k-237 | v4 | 219 | 4,707 | 27,203 | 27,203 | 4,707 | 3,352 | 27,203 | 3,051 | 1,424 | 11,714
WN18RR | v1 | 9 | 2,746 | 5,410 | 5,410 | 2,746 | 630 | 5,410 | 922 | 188 | 1,618
WN18RR | v2 | 10 | 6,954 | 15,262 | 15,262 | 6,954 | 1,838 | 15,262 | 2,757 | 441 | 4,011
WN18RR | v3 | 11 | 12,078 | 25,901 | 25,901 | 12,078 | 3,097 | 25,901 | 5,084 | 605 | 6,327
WN18RR | v4 | 9 | 3,861 | 7,940 | 7,940 | 3,861 | 934 | 7,940 | 7,084 | 1,429 | 12,334
### C.2 Baselines
In the transductive setting, following Zhu et al. (2021), we consider a
variety of different models. For embedding-based methods we consider TransE
Bordes et al. (2013) (performance from Nguyen et al. (2018)), DistMult Yang et
al. (2015), and ComplEx Trouillon et al. (2016). For GNN methods we include R-GCN
Schlichtkrull et al. (2018) (performance on WN18RR taken from Zhu et al.
(2021)) and CompGCN Vashishth et al. (2019). For path-based methods we include
DRUM Sadeghian et al. (2019), NBFNet Zhu et al. (2021), RED-GNN Zhang and Yao
(2022), $\text{A}^{*}\text{Net}$ Zhu et al. (2022), and AdaProp Zhang et al.
(2023). We note that for AdaProp the original results from Zhang et al. (2023)
utilize 7 and 8 layers for FB15k237 and WN18RR, respectively (see Table 7 in
Zhang et al. (2023)). For other methods such as TAGNet, NBFNet, and
$\text{A}^{*}$Net, the number of layers is fixed at 6. To facilitate a fair
comparison, we run AdaProp on both datasets using 6 layers. We utilize the
official source code 333https://github.com/LARS-research/AdaProp and the
published hyperparameters.
For the inductive setting, following Teru et al. (2020); Zhu et al. (2021), we
include GraIL Teru et al. (2020), CoMPILE Mai et al. (2021), and NeuralLP Yang
et al. (2017) in addition to NBFNet and $\text{A}^{*}\text{Net}$. We note that
embedding methods aren’t applicable to the inductive setting as the train and
test entities are disjoint. For NBFNet, the results on the inductive FB15k-237
splits are reported by us while the results for the WN18RR splits are from Zhu
et al. (2022). This is because we observed that we can achieve better
performance for NBFNet on the FB15k-237 splits than what was reported in Zhu
et al. (2022). Lastly, as with the transductive setting, we run AdaProp with 6
layers to facilitate a fair comparison between it and other path-based GNNs.
We also set the hidden dimension to 32, as with all other path-based GNNs.
### C.3 Evaluation Metrics
In the transductive setting, we report the mean reciprocal rank (MRR), Hits@1,
and Hits@10 following the filtered setting as described in Bordes et al.
(2013). For the inductive setting, following Zhang et al. (2023); Zhu et al.
(2022), we only report the Hits@10.
### C.4 Hyperparameter Settings
We list the parameter settings for TAGNet. Under the fixed-$\delta$
formulation, the model is trained for 20 and 16 epochs in the transductive and
inductive settings, respectively. For the specific-$\delta$ formulation, we
train for 25 and 20 epochs in the transductive and inductive settings,
respectively, as we have found it takes longer to converge. For all transductive
and inductive experiments in Tables 1 and 2, we set the maximum number of layers
to 6 and the hidden dimension to 32. This is to facilitate a fair comparison
with NBFNet and $\text{A}^{*}\text{Net}$. Furthermore, the transductive batch
size is fixed at 16. The number of negative samples is tuned from
$\\{128,512,1024,2048\\}$, the dropout from the range $[0,0.7]$, the learning
rate decay from $\\{0.9,0.95,1\\}$, the weight decay from $[10^{-8},10^{-3}]$, and
the adversarial temperature from $\\{0.5,1\\}$. For the target-specific
setting, we further test setting $g$ as its own function or as equal to the
score function, $g=f$. We further tune the softmax temperature for attention
from $\\{0.5,1,5\\}$. For the inductive setting, we further tune the batch size
from $\\{16,32,64,128\\}$ and the learning rate from $[10^{-4},10^{-2}]$. Lastly, for
all experiments, the offset $\delta$ is tuned from $\\{1,2,3\\}$.
### C.5 Implementation Details
The framework is implemented with PyTorch Paszke et al. (2019). All
experiments were run on a single 32G Tesla V100 GPU. We train TAGNet with the
binary cross-entropy loss optimized via the Adam optimizer Kingma and Ba
(2014). We follow Yang et al. (2017) and augment the graph by including
reciprocal edges, such that for an edge $(h,r,t)$, its reciprocal edge
$(t,r^{-1},h)$ is included. In this scenario $r^{-1}$ is considered a distinct
relation from $r$.
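A minimal sketch of the reciprocal-edge augmentation follows. Encoding $r^{-1}$ as $r+\lvert\mathcal{R}\rvert$ is a common convention but an assumption here; the text only requires that $r^{-1}$ be a relation distinct from $r$.

```python
def add_reciprocal_edges(triples, num_relations):
    """Augment each (h, r, t) with a reciprocal edge (t, r^-1, h).

    Encoding r^-1 as r + num_relations is an assumption for illustration;
    the paper only requires that r^-1 be a relation distinct from r."""
    return triples + [(t, r + num_relations, h) for (h, r, t) in triples]

# Example: add_reciprocal_edges([(0, 2, 5)], num_relations=10)
# returns [(0, 2, 5), (5, 12, 0)]
```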
## Appendix D Additional Analysis on TAGNet
In this section we take a closer look at what kinds of messages are pruned
by TAGNet. As noted in Section 3.1, we strive to limit the number of empty and
redundant messages. We first analyze how well TAGNet can prune both types of
messages. We then examine why some datasets prune more empty or redundant
messages than others.
We first analyze the number of empty and redundant messages pruned for both
transductive datasets. We report the results in Table 7 as a % of the total
number of pruned messages. For example, for FB15k-237, 51% of the pruned
messages are empty messages. For simplicity, we limit this study to the
best version of each model, i.e., $\delta=2$ for FB15K-237 and $\delta=3$
for WN18RR. We find that on FB15k-237 the pruned messages are split evenly
between empty and redundant messages. On the other hand, for WN18RR over 90%
of the pruned messages are empty messages.
Table 7: % of Messages Pruned that are either Empty or Redundant
Dataset | % Empty | % Redundant
---|---|---
FB15k-237 | 51% | 49%
WN18RR | 91% | 9%
An obvious question is: why does the composition of pruned messages differ
between datasets? We believe this can be explained via two properties of each
dataset: its density and its distance distribution. We measure the density via
the mean degree, which is shown in Table 8. We do this as graphs with a low
mean degree contain few connections between nodes, resulting in fewer
paths between different nodes and thereby fewer redundant paths. Furthermore,
there will be a lower chance of a node visiting another node already on the
path, as most nodes are linked to only a handful of nodes. We further show the
distance distribution of the test samples, i.e., the % of test samples that
are a distance $k$ from each other, in Table 9. This is because when nodes are
typically far from each other, the target nodes will aggregate many empty
messages. Using Figure 1(a) as an example, the source and node 7 are a
distance 3 from each other. Because of this, in the first two iterations
NBFNet will propagate node 6 to node 7, even though node 6 contains no
information. This is less of an issue between nodes at shorter distances, as
fewer iterations are needed to reach them. From this, we hypothesize that
graphs that feature, on average, a larger distance between nodes will
propagate more empty messages.
Table 8: Mean Degree of Transductive Datasets
Dataset | Mean Degree
---|---
FB15k-237 | 18.7
WN18RR | 2.1
Table 9: Distance Distribution of Test Samples on the Transductive Datasets
Distance | FB15k-237 | WN18RR
---|---|---
1 | 0% | 35%
2 | 73% | 9%
3 | 26% | 21%
4 | 0.2% | 7%
5 | 0.005% | 9%
6+ | 0% | 18%
From the results in Tables 8 and 9 we make the following observations: (a)
WN18RR is much sparser than FB15k-237. The higher density of FB15k-237 leads
to many more paths and subsequent opportunities to visit a node already on the
path. The opposite is true for WN18RR: since the average degree is low, few
paths exist in the graph. This results in many more redundant paths in
FB15k-237 as compared to WN18RR. (b) For FB15k-237, the vast majority of
test samples are close to each other, which leads to fewer empty messages.
For WN18RR, however, the distances cover a much wider range. For example, over
33% of test samples have a distance of 4+ between them; this is only true for
0.205% of samples on FB15k-237. This helps explain why TAGNet mostly prunes
empty messages on WN18RR, as the larger distance between nodes leads to many
messages that contain no information.
# Semiparametric Estimation of the Shape of the Limiting Bivariate Point Cloud
Reetam Majumder
SECASC
North Carolina State University
Raleigh, NC, 27695
Benjamin A. Shaby
Department of Statistics
Colorado State University
Fort Collins, CO, 80523
Brian J. Reich
Department of Statistics
North Carolina State University
Raleigh, NC, 27695
Daniel Cooley
Department of Statistics
Colorado State University
Fort Collins, CO, 80523
###### Abstract
We propose a model to flexibly estimate joint tail properties by exploiting
the convergence of an appropriately scaled point cloud onto a compact limit
set. Characteristics of the shape of the limit set correspond to key tail
dependence properties. We directly model the shape of the limit set using
Bézier splines, which allow flexible and parsimonious specification of shapes
in two dimensions. We then fit the Bézier splines to data in pseudo-polar
coordinates using Markov chain Monte Carlo sampling, utilizing a limiting
approximation to the conditional likelihood of the radii given angles. By
imposing appropriate constraints on the parameters of the Bézier splines, we
guarantee that each posterior sample is a valid limit set boundary, allowing
direct posterior analysis of any quantity derived from the shape of the curve.
Furthermore, we obtain interpretable inference on the asymptotic dependence
class by using mixture priors with point masses on the corner of the unit box.
Finally, we apply our model to bivariate datasets of extremes of variables
related to fire risk and air pollution.
Keywords: Bayesian inference, Bézier curves, extreme values, gauge function,
limit set
## 1 Introduction
Multivariate tail risk calculations require knowledge of the strength of
dependence in the joint tail of the relevant distribution. Here, we propose a
model to flexibly estimate joint tail characteristics in a way that coherently
links several existing measures of tail dependence. To do this, we describe
tail dependence of a multivariate distribution through its associated _gauge
function_ (Balkema et al., 2010; Balkema and Nolde, 2010; Nolde, 2014). The
homogeneity property of the gauge function allows us to recover the entire
gauge function from its unit level set, which bounds the support of the
appropriately scaled data points in the limit (Nolde and Wadsworth, 2022;
Wadsworth and Campbell, 2024). We represent the unit level set of the gauge
function using a semiparametric model, specified such that the required
constraints on such functions are automatically satisfied. In this way, we
obtain a posterior sample of gauge functions, with each member of the sample
being a valid gauge function not requiring any _post hoc_ adjustments such as
re-scaling or truncation.
Efforts to exploit the limit set representation of multivariate extreme values
(Davis et al., 1988; Kinoshita and Resnick, 1991; Balkema et al., 2010;
Balkema and Nolde, 2010) have appeared only recently. Wadsworth and Campbell
(2024) decompose the data into pseudo-polar coordinates and use a limiting
argument to approximate the distribution of the radii with a truncated Gamma
distribution whose parameters depend on the gauge function. They first
transform the data to unit exponential margins, and assuming a parametric form
for the gauge function, perform maximum likelihood estimation with the
truncated Gamma likelihood. They extend this approach using mixtures of
parametric forms, but need to perform post-hoc re-scaling of the mixtures to
satisfy the required properties of valid gauge functions.
In contrast to Wadsworth and Campbell (2024), whose primary focus is
estimating probabilities of sets in the joint tail region, Simpson and Tawn
(2022) focus on inference for the limit set boundary itself, taking a flexible
semiparametric approach. They estimate the sample limit set by approximating
the limiting upper endpoint of the distribution of radii with an estimated
high quantile, as a function of angle. To do this, they fit a generalized
Pareto distribution, whose scale parameter varies by angle, to the large
radii. The radii are calculated by decomposing the bivariate data points
transformed to unit exponential margins with a rank transformation. As the
result is not a valid limit set, they perform a subsequent scaling and
truncation procedure based on a Hill estimator (Hill, 1975) to force their
estimate to satisfy the required conditions.
Like Simpson and Tawn (2022), our focus is flexible estimation of the limit
set boundary, though our methodology is quite different. Here, we directly
model the boundary of the limiting scaled point cloud, which is prescribed by
the unit level set of the gauge function, as a Bézier spline. Bézier splines
are constituted of Bézier curves, with points on the curve that can be
represented as Bernstein polynomials (for reviews, see Hazewinkel, 2012;
Farouki, 2012). Similar semiparametric approaches have been used previously in
multivariate extremes to characterize the Pickands dependence function for
extremal dependence (Marcon et al., 2014, 2017; Vettori et al., 2018), the
angular density in the context of multivariate regular variation (Hanson et
al., 2017), and the angular dependence function (Murphy-Barltrop et al.,
2023), which we will explore below as a direct byproduct of the gauge
function. Bézier splines are convenient here because they allow parsimonious
specification of shapes in $\mathbb{R}^{2}$ which are defined by a small
number of control points. Placing appropriate constraints on the control
points can ensure that the resultant shapes satisfy the conditions required of
limit set boundaries. To estimate the parameters of the Bézier spline, we use
the result from Wadsworth and Campbell (2024) which says that, given a gauge
function evaluated at the data points, the distribution of the large radial
components decays like a Gamma distribution whose rate parameter depends on
the gauge function. We then use standard Markov chain Monte Carlo machinery to
sample from the posterior distribution.
Our approach has several advantages. First, we model the shape of the limiting
point cloud in a way that automatically results in a valid limit set, without
the need for _post hoc_ fixes. Second, our model allows the boundary of the
limit set to exactly touch the corners of the unit box; this in particular
gives a clean interpretation of the distinction between asymptotic
independence (AI) and asymptotic dependence (AD) classes, since this
distinction corresponds to whether or not the boundary touches the upper right
corner. Third, our approach produces a posterior sample of valid limit set
curves which yields a realistic picture of the state of knowledge about the
joint tail region given the data. In addition, we also note that our work
builds on a growing literature on Bayesian approaches to multivariate extreme
value analysis (e.g. Boldi and Davison, 2007; Sabourin et al., 2013; Sabourin
and Naveau, 2014; de Carvalho et al., 2022; Padoan and Rizzelli, 2022, to name
a few).
The rest of the paper is arranged as follows. Section 2 introduces the
limiting scaled point cloud and how it can be used to model the tail behavior
of bivariate data. Section 3 develops the modeling of the limit set boundary
using Bézier splines. Section 4 contains a simulation study demonstrating our
approach. Section 5 contains two applications—the Santa Ana Winds dataset, and
ozone concentration data for the contiguous US—where we use Bézier splines to
model the tail dependence in the data. Section 6 concludes.
## 2 The Limiting Scaled Point Cloud
Consider a collection of $n$ independent random vectors in
$\mathbb{R}^{2}_{+}$, $\bm{X}_{1},\ldots,\bm{X}_{n}$, each having joint
density $f_{\bm{X}}$, with standard exponential margins. At times, it will be
convenient to transform the components of $\bm{X}=(X_{1},X_{2})^{\text{T}}$
into pseudo-polar coordinates $(R,\bm{W})$, as $R=X_{1}+X_{2}$, and
$\bm{W}=\bm{X}/R$. Note that for $\bm{W}=(W_{1},W_{2}),W_{2}=1-W_{1}$.
Now define the scaled point cloud as the collection of points divided by
$\log{n}$, $\\{\bm{X}_{1}/\log{n},\ldots,\bm{X}_{n}/\log{n}\\}$. If we assume
that $\lim_{t\rightarrow\infty}-\log f_{\bm{X}}(t\bm{x})/t=g(\bm{x})$,
$\bm{x}\in\mathbb{R}^{2}_{+}$, for some continuous function $g$, then the
scaled point cloud converges onto a compact limit set
$G=\\{\bm{x}\in\mathbb{R}^{2}:g(\bm{x})\leq 1\\}$
as $n\rightarrow\infty$ (Davis et al., 1988; Kinoshita and Resnick, 1991;
Nolde, 2014; Nolde and Wadsworth, 2022). The function $g$ is called the _gauge
function_ associated with the density $f_{\bm{X}}$. Denote the boundary of $G$
as $\partial G$.
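As a simple illustration of this convergence, consider independent standard-exponential components, for which $f_{\bm{X}}(\bm{x})=e^{-x_{1}-x_{2}}$, so that $g(\bm{x})=x_{1}+x_{2}$ and $G$ is the unit simplex. A short simulation (a sketch for intuition, not part of our implementation) shows the scaled point cloud concentrating in $G$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Independent standard-exponential margins: f(x) = exp(-x1 - x2), so
# g(x) = x1 + x2 and the limit set G is the unit simplex in R^2_+.
X = rng.exponential(size=(n, 2))
scaled = X / np.log(n)

# The largest value of g over the scaled cloud should be close to 1
# (convergence is slow, so expect a value somewhat above 1 at finite n).
print(scaled.sum(axis=1).max())
```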
Every gauge function $g$ is homogeneous of order one, with
$g(c\bm{x})=cg(\bm{x})$ for any $c>0$ (Nolde, 2014). We will exploit this
property by modeling the limit set boundary $\partial G$ directly and using
its associated gauge function, induced by homogeneity, for estimation (see
Section 3.2). Any valid limit set $G$ must satisfy the following constraints
on its shape:
1. 1.
$G$ is _star-shaped_ , meaning that for any $t\in(0,1)$, if $\bm{x}$ is in
$G$, then $t\bm{x}$ is also in $G$.
2. 2.
The supremum of the boundary $\partial G$ is 1 in each component direction.
That is, $\partial G$ touches, but does not cross, the upper and right-hand
sides of the unit box.
Figure 1: Schematic of $\eta$. The blue curve is the unit level set of gauge
function $g(\bm{x})$, which forms the limit set boundary $\partial G$, and the
proportional distance to the red point from the origin is the tail dependence
coefficient $\eta$. While the red point is always on the diagonal, the
intersection of the shaded red region and the blue curve does not necessarily
occur on the diagonal.
We seek a flexible way of representing the boundary $\partial G$ of the limit
set $G$ that satisfies conditions 1 and 2 and can be estimated from iid
samples of the random vector $\bm{X}$. The shape of the limit set contains
useful information about the extremal dependence of the distribution of the
data. Nolde and Wadsworth (2022) linked particular features of the shape of
$G$ with various indices of joint tail dependence in the literature. The
residual tail dependence coefficient (Ledford and Tawn, 1996), the angular
dependence function (Wadsworth and Tawn, 2013), components of the conditional
extremes model (Heffernan and Tawn, 2004), and the index $\tau_{1}(\delta)$
(Simpson et al., 2020) all have direct connections to the shape of $G$. Our
primary focus is on the residual tail dependence coefficient, $\eta\in(0,1]$,
which is defined by assuming that, for $\bm{X}$ in exponential margins, its
survivor function satisfies
$P(X_{1}>x,X_{2}>x)=\mathcal{L}(e^{x})e^{-x/\eta}$
as $x\rightarrow\infty$, for some function $\mathcal{L}$ that is slowly
varying at infinity (Ledford and Tawn, 1996). Then the coefficient $\eta$
describes the strength of dependence in the joint tail, with $\eta\in(1/2,1)$
indicating positive tail dependence but AI, and $\eta=1$ indicating AD,
assuming $\mathcal{L}(x)\nrightarrow 0$. The dependence class (AI vs. AD) is
defined by the limiting conditional probability
$\chi=\lim_{x\rightarrow\infty}\frac{P(X_{1}>x,X_{2}>x)}{P(X_{1}>x)},$
with $\chi=0$ characterizing AI, and $\chi>0$ characterizing AD.
The residual tail dependence coefficient, $\eta$, can be calculated (Nolde,
2014; Nolde and Wadsworth, 2022) from the shape of the limit set as
$\eta=\min\\{r:r\times[1,\infty]^{2}\cap G=\emptyset\\}.$
This is illustrated schematically in Figure 1, where one can think of sliding
the shaded box down the ray with slope 1 until it first touches the boundary
$\partial G$. The radius corresponding to this first point of intersection is
$\eta$. A corollary is that (assuming as above that
$\mathcal{L}(x)\nrightarrow 0$) when $\bm{X}$ is AD, $\eta=1$, so $\partial G$
necessarily touches the upper right-hand corner of the unit box. Conversely,
when $\bm{X}$ is AI, $\eta<1$, so $\partial G$ does not touch the upper right-
hand corner, and is referred to as _blunt_.
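Numerically, this construction is convenient because, by homogeneity, $\min_{\bm{x}\geq r(1,1)}g(\bm{x})=r\min_{\bm{y}\geq(1,1)}g(\bm{y})$, so that $\eta=1/\min_{\bm{y}\geq(1,1)}g(\bm{y})$. The sketch below (function names are ours, assuming SciPy is available) checks this for the Gaussian gauge function of Table 1:

```python
import numpy as np
from scipy.optimize import minimize

def gauge_gaussian(x, rho):
    """Gauge function of the Gaussian copula (Table 1)."""
    x1, x2 = x
    return (x1 + x2 - 2.0 * rho * np.sqrt(x1 * x2)) / (1.0 - rho**2)

def eta_from_gauge(g):
    """eta = 1 / min_{y >= (1,1)} g(y), using homogeneity of g."""
    res = minimize(g, x0=np.array([1.5, 1.5]), bounds=[(1.0, None), (1.0, None)])
    return 1.0 / res.fun

rho = 0.5
print(eta_from_gauge(lambda x: gauge_gaussian(x, rho)))  # ~ (1 + rho) / 2 = 0.75
```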
## 3 Modeling the Shape Using Bézier Splines
### 3.1 A Bézier spline representation of the limit set boundary
Figure 2: Examples of Bézier curves of orders 1, 2, and 3. The red control
points (end points) $\mathbf{p}_{0}$ and $\mathbf{p}_{m}$ always lie on the
curve, while the blue control points usually do not.
Bézier curves (e.g. Hazewinkel, 2012; Farouki, 2012) are a class of parametric
functions that can be used as building blocks to represent complex shapes.
Bézier curves are defined by a set of control points $\mathbf{p}_{0}$ to
$\mathbf{p}_{m}$, where $m$ is the order of the curve. Figure 2 plots examples
of Bézier curves of orders 1–3. The end points (red) define the beginning and
end of the curve; intermediate control points (blue) of each curve control its
shape but generally do not lie on the curve. A quadratic Bézier curve, for
example, traces the path:
$B(t)=(1-t)[(1-t)\mathbf{p}_{0}+t\mathbf{p}_{1}]+t[(1-t)\mathbf{p}_{1}+t\mathbf{p}_{2}],$
for $0\leq t\leq 1$. Rearranging this equation simplifies it to:
$B(t)=(1-t)^{2}\mathbf{p}_{0}+2t(1-t)\mathbf{p}_{1}+t^{2}\mathbf{p}_{2}.$
A useful property is that if the three points are co-linear, then a quadratic
Bézier curve simplifies to a linear Bézier curve. Several Bézier curves can in
turn be linked together at the end points to form a Bézier spline. The end
points of each Bézier curve within the spline now function as knots for the
spline. Splines comprised of quadratic Bézier curves are particularly useful
since analytical solutions for quadratic equations are straightforward to
obtain. In addition, increasing the order to cubic splines would make it
difficult to constrain the shapes to the unit box, and would prevent the
shapes from having the sharp corners required to represent AD limit set
boundaries.
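For concreteness, a quadratic Bézier curve can be evaluated in a few lines; the snippet below is a small sketch (names are ours) tracing one such curve from its three control points:

```python
import numpy as np

def bezier_quadratic(t, p0, p1, p2):
    """B(t) = (1-t)^2 p0 + 2 t (1-t) p1 + t^2 p2, for t in [0, 1]."""
    t = np.asarray(t, dtype=float)[:, None]
    return (1.0 - t) ** 2 * p0 + 2.0 * t * (1.0 - t) * p1 + t**2 * p2

# The end points p0 and p2 lie on the curve; the intermediate control
# point p1 shapes it but generally does not lie on it.
t = np.linspace(0.0, 1.0, 101)
curve = bezier_quadratic(t, np.array([0.0, 1.0]),
                         np.array([0.6, 0.9]),
                         np.array([1.0, 0.8]))
```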
Figure 3: Examples of unit level sets of gauge functions that can be expressed
using Bézier splines comprised of 3 quadratic Bézier curves. The red points
are the end points of each curve, while the blue points are intermediate
points controlling the shapes of the curves.
Because they are parsimoniously parameterized and straightforward to
constrain, Bézier splines are convenient for modeling the boundary $\partial
G$ of the limit set $G$. We specify $\partial G$ as a Bézier spline comprised
of three quadratic Bézier curves $g_{B}=\\{B_{1}(t),B_{2}(t),B_{3}(t)\\}$,
where $B_{1}(t):=B(t;\mathbf{p}_{0},\mathbf{p}_{1},\mathbf{p}_{2})$,
$B_{2}(t):=B(t;\mathbf{p}_{2},\mathbf{p}_{3},\mathbf{p}_{4})$, and
$B_{3}(t):=B(t;\mathbf{p}_{4},\mathbf{p}_{5},\mathbf{p}_{6})$, for
$\mathbf{p}_{i}\in\mathbb{R}^{2}$, $i=0,1,\ldots,6$. The three curves trace
the paths:
$\displaystyle
B_{1}(t)=(1-t)^{2}\mathbf{p}_{0}+2t(1-t)\mathbf{p}_{1}+t^{2}\mathbf{p}_{2},$
$\displaystyle
B_{2}(t)=(1-t)^{2}\mathbf{p}_{2}+2t(1-t)\mathbf{p}_{3}+t^{2}\mathbf{p}_{4},$
$\displaystyle
B_{3}(t)=(1-t)^{2}\mathbf{p}_{4}+2t(1-t)\mathbf{p}_{5}+t^{2}\mathbf{p}_{6},$
for $0\leq t\leq 1$. We denote the point $\mathbf{p}_{i}:=(p_{i,1},p_{i,2})$,
$0\leq p_{i,1},p_{i,2}\leq 1$, and place two sets of constraints on the curves
in order to elicit valid gauge functions which satisfy conditions 1 and 2. The
first set of constraints ensure that the Bézier spline touches all four edges
of the unit square:
$p_{0,1}=p_{6,2}=0,\qquad p_{2,2}=p_{4,1}=1.$
The second set of constraints are sufficient conditions to ensure that the
star-shaped property holds for the spline:
$p_{1,1}\leq p_{2,1},\qquad m(\mathbf{0},\mathbf{p}_{1})\geq m(\mathbf{0},\mathbf{p}_{2}),\qquad m(\mathbf{0},\mathbf{p}_{4})\geq m(\mathbf{0},\mathbf{p}_{5}),\qquad p_{4,2}\geq p_{5,2},\qquad p_{3,1}=p_{3,2},\qquad p_{3,1}\geq\min(p_{2,1},p_{4,2}),$
where $m(\mathbf{p},\mathbf{p^{\prime}})$ denotes the slope of the line
connecting the points $\mathbf{p}$ and $\mathbf{p^{\prime}}$, and
$\mathbf{0}=(0,0)$ is the origin. The final condition prevents unrealistic
cases where $p_{2,1}$ and $p_{4,2}$ are both 1, but $p_{3,1}<1$. Thus, we
arrive at a model for the limit set boundary $\partial G$, indexed by the 9
univariate parameters,
$\bm{\theta}_{g}=(p_{0,2},p_{1,1},p_{1,2},p_{2,1},p_{3,1},p_{4,2},p_{5,1},p_{5,2},p_{6,1})^{\text{T}}.$
Figure 3 plots Bézier splines under these constraints, each representing a
gauge function with different dependence properties. Top row plots correspond
to AI scenarios, whereas plots in the bottom row correspond to AD scenarios.
The four red control points are the knots of the spline. The three blue
control points affect the shape, and the spline passes through them only if
they are co-linear with the preceding and succeeding control points. In the
general case, there are 9 coordinates, each with uniform support, which
need to be estimated to fully specify a valid gauge function. Richer models
can be achieved using more control points; however, this would come with
increased computational cost and additional constraints to ensure that
Conditions 1 and 2 hold. The quadratic Bézier spline with 4 knots therefore
constitutes a parsimonious representation of $\partial G$ which is still
flexible enough to capture multiple dependence regimes and mimic most of the
common parametric models.
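The two constraint sets above are simple coordinate-wise checks, as the following sketch illustrates (a hypothetical helper, assuming $\bm{\theta}_{g}$ is ordered as in the display above):

```python
def slope(p):
    """Slope of the line from the origin to p; infinite for points on the y-axis."""
    return p[1] / p[0] if p[0] > 0 else float("inf")

def is_valid_boundary(theta):
    """Check the constraints of Section 3.1 on
    theta = (p02, p11, p12, p21, p31, p42, p51, p52, p61)."""
    p02, p11, p12, p21, p31, p42, p51, p52, p61 = theta
    p1, p2 = (p11, p12), (p21, 1.0)   # p_{2,2} = 1 by the first constraint set
    p4, p5 = (1.0, p42), (p51, p52)   # p_{4,1} = 1 likewise
    return (p11 <= p21
            and slope(p1) >= slope(p2)
            and slope(p4) >= slope(p5)
            and p42 >= p52
            and p31 >= min(p21, p42))  # p_{3,2} = p_{3,1} is built in
```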
### 3.2 Statistical inference for the limit set boundary
With a model defined for the limit set boundary $\partial G$, we turn to the
question of how to estimate the shape from iid copies of the random vector
$\bm{X}$ in standard exponential margins. After transforming $\bm{X}$ to
pseudo-polar coordinates $(R,\bm{W})$, a convenient form (Wadsworth and
Campbell, 2024) for the conditional density of a large radius $R$, given the
angle $\bm{W}$, is
$f_{R\,|\,W}(r\,|\,\bm{w})\propto r^{d-1}\exp\\{-rg(\bm{w})[1+o(1)]\\},\quad
r\rightarrow\infty,$
where $d$ is the dimension of $\bm{X}$ (we have only considered $d=2$ here).
For likelihood-based inference, Wadsworth and Campbell (2024) show that the
$o(1)$ term can be moved outside the exponent in most cases and therefore
ignored; they consequently consider the approximation adequate for radii
larger than a threshold $r_{0}(\bm{w})$. This yields a truncated Gamma
likelihood:
$R\,|\,\bm{W}=\bm{w},R>r_{0}(\bm{w}),\bm{\theta}_{g}\sim\text{truncGamma}\left(\alpha,g_{\bm{\theta}_{g}}(\bm{w})\right).$
(1)
For most common bivariate copulas, $\alpha=2$. However, the quality of the
approximation tends to vary (for further details, see Wadsworth and Campbell,
2024), and $\alpha$ is usually treated as a parameter to be estimated. Thus,
given a gauge function $g_{\bm{\theta}_{g}}$, an approximate likelihood for the
large radii given the angles is
$L(\bm{\theta}_{g},\alpha;(r_{1},\bm{w}_{1}),\ldots,(r_{n_{0}},\bm{w}_{n_{0}}))=\prod_{i=1}^{n_{0}}\frac{g_{\bm{\theta}_{g}}(\bm{w}_{i})^{\alpha}}{\Gamma(\alpha)}\frac{r_{i}^{\alpha-1}e^{-r_{i}g_{\bm{\theta}_{g}}(\bm{w}_{i})}}{1-F(r_{0}(\bm{w}_{i});\alpha,g_{\bm{\theta}_{g}}(\bm{w}_{i}))},$
(2)
where $n_{0}$ is the number of points exceeding the threshold $r_{0}(w)$, and
$F(\,\cdot\,;\alpha,g_{\bm{\theta}_{g}}(\bm{w}_{i}))$ is the CDF of a Gamma
distribution with shape parameter $\alpha$ and rate parameter
$g_{\bm{\theta}_{g}}(\bm{w}_{i})$.
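A direct transcription of the log of the likelihood (2) is straightforward; the sketch below (ours, using SciPy's shape/scale parameterization of the Gamma distribution) evaluates it for given radii $r_i$, thresholds $r_{0}(\bm{w}_{i})$, and gauge values $g_{\bm{\theta}_{g}}(\bm{w}_{i})$:

```python
import numpy as np
from scipy.stats import gamma

def trunc_gamma_loglik(alpha, g_w, r, r0):
    """Log of the likelihood in Eq. (2): each radius r_i follows a Gamma
    distribution with shape alpha and rate g(w_i), truncated below at r0(w_i).

    scipy parameterizes the Gamma by shape and scale, so scale = 1 / rate."""
    logpdf = gamma.logpdf(r, a=alpha, scale=1.0 / g_w)
    logtail = gamma.logsf(r0, a=alpha, scale=1.0 / g_w)  # log(1 - F(r0))
    return np.sum(logpdf - logtail)
```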
Figure 4: Schematic of how to calculate $g_{\bm{\theta}_{g}}$ at a data point
$\bm{x}$, given a boundary curve $\partial G$. The value of the gauge
function is the distance from the origin to $\bm{x}$, relative to the distance
from the origin of the intersection of the ray connecting $\bm{x}$ with the
origin and the boundary $\partial G$.
To calculate the gauge function at each data point, as required in the
likelihood (2), we exploit the homogeneity property of $g$. This gives us that
the value of the gauge function evaluated at a point $\bm{x}$ is the distance
from the origin to $\bm{x}$, relative to the distance from the origin of the
intersection of the ray connecting $\bm{x}$ with the origin and the boundary
$\partial G$. In the schematic in Figure 4, the intersection with $\partial G$
is denoted as $\bm{x}_{\partial G}$, so that
$g_{\bm{\theta}_{g}}(\bm{x})=\frac{\|\bm{x}\|}{\|\bm{x}_{\partial G}\|}.$ (3)
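In practice, Eq. (3) can be evaluated numerically against any dense discretization of $\partial G$; the following sketch (a hypothetical helper, not our released code) interpolates the boundary radius at the data point's pseudo-polar angle:

```python
import numpy as np

def gauge_from_boundary(x, boundary):
    """Evaluate g(x) as in Eq. (3), using a dense polyline `boundary`
    (an (m, 2) array of points tracing dG, e.g. from a fitted Bezier spline).

    By homogeneity the ratio can be taken in any norm along the ray;
    the L1 norm matches the pseudo-polar radius R = x1 + x2."""
    r = x.sum()
    w = x[0] / r                   # pseudo-polar angle of the data point
    rb = boundary.sum(axis=1)      # L1 radius of each boundary point
    wb = boundary[:, 0] / rb       # angle of each boundary point
    order = np.argsort(wb)
    r_boundary = np.interp(w, wb[order], rb[order])
    return r / r_boundary
```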
We also need to select a threshold $r_{0}(\bm{w})$ as a function of angle.
Wadsworth and Campbell (2024) and Simpson and Tawn (2022) both chose
thresholds as functions of the angle, first as empirical quantiles of moving
windows of angle and then using smooth semiparametric quantile regression. We
employ a much simpler approach and choose a high quantile of each marginal
component as the threshold. We have found that this very basic strategy results in
estimation performance at least comparable to that of more complicated
alternatives. In addition, choosing marginal thresholds has two key
advantages. First, it is simple to implement and requires no intricate tuning.
Second, it permits, in principle, transformation to standard exponential
margins within a hierarchical model, whereas thresholds that depend jointly on
both components do not. With this in mind, we choose a value $\tau\in(0,1)$,
and then set each marginal threshold at the $\tau^{\text{th}}$ marginal
empirical quantile $q_{\tau,X_{1}}$ for $X_{1}$ and $q_{\tau,X_{2}}$ for
$X_{2}$. In pseudo-polar coordinates, this gives a radial threshold of
$r_{0}(\bm{w})=\begin{cases}\frac{q_{\tau,X_{2}}}{1-w},&\quad
w\in\left[0,\frac{q_{\tau,X_{1}}}{q_{\tau,X_{1}}+q_{\tau,X_{2}}}\right]\\\
\frac{q_{\tau,X_{1}}}{w},&\quad
w\in\left(\frac{q_{\tau,X_{1}}}{q_{\tau,X_{1}}+q_{\tau,X_{2}}},1\right]\end{cases}$
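The piecewise form above translates directly into code; a one-function sketch (with hypothetical names) is:

```python
def radial_threshold(w, q1, q2):
    """r0(w) from the marginal tau-quantiles q1 = q_{tau,X1}, q2 = q_{tau,X2};
    a point exceeds the threshold iff it is large in at least one margin."""
    return q2 / (1.0 - w) if w <= q1 / (q1 + q2) else q1 / w
```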
A sensitivity study comparing the effect of the choice of threshold on
estimation of $\eta$ is provided in the Supplementary Material (Majumder et
al., 2024a, Section A).
### 3.3 Prior distributions for control points
Our model for the limit set boundary $\partial G$ is indexed by the 9
univariate parameters, viz.
$\bm{\theta}_{g}=(p_{0,2},p_{1,1},p_{1,2},\allowbreak
p_{2,1},p_{3,1},p_{4,2},p_{5,1},p_{5,2},p_{6,1})^{\text{T}}$. To inform prior
selection for these control points, we examine the boundary $\partial G$ of
four parametric copula models, and learn the conditions on the control points
which allow the Bézier curve to mimic their shapes. The four copulas that we
consider are the Gaussian, inverted logistic, logistic, and asymmetric
logistic. The first two are AI, while the final two are AD; analytical
expressions of dependence measures for these models are provided in Table 1,
replicated from Simpson and Tawn (2022). Since the limit set boundary
$\partial G$ can take on a variety of shapes, including the AD case where it
touches the upper right-hand corner of the unit box (see e.g., Nolde and
Wadsworth, 2022, Section 4), we let the coordinates of the control points
(i.e., the members of $\bm{\theta}_{g}$) vary freely in $[0,1]$ for
flexibility, which permits the possibility of them being exactly equal to 0 or
1 to accommodate AD as seen in the logistic and asymmetric logistic copulas
and the very weak dependence as seen in the inverted logistic copula. We now
outline the support for the distributions of the control points which will
allow the Bézier splines to mimic our four copulas of interest, and then
specify priors for all parameters to be used in the remainder of this study.
The limit set boundary for an AD copula is obtained whenever $p_{2,1}=1$ or
$p_{4,2}=1$; additionally, the logistic copula is implied by
$\mathbf{p}_{2}=\mathbf{p}_{3}=\mathbf{p}_{4}=(1,1)$ (Figure 3, bottom-right).
This is equivalent to collapsing the second curve of the Bézier spline to a
single point, and can be incorporated into our model by having a semi-
continuous prior distribution for $p_{2,1},p_{3,1},\mbox{ and }p_{4,2}$ with
support over (0,1] which includes a point mass at 1. Similarly, collapsing the
second curve and having a semi-continuous prior on $p_{0,2}$ and $p_{6,1}$ can
incorporate an approximation of the limit set boundary for an asymmetric
logistic copula (Figure 3, bottom left). Finally, approximating the limit set
boundary for an inverted logistic copula using a Bézier spline would require
the first and third curves to collapse onto the $x=0$ and $y=0$ lines
respectively (Figure 3, top-right). To accommodate this case, we set semi-
continuous priors with point masses at 0 for $p_{1,1},p_{2,1},p_{4,2},\mbox{
and }p_{5,2}$. To flexibly accommodate the wide range of limit set boundary
shapes possible within our framework, we set priors as follows:
$\alpha\sim\mbox{LogNormal}(1,1),\qquad p_{1,2},p_{5,1}\stackrel{\text{iid}}{\sim}\mbox{Uniform}(0,1).$
The LogNormal prior on $\alpha$ has its density concentrated near $\alpha=2$.
The remaining control points have priors that are the mixture of a standard
Uniform distribution and at least one point mass (to allow important geometric
features of $\partial G$ with positive probability). They have the following
forms:
$\displaystyle p_{1,1}\sim 0.1\cdot\mathbb{I}(p_{1,1}=0)+0.9\cdot\mbox{Uniform}(0,1),$
$\displaystyle p_{2,1}\sim 0.1\cdot\mathbb{I}(p_{2,1}=0)+0.8\cdot\mbox{Uniform}(0,1)+0.1\cdot\mathbb{I}(p_{2,1}=1),$
$\displaystyle p_{3,1}\,|\,p_{2,1},\,p_{4,2}\sim 0.6\cdot\mbox{Uniform}(0,1)+0.4\cdot\mathbb{I}(p_{3,1}=1,\max(p_{2,1},p_{4,2})=1).$
The points $p_{5,2}$ and $p_{4,2}$ are distributed identically to $p_{1,1}$
and $p_{2,1}$, respectively. The point mass probabilities were chosen on the
basis of a sensitivity study whose aim was to have good discrimination between
the AD and AI cases based on posterior probabilities of $\eta=1$, while also
simultaneously being able to provide unbiased, consistent estimates of $\eta$
when $\eta<1$. In particular, the prior on $p_{3,1}$ ensures that it is 1 only
if AD is implied by either $p_{2,1}$ or $p_{4,2}$. These prior assumptions can
accommodate the logistic, inverted logistic, and asymmetric logistic copulas,
as well as intermediate forms such as the Gaussian copula which do not require
point masses.
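To make these mixture priors concrete, the following sketch (ours) draws from them by first selecting the point-mass component; for $p_{3,1}$ we renormalize the continuous component when the point mass is unavailable, which is one reading of the conditional prior above:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_p11():
    """0.1 point mass at 0, otherwise Uniform(0,1); p_{5,2} is sampled identically."""
    return 0.0 if rng.random() < 0.1 else rng.random()

def sample_p21():
    """Point masses of 0.1 at both 0 and 1, otherwise Uniform(0,1);
    p_{4,2} is sampled identically."""
    u = rng.random()
    if u < 0.1:
        return 0.0
    if u > 0.9:
        return 1.0
    return rng.random()

def sample_p31(p21, p42):
    """Point mass of 0.4 at 1, available only when p_{2,1} or p_{4,2} equals 1."""
    if max(p21, p42) == 1.0 and rng.random() < 0.4:
        return 1.0
    return rng.random()
```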
### 3.4 Additional bivariate extremal dependence measures
Alongside the tail dependence coefficient $\eta$ (Ledford and Tawn, 1996)
discussed in Section 2, we consider two additional indices of tail dependence
which can be derived from the gauge function. The first of these is the
angular dependence function $\lambda(\omega)$ which considers different
scalings for the two components of $\bm{X}$. In particular, Wadsworth and Tawn
(2013) considered asymptotic probabilities of the following form:
$P(X_{1}>\omega
x,X_{2}>(1-\omega)x)=\mathcal{L}_{\omega}(e^{x})e^{-x\lambda(\omega)},$
for $\omega\in[0,1]$, $\lambda(\omega)\in(0,1]$, and some function
$\mathcal{L}_{\omega}$ that is slowly varying at infinity. The function
$\lambda(\omega)$ therefore captures both extremal dependence regimes, with AD
corresponding to the pointwise lower bound of $\lambda(\omega)$. Evaluation of
$\lambda(\omega)$ for rays $\omega$ near 0 and 1 corresponds to regions where
one variable is larger than the other. In particular, $\lambda(\omega)$ is a
generalization of $\eta$, with $\eta=1/\\{2\lambda(1/2)\\}$. Murphy-Barltrop
et al. (2023) found that global estimators (such as the Simpson-Tawn
estimator) which simultaneously estimate $\lambda(\omega)$ for all values of
$\omega$ tend to provide better estimates compared to pointwise estimators
(such as the Hill estimator). Since the Bézier spline estimator is also a
global estimator of $\lambda(\omega)$, examining the fit of $\lambda(\omega)$
provides a useful comparison between the two global estimators.
We also investigate the dependence measure $\tau_{1}(\delta)$ (Simpson et al.,
2020), given by:
$P(X_{1}>x,X_{2}\leq\delta
x)=\mathcal{L}_{\delta}(e^{x})e^{-x/\tau_{1}(\delta)},$
for some function $\mathcal{L}_{\delta}$ that is slowly varying at infinity,
with $\delta\in[0,1]$. $\tau_{1}(\delta)$ is monotonically increasing in
$\delta$, with $\tau_{1}(1)=1$. This dependence measure characterizes the
probability of $X_{1}$ being large while $X_{2}$ is of a smaller order.
Specifically, if there exists a $\delta^{*}<1$ such that
$\tau_{1}(\delta^{*})=1$, it implies that $X_{1}$ can be large while $X_{2}$
is small (with $\delta$ determining just how small). If no such $\delta^{*}$
exists, then $X_{1}$ can be large only if $X_{2}$ is also large. We can define
$\tau_{2}(\delta)$ analogously, and both
$\tau_{1}(\delta),\tau_{2}(\delta)\in(0,1]$.
Table 1 provides the analytical expressions for these measures for the four
dependence copulas that we consider in our study. Like with $\eta$, these
dependence measures can be exactly deduced from limit set boundaries, and
hence from gauge functions. Since the Bézier splines are quadratic
polynomials, we can easily calculate $\tau_{1}(\delta)$ and $\lambda(\omega)$
for any estimated limit set boundary, simply by finding the intersections of
polynomials and lines, which have closed-form solutions. While we do not
present results for $\tau_{2}(\delta)$ in our study, it can be calculated in
the same manner as $\tau_{1}(\delta)$. We refer the reader to Simpson and Tawn
(2022) for a detailed discussion on how each of these measures can be obtained
from gauge functions in more general settings.
Copula | $g(\bm{x})=g(x_{1},x_{2})$
---|---
Gaussian | $\\{x_{1}+x_{2}-2\rho(x_{1}x_{2})^{1/2}\\}/(1-\rho^{2})$
Logistic | $\gamma^{-1}\max(x_{1},x_{2})+(1-\gamma^{-1})\min(x_{1},x_{2})$
Inv-Logistic | $(x_{1}^{1/\gamma}+x_{2}^{1/\gamma})^{\gamma}$
Asy-Logistic | $\begin{cases}\gamma^{-1}\max(x_{1},x_{2})+(1-\gamma^{-1})\min(x_{1},x_{2}),&\mbox{ if }\gamma<1\\\ x_{1}+x_{2},&\mbox{ if }\gamma=1\end{cases}$
(a) Gauge function $g$ for the four bivariate copulas.
Copula | $\eta$ | $\lambda(\omega)$ | $\tau_{1}(\delta)=\tau_{2}(\delta)$
---|---|---|---
Gaussian | $(1+\rho)/2$ | $\begin{cases}\max(\omega,1-\omega),&\mbox{ if }t_{\omega}\leq\rho^{2}\\\ \frac{1-2\rho\sqrt{\omega(1-\omega)}}{1-\rho^{2}},&\mbox{ if }t_{\omega}\geq\rho^{2}\end{cases}$ | $\begin{cases}1,&\mbox{ if }\delta\geq\rho^{2}\\\ \frac{1-\rho^{2}}{1+\delta-2\rho\sqrt{\delta}},&\mbox{ if }\delta\leq\rho^{2}\end{cases}$
Logistic | 1 | $\max(\omega,1-\omega)$ | $(\gamma^{-1}+1-\gamma^{-1}\delta)^{-1}$
Inv-Logistic | $2^{-\gamma}$ | $\\{\omega^{1/\gamma}+(1-\omega)^{1/\gamma}\\}^{\gamma}$ | 1
Asy-Logistic | 1 | $\max(\omega,1-\omega)$ | 1
(b) Dependence measures for the four bivariate copulas. Here,
$t_{\omega}=\min(\omega,1-\omega)/\max(\omega,1-\omega)$.
Table 1: Gauge function $g$, and a summary of dependence measures for the four
bivariate copulas used in our study. Table has been reproduced from Simpson
and Tawn (2022).
## 4 Simulation Study
### 4.1 Study Setup
We demonstrate the appropriateness of using Bézier splines to model the limit
set boundary corresponding to the gauge function by means of a simulation
study comparing its performance with the Simpson-Tawn estimator (Simpson and
Tawn, 2022). We consider four bivariate copulas to generate dependent data
with exponential marginal distributions: the Gaussian, the logistic, the
inverted logistic, and the asymmetric logistic. The Gaussian copula is
parameterized by its correlation $\rho\in[0,1)$, while the dependence
parameter for the remaining three copulas is $\gamma\in(0,1)$. Table 1 lists
the gauge functions associated with each copula, as well as the set of
corresponding extremal dependence coefficients
$\\{\eta,\lambda(\omega),\tau_{1}(\delta)\\}$. The four copulas cover a range
of AD behavior; the logistic and asymmetric logistic copulas, in particular,
are AD. Further details for these four copulas can be found in Nolde and
Wadsworth (2022). For each copula, we consider five parameter settings:
$\rho,\gamma=\\{0.3,0.4,0.5,0.6,0.7\\}$. Smaller values of $\gamma$ correspond
to stronger tail dependence, whereas larger values of $\rho$ lead to stronger
tail dependence.
For each copula and parameter combination, we generate 100 datasets of
$n=5,000$ data points each. The datasets are converted to pseudo-polar
coordinates $(R,\bm{W})$, and the $n_{0}$ points that are above the
$\tau=0.75$ quantile marginal threshold for at least one variable are used to
model the gauge function for each dataset. The $n_{0}$ radii are assumed to be
approximately distributed according to a truncated Gamma distribution with a
common shape parameter $\alpha$ and a rate parameter equal to the gauge
function evaluated at the data point. We use Metropolis updates for all
parameters and run 11,000 MCMC iterations for each dataset, discarding the
first 1,000 as burn-in. All Metropolis updates are tuned to give an acceptance
probability of 0.4, and posterior convergence is diagnosed based on the visual
inspection of trace plots. We compare our estimated limit set boundaries with
the Simpson-Tawn estimator in terms of how well they estimate the set of
dependence coefficients outlined in Table 1. The Simpson-Tawn estimator for
the datasets was evaluated using the default settings recommended by the
authors in Simpson and Tawn (2022). We use the root mean square error (RMSE)
to compare estimates of the scalar $\eta$, and the root mean integrated square
error (RMISE) to compare estimates of the functions $\lambda(\omega)$ and
$\tau_{1}(\delta)$. The methodology is implemented in R (R Core Team, 2023);
the code is available on GitHub through the BezELS package (Majumder et al.,
2024b).
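For reference, the setup for a single Gaussian-copula dataset can be reproduced in a few lines; this sketch (ours, not the BezELS code) generates the data, transforms to pseudo-polar coordinates, and applies the marginal threshold of Section 3.2:

```python
import numpy as np
from scipy.stats import norm, expon

rng = np.random.default_rng(42)
n, rho, tau = 5000, 0.5, 0.75

# Gaussian copula with standard exponential margins
Z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
X = expon.ppf(norm.cdf(Z))

# Pseudo-polar coordinates and the marginal-quantile threshold of Section 3.2
R = X.sum(axis=1)
W = X[:, 0] / R
q1, q2 = np.quantile(X[:, 0], tau), np.quantile(X[:, 1], tau)
exceed = (X[:, 0] > q1) | (X[:, 1] > q2)
print(exceed.sum(), "of", n, "points enter the truncated Gamma likelihood")
```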
### 4.2 Parameter estimates
(a) Estimated gauge functions in pseudo-polar space.
(b) Estimated gauge functions in Euclidean space.
Figure 5: Limit set boundaries based on Bézier splines (blue) and
corresponding functions for the data-generating model (red) in pseudo-polar
and Euclidean space for the Gaussian, logistic, inverted logistic, and
asymmetric logistic copulas with dependence parameters set to $0.5$.
Each panel in Figure 5(b) plots the limit set boundaries elicited by the
estimated Bézier splines based on the posterior distribution from a single
dataset with the dependence parameter set to 0.5. Plots in Figure 5(a) display
the dependence modeling in pseudo-polar coordinates. The dashed grey line
corresponds to the threshold $r_{0}(\bm{w})$ for angles $\bm{w}$. Each
estimated limit set boundary is represented as a functional boxplot (Hyndman
and Shang, 2010; Sun and Genton, 2011), a visual representation of functional
data analogous to a classical boxplot. Each functional quantile depicted in
the functional boxplot is a function contained in the sample; in this case,
the sample consists of limit set boundaries based on Bézier splines, in
pseudo-polar coordinates, drawn from the posterior. The curves are ordered
according to a notion of modified band depth (López-Pintado and Romo, 2009).
The median is shown in dark blue, and the limit set boundary corresponding to
the data-generating model is shown in red. The envelope represents the $50\%$
central region, analogous to the box of a classical boxplot. The outer light
blue lines of the functional boxplot correspond to the whiskers of a classical
boxplot. Finally, the vertical lines indicate the maximum envelope of the
functions except outliers. Plots in Figure 5(b) display the dependence models
in Euclidean coordinates, with the median Bézier spline in dark blue and the
limit set boundary corresponding to the data-generating model in red. They are
overlaid on Bézier splines evaluated from 500 draws from the posterior
distribution, plotted in gray. The boxplots on the top and right margins
correspond to the posterior distributions of $p_{2,1}$ and $p_{4,2}$
respectively, which serve as a visual indicator of asymptotic dependence in
the data. Specifically, if the median of either boxplot is 1, the posterior of
$\eta$ is 1. The Bézier splines are able to adequately represent the geometric
form of all four dependence copulas.
Figure 6 shows boxplots of posterior median of $\eta$ for the four dependence
copulas based on the Bézier spline (blue) and Simpson-Tawn estimators (green).
Analytical values of $\eta$ obtained using expressions in Table 1 are shown as
red dots in each plot, and the coverage of equi-tailed 95% intervals are noted
in plain text below each boxplot. Coverage for the Simpson-Tawn estimator is
based on 100 bootstrapped samples from each of the 100 datasets. We plot the
median instead of the mean for the Bézier spline estimates since the posterior
distributions are often highly asymmetric due to point-mass prior
distributions. The Bézier spline estimator has low bias and nominal coverage
for estimates of $\eta$. The Simpson-Tawn estimator has nominal or near-
nominal coverage in most cases except for the asymmetric logistic copula when
the dependence parameter $\gamma$ is 0.5 or higher. It also has noticeably
higher bias than the Bézier spline estimator for both AD copulas, and shows a
sharp decline in coverage as the strength of dependence drops for the
asymmetric logistic copula.
Figure 6: Sampling distribution of the posterior medians of $\eta$ based on
the Bézier spline estimate (blue) for the four dependence copulas, alongside
estimates using the Simpson-Tawn estimator (green). The red dots indicate the
true values, and coverage of equi-tailed 95% intervals are noted below each
boxplot.
(a) Sampling distribution of the posterior medians of $\lambda(0.40)$.
(b) Sampling distribution of the posterior medians of $\tau_{1}(0.25)$.
Figure 7: Sampling distribution of the posterior medians of $\lambda(0.40)$
and $\tau_{1}(0.25)$ based on the Bézier spline (blue) and Simpson-Tawn
(green) estimators for four dependence copulas, with dependence parameters set
to 0.5. The red lines indicate the true values, and coverage of equi-tailed
95% intervals are noted below each boxplot.
We evaluate $\lambda(\omega)$ and $\tau_{1}(\delta)$ based on the Bézier
spline and Simpson-Tawn estimators for
$\omega,\delta=0.01,0.02,\allowbreak\ldots,0.99$. Figure 7(a) shows boxplots
for the posterior medians of $\lambda(0.40)$ based on the two estimators, with
the corresponding dependence parameter set to 0.5. Figure 7(b) similarly plots
the distribution of $\tau_{1}(0.25)$ for the two estimators. Both estimators
are better at estimating $\lambda(\omega)$ than $\tau_{1}(\delta)$, and the
Bézier spline estimator tends to have better coverage than the Simpson-Tawn
estimator.
Table 2 summarizes the RMSE ratio for estimates of $\eta$ and RMISE ratios for
estimates of $\lambda(\omega)$ and $\tau_{1}(\delta)$ based on the Bézier
spline and Simpson-Tawn estimators. Most of the values are greater than 1,
indicating that dependence measures based on Bézier spline estimates of the
gauge function have comparable or better RMSE/RMISE than those based on the
Simpson-Tawn estimator. However, for the inverted logistic copula, the
Simpson-Tawn estimator outperforms the Bézier spline estimator when estimating
$\tau_{1}(\delta)$. Our experiments indicate that the error arises when the
posterior $p_{4,2}>\delta$; this causes the $\tau_{1}(\delta)$ estimates to be
less than 1, whereas the theoretical value for the inverted logistic copula is
1 for all $\delta$. Our approach, however, is still able to correctly estimate
$\tau_{1}(\delta)=1$ in all the inverted logistic scenarios for all but extremely
small values of $\delta$.
Table 2: RMSE ratios for estimates of $\eta$ and RMISE ratios for estimates of $\lambda(\omega)$ and $\tau_{1}(\delta)$ based on the Bézier spline $(\hat{\eta},\hat{\lambda},\mbox{ and }\hat{\tau})$ and Simpson-Tawn $(\tilde{\eta},\tilde{\lambda},\mbox{ and }\tilde{\tau})$ estimators over simulated datasets for four copulas and five dependence levels. Columns correspond to dependence parameter values.
Measure | Copula | 0.3 | 0.4 | 0.5 | 0.6 | 0.7
---|---|---|---|---|---|---
$\frac{RMSE(\tilde{\eta})}{RMSE(\hat{\eta})}$ | Gaussian | 1.02 | 1.11 | 1.04 | 1.06 | 0.81
 | Logistic | 1.38 | 0.81 | 1.36 | 1.17 | 1.07
 | Inv-Logistic | 0.70 | 0.99 | 1.11 | 1.03 | 1.01
 | Asy-Logistic | 2.39 | 2.01 | 2.23 | 2.37 | 2.39
$\frac{RMISE(\tilde{\lambda})}{RMISE(\hat{\lambda})}$ | Gaussian | 1.19 | 1.18 | 1.05 | 0.94 | 0.75
 | Logistic | 1.35 | 0.61 | 1.10 | 1.04 | 1.28
 | Inv-Logistic | 0.82 | 1.10 | 1.20 | 1.10 | 1.04
 | Asy-Logistic | 6.17 | 6.65 | 10.91 | 7.43 | 5.67
$\frac{RMISE(\tilde{\tau})}{RMISE(\hat{\tau})}$ | Gaussian | 0.85 | 1.00 | 1.08 | 1.17 | 1.18
 | Logistic | 1.00 | 0.95 | 0.89 | 0.83 | 1.00
 | Inv-Logistic | 0.23 | 0.27 | 0.17 | 0.28 | 0.17
 | Asy-Logistic | 2.38 | 2.96 | 1.77 | 1.72 | 1.23
Table 3 provides the number of datasets (out of 100) in each scenario where
the posterior median of $\eta$ is estimated to be 1. The values in
parentheses are the corresponding counts based on the Simpson-Tawn point
estimates. For the Bézier spline estimator, the counts were always near 0 for
the AI copulas and always high for the AD copulas.
While there were some cases where the Bézier spline estimates the posterior of
$\eta$ as 1 when the dependence is high in an AI copula, both methods are good
at estimating AI behaviour correctly. On the other hand, the Simpson-Tawn
estimates show a decline in their ability to estimate the correct value of
$\eta=1$ for the AD copulas when dependence is low. This is especially
noticeable for the asymmetric logistic copula, and has been documented by
Simpson and Tawn (2022) as well. The Bézier spline estimator is much better at
predicting AD correctly across all scenarios. We conclude that the Bézier
splines are adept at representing limit set boundaries associated with common
parametric copula models, and are also flexible enough to represent a wider
variety of edge cases. In all cases, the true value of $\eta$ was well-
estimated from the posterior distribution, and the model is particularly adept
at identifying AD.
Table 3: Number of datasets (out of 100) where the posterior median of $\eta$ is 1 for each scenario. Values in parentheses correspond to the Simpson-Tawn estimator. Columns correspond to dependence parameter values.
Copula | 0.3 | 0.4 | 0.5 | 0.6 | 0.7
---|---|---|---|---|---
Gaussian | 00 (00) | 00 (00) | 00 (00) | 00 (00) | 06 (00)
Logistic | 98 (91) | 93 (79) | 96 (77) | 91 (74) | 82 (63)
Inv-Logistic | 02 (00) | 00 (00) | 00 (00) | 00 (00) | 00 (00)
Asy-Logistic | 96 (30) | 80 (25) | 82 (22) | 83 (14) | 84 (00)
### 4.3 Additional simulation studies
Two additional simulation studies are presented in the Supplementary Material
(Majumder et al., 2024a). In both cases, the results are compared based on
RMSE/RMISE values of bivariate extremal dependence coefficients, as well as
how often each method estimates $\eta=1$. The first study considers how our relatively
straightforward method of selecting the quantile threshold affects the
estimation process. This is carried out by comparing it against an ‘oracle’
threshold which is an asymptotic approximation to the true conditional
quantile $q_{\tau}(w)$ and requires knowledge of the true gauge function to
compute. We are unable to find any meaningful improvement in the estimates
when we consider an oracle threshold, which indicates that our choice of
threshold is adequate for the scenarios we have considered. The second study
repeats the simulation study presented in this section, but for a sample size
$n=600$. This is carried out to ensure that our approach is still valid for
small data sizes like the one that arises in our second application, presented
in the following section. Our results indicate that despite slightly higher
RMISE values and slightly lower coverage, the Bézier splines can still capture
the shape of the true limit set boundary with low bias. Comparisons with the
Simpson-Tawn estimator provide results that are quite similar to the ones
presented in this section.
## 5 Applications
### 5.1 Analysis of the Santa Ana Winds data
Figure 8: Santa Ana wind speeds and dryness measured at the March Air Force
Base station. Data above the $0.75$ marginal quantile threshold are in red.
We apply our method to the Santa Ana winds and dryness data (Cooley et al.,
2019). The Santa Ana winds are a multivariate meteorological regime that has
been implicated as a major driver of large wildfires in southern California
(Billmire et al., 2014). Wildfires are related to several conditions like
temperature, humidity, wind speed, and fuel supply (Littell et al., 2018).
Historically, the autumn months of September, October, and November have had a
higher number of wildfires compared to the winter months, and are associated
with warm temperatures, low humidity, and high winds. Cooley et al. (2019)
surmised that the data exhibits AD and used the framework of regular variation
to estimate probabilities associated with two different risk regions. The
regular variation structure employed in their analysis, however, cannot
capture the nuance of AI: it models all levels of asymptotic independence
(including exact independence) within a single degenerate model.
We consider daily dryness (%) and wind speed (m/s) data collected at the March
Air Reserve Base station in Riverside County from the HadISD dataset (Dunn et
al., 2012). Dryness is defined here as $100-\mathrm{RH}$, where RH is
the relative humidity measured as a percentage. The bivariate time series
represents a measure of the daily risk of fire, and measurements at this
station have previously been associated with known Santa Ana events. The data consists of
3,902 days for the months of September–November from 1972–2015; we assume
temporal stationarity as in Cooley et al. (2019). Figure 8 plots the data both
in its original scale as well as the rank transformed scale. The data shows
tail dependence, noticeable in the rank transformed data with a cluster of
values in the upper right corner. Our goal is to study the tail dependence
between the two variables by estimating a gauge function for the data after a
further transformation to unit exponential margins.
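As a sketch of this preprocessing step (the simulated stand-in data and variable names are ours, not part of the analysis), the rank transform to unit exponential margins and the count of joint-margin exceedances can be computed as follows:

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated stand-in for the 3,902-day wind speed / dryness series;
# the actual analysis uses the HadISD data described above.
data = rng.gamma(shape=2.0, scale=1.0, size=(3902, 2))

def to_exponential_margins(x):
    """Rank-transform each column to approximate unit exponential margins."""
    n = len(x)
    ranks = np.argsort(np.argsort(x, axis=0), axis=0) + 1
    u = ranks / (n + 1)        # pseudo-uniform scores in (0, 1)
    return -np.log(1.0 - u)    # probability integral transform to Exp(1)

z = to_exponential_margins(data)
q = np.quantile(z, 0.75, axis=0)   # tau = 0.75 marginal threshold
exceed = (z > q).any(axis=1)       # above threshold in at least one margin
print(exceed.sum(), "exceedances out of", len(z))
```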
We analyze this data at a threshold of $\tau=0.75$, providing us with 1,529
points that are above the threshold in at least one margin. The conditional
distribution of the radii is assumed to follow a truncated Gamma distribution
for this analysis. We run 2 MCMC chains of 11,000 iterations each, discarding the
first 1,000 from each chain as burn-in. The priors and the remainder of the
MCMC settings are identical to the simulation study.
Figure 9: Limit set boundaries based on Bézier splines in pseudo-polar space
(left) and Euclidean space (right) for assessing tail dependence between Santa
Ana windspeed ($X_{1}$) and dryness ($X_{2}$). Median curves are plotted in
dark blue.
Figure 10: PP plot (left) and exponential QQ plot (right) for the truncated
Gamma model fitted to the Santa Ana winds data. The black points correspond to
the fit for the median Bézier spline. Gray lines are based on 100 random draws
from the posterior.
Figure 9 plots the estimated limit set boundaries, with the median curve
plotted in blue. The plot on the left is in pseudo-polar coordinates and
depicts a functional boxplot of the estimated limit set boundary, while the
plot on the right is in Euclidean coordinates. The posterior median of $\eta$
is estimated to be 1, and $\mbox{P}(\eta=1\,|\,\bm{X})=0.60$, suggesting AD
between wind speed and dryness. Figure 10 evaluates the goodness-of-fit for
the truncated Gamma model in terms of PP plots as well as QQ plots, where each
point is transformed to unit exponential using the shape and rate parameters
of the assumed truncated Gamma distribution. In both cases, the black points
are obtained from Gamma parameters implied by the median curve of the limit
set boundary, while the gray lines correspond to 100 random draws from the
posterior. Both plots suggest that the model fits the data adequately.
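These diagnostics amount to a probability integral transform of the radii under the fitted truncated Gamma model, followed by a map to the unit exponential scale. A minimal sketch, with made-up shape, rate, and truncation values standing in for the MCMC output:

```python
import numpy as np
from scipy import stats

# Made-up values standing in for posterior draws of the truncated Gamma
# parameters and for the observed radii.
shape, rate, r0 = 2.5, 1.2, 0.8
radii = stats.gamma.rvs(shape, scale=1 / rate, size=2000, random_state=0)
radii = radii[radii > r0]              # crude stand-in for truncated data

# Truncated-Gamma CDF values, then map to the unit exponential scale.
cdf = stats.gamma.cdf(radii, shape, scale=1 / rate)
cdf0 = stats.gamma.cdf(r0, shape, scale=1 / rate)
u = (cdf - cdf0) / (1 - cdf0)
e = np.sort(-np.log1p(-u))             # empirical exponential quantiles
p = (np.arange(1, len(e) + 1) - 0.5) / len(e)
theo = -np.log1p(-p)                   # theoretical exponential quantiles
print(np.corrcoef(e, theo)[0, 1])      # near 1 indicates an adequate fit
```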
To test sensitivity to the threshold level, we repeated the experiment at
additional threshold levels of $\tau=0.70,0.80,\mbox{ and }0.90$. In all three
additional cases, $\mbox{P}(\eta=1\,|\,\bm{X})>0.50$, with the smallest value
(of 0.56) occurring for $\tau=0.90$ and the largest value (of 0.64) for
$\tau=0.70$. Taken together, we conclude that dryness and windspeed in the
Santa Ana dataset are asymptotically dependent with high probability,
consistent with the results of Cooley et al. (2019).
### 5.2 Analysis of Ozone concentration data
Figure 11: PP plot (left) and exponential QQ plot (right) for the truncated
Gamma model fitted to the Ozone concentration data. The gray lines correspond
to the fit for the median Bézier splines at 100 random locations.
For our second study, we consider air pollution measurement data across the US
from the Community Multiscale Air Quality (CMAQ) model (Binkowski and Roselle,
2003; Wyat Appel et al., 2007, 2008) as well as EPA Air Quality System (AQS)
(US EPA, 2017) data for the contiguous US. While CMAQ is a numerical model
available across the entire country at a 12km resolution, the AQS dataset
consists of observations monitored at $1,376$ stations across the US. Among
them, only 519 stations had over 600 observations; we restrict this analysis
to those stations. The full dataset has previously been used by Gong et al. (2021)
to develop a combined data product for 12 air pollutants. When fusing data
products, it is important to calibrate the model data to ground truth. In our
application, we assess the strength of the dependence between the AQS and
CMAQ datasets for ozone, one of the 12 pollutants available in both
datasets.
Our data consists of daily ozone readings for the months of July–September
from 2010–2014, resulting in a bivariate time series of CMAQ and AQS data for
up to 610 days at each station. The sample correlations between the AQS and
CMAQ data for the 519 stations range from 0.29–0.86 with a median of 0.69,
suggesting a high level of agreement in the bulk of the distribution. To
assess tail dependence, we fit a gauge function with truncated Gamma
likelihood for data censored at the $\tau=0.75$ threshold independently at every
station. We run 2 MCMC chains of 11,000 iterations each on each station’s
data, discarding the first 1,000 as burn-in.
Figure 12: Posterior median of $\eta$ at 519 AQS monitoring stations in the
US.
The posterior median of $\eta$ has an average value of 0.81 across the 519
locations, and is 1 (AD) for 79 of those stations. This suggests that the CMAQ
data product can adequately represent the tail behavior of observational
ambient ozone. Figure 12 maps the posterior median of $\eta$ based on the
truncated Gamma model. While we are unable to discern any spatial pattern for
high or low posterior values of $\eta$ from the map, we do note that several
of the low values are in urban areas with high population densities.
Finally, to study the sensitivity of our posterior to $\tau$, we repeated our
analysis at two additional threshold levels of $\tau=0.70\mbox{ and }0.80$.
Estimates from both these cases were quite similar to the baseline case of
$\tau=0.75$, with correlations $>0.90$ for both the posterior median of $\eta$
and $\mbox{P}(\eta=1)$. There were 76 and 80 locations respectively where the
posterior median of $\eta$ was 1, and 62 of those locations were shared with
the baseline case. Thus, our results are not very sensitive to the choice of
threshold for small data sizes.
## 6 Discussion
Key aspects of tail dependence in multivariate distributions can be described
through their corresponding gauge functions. In this study, we propose a semi-
parametric method for estimating gauge functions by modeling their unit level
sets as Bézier splines composed of three quadratic Bézier curves. The splines
can represent the gauge function unit level sets, and hence limit set
boundaries, of varying shapes, and are parsimoniously parameterized by a small
number of control points. The quadratic specification makes it straightforward
to obtain analytical solutions for the shape of the limit set, and constraints
on the control points ensure that the resultant shapes are valid limit set
boundaries. Bayesian estimation of the Bézier splines requires only standard
MCMC techniques and allows important cases on the edge of the parameter space
to be represented by employing mixture priors with point masses. We
demonstrate the efficacy of our model using numerical studies as well as two
real data applications involving fire weather in California, and ambient air
pollution from ozone across the US.
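To make the construction concrete, a quadratic Bézier curve with control points $p_{0},p_{1},p_{2}$ is the Bernstein-form combination $B(t)=(1-t)^{2}p_{0}+2t(1-t)p_{1}+t^{2}p_{2}$ for $t\in[0,1]$. A minimal sketch (the control points below are illustrative only; in the model they are constrained so that the three-piece spline is a valid star-shaped limit set boundary):

```python
import numpy as np

def quadratic_bezier(p0, p1, p2, t):
    """Evaluate a quadratic Bezier curve in Bernstein form at parameters t."""
    t = np.asarray(t)[:, None]
    return (1 - t) ** 2 * p0 + 2 * t * (1 - t) * p1 + t ** 2 * p2

# Illustrative control points for one of the three curve pieces.
p0, p1, p2 = np.array([0.0, 1.0]), np.array([0.8, 0.9]), np.array([1.0, 1.0])
piece = quadratic_bezier(p0, p1, p2, np.linspace(0.0, 1.0, 101))
print(piece[0], piece[-1])   # the curve interpolates its end control points
```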
We have only considered bivariate random vectors here, but the modeling
strategy can scale to three dimensions by using Bézier surfaces, with the
control points of constituent Bézier curves set in $\mathbb{R}^{3}$ instead of
$\mathbb{R}^{2}$. It will, however, require more complicated constraints to
ensure that the star-shaped property holds, and dimensions greater than three
appear to be infeasible. In addition, it appears possible to extend our
modeling framework to include negative dependence by transforming to Laplace
margins rather than exponential margins. This has been previously suggested
(Simpson and Tawn, 2022; Wadsworth and Campbell, 2024) and recently
implemented in the context of radially stable Pareto distributions
(Papastathopoulos et al., 2023), and within a semi-parametric angular-radial
model (Mackay and Jonathan, 2023). Implementing our Bézier model in Laplace
margins (in two dimensions) would require between two and four times the
number of control points we have now, specifying appropriate constraints on
their support, and more sophisticated sampling algorithms to ensure
convergence. Finally, while three Bézier curves are sufficient to ensure that
the boundary of our estimated limit set in the two-dimensional case exactly
touches the corners of the unit box, the curves themselves do not
necessarily need to be quadratic. It is possible to use a nonparametric
Bayesian framework to construct curves that have an arbitrary number of
intermediate control points (i.e., control points excluding the start and end
points). Though this would be computationally more expensive, the resulting
Bézier spline is likely to have broader support over the space of all limit
set boundaries.
## Funding
This work was supported by grants from the Southeast National Synthesis
Wildfire and the United States Geological Survey’s National Climate Adaptation
Science Center (G21AC10045), and the National Science Foundation (DMS-2001433,
DMS-2152887).
## References
* Balkema et al. (2010) Balkema, A. A., Embrechts, P., and Nolde, N. (2010), “Meta densities and the shape of their sample clouds,” J. Multivariate Anal., 101, 1738–1754.
* Balkema and Nolde (2010) Balkema, G. and Nolde, N. (2010), “Asymptotic independence for unimodal densities,” Adv. in Appl. Probab., 42, 411–432.
* Billmire et al. (2014) Billmire, M., French, N. H. F., Loboda, T., Owen, R. C., and Tyner, M. (2014), “Santa Ana winds and predictors of wildfire progression in southern California,” Int. J. Wildland Fire, 23, 1119–1129.
* Binkowski and Roselle (2003) Binkowski, F. S. and Roselle, S. J. (2003), “Models-3 Community Multiscale Air Quality (CMAQ) model aerosol component 1. Model description,” J. Geophys. Res. - Atmos., 108.
* Boldi and Davison (2007) Boldi, M.-O. and Davison, A. C. (2007), “A mixture model for multivariate extremes,” Journal of the Royal Statistical Society: Series B (Statistical Methodology), 69, 217–229.
* Cooley et al. (2019) Cooley, D., Hunter, B. D., and Smith, R. L. (2019), “Univariate and multivariate extremes for the environmental sciences,” in Handbook of environmental and ecological statistics, Chapman and Hall/CRC, pp. 153–180.
* Davis et al. (1988) Davis, R. A., Mulrow, E., and Resnick, S. I. (1988), “Almost sure limit sets of random samples in ${\bf R}^{d}$,” Adv. in Appl. Probab., 20, 573–599.
* de Carvalho et al. (2022) de Carvalho, M., Kumukova, A., and dos Reis, G. (2022), “Regression-type analysis for multivariate extreme values,” Extremes, 25, 595–622.
* Dunn et al. (2012) Dunn, R. J. H., Willett, K. M., Thorne, P. W., Woolley, E. V., Durre, I., Dai, A., Parker, D. E., and Vose, R. S. (2012), “HadISD: a quality-controlled global synoptic report database for selected variables at long-term stations from 1973–2011,” Clim. Past, 8, 1649–1679.
* Farouki (2012) Farouki, R. T. (2012), “The Bernstein polynomial basis: a centennial retrospective,” Comput. Aided Geom. D., 29, 379–419.
* Gong et al. (2021) Gong, W., Reich, B. J., and Chang, H. H. (2021), “Multivariate spatial prediction of air pollutant concentrations with INLA,” Environ. Res. Commun., 3, 101002.
* Hanson et al. (2017) Hanson, T. E., de Carvalho, M., and Chen, Y. (2017), “Bernstein polynomial angular densities of multivariate extreme value distributions,” Statistics & Probability Letters, 128, 60–66.
* Hazewinkel (2012) Hazewinkel, M. (2012), Encyclopaedia of Mathematics, no. Vol. 1 in Encyclopaedia of Mathematics, Springer Netherlands.
* Heffernan and Tawn (2004) Heffernan, J. E. and Tawn, J. A. (2004), “A conditional approach for multivariate extreme values,” J. R. Stat. Soc. Ser. B Stat. Methodol., 66, 497–546, with discussions and reply by the authors.
* Hill (1975) Hill, B. M. (1975), “A Simple General Approach to Inference About the Tail of a Distribution,” Ann. Stat., 3, 1163–1174.
* Hyndman and Shang (2010) Hyndman, R. J. and Shang, H. L. (2010), “Rainbow Plots, Bagplots, and Boxplots for Functional Data,” J. Computat. Graph. Stat., 19, 29–45.
* Kinoshita and Resnick (1991) Kinoshita, K. and Resnick, S. I. (1991), “Convergence of scaled random samples in ${\bf R}^{d}$,” Ann. Probab., 19, 1640–1663.
* Ledford and Tawn (1996) Ledford, A. W. and Tawn, J. A. (1996), “Statistics for near independence in multivariate extreme values,” Biometrika, 83, 169–187.
* Littell et al. (2018) Littell, J. S., McKenzie, D., Wan, H. Y., and Cushman, S. A. (2018), “Climate Change and Future Wildfire in the Western United States: An Ecological Approach to Nonstationarity,” Earth’s Future, 6, 1097–1111.
* López-Pintado and Romo (2009) López-Pintado, S. and Romo, J. (2009), “On the Concept of Depth for Functional Data,” J. Am. Stat. Assoc., 104, 718–734.
* Mackay and Jonathan (2023) Mackay, E. and Jonathan, P. (2023), “Modelling multivariate extremes through angular-radial decomposition of the density function,” arXiv preprint, arXiv:2310.12711.
* Majumder et al. (2024a) Majumder, R., Reich, B. J., Shaby, B. A., and Cooley, D. S. (2024a), “Supplement to ‘Semiparametric Estimation of the Shape of the Limiting Multivariate Point Cloud’.”
* Majumder et al. (2024b) Majumder, R., Shaby, B. A., Reich, B. J., and Cooley, D. S. (2024b), BezELS: Bezier splines for Estimating Limit Sets, R package version 0.1.0.
* Marcon et al. (2017) Marcon, G., Padoan, S., Naveau, P., Muliere, P., and Segers, J. (2017), “Multivariate nonparametric estimation of the Pickands dependence function using Bernstein polynomials,” J. Stat. Plan. Infer., 183, 1–17.
* Marcon et al. (2014) Marcon, G., Padoan, S. A., Naveau, P., and Muliere, P. (2014), “Nonparametric estimation of the dependence among multivariate rainfall maxima,” in METMA VII-GRASPA 14.
* Murphy-Barltrop et al. (2023) Murphy-Barltrop, C., Wadsworth, J., and Eastoe, E. (2023), “Improving estimation for asymptotically independent bivariate extremes via global estimators for the angular dependence function,” arXiv preprint, arXiv:2303.13237.
* Nolde (2014) Nolde, N. (2014), “Geometric interpretation of the residual dependence coefficient,” J. Multivariate Anal., 123, 85–95.
* Nolde and Wadsworth (2022) Nolde, N. and Wadsworth, J. L. (2022), “Linking representations for multivariate extremes via a limit set,” Adv. Appl. Probab., 54, 688–717.
* Padoan and Rizzelli (2022) Padoan, S. A. and Rizzelli, S. (2022), “Consistency of Bayesian inference for multivariate max-stable distributions,” The Annals of Statistics, 50, 1490–1518.
* Papastathopoulos et al. (2023) Papastathopoulos, I., De Monte, L., Campbell, R., and Rue, H. (2023), “Statistical inference for radially-stable generalized Pareto distributions and return level-sets in geometric extremes,” arXiv preprint, arXiv:2310.06130.
* R Core Team (2023) R Core Team (2023), R: A Language and Environment for Statistical Computing, R Foundation for Statistical Computing, Vienna, Austria.
* Sabourin and Naveau (2014) Sabourin, A. and Naveau, P. (2014), “Bayesian Dirichlet mixture model for multivariate extremes: a re-parametrization,” Computational Statistics & Data Analysis, 71, 542–567.
* Sabourin et al. (2013) Sabourin, A., Naveau, P., and Fougères, A.-L. (2013), “Bayesian model averaging for multivariate extremes,” Extremes, 16, 325–350.
* Simpson and Tawn (2022) Simpson, E. S. and Tawn, J. A. (2022), “Estimating the limiting shape of bivariate scaled sample clouds for self-consistent inference of extremal dependence properties,” arXiv preprint, arXiv:2207.02626.
* Simpson et al. (2020) Simpson, E. S., Wadsworth, J. L., and Tawn, J. A. (2020), “Determining the dependence structure of multivariate extremes,” Biometrika, 107, 513–532.
* Sun and Genton (2011) Sun, Y. and Genton, M. G. (2011), “Functional Boxplots,” J. Computat. Graph. Stat., 20, 316–334.
* US EPA (2017) US EPA (2017), “Air Quality System Database,” accessed on 3 August 2017.
* Vettori et al. (2018) Vettori, S., Huser, R., and Genton, M. G. (2018), “A comparison of dependence function estimators in multivariate extremes,” Stat. Comput., 28, 525–538.
* Wadsworth and Campbell (2024) Wadsworth, J. L. and Campbell, R. (2024), “Statistical inference for multivariate extremes via a geometric approach,” Journal of the Royal Statistical Society Series B: Statistical Methodology, qkae030.
* Wadsworth and Tawn (2013) Wadsworth, J. L. and Tawn, J. A. (2013), “A new representation for multivariate tail probabilities,” Bernoulli, 19, 2689–2714.
* Wyat Appel et al. (2008) Wyat Appel, K., Bhave, P. V., Gilliland, A. B., Sarwar, G., and Roselle, S. J. (2008), “Evaluation of the community multiscale air quality (CMAQ) model version 4.5: Sensitivities impacting model performance; Part II—particulate matter,” Atmos. Environ., 42, 6057–6066.
* Wyat Appel et al. (2007) Wyat Appel, K., Gilliland, A. B., Sarwar, G., and Gilliam, R. C. (2007), “Evaluation of the Community Multiscale Air Quality (CMAQ) model version 4.5: Sensitivities impacting model performance: Part I—Ozone,” Atmos. Environ., 41, 9603–9615.
$\displaystyle\cdots+\left(3c_{(2)}+c_{(1,1)}\right)(\kappa)\,S_{(12,4,2,2,1,1)}(t)$
$\displaystyle+\left(c_{(2)}+3c_{(1,1)}\right)(\kappa)\,S_{(9,7,2,2,1,1)}(t)+\cdots,$
where each $c_{\mu}(\kappa)$ is the Schur polynomial of
$(\kappa_{1}^{3},\kappa_{2}^{3})$ associated with the Young diagram $\mu$.
Note that the Young diagrams $\lambda$ in $S_{\lambda}(t)$ are obtained by
adding a horizontal domino $(2)$ or a vertical domino $(1,1)$ to the previous
diagrams. Also note that each coefficient is a
homogeneous polynomial but not a single Schur polynomial. The general formula
of the coefficients $c_{\hat{\mu}}(\kappa)$ for
$\kappa=(\kappa_{1},\ldots,\kappa_{m})$ will be discussed elsewhere.
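For the two small diagrams appearing above, these coefficients can be computed directly. A minimal sketch computing $c_{(2)}$ and $c_{(1,1)}$ as Schur polynomials in $(\kappa_{1}^{3},\kappa_{2}^{3})$ via the standard Jacobi-Trudi determinant:

```python
import sympy as sp

k1, k2 = sp.symbols("kappa_1 kappa_2")
a, b = k1**3, k2**3   # the variables (kappa_1^3, kappa_2^3) from the text

def h(k, x, y):
    """Complete homogeneous symmetric polynomial h_k in two variables."""
    return sum(x**i * y**(k - i) for i in range(k + 1))

def schur(lam, x, y):
    """Schur polynomial s_lambda(x, y) via the Jacobi-Trudi determinant."""
    n = len(lam)
    M = sp.Matrix(n, n, lambda i, j: h(lam[i] - i + j, x, y)
                  if lam[i] - i + j >= 0 else 0)
    return sp.expand(M.det())

print(schur((2,), a, b))     # c_{(2)}:   kappa_1^6 + kappa_1^3*kappa_2^3 + kappa_2^6
print(schur((1, 1), a, b))   # c_{(1,1)}: kappa_1^3*kappa_2^3
```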
## 6\. Proof of Theorem 5.1
We first summarize the notation and then prove the theorem in several steps.
* (1)
The $lmq$-dimensional base for the exponential functions is given by
$\hat{\mathsf{E}}=\left([\hat{\mathsf{E}}^{(q-1)}]_{m},~{}[\hat{\mathsf{E}}^{(q-2)}]_{m},~{}\ldots,~{}[\hat{\mathsf{E}}^{(0)}]_{m}\right)\qquad\text{with}\qquad
q=n_{l,k}=\left\lceil\frac{l-1}{k}\right\rceil,$
where $[\hat{\mathsf{E}}^{(j)}]_{m}$ is the $lm$-dimensional row vector
defined by
$[\hat{\mathsf{E}}^{(j)}]_{m}=\left(\hat{\mathsf{E}}^{(j)}(\kappa_{1}),~{}\hat{\mathsf{E}}^{(j)}(\kappa_{2}),~{}\ldots,~{}\hat{\mathsf{E}}^{(j)}(\kappa_{m})\right)\quad\text{with}\quad\hat{\mathsf{E}}^{(j)}(\kappa_{i})=\frac{1}{j!}\frac{\partial^{j}}{\partial\kappa_{i}^{j}}\kappa_{i}^{j}\mathsf{E}(\kappa_{i}).$
Each $l$-dimensional vector $\mathsf{E}(\kappa_{i})$ is given by
$\mathsf{E}(\kappa_{i})=\left(E_{1}(t,\kappa_{i}),~{}E_{2}(t,\kappa_{i}),~{}\ldots,~{}E_{l}(t,\kappa_{i})\right)\quad\text{with}\quad
E_{p}(t,\kappa_{i})=\exp\left(\mathop{\textstyle\sum}\limits_{n=1}^{\infty}(\kappa_{i}\omega_{l}^{p-1})^{n}t_{n}\right).$
The Schur expansion of $\hat{\mathsf{E}}^{(j)}(\kappa_{i})$ is expressed by
$\hat{\mathsf{E}}^{(j)}(\kappa_{i})=(1,~{}p_{1}(t),~{}p_{2}(t),~{}p_{3}(t),~{}\ldots)\,\hat{K}^{(j)}(\kappa_{i}),$
where $\hat{K}^{(j)}$ is the $\infty\times l$ matrix whose $n$th row is given
by
$\text{Row}_{n}\left(\hat{K}^{(j)}(\kappa_{i})\right)=\binom{n+j}{j}\kappa_{i}^{n}\Omega_{l}^{n}\quad\text{for}\quad
n=0,1,2,\ldots$
Here
$\Omega_{l}^{n}=(1,\omega_{l}^{n},\omega_{l}^{2n},\ldots,\omega_{l}^{n(l-1)})$
with $\omega_{l}=\exp(2\pi i/l)$. The Schur expansion of the base
$\hat{\mathsf{E}}$ is then expressed by
$\hat{\mathsf{E}}=(1,~{}p_{1}(t),~{}p_{2}(t),~{}\ldots)\,\hat{K},$
where
$\hat{K}=\left([\hat{K}^{(q-1)}]_{m},\ldots,[\hat{K}^{(0)}]_{m}\right)\quad\text{with}\quad[\hat{K}^{(j)}]_{m}=\left(\hat{K}^{(j)}(\kappa_{1}),\ldots,\hat{K}^{(j)}(\kappa_{m})\right).$
* (2)
The $A$-matrix is the $N\times mlq$ matrix given by
$A=\begin{pmatrix}I_{m}\otimes[\Omega_{l}]^{n_{1}}&0&\cdots&0\\\
0&I_{m}\otimes[\Omega_{l}]^{n_{2}}&\cdots&0\\\ \vdots&\ddots&\ddots&\vdots\\\
0&\cdots&0&I_{m}\otimes[\Omega_{l}]^{n_{q}}\end{pmatrix}\quad\text{with}\quad[\Omega_{l}]^{n}=\begin{pmatrix}\Omega_{l}^{1}\\\
\Omega_{l}^{2}\\\ \vdots\\\ \Omega_{l}^{n}\end{pmatrix},$
where $N=\mathop{\textstyle\sum}\limits_{i=1}^{q}mn_{i}$ with
$n_{i}=k(i-1)+1$. Calculating $\hat{K}A^{T}$, we have
$\hat{K}\,A^{T}=\left([B^{(q-1)}]^{n_{1}}_{m},~{}[B^{(q-2)}]^{n_{2}}_{m},~{}\ldots,~{}[B^{(0)}]^{n_{q}}_{m}\right).$
Each element $[B^{(j)}]^{n_{q-j}}_{m}$ is the $\infty\times mn_{q-j}$ matrix,
$[B^{(j)}]^{n_{q-j}}_{m}=\left([B^{(j)}(\kappa_{1})]^{n_{q-j}},~{}[B^{(j)}(\kappa_{2})]^{n_{q-j}},~{}\ldots,~{}[B^{(j)}(\kappa_{m})]^{n_{q-j}}\right).$
The $n$th column of the $\infty\times n_{q-j}$ matrix
$[B^{(j)}(\kappa_{i})]^{n_{q-j}}$ is given by
$\text{Col}_{n}\left([B^{(j)}(\kappa_{i})]^{n_{q-j}}\right)=\left(\underbrace{0,\ldots,0}_{l-n},~{}l\binom{l+j-n}{j}\kappa_{i}^{l-n},~{}\underbrace{0,\ldots,0}_{l-1},~{}l\binom{2l+j-n}{j}\kappa_{i}^{2l-n},~{}0,\ldots\right)^{T}.$
* (3)
Consider the embedding $\sigma_{K}:A\mapsto\widetilde{\hat{K}A^{T}}$. Then the
corresponding element of UGM can be expressed as follows: For each $0\leq
j\leq q-1$ and $1\leq n\leq n_{q-j}$, we have
$\displaystyle\psi_{j,n}(z,\kappa_{i}):$
$\displaystyle=(z^{-N+1},~{}z^{-N+2},~{}\ldots,~{}z^{-1},~{}1,~{}z^{1},~{}\ldots)\,\text{Col}_{n}\left([B^{(j)}(\kappa_{i})]^{n_{q-j}}\right)$
$\displaystyle=lz^{-N+1}\mathop{\textstyle\sum}\limits_{r=1}^{\infty}\binom{rl+j-n}{j}(\kappa_{i}z)^{rl-n}$
$\displaystyle=\left.lz^{-N+1}\frac{1}{j!}\frac{d^{j}}{du^{j}}\left(\mathop{\textstyle\sum}\limits_{r=1}^{\infty}u^{rl+j-n}\right)\right|_{u=\kappa_{i}z}$
$\displaystyle=\left.lz^{-N+1}\frac{1}{j!}\frac{d^{j}}{du^{j}}\left(\frac{u^{l+j-n}}{1-u^{l}}\right)\right|_{u=\kappa_{i}z},$
which form the set of $N$ elements in the base
$\\{\phi_{-i}(z):i\in\mathbb{N}_{0}\\}$ for a point of UGM, i.e. we assign
$\\{\phi_{-i}(z):0\leq i\leq N-1\\}=\\{\psi_{j,n}(z,\kappa_{i}):0\leq j\leq
q-1,~{}1\leq n\leq n_{q-j},~{}1\leq i\leq m\\}.$
The remaining elements of the base $\\{\phi_{-i}(z):i\in\mathbb{N}_{0}\\}$ are
given by $\\{\phi_{-i}(z)=z^{-i}:i\geq N\\}$.
* (4)
Then we note that the set $\\{\phi_{-i}(z):0\leq i\leq N-1\\}$ as a base of an
$N$-dimensional vector space is equivalent to
$\left\\{\frac{z^{-N-s_{j}+l}}{(1-(\kappa_{i}z)^{l})^{j+1}}:~{}0\leq
j<q,~{}0\leq s_{j}<n_{q-j},~{}1\leq i\leq m\right\\}.$
The other elements of the base are given by $\\{\phi_{-i}(z)=z^{-i}:i\geq
N\\}$.
Now multiplying the base by
$z^{N-mql}\prod_{i=1}^{m}(1-(\kappa_{i}z)^{l})^{q}$, we have a set of
polynomials in $z^{-1}$ of the form
$\\{\phi_{-i}(z)=z^{-i}F_{i}(z):i\in\mathbb{N}_{0}\\}$ where $F_{i}(z)$ is a
polynomial in $z$ with $F_{0}(z)=1$ and $\text{deg}(F_{i}(z))<i$ for $i\geq
1$. More precisely, we have the following: First we have
$\displaystyle\frac{z^{-(mq-1)l-s_{j}}}{(1-(\kappa_{i}z)^{l})^{j+1}}\prod_{p=1}^{m}(1-(\kappa_{p}z)^{l})^{q}$
$\displaystyle=$ $\displaystyle z^{-(mq-1)l-s_{j}}\prod_{p\neq
i}(1-(\kappa_{p}z)^{l})^{q}\cdot(1-(\kappa_{i}z)^{l})^{q-1-j}.$
Let us now give a basis for these meromorphic sections:
* (i)
For $s_{j}=0$, we have $0\leq j\leq q-1$, and an equivalent base for this case
is
$\displaystyle\left\\{z^{-(mq-1)l}\prod_{p\neq
i}(1-(\kappa_{p}z)^{l})^{q}\cdot(1-(\kappa_{i}z)^{l})^{q-1-j}:~{}0\leq j\leq
q-1,~{}1\leq i\leq m\right\\}$
$\displaystyle\equiv\left\\{1,~{}z^{-l},~{}z^{-2l},~{}\ldots,~{}z^{-(mq-1)l}\right\\}=\left\\{z^{-pl}~{}:~{}0\leq
p\leq mq-1\right\\}.$
Here the number of elements is $mq$.
* (ii)
For each $s_{j}$ with $1\leq s_{j}\leq k$, we have $0\leq j\leq q-2$. Then we
write
$\displaystyle z^{-(mq-1)l-s_{j}}\prod_{p\neq
i}(1-(\kappa_{p}z)^{l})^{q}\cdot(1-(\kappa_{i}z)^{l})^{q-1-j}$
$\displaystyle=$ $\displaystyle
z^{-(mq-1)l-s_{j}}\prod_{i=1}^{m}(1-(\kappa_{i}z)^{l})\left(\prod_{p\neq
i}(1-(\kappa_{p}z)^{l})^{q-1}\cdot(1-(\kappa_{i}z)^{l})^{q-2-j}\right),$
and an equivalent base can be expressed as
$\displaystyle\left\\{z^{-(mq-1)l-s_{j}}\prod_{i=1}^{m}(1-(\kappa_{i}z)^{l})\left(\prod_{p\neq
i}(1-(\kappa_{p}z)^{l})^{q-1}\cdot(1-(\kappa_{i}z)^{l})^{q-2-j}\right)\right\\}$
$\displaystyle\equiv\left\\{\prod_{i=1}^{m}(1-(\kappa_{i}z)^{l})\,z^{-pl-
s_{j}}~{}:~{}m\leq p\leq mq-1,~{}1\leq i\leq m\right\\}.$
Here the number of elements in the base is $m(q-1)$ for each $1\leq s_{j}\leq
k$.
* (iii)
For each $s_{j}$ with $k+1\leq s_{j}\leq 2k$, we have $0\leq j\leq q-3$. Then
an equivalent base is
$\displaystyle\left\\{z^{-(mq-1)l-s_{j}}\prod_{i=1}^{m}(1-(\kappa_{i}z)^{l})^{2}\left(\prod_{p\neq
i}(1-(\kappa_{p}z)^{l})^{q-2}\cdot(1-(\kappa_{i}z)^{l})^{q-3-j}\right)\right\\}$
$\displaystyle\equiv\left\\{\prod_{i=1}^{m}(1-(\kappa_{i}z)^{l})^{2}\,z^{-pl-
s_{j}}~{}:~{}2m\leq p\leq mq-1,~{}1\leq i\leq m\right\\}.$
This continues to $(q-2)k+1\leq s_{j}\leq(q-1)k$ for just $j=0$, and this
gives
$\displaystyle\left\\{z^{-(mq-1)l-s_{j}}\prod_{i=1}^{m}(1-(\kappa_{i}z)^{l})^{q-1}\,\prod_{p\neq
i}(1-(\kappa_{p}z)^{l})\right\\}$
$\displaystyle\equiv\left\\{\prod_{i=1}^{m}(1-(\kappa_{i}z)^{l})^{q-1}\,z^{-pl-
s_{j}}~{}:~{}(q-1)m\leq p\leq mq-1\right\\},$
whose number of elements is just $m$ for each $s_{0}$ in $(q-2)k+1\leq
s_{0}\leq(q-1)k$.
Notice that the total number of elements in the bases in (i) through (iii) is
just $N=mq(1+k(q-1)/2)$. Then we have a base
$\mathcal{A}(z)=\mathcal{A}_{1}(z)\cup\mathcal{A}_{2}(z)$ for a subspace of
$\mathbb{C}[z^{-1}]$, where
$\displaystyle\mathcal{A}_{1}(z):=\left\\{z^{-p(r)l-s_{j}(r)}\prod_{p=1}^{m}(1-(\kappa_{p}z)^{l})^{r}~{}:~{}\begin{array}[]{lllll}0\leq
r\leq q-1,\\\\[2.15277pt] rm\leq p(r)\leq mq-1,\\\\[2.15277pt] (r-1)k+1\leq
s_{j}(r)\leq rk\end{array}\right\\},$
$\displaystyle\mathcal{A}_{2}(z):=\left\\{z^{-mql-i+N}\prod_{p=1}^{m}(1-(\kappa_{p}z)^{l})^{q}~{}:~{}i\geq
N\right\\}.$
* (5)
Consider the index set for the minimal degrees of the polynomials in
$\mathcal{A}=\mathcal{A}_{1}\cup\mathcal{A}_{2}$, i.e.
$S=\left\\{p(r)l+s(r):~{}\begin{array}[]{lllll}0\leq r\leq q-1,\\\\[2.15277pt]
rm\leq p(r)\leq mq-1,\\\\[2.15277pt] (r-1)k+1\leq s(r)\leq
rk\end{array}\right\\}~{}\bigcup~{}\left\\{mql+i:~{}i\geq 0\right\\}.$
We now show that the set $S$ forms a numerical semigroup of type $\langle
l,lm+1,lm+2,\ldots,lm+k\rangle$. It is obvious that $0\in S$ and
$|\mathbb{N}_{0}\setminus S|<\infty$. Closure under addition is verified as
follows: Write $p(r)$ and $s(r)$ for $1\leq r\leq q-1$ as
$\displaystyle p(r)$ $\displaystyle=rm+\alpha\qquad 0\leq\alpha\leq m(q-r)-1,$
$\displaystyle s(r)$ $\displaystyle=(r-1)k+\beta,\qquad 1\leq\beta\leq k.$
Then each element can be expressed as
$p(r)l+s(r)=\alpha l+(lm+\beta)+(r-1)(lm+k),$
which is in the span $\langle l,lm+1,\ldots,lm+k\rangle$. For the element
$mql+i$ with $i\geq 0$, we first write
$i=\gamma l+\delta\qquad\text{with}\qquad 0\leq\delta\leq l-1.$
Then noting $l-1\leq qk$, we have
$\delta=\mu k+\nu\qquad\text{with}\qquad 0\leq\mu\leq q\quad\text{and}\quad
0\leq\nu\leq k-1,$
which leads to
$mql+i=\mu(ml+k)+\gamma l+m(q-\mu)l+\nu.$
When $k=1$, we have $\nu=0$ and $mql+i$ is in the span $\langle
l,lm+1\rangle$. For the case with $k>1$ and $\nu>0$, we have $\mu<q$, hence
$m(q-\mu)l=m(q-\mu-1)l+ml.$
Therefore the element $mql+i$ is in the span $\langle
l,lm+1,\ldots,lm+k\rangle$ for all $i\geq 0$.
We introduce the following variables,
$x=z^{-l}\qquad\text{and}\qquad
y_{i}=z^{-ml-i}\prod_{p=1}^{m}(1-(\kappa_{p}z)^{l})\quad\text{for}~{}1\leq
i\leq k.$
Then, the subspace spanned by the basis $\mathcal{A}$ can be expressed by a
polynomial ring,
${\mathcal{R}}=\mathbb{C}[x,y_{1},\ldots,y_{k}]/\mathcal{P},$
where the prime ideal $\mathcal{P}$ is encoded in the set of $2\times 2$
minors of the following $2\times(k+1)$ matrix,
(6.1) $\begin{pmatrix}y_{1}&y_{2}&\cdots&y_{k}&G(x)\\\
F(x)^{l-k}&F(x)^{l-k-1}y_{1}&\cdots&F(x)^{l-k-1}y_{k-1}&y_{1}^{l-k-1}y_{k}\end{pmatrix}.$
Here the functions $F(x)$ and $G(x)$ are given by
$F(x)=\prod_{j=1}^{m}(x-\kappa_{j}^{l})\qquad\text{and}\qquad G(x)=xF(x).$
Note that the spectrum Spec$({\mathcal{R}})$ gives the affine part of a
singular curve in the $(k+1)$-dimensional space of $(x,y_{1},\ldots,y_{k})$.
We also remark that this space curve is an irreducible component of the
singular curve defined by
$\mathcal{C}=\left\\{(x,y_{1},\ldots,y_{k})\in\mathbb{C}^{k+1}~{}:~{}y_{i}^{l}=G(x)^{i}F(x)^{l-i},~{}i=1,\ldots,k\right\\}.$
* (6)
Let $\lambda(S)$ be the Young diagram for the numerical semigroup $S$. Using
the Binet-Cauchy formula, we can see that the $\tau$-function has the
expansion,
$\displaystyle\tau(t)$
$\displaystyle=\left|\begin{pmatrix}1&p_{1}&p_{2}&\cdots&\cdots&\cdots\\\
0&1&p_{1}&p_{2}&\cdots&\cdots\\\ \vdots&\ddots&\ddots&\ddots&\ddots&\vdots\\\
0&\cdots&0&1&p_{1}&\cdots\end{pmatrix}\,\hat{K}\,A^{T}\right|$
$\displaystyle=S_{\lambda}(\hat{t})+\mathop{\textstyle\sum}\limits_{\mu\supset\lambda}c_{\mu}(\kappa)S_{\mu}(\hat{t}),$
where $\hat{t}=\\{t_{n}:n\neq ml~{}\text{for any~{}}m\in\mathbb{N}\\}$ and
$c_{\mu}(\kappa)$ is a homogeneous symmetric polynomial of
$(\kappa_{1}^{l},\ldots,\kappa_{m}^{l})$, which is given by the $N\times N$
minor of the matrix $\hat{K}A^{T}$ with the rows indexed by the diagram $\mu$.
Also note that the Young diagram associated with the matrix $A$ is just
$\lambda(S)$ (see Theorem 2.2), i.e. $A\in
X_{\lambda(S)}\subset\text{Gr}(N,M)$. The pivot index of $A$ is given by
$i_{k}=s_{k-1}+1\qquad\text{for}\qquad 1\leq k\leq
N=mq\left(1+\frac{1}{2}k(q-1)\right),$
where $s_{j}$ for $j=0,1,\ldots,N-1$ are the first $N$ ordered elements in
$S$.
## 7\. Deformation of the singular space curves for soliton solutions
Here we construct a smooth space curve for the numerical semigroup of type
$\langle l,lm+1,\ldots,l(m+1)-1\rangle$, i.e. $k=l-1$. Then the smooth curve
naturally degenerates to the singular curve associated with a soliton solution
of the $l$-th generalized KdV hierarchy.
Let us recall that the singular curve for the soliton solution corresponding
to this type has the local coordinates,
$x=z^{-l},\qquad y_{i}=z^{-i}\prod_{j=1}^{m}(x-\kappa_{j}^{l})\quad(1\leq
i\leq l-1).$
Then we have the following algebraic relations (i.e. (6.1) with $k=l-1$),
(7.1) $p_{i,j}:=y_{i}y_{j-1}-y_{j}y_{i-1}=0\qquad\text{for}\qquad 1\leq
i<j\leq l,$
with $y_{0}:=F(x)=\prod_{j=1}^{m}(x-\kappa_{j}^{l})$ and $y_{l}:=G(x)=xF(x)$.
These relations can be expressed by the $2\times 2$ minors of the following
$2\times l$ matrix,
(7.2) $\begin{pmatrix}y_{1}&y_{2}&\ldots&y_{l-2}&y_{l-1}&G(x)\\\
F(x)&y_{1}&\ldots&y_{l-3}&y_{l-2}&y_{l-1}\end{pmatrix}.$
Note that the relations in (7.1) give a prime ideal in
$\mathbb{C}[x,y_{1},\ldots,y_{l-1}]$.
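These relations can be verified symbolically from the local coordinates above. A minimal sketch for $l=3$ and $m=1$ (small values chosen only for illustration):

```python
import sympy as sp

z, kappa = sp.symbols("z kappa", nonzero=True)
l = 3                                   # small example; m = 1

x = z**(-l)
F = x - kappa**l                        # F(x) = prod_j (x - kappa_j^l), m = 1
G = x * F
y = {0: F, l: G}
for i in range(1, l):
    y[i] = z**(-i) * F                  # local coordinates y_i = z^{-i} F(x)

# Every relation p_{i,j} = y_i y_{j-1} - y_j y_{i-1} (1 <= i < j <= l)
# vanishes identically on the parametrized curve.
for i in range(1, l):
    for j in range(i + 1, l + 1):
        assert sp.simplify(y[i] * y[j - 1] - y[j] * y[i - 1]) == 0
print("all p_{i,j} vanish on the curve")
```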
Inspired by the paper [12] (also see [24]), we consider the following
deformation,
(7.3)
$F(x)~{}\longrightarrow~{}\tilde{F}(x)=\prod_{j=1}^{m}(x-\lambda_{j}^{l})\qquad\text{and}\qquad
G(x)~{}\longrightarrow~{}\tilde{G}(x)=x\,\prod_{j=1}^{m}(x-\lambda_{m+j}^{l}),$
where $\lambda_{j}$ for $j=1,2,\ldots,2m$ are all distinct. The singular limit
is then given by coalescing two moduli parameters of the space curve,
$\lambda_{j}^{l},~{}\lambda_{m+j}^{l}~{}\longrightarrow~{}\kappa_{j}^{l}\qquad\text{for}\quad
j=1,\ldots,m.$
One should note here that the singularity obtained by the above process is an
_ordinary $l$-tuple point singularity_ [7] (page 247). When $l=2$, the
hyperelliptic case, this is an ordinary double point singularity [19, 28].
The smoothness of the curve can be shown as follows: We consider a commutative
ring defined by
$\mathcal{R}=\mathbb{C}[x,y_{1},y_{2},\ldots,y_{l-1}]/\mathcal{P},$
where the prime ideal $\mathcal{P}$ is given by the relations in (7.1),
$\mathcal{P}=\left\\{~{}p_{i,j}~{}:~{}1\leq i<j\leq
l~{}\right\\}\qquad\text{with}\quad y_{0}=\tilde{F}(x),~{}y_{l}=\tilde{G}(x).$
(Hereafter we will omit the $\,\widetilde{}\,$ on the functions $F(x)$ and
$G(x)$.) Then the affine part of the curve associated with the soliton
solution is given by $\text{Spec}(\mathcal{R})$. We now show that ${\rm
Spec}({\mathcal{R}})$ is non-singular.
###### Proposition 7.1.
For every $(x,y_{1},\ldots,y_{l-1})$ satisfying all $p_{i,j}=0$, the following
Jacobian $\mathcal{U}$ has rank $l-1$,
$\mathcal{U}:=\left(\frac{\partial}{\partial
x}p_{i,j},\,\frac{\partial}{\partial
y_{1}}p_{i,j},\,\ldots,\,\frac{\partial}{\partial
y_{l-1}}p_{i,j}\right)_{1\leq i<j\leq l}.$
Here $p_{i,j}$ in each column of the matrix $\mathcal{U}$ are arranged in the
following way,
$\left(p_{1,j}~{}(2\leq j\leq l);~{}p_{2,j}~{}(3\leq j\leq
l);~{}\ldots~{};~{}p_{i,j}~{}(i+1\leq j\leq
l);~{}\ldots~{};~{}p_{l-1,l}\right)^{T}.$
Proof. The $\binom{l}{2}\times l$ Jacobian matrix has the following structure:
* (1)
For the columns with $p_{1,j}$ ($2\leq j\leq l$), we have the $(l-1)\times l$
matrix,
$\begin{pmatrix}-y_{2}F^{\prime}&2y_{1}&-F&0&0&\cdots&0\\\
-y_{3}F^{\prime}&y_{2}&y_{1}&-F&0&\cdots&0\\\
-y_{4}F^{\prime}&y_{3}&0&y_{1}&-F&\cdots&0\\\
\vdots&\vdots&\vdots&\ddots&\ddots&\ddots&\vdots\\\
-y_{l-1}F^{\prime}&y_{l-2}&0&\cdots&0&y_{1}&-F\\\
-(FG)^{\prime}&y_{l-1}&0&\cdots&0&0&y_{1}\end{pmatrix}.$
* (2)
For the columns with $p_{i,j}~{}(i+1\leq j\leq l)$ and $2\leq i\leq l-3$, we
have the $(l-i)\times l$ matrix,
$\left(\begin{array}[]{cccccccccccccccccc}0&0&\cdots&0&-y_{i+1}&2y_{i}&-y_{i-1}&0&0&\cdots&0\\\
0&0&\cdots&0&-y_{i+2}&y_{i+1}&y_{i}&-y_{i-1}&0&\cdots&0\\\
0&0&\cdots&0&-y_{i+3}&y_{i+2}&0&y_{i}&-y_{i-1}&\cdots&0\\\
\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\ddots&\ddots&\ddots&\vdots\\\
0&0&\cdots&0&-y_{l-1}&y_{l-2}&0&\cdots&0&y_{i}&-y_{i-1}\\\
-y_{i-1}G^{\prime}&0&\cdots&0&-G&y_{l-1}&0&\cdots&\cdots&0&y_{i}\end{array}\right),$
where the first nonzero entry in each row corresponds to $\partial
p_{i,j}/\partial y_{i-1}$.
* (3)
For the last three rows, we have
$\begin{pmatrix}0&0&\cdots&0&-y_{l-1}&2y_{l-2}&-y_{l-3}\\\
-y_{l-3}G^{\prime}&0&\cdots&0&-G&y_{l-1}&y_{l-2}\\\
-y_{l-2}G^{\prime}&0&\cdots&0&0&-G&2y_{l-1}\end{pmatrix}.$
We consider the cases: (a) $F(x)=0$, (b) $G(x)=0$, and (c) $F(x)G(x)\neq 0$.
* (a)
$F(x)=0$ implies $x=\lambda_{j}^{l}$ for some $1\leq j\leq m$. Then, we have
$y_{i}=0$ for all $i=1,\dots,l-1$. Then the Jacobian matrix has the following
nonzero entries,
$\frac{\partial p_{1,l}}{\partial
x}\Big{|}_{F=0,y=0}=-F^{\prime}G,\qquad\frac{\partial p_{i,l}}{\partial
y_{i-1}}\Big{|}_{F=0,y=0}=-G~{}(2\leq i\leq l-1).$
This implies $\text{rank}(\mathcal{U})=l-1$.
* (b)
$G(x)=0$ implies $x=0$ or $x=\lambda_{m+j}^{l}$ for some $1\leq j\leq m$. Then, we
have $y_{i}=0$ for all $i=1,\dots,l-1$. Then the Jacobian matrix has the
following nonzero entries,
$\frac{\partial p_{1,l}}{\partial
x}\Big{|}_{G=0,y=0}=-FG^{\prime},\qquad\frac{\partial p_{1,j}}{\partial
y_{j}}\Big{|}_{G=0,y=0}=-F~{}(2\leq j\leq l-1).$
This implies $\text{rank}(\mathcal{U})=l-1$.
* (c)
For the case $F(x)G(x)\neq 0$, all the variables
$(x,y_{1},\ldots,y_{l-1})$ are nonzero. Let us consider the null space of
$\mathcal{U}^{T}$, the transpose of $\mathcal{U}$. Each vector in
null$(\mathcal{U}^{T})$ can be found in the following form: For $3\leq i\leq
l$, we have an $(l-i+1)\times\binom{l}{2}$ matrix whose rows satisfy
$r\mathcal{U}=(0,\ldots,0)$,
$\left(\begin{array}[]{ccccccccccccccccc}0&\cdots&0&y_{i}&-y_{i-1}&0&\cdots&0&y_{1}&0&\cdots&\cdots&\cdots&\cdots&0\\\
0&\cdots&0&y_{i+1}&0&-y_{i-1}&0&\cdots&0&y_{1}&0&\cdots&\cdots&\cdots&0\\\
\vdots&\vdots&\vdots&\vdots&\vdots&\ddots&\ddots&\ddots&\vdots&\ddots&\ddots&\ddots&\cdots&\cdots&\vdots\\\
0&\cdots&0&y_{l-1}&0&\cdots&0&-y_{i-1}&0&\cdots&0&y_{1}&0&\cdots&0\\\
0&\cdots&0&y_{l}&0&\cdots&0&0&-y_{i-1}&0&\cdots&0&y_{1}&\cdots&0\end{array}\right),$
where the first nonzero entry in each row is at the $(i-2)$-th place, and
$y_{l}=G$. The total number of these rows is $\binom{l-1}{2}$. This implies
that the nullity of $\mathcal{U}^{T}$ is $\binom{l-1}{2}$, hence the rank of
$\mathcal{U}$ is $\binom{l}{2}-\binom{l-1}{2}=l-1$.
This proves that the affine variety given by ${\rm Spec}({\mathcal{R}})$ is a
smooth curve.
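The rank claim can also be spot-checked symbolically at a point of the deformed curve, using the parametrization $y_{i}=w_{1}^{l-i}w_{2}^{i}$ with $w_{1}^{l}=F(x)$, $w_{2}^{l}=G(x)$ noted at the end of this section. A minimal sketch for $l=3$, $m=1$ (the numerical values of $\lambda_{1},\lambda_{2}$ and the sample point are ours):

```python
import sympy as sp

x, y1, y2 = sp.symbols("x y1 y2")
F = x - 1                    # F~(x) with lambda_1^3 = 1  (m = 1, l = 3)
G = x * (x - 8)              # G~(x) with lambda_2^3 = 8

p = [y1**2 - y2 * F, y1 * y2 - G * F, y2**2 - G * y1]
J = sp.Matrix([[sp.diff(q, v) for v in (x, y1, y2)] for q in p])

# A point on the curve: y_i = w1^{3-i} w2^i with w1^3 = F, w2^3 = G.
x0 = sp.Integer(9)
w1 = sp.root(F.subs(x, x0), 3)          # = 2 exactly, since F(9) = 8
w2 = sp.root(G.subs(x, x0), 3)          # = 9**(1/3)
pt = {x: x0, y1: w1**2 * w2, y2: w1 * w2**2}

assert all(sp.simplify(q.subs(pt)) == 0 for q in p)
print("rank of the Jacobian:", J.subs(pt).rank())   # expected l - 1 = 2
```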
A Riemann surface associated with the curve $\mathcal{C}$ can be obtained by
adding one point $\infty$ to the affine smooth curve given by ${\rm
Spec}({\mathcal{R}})$. At $\infty$, we introduce the variables,
$\underline{x}=\frac{1}{x},\qquad\text{and}\qquad\underline{y}_{i}=\frac{y_{i}}{x^{m+1}}\quad\text{for
}~{}1\leq i\leq l-1.$
Then we have a commutative ring,
$\underline{{\mathcal{R}}}=\mathbb{C}[\underline{x},\underline{y}_{1},\ldots,\underline{y}_{l-1}]/\underline{\mathcal{P}},$
where the prime ideal $\underline{\mathcal{P}}$ is given by
$\underline{\mathcal{P}}=\left\\{\underline{p}_{i,j}=\left|\begin{matrix}\underline{y}_{i}&\underline{y}_{j}\\\
\underline{y}_{i-1}&\underline{y}_{j-1}\end{matrix}\right|~{}:~{}1\leq i<j\leq
l\right\\},$
with
$\underline{y}_{0}=\underline{x}\prod_{j=1}^{m}(1-\lambda_{j}^{l}\underline{x}),\qquad\underline{y}_{l}=\prod_{j=1}^{m}(1-\lambda_{m+j}^{l}\underline{x}).$
In a similar manner, we can prove that the affine curve given by ${\rm
Spec}(\underline{{\mathcal{R}}})$ is also smooth. Then the Riemann surface
associated to the curve $\mathcal{C}$ is obtained by patching these affine
curves. In terms of the notion of the Weierstrass semigroup [25], we have the
following Corollary.
###### Corollary 7.2.
The numerical semigroup of type $\langle l,lm+1,lm+2,\ldots,l(m+1)-1\rangle$ is
Weierstrass for $m\geq 2$.
Finally, we remark that the spectrum ${\rm Spec}({\mathcal{R}})$ is given by a
projection of ${\rm Spec}(\widehat{{\mathcal{R}}})$ with the following ring
$\widehat{{\mathcal{R}}}$,
$\widehat{{\mathcal{R}}}:=\mathbb{C}[x,w_{1},w_{2}]/(w_{1}^{l}-F(x),w_{2}^{l}-G(x)).$
We note a natural ring homomorphism ${\mathcal{R}}\to\widehat{{\mathcal{R}}}$
with
$y_{i}=w_{1}^{l-i}w_{2}^{i}\qquad\text{for}\qquad i=1,\ldots,l-1.$
This leads to a projection,
${\rm Spec}(\widehat{{\mathcal{R}}})~{}\longrightarrow~{}{\rm
Spec}({\mathcal{R}}).$
We conclude the section with the following remark about the cases for the
generalized soliton solutions, i.e. $k<l-1$.
###### Remark 7.3.
In the case of $k=1$ in (6.1), we have just one relation,
$\begin{pmatrix}y_{1}&G(x)\\\
F(x)^{l-1}&y_{1}^{l-1}\end{pmatrix}\qquad\text{which gives}\quad
y_{1}^{l}=G(x)F(x)^{l-1},$
which is a singular plane curve. One can deform the curve with
$\left\\{\begin{array}[]{llllll}F(x)^{l-1}{}&\longrightarrow{}&\displaystyle{\prod_{i=0}^{l-2}F_{i+1}(x)\quad\text{with}\quad
F_{i+1}}(x)=\prod_{j=1}^{m}(x-\lambda_{im+j}^{l}),\\\\[8.61108pt]
G(x){}&\longrightarrow{}&\displaystyle{x\prod_{j=1}^{m}(x-\lambda_{lm-j}^{l})},\end{array}\right.$
which then gives a smooth plane curve, called cyclic $(l,lm+1)$-curve,
$y_{1}^{l}=G(x)\prod_{i=1}^{l-1}F_{i}(x)=x\prod_{j=1}^{lm}(x-\lambda_{j}^{l}).$
The cases with $l=2$ are just the hyperelliptic ones. The singular curve
associated to the generalized soliton solution is obtained by coalescing $l$
points of the $(l,lm+1)$-curve,
$\lambda_{j}^{l},\,\lambda_{m+j}^{l},\,\ldots,\,\lambda_{(l-1)m+j}^{l}\quad\longrightarrow\quad\kappa_{j}^{l}\qquad\text{for}\quad
j=1,\ldots,m.$
For the cases with $1<k<l-1$, it seems in general that the singular curve is
not smoothable in $\mathbb{C}\mathbb{P}^{k+1}$ [24, 8]. One special case
was found in [12] for $l=6,m=2$ and $k=4$, i.e. the corresponding numerical
semigroup is of type $\langle 6,13,14,15,16\rangle$. One can also show that
the case of $\langle 4,4m+1,4m+2\rangle$ (i.e. $l=4$ and $k=2$) has the
following “smooth” deformation: the polynomial ring ${\mathcal{R}}$ is
given by $\mathbb{C}[x,y_{1},y_{2}]/{\tilde{\mathcal{P}}}$, where the prime
ideal is given by
$\left\\{y_{1}^{2}-y_{2}F_{2}(x),~{}~{}y_{2}^{2}-G(x)F_{1}(x)\right\\},\quad\text{from}\quad\begin{pmatrix}y_{1}&y_{2}&G(x)\\\
F_{1}(x)F_{2}(x)&F_{1}(x)y_{1}&y_{1}y_{2}\end{pmatrix},$
where $F_{1}(x),F_{2}(x)$ and $G(x)$ are given by
$F_{1}(x)=\prod_{j=1}^{m}(x-\lambda_{j}^{4}),\quad
F_{2}(x)=\prod_{j=1}^{m}(x-\lambda_{m+j}^{4}),\quad
G(x)=x\prod_{j=1}^{m}(x-\lambda_{2m+j}^{4}).$
This smooth curve is an irreducible component of the intersection of the
hypersurfaces given by
$y_{1}^{4}=G(x)F_{1}(x)F_{2}(x)^{2}\qquad\text{and}\qquad
y_{2}^{2}=G(x)F_{1}(x).$
We will discuss the deformation problem for the cases with $2\leq k\leq l-2$
for $l\geq 5$ in a future communication.
## 8\. Spectral curves for soliton solutions
In this section, we identify the singular space curve given by (7.1) as a
spectral curve of commuting differential operators in the Lax-Sato formulation
of the KP hierarchy (see Appendix A, and also e.g. [9] for the formulation).
This is an extension of the well-known theorem by Burchnall and Chaundy [4]. We
here consider the cases for the soliton solutions corresponding to $k=l-1$ and
the generalized soliton solutions corresponding to $k=l-2$. The general case
with $1\leq k\leq l-3$ is left to the reader.
### 8.1. Spectral curve for the soliton solution ($k=l-1$)
We first recall that the $\tau$-function of the soliton solution is given by
$\tau(t)=|E(t)A^{T}|$ in (2.6), where the $m\times lm$ matrix $A$ is given by
$\displaystyle A$
$\displaystyle=I_{m}\otimes\Omega_{l}^{1}\qquad\text{where}\qquad\Omega_{l}^{1}=(1,\omega_{l},\omega_{l}^{2},\ldots,\omega_{l}^{l-1})\quad\text{with}\quad\omega_{l}=\exp\left(\frac{2\pi
i}{l}\right),$
and $E(t)$ whose base exponential functions (5.1) are given by
$[\hat{\mathsf{E}}^{(0)}]_{m}=(\mathsf{E}(\kappa_{1}),\ldots,\mathsf{E}(\kappa_{m}))\qquad\text{with}\qquad\mathsf{E}(\kappa_{j})=(E_{1}(t,\kappa_{j}),\ldots,E_{l}(t,\kappa_{j})).$
Here $E_{i}(t,\kappa_{j})$ is the exponential function given in (2.20), i.e.
$E_{i}(t,\kappa_{j})=\exp\left(\mathop{\textstyle\sum}\limits_{n=1}^{\infty}(\kappa_{j}\omega^{i-1})^{n}t_{n}\right).$
Now we define the following time-operator,
$T_{\alpha}:=\mathop{\textstyle\sum}\limits_{i=0}^{m}(-1)^{i}\sigma_{i}(\kappa)\partial_{(m-i)l+\alpha}\qquad\text{for}\quad
1\leq\alpha\leq l-1,$
where $\sigma_{i}(\kappa)$ is the elementary symmetric polynomial of degree
$i$ in $(\kappa_{1}^{l},\ldots,\kappa_{m}^{l})$, i.e.
(8.1)
$\prod_{j=1}^{m}(\lambda-\kappa_{j}^{l})=\mathop{\textstyle\sum}\limits_{i=0}^{m}(-1)^{i}\sigma_{i}(\kappa)\lambda^{m-i}.$
Then we have the following proposition.
###### Proposition 8.1.
The $\tau$-function generated by $(A,E(t))$ above satisfies
$T_{\alpha}\tau(t)=0\qquad\text{for}\quad 1\leq\alpha\leq l-1.$
Proof. Since the $\tau$-function depends only on the exponential functions
$E_{s}(t,\kappa_{j})$ for $1\leq s\leq l$, it is sufficient to show that each
$E_{s}(t,\kappa_{j})$ satisfies
$T_{\alpha}E_{s}(t,\kappa_{j})=0\qquad\text{for all}\quad 1\leq\alpha\leq
l-1.$
Direct computation shows
$T_{\alpha}E_{s}(t,\kappa_{j})=\left(\mathop{\textstyle\sum}\limits_{i=0}^{m}(-1)^{i}\sigma_{i}(\kappa)(\kappa_{j}\omega_{l}^{s-1})^{(m-i)l+\alpha}\right)E_{s}(t,\kappa_{j}).$
Since $\omega_{l}^{l}=1$, the term in parentheses becomes
$\left(\mathop{\textstyle\sum}\limits_{i=0}^{m}(-1)^{i}\sigma_{i}(\kappa)\kappa_{j}^{(m-i)l}\right)(\omega_{l}^{s-1}\kappa_{j})^{\alpha}=0,$
where we have used (8.1) with $\lambda=\kappa_{j}^{l}$. This proves the
proposition.
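For instance, when $m=1$ and $l=3$, the operator reduces to $T_{1}=\partial_{4}-\kappa_{1}^{3}\partial_{1}$, and since $\partial_{n}E_{s}=(\kappa_{1}\omega_{3}^{s-1})^{n}E_{s}$,
$T_{1}E_{s}(t,\kappa_{1})=\left((\kappa_{1}\omega_{3}^{s-1})^{4}-\kappa_{1}^{3}\,\kappa_{1}\omega_{3}^{s-1}\right)E_{s}=\kappa_{1}^{4}\,\omega_{3}^{s-1}\left(\omega_{3}^{3(s-1)}-1\right)E_{s}=0,$
because $\omega_{3}^{3}=1$.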
###### Remark 8.2.
Note here that the equation $T_{\alpha}\tau(t)=0$ does not depend on the
matrix $A$. This implies that any soliton solution of the $l$-th generalized
KdV hierarchy satisfies Proposition 8.1. Also note that the number of the free
parameters in the matrix $A$ is $m(l-1)$, which is the genus of the
corresponding numerical semigroup, i.e. $g(S)=m(l-1)$.
Since the solutions of the $l$-th generalized KdV hierarchy can be expressed
by a single $\tau$-function, they also satisfy $T_{\alpha}v_{i}=0$, where
$v_{i}$’s are from the $l$-th differential operator $L^{l}$ of (A.7) in
Appendix A.
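The genus claim is easy to check computationally. A minimal sketch (the particular values of $l$ and $m$ are ours) that lists the gaps of $S=\langle l,lm+1,\ldots,l(m+1)-1\rangle$ and compares $g(S)$ with $m(l-1)$:

```python
def semigroup_gaps(gens, bound=10_000):
    """Gaps of the numerical semigroup generated by gens (gcd assumed 1)."""
    reachable = {0}
    for n in range(1, bound):
        if any(n - g in reachable for g in gens):
            reachable.add(n)
    frobenius = max(n for n in range(bound) if n not in reachable)
    return [n for n in range(frobenius + 1) if n not in reachable]

l, m = 3, 2                                      # small example; k = l - 1
gens = [l] + [l * m + j for j in range(1, l)]    # <l, lm+1, ..., l(m+1)-1>
gaps = semigroup_gaps(gens)
print("generators:", gens)               # [3, 7, 8]
print("gaps:", gaps)                     # [1, 2, 4, 5]
print("genus g(S) =", len(gaps), "= m(l-1) =", m * (l - 1))
```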
Now we define the following differential operators of order $ml+\alpha$ in
$\partial$,
$L_{\alpha}:=\mathop{\textstyle\sum}\limits_{i=0}^{m}(-1)^{i}\sigma_{i}(\kappa)B_{(m-i)l+\alpha}\qquad\text{for}\quad
1\leq\alpha\leq l-1,$
where $B_{\beta}$ is the differential operator of order $\beta$ defined in
(A.1), i.e. $B_{\beta}=(L^{\beta})_{\geq 0}$. Then we have the following
proposition.
###### Proposition 8.3.
The $l$ differential operators $\\{L^{l},L_{1},\ldots,L_{l-1}\\}$ mutually
commute, i.e.
* (a)
$[L_{\alpha},\,L^{l}]=0$ for any $1\leq\alpha\leq l-1$, and
* (b)
$[L_{\alpha},\,L_{\beta}]=0$ for any $1\leq\alpha,\beta\leq l-1$.
Proof.
* (a)
Using the Lax equation (A.1), the commutator becomes
$\displaystyle[L_{\alpha},\,L^{l}]$
$\displaystyle=\mathop{\textstyle\sum}\limits_{i=0}^{m}(-1)^{i}\sigma_{i}(\kappa)\,[B_{(m-i)l+\alpha},\,L^{l}]=\mathop{\textstyle\sum}\limits_{i=0}^{m}(-1)^{i}\sigma_{i}(\kappa)\partial_{(m-i)l+\alpha}(L^{l})=T_{\alpha}(L^{l})$
Since $T_{\alpha}\tau=0$ (see above), we have $T_{\alpha}(L^{l})=0$.
* (b)
Using the Zakharov-Shabat equations in (A.3), the commutator becomes
$\displaystyle[L_{\alpha},\,L_{\beta}]$
$\displaystyle=\mathop{\textstyle\sum}\limits_{i=0}^{m}\mathop{\textstyle\sum}\limits_{j=0}^{m}(-1)^{i+j}\sigma_{i}(\kappa)\sigma_{j}(\kappa)\,[B_{(m-i)l+\alpha},\,B_{(m-j)l+\beta}]$
$\displaystyle=\mathop{\textstyle\sum}\limits_{j=0}^{m}(-1)^{j}\sigma_{j}(\kappa)T_{\alpha}B_{(m-j)l+\beta}-\mathop{\textstyle\sum}\limits_{i=0}^{m}(-1)^{i}\sigma_{i}(\kappa)T_{\beta}B_{(m-i)l+\alpha}.$
Again, note that the coefficients of the differential operators $B_{n}$ depend
on $t$ only through the $\tau$-function. This implies that
$T_{\alpha}B_{\beta}=0$ for any $1\leq\alpha,\beta\leq l-1$.
This completes the proof.
In order to find the spectral curve given by the commuting differential
operators $\\{L^{l},L_{1},\ldots,L_{l-1}\\}$, we first note the following
lemma.
###### Lemma 8.4.
For the wave function $\phi=W\phi_{0}$ in (A.4), we have
$L_{\alpha}\phi=\left(z^{-ml-\alpha}\prod_{j=1}^{m}(1-(\kappa_{j}z)^{l})\right)\phi\qquad\text{for}\quad
1\leq\alpha\leq l-1.$
Proof. We first note that
$L_{\alpha}\phi=\mathop{\textstyle\sum}\limits_{i=0}^{m}(-1)^{i}\sigma_{i}(\kappa)B_{(m-i)l+\alpha}\phi=T_{\alpha}\phi,$
where we have used $\partial_{n}\phi=B_{n}\phi$ in (A.2). Then we write the
wave function in the dressing form $\phi=W\phi_{0}$ as in (A.5) with (A.4).
Noting again that the dressing operator depends on $t$ only through the
$\tau$-function, we have
$\displaystyle T_{\alpha}\phi$
$\displaystyle=WT_{\alpha}\phi_{0}=W\left(\mathop{\textstyle\sum}\limits_{i=0}^{m}(-1)^{i}\sigma_{i}(\kappa)z^{-(m-i)l-\alpha}\right)\phi_{0}$
$\displaystyle=z^{-ml-\alpha}\left(\mathop{\textstyle\sum}\limits_{i=0}^{m}(-1)^{i}\sigma_{i}(\kappa)z^{il}\right)\phi=z^{-ml-\alpha}\prod_{j=1}^{m}(1-(\kappa_{j}z)^{l})\,\phi.$
This proves the lemma.
Now using the coordinates in (3.13), i.e.
$x=z^{-l},\qquad
y_{\alpha}=z^{-ml-\alpha}\prod_{j=1}^{m}(1-(\kappa_{j}z)^{l})\quad\text{for}\quad\alpha=1,\ldots,l-1,$
we have
$L^{l}\phi=x\phi,\qquad
L_{\alpha}\phi=y_{\alpha}\phi\quad\text{for}\quad\alpha=1,\ldots,l-1.$
Then from Lemma 8.4, the following theorem is immediate (see Section 7).
###### Theorem 8.5.
The eigenvalues of the commuting differential operators
$\\{L^{l},L_{1},\ldots,L_{l-1}\\}$ satisfy the following relations,
$p_{i,j}=y_{i}y_{j-1}-y_{j}y_{i-1}=0\qquad\text{for}\quad 1\leq i<j\leq l,$
where $y_{0}=F(x)=\prod_{j=1}^{m}(x-\kappa_{j}^{l})$ and $y_{l}=G(x)=xF(x)$.
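For instance, in the simplest case $l=2$ and $m=1$ (writing $\kappa:=\kappa_{1}$), the only relation is
$p_{1,2}=y_{1}^{2}-y_{2}y_{0}=y_{1}^{2}-x(x-\kappa^{2})^{2}=0,$
which is indeed satisfied by $x=z^{-2}$ and $y_{1}=z^{-3}(1-(\kappa z)^{2})$; this singular rational curve underlies the one-soliton solution of the KdV hierarchy.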
###### Remark 8.6.
Theorem 8.5 can be considered as an extension of the well-known theorem by
Burchnall and Chaundy [4] on the commuting pair of differential operators of
positive order.
### 8.2. Spectral curve for the generalized soliton solution with $k=l-2$
In this case, we have $q=n_{l,k}=\lceil\frac{l-1}{l-2}\rceil=2$, hence the
matrix $A$ in (5.5) is an $lm\times 2lm$ matrix given by
$A=\begin{pmatrix}I_{m}\otimes[\Omega_{l}]^{1}&0\\\
0&I_{m}\otimes[\Omega_{l}]^{l-1}\end{pmatrix}\qquad\text{with}\qquad[\Omega_{l}]^{n}=\begin{pmatrix}\Omega_{l}^{1}\\\
\vdots\\\ \Omega_{l}^{n}\end{pmatrix}.$
The base functions in (5.1) are given by
$\left([\hat{\mathsf{E}}^{(1)}]_{m},\,[\hat{\mathsf{E}}^{(0)}]_{m}\right)\qquad\text{with}\qquad[\hat{\mathsf{E}}^{(1)}]_{m}=\left(\hat{\mathsf{E}}^{(1)}(\kappa_{1}),\ldots,\hat{\mathsf{E}}^{(1)}(\kappa_{m})\right),$
where
$\hat{\mathsf{E}}^{(1)}(\kappa_{j})=\left(E^{(1)}_{1}(t,\kappa_{j}),\ldots,E^{(1)}_{l}(t,\kappa_{j})\right)\qquad\text{with}\qquad
E^{(1)}_{i}(t,\kappa_{j})=\frac{\partial}{\partial\kappa_{j}}\kappa_{j}E_{i}(t,\kappa_{j}).$
Since the matrix $A$ and the base functions consist of $m$ copies of the
single case with different $\kappa_{j}$'s, we consider here the case
$m=1$; the general case $m>1$ follows in a similar manner. The $\tau$-function for this case has the following expansion,
$\tau(t)=|E(t)A^{T}|=\mathop{\textstyle\sum}\limits_{i=1}^{l}\mathop{\textstyle\sum}\limits_{j=1}^{l}\Delta_{I_{i,\hat{j}}}(A)E_{I_{i,\hat{j}}}(t)\qquad\text{with}\qquad
I_{i,\hat{j}}=(i,\,l+1,\ldots,\,\widehat{l+j},\ldots,2l),$
where $\widehat{l+j}$ is the missing column index of the matrix $A$ and the
matrix $E(t)$ is given by
$E(t)=\begin{pmatrix}\hat{\mathsf{E}}^{(1)}(\kappa)&\mathsf{E}(\kappa)\\\
\partial_{1}\hat{\mathsf{E}}^{(1)}(\kappa)&\partial_{1}\mathsf{E}(\kappa)\\\
\vdots&\vdots\\\
\partial_{1}^{l-1}\hat{\mathsf{E}}^{(1)}(\kappa)&\partial_{1}^{l-1}\mathsf{E}(\kappa)\end{pmatrix}.$
Then the exponential function $E_{I_{i,\hat{j}}}(t)$ is expressed by the
$l\times l$ Wronskian determinant,
$E_{I_{i,\hat{j}}}(t)=\text{Wr}\left(E^{(1)}_{i}(t,\kappa),E_{1}(t,\kappa),\ldots,\widehat{E_{j}(t,\kappa)},\ldots,E_{l}(t,\kappa)\right),$
where the column of ${E_{j}(t,\kappa)}$ is removed from the matrix $E(t)$.
Then we have the following proposition.
###### Proposition 8.7.
The $\tau$-function for the generalized soliton solution of type $\langle
l,l+1,\ldots,2l-2\rangle$ satisfies
$T_{\alpha}\tau(t)=0\qquad\text{for}\qquad 1\leq\alpha\leq l-2,$
where $T_{\alpha}=\partial_{l+\alpha}-\kappa^{l}\partial_{\alpha}$.
Proof. First recall that $T_{\alpha}E_{i}(t,\kappa)=0$ for any $i$. Differentiating
$T_{\alpha}\big(\kappa E_{i}(t,\kappa)\big)=0$ with respect to $\kappa$ then gives
$T_{\alpha}E_{i}^{(1)}(t,\kappa)=-\frac{\partial T_{\alpha}}{\partial\kappa}\,\kappa E_{i}(t,\kappa)=l\kappa^{l+\alpha}(\omega_{l}^{i-1})^{\alpha}E_{i}(t,\kappa).$
We then note that $T_{\alpha}E_{I_{i,\hat{j}}}$ assumes a nonzero value only if $i=j$, and
$T_{\alpha}E_{I_{i,\hat{i}}}(t)=(-1)^{i-1}l\kappa^{l+\alpha}(\omega_{l}^{i-1})^{\alpha}\text{Wr}\left(E_{1}(t,\kappa),\,\ldots,\,E_{l}(t,\kappa)\right).$
Now applying $T_{\alpha}$ on the $\tau$-function, we have
$T_{\alpha}\tau(t)=l\kappa^{l+\alpha}\left(\mathop{\textstyle\sum}\limits_{i=1}^{l}(-1)^{i-1}(\omega_{l}^{i-1})^{\alpha}\Delta_{I_{i,\hat{i}}}(A)\right)\,\text{Wr}(E_{1}(t,\kappa),\,\ldots,\,E_{l}(t,\kappa)).$
The term in the parentheses is expressed by the $l\times l$ determinant,
$\left|\begin{matrix}1&\omega_{l}^{\alpha+1}&\omega_{l}^{2(\alpha+1)}&\cdots&\omega_{l}^{(l-1)(\alpha+1)}\\
1&\omega_{l}&\omega_{l}^{2}&\cdots&\omega_{l}^{l-1}\\
\vdots&\vdots&\vdots&\ddots&\vdots\\
1&\omega_{l}^{l-1}&\omega_{l}^{l-2}&\cdots&\omega_{l}\end{matrix}\right|,$
where the exponents are reduced modulo $l$. For $1\leq\alpha\leq l-2$ the first row coincides with one of the remaining $l-1$ rows, so the determinant vanishes; for $\alpha=l-1$ the first row is $(1,1,\ldots,1)$ and the determinant is a nonzero Vandermonde determinant in the distinct $l$-th roots of unity. This
completes the proof.
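As a quick check, for $l=3$ the rows of the determinant are $(1,\omega_{3}^{\alpha+1},\omega_{3}^{2(\alpha+1)})$, $(1,\omega_{3},\omega_{3}^{2})$ and $(1,\omega_{3}^{2},\omega_{3})$: for $\alpha=1$ the first row equals the third, so $T_{1}\tau=0$, while for $\alpha=2$ the first row is $(1,1,1)$ and the determinant is the nonzero Vandermonde determinant of the distinct roots $1,\omega_{3},\omega_{3}^{2}$.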
###### Remark 8.8.
Unlike the case of the soliton solution, note that for the
generalized soliton solution $T_{\alpha}\tau=0$ holds only for
$1\leq\alpha\leq l-2$, and that it depends also on the matrix $A$.
Proposition 8.7 implies that the generalized soliton solutions of type
$\langle l,l+1,\ldots,2l-2\rangle$ also satisfy Proposition 8.3 and Lemma 8.4, but
only for $1\leq\alpha\leq l-2$.
## Appendix A The Lax-Sato formulation of the KP hierarchy
Here we give a brief summary of the Lax-Sato formulation of the KP hierarchy
and the $l$-th generalized KdV hierarchy. We also give the Wronskian formula
of the $\tau$-function for the KP solitons.
### A.1. The $l$-th generalized KdV hierarchy
The Sato theory of the KP hierarchy is formulated on the basis of a pseudo-
differential operator,
$L=\partial+u_{2}\partial^{-1}+u_{3}\partial^{-2}+\cdots,$
where $\partial$ is a derivative satisfying
$\partial\partial^{-1}=\partial^{-1}\partial=1$ and the generalized Leibniz
rule,
$\partial^{\nu}f\cdot=\mathop{\textstyle\sum}\limits_{k=0}^{\infty}\binom{\nu}{k}(\partial_{1}^{k}f)\partial^{\nu-k}\cdot,$
for any smooth functions $f$. (Note that the series terminates if and only if
$\nu$ is a nonnegative integer.) Then the KP hierarchy can be written in the
Lax form,
(A.1) $\partial_{n}(L)=[B_{n},\,L]\qquad\text{with}\qquad B_{n}=(L^{n})_{\geq
0}\quad(n=1,2,\ldots),$
where $(L^{n})_{\geq 0}$ represents the polynomial (differential) part of
$L^{n}$ in $\partial$. The solution of the KP equation (2.1) is given by
$u=2u_{2}$. The Lax equation (A.1) is also given by the compatibility
condition of the linear system,
(A.2) $L\phi=z^{-1}\phi,\qquad\partial_{n}\phi=B_{n}\phi,$
where $\phi$ is called the wave function of the Lax pair $(L,B_{n})$. The
compatibility among the equations $\partial_{n}\phi=B_{n}\phi$, which is
$\partial_{n}\partial_{m}\phi=\partial_{m}\partial_{n}\phi$, gives
(A.3) $\partial_{m}(B_{n})-\partial_{n}(B_{m})+[B_{n},\,B_{m}]=0,$
which is called the Zakharov-Shabat equations.
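For instance, expanding $L^{n}$ and keeping the differential part, one finds
$B_{2}=\partial^{2}+2u_{2},\qquad B_{3}=\partial^{3}+3u_{2}\partial+3u_{3}+3(\partial_{1}u_{2}),$
and the Zakharov-Shabat equation (A.3) for the pair $(B_{2},B_{3})$ is equivalent to the KP equation for $u=2u_{2}$.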
The variable $z\in\mathbb{C}$ in (A.2) may be considered as a local coordinate
at $\infty$ in the spectral space of $L$. Note that if the functions $u_{i}$’s
are all zero, then we have $L=\partial$ and $B_{n}=\partial^{n}$ and the wave
function, denoted by $\phi_{0}$, is given by
(A.4)
$\phi_{0}(z;t)=\exp\left(\mathop{\textstyle\sum}\limits_{n=1}^{\infty}\frac{t_{n}}{z^{n}}\right).$
The wave function $\phi$ is then expressed in the dressing form,
(A.5) $\phi=W\phi_{0}\qquad\text{with}\qquad
W=1-w_{1}\partial^{-1}-w_{2}\partial^{-2}-\cdots,$
where the pseudo-differential operator $W$ is called the dressing operator.
Notice that all the functions $u_{i}$’s in $L$ can be determined by $w_{j}$’s
in $W$ through
$L=W\partial W^{-1}.$
For example, we have
$u_{2}=\partial_{1}w_{1},\quad
u_{3}=\partial_{1}w_{2}+w_{1}\partial_{1}w_{1},\quad\ldots.$
Then, from the Lax equation, the dressing operator $W$ satisfies
(A.6) $\partial_{n}(W)=B_{n}W-W\partial^{n}\qquad\text{for}\quad
n=1,2,\cdots,$
which is sometimes called the Sato equation.
The $l$-th generalized KdV hierarchy is the $l$-reduction of the KP hierarchy
defined by
$L^{l}=(L^{l})_{\geq 0},$
that is, the $l$-th power of $L$ becomes a differential operator. This means
that the functions $u_{i}$’s are determined by $l-1$ variables in $L^{l}$ in
the form,
(A.7)
$L^{l}=\partial^{l}+v_{2}\partial^{l-2}+v_{3}\partial^{l-3}+\cdots+v_{l-1}\partial+v_{l}.$
Also note that those variables are determined by $w_{i}$’s in $W$. From (A.1),
the $l$-reduction gives the constraints,
$\partial_{nl}(L)=0\qquad\text{for}\quad n=1,2,\ldots,$
that is, all the variables $v_{i}$’s do not depend on the times $t_{nl}$. The
original KdV hierarchy is given by the 2-reduction, and the solutions do not
depend on the times $t_{2n}$.
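For instance, for $l=2$ the reduction gives $L^{2}=\partial^{2}+v_{2}$ with $v_{2}=2u_{2}=u$, and then $B_{3}=\partial^{3}+\frac{3}{2}u\partial+\frac{3}{4}(\partial_{1}u)$, so that the flow $\partial_{3}(L^{2})=[B_{3},\,L^{2}]$ becomes the KdV equation
$4\partial_{3}u=\partial_{1}^{3}u+6u\partial_{1}u.$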
### A.2. $N$ truncation and the $\tau$-function
Here we explain the Wronskian formula of the $\tau$-function. First note that
a finite truncation of $W$ with some positive integer $N$, given by
$W=1-w_{1}\partial^{-1}-w_{2}\partial^{-2}-\cdots-w_{N}\partial^{-N},$
is invariant under (A.6). We then consider the $N$-th order differential
equation,
$W\partial^{N}f=f^{(N)}-w_{1}f^{(N-1)}-w_{2}f^{(N-2)}-\cdots-w_{N}f=0,$
where $f^{(n)}=\partial_{1}^{n}f$. Let $\\{f_{i}:i=1,\ldots,N\\}$ be a
fundamental set of solutions of the equation $W\partial^{N}f=0$. Then the
functions $w_{i}$’s are given by
(A.8) $w_{i}=-\frac{1}{\tau}p_{i}(-\tilde{\partial})\tau\qquad\text{for}\quad
i=1,\ldots,N,$
where $p_{i}(x)$ is the elementary Schur polynomial of degree $i$ and
$\tilde{\partial}=(\partial_{1},\frac{1}{2}\partial_{2},\frac{1}{3}\partial_{3},\ldots)$.
Here $\tau$ is the $\tau$-function given by the Wronskian form,
(A.9)
$\tau=\text{Wr}(f_{1},f_{2},\ldots,f_{N})=\left|\begin{matrix}f_{1}&f_{2}&\cdots&f_{N}\\\
\partial_{1}f_{1}&\partial_{1}f_{2}&\cdots&\partial_{1}f_{N}\\\
\vdots&\vdots&\ddots&\vdots\\\
\partial_{1}^{N-1}f_{1}&\partial_{1}^{N-1}f_{2}&\cdots&\partial_{1}^{N-1}f_{N}\end{matrix}\right|.$
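For instance, since $p_{1}(x)=x_{1}$, formula (A.8) gives $w_{1}=\frac{\partial_{1}\tau}{\tau}=\partial_{1}\ln\tau$, which leads directly to the formula $u=2\partial_{1}w_{1}=2\partial_{1}^{2}\ln\tau$ below.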
For the time-evolution of the functions $f_{i}$, we consider the following
(diffusion) hierarchy,
$\partial_{n}f_{i}=\partial_{1}^{n}f_{i}\qquad\text{for}\quad 1\leq i\leq
N,\quad n\in\mathbb{N},$
which gives the solution of the Sato equation (A.6). Then the solution of the
KP equation can be expressed in terms of the $\tau$-function by
$u(t)=2u_{2}(t)=2\partial_{1}w_{1}(t)=2\partial_{1}^{2}\ln\tau(t).$
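As a simple illustration, take $N=1$ and $f_{1}=e^{\theta_{1}}+e^{\theta_{2}}$ with $\theta_{i}=\kappa_{i}t_{1}+\kappa_{i}^{2}t_{2}+\kappa_{i}^{3}t_{3}+\cdots$, which solves the diffusion hierarchy above. Then $\tau=f_{1}$ gives
$u(t)=2\partial_{1}^{2}\ln\tau(t)=\frac{(\kappa_{1}-\kappa_{2})^{2}}{2}\,\mathrm{sech}^{2}\,\frac{\theta_{1}-\theta_{2}}{2},$
the one-soliton solution of the KP equation.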
## References
* [1] S. Abenda, On a family of KP multi-line solitons associated to rational degenerations of real hyperelliptic curves and to the finite non-periodic Toda hierarchy, J. Geom. Phys. 119 (2017) 112–138.
* [2] S. Abenda and P. Grinevich, Rational degenerations of M-curves, totally positive Grassmannians and KP2-solitons, Comm. Math. Phys. 361 (2018) 1029–1081.
* [3] E.B. Belokolos, A.I. Bobenko, V.Z. Enol'skii, A.R. Its and V.B. Matveev, Algebro-Geometric Approach to Nonlinear Integrable Equations (Springer-Verlag Berlin Heidelberg, 1994).
* [4] J. L. Burchnall and T. W. Chaundy, Commutative ordinary differential operators, Proc. London Math. Soc. 21 (1923) 420–440.
* [5] V. M. Buchstaber, V. Z. Enolski and D. V. Leikin, Kleinian functions, hyperelliptic Jacobians and applications, Rev. Math. Math. Phys. 10, (1997) 1-103.
* [6] V. M. Buchstaber, V. Z. Enolski and D. V. Leikin, Rational analogues of Abelian functions, Func. Anal. Appl. 33, (1999) 83-94.
* [7] R.-O. Buchweitz and G.-M. Greuel, The Milnor number and deformations of complex curve singularities, Invent. Math 58, (1980) 241-281.
* [8] G.-M. Greuel, On deformation of curves and a formula of Deligne, Algebraic Geometry, proceedings La Rábida 1981, LNM 961, Springer-Verlag, 1982, pp. 141-168.
* [9] Y. Kodama, KP solitons and the Grassmannians: Combinatorics and Geometry of Two-Dimensional Wave Patterns, Springer Briefs in Mathematical Physics 22, (Springer, Singapore 2017).
* [10] Y. Kodama, L. Williams, KP solitons and total positivity for the Grassmannian, Invent. Math. 198, (2014) 647-699, (arXiv:1106.0023).
* [11] Y. Kodama, L. Williams, The Deodhar decomposition of the Grassmannian and the regularity of KP solitons, Adv. Math. 244 (2013) 979-1032. (arXiv:1204.6446).
* [12] J. Komeda, S. Matsutani and E. Previato, The sigma function for Weierstrass semigroups $\langle 3,7,8\rangle$ and $\langle 6,13,14,15,16\rangle$. Int. J. Math. 24, (2013) 1350085, 58, (arXiv: 1303.0451).
* [13] J. Komeda, S. Matsutani and E. Previato, The Riemann constant for a non-symmetric Weierstrass semigroup, Arch. Math. 107 (2016) 499-509.
* [14] J. Komeda, S. Matsutani and E. Previato, The sigma function for trigonal cyclic curves, Lett. Math. Phys. DOI 10.1007/s11005-018-1116-6 (arXiv:1712.00694).
* [15] I.M. Krichever, Integration of nonlinear equations by the methods of algebraic geometry Funct. Anal. Appl., 11 (1977), 12-26.
* [16] I. G. Macdonald, Symmetric Functions and Hall Polynomials, Oxford Mathematical Monographs, 2nd edn. (Oxford Science Publications, 1995).
* [17] S. Matsutani and J. Komeda, Sigma functions for a space curve of type $(3,4,5)$. J. Geom. Symmetry Phys. 30, (2013) 75-91, (arXiv:1112.4137).
* [18] M. Mulase, Cohomological structure in soliton equations and jacobian varieties, J. Diff. Geom. 19 (1984) 403–430.
* [19] D. Mumford, Tata Lectures on Theta II: Jacobian theta functions and differential equations, Progress in Mathematics 43 (Birkhäuser, 1984).
* [20] A. Nakayashiki, On algebraic expansions of sigma functions for $(n,s)$ curves, Asian J. Math. 14, (2010) 175-212, (arXiv:0803.2083).
* [21] A. Nakayashiki, Degeneration of trigonal curves and solutions of the KP hierarchy, Nonlinearity 31 (2018) 3567-3590, (arXiv:1708.03440).
* [22] A. Nakayashiki, On reducible degeneration of hyperelliptic curves and soliton solutions, (arXiv:1808.06748).
* [23] A. Nakayashiki, S. Okada and Y. Shigyo, On the expansion coefficients of KP tau function, J. Integrable Systems (JoIS), 2 (2017), 1-23. (arXiv:1704.03659).
* [24] H. C. Pinkham, Deformation of algebraic curve with $G_{m}$ action, Astérisque 20 (1974) 1-131.
* [25] J. C. Rosales and P. A. García-Sánchez, Numerical Semigroups, Developments in Mathematics 20, (Springer Science+Business Media, LLC 2009).
* [26] M. Sato, Soliton equations as dynamical systems on an infinite dimensional Grassmannian manifold, RIMS Kokyuroku (Kyoto University) 439 (1981), 30-46.
* [27] M. Sato and M. Noumi, Soliton equation and universal Grassmann manifold, [in Japanese], Sophia University Kokyuroku in Mathematics, 18 (1984) 1–131.
* [28] G. Segal and G. Wilson, Loop groups and equations of KdV type, Inst. Hautes Etudes Sci. Publ. Math. 61 (1985), 5-65.
* [29] K. Takasaki, Geometry of universal Grassmann manifold from algebraic point of view, Rev. Math. Phys. 1 (1989) 1-46.
|
Families of Sets in Bishop Set Theory
Iosif Petrakis
Mathematics Institute, Ludwig-Maximilians-Universität München
CHAPTER: ABSTRACT
We develop the theory of set-indexed families of sets within the informal Bishop Set Theory $(\BST)$, a
reconstruction of Bishop's theory of sets. The latter is the informal theory of sets and functions underlying
Bishop-style constructive mathematics $(\BISH)$ and it is developed in Chapter 3 of Bishop's seminal
book Foundations of Constructive Analysis [9] and in Chapter
3 of Constructive Analysis [19] that Bishop co-authored with Bridges.
In the Introduction we briefly present the relation of Bishop's set theory to the set-theoretic
and type-theoretic foundations of mathematics, and we describe the features of $\BST$ that “complete”
Bishop's theory of sets. These are the explicit use of the class “universe of sets”, a clear distinction
between sets and classes, the explicit use of dependent operations, and the concrete formulation of
various notions of families of sets.
In Chapter <ref> we present the fundamentals of Bishop's theory of sets, extended with the features which form $\BST$. The universe $\D V_0$ of sets is implicit in Bishop's work, while the notion of a dependent operation
over a non-dependent assignment routine from a set to $\D V_0$ is explicitly mentioned, although in a rough way.
These concepts are necessary for a concrete definition of a set-indexed family of sets,
the main object of our study, which is only mentioned by Bishop.
In Chapter <ref> we develop the basic theory of set-indexed families of sets and of
family-maps between them. We study the exterior union of a family of sets $\Lambda$, or the $\sum$-set of
$\Lambda$, and the set
of dependent functions over $\Lambda$, or the $\prod$-set of $\Lambda$. We prove the distributivity of
$\prod$ over $\sum$ for families of sets indexed by a product of sets, which is the translation
of the type-theoretic axiom of choice into $\BST$. Sets of sets are special set-indexed families of sets
that allow “lifting” of functions on the index-set to functions on them. The direct families of sets and the
set-relevant families of sets are introduced. The index-set of the former is a directed set, while the
transport maps of the latter are more than one and appropriately indexed. With the use of the introduced universe
$\D V_0^{\im}$ of sets and impredicative sets we study families of families of sets, the next rung of the
ladder of set-like objects in $\D V_0^{\im}$.
In Chapter <ref> we develop the basic theory of set-indexed families of subsets
and of the corresponding family-maps between them. In contrast to set-indexed families of sets, the properties
of which are determined “externally” through their transport maps, the properties of a set-indexed family
$\Lambda(X)$ of subsets of a given set $X$ are determined “internally” through the embeddings of the
subsets of $\Lambda(X)$ to $X$. The interior union of $\Lambda(X)$ is the internal analogue to the $\sum$-set
of a set-indexed family of sets $\Lambda$,
and the intersection of $\Lambda(X)$ is the internal analogue to the $\prod$-set of $\Lambda$.
Families of sets over products, sets of subsets, and direct families of subsets are the internal analogue to the
corresponding notions for families of sets. Set-indexed families of partial functions
and set-indexed families of complemented subsets, together with their corresponding family-maps, are studied.
In Chapter <ref> we connect various notions and results from the theory of families of sets
and subsets to the theory of Bishop spaces, a function-theoretic approach to constructive topology. Associating
in an appropriate way to each set $\lambda_0(i)$ of an $I$-family of sets $\Lambda$ a Bishop topology $F_i$, a
spectrum $S(\Lambda)$ of Bishop spaces is generated. The $\sum$-set and the $\prod$-set of a spectrum $S(\Lambda)$
are equipped with canonical Bishop topologies. A direct spectrum of Bishop spaces is a family of Bishop spaces
associated to a direct family of sets. The direct and inverse limits of direct spectra of Bishop spaces are studied.
Direct spectra of Bishop subspaces are also examined. Many Bishop topologies used in this chapter are defined inductively within the extension $\BISH^*$ of $\BISH$ with inductive definitions with rules of countably many premises.
In Chapter <ref> we study the Borel and Baire sets within Bishop spaces as a constructive
counterpart to the study of Borel and Baire algebras within topological spaces. As we use the inductively
defined least Bishop topology, and as the Borel and Baire sets over a family of $F$-complemented subsets are
defined inductively, we work again within $\BISH^*$. In contrast to the classical theory, we show that
the Borel and the Baire sets of a Bishop space coincide. Finally, our reformulation within $\BST$ of the Bishop-Cheng
definition of a measure space and of an integration space, based on the notions of families of complemented
subsets and of families of partial functions, facilitates a predicative reconstruction of the originally
impredicative Bishop-Cheng measure theory.
CHAPTER: INTRODUCTION
Bishop's theory of sets is Bishop's account of the informal theory of sets and functions that underlies
Bishop-style constructive mathematics $\BISH$. We briefly present the relation of this theory to
the set-theoretic and type-theoretic foundations of mathematics. Bishop Set Theory $(\BST)$ is
our “completion” of Bishop's theory of sets
with a universe of sets, with a clear distinction between sets and classes, with an explicit use
of dependent operations, and with a concrete formulation of various notions of families of sets.
We explain how the theory of families of sets within $\BST$ that is elaborated in this work is used,
in order to reveal proof-relevance in $\BISH$, to develop the theory of spectra of Bishop spaces, and
to reformulate predicatively the fundamental notions of the impredicative Bishop-Cheng measure theory.
§ BISHOP'S THEORY OF SETS
The theory of sets underlying Bishop-style constructive mathematics $(\BISH)$ was only sketched in Chapter 3 of
Bishop's seminal book [9]. Since Bishop's central aim in [9] was to show
that a large part of advanced mathematics can be done within a constructive and
computational framework that does not contradict the classical practice, the inclusion of a
detailed account of the set-theoretic foundations of $\BISH$ could possibly be against the effective
delivery of his message.
The Bishop-Cheng measure theory, developed in [18], was very
different from the measure theory of [9], and the inclusion of an enriched version of the former
into [19], the book on constructive analysis that Bishop co-authored with Bridges later,
affected the corresponding Chapter 3 in two main respects. First, the inductively defined notion of the set
of Borel sets generated by a given family of complemented subsets of a set $X$, with respect to a set
of real-valued functions on $X$, was excluded, as unnecessary, and, second, the operations on the
complemented subsets of a set $X$ were defined differently, and in accordance to the needs
of the new measure theory.
Yet, in both books many issues were left untouched, a fact that often was a source of confusion.
On many occasions, especially
in the measure theory of [18] and [19], the powerset was treated as a set, while in the
measure theory of [9], Bishop generally avoided the powerset by using appropriate families of
subsets instead. In later works of Bridges and Richman, like [20] and [76], the powerset was
clearly used as a set, in contrast though, to the predicative spirit of [9].
The definition of the concept of a family of
sets indexed by a (discrete) set was asked as an exercise in [9] (Exercise 2, p. 72), and a definition,
attributed to Richman, was given in [19] (Exercise 2, p. 78). An elaborate study of this concept
within $\BISH$ is missing, though, despite its central character in the measure theory of [9], its
extensive use in the theory of Bishop spaces [88], and in abstract constructive algebra [76].
Actually, in [76] Richman introduced the more general notion of a family of objects of
a category indexed by some set, but the categorical component in the resulting mixture of Bishop's set theory and
category theory was not explained in constructive terms [this was done, e.g., in the
formulation of category theory in homotopy type theory (Chapter 9 in [127])].
Contrary to the standard view on Bishop's relation to formalisation, Bishop was very interested in it.
In [12], p. 60, he writes:
Another important foundational problem is to find a formal system that will efficiently
express existing predictive mathematics. I think we should keep the formalism as primitive as possible,
starting with a minimal system and enlarging it only if the enlargement serves a genuine mathematical
need. In this way the formalism and the mathematics will hopefully interact to the advantage
of both.
Actually, in [12] Bishop proposed $\Sigma$, a variant
of Gödel's $T$, as a formal system for $\BISH$. In the last two pages of [12] he sketched very
briefly how $\Sigma$ can be presented as a functional programming language, like Fortran and Algol.
In p. 72 he also added:
It would be interesting to take $\Sigma$ as the point of departure for a reasonable programming language,
and to write a compiler.
Bishop's views on a full-scale program on the foundations of mathematics are realised in a more developed
form in his, unfortunately, unpublished papers [10] and [11]. In the first, Bishop elaborated
a version of dependent type theory with one universe, in order to formalise $\BISH$. This was the first time that some form of type theory was used to formalise constructive mathematics.
As Martin-Löf explains in [71], p. 13, he got access to Bishop's book only shortly after his own book
on constructive mathematics [71] was finished. Bishop's book [9] also motivated his version of type theory.
Martin-Löf opened his first published paper on type theory ([72], p. 73) as follows.
The theory of types with which we shall be concerned is intended to be a full scale system for formalizing
intuitionistic mathematics as developed, for example, in the book of Bishop.
The type-theoretic interpretation of Bishop's set theory into the theory of setoids (see especially the
work of Palmgren [81]-[87]) has become nowadays the standard way to understand Bishop sets
(as far as I know, this is a term due to Palmgren). A setoid is a type $A$ in a fixed universe $\C U$ equipped
with a term $\simeq \colon A \to A \to \C U$ that satisfies the properties of an equivalence relation. The identity
type of Martin-Löf's intensional type theory ($\MLTT$) (see [74]), expresses, in a proof-relevant way,
the existence of the least reflexive relation on a type, a fact with no counterpart in Bishop's set theory. As
a consequence, the free setoid on a type is definable (see [85], p. 90), and the presentation axiom in
setoids is provable (see Note <ref>). Moreover, in $\MLTT$ the totality of families of types over a type $I$ is the type $I \to \C U$,
which belongs to the successor universe $\C U{'}$ of $\C U$. In Bishop's set theory though, where only one
universe of sets is implicitly used, the set-character of the totality of all families of sets indexed by some set $I$
is questionable from the predicative point of view (see our comment after the Definition <ref>).
The quest $\B Q$ of finding a formal system suitable for Bishop's system of informal constructive
mathematics $\BISH$ dominated the foundational studies of the 1970's. Myhill's system $\CST$,
introduced in [80], and later Aczel's $\CZF$ (see [1]),
Friedman's system $B$, developed in [51], and Feferman's system of explicit mathematics $T_0$
(see [48] and [49]), are some of the systems related to $\B Q$,
but soon developed independently from it. These systems
were influenced a lot by the classical Zermelo-Fraenkel set theory, and could be described as “top-down”
approaches to the goal of $\B Q$, as they have many “unexpected” features with respect to $\BISH$.
Using Feferman's terminology from [49], these formal systems are not completely
faithful to $\BISH$. If $T$ is a formal theory of an informal body of mathematics $M$,
Feferman gave in [49] the following definitions.
(i) $T$ is adequate for $M$, if every concept, argument,
and result of $M$ is represented by a (basic or defined) concept, proof, and a theorem, respectively, of $T$.
(ii) $T$ is faithful to $M$, if every basic concept of $T$ corresponds to a basic concept of $M$ and every
axiom and rule of $T$ corresponds to or is implicit in the assumptions and reasoning followed in $M$ (i.e., $T$
does not go beyond $M$ conceptually or in principle).
In [5], p. 153, Beeson called $T$ suitable to $M$, if $T$ is adequate for $M$ and faithful to $M$.
Beeson's systems $S$ and $S_0$ in [5], and Greenleaf's system
of liberal constructive set theory $\LCST$ in [55] were dedicated to $\B Q$. Especially Beeson tried to
find a faithful and adequate formalisation of $\BISH$, and, by including a serious amount of proof relevance,
his systems stand in between the set-theoretic, proof-irrelevant point of view and the type-theoretic,
proof-relevant point of view.
All the aforementioned systems, though, were not really “tested” with respect to $\BISH$. Only very small parts of $\BISH$
were actually implemented in them, and their adequacy for $\BISH$ was mainly a claim, rather than
a shown fact. The implementation of Bishop's constructivism within a formal system for it was taken seriously
in the type-theoretic formalisations of $\BISH$, and especially in the work of Coquand (see e.g., [37]
and [40]), Palmgren (see e.g., [62] and the collaborative work [39]), the
Nuprl research group of Constable (see e.g., [36]), and of Sambin and Maietti
within the Minimalist Foundation (see [113] and [70]).
§ BISHOP SET THEORY $(\BST)$ AND BISHOP'S THEORY OF SETS
Bishop set theory $(\BST)$ is an informal, constructive theory of totalities and assignment routines
that serves as a “completion” of Bishop's theory of sets. Its first aim is to fill in the “gaps”, or
highlight the fundamental notions that were suppressed by Bishop in his account of the set theory underlying $\BISH$.
Its second aim is
to serve as an intermediate step between Bishop's theory of sets and a suitable, in Beeson's sense, formalisation of
$\BISH$. To assure faithfulness, we use concepts or principles that appear, explicitly or implicitly, in $\BISH$.
Next we describe briefly the features of $\BST$ that “complete” Bishop's theory of sets.
1. Explicit use of a universe of sets. Bishop used a universe of sets only implicitly. E.g., he “roughly”
describes in [9], p. 72, a set-indexed family of sets as
$\ldots$ a rule which assigns to each $t$ in a discrete set $T$ a set $\lambda(t)$.
Every other rule, or assignment routine mentioned by Bishop is from one given totality, the domain of the rule,
to some other totality, its codomain. The only way to make the rule of a family of sets compatible with this
pattern is to employ a totality of sets.
In [10] Bishop explicitly used a universe in his type theory.
Here we use the totality $\D V_0$ of sets, which is defined in an open-ended way, and it contains the primitive
set $\Nat$ and all defined sets. $\D V_0$ itself is not a set, but a class. It is a notion instrumental to the definition
of dependent operations, and of a set-indexed family of sets.
2. Clear distinction between sets and classes. A class is a totality defined through a membership condition
in which a quantification over $\D V_0$ occurs. The powerset $\C P(X)$ of a set $X$, the totality $\C P^{\Disj}(X)$
of complemented subsets of a set $X$, and the totality $\C F(X,Y)$ of partial functions from a set $X$ to a set $Y$
are characteristic examples of classes. A class is never used here as the domain of an assignment routine, only as
a codomain of an assignment routine.
3. Explicit use of dependent operations. The standard view, even among practitioners of Bishop-style
constructive mathematics, is that dependency is not necessary to $\BISH$.
Dependent functions though, do appear explicitly in Bishop's definition of the intersection $\bigcap_{t \in T}
\lambda (t)$ of a family $\lambda$ of subsets of some set $X$ indexed by an inhabited set $T$
(see [9], p. 65, and [19], p. 70). We show that the elaboration of dependency within $\BISH$ is only fruitful
to it. Dependent functions are not only necessary to the definition of products of families
of sets indexed by an arbitrary set, but, as we show throughout this work, also in many areas of constructive mathematics.
Some form of dependency is also formulated in Bishop's type theory [10]. The somewhat “silent” role of
dependency within Bishop's set theory is replaced by a central role within $\BST$.
4. Elaboration of the theory of families of sets. With the use of the universe $\D V_0$, of the notion of
a non-dependent assignment routine $\lambda_0$ from an index-set $I$ to $\D V_0$, and of a certain dependent operation $\lambda_1$, we
define explicitly in Definition <ref> the notion of a family of sets indexed by $I$.
Although an $I$-family of sets is a certain function-like object, it can be understood also as an object
of a level one higher than that of a set. The corresponding notion of a “function” from an $I$-family $\Lambda$
to an $I$-family $M$ is that of a family-map. Operations between sets generate operations between families of
sets and their family-maps. If the index-set $I$ is a directed set, the corresponding notion of a family of sets
over it is that of a direct family of sets. The constructions for families of sets can be generalised
appropriately for families of families of sets (see Section <ref>). Families of subsets of a
given set $X$ over an index-set $I$ are special $I$-families that deserve an independent treatment. Families
of equivalence classes, families of partial functions, families of complemented subsets and direct families
of subsets are some of the variations of set-indexed families of subsets that are studied here and have many
applications in constructive mathematics.
Here we apply the general theory of families of sets, in order:
I. To reveal proof-relevance in $\BISH$. Classical mathematics is proof-irrelevant, as it is indifferent
to objects that “witness” a relation or a more complex formula. On the other extreme, Martin-Löf type theory
is proof-relevant, as every element of a type $A$ is a proof of the “proposition” $A$. Bishop's presentation
of $\BISH$ was on purpose closer to the proof-irrelevance of classical mathematics, although a form of proof-relevance
was evident in the use of several notions of moduli (of convergence, of uniform continuity, of uniform
differentiability etc.). Focusing on membership and equality conditions for sets given by appropriate existential
formulas we define certain families of proof-sets that provide a $\BHK$-interpretation within $\BST$ of formulas
that correspond to the standard atomic formulas of a first order theory. With the machinery of the general theory
of families of sets this $\BHK$-interpretation within $\BST$ is extended to complex formulas. Consequently, we can associate to many formulas $\phi$ of $\BISH$ a set $\Prf(\phi)$ of “proofs” or witnesses of $\phi$. Abstracting
from several examples of totalities in $\BISH$ we define the notion of a set with a proof-relevant equality,
and of a Martin-Löf set, a special case of the former, the equality of which corresponds to the identity type of
a type in intensional $\MLTT$. Through the concepts and results of $\BST$ notions and facts of $\MLTT$ and its
extensions (either with the axiom of function extensionality, or with Voevodsky's axiom of univalence) can be
translated into $\BISH$. While Bishop's theory of sets is standardly understood through its translation to
$\MLTT$ (see e.g., [39]), the development of $\BST$ offers a (partial) translation in the converse direction.
II. To develop the theory of spectra of Bishop spaces. A Bishop space is a constructive,
function-theoretic alternative to the notion of a topological space. A Bishop topology $F$ on a set $X$ is a
subset of the set $\D F(X)$ of real-valued functions on $X$ that includes the constant functions and is closed
under addition, composition with functions in the set $\BR$ of Bishop continuous functions from $\Real$ to $\Real$, and uniform limits.
Hence, in contrast to topological spaces, continuity of real-valued functions is a primitive notion and a concept
of open set comes a posteriori. A Bishop topology on a set can be seen as an abstract and
constructive approach to the ring of continuous functions $C(X)$ of a topological space $X$.
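For instance, one can check that the set $\mathrm{Const}(X)$ of constant real-valued functions on $X$ and the set $\D F(X)$ itself satisfy these closure conditions; they are, respectively, the least and the largest Bishop topology on $X$.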
Associating appropriately a Bishop topology to the set $\lambda_0(i)$ of a family of sets over a set $I$,
for every $i \in I$, the notion of a spectrum of Bishop spaces is defined. If $I$ is a directed set, we get a
direct spectrum. The theory of direct spectra of Bishop spaces and their limits is developed in
Chapter <ref>, in analogy to the classical theory of spectra of topological spaces and their limits.
The constructive theory of spectra of other structures, like groups, or rings, or modules, can be developed
along the same lines.
III. To reformulate predicatively the basics of Bishop-Cheng measure theory. The standard approach
to measure theory (see e.g., [125], [57]) is to take measure as a primitive notion, and to
define integration with respect to a given measure. An important alternative, and, as argued by
Segal in [120] and [121], a more natural approach to measure theory,
is to take the integral on a certain set of functions as a primitive notion, extend its definition to an appropriate,
larger set of functions, and then define measure at a later stage. This is the idea of the Daniell integral,
defined by Daniell in [43],
which was taken further by Weil, Kolmogoroff, and Carathéodory (see [130], [67], and [29],
respectively). In the general framework of constructive-computable mathematics, there are many approaches
to measure and probability theory. There is an extended literature in intuitionistic measure theory
(see e.g., [59]), in measure theory within the computability framework of Type-2 Theory of Effectivity
(see e.g., [46]), in Russian constructivism (especially in the work of Šanin [114]
and Demuth [21]), in type theory, where the main interest lies in the creation of probabilistic programming
(see e.g., [8]), and recently also in homotopy type theory (see [47]), where
homotopy type theory (see [127]) is applied to probabilistic programming.
Within $\BISH$, measure and probability theory have taken two main directions.
The first direction, developed by Bishop and Cheng in [18] and by Chan in [30]$-$[34],
is based on the notion of integration space, a constructive version of the Daniell integral, as
a starting point of constructive measure theory. Following the aforementioned
spirit of classical algebraic integration theory, Bishop and Cheng defined first
the notion of an integrable function through the notion of an integration space, and afterwards
the measure of an integrable set. In their definition of integration space though, Bishop and Cheng used the
impredicative concept $\C F(X)$ of all partial functions from a set $X$ to $\Real$. Such a notion makes the
extraction of the computational content of $\CMT$ and the implementation of $\CMT$ in some programming language impossible.
The second direction to constructive measure theory, developed by Coquand, Palmgren and Spitters
in [38], [123] and [41], is based on the recognition of the above problem of the
Bishop-Cheng theory and of the advantages of working within the abstract, algebraic, and point-free framework of
Boolean rings or of vector lattices. In analogy to Segal's notion of a probability algebra, the starting notion
is a boolean ring equipped with
an inequality and a measure function, which is called a measure ring, on which integrable and measurable
functions can be defined. One can show that the integrable sets of Bishop-Cheng form a measure ring. In general,
the second direction to constructive measure theory is considered technically and conceptually simpler.
In Chapter <ref> we reconstruct the Bishop-Cheng notion of measure space within $\BST$, where a
set of measurable sets is not an appropriate set of complemented subsets, as it is usually understood, but a
set-indexed family of complemented subsets. This fact is acknowledged by Bishop in [12], but it is
completely suppressed later by him and his collaborators (Cheng and Chan). A similar indexing appears in a
predicative formulation of the Bishop-Cheng notion of an integration space.
The notions of a set-indexed family of sets and of a set-indexed family of subsets of a given set are shown here to be
important tools in the precise formulation of abstract notions in constructive mathematics. Avoiding them,
makes the reading of constructive mathematics easier and very close to the reading of classical mathematics.
Using them, makes the writing of constructive mathematics more precise, and seriously enriches its content.
As the fundamental notion of a family of sets can be described both in categorical and type-theoretic terms,
many notions and constructions from category theory and dependent type theory are represented in $\BST$.
While category theory and standard set-theory, or dependent type theory and standard set-theory do not match perfectly,
large parts of category theory and dependent type theory are reflected naturally in Bishop Set Theory (see also section <ref>).
§ NOTES
Regarding the exact time that Bishop's unpublished papers [10] and [11] were written,
it was difficult to find an answer. Bishop's scheme of presenting a formal system for $\BISH$ and of elaborating its implementation in some functional programming language is found both in [12] and in Bishop's unpublished papers.
The first is Bishop's contribution to the proceedings of the Buffalo meeting in 1968 that were
published in [66]. As Per Martin-Löf informed me, Bishop was not present at the meeting.
The presentation of the formal system $\Sigma$ and its presentation as a programming language in [12] is
very sketchy. Instead, the presentation of the type theory for $\BISH$ in [10], and its presentation as
a programming language in [11] is an elaborated enterprise. I have heard a story of an unsuccessful
effort of Bishop to publish [10], due to some parallels between [10] and de Bruijn's work.
According to that story, Bishop was unwilling to pursue the publication of his type-theoretic formalism after
that rejection. In any event, Bishop's unpublished papers must have been written between 1967 and 1970. Maybe,
the period between 1968 and 1969 is a better estimation. In October 1970 Bishop and Cheng sent to the editors
of the Memoirs of the American Mathematical Society their short monograph [18], a work that deviates a
lot from the predicative character of [9]. In my view, the papers [10] and [11] do not
fit to Bishop's period after 1970.
[The presentation axiom for setoids]
If $A : \C U$, then, by Martin-Löf's $J$-rule, $=_A$ is the least reflexive relation on $A$, and
$\varepsilon A := (A, =_A)$ is the free setoid on $A$. According to the universal
property of a free setoid, for every setoid $\C B := (B, \sim_B)$ and every function $f : A \to B$, there is
a setoid-map $\varepsilon f \colon A \to \C B$ such that the following left diagram commutes
[Diagrams: on the left, the triangle with $\id_A \colon A \to A$, $f \colon A \to B$ and the dashed arrow $\varepsilon f \colon A \to \C B$ commutes; on the right, the triangle with $f \colon A \twoheadrightarrow B$, $g \colon P \to B$ and the dashed arrow $h \colon P \to A$ commutes, i.e., $f \circ h = g$.]
To show this, let $(\varepsilon f)(a) := f(a)$. Then
$a =_A a{'} \To (\varepsilon f)(a) =_B (\varepsilon f)(a{'})$, and since $=_B$ is the least reflexive relation on $B$, we get $f(a) \sim_B f(a{'})$.
A setoid $\C A$ is a choice setoid, if every $f: X \twoheadrightarrow A$ has a right
inverse, i.e., there is $g \colon A \to X$ such that $f \circ g = \id_A$.
With the use of the type-theoretic axiom of choice (see [127], section 1.6) one can show that the free setoid
$(A, =_A)$ is a choice setoid. Using the identity map, every setoid $\C A$ is the quotient of the free setoid on $A$,
hence every setoid is the quotient of a choice setoid.
If $\C C$ is a category, an object $P$ of $\C C$ is called projective,
if for all objects $A, B$ of $\C C$ and every arrow $f : A \twoheadrightarrow B$ and $g \colon P \to B$,
there is $h \colon P \to A$ such that the above right diagram commutes.
A category
$\C C$ satisfies the presentation axiom, if for every object $C$ in $\C C$ there
is $f: P \twoheadrightarrow C$, where
$P$ is projective. For the relation between the presentation axiom and various choice principles
see [105].
It is immediate to show that a projective setoid is a choice setoid. For the converse, and following [39],
p. 74, let $(P, \sim_P)$ be a choice setoid. To show that it is projective, we need to define a setoid-map $h$,
given setoid maps $f$ and $g$ as above. Let
\[Q := \sum_{(a,p) : A \times P}f(a) =_B g(p),\]
and let the projections $p_1 : Q \to A$, where $p_1(a,p,e) := a$,
and $p_2 \colon Q \to P$, where $p_2(a,p,e) := p$. By the definition of $Q$ we get $f \circ p_1 = g \circ p_2$.
Since $p_2 \colon Q \twoheadrightarrow P$ and $P$ is a choice setoid, there is $k \colon P \to Q$ such that
$p_2 \circ k = \id_P$. If $h := p_1 \circ k$, then
[Diagram: $P \xrightarrow{k} Q$, with $p_1 \colon Q \to A$, $p_2 \colon Q \twoheadrightarrow P$, $f \colon A \twoheadrightarrow B$, $g \colon P \to B$, and $h := p_1 \circ k \colon P \to A$.]
$f \circ (p_1 \circ k) = (f \circ p_1) \circ k = (g \circ p_2) \circ k = g \circ (p_2 \circ k) =
g \circ \id_P = g$. Consequently, every setoid is the surjective image of a choice setoid, hence of a projective setoid.
A very first and short presentation of $\BST$ is found in [95], where we write
$\CSFT$ instead of $\BST$. In [95] we also expressed
dependency through the universe of functions $\D V_1$ i.e., the totality of triplets
$(A, B, f)$, where $A, B$ are sets and $f$ is a function from $A$ to $B$. Since dependent operations
are explicitly used by Bishop e.g., in the definition of the intersection $\bigcap_{t \in T}\lambda(t)$
of a $T$-family of subsets $(\lambda(t))_{t \in T}$ of a set $X$, while $\D V_1$ is neither explicitly, nor implicitly, mentioned, we
use here the former concept.
As it is noted by Palmgren in [82], p. 35, in $\ZF$, and also in its constructive version $\CZF$, a
family of sets is represented by the fibers of a function $\lambda \colon B \to I$, where the fibers
$\lambda_i := \{ b \in B \mid \lambda(b) = i\}$ of $\lambda$, for every $i \in I$, represent the sets of the
family. Hence the notion of a family of sets is reduced to that of a set. As this reduction rests on the
replacement scheme, it is possible neither in $\MLTT$ nor in $\BST$.
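For instance, a family of two sets $A_0, A_1$ over $I := \{0, 1\}$ is represented in $\ZF$ by the function $\lambda \colon B \to I$, where $B := (\{0\} \times A_0) \cup (\{1\} \times A_1)$ and $\lambda(i, a) := i$, as the fiber $\lambda_i$ is in bijection with $A_i$, for every $i \in I$.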
CHAPTER: FUNDAMENTALS OF BISHOP SET THEORY
We present the basic elements of $\BST$, a reconstruction of Bishop's informal theory of sets, as this is
developed in chapters 3 of [9] and [19]. The main new features of $\BST$, with respect to Bishop's
account, are the explicit use of the universe $\D V_0$ of sets and the elaboration of the study of dependent
operations over non-dependent assignment routines from a set to $\D V_0$. The first notion is implicit in Bishop's work,
while the second is explicitly mentioned, although in a rough way. These concepts are necessary to the concrete
definition of a set-indexed family of sets, the main object of our study, which is only roughly mentioned by Bishop.
The various notions of families of sets introduced later, depend on the various notions of sets, subsets and assignment
routines developed in this chapter.
§ PRIMITIVES
The logical framework of $\BST$ is first-order intuitionistic logic with equality (see [118], chapter 1).
This primitive equality between terms is
denoted by $s := t$, and it is understood as a definitional, or logical,
equality. I.e., we read the equality $s := t$ as
“the term $s$ is by definition equal to the term $t$”. If $\phi$ is an appropriate formula, for the standard axiom
for equality $[a := b \ \& \ \phi(a)] \To \phi(b)$ we use the notation $[a := b \ \& \ \phi(a)] :\To \phi(b)$.
The equivalence notation $:\TOT$ is understood in the same way.
The set $(\Nat, =_{\Nat}, \neq_{\Nat})$ of natural numbers, where its canonical equality is given by
$m =_{\Nat} n :\TOT m := n$, and its canonical inequality by $m \neq_{\Nat} n :\TOT \neg(m =_{\Nat} n)$, is primitive.
The standard Peano-axioms are associated to $\Nat$.
A global operation $( \cdot, \cdot)$ of pairing is also considered primitive. I.e., if
$s, t$ are terms, their pair $(s,t)$ is a new term. The corresponding equality axiom is
$(s, t) := (s{'}, t{'}) :\TOT s := s{'} \ \& \ t := t{'}$. The $n$-tuples of given terms, for every $n$ larger
than $2$, are definable. The global projection routines $\prb_1(s, t) := s$ and $\prb_2(s, t) := t$ are also considered
primitive. The corresponding global projection routines for any $n$-tuples are definable.
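For instance, under one possible convention, triples may be defined by $(s, t, u) := ((s, t), u)$, and the corresponding projection routines by composing the global pair projections, e.g., the first projection of $(s, t, u)$ is $\prb_1\big(\prb_1\big((s, t), u\big)\big) := s$.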
An undefined notion of mathematical construction, or algorithm, or of finite routine is considered as primitive.
The main primitive objects of $\BST$ are totalities and assignment
routines. Sets are special totalities and
functions are special assignment routines, where an assignment routine is a special finite routine.
All other equalities in $\BST$ are equalities on totalities defined through an equality condition.
A predicate on a set $X$ is a bounded formula $P(x)$ with $x$ a free variable ranging over $X$,
where a formula is bounded, if every quantifier occurring in it is over a given set.
§ TOTALITIES
A primitive set $\D A$ is a totality with a given membership
$x \in \D A$, and a given equality $x =_{\D A} y$, that satisfies axiomatically the properties of
an equivalence relation. The set $\Nat$ of natural numbers is the only primitive set considered here.
A $($non-inductive$)$ defined totality $X$ is defined by a membership
condition $x \in X : \TOT \C M_X(x),$ where
$\C M_X$ is a formula with $x$ as a free variable.
If $X, Y$ are defined totalities with membership conditions $\C M_X $ and $\C M_Y$, respectively, we define
$X := Y : \TOT \big[\C M_X (x) : \TOT \C M_Y (x)\big]$, and in this case
we say that $X$ and $Y$ are definitionally equal defined totalities.
There is a special “open-ended” defined totality $\D V_0$, which is called the universe of sets. $\D V_0$
is not defined through a membership-condition, but in an open-ended way. When we say that a defined totality $X$ is
considered to be a set we “introduce” $X$ as an element of $\D V_0$. We do not add the corresponding induction,
or elimination principle, as we want to leave open the possibility of adding new sets in $\D V_0$.
A defined preset $X$, or simply, a preset, is a defined totality
$X$ the membership condition $\C M_X$ of which expresses a construction that can, in principle,
be carried out in a finite time. Formally this is expressed by the requirement
that no quantification over $\D V_0$ occurs in $\C M_X$.
A defined totality $X$ with equality, or simply, a
totality $X$ with equality, is a defined totality $X$ equipped with an equality condition
$x =_X y : \TOT \C E_X(x, y)$, where $\C E_X(x,y)$ is a formula with free variables $x$ and $y$ that
satisfies the conditions of an equivalence relation i.e., $\C E_X(x, x)$,
$\C E_X(x, y) \To \C E_X(y, x)$, and $[\C E_X(x, y) \ \& \ \C E_X(y, z)] \To \C E_X(x, z)$.
Two defined totalities with equality $(X,=_X)$ and $(Y, =_Y)$ are definitionally equal, if
$\C M_X (x) : \TOT \C M_Y (x)$ and $\C E_X (x, y) : \TOT \C E_Y (x,y)$.
A defined set is a preset with a given equality.
A set is either a primitive set, or a defined set.
A totality is a class, if it is the universe $\D V_0$, or if
quantification over $\D V_0$ occurs in its membership condition.
If $X, Y$ are sets, their product $X \times Y$ is the
defined totality with membership condition
\[ z \in X \times Y : \TOT \exists_{x \in X}\exists_{y \in Y}\big(z := (x, y)\big), \]
which is written simpler as
\[ (x, y) \in X \times Y : \TOT x \in X \ \& \ y \in Y. \]
$X \times Y$ is considered to be a set, and its equality is defined by
\[ (x, y) =_{X \times Y} (x{'}, y{'}) : \TOT x =_X x{'} \ \& \ y =_Y y{'}. \]
A bounded formula $P(x)$ on a set $X$ is called an extensional property on $X$, if
\[\forall_{x, y \in X}\big([x =_{X } y \ \& \ P(x)] \To P(y)\big). \]
The totality $X_P$ generated by $P(x)$ is defined by
\[ x \in X_P : \TOT x \in X \ \& \ P(x), \]
and the equality
of $X_P$ is inherited from the equality
of $X$. We also write $X_P := \{x \in X \mid P(x)\}$.
The totality $X_P$ is considered to be a set, and it is called the
extensional subset of $X$ generated by $P$.
Using the properties of an equivalence relation, it is immediate to show that an equality condition
$\C E_X(x,y)$ on a totality $X$ is an extensional property on the product $X \times X$ i.e.,
$[(x, y) =_{X \times X} (x{'}, y{'}) \ \&\ x =_X y] \To x{'} =_X y{'}$. Consider the following extensional subsets of
$\Nat$:
\[ \D 1 := \{x \in \Nat \mid x =_{\Nat} 0\} := \{0\}, \]
\[ \D 2 := \{x \in \Nat \mid x =_{\Nat} 0 \ \vee x =_{\Nat} 1\} := \{0, 1\}. \]
Since $n =_{\Nat} m :\TOT n := m$, the property $P(x) :\TOT x =_{\Nat} 0 \ \vee x =_{\Nat} 1$ is extensional.
If $(X, =_X)$ is a set, its diagonal
$D(X, =_X)$
is the extensional subset of $X \times X$
\[ D(X, =_X) := \{(x, y) \in X \times X \mid x =_X y\}. \]
If $=_X$ is clear from the context, we just write $D(X)$$D(X)$.
Let $X$ be a set. An inequality on $X$, or an
apartness relation on $X$, is a relation $x \neq_X y$ such that
the following conditions are satisfied:
$(\Ap_1)$ $\forall_{x, y \in X}\big(x =_X y \ \& \ x \neq_X y \To \bot \big)$.
$(\Ap_2)$ $\forall_{x, y \in X}\big(x \neq_X y \To y \neq_X x\big)$.
$(\Ap_3)$ $\forall_{x, y \in X}\big(x \neq_X y \To \forall_{z \in X}(z \neq_X x \ \vee \ z \neq_X y)\big)$.
We write $(X, =_X, \neq_X)$ to denote the equality-inequality structure of a
set $X$, and for simplicity we refer to the set
$(X, =_X, \neq_X)$. The set $(X, =_X, \neq_X)$ is called discrete, if
\[ \forall_{x, y \in X}\big(x =_X y \ \vee \ x \neq_X y\big). \]
An inequality $\neq_X$ on $X$ is called tight,
if $\neg(x \neq_X y) \To x =_X y$, for every $x,y \in X$.
An inequality relation $x \neq_X y$ is extensional on $X \times X$.
We show that if $x, y \in X$ are such that $x \neq y$, and if $x{'}, y{'} \in X$ are such that $x{'} =_X x$
and $y{'} =_X y$, then
$x{'} \neq y{'}$. By $\Ap_3$ we have that $x{'} \neq x$, which is excluded by $\Ap_1$, or $x{'} \neq y$,
which hence has to be the case. Applying $\Ap_3$ to $x{'} \neq y$, we get $y{'} \neq x{'}$, or $y{'} \neq y$. Since the last option is excluded similarly,
we conclude that $y{'} \neq x{'}$, hence $x{'} \neq y{'}$.
If $\neq_X$ is an inequality on $X$, and $P(x)$ is an extensional property on $X$, then $X_P$
inherits the inequality from $X$. Since $n \neq_{\Nat} m :\TOT \neg(n =_{\Nat} m)$,
the sets $\Nat$, $\D 1$, and $\D 2 $ are discrete. Clearly, if $(X, =_X, \neq_X)$ is discrete, then $\neq_X$ is tight.
Let $(X, =_X, \neq_X)$ and $(Y, =_Y, \neq_Y)$ be sets.
(i) The canonical inequality on $X \times Y$ induced
by $\neq_X$ and $\neq_Y$, which is
defined by
\[ (x, y) \neq_{X \times Y} (x{'}, y{'}) :\TOT x \neq_X x{'} \ \vee \ y \neq_Y y{'}, \]
for every $(x,y)$ and $(x{'}, y{'}) \in X \times Y$, is an inequality on $X \times Y$.
(ii) If $(X, =_X, \neq_X)$ and $(Y, =_Y, \neq_Y)$ are discrete, then
$(X \times Y, =_{X \times Y}, \neq_{X \times Y})$ is discrete.
The proof of (i) is immediate. To show (ii), let $(x, y), (x{'}, y{'}) \in X \times Y$. By our
hypothesis $x =_X x{'} \ \vee \ x \neq_X x{'}$ and
$y =_Y y{'} \ \vee \ y \neq_Y y{'}$. If $x =_X x{'}$ and $y =_Y y{'}$, then $(x, y) =_{X \times Y} (x{'}, y{'})$.
In any other case we get $(x, y) \neq_{X \times Y} (x{'}, y{'})$.
Uniqueness of an element of a set $X$ with respect to some property $P(x)$ on $X$ means that all elements
of $X$ having this property are $=_X$-equal. We use the following abbreviation:
\[\exists_{!x \in X}P(x) :\TOT \exists_{x \in X}\big(P(x) \ \& \ \forall_{z \in X}\big(P(z) \To z =_X x\big)\big). \]
Let $(X, =_X)$ be a set.
$X$ is inhabited, if
$\exists_{x \in X}\big(x =_X x\big)$.
$X$ is a singleton, or contractible, or
a $(-2)$-set, if
$\exists_{x_0 \in X}\forall_{x \in X}\big(x_0 =_\mathsmaller{X} x\big)$. In this case,
$x_0$ is called a centre of contraction for $X$.
$X$ is a subsingleton, or a mere proposition,
or a $(-1)$-set, if $\forall_{x, y \in X}\big(x =_\mathsmaller{X} y\big)$.
The truncation of $(X, =_X)$ is the
set
$(X, \mathsmaller{\mathsmaller{\lvert \lvert =_X \rvert \rvert}})$, where
\[ x \ \mathsmaller{\mathsmaller{\lvert \lvert =_X \rvert \rvert}} \ y :\TOT x =_X x \ \& \ y =_X y. \]
We use the symbol $||X||$ to denote that the set $X$ is equipped with the truncated
equality $\mathsmaller{\mathsmaller{\lvert \lvert =_X \rvert \rvert}}$.
Clearly, $ x \ \mathsmaller{\mathsmaller{\lvert \lvert =_X \rvert \rvert}} \ y$, for every $x, y \in X$, and
$(X, \mathsmaller{\mathsmaller{\lvert \lvert =_X \rvert \rvert}})$ is a subsingleton.
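For instance, $\D 2$ is not a subsingleton, since $\neg(0 =_{\D 2} 1)$, while in its truncation $||\D 2||$ we have $0 \ \mathsmaller{\mathsmaller{\lvert \lvert =_{\D 2} \rvert \rvert}} \ 1$, since $0 =_{\D 2} 0$ and $1 =_{\D 2} 1$.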
§ NON-DEPENDENT ASSIGNMENT ROUTINES
Let $X, Y$ be totalities. A non-dependent assignment routine
$f$ from $X$ to
$Y$, in symbols $f \colon X \sto Y$, is a finite routine that assigns an element $y$ of $Y$
to each given element $x$ of $X$.
In this case we write $f(x) := y$. If $g \colon X \sto Y$, let
\[f := g : \TOT \forall_{x \in X}\big(f(x) := g(x)\big). \]
If $f := g$, we say that $f$ and $g$ are definitionally equaldefinitionally equal functions.
If $(X, =_X)$ and $(Y, =_Y)$ are sets, an operation from $X$ to $Y$ is a non-dependent assignment routine
from $X$ to $Y$, while a function from $X$ to $Y$, in symbols $f \colon X \to Y$,
is an operation from $X$ to $Y$ that respects equality, i.e.,
\[\forall_{x, x{'} \in X}\big(x =_X x{'} \To f(x) =_Y f(x{'})\big). \]
If $f \colon X \sto Y$ is a function from $X$ to $Y$, we say
that $f$ is a function, without mentioning the expression “from $X$ to $Y$”.
A function $\fXY$ is an embedding, in symbols
$f \colon X \hookrightarrow Y$, if
\[\forall_{x, x{'} \in X}\big( f(x) =_Y f(x{'}) \To x =_X x{'}). \]
Let $(X, =_X, \neq_X)$ and $(Y, =_Y, \neq_Y)$ be sets. A function $f \colon X \to Y$ is strongly extensional,
if
\[ \forall_{x, x{'} \in X}\big(f(x) \neq_Y f(x{'}) \To x \neq_X x{'}\big). \]
If $\simeq_X$ is another equality on $X$, we use a new symbol e.g., $X^*$, for the same totality $X$.
When we write $f \colon X^* \to Y$, then $f$ is a function from $X$, equipped with the equality
$\simeq_X$, to $Y$.
If $X$ is a set, the identity map $\id_X$ on $X$ is the operation
$\id_X \colon X \sto X$, defined by $\id_X (x) := x$, for every $x \in X$. Clearly, $\id_X$ is an embedding,
which is strongly extensional, if $\neq_X$ is a given inequality on $X$.
If $Y$ is also a set, the projection maps $\pr_X$ and
$\pr_Y$ on $X$ and $Y$, respectively, are the operations
$\pr_X \colon X \times Y \sto X$ and $\pr_Y \colon X \times Y \sto Y$, where
\[ \pr_X (x, y) := \prb_1 (x, y) := x \ \ \& \ \ \pr_Y (x, y) := \prb_2 (x, y) := y; \ \ \ \ (x, y) \in X \times Y. \]
Clearly, the operations $\pr_X$ and $\pr_Y$ are functions, which are strongly extensional, if $\neq_X, \neq_Y$ are
inequalities on $X, Y$, and $\neq_{X \times Y}$ is the canonical inequality on $X \times Y$ induced from them.
After introducing the universe $\D V_0$ of sets in section <ref>, we shall define non-dependent
assignment routines from a set to a totality, like $\D V_0$, which is not considered to be a set.
In most cases the non-dependent assignment routines defined here have a set as a domain. There are cases though,
see e.g., Definitions <ref>, <ref>, <ref>,
and <ref>, where
a non-dependent assignment routine is defined on a totality, before showing that this totality
is a set. We never define a non-dependent assignment routine from a class to a totality.
Consider the operation $m^* \colon \Real \sto \D Q$, defined by $m^*(a) := q_m$, where a real number $a$
is a regular sequence of rational numbers $(q_n)_n$ (see [19], p. 18), and $q_m$ is the $m$-th term of this sequence,
for some fixed $m$. The operation $m^*$ is an example of an operation that is not a function, since unequal
real numbers, with respect to the definition of $=_{\Real}$ in [19], p. 18, may have equal $m$-th terms in $\D Q$.
To define a function $\fXY$, first we define the operation $f \colon X \sto Y$, and afterwards
we prove that $f$ is a function (from $X$ to $Y$).
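The operation/function distinction can be made concrete in a proof assistant. The following Lean 4 sketch is an illustration under simplifying assumptions: setoids model sets $(X, =_X)$, a `Fn` packages an operation with a proof of extensionality, `Q` stands in for $\D Q$, and the names `Fn` and `mStar` are ours.

```lean
-- A function in the sense of the text: an operation together with a proof
-- that it respects the assigned equalities of its domain and codomain.
structure Fn (X Y : Type) [Setoid X] [Setoid Y] where
  op  : X → Y                                -- the underlying operation
  ext : ∀ x x', x ≈ x' → op x ≈ op x'        -- respects =_X and =_Y

-- With Q standing in for the rationals, the m-th-term map on sequences is
-- an operation; for the equality of regular sequences it fails `ext`,
-- since =_ℝ-equal sequences may differ at the fixed index m.
def mStar (Q : Type) (m : Nat) : (Nat → Q) → Q :=
  fun a => a m
```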
The composition $g \circ f$ of the operations $f \colon X \sto Y$
and $g \colon Y \sto Z$ is the operation $g \circ f \colon X \sto Z$, defined by $(g \circ f)(x) := g(f(x))$, for every
$x \in X$. Clearly, $g \circ f$ is a function, if $f$ and $g$ are functions. If $h \colon Z \sto W$, notice the
following definitional equalities
\[ f \circ \id_X := f, \ \ \ \ \id_Y \circ f := f, \ \ \ \ h \circ (g \circ f) := (h \circ g) \circ f. \]
A diagram always commutes with respect to the equalities of the related sets.
E.g., the commutativity of the following diagram is the equality $e (f(x)) =_W g(h(x))$, for every $x \in X$.
[Diagram: a square with top $f \colon X \to Y$, left $h \colon X \to Z$, bottom $g \colon Z \to W$, and right $e \colon Y \to W$.]
Let $X, Y$ be sets, and $\neq_Y$ an inequality on $Y$. The totality $\D O(X, Y)$$\D O(X, Y)$ of
operations from $X$ to $Y$
is equipped with the following canonical equality and inequality:
\[ f =_{\D O(X, Y)} g : \TOT \forall_{x \in X}\big(f(x) =_Y g(x)\big), \]
\[ f \neq_{\D O(X, Y)} g : \TOT \exists_{x \in X}\big(f(x) \neq_Y g(x)\big). \]
The totality $\D O(X, Y)$ is considered to be a set. The set $\D F(X, Y)$ of functions
from $X$ to $Y$
is defined by separation on $\D O(X, Y)$ through the extensional property $P(f) :\TOT
\forall_{x, x{'} \in X}\big(x =_X x{'} \To f(x) =_Y f(x{'})\big)$. The equality $=_{\D F(X, Y)}$ and
the inequality $\neq_{\D F(X, Y)}$ are inherited from $=_{\D O(X, Y)}$ and $\neq_{\D O(X, Y)}$, respectively.
Let $(X, =_X)$ and $(Y, =_Y, \neq_Y)$ be sets. If $\fXY$, let
$x_1 \neq_X^f x_2 : \TOT f(x_1) \neq_Y f(x_2)$, for every $x_1, x_2 \in X$.
(i) $x_1 \neq_X^f x_2$ is an inequality on $X$.
(ii) If $(Y, =_Y, \neq_Y)$ is discrete, then $(X, =_X, \neq_X^f)$ is discrete if and only if $f$ is an embedding.
(iii) If $\neq_Y$ is tight, then $\neq_X^f$ is tight if and only if $f$ is an embedding.
(i) Conditions $(\Ap_1)$-$(\Ap_3)$ for $\neq_X^f$ are reduced to conditions $(\Ap_1)$-$(\Ap_3)$ for $\neq_Y$.
(ii) If $(X, =_X, \neq_X^f)$ is discrete, let $f(x_1) =_Y f(x_2)$, for some $x_1, x_2 \in X$. Since the possibility
$x_1 \neq_X^f x_2 : \TOT f(x_1) \neq_Y f(x_2)$ is impossible, we conclude that $x_1 =_X x_2$. If $f$ is an embedding,
and since $f(x_1) =_Y f(x_2)$ or $f(x_1) \neq_Y f(x_2)$, either $x_1 =_X x_2$, or $x_1 \neq_X^f x_2$.
(iii) If $\neq_X^f$ is tight, and $f(x_1) =_Y f(x_2)$,
then $\neg(x_1 \neq_X^f x_2)$,
hence $x_1 =_X x_2$. If $f$ is an embedding and $\neg(x_1 \neq_X^f x_2) \TOT \neg\big(f(x_1) \neq_Y f(x_2)\big)$,
then $f(x_1) =_Y f(x_2)$, and $x_1 =_X x_2$.
A function $f \colon X \to Y$ is called surjective, if
$\forall_{y \in Y}\exists_{x \in X}\big(f(x) =_Y y\big)$. A function $g \colon Y \to X$ is called a modulus of
surjectivity for $f$, if the following diagram commutes
[Diagram: $Y \xrightarrow{g} X \xrightarrow{f} Y$, with $\id_Y$ from the first $Y$ to the second.]
If $g$ is a modulus of surjectivity for $f$, we also say that $f$ is a retraction
and $Y$ is a retract of $X$.
If $y \in Y$, the fiber $\fib^f(y)$ of $f$ at $y$ is the following
extensional subset of $X$
\[ \fib^f(y) := \{x \in X \mid f(x) =_Y y \}. \]
A function $\fXY$ is contractible, if $\fib^f(y)$ is contractible, for every $y \in Y$.
If $\neq_Y$ is an inequality on $Y$, the cofiber $\cofib^f(y)$ of $f$ at $y$
is the following extensional subset of $X$
\[ \cofib^f(y) := \{x \in X \mid f(x) \neq_Y y \}. \]
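Fibers and cofibers are extensional subsets, so they can be sketched as subtypes; in the following Lean 4 illustration propositional equality stands in for $=_Y$, and the inequality $\neq_Y$ is passed as an explicit relation `neq` (an assumption of the sketch).

```lean
-- The fiber of f at y: the elements of X mapped to y.
def Fib {X Y : Type} (f : X → Y) (y : Y) : Type := {x : X // f x = y}

-- The cofiber of f at y, relative to an inequality neq on Y.
def Cofib {X Y : Type} (neq : Y → Y → Prop) (f : X → Y) (y : Y) : Type :=
  {x : X // neq (f x) y}
```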
§ THE UNIVERSE OF SETS
The totality of all sets is the universe $\D V_0$ of sets,
equipped with the canonical equality
\[ X =_{\D V_0} Y :\TOT \exists_{f \in \D F(X,Y)}\exists_{g \in \D F(Y,X)}\big(g \circ f = \id_X \ \& \
f \circ g = \id_Y\big) \]
[Diagram: $X \xrightarrow{f} Y \xrightarrow{g} X \xrightarrow{f} Y$, with $\id_X$ from the first $X$ to the second and $\id_Y$ from the first $Y$ to the second.]
In this case we write $(f, g) : X =_{\D V_0} Y$. If $X, Y \in \D V_0$ such that $X =_{\D V_0} Y$, we
define the set
\[ \Eq(X, Y) := \big\{(f, g) \in \D F(X, Y) \times \D F(Y, X) \mid (f, g) : X =_{\D V_0} Y\big\} \]
of all objects that “witness”, or “realise”, or prove the equality $X =_{\D V_0} Y$. The equality of
$\Eq(X, Y)$ is the canonical one i.e., $(f, g) =_{\Eq(X, Y)} (f{'}, g{'}) :\TOT f =_{\D F(X, Y)} f{'} \ \&
\ g =_{\D F(Y, X)}g{'}$. Notice that, in general, not all elements of $\Eq(X, Y)$ are equal. As in [127],
Example 3.1.9, if $X := Y := \D 2 := \{0, 1\}$, then $(\id_{\D 2}, \id_{\D 2}) \in \Eq(\D 2, \D 2)$, and if
$\sw_{\D 2} : \D 2 \to \D 2$ maps $0$ to $1$ and $1$ to $0$, then $(\sw_{\D 2}, \sw_{\D 2}) \in \Eq(\D 2, \D 2)$, while
$\sw_{\D 2} \neq \id_{\D 2}$.
It is expected that the proof-terms in $\Eq(X, Y)$ are compatible with the properties of the equivalence relation
$X =_{\D V_0} Y$. This means that we can define a distinguished proof-term
$\refl(X) \in \Eq(X, X)$ that proves the reflexivity of $=_{\D V_0}$, an operation $^{-1}$, such that
if $(f, g) : X =_{\D V_0} Y$, then $(f, g)^{-1} : Y =_{\D V_0} X$, and an operation of
“composition” $\ast$ of proof-terms,
such that if $(f, g) : X =_{\D V_0} Y$ and $(h, k) : Y =_{\D V_0} Z$, then $(f, g) \ast (h, k) : X =_{\D V_0} Z$.
If $h \in \D F(Y, Z)$ and $k \in \D F(Z, Y)$, let
$$\refl(X) := \big(\id_X, \id_X\big) \ \ \& \ \ (f, g)^{-1} := (g, f) \ \ \& \ \
(f, g) \ast (h, k) := (h \circ f, g \circ k).$$
It is immediate to see that these operations satisfy the groupoid lawsgroupoid laws:
(i) $\refl (X) \ast (f, g) =_{\Eq(X, Y)} (f, g)$ and $(f, g) \ast \refl (Y) =_{\Eq(X, Y)} (f, g)$.
(ii) $(f, g) \ast (f, g)^{-1} =_{\Eq(X, X)} \refl (X)$ and $(f, g)^{-1} \ast (f, g) =_{\Eq(Y, Y)} \refl (Y)$.
(iii) $\big((f, g) \ast (h, k)\big) \ast (s, t) =_{\Eq(X, W)} (f, g) \ast \big((h, k) \ast (s, t)\big)$.
Moreover, the following compatibility conditioncompatibility condition is satisfied:
(iv) If $(f, g), (f{'}, g{'}) \in \Eq(X, Y)$ and $(h, k), (h{'}, k{'}) \in \Eq(Y, Z)$, then
if $(f, g) =_{\Eq(X, Y)} (f{'}, g{'})$ and $(h, k) =_{\Eq(Y, Z)} (h{'}, k{'})$, then
$(f, g) \ast (h, k) =_{\Eq(X, Z)} (f{'}, g{'}) \ast (h{'}, k{'})$.
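The proof-terms and their operations can be sketched in Lean 4, where a pair of maps plays the role of an element of $\Eq(X, Y)$ and the section/retraction equations are tracked on paper (an illustration; the names are ours):

```lean
-- A proof-term for X =_V₀ Y: a pair of maps; the equations g ∘ f = id and
-- f ∘ g = id are not carried in this sketch.
structure Eqv (X Y : Type) where
  f : X → Y
  g : Y → X

def reflEqv (X : Type) : Eqv X X := ⟨id, id⟩

def invEqv {X Y : Type} (e : Eqv X Y) : Eqv Y X := ⟨e.g, e.f⟩

-- Composition (f, g) ∗ (h, k) := (h ∘ f, g ∘ k), as in the text.
def compEqv {X Y Z : Type} (e : Eqv X Y) (d : Eqv Y Z) : Eqv X Z :=
  ⟨d.f ∘ e.f, e.g ∘ d.g⟩
```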
Let $X, Y$ be sets, $f \in \D F(X, Y)$ and $g \in \D F(Y,X)$. If $(f, g) \colon X =_{\D V_0} Y$,
then the set $\fib^f(y)$ is contractible, for every $y \in Y$.
If $y \in Y$, then $g(y) \in \fib^f(y)$, as $f(g(y)) =_Y \id_Y(y) := y$.
If $x \in X$,
$x \in \fib^f(y) :\TOT f(x) =_Y y$, and
$x =_X g(f(x)) =_X g(y)$ i.e., $g(y)$ is a centre of contraction for $\fib^f(y)$.
Let $X, Y$ be sets. The evaluation map
$\ev_{X,Y} \colon \D F(X, Y) \times X
\sto Y$ is defined by $\ev_{X,Y} (f, x) := f(x)$, for every $f \in \D F(X, Y)$ and $x \in X$.
Let $X, Y, Z$ be sets.
The evaluation map $\ev_{X,Y}$ is a function from $\D F(X, Y) \times X$ to $Y$.
For every function $h : Z \times X \to Y$, there is a unique function $\hat{h} : Z \to \D F(X, Y)$
such that for every $z \in Z$ and $x \in X$
$\ev_{X,Y}\big(\hat h (z), x\big) =_Y h(z, x).$
(i) By definition $(f, x) =_{\D F(X, Y) \times X} (f{'}, x{'})$ if and only if $f =_{\D F(X, Y)} f{'}$ and $x =_X x{'}$.
Hence $\ev_{X,Y}(f, x) := f(x) =_Y f{'}(x) =_Y f{'}(x{'}) := \ev_{X,Y}(f{'}, x{'})$.
(ii) For every $z \in Z$, we define the assignment routine $\hat h$ from $Z$ to $\D F(X, Y)$ by
$z \mapsto \hat h (z)$, where
$\hat h(z)$ is the assignment routine from $X$ to $Y$, defined by
$\hat h(z)(x) := h(z, x),$
for every $x \in X$. First we show that $\hat h (z)$ is a function from $X$ to $Y$; if $x =_X x{'}$,
then $(z, x) =_{Z \times X} (z, x{'})$, hence $\hat h(z)(x) := h(z, x) =_Y h(z, x{'}) := \hat h(z)(x{'}).$
Next we show that the assignment routine $\hat h$ is a function from $Z$ to $\D F(X, Y)$; if $z =_Z z{'}$,
then, if $x \in X$, and since then $(z, x) =_{Z \times X} (z{'}, x)$, we have that
$\hat h(z)(x) := h(z, x) =_Y h(z{'}, x) := \hat h(z{'})(x).$ Since $x \in X$ is arbitrary, we conclude
that $\hat h (z) =_{\D F(X, Y)} \hat h(z{'})$. Since
$\ev_{X,Y}\big(\hat h (z), x\big) := \hat h(z)(x) := h(z, x),$
we get the strong form of the required equality $\ev_{X,Y} \circ (\hat h \times \id_X) := h$.
If $g \colon Z \to \D F(X, Y)$ is a function satisfying the required equality,
and if $z \in Z$, then, for every $x \in X$ we have that
$g(z)(x) := \ev_{X,Y}\big(g(z), x\big) =_Y h(z, x) =_Y \ev_{X,Y}\big(\hat h (z), x\big) := \hat h(z)(x),$
hence $g(z) =_{\D F(X, Y)} \hat h(z)$.
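In type-theoretic terms the theorem is currying: $\ev_{X,Y}$ and the transpose $h \mapsto \hat h$ exhibit the universal property of the function space. A Lean 4 sketch with plain types, where extensionality is automatic and the required equality even holds definitionally:

```lean
-- The evaluation map ev(f, x) := f(x).
def ev {X Y : Type} : (X → Y) × X → Y :=
  fun (f, x) => f x

-- The transpose ĥ of h : Z × X → Y.
def transpose {X Y Z : Type} (h : Z × X → Y) : Z → X → Y :=
  fun z x => h (z, x)

-- ev(ĥ z, x) = h(z, x), definitionally.
example {X Y Z : Type} (h : Z × X → Y) (z : Z) (x : X) :
    ev (transpose h z, x) = h (z, x) := rfl
```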
§ DEPENDENT OPERATIONS
Let $I$ be a set and $\lambda_0 \colon I \sto \D V_0$ a non-dependent assignment
routine from $I$ to $\D V_0$.
A dependent operation $\Phi$ over $\lambda_0$, in symbols
\[ \Phi \colon \bigcurlywedge_{i \in I} \lambda_0 (i), \]
is an assignment routine that assigns to each element $i$ in $I$ an element $\Phi(i)$ in the set
$\lambda_0(i)$. If $i \in I$, we call $\Phi(i)$ the $i$-component
of $\Phi$, and we also use the notation $\Phi_i := \Phi(i)$.
An assignment routine is either a non-dependent assignment routine, or a dependent operation
over some non-dependent assignment routine from a set to the universe.
If $\Phi, \Psi \colon \bigcurlywedge_{i \in I} \lambda_0 (i)$,
let $\Phi := \Psi :\TOT \forall_{i \in I}\big(\Phi_i := \Psi_i\big).$
If $\Phi := \Psi$, we say that $\Phi$ and $\Psi$ are definitionally
equal dependent operations.
Let $\lambda_0, \mu_0, \nu_0, \kappa_0 \colon I \sto \D V_0$ be non-dependent assignment routines. Let
$\D F(\lambda_0, \mu_0) \colon
I \sto \D V_0$ be defined by $\D F (\lambda_0, \mu_0)(i) := \D F(\lambda_0(i), \mu_0(i))$, for every $i \in I$. The
identity operation $\Id_{\lambda_0}$ over $\lambda_0$ is the
dependent operation
\[ \Id_{\lambda_0} \colon \bigcurlywedge_{i \in I}\D F(\lambda_0(i), \lambda_0(i)) \ \ \ \ \Id_{\lambda_0}(i) :=
\id_{\lambda_0(i)}; \ \ \ \ i \in I.\]
Let $\Psi \colon \bigcurlywedge_{i \in I}\D F(\mu_0(i), \nu_0(i))$ and
$\Phi \colon \bigcurlywedge_{i \in I}\D F(\lambda_0(i), \mu_0(i))$. Their
composition $\Psi \circ \Phi$
is defined by
\[ \Psi \circ \Phi \colon \bigcurlywedge_{i \in I}\D F(\lambda_0(i), \nu_0(i)) \ \ \ \
(\Psi \circ \Phi)_i := \Psi_i \circ \Phi_i; \ \ \ \ i \in I. \]
If $\Xi \colon \bigcurlywedge_{i \in I}\D F(\nu_0(i), \kappa_0(i))$, notice the
following definitional equalities
\[ \Phi \circ \Id_{\lambda_0} := \Phi, \ \ \ \ \Id_{\mu_0} \circ \Phi := \Phi, \ \ \ \ \Xi \circ (\Psi \circ \Phi)
:= (\Xi \circ \Psi) \circ \Phi. \]
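Dependent operations over $\lambda_0$ correspond to dependent functions $(i : I) \to \lambda_0\,i$, and the definitional equalities above then hold by reflexivity. A Lean 4 sketch, with `Type` in the role of $\D V_0$:

```lean
-- The identity operation over lam, componentwise.
def Id' {I : Type} (lam : I → Type) : (i : I) → lam i → lam i :=
  fun _ x => x

-- Composition of dependent operations, componentwise.
def comp' {I : Type} {lam mu nu : I → Type}
    (Psi : (i : I) → mu i → nu i) (Phi : (i : I) → lam i → mu i) :
    (i : I) → lam i → nu i :=
  fun i x => Psi i (Phi i x)

-- Φ ∘ Id := Φ holds definitionally.
example {I : Type} {lam mu : I → Type} (Phi : (i : I) → lam i → mu i) :
    comp' Phi (Id' lam) = Phi := rfl
```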
If $I$ is a set and $\lambda_0 : I \sto \D V_0$, let
$\D A(I, \lambda_0)$ be the totality of dependent operations over $\lambda_0$,
equipped with the
equipped with the
canonical equality:
\[ \Phi =_{\D A(I, \lambda_0)} \Psi :\TOT \forall_{i \in I}\big(\Phi_i =_{\lambda_0(i)} \Psi_i\big). \]
The totality $\D A(I, \lambda_0)$ is considered to be a set. If $\neq_{\lambda_0(i)}$ is an inequality on $\lambda_0(i)$,
for every $i \in I$, the canonical inequality $\neq_{\D A(I, \lambda_0)}$ on $\D A(I, \lambda_0)$ is defined by
$\Phi \neq_{\D A(I, \lambda_0)} \Psi :\TOT \exists_{i \in I}\big(\Phi_i \neq_{\lambda_0(i)} \Psi_i\big)$.
Clearly, $\Phi =_{\D A(I, \lambda_0)} \Psi$ is an equivalence relation, and
$\Phi \neq_{\D A(I, \lambda_0)} \Psi$ is an inequality relation. If $i \in I$, the
$i$-projection map on $\D A(I, \lambda_0)$ is the operation
$\pr_i^{\lambda_0} \colon \D A(I, \lambda_0) \sto \lambda_0(i)$, defined by $\pr_i^{\lambda_0}(\Phi) := \Phi_i$,
for every $i \in I$.
The operation $\pr_i^{\lambda_0}$ is a function. If $\Phi \colon \bigcurlywedge_{i \in I}\D F(\lambda_0(i),
\mu_0(i))$, a modulus of
surjectivity for $\Phi$ is a
dependent operation $\Psi \colon \bigcurlywedge_{i \in I}\D F(\mu_0(i), \lambda_0(i))$ such that
$\Phi \circ \Psi =_{\D A(I, \D F(\mu_0, \mu_0))} \Id_{\mu_0}$. In this case, $\Psi_i$ is a modulus of
surjectivity for $\Phi_i$, for every $i \in I$.
If $\fXY$, let $\fib^f \colon Y \sto \D V_0$ be defined by $y \mapsto \fib^f(y)$, for every $y \in Y$. If $f$ is contractible,
then by Definition <ref> every fiber $\fib^f(y)$ of $f$ is contractible. A modulus of centres of
contraction for a contractible function $f$ is a dependent operation
$\centr^f \colon \bigcurlywedge_{y \in Y}\fib^f(y)$,
such that $\centr^f_y := \centr^f(y)$ is a centre of contraction for $\fib^f(y)$, for every $y \in Y$.
§ SUBSETS
Let $X$ be a set. A subset of $X$ is a pair $(A, i_A^X)$, where $A$ is a set and
$i_A^X \colon A \hookrightarrow X$ is an embedding of $A$ into $X$.
If $(A, i_A^X)$ and $(B, i_B^X)$ are subsets of $X$, then $A$ is a subset of $B$, in symbols
$(A, i_A^X) \subseteq (B, i_B^X)$, or simpler
$A \subseteq B$,
if there is $f \colon A \to B$ such that the following diagram commutes
[Diagram: the triangle $f \colon A \to B$ over $X$, with embeddings $i_A^X$ and $i_B^X$.]
In this case we use the notation $f \colon A \subseteq B$. Usually we write $A$ instead of $(A, i_A^X)$.
The totality of the subsets of $X$ is the powerset $\C P(X)$ of $X$,
and it is equipped with the equality
\[ (A, i_A^X) =_{\C P(X)} (B, i_B^X) :\TOT A \subseteq B \ \& \ B \subseteq A. \]
If $f \colon A \subseteq B$ and $g \colon B \subseteq A$, we write $(f, g) \colon A =_{\C P(X)} B$.
Since the membership condition for $\C P(X)$ requires quantification over $\D V_0$, the totality
$\C P(X)$ is a class. Clearly, $(X, \id_X) \subseteq X$. If $X_P$ is an extensional subset of $X$ (see Definition <ref>), then $(X_P, i_P^X) \subseteq X$, where $i_P^X \colon X_P \sto X$ is defined by $i_P^X(x) := x$, for every
$x \in X_P$.
If $A, B \subseteq X$, and $f, h \colon A \subseteq B$, then $f$ is an embedding,
and $f =_{\D F(A, B)} h$
[Diagram: $f, h \colon A \to B$ over $X$, with embeddings $i_A^X$ and $i_B^X$.]
If $a, a{'} \in A$ such that $f(a) =_B f(a{'})$, then
$i_B^X(f(a)) =_X i_B^X(f(a{'})) \TOT i_A^X(a) =_X i_A^X (a{'})$, which implies $a =_A a{'}$. Moreover, if
$i_B^X(f(a)) =_X i_A^X(a) =_X i_B^X(h(a))$, then $f(a) =_B h(a)$.
The “internal” equality of subsets implies their “external” equality as sets i.e.,
$(f, g) : A =_{\C P(X)} B \To (f, g) : A =_{\D V_0} B$. If
$a \in A$, then $i_A^X(g(f(a))) =_X i_B^X(f(a)) =_X i_A^X (a)$, hence $g(f(a)) =_A a$, and then
$g \circ f =_{\D F(A, A)} \id_A$. Similarly we get $f \circ g =_{\D F(B, B)} \id_B$.
Consider the set
\[ \Eq(A, B) := \big\{(f, g) \in \D F(A, B) \times \D F(B, A) \mid f \colon A \subseteq B \ \& \ g
\colon B \subseteq A\big\}, \]
equipped with the canonical equality of pairs as in the case of $\Eq(X, Y)$.
By Proposition <ref>,
the set $\Eq(A, B)$ is a subsingleton, i.e.,
\[ (f, g) \colon A =_{\C P(X)} B \ \& \ (f{'}, g{'}) \colon A =_{\C P(X)} B \To (f, g) = (f{'}, g{'}). \]
If $f \in \D F(A, B), g \in \D F(B, A), h \in \D F(B, C)$, and $k \in \D F(C, B)$, let
$\refl(A) := \big(\id_A, \id_A\big)$ and $(f, g)^{-1} := (g, f)$, and $(f, g) \ast (h, k) := (h \circ f, g \circ k)$,
and the properties (i)-(iv) for $\Eq(A, B)$ hold by the equality of all their elements.
Let $(X, =_X, \neq_X)$ be a set and $\big(A, =_A, i_A^X, \neq_{A}^{\mathsmaller{i_A^X}}\big) \subseteq X$,
where the canonical inequality $\neq_{A}^{\mathsmaller{i_A^X}}$ on $A$ is
given by
$a \neq_{A}^{\mathsmaller{i_A^X}} a{'} :\TOT i_A^X(a) \neq_X i_A^X(a{'})$, for every $a, a{'} \in A$.
If $(X, =_X, \neq_X)$ is discrete, then $\big(A, =_A, i_A^X, \neq_{A}^{\mathsmaller{i_A^X}}\big)$ is discrete,
and if $\neq_X$ is tight, $\neq_{A}^{\mathsmaller{i_A^X}}$ is tight.
Since $i_A^X$ is an embedding, this follows immediately from Remark <ref>.
If $P, Q$ are extensional properties on the set $X$, then
\[ X_P =_{\C P (X)} X_Q \TOT \forall_{x \in X}\big(P(x) \TOT Q(x)\big). \]
The implication $(\oT)$ is immediate to show, since the corresponding identity maps witness
the equality $X_P =_{\C P (X)} X_Q$.
For the converse implication, let $(f, g) : X_P =_{\C P (X)} X_Q$. Let $x \in X$ such that $P(x)$. By
the commutativity of the following outer diagram
[Diagram: $f \colon X_P \to X_Q$ and $g \colon X_Q \to X_P$ over $X$, with embeddings $i_P^X$ and $i_Q^X$.]
we get $ f(x) := i_Q^X(f(x)) =_X i_P^X(x) := x$, and by the extensionality of
$Q$ and the fact that $Q(f(x))$ holds we get $Q(x)$. By the commutativity of the above inner diagram and the
extensionality of $P$ we get similarly the inverse implication.
If $(A, i_A^X), (B,i_B^X) \subseteq X$, their union
$A \cup B$
is the totality defined by
\[ z \in A \cup B : \TOT z \in A \ \vee \ z \in B, \]
equipped with the non-dependent assignment routine[Here we define a non-dependent assignment routine
on the totality $A \cup B$, without knowing beforehand that $A \cup B$ is a set. It turns out that $A \cup B$
is a set, but for that we need to define $i_{A \cup B}^X$ first.]
$i_{A \cup B}^X : A \cup B \sto X$, defined by
\[ i_{A \cup B}^X(z) := \left\{ \begin{array}{lll}
i_A^X (z) &\mbox{, $z \in A$}\\
{} \\
i_B^X (z) &\mbox{, $z \in B$.}
\end{array}
\right.\]
If $z, w \in A \cup B$, we define $z =_{A \cup B} w :\Leftrightarrow i_{A \cup B}^X(z) =_X i_{A \cup B}^X(w).$
Clearly, $=_{A \cup B}$ is an equality on $A \cup B$, which is considered to be a set,
$i_{A \cup B}^X$ is an embedding of $A \cup B$ into $X$, and the pair
$\big(A \cup B, i_{A \cup B}^X\big)$ is a subset of $X$. Note that if $P, Q$ are extensional properties on $X$, then
$X_P \cup X_Q := X_{P \vee Q},$ since
$z \in X_{P \vee Q} : \TOT (P \vee Q)(z) : \TOT P(z) \ \mbox{or} \ Q(z) :\TOT z \in X_P \cup X_Q,$
and the inclusion map $i : X_P \cup X_Q \eto X$ is the identity, as it is for $X_{P \vee Q}$
(see Definition <ref>). If $\neq_X$ is a given inequality on $X$, the canonical inequality
on $A \cup B$ is determined in Corollary <ref>.
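The union can be sketched with the sum type in the role of the carrier of $A \cup B$: the inclusion into $X$ is defined by cases, exactly as $i_{A \cup B}^X$ above, and the equality on the union is pulled back along it. A Lean 4 illustration with plain types:

```lean
-- The inclusion of A ∪ B into X, defined by cases on the carrier A ⊕ B.
def iUnion {X A B : Type} (iA : A → X) (iB : B → X) : A ⊕ B → X
  | Sum.inl a => iA a
  | Sum.inr b => iB b

-- The equality z =_{A ∪ B} w :⟺ i(z) =_X i(w), pulled back along iUnion.
def unionEq {X A B : Type} (iA : A → X) (iB : B → X) (z w : A ⊕ B) : Prop :=
  iUnion iA iB z = iUnion iA iB w
```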
If $(A, i_A^X), (B,i_B^X) \subseteq X$, their intersection
$A \cap B$ is the totality defined by separation on $A \times B$ as follows:
\[ A \cap B := \{(a, b) \in A \times B \mid i_A^X (a) =_X i_B^X (b)\}. \]
Consider the non-dependent assignment routine $i_{A \cap B}^X : A \cap B \sto X$, defined by
$i_{A \cap B}^X(a, b) := i_A^X (a)$, for every $(a, b) \in A \cap B$.
If $(a, b)$ and $(a{'}, b{'})$ are in $A\cap B$, let
\[ (a, b) =_{A \cap B} (a{'}, b{'}) :\TOT i_{A \cap B}^X(a, b) =_X i_{A \cap B}^X(a{'}, b{'}) :\TOT
i_A^X (a) =_X i_A^X (a{'}). \]
We write $A \between B$ to denote that the intersection $A \cap B$ is inhabited.
Clearly, $=_{A \cap B}$ is an equality on $A \cap B$, which is considered to be a set,
$i_{A \cap B}^X$ is an embedding of $A \cap B$ into $X$, and
$\big(A \cap B, i_{A \cap B}^X\big)$ is a subset of $X$.
If $\neq_X$ is a given inequality on $X$, the canonical inequality on $A \cap B$ is determined in Corollary <ref>.
If $P, Q$ are extensional properties on $X$, then $X_P \cap X_Q$ has elements in
$X \times X$, while $X_{P \wedge Q}$ has elements in $X$, hence the two subsets are not definitionally equal.
Next we show that they are “externally” equal i.e., equal in $\D V_0$.
If $P, Q$ are extensional properties on the set $X$, then $X_{P \wedge Q} =_{\D V_0} X_P \cap X_Q$.
Since the inclusion maps corresponding to $X_P$ and $X_Q$ are the identities, let $f : X_{P \wedge Q}
\to X_P \cap X_Q$ with $f(z) := (z, z)$, for every $z \in X_{P \wedge Q}$, and let $g : X_P \cap X_Q
\to X_{P \wedge Q}$ with $g(a, b) := a$, for every $(a, b) \in X_P \cap X_Q$. Hence, $f(g(a, b))
:= f(a) := (a, a)$, and since $(a, b) \in X_P \cap X_Q$, we have by definition that $P(a), Q(b)$
and $a =_X b$, hence $(a, a) =_{X \times X} (a, b)$. If $z \in X_{P \wedge Q}$, then $g(f(z)) := g(z, z) := z$.
Clearly, $X \cap X =_{\C P(X)} X$, while $\pr_A \colon (A \cap B, i_{A \cap B}^X) \subseteq (A, i_A^X)$, and the identity map
$e_A \colon A \to A \cup B$ witnesses the inclusion $(A, i_A^X) \subseteq (A \cup B, i_{A \cup B}^X)$
[Diagram: $\pr_A \colon A \cap B \hookrightarrow A$ and $e_A \colon A \hookrightarrow A \cup B$, all over $X$.]
The following properties of the union and intersection of subsets are easy to show.
Let $A, B$ and $C$ be subsets of the set $X$.
$A \cup B =_{\C P(X)} B \cup A$ and $A \cap B =_{\C P(X)} B \cap A$.
$A \cup (B \cup C) =_{\C P(X)} (A \cup B) \cup C$ and $A \cap (B \cap C) =_{\C P(X)} (A \cap B) \cap C$.
$A \cap (B \cup C) =_{\C P(X)} (A \cap B) \cup (A \cap C)$ and $A \cup (B \cap C) =_{\C P(X)}
(A \cup B) \cap (A \cup C)$.
Let $X, Y$ be sets, $(A, i_A^X), (C, i_C^X) \subseteq X$, $e \colon (A, i_A^X) \subseteq (C, i_C^X)$, $f \colon C \to Y$,
and $(B, i_B^Y) \subseteq Y$. The restriction $f_{|_{A}}$ of $f$
to $A$ is the function
$f_{|_{A}} := f \circ e$
[Diagram: $e \colon A \hookrightarrow C$, $f \colon C \to Y$, with $f_{|_{A}} \colon A \to Y$ along the bottom.]
The image $f(A)$ of $A$ under $f$ is the pair $f(A) := (A, f_{|_{A}}),$
where $A$ is equipped with the equality
$a =_{f(A)} a{'} : \TOT f_{|_{A}}(a) =_Y f_{|_{A}}(a{'}),$
for every $a, a{'} \in A$. We also denote
$f(A)$ by $\{f(a) \mid a \in A\}$.
The pre-image $f^{-1}(B)$ of $B$ under $f$ is the set
\[ f^{-1}(B) := \{(c, b) \in C \times B \mid f(c) =_Y i_B^Y (b)\}. \]
Let $i_{\mathsmaller{f^{-1}(B)}}^C \colon f^{-1}(B) \eto C$ be defined by
$i_{\mathsmaller{f^{-1}(B)}}^C(c, b) := c$, for every $(c,b) \in f^{-1}(B)$.
The equality of the extensional subset $f^{-1}(B)$ of $C \times B$ is
inherited from the equality of $C \times B$.
Clearly, the restriction $f_{|_{A}}$ of $\fXY$ to $(A, i_A^X) \subseteq X$ is the function
$f_{|_{A}} := f \circ i_A^X$. It is immediate to show that $f(A) \subseteq Y$ and $f^{-1}(B) \subseteq C$.
Notice that
\[ (A, i_A^X) =_{\C P(X)} (B, i_B^X) \To i_A^X (A) =_{\C P(X)} i_B^X (B), \]
since, if $(f, g) : (A, i_A^X) =_{\C P(X)} (B, i_B^X)$, then $i_A^X (a) =_X i_B^X (f(a))$ and $i_B^X (b)
=_X i_A^X (g(b))$, for every $a \in A$ and $b \in B$, respectively.
If $\neq_Y$ is a given inequality on $Y$, the canonical inequality on $f(A)$ is determined in
Corollary <ref>. Similarly, if $\neq_X$ is an inequality on $X$, $f \colon X \to Y$,
and $(B, i_B^Y) \subseteq Y$, the canonical inequality on $f^{-1}(B)$ is given by
$(x, b) \neq_{f^{-1}(B)} (x{'}, b{'}) :\TOT x \neq_X x{'}$, and not by the canonical inequality on $X \times B$.
Let $X, Y$ be sets, $A, B$ subsets of $X$, $C, D$ subsets of $Y$, and $f : X \to Y$.
$f^{-1}(C \cup D) =_{\C P(X)} f^{-1}(C) \cup f^{-1}(D)$.
$f^{-1}(C \cap D) =_{\C P(X)} f^{-1}(C) \cap f^{-1}(D)$.
$f(A \cup B) =_{\C P(Y)} f(A) \cup f(B)$.
$f(A \cap B) =_{\C P(Y)} f(A) \cap f(B)$.
$A \subseteq f^{-1}(f(A))$.
$f(f^{-1}(C) \cap A) =_{\C P(Y)} C \cap f(A)$, and $f(f^{-1}(C)) =_{\C P(Y)} C \cap f(X)$.
Let $(A, i_A^X), (B, i_B^X), (A{'}, i_{A{'}}^X), (B{'}, i_{B{'}}^X) \subseteq X$, such that
$A =_{\C P(X)} A{'}$ and $B =_{\C P(X)} B{'}$. Let also $(C, i_C^Y), (C{'}, i_{C{'}}^Y), (D, i_D^Y) \subseteq Y$, such that
$C =_{\C P(Y)} C{'}$, and let $\fXY$.
$A \cap B =_{\C P(X)} A{'} \cap B{'}$, and $A \cup B =_{\C P(X)} A{'} \cup B{'}$.
$f(A) =_{\C P(Y)} f(A{'})$, and $f^{-1}(C) =_{\C P(X)} f^{-1}(C{'})$.
$(A \times C, i_A^X \times i_C^Y) \subseteq X \times Y$,
where the map $i_A^X \times i_C^Y \colon A \times C \hookrightarrow X \times Y$ is defined by
\[ (i_A^X \times i_C^Y)(a, c) := \big(i_A^X(a), i_C^Y(c)\big); \ \ \ \ (a, c) \in A \times C. \]
$A \times C =_{\C P(X \times Y)} A{'} \times C{'}$.
$A \times (C \cup D) =_{\C P(X \times Y)} (A \times C) \cup (A \times D)$.
$A \times (C \cap D) =_{\C P(X \times Y)} (A \times C) \cap (A \times D)$.
All cases are straightforward to show.
§ PARTIAL FUNCTIONS
Let $X, Y$ be sets. A partial function from $X$ to $Y$ is a triplet
$(A, i_A^X, f_A^Y)$, where $(A, i_A^X) \subseteq X$, and $f_A^Y \in \D F(A, Y)$.
Often, we use only the symbol $f_A^Y$ instead of the triplet $(A, i_A^X, f_A^Y)$, and we also write
$f_A^Y \colon X \pto Y$. If $(A, i_A^X, f_A^Y)$ and $(B, i_B^X, f_B^Y)$ are partial functions from $X$ to $Y$,
we call $f_A^Y$ a subfunction of $f_B^Y$, in symbols
$(A, i_A^X, f_A^Y) \leq (B, i_B^X, f_B^Y)$, or simpler
$f_A^Y \leq f_B^Y$,
if there is $e_{AB} \colon A \to B$ such that the following inner diagrams commute
[Diagram: $e_{AB} \colon A \to B$ over $X$, with $f_A^Y$ and $f_B^Y$ into $Y$.]
In this case we use the notation $e_{AB} \colon f_A^Y \leq f_B^Y$.
The totality of partial functions from $X$ to $Y$ is the partial function space
$\C F(X,Y)$,
and it is equipped with the equality
\[ (A, i_A^X, f_A^Y) =_{\C F(X,Y)} (B, i_B^X, f_B^Y) :\TOT f_A^Y \leq f_B^Y \ \& \ f_B^Y \leq f_A^Y. \]
If $e_{AB} \colon f_A^Y \leq f_B^Y$ and $e_{BA} \colon f_B^Y \leq f_A^Y$, we write $(e_{AB}, e_{BA}) : f_A^Y
=_{\C F(X,Y)} f_B^Y$.
Since the membership condition for $\C F(X,Y)$ requires quantification over $\D V_0$, the totality
$\C F(X,Y)$ is a class. Clearly, if $\fXY$, then $(X, \id_X, f) \in \C F(X,Y)$.
If $(e_{AB}, e_{BA}) : f_A^Y =_{\C F(X,Y)} f_B^Y$, then $(e_{AB}, e_{BA}) : A =_{\C P(X)} B$, and
$(e_{AB}, e_{BA}) : A =_{\D V_0} B$.
Consider the set
\[ \Eq(f_A^Y, f_B^Y) := \big\{(f, g) \in \D F(A, B) \times \D F(B, A) \mid f \colon f_A^Y \leq f_B^Y
\ \& \ g \colon f_B^Y \leq f_A^Y\big\}, \]
equipped with the canonical equality of the product.
All the elements of $\Eq(f_A^Y, f_B^Y)$ are equal to each other.
If $f \in \D F(A, B), g \in \D F(B, A), h \in \D F(B, C)$, and $k \in \D F(C, B)$, let
\[ \refl(f_A^Y) := \big(\id_A, \id_A\big) \ \ \& \ \ (f, g)^{-1} := (g, f) \ \ \& \ \
(f, g) \ast (h, k) := (h \circ f, g \circ k), \]
and the groupoid-properties for $\Eq(f_A^Y, f_B^Y)$ hold by the equality of its elements.
Let $(A, i_A^X, f_A^Y) \in \C F(X, Y)$ and $(B, i_B^Y, g_B^Z) \in \C F(Y, Z)$. Their composition
$g_B^Z \mathsmaller{\pcirc} f_A^Y$
\[ g_B^Z \mathsmaller{\pcirc} f_A^Y := \bigg(\big(f_A^Y\big)^{-1}(B), \ i_A^X \circ e_{\mathsmaller{(f_A^Y)^{-1}(B)}}^A,
\ \big(g_B^Z \circ f_A^Y\big)^Z_{\mathsmaller{(f_A^Y)^{-1}(B)}}\bigg), \ \ \ \mbox{where}
\]
\[ \big(f_A^Y\big)^{-1}(B) := \big\{(a, b) \in A \times B \mid f_A^Y(a) =_Y i_B^Y(b)\big\}, \]
\[ e_{\mathsmaller{(f_A^Y)^{-1}(B)}}^A \colon \big(f_A^Y\big)^{-1}(B) \eto A, \ \ \ \ (a, b) \mapsto a; \ \ \ \ (a,b)
\in \big(f_A^Y\big)^{-1}(B), \]
\[ \big(g_B^Z \circ f_A^Y\big)^Z_{\mathsmaller{(f_A^Y)^{-1}(B)}}(a, b) := g_B^Z(b); \ \ \ \ (a,b)
\in \big(f_A^Y\big)^{-1}(B), \]
is a partial function that belongs to $\C F(X, Z)$. If $(A, i_A^X, i_A^X) \in \C F(X, X), (B, i_B^Y, i_B^Y) \in \C F(Y, Y)$, and
$(C, i_C^Z, h_C^W) \in \C F(Z, W)$, the following properties hold:
$f_A^Y \pcirc i_A^X =_{\C F(X,Y)} f_A^Y$ and $i_B^Y \pcirc f_A^Y =_{\C F(X,Y)} f_A^Y$.
$\big(h_C^W \pcirc g_B^Z\big) \pcirc f_A^Y =_{\C F(X,W)} h_C^W \pcirc \big(g_B^Z \pcirc f_A^Y\big)$.
(i) We show only the first equality and for the second we work similarly. By definition
\[ f_A^Y \mathsmaller{\pcirc} i_A^X := \bigg(\big(i_A^X\big)^{-1}(A), \ i_A^X \circ e_{\mathsmaller{(i_A^X)^{-1}(A)}}^A,
\ \big(f_A^Y \circ i_A^X\big)^Y_{\mathsmaller{(i_A^X)^{-1}(A)}}\bigg), \ \ \ \mbox{where}
\]
\[ \big(i_A^X\big)^{-1}(A) := \big\{(a, a{'}) \in A \times A \mid i_A^X(a) =_X i_A^X(a{'})\big\}, \]
\[ e_{\mathsmaller{(i_A^X)^{-1}(A)}}^A \colon \big(i_A^X\big)^{-1}(A) \eto A, \ \ \ \ (a, a{'}) \mapsto a; \ \ \ \ (a,a{'})
\in \big(i_A^X\big)^{-1}(A), \]
\[ \big(f_A^Y \circ i_A^X\big)(a, a{'}) := f_A^Y(a{'}); \ \ \ \ (a,a{'}) \in \big(i_A^X\big)^{-1}(A). \]
Consider the operations $\phi \colon A \sto \big(i_A^X\big)^{-1}(A)$, defined by $\phi(a) := (a, a)$, for every $a \in A$,
and $\theta \colon \big(i_A^X\big)^{-1}(A) \sto A$, defined by $\theta(a, a{'}) := a$, for every $(a, a{'}) \in
\big(i_A^X\big)^{-1}(A)$. It is immediate to show that $\phi$ and $\theta$ are well-defined functions. It is
straightforward to show the commutativity of the following inner diagrams
[Diagram: $\phi$ and $\theta$ between $A$ and $\big(i_A^X\big)^{-1}(A)$ over $X$, with $f_A^Y$ and $f_A^Y \circ i_A^X$ into $Y$.]
(ii) We have that
$ h_C^W \mathsmaller{\pcirc} g_B^Z := \bigg(\big(g_B^Z\big)^{-1}(C), \ i_B^Y \circ e_{\mathsmaller{(g_B^Z)^{-1}(C)}}^B,
\ \big(h_C^W \circ g_B^Z\big)^W_{\mathsmaller{(g_B^Z)^{-1}(C)}}\bigg)$, where
\[ \big(g_B^Z\big)^{-1}(C) := \big\{(b, c) \in B \times C \mid g_B^Z(b) =_Z i_C^Z(c)\big\}, \]
\[ e_{\mathsmaller{(g_B^Z)^{-1}(C)}}^B \colon \big(g_B^Z\big)^{-1}(C) \eto B, \ \ \ \ (b, c) \mapsto b; \ \ \ \ (b,c)
\in \big(g_B^Z\big)^{-1}(C), \]
\[ \big(h_C^W \circ g_B^Z\big)(b, c) := h_C^W(c); \ \ \ \ (b,c) \in \big(g_B^Z\big)^{-1}(C). \]
$\big(h_C^W \mathsmaller{\pcirc} g_B^Z\big) \mathsmaller{\pcirc} f_A^Y := \bigg(D, i_A^X \circ e_D^A,
\big[\big(h_C^W \circ g_B^Z\big) \circ f_A^Y\big]_D^W\bigg)$, where
\[
D := \big(f_A^Y\big)^{-1}\big[\big(g_B^Z\big)^{-1}(C)\big]
:= \bigg\{(a, d) \in A \times \big[\big(g_B^Z\big)^{-1}(C)\big] \mid f_A^Y(a) =_Y \big(i_B^Y \circ
e_{\mathsmaller{(g_B^Z)^{-1}(C)}}^B\big)(d) \bigg\},
\]
with $d:= (b,c) \in B \times C$ such that $g_B^Z(b) =_Z i_C^Z(c)$. The map $e_D^A \colon D \eto A$ is defined by
the rule $(a, d) \mapsto a$, for every $(a,d) \in D$, and
\[ \big[\big(h_C^W \circ g_B^Z\big) \circ f_A^Y\big](a, d) := \big(h_C^W \circ g_B^Z\big)(d) := h_C^W(c); \ \ \ \
(a,d) := (a, (b,c)) \in D. \]
$h_C^W \mathsmaller{\pcirc} \big(g_B^Z \mathsmaller{\pcirc} f_A^Y\big) := \bigg(E, \
i_A^X \circ e_{(f_A^Y)^{-1}(B)}^A \circ e_E^{\mathsmaller{(f_A^Y)^{-1}(B)}}, \
\big[h_C^W \circ \big(g_B^Z \circ f_A^Y\big)\big]_E^W\bigg)$, where
\[ E := \bigg[\big(g_B^Z \circ f_A^Y\big)_{\mathsmaller{(f_A^Y)^{-1}(B)}}^Z\bigg]^{-1}(C)
:= \bigg\{(u, c) \in \big[\big(f_A^Y\big)^{-1}(B)\big] \times C \mid (g_B^Z \circ f_A^Y)(u) =_Z i_C^Z(c) \bigg\},
\]
$e_E^{\mathsmaller{(f_A^Y)^{-1}(B)}} \colon E \eto \big(f_A^Y\big)^{-1}(B)$ is defined by the rule
$(u, c) \mapsto u$, for every $(u,c) \in E$, and
\[ \big[h_C^W \circ \big(g_B^Z \circ f_A^Y\big)\big](u, c) := h_C^W(c); \ \ \ \
(u,c) \in E. \]
Consider the operations $\phi \colon D \sto E$, defined by $\phi(a, (b,c)) := ((a,b), c)$, for every $(a, (b,c)) \in D$,
and $\theta \colon E \sto D$, defined by $\theta((a,b),c) := (a, (b,c))$, for every $((a,b),c) \in
E$. It is straightforward to show that $\phi$ and $\theta$ are well-defined functions, and that the
following inner diagrams commute
[Diagram: $\phi$ and $\theta$ between $D$ and $E$ over $X$, with the composites $\big(h_C^W \circ g_B^Z\big) \circ f_A^Y$ and $h_C^W \circ \big(g_B^Z \circ f_A^Y\big)$ into $W$.]
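One standard computational model of partial functions, not the pair-of-embeddings presentation of the text but useful for intuition, is maps into an option type; composition then restricts the domain to the preimage of the second domain, as $\big(f_A^Y\big)^{-1}(B)$ does above. A Lean 4 sketch:

```lean
-- A partial function X ⇀ Y as a map into Option Y; the support of the map
-- plays the role of the domain A.
abbrev PFun' (X Y : Type) := X → Option Y

-- Composition: defined at x only when f is defined at x and g is defined
-- at f x, i.e., on the preimage of the domain of g.
def pcomp {X Y Z : Type} (g : PFun' Y Z) (f : PFun' X Y) : PFun' X Z :=
  fun x => (f x).bind g
```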
The next proposition is straightforward to show.
Let $(A, i_A^X, f_A^Y), (B, i_B^X, f_B^Y) \in \C F(X, Y)$
[Diagram: $i_A^X \colon A \hookrightarrow X \hookleftarrow B \colon i_B^X$, with $f_A^Y$ and $f_B^Y$ into $Y$.]
Their left intersection $f_A^Y \cap_l f_B^Y$ and right intersection
$f_A^Y \cap_r f_B^Y$ are the partial functions
\[ f_A^Y \cap_l f_B^Y := \bigg(A \cap B, \ i_{A \cap B}^X, \ \big(f_A^Y \cap_l f_B^Y\big)_{A \cap B}^Y\bigg), \ \ \ \mbox{where}
\]
\[ \big(f_A^Y \cap_l f_B^Y\big)_{A \cap B}^Y(a, b) := f_A^Y(a); \ \ \ \ (a,b) \in A \cap B, \ \ \ \ \mbox{and} \]
\[ f_A^Y \cap_r f_B^Y := \bigg(A \cap B, \ i_{A \cap B}^X, \ \big(f_A^Y \cap_r f_B^Y\big)_{A \cap B}^Y\bigg), \ \ \ \mbox{where}
\]
\[ \big(f_A^Y \cap_r f_B^Y\big)_{A \cap B}^Y(a, b) := f_B^Y(b); \ \ \ \ (a,b) \in A \cap B. \]
Their union $f_A^Y \cup f_B^Y$ is the partial function
\[ f_A^Y \cup f_B^Y := \bigg(A \cup B, \ i_{A \cup B}^X, \ \big(f_A^Y \cup f_B^Y\big)_{A \cup B}^Y\bigg), \ \ \ \mbox{where}
\]
\[ \big(f_A^Y \cup f_B^Y\big)_{A \cup B}^Y(z) := \left\{ \begin{array}{ll}
f_A^Y(z) &\mbox{, $z \in A$}\\
f_B^Y(z) &\mbox{, $z \in B$.}
\end{array}
\right.
\]
$f_A^Y \cap_l f_B^Y \leq f_A^Y$ and $f_A^Y \cap_r f_B^Y \leq f_B^Y$.
If $f_A^Y(a) =_Y f_B^Y(b)$, for every $(a,b) \in A \cap B$, then $f_A^Y \cap_l f_B^Y =_{\C F(X,Y)}
f_A^Y \cap_r f_B^Y$.
$f_A^Y \leq f_A^Y \cup f_B^Y$ and $f_B^Y \leq f_A^Y \cup f_B^Y$.
$f_A^Y \cup f_B^Y =_{\C F(X,Y)} f_B^Y \cup f_A^Y$.
Let multiplication on $\D 2$ be the operation defined by $0 \cdot 1 := 1 \cdot 0 := 0 \cdot 0 := 0$ and $1 \cdot 1 := 1$.
If $(A, i_A^X, f_A^{\D 2}), (B, i_B^X, g_B^{\D 2}) \in \C F(X, \D 2)$, let
\[ f_A \cdot g_B := \big(A \cap B, i_{A \cap B}^X, (f_A \cdot g_B)_{A \cap B}^{\D 2}\big), \]
where $(f_A \cdot g_B)_{A \cap B}^{\D 2} \colon A \cap B \to \D 2$ is defined, for every $(a, b) \in A \cap B$,
by
\[ (f_A \cdot g_B)_{A \cap B}^{\D 2}(a, b) := f_A^{\D 2}(a) \cdot g_B^{\D 2}(b). \]
By the equality of the product on $A \cap B$, it is immediate to show that the operation
$(f_A \cdot g_B)_{A \cap B}^{\D 2}$ is a function. More generally, operations on $Y$ induce
operations on $\C F(X, Y)$. The above example with $Y := \D 2$ is useful in the next section.
§ COMPLEMENTED SUBSETS
An inequality on a set $X$ induces a positively defined notion of disjointness of subsets of $X$.
Let $(X, =_X, \neq_X)$ be a set, and $(A, i_A^X), (B, i_B^X) \subseteq X$. We say that
$A$ and $B$ are disjoint with respect to $\neq_X$, in symbols
$A \Disj_{\mathsmaller{\neq_X}} B$, if
\[ A \underset{\mathsmaller{\mathsmaller{\mathsmaller{\neq_X}}}} \Disj B :
\TOT \forall_{a \in A}\forall_{b \in B}\big(i_A^X(a) \neq_X i_B^X(b) \big). \]
If $\neq_X$ is clear from the context, we only write $A \Disj B$.
Clearly, if $A \Disj B$, then $A \cap B$ is not inhabited.
The positive disjointness of subsets of $X$ induces the notion of a complemented subset of $X$, and
the negative notion of the complement of a set is avoided. We use bold letters to denote a complemented subset of a set.
A complemented subset of a set $(X, =_X, \neq_X)$ is a pair
$\B A := (A^1, A^0)$,
where $(A^1, i_{A^1}^X)$ and $(A^0, i_{A^0}^X)$ are subsets of $X$ such that $A^1 \Disj A^0$. We
call $A^1$ the $1$-component of $\B A$ and $A^0$ the
$0$-component of $\B A$. If
$\Dm(\B A) := A^1 \cup A^0$ is the domain of $\B A$, the
indicator function, or
characteristic function, of
$\B A$ is the operation $\chi_{\B A} : \Dm(\B A) \sto \D 2$ defined by
\[ \chi_{\B A}(x) := \left\{ \begin{array}{ll}
1 &\mbox{, $x \in A^1$}\\
0 &\mbox{, $x \in A^0$.}
\end{array}
\right. \]
Let $x \in \B A :\TOT x \in A^1$ and $x \notin \B A :\TOT x \in A^0$. If $\B A, \B B$ are complemented subsets of $X$, let
\[ \B A \subseteq \B B : \TOT A^1 \subseteq B^1 \ \& \ B^0 \subseteq A^0. \]
Let $\C P^{\Disj}(X)$ be their totality,
equipped with the equality
$\B A =_{\C P^{\mathsmaller{\Disj}} (X)} \B B : \TOT \B A \subseteq \B B \ \& \ \B B \subseteq \B A$.
Let $\Eq(\B A, \B B) := \Eq(A^1, B^1) \times \Eq(A^0, B^0)$. A map $\B f \colon \B A \to \B B$ from $\B A$
to $\B B$ is a pair $(f^1, f^0)$, where $f^1 \colon A^1 \to B^1$ and
$f^0 \colon A^0 \to B^0$.
Clearly, $ \B A =_{\C P^{\mathsmaller{\Disj}} (X)} \B B \TOT A^1 =_{\C P(X)} B^1 \ \& \ A^0 =_{\C P(X)} B^0$,
and $\Eq(\B A, \B B)$ is a subsingleton, as the product of subsingletons.
Since the membership condition for $\C P^{\mathsmaller{\Disj}} (X)$ requires quantification over $\D V_0$, the
totality $\C P^{\mathsmaller{\Disj}} (X)$ is a class. The operation
$\chi_{\B A}$ is a function; actually, $\chi_{\B A}$ is a partial function in $\C F(X, \D 2)$.
Let $z, w \in A^1 \cup A^0$ be such that $z =_{A^1 \cup A^0} w$, i.e.,
\[ \left\{
\begin{array}{ll}
i_{A^1}^X(z) &\mbox{, $z \in A^1$}\\
i_{A^0}^X(z) &\mbox{, $z \in A^0$}
\end{array}
\right\}
=: i_{A^1 \cup A^0}^X(z) =_X
i_{A^1 \cup A^0}^X(w) := \left\{ \begin{array}{ll}
i_{A^1}^X(w) &\mbox{, $w \in A^1$}\\
i_{A^0}^X(w) &\mbox{, $w \in A^0$.}
\end{array}
\right.
\]
Let $z \in A^1$. If $w \in A^0$, then $i_{A^1}^X(z) := i_{A^1 \cup A^0}^X(z) =_X i_{A^1 \cup A^0}^X(w) := i_{A^0}^X(w)$
i.e., $(z, w) \in A^1 \cap A^0$, which contradicts the hypothesis $A^1 \Disj A^0$. Hence $w \in A^1$, and
$\chi_{\B A}(z) = \chi_{\B A}(w)$. If $z \in A^0$, we proceed similarly.
If $(X, =_X)$ is a set, consider the inequality on $X$ defined by
\[ x \neq_X^{\mathsmaller{\D F(X, \D 2)}} x{'} : \TOT \exists_{f \in \D F(X, \D 2)}\big(f(x)
=_{\mathsmaller{\D 2}} 1 \ \& \ f(x{'}) =_{\mathsmaller{\D 2}} 0 \big)
\]
If $f \in \D F(X, \D 2)$, the following extensional subsets of $X$
\[ \delta_0^1(f) := \{x \in X \mid f(x) =_{\mathsmaller{\D 2}} 1\}, \]
\[ \delta_0^0(f) := \{x \in X \mid f(x) =_{\mathsmaller{\D 2}} 0\}, \]
are called detachable, or free, subsets of $X$. Let also
their pair $\B \delta (f) := \big(\delta_0^1(f), \delta_0^0(f)\big)$.
$ x \neq_X^{\mathsmaller{\D F(X, \D 2)}} x{'} \TOT \exists_{f \in \D F(X, \D 2)}\big(f(x) \neq_{\D 2} f(x{'})\big)$,
and $\B \delta (f)$ is a complemented subset of $X$ with respect to the inequality $\neq_X^{\mathsmaller{\D F(X, \D 2)}}$.
The characteristic function $\chi_{\B \delta(f)}$ of $\B \delta(f)$ is definitionally equal to $f$ (recall that
$f(x) =_{\mathsmaller{\D 2}} 1 :\TOT f(x) := 1$), and $\delta_0^1(f) \cup \delta_0^0(f) = X$.
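Detachable subsets can be sketched as subtypes carved out by a Boolean-valued map, with `Bool` in the role of $\D 2$ (a Lean 4 illustration; the names `delta1`, `delta0` are ours):

```lean
-- δ¹₀(f): the elements of X where f takes the value 1 (here: true).
def delta1 {X : Type} (f : X → Bool) : Type := {x : X // f x = true}

-- δ⁰₀(f): the elements of X where f takes the value 0 (here: false).
def delta0 {X : Type} (f : X → Bool) : Type := {x : X // f x = false}
```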
If $\B A, \B B \in \C P^{\Disj}(X)$ and $\B C \in \C P^{\Disj}(Y)$, let
\[ \B A \cup \B B := (A^1 \cup B^1, A^0 \cap B^0), \]
\[ \B A \cap \B B := (A^1 \cap B^1, A^0 \cup B^0), \]
\[- \B A := (A^0, A^1), \]
\[ \B A - \B B := (A^1 \cap B^0, A^0 \cup B^1), \]
\[ \B A \times \B C := \big(A^1 \times C^1, \ [A^0 \times Y] \cup [X \times C^0]\big). \]
The following diagrams depict $\B A \cup \B B, \B A \cap \B B$, $\B A - \B B$, and $\B A \times \B C$, respectively.
[Venn-style diagrams of $\B A \cup \B B$, $\B A \cap \B B$, and $\B A - \B B$, and a rectangle diagram of $\B A \times \B C$ in the $X$-$Y$ plane, with panels labelled $A^1, A^0, B^1, B^0, C^1, C^0$, omitted.]
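Complemented subsets and the operations just defined can be sketched as pairs of predicates, with the disjointness condition $A^1 \Disj A^0$ tracked on paper (a Lean 4 illustration; the names are ours):

```lean
-- A complemented subset of X: a pair of predicates; the disjointness of the
-- two components is not carried in this sketch.
structure CSub (X : Type) where
  pos : X → Prop   -- the 1-component A¹
  neg : X → Prop   -- the 0-component A⁰

def cUnion {X : Type} (A B : CSub X) : CSub X :=
  ⟨fun x => A.pos x ∨ B.pos x, fun x => A.neg x ∧ B.neg x⟩

def cInter {X : Type} (A B : CSub X) : CSub X :=
  ⟨fun x => A.pos x ∧ B.pos x, fun x => A.neg x ∨ B.neg x⟩

-- −A swaps the components; A − B := A ∩ (−B).
def cCompl {X : Type} (A : CSub X) : CSub X := ⟨A.neg, A.pos⟩

def cDiff {X : Type} (A B : CSub X) : CSub X := cInter A (cCompl B)
```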
If $\B A, \B B \in \C P^{\mathsmaller{\Disj}}(X)$ and $\B C \in \C P^{\mathsmaller{\Disj}}(Y)$, then
$\B A \cup \B B$, $\B A \cap \B B$, $- \B A$, and $\B A - \B B$ are in $\C P^{\mathsmaller{\Disj}}(X)$
and $\B A \times \B C$ is in $\C P^{\mathsmaller{\Disj}} (X \times Y)$.
We show only the last membership. If $(a_1, c_1) \in A^1 \times C^1$ and $(a_0, c_0) \in A^0 \times C^0$,
then $i_{A^1}^X(a_1) \neq_X i_{A^0}^X(a_0)$ and $i_{C^1}^Y(c_1) \neq_Y i_{C^0}^Y(c_0)$. By definition
\[ i_{A^1 \times C^1}^{X \times Y}(a_1, c_1) := \big(i_{A^1}^X(a_1), i_{C^1}^Y(c_1)\big). \]
If $(a_0, y) \in A^0 \times Y$, then $(i_{A^0}^X \times \id_Y)(a_0, y) := (i_{A^0}^X(a_0), y)$, and if
$(x, c_0) \in X \times C^0$, then $(\id_X \times i_{C^0}^Y)(x, c_0) := (x, i_{C^0}^Y(c_0))$. In both
cases we get the required inequality.
Let $\B A, \B B$ and $\B C$ be in $\C P^{\mathsmaller{\Disj}} (X)$. The following hold:
$- (- \B A) := \B A$.
$- (\B A \cup \B B) := (- \B A) \cap (- \B B)$.
$- (\B A \cap \B B) := (- \B A) \cup (- \B B)$.
$\B A \cup (\B B \cap \B C) =_{\C P^{\mathsmaller{\Disj}} (X)} (\B A \cup \B B) \cap (\B A \cup \B C)$.
$\B A \cap (\B B \cup \B C) =_{\C P^{\mathsmaller{\Disj}} (X)} (\B A \cap \B B) \cup (\B A \cap \B C)$.
$\B A - \B B := \B A \cap (- \B B)$.
$\B A \subseteq \B B \TOT (\B A \cap \B B) =_{\C P^{\mathsmaller{\Disj}}(X)} \B A$.
$\B A \subseteq \B B \TOT - \B B \subseteq - \B A$.
If $\B A \subseteq \B B$ and $\B B \subseteq \B C$, then $\B A \subseteq \B C$.
Let $\B A \in \C P^{\mathsmaller{\Disj}}(X)$ and $\B B, \B C \in \C P^{\mathsmaller{\Disj}}(Y)$.
(i) $\B A \times (\B B \cup \B C) =_{\C P^{\mathsmaller{\Disj}}(X \times Y)} (\B A \times \B B) \cup
(\B A \times \B C)$.
(ii) $\B A \times (\B B \cap \B C) =_{\C P^{\mathsmaller{\Disj}}(X \times Y)} (\B A \times \B B) \cap
(\B A \times \B C)$.
We prove only (i). We have that
\begin{align*}
\B A \times (\B B \cup \B C) & := (A^1, A^0) \times (B^1 \cup C^1, B^0 \cap C^0)\\
& := \big(A^1 \times (B^1 \cup C^1), (A^0 \times Y) \cup [X \times (B^0 \cap C^0)]\big)\\
& =_{\C P^{\mathsmaller{\Disj}}(X \times Y)} \big((A^1 \times B^1) \cup (A^1 \times C^1), [(A^0 \times Y)
\cup (X \times B^0)] \ \cap\\
& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \cap [(A^0 \times Y) \cup (X \times C^0)]\big)\\
& := (\B A \times \B B) \cup (\B A \times \B C).\qedhere
\end{align*}
Let $(X, =_X, \neq_X^f)$ and $(Y, =_Y, \neq_Y)$ be sets, where $f \colon X \to Y$ $($see
Remark <ref>$)$.
Let also $\B A := (A^1, A^0)$ and $\B B := (B^1, B^0)$ in $\C P^{\mathsmaller{\Disj}} (Y)$.
(i) $f^{-1}(\B A) := \big(f^{-1}(A^1), f^{-1}(A^0)\big) \in \C P^{\Disj} (X)$.
(ii) $f^{-1}(\B A \cup \B B) =_{\mathsmaller{\C P^{\Disj} (X)}}
f^{-1}(\B A) \cup f^{-1}(\B B)$.
(iii) $f^{-1}(\B A \cap \B B) =_{\mathsmaller{\C P^{\Disj} (X)}} f^{-1}(\B A) \cap f^{-1}(\B B)$.
(iv) $f^{-1}(- \B A) =_{\mathsmaller{\C P^{\Disj} (X)}} - f^{-1}(\B A)$.
(v) $f^{-1}(\B A - \B B) =_{\mathsmaller{\C P^{\Disj} (X)}} f^{-1}(\B A) - f^{-1}(\B B)$.
(i) By Definition <ref> we have that
\[ f^{-1}(A^1) := \{(x, a_1) \in X \times A^1 \mid f(x) =_Y i_{A^1}^Y (a_1)\}, \ \ \ \
i_{\mathsmaller{f^{-1}(A^1)}}^X(x, a_1) := x, \]
\[ f^{-1}(A^0) := \{(x, a_0) \in X \times A^0 \mid f(x) =_Y i_{A^0}^Y (a_0)\}, \ \ \ \
i_{\mathsmaller{f^{-1}(A^0)}}^X(x, a_0) := x. \]
Let $(x, a_1) \in f^{-1}(A^1)$ and $(z, a_0) \in f^{-1}(A^0)$. By the extensionality of $\neq_Y$ we have that
\[ i_{\mathsmaller{f^{-1}(A^1)}}^X(x, a_1) \neq_X^f i_{\mathsmaller{f^{-1}(A^0)}}^X(z, a_0)
:\TOT x \neq_X^f z :\TOT f(x) \neq_Y f(z) \TOT i_{A^1}^Y (a_1) \neq_Y i_{A^0}^Y (a_0), \]
and the last inequality holds by the hypothesis $\B A \in \C P^{\mathsmaller{\Disj}} (Y)$.
Next we show only (ii):
\begin{align*}
f^{-1}(\B A \cup \B B) & := f^{-1}\big(A^1 \cup B^1, A^0 \cap B^0 \big)\\
& := \big(f^{-1}(A^1 \cup B^1), f^{-1}(A^0 \cap B^0)\big)\\
& = \big(f^{-1}(A^1) \cup f^{-1}(B^1), f^{-1}(A^0) \cap f^{-1}(B^0)\big)\\
& := f^{-1}(\B A) \cup f^{-1}(\B B).\qedhere
\end{align*}
Alternatively, one can define the following operations between complemented subsets.
If $\B A, \B B \in \C P^{\Disj}(X)$ and $\B C \in \C P^{\Disj}(Y)$, let
\[ \B A \vee \B B := \big([A^1 \cap B^1] \cup [A^1 \cap B^0] \cup [A^0 \cap B^1], \ A^0 \cap B^0\big), \]
\[ \B A \wedge \B B := \big(A^1 \cap B^1, \ [A^1 \cap B^0] \cup [A^0 \cap B^1] \cup [A^0 \cap B^0]\big), \]
\[ \B A \ominus \B B := \B A \wedge (- \B B), \]
\[ \B A \otimes \B C := \big(A^1 \times C^1, \ [A^1 \times C^0] \cup [A^0 \times C^1] \cup [A^0 \times C^0]\big). \]
The following diagrams depict $\B A \vee \B B, \B A \wedge \B B$, $\B A \ominus \B B$, and $\B A \otimes \B C$, respectively.
[Venn-style diagrams of $\B A \vee \B B$, $\B A \wedge \B B$, and $\B A \ominus \B B$, and a rectangle diagram of $\B A \otimes \B C$ in the $X$-$Y$ plane, with panels labelled $A^1, A^0, B^1, B^0, C^1, C^0$, omitted.]
With the previous definitions the corresponding characteristic functions are expressed through the characteristic
functions of
$\B A$ and $\B B$.
If $\B A, \B B$ are complemented subsets of $X$, then $\B A \vee \B B$, $\B A \wedge \B B$, $\B A \ominus \B B$,
$\B A \otimes \B B$, and $- \B A$ are complemented subsets ($\B A \otimes \B B$ of $X \times X$, the rest of $X$) with characteristic functions
\[ \chi_{\B A \vee \B B} =_{\C F(X, \D 2)} \chi_{\B A} \vee \chi_{\B B}, \ \ \chi_{\B A \wedge \B B} =_{\C F(X,
\D 2)} \chi_{\B A} \cdot \chi_{\B B}, \ \
\chi_{\B A \ominus \B B} =_{\C F(X, \D 2)} \chi_{\B A}(1 - \chi_{\B B}), \ \ \ \]
\[ \chi_{\B A \otimes \B B}(x, y) =_{\C F(X \times X, \D 2)} \chi_{\B A}(x) \cdot \chi_{\B B}(y), \ \ \
\chi_{- \B A} =_{\C F(X, \D 2)} 1 - \chi_{\B A}.
\]
We show only the equality $\chi_{\B A \wedge \B B} =_{\C F(X, \D 2)} \chi_{\B A}
\cdot \chi_{\B B}$. By Definition <ref> the multiplication of the partial maps $\chi_{\B A} \colon
\Dm(\B A) \to \D 2$ and $\chi_{\B B} \colon \Dm(\B B) \to \D 2$ is
the partial function
\[ \chi_{\B A} \cdot \chi_{\B B} := \big(\Dm(\B A) \cap \Dm(\B B), i_{\Dm(\B A) \cap \Dm(\B B)}^X, (\chi_{\B A}
\cdot \chi_{\B B})_{\Dm(\B A) \cap \Dm(\B B)}^{\D 2} \big), \]
\[ (\chi_{\B A} \cdot \chi_{\B B})_{\Dm(\B A) \cap \Dm(\B B)}^{\D 2}(u, w) := \chi_{\B A}(u) \cdot \chi_{\B B}(w), \]
for every $(u, w) \in \Dm(\B A) \cap \Dm(\B B)$. The partial function $\chi_{\B A \wedge \B B}$ is the triplet
\[ \chi_{\B A \wedge \B B} := \big(\Dm(\B A \wedge \B B), i_{\Dm(\B A \wedge \B B)}^X, (\chi_{\B A
\wedge \B B})_{\Dm(\B A \wedge \B B)}^{\D 2} \big). \]
Since $\Dm(\B A \wedge \B B) =_{\C P(X)} \Dm(\B A) \cap \Dm(\B B)$, witnessed by some $(f, g) \colon \Dm(\B A \wedge \B B)
=_{\C P(X)} \Dm(\B A) \cap \Dm(\B B)$, it is straightforward to show that the following outer diagram commutes
[Diagram: $f$ and $g$ between $\Dm(\B A \wedge \B B)$ and $\Dm(\B A) \cap \Dm(\B B)$, the embeddings $i_{\Dm(\B A \wedge \B B)}^X$ and $i_{\Dm(\B A) \cap \Dm(\B B)}^X$ into $X$, and the characteristic maps $(\chi_{\B A \wedge \B B})_{\Dm(\B A \wedge \B B)}^{\D 2}$ and $(\chi_{\B A} \cdot \chi_{\B B})_{\Dm(\B A) \cap \Dm(\B B)}^{\D 2}$ into $\D 2$.]
and hence the two partial functions are equal in $\C F(X, \D 2)$.
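To make the bookkeeping of domains concrete, the following Python sketch models complemented subsets of a small finite set and checks the identity $\chi_{\B A \wedge \B B} = \chi_{\B A} \cdot \chi_{\B B}$ established above, together with $\chi_{- \B A} = 1 - \chi_{\B A}$. The encoding of subsets as plain Python sets, and the particular $\B A$ and $\B B$, are illustrative assumptions, not part of the text.

```python
# A finite toy model of complemented subsets of X, meant only to illustrate
# the identity chi_{A /\ B} = chi_A * chi_B proved above.  Following the
# proof, Dm(A) := A1 u A0 and Dm(A /\ B) = Dm(A) n Dm(B), with 1-component
# A1 n B1; the complement is -A := (A0, A1).

X = set(range(10))
A1, A0 = {0, 1, 2}, {5, 6, 7}          # a complemented subset A = (A1, A0)
B1, B0 = {1, 2, 3}, {6, 8}             # a complemented subset B = (B1, B0)

def chi(one, zero):
    """Characteristic partial function: defined only on one u zero."""
    return {x: (1 if x in one else 0) for x in one | zero}

chi_A, chi_B = chi(A1, A0), chi(B1, B0)

# The meet: 1-component A1 n B1; the rest of Dm(A) n Dm(B) is the 0-component.
dom = (A1 | A0) & (B1 | B0)
meet1 = A1 & B1
meet0 = dom - meet1
chi_meet = chi(meet1, meet0)

# The identity chi_{A /\ B}(x) = chi_A(x) * chi_B(x) on Dm(A) n Dm(B):
assert all(chi_meet[x] == chi_A[x] * chi_B[x] for x in dom)

# The complement -A := (A0, A1) has chi_{-A} = 1 - chi_A on Dm(A):
chi_neg = chi(A0, A1)
assert all(chi_neg[x] == 1 - chi_A[x] for x in A1 | A0)
print("characteristic-function identities verified on the toy model")
```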
§ NOTES
In [55] Greenleaf introduced predicates on objects through the totality $\Omega$ of propositions
and then he defined $\C P(X)$ as $\D F(X, \Omega)$. A similar treatment of the powerset $\C P(X)$ is found
in [115]. For us a predicate on a set $X$ is a bounded formula $P(x)$ with $x$ as a free variable.
In order to define new objects from $X$ through $P$ we ask $P$ to be extensional.
In [27], pp. 114-5, Cantor described a set as follows:
A manifold (a sum, a set) of elements belonging to some conceptual
sphere is called well-defined if, on the basis of its definition and in
accordance with the logical principle of the excluded third, it must
be regarded as internally determined, both whether any object of
that conceptual sphere belongs as an element to the mentioned set,
and also whether two objects belonging to the set, in spite of formal
differences in the mode of givenness, are equal to each other or not.
Bishop's intuitive notion of set is similar to Cantor's, except that he does not invoke
the principle of the excluded middle ($\PEM$). As was pointed out to me by W. Sieg, Dedekind's
primitive notions in [44] were “systems” and “transformations of systems”.
Notice that here we study defined totalities that are not defined inductively.
The inductively defined sets are expected to be studied in a future work within an extension $\BST^*$ of $\BST$.
Although $\Nat$ is the only primitive set considered in $\BST$, one
could, in principle, add more primitive sets. E.g., a primitive set of Booleans, of integers, and,
more interestingly, a primitive continuous interval, or a primitive real line (see [23] for an axiomatic
treatment of the set $\Real$ of reals within $\BISH$).
In Martin-Löf type theory the definitional, or judgemental equality $a := b$, where $a, b$ are terms
of some type $A$, is never used in a formula. We permit the use of the
definitional equality $:=$ for membership conditions only.
In the membership condition for the product we use the primitive notion of a pair.
The membership condition for an extensional subset $X_P$ of $X$ implies that an object $x$ “has not unique typing”,
as it can be an element of more than one sets.
The positively defined notion of discrete set used here comes from [76], p. 9. There it is also mentioned that
a set without a specified inequality i.e., a pair $(X, =_X)$, is discrete, if
$\forall_{x,y \in X}\big(x =_X y \ \vee \ \neg(x =_X y)\big)$.
In [84] it is mentioned that the above discreteness of $\D F(\Nat, \Nat)$ implies
the non-constructive principle “weak $\LPO$”
\[ \forall_{f \in \D F(\Nat, \Nat)}\bigg(\forall_{n \in \Nat}\big(f(n) =_{\Nat} 0\big) \ \vee \ \neg
\forall_{n \in \Nat}\big(f(n) =_{\Nat} 0\big)\bigg). \]
Because of a result of Bauer and Swan in [4], we cannot show in $\BISH$ the existence of an
uncountable separable metric space, hence, using the discrete metric, the existence of an uncountable
discrete set. Note that in [9], p. 66, a set $S$ is called discrete, if the set
$D := \{(s, t) \in S \times S \mid s =_S t\}$ is a free, or detachable, subset of $S \times S$.
In Definition <ref> we use the symbol $D(S)$ for $D$ and we call it the diagonal of $S$. We employ here the diagonal of a set in the fundamental definition of a set-indexed family of sets (Definition <ref>).
In [9] and [19], the negation $\neg \phi$ of a formula $\phi$ is not mentioned explicitly. E.g., the exact
writing of condition $(\Ap_1)$ in Definition <ref> is “if $x =_X y$ and $x \neq_X y$, then $0 =_{\Nat} 1$”.
Similarly, the condition of tightness in Definition <ref> is written as follows:
“if $x \neq_X y$ entails $0 = 1$, then $x =_X y$”. Hence, if $\neq_X$ is tight, the implication $x \neq_X y \To
0 =_{\Nat} 1$ is logically equivalent to the (positively defined, if $X$ is a defined totality) equality $x =_X y$.
Within intuitionistic logic one defines $\neg \phi := \phi \To \bot$.
The definitions of $(-2)$-sets and $(-1)$-sets are proof-irrelevant translations of the corresponding notions in $\HoTT$,
which were introduced by Voevodsky (see [127]). The definition of a $0$-set requires to determine a set
$\Eq^X(x, y)$ of witnesses of the equality $x =_X y$. This is done in a universal way in $\MLTT$, while in $\BST$
in a “local” way, and by definition (see Definition <ref>).
In the literature of constructive mathematics (see e.g., [7], pp. 34–35)
the term preset is used for a totality.
Also, the term operation is used for a non-dependent assignment routine from a totality $X$ to a totality $Y$
(see [7], p. 44), while we use it only for a non-dependent assignment routine from a set $X$ to a set $Y$.
The notion of uniqueness associated to the definition of a function is local, in the following sense:
if $f \colon X \to Y$, it is immediate to show that $\forall_{x \in X}\exists!_{y \in Y}\big(f(x) =_Y y\big)$.
The converse is the local version of Myhill's axiom of non-choice $(\LANC)$. Let $P(x, y)$ be an extensional property on
$X \times Y$ i.e., $\forall_{x, x{'}, y, y{'} \in X}\big([x =_X x{'} \ \& \ y =_Y y{'} \ \& \ P(x, y)]
\To P(x{'}, y{'})\big)$. The principle
$(\LANC)$, the local version of Myhill's axiom of non-choice, is the formula
\[ \forall_{x \in X}\exists!_{y \in Y}P(x, y) \To \exists_{f \in \D F(X, Y)}\forall_{x \in X}\big(P(x,
f(x))\big). \]
Notice that $\LANC$ provides the existence of a function for which we only know how its outputs behave with
respect to the equality of $Y$, and it gives no information on how $f$ behaves definitionally.
If we define $Q_x (y) := P(x, y)$, then if we suppose $Q_x (f(x))$ and $Q_x (g(x))$, for some $f, g \in
\D F(X, Y)$, we get $f(x) =_Y y =_Y g(x)$, and then $(\LANC)$ implies
\[ \forall_{x \in X}\exists!_{y \in Y}P(x, y) \To \exists!_{f \in \D F(X, Y)}\forall_{x \in X}\big(P(x,
f(x))\big). \]
We can use $(\LANC)$ to view an arbitrary subset $(A, i_A^X)$ of $X$ as an extensional subset of $X$.
If $(A, i_A^X) \in \C P(X)$, then the property $P_A$ on $X$ defined by
$P_A(x) := \exists_{a \in A}\big(i_A^X (a) =_X x\big)$,
is extensional, and $(i_A^X, j_A^{X}) : X_{P_A} =_{\C P(X)} (A, i_A^X)$, for some function $j_A^{X} \colon X_{P_A} \to A$.
To show this, let $x, y \in X$ such that $P_A(x)$ and $x =_X y$. By transitivity of $=_X$, if $i_A^{X} (a) =_X x$, then
$i_A^{X} (a) =_X y$. If $x \in X$ and $a, b \in A$ such that $i_A^{X} (a) =_X x =_X i_A^{X} (b)$, then $a =_A b$ i.e.,
$\forall_{x \in X_{P_A}}\exists!_{a \in A}\big(i_A^{X} (a) =_X x\big)$,
and since the property $Q(x, a) : \TOT i_A^{X} (a) =_X x$ is extensional on $X_{P_A} \times A$, by $(\LANC)$
there is a (unique) function $j_A^{X} : X_{P_A} \to A$, such that for every $x \in X_{P_A}$ we have that
$i_A^X(j_A ^{X}(x)) =_X x$, and the required diagram commutes.
The principle $(\LANC)$, which is also considered in [5], is included in Myhill's system $\CST$ (see [80])
as a principle of generating functions. This is in contrast to Bishop's algorithmic approach to the concept of
a function. In [19], p. 67, a function $f : A \to B$ is defined as a finite routine which,
applied to any element of $A$, produces an element $b \equiv f(a)$ of $B$, such that $f(a) =_B f(a{'})$,
whenever $a =_A a{'}$. In [19], p. 15, we read that $f$ “affords an explicit, finite mechanical reduction
of the procedure for
constructing $f(a)$ to the procedure for constructing $a$”.
The pattern of defining a function $\fXY$ by first defining an operation $f \colon X \sto Y$, and then proving that
$f$ is a function, is implicit in the more elementary parts of [9] and [19], and more explicit
in the later parts of the books. E.g., in [19], p. 199, an inhabited subset $U$ of
$\D C$ has the maximal extent property, if there
is an operation $\mu$ from $U$ to $\Real^+$ satisfying certain properties. One can show afterwards
that $U$ is open and $\mu$ is a function on $U$. This property is used in Bishop's proof
of the Riemann mapping theorem (see [19], pp. 209–210).
Regarding the set-character of $\D F(X, Y)$, Bishop, in [19], p. 67, writes:
When $X$ is not countable, the set $\D F(X, Y)$ seems to have little practical interest, because
to get a hold on its structure is too hard. For instance, it has been asserted by Brouwer that all
functions in $\D F(\Real, \Real)$ are continuous, but no acceptable proof of this assertion is known.
Similar problems occur though, in function spaces where the domain of the functions is
a countable set. E.g., we cannot accept constructively (i.e., in the sense of Bishop) that the Cantor
space $\D F(\Nat, 2)$ satisfies Markov's principle, but no one that we know of has doubted the
set-character of $\D F(\Nat, 2)$. The possibility of doubting the set-character of the Baire space $\D F(\Nat, \Nat)$
is discussed by Beeson in [7], p. 46.
In intensional Martin-Löf Type Theory the type
\[\bigg(\prod_{x \colon X}f(x) = g(x)\bigg) \to f = g \]
is not provable (inhabited), and its inhabitance is known as the
axiom of function extensionality $(\FunExt)$.
In $\BST$ this axiom is part of the canonical definition of the function space $\D F(X, Y)$. Because of this,
many results in $\MLTT + \FunExt$ are translatable in $\BST$ (see Chapter <ref>$)$.
The totality $\D V_0$ is not mentioned by Bishop, although it is necessary, if we want to formulate the
fundamental notion of a set-indexed family of sets.
The defined
equality on the universe $\D V_0$ expresses that $\D V_0$ is univalent, as isomorphic sets are
equal in $\D V_0$. In univalent type theory, which is $\MLTT$ extended with Voevodsky's
axiom of univalence $\UA$
(see [127]), the existence of a pair of quasi-inverses between types $A$ and $B$ implies
that they are equivalent in Voevodsky's sense, and by the univalence axiom, also propositionally equal.
The axiom $\UA$ is partially translated in $\BST$ as the canonical definition of $\D V_0$.
Because of this, results in $\MLTT + \UA$ that do not raise the level of the universe are translatable in $\BST$.
For example, Proposition <ref> is lemma 4.9.2 in book HoTT [127], where $\UA$ is used
in its proof: if $e \colon X \simeq Y$, then $Z \to X \simeq Z \to Y$, and by
$\UA$ we get $e = \idtoeqv (p)$, for some $p \colon X =_{\C U} Y$. Notice that in the
formulation of this lemma the universe-level is not raised.
The notion of a dependent operation is explicitly mentioned by Bishop in [9], p. 65, and
repeated in [19], p. 70, in the definition of the intersection of a family of subsets of a set
indexed by some set
an element $u$ of $\bigcap_{t \in T}\lambda(t)$ is a finite routine which
associates an element $x_t$ of $\lambda(t)$ with each element $t$ of $T$, such that
$i_t(x_t) = i_{t{'}}(x_{t{'}})$ whenever $t, t{'} \in T$.
This definition corresponds to our Definition <ref>.
Bishop's definition of a subset of a set is related to the notion of a subobject in Category Theory
(see [3], p. 89, and [54], p. 75). In practice the subsets of a set $X$ are defined through an
extensional property on $X$. In [20], p. 7, this approach to the notion of a subset is considered
as its definition. Note
that there the implication $x =_{X} y \To (P(y) \To P(x))$ is also included in
the definition of an extensional property, something which follows, though, from the symmetry of $=_X$.
Such a form of separation axiom is used implicitly in [9] and in [19]. Myhill used in
his system $\CST$ the axiom of bounded separation to implement the notion of an extensional subset of $X$.
This axiom is also
included in Aczel's system $\CZF$ (see [1], p. 26).
One could have defined the equality $=_{A \cup B}$ without relying on the non-dependent assignment routine
$i_{A \cup B}^X$. If we define first
$$z =_{A \cup B} w : \TOT \left\{ \begin{array}{lllllll}
i_A^X (z) =_X i_A^X (w) &\mbox{, $z, w \in A$}\\
i_A^X (z) =_X i_B^X (w) &\mbox{, $z \in A \ \& \ w \in B$}\\
i_B^X (z) =_X i_B^X (w) &\mbox{, $z, w \in B$}\\
i_B^X (z) =_X i_A^X (w) &\mbox{, $z \in B \ \& \ w \in A$,}
\end{array}
\right.$$
we can define afterwards the operation $i_{A \cup B}^X : A \cup B \to X$ as in
Definition <ref>.
In this way the non-dependent assignment routine $i_{A \cup B}^X$ is defined on a set, and it is an operation.
Bishop avoids this definition, probably because this pattern cannot be extended to the definition of a union of a family of
subsets (see Definition <ref>). In that case,
we cannot write down the corresponding case distinction for $z =_{A \cup B} w$. Moreover,
the proof of $\big(A \cup B, i_{A \cup B}^X\big) \subseteq X$ is immediate, if one
uses Definition <ref>.
The definition of the empty subset $\emptyset_X$ of a set $X$, given in [9], p. 65, can be
formulated as follows. Let $X$ be a set and $x_0 \in X$. The totality $\emptyset_X$ is defined by
$z \in \emptyset_X : \TOT x_0 \in X \ \& \ 0 =_{\Nat} 1$.
Let $i_{\emptyset}^X \colon \emptyset_X \sto X$ be the non-dependent assignment routine, defined by $i(z) := x_0$,
for every $z \in \emptyset_X$, and let $z =_{\emptyset_X} w : \TOT i(z) =_X i(w) : \TOT x_0 =_X x_0.$
The pair $(\emptyset_X, i_{\emptyset}^X)$ is the empty subset of $X$.
One can show that $=_{\emptyset_X}$ is an equality on $\emptyset_X$, and hence $\emptyset_X$ can be
considered to be a set. The assignment routine $i_{\emptyset}^X$ is an embedding of $\emptyset_X$ into $X$, and hence
$(\emptyset_X, i_{\emptyset}^X)$ is a subset of $X$. As Bishop himself writes in [9], p. 65, “the definition
of $\emptyset$ is negativistic, and we prefer to mention the void set as seldom as possible”.
In [19], p. 69, Bishop and Bridges define two subsets $A, B$ of $X$ to be disjoint, when
$A \cap B$ “is the void subset of $X$”.
Clearly, this “is” cannot be $A \cap B := \emptyset_X$. If we interpret it as
$A \cap B =_{\C P(X)} \emptyset_X$, we need the existence of certain functions
from $\emptyset_X$ to $A \cap B$ and from $A \cap B$ to $\emptyset_X$. The latter approach is
followed in $\MLTT$ for the empty type. Following Bishop, we refrain from elaborating this negatively defined notion.
If $(A, i_A^X) \subseteq X$, $(B, i_B^Y) \subseteq Y$, and $\fXY$, the
extensional image $f[A]$ of
$A$ under $f$ is defined through the extensional property
$P(y) := \exists_{a \in A}\big(f(i_A(a)) =_Y y\big)$.
Similarly, the extensional pre-image $f^{-1}[B]$ of
$B$ under $f$ is defined through the extensional property
$Q(x) := \exists_{b \in B}\big(f(x) =_Y i_B (b)\big)$.
The subset $f(A)$ of $Y$ contains exactly the outputs $f(i_A^X(a))$ of $f$, for every $a \in A$, while
the subset $f[A]$ of $Y$ contains all the elements of $Y$ that are $=_Y$-equal to some output $f(i_A(a))$
of $f$, for every $a \in A$. It is useful to keep the “distinction” between the subsets $f(A)$, $f[A]$, and
$f^{-1}(B)$, $f^{-1}[B]$.
We need the equality in $\C P(X)$ of a subset of $X$ to its extensional version (see Note <ref>),
hence the principle $\LANC$, to get
$f(A) =_{\C P(Y)} f[A]$ and $f^{-1}(B) =_{\C P(X)} f^{-1}[B]$.
There are instances in Bishop's work indicating that the powerset of a set is treated as a set.
In [9], p. 68, and in [19], p. 74,
the following “function” is defined
\[j : \C P^{\Disj} (X) \to \C P(X), \ \ \ \ (A^1, A^0) \mapsto A^1. \]
This is in complete contrast to our interpretation of a function as an operation between sets. Of course,
such a rule is an exception in [9] and [19].
In the definition of an integration space, see [19], p. 216, the “set” $\C F(X,Y)$ of all
strongly extensional partial functions from $X$ to $Y$ requires quantification over $\D V_0$.
Such a quantification is also implicit in the definition of a measure space given in [19], p. 282,
and in the definition of a complete measure space
in [19], p. 289. These definitions appeared first in [18], p. 47, and p. 55, respectively.
The powerset is repeatedly used as a set in [20] and [76]. It is not known if the treatment
of the powerset as a set implies some constructively unacceptable principle.
There are instances in Bishop's work indicating that the powerset of a set is not treated as a set.
See e.g., the definition of a set-indexed family of sets in [19], p. 78 (our Definition <ref>).
Similarly, in the definition of a family of subsets of a set $A$ indexed
by some set $T$ (see [19], p. 69), the notion of a finite routine that assigns a
subset of $A$ to an element of $T$ is used, and not the notion of a function from $T$ to $\C P (A)$.
In the definition of a measure space in [9], p. 183, a subfamily of a given family of
complemented sets is considered in order to avoid quantification over the class of all complemented subsets in the
formulations of the definitional clauses of a measure space (see Note <ref>).
The powerset axiom is also avoided in Myhill's formalization [80] of $\BISH$ and in Aczel's subsequent system
$\CZF$ of constructive set theory (see [1]). Although, as we said, it is not known if the use
of the powerset as a set implies some constructively unacceptable principle, it is not accepted in
any predicative development of constructive mathematics.
The notion of a partial function was introduced by Bishop and Cheng in [18], p. 1, and this definition,
together with the introduced term “partial function”, was also included in Chapter 3
of [19], p. 71. The totality of partial functions $\C F(X)$ from a set $X$ to $\Real$ is crucial to the
definition of an integration space in the new measure theory developed in [18], and seriously
extended in [19]. Only the basic algebraic operations on $\C F(X)$ were defined
in [19], p. 71. The composition of partial functions is mentioned in [39],
pp. 66–67.
A notion of a partial dependent operation can be defined as follows. If $A, I$ are sets, a partial dependent
operation is a triplet $(A, i_A^I, \Phi_A^{\lambda_0})$, where
$(A, i_A) \subseteq I$, $\lambda_0 \colon A \sto \D V_0$, and $\Phi_A^{\lambda_0} \colon
\bigcurlywedge_{a \in A}\lambda_0(a)$. If $\lambda_0(a) := Y$, for every $a \in A$, then the corresponding
partial dependent operation is reduced to a partial function in $\C F(I, Y)$.
In the study of various subsets of a set $X$ we avoided to define the complement of a subset,
since this requires a negative definition. Recall that the negatively defined notion of empty subset of a set
is not really used. In [9] Bishop introduced a
positive notion of the complement of a subset of a set $X$, the notion of a complemented subset of $X$. For
its definition we need a notion of a fixed inequality on $X$, which is compatible with the given equality of
$X$. In this way we can express the disjointness of two subsets $A, B$ of a set $X$ in a positive way. Usually,
$A, B$ are called disjoint subsets, if $A \cap B$ is not inhabited.
It is computationally more informative though, if a positive way is found to express disjointness of subsets.
In [25] a positive notion of apartness is used as a foundation of constructive topology.
The definitions of $\B A \cap \B B, \B A \cup \B B$ and $\B A - \B B$ appear in [9], p. 66,
where $\B A \cup \B B$ and $\B A \cap \B B$ are special cases of the complemented subsets
$\bigcup_{i \in I}\B \lambda_0(i)$ and $\bigcap_{i \in I}\B \lambda_0(i)$, respectively (see Proposition <ref>). There the inequality on $X$ is induced by an inhabited set of functions from $X$ to $\Real$. The
definition of $\B A \times \B C$ appears in [9], p. 206, in the section of the product measures.
One can motivate these definitions by applying a “classical” mode of thinking. If $x \in X$, recall the definitions
\[ x \in \B A :\TOT x \in A^1 \ \ \ \& \ \ \ x \notin \B A :\TOT x \in A^0. \]
Interpreting the connectives in a classical way, we get
\[ x \in \B A \cup \B B \TOT x \in \B A \ \vee \ x \in \B B :\TOT x \in A^1 \ \vee \ x \in B^1
:\TOT x \in A^1 \cup B^1, \]
\[ x \notin \B A \cup \B B \TOT x \notin \B A \ \& \ x \notin \B B :\TOT x \in A^0 \ \& \ x \in B^0
:\TOT x \in A^0 \cap B^0, \]
\[ x \in \B A \cap \B B \TOT x \in \B A \ \& \ x \in \B B :\TOT x \in A^1 \ \& \ x \in B^1
:\TOT x \in A^1 \cap B^1, \]
\[ x \notin \B A \cap \B B \TOT x \notin \B A \ \vee \ x \notin \B B :\TOT x \in A^0 \ \vee \ x \in B^0
:\TOT x \in A^0 \cup B^0, \]
\[ x \in - \B A \TOT x \notin \B A :\TOT x \in A^0 \ \ \ \ \& \ \ \ \ x \notin - \B A \TOT x \in \B A
:\TOT x \in A^1, \]
\[ (x,y) \in \B A \times \B C \TOT x \in \B A \ \& \ y \in \B C :\TOT x \in A^1 \ \& \ y \in C^1 :\TOT
(x,y) \in A^1 \times C^1,\]
\[ (x,y) \notin \B A \times \B C \TOT x \notin \B A \ \vee \ y \notin \B C :\TOT x \in A^0 \ \vee \ y
\in C^0 :\TOT (x,y) \in (A^0 \times Y) \cup (X \times C^0).\]
In [18], pp. 16–17, and in [19], p. 73,
the operations between the complemented subsets of a set $X$ follow Definition <ref>
in order to employ the good behaviour of the corresponding characteristic functions in the new measure theory.
In the measure theory of [9], where the characteristic functions of complemented subsets are not crucial,
the operations between complemented subsets are defined according to Definition <ref>.
Bishop and Cheng use the notation $\B A \times \B B$ instead of $\B A \otimes \B B$.
As is evident from the previous figures, the $1$- and $0$-components of the complemented subsets in the Bishop-Cheng definition are subsets of the corresponding $1$- and $0$-components of the complemented subsets in the Bishop definition from [9]. Actually, the definitions of the operations of complemented subsets
in [9] associate to the $1$-component of the complemented subset a maximal complement. The two sets of operations though, share the same algebraic and set-theoretic properties. They only behave differently
with respect to their characteristic functions.
Based on the work [115] of Shulman, we can motivate the second set of operations in a way similar to
the motivation provided for the first set of operations in Note <ref>.
Keeping the definitions of $x \in \B A$ and $x \notin \B B$, we can apply a “linear”
interpretation of the connectives $\vee$ and $\&$. As mentioned in [115], p. 2, the multiplicative
version $P \ \parl \ Q$ of $P \vee Q$ in linear logic represents the pattern
“if not $P$, then $Q$; and if not $Q$, then $P$”. Let
\[ x \in \B A \vee \B B :\TOT [x \notin \B A \To x \in \B B] \ \& \ [x \notin \B B \To x \in \B A]. \]
With the use of Ex falso quodlibet the implication $x \notin \B A \To x \in \B B$ holds if $x \in \B A :\TOT x \in A^1$, or
if $x \notin \B A :\TOT x \in A^0$ and $x \in \B B :\TOT x \in B^1$ i.e., if $x \in A^0 \cap B^1$. Hence,
the first implication holds if $x \in A^1 \cup (A^0 \cap B^1)$. Similarly, the second holds if
$x \in B^1 \cup (B^0 \cap A^1)$. Thus
\[ x \in \B A \vee \B B \TOT x \in [ A^1 \cup (A^0 \cap B^1)] \cap [B^1 \cup (B^0 \cap A^1)], \]
and the last intersection is equal to $\Dm(\B A \vee \B B)$! One then can define $x \notin \B A \vee \B B
:\TOT x \notin \B A \ \& \ x \notin \B B$, and $x \in \B A \wedge \B B :\TOT x \in \B A \ \& \ x \in \B B$,
and $x \notin \B A \wedge \B B :\TOT x \in (- \B A) \vee (- \B B)$.
For the relation of complemented subsets to the Chu construction see [103].
CHAPTER: FAMILIES OF SETS
We develop the basic theory of set-indexed families of sets and of family-maps between them.
We study the exterior union of a family of sets $\Lambda$, or the $\sum$-set of $\Lambda$, and the set
of dependent functions over $\Lambda$, or the $\prod$-set of $\Lambda$. We prove the distributivity of
$\prod$ over $\sum$ for families of sets indexed by a product of sets, which is the translation
of the type-theoretic axiom of choice into $\BST$. Sets of sets are special set-indexed families of sets
that allow “lifting” of functions on the index-set to functions on them. The direct families of sets and the
set-relevant families of sets are introduced. The index-set of the former is a directed set, while the
transport maps of the latter are more than one and appropriately indexed. With the use of the introduced
universe $\D V_0^{\im}$ of sets and impredicative sets we study families of families of sets.
§ SET-INDEXED FAMILIES OF SETS
Roughly speaking, a family of sets indexed by some set $I$ is an
assignment routine $\lambda_0 : I \sto \D V_0$ that
behaves like a function, i.e., if $i =_I j$, then $\lambda_0(i) =_{\D V_0} \lambda_0 (j)$. Next follows
an exact formulation of this description that reveals the witnesses of the
equality $\lambda_0(i) =_{\D V_0} \lambda_0 (j)$.
If $I$ is a set,
a family of sets indexed by $I$, or an $I$-family of sets, is a pair $\Lambda := (\lambda_0, \lambda_1)$, where
$\lambda_0 \colon I \sto \D V_0$, and $\lambda_1$, a
modulus of function-likeness for $\lambda_0$, is given by
\[ \lambda_1 \colon \bigcurlywedge_{(i, j) \in D(I)}\D F\big(\lambda_0(i), \lambda_0(j)\big),
\ \ \ \lambda_1(i, j) := \lambda_{ij}, \ \ \ (i, j) \in D(I), \]
such that the transport maps $\lambda_{ij}$
of $\Lambda$ satisfy the following conditions:
For every $i \in I$, we have that $\lambda_{ii} := \id_{\lambda_0(i)}$.
If $i =_I j$ and $j =_I k$, the following triangle commutes i.e., $\lambda_{ik} = \lambda_{jk} \circ \lambda_{ij}$.
[Diagram: $\lambda_{ij} \colon \lambda_0(i) \to \lambda_0(j)$, $\lambda_{jk} \colon \lambda_0(j) \to \lambda_0(k)$, and $\lambda_{ik} \colon \lambda_0(i) \to \lambda_0(k)$.]
$I$ is the index-setindex-set of the family $\Lambda$.
If $X$ is a set, the constant $I$-family of sets $C^X$ is the pair
$C^X := (\lambda_0^X, \lambda_1^X)$, where $\lambda_0 (i) := X$, for every
$i \in I$, and $\lambda_1 (i, j) := \id_X$, for every $(i, j) \in D(I)$ $($see the left diagram in
Definition <ref>$)$.
The dependent operation $\lambda_1$ should have been written as follows
\[ \lambda_1 \colon \bigcurlywedge_{z \in D(I)}\D F\big(\lambda_0(\prb_1(z)), \lambda_0(\prb_2(z))\big), \]
but, for simplicity, we avoid the use of the primitive projections $\prb_1, \prb_2$. Condition (a) of
Definition <ref> could have been written as $\lambda_{ii} =_{\mathsmaller{\D F(\lambda_0(i),
\lambda_0(i))}} \id_{\lambda_0(i)}$. If $i =_I j$, then by conditions (b) and (a) of Definition <ref>
we get
$\id_{\lambda_0(i)} := \lambda_{ii} = \lambda_{ji} \circ \lambda_{ij}$ and
$ \id_{\lambda_0(j)} := \lambda_{jj} = \lambda_{ij} \circ \lambda_{ji} $
i.e., $(\lambda_{ij}, \lambda_{ji}) \colon \lambda_0 (i) =_{\D V_0} \lambda_0 (j)$. In this sense $\lambda_1$ is
a modulus of function-likeness for $\lambda_0$.
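The following Python sketch is a toy model of this definition: an $I$-family of sets over a four-element index set with equality “same parity”, whose transport maps simply retag elements. The concrete index set, equality, and transport maps are illustrative assumptions; the point is that conditions (a) and (b) are checkable laws.

```python
# A sketch of an I-family of sets Lambda = (lambda0, lambda1) over a toy
# index set: I = {0,1,2,3} with i =_I j iff i % 2 == j % 2.  lambda0 assigns
# to each i a tagged copy of a 3-element set, and lambda1 gives the
# transport maps lambda_ij.  All concrete choices are illustrative.

I = [0, 1, 2, 3]
eqI = lambda i, j: i % 2 == j % 2           # the given equality on I

def lambda0(i):
    return {(i, k) for k in range(3)}       # lambda0(i), a set in V0

def lambda1(i, j):
    assert eqI(i, j)                        # lambda_ij exists only on D(I)
    return lambda x: (j, x[1])              # transport: retag the element

# (a) lambda_ii is the identity on lambda0(i):
for i in I:
    assert all(lambda1(i, i)(x) == x for x in lambda0(i))

# (b) if i =_I j and j =_I k, the triangle commutes:
#     lambda_jk o lambda_ij = lambda_ik
for i in I:
    for j in I:
        for k in I:
            if eqI(i, j) and eqI(j, k):
                for x in lambda0(i):
                    assert lambda1(j, k)(lambda1(i, j)(x)) == lambda1(i, k)(x)
print("family-of-sets conditions (a) and (b) hold in the toy model")
```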
# Microwave-based quantum control and coherence protection of tin-vacancy spin
qubits in a strain-tuned diamond membrane heterostructure
Xinghan Guo1,∗ Alexander M. Stramma2,∗ Zixi Li1 William G. Roth2 Benchen
Huang3 Yu Jin3 Ryan A. Parker2 Jesús Arjona Martínez2 Noah Shofer2 Cathryn P.
Michaels2 Carola P. Purser2 Martin H. Appel2 Evgeny M. Alexeev2,4 Tianle Liu5
Andrea C. Ferrari4 David D. Awschalom1,5,6 Nazar Delegan1,6 Benjamin
Pingault6,7 Giulia Galli1,3,6 F. Joseph Heremans1,6 Mete Atatüre2,† Alexander
A. High1,6,† 1Pritzker School of Molecular Engineering, University of Chicago,
Chicago, IL 60637, USA 2Cavendish Laboratory, University of Cambridge,
Cambridge CB3 0HE, United Kingdom 3Department of Chemistry, University of
Chicago, Chicago, IL 60637, USA 4Cambridge Graphene Centre, University of
Cambridge,Cambridge CB3 0FA, United Kingdom 5Department of Physics,
University of Chicago, Chicago, IL 60637, USA 6Center for Molecular
Engineering and Materials Science Division, Argonne National Laboratory,
Lemont, IL 60439, USA 7QuTech, Delft University of Technology, 2600 GA Delft,
The Netherlands
###### Abstract
Robust spin-photon interfaces in solids are essential components in quantum
networking and sensing technologies. Ideally, these interfaces combine a long-
lived spin memory, coherent optical transitions, fast and high-fidelity spin
manipulation, and straightforward device integration and scaling. The tin-
vacancy center (SnV) in diamond is a promising spin-photon interface with
desirable optical and spin properties at $1.7\text{\,}\mathrm{K}$. However,
the SnV spin lacks efficient microwave control and its spin coherence degrades
at higher temperatures. In this work, we introduce a new platform that
overcomes these challenges – SnV centers in uniformly strained thin diamond
membranes. The controlled generation of crystal strain introduces orbital
mixing that allows microwave control of the spin state with
$99.36(9)\text{\,}\mathrm{\%}$ gate fidelity and spin coherence protection
beyond a millisecond. Moreover, the presence of crystal strain suppresses
temperature dependent dephasing processes, leading to a considerable
improvement of the coherence time up to
$223(10)\text{\,}\mathrm{\SIUnitSymbolMicro s}$ at $4\text{\,}\mathrm{K}$, a
widely accessible temperature in common cryogenic systems. Critically, the
coherence of optical transitions is unaffected by the elevated temperature,
exhibiting nearly lifetime-limited optical linewidths. Combined with the
compatibility of diamond membranes with device integration, the demonstrated
platform is an ideal spin-photon interface for future quantum technologies.
## Introduction
Color centers in diamond are a leading platform in quantum technologies, with key
achievements such as the demonstration of a quantum register [1, 2, 3],
distant entanglement generation between three nodes [4], and quantum teleportation
[5], along with myriad landmarks in quantum sensing [6, 7]. In recent years,
group IV centers have gained much attention due to their excellent optical
properties [8, 9, 10, 11, 12, 13, 14, 15]. Their $D_{3d}$ symmetry renders
optical transitions insensitive to first-order charge noise [16, 17, 18].
Additionally, a favorable Debye-Waller factor leads to the majority of photons
being emitted into the zero-phonon line, critical for spin-photon entanglement
[19]. However, the electronic structure of group IV centers – a spin 1/2
system with two ground state orbital branches – renders the electron spin
susceptible to phonon-driven transitions between the two branches [20]. This
temperature-dependent spin dephasing can be mitigated by operating at
millikelvin temperatures [21, 22] or by engineering the local phonon density
of states through nanostructuring [23, 24]. Alternatively, dephasing can be
mitigated by qubit engineering such as working with group IV centers with high
spin-orbit coupling and thus large orbital splitting [25], or by leveraging
spin-strain interaction in randomly or controllably strained group IV
centers [3, 24]. With a spin-orbit coupling significantly higher than those of
the silicon vacancy (SiV) and the germanium vacancy (GeV) centers, the SnV
center has the highest reported spin coherence time at
$1.7\text{\,}\mathrm{K}$ [26]. However, efficient microwave (MW) control of
group IV spins requires the magnitude of spin-strain interaction to be
comparable with the spin-orbit interaction, which for SnV necessitates strain
approaching $0.1\text{\,}\mathrm{\%}$. This degree of strain is challenging to
achieve in microelectromechanical systems (MEMS) such as diamond
cantilevers, with reported values on the order of $0.015\text{\,}\mathrm{\%}$
[23]. Therefore, a controlled process to generate $\approx
0.1\text{\,}\mathrm{\%}$ strain in diamond is desired to improve SnV qubit
performance by both increasing the operational temperature and enabling
efficient MW driving.
Figure 1: Strained SnV in diamond membrane heterostructures. (a) Schematics of
the diamond-fused silica heterostructure. The static, tensile strain inside
the membrane is generated from the disparity of thermal expansion ratios of
diamond and fused silica. (b) The microscope image of the diamond membrane
(dashed cyan region) bonded to the fused silica substrate. A trench (dashed
green region) was fabricated prior to bonding. The gold coplanar waveguide is
fabricated post bonding to introduce microwave signals. The location of the
SnV center used in this study is highlighted by a red star. (c) Energy level
of strained SnVs. Unstrained centers, strained centers and strained centers in
the presence of a magnetic field are colored in purple, blue and green,
respectively. (d) The PL spectrum of a strained SnV center (orange), showing a
red-shifted zero-phonon line (ZPL) wavelength with a much larger ground-state
splitting compared with the values in bulk diamond (purple). (e) The
statistics of the SnV ground-state splitting. Two different devices with
identical layout were measured. Device 1 (orange) was used for all-optical
spin control (discussed in the SI) and device 2 (purple) was used for
microwave spin control.
In this work, we utilize heterogeneous integration of diamond membranes to
generate strain-tuned SnVs. By bonding SnV-incorporated pristine diamond
membranes to a glass substrate, we leverage the heterogeneous thermal
expansion coefficients of the two materials to generate a uniform, in-plane
strain in the diamond to the order of $0.1\text{\,}\mathrm{\%}$. This strain
greatly increases the energy splitting between the two orbital levels of the
SnV and induces orbital mixing in the spin ground state. We demonstrate MW
manipulation of the spin with $99.36(9)\text{\,}\mathrm{\%}$ Rabi fidelity at
$4.50(2)\text{\,}\mathrm{MHz}$ for 24 dBm MW input power. At
$1.7\text{\,}\mathrm{K}$, the implementation of dynamical decoupling allows
the SnV to reach millisecond coherence time, which is largely preserved even
at $4\text{\,}\mathrm{K}$, owing to the strain-induced increased ground state
orbital splitting. In combination with near lifetime-limited optical
linewidths up to $7\text{\,}\mathrm{K}$, our spin-photon interface is
compatible with broadly utilized low-infrastructure and cost-effective
portable cryogenic systems. Additionally, the demonstrated strained-membrane
heterostructure maintains robustness and flexibility for additional photonic,
electronic, and micro-electromechanical systems (MEMS) integration. Our SnV-
based diamond membrane platform greatly reduces the technological barrier for
establishing quantum nodes for networking.
### SnVs in strained diamond
This work relies on strain engineering to improve SnV qubit performance.
First, we demonstrate that heterogeneous thermal expansion disparities between
diamond and glass in a diamond-membrane heterostructure are sufficient to
generate uniform strain of the magnitude necessary to beneficially impact the SnV.
The diamond membranes used in this work were generated via the “smart-cut”
method combined with isotopically purified (${}^{12}C$) overgrowth. The
membrane thickness is nominally 150 nm, with pristine crystal quality and
atomically smooth surfaces [27]. To introduce a positive tensile strain inside
the diamond membranes, we bond them onto
$500\text{\,}\mathrm{\SIUnitSymbolMicro m}$-thick fused silica substrates—a
material with a low thermal expansion coefficient ($<1\times
10^{-6}\text{\,}\mathrm{K}^{-1}$) – using a layer of hydrogen
silsesquioxane (HSQ). The schematic of this strain generation method is shown
in Figure 1 (a). The device is then annealed at
$600\text{\,}\mathrm{\SIUnitSymbolCelsius}$, beyond the temperature at which
the HSQ solidifies to glass, bonding the heterostructure in a “zero-strain”
condition [28]. Due to the mismatch in thermal contraction between diamond and
fused silica and the negligible thickness of the diamond membrane compared to
that of the fused silica substrate, cooling the device down to the cryogenic
temperature regime generates a positive (tensile), static strain profile in
the diamond membrane with an estimated magnitude of
$0.05\text{\,}\mathrm{\%}$ to $0.1\text{\,}\mathrm{\%}$ (see sections 1.3 and 1.4 in the
SI for details). This passive, uniform, and membrane-compatible strain
generation is complementary to recent demonstrations of electromechanically-
induced strain on suspended diamond beams [24, 29].
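As a rough plausibility check (our sketch, not the paper's analysis), the thermal-mismatch strain can be estimated from the difference in integrated contraction between diamond and fused silica over the anneal-to-cryogenic temperature range. The temperature-averaged expansion coefficients below are assumed illustrative values:

```python
# Order-of-magnitude estimate of the thermal-mismatch strain.  The membrane
# is bonded strain-free at the HSQ anneal (~600 C = 873 K) and measured near
# 4 K.  The temperature-averaged linear expansion coefficients are rough
# assumed values; the paper quotes only the resulting 0.05%-0.1% strain.

T_bond, T_meas = 873.0, 4.0          # K
alpha_diamond = 1.5e-6               # 1/K, average over 4-873 K (assumption)
alpha_silica = 0.5e-6                # 1/K, average over 4-873 K (assumption)

# The thick substrate imposes its own contraction on the thin membrane, so
# the membrane ends up tensile by the difference of integrated contractions:
strain = (alpha_diamond - alpha_silica) * (T_bond - T_meas)
print(f"estimated tensile strain ~ {strain:.2%}")   # ~0.09%
```

With these assumptions the estimate lands near $0.09\text{\,}\mathrm{\%}$, within the $0.05\text{\,}\mathrm{\%}$ to $0.1\text{\,}\mathrm{\%}$ range quoted above.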
Figure 1 (b) is a microscope image showing the layout of our diamond-
membrane heterostructure device. Prior to the membrane bonding, we patterned
and etched a $5\text{\,}\mathrm{\SIUnitSymbolMicro m}$ deep trench on the
fused silica to suspend part of the membrane and mitigate background
fluorescence from the HSQ resist. To study MW control of the SnV centers, we
patterned and deposited gold coplanar waveguides following membrane bonding.
The strain monotonically increases the orbital splitting of the SnV centers in
the membranes, which can be directly verified in the photoluminescence (PL)
spectra at $1.7\text{\,}\mathrm{K}$. The energy level diagram of the strained
SnV is shown in Figure 1 (c), highlighting the ground state orbital splitting
($\Delta_{gs}$) and the respective contributions of spin-orbit coupling,
strain, and magnetic Zeeman interaction in purple, blue, and green boxes.
Figure 1 (d) compares the spectra of a strained (unstrained) SnV center in a
diamond membrane (bulk diamond) with $\Delta_{gs}\approx
1300\,(850)\text{\,}\mathrm{GHz}$. This particular strained center is used in
further optical, microwave and spin characterizations in this work.
Remarkably, we note that all color centers in the membrane are comparably
strained. As shown in Figure 1 (e), we observed a distribution of the orbital-
branch splitting centered around $1500\text{\,}\mathrm{GHz}$ across
different devices, with a minimum (maximum) value of
$1200(1800)\text{\,}\mathrm{GHz}$. We carried out density functional theory
(DFT) calculations to compute strain susceptibilities and characterize the SnV
spin-strain interaction (see SI); our results show that the strain-induced increase of the
splitting between orbital branches from $850\text{\,}\mathrm{GHz}$ to $\approx
1500\text{\,}\mathrm{GHz}$ corresponds to a diamond membrane
strain magnitude of $0.075\text{\,}\mathrm{\%}$ (see section 1.2 in the SI for
details). The consistent strain generation, in combination with our ability to
perform additional integration and nanofabrication following membrane bonding
[30, 31], highlights the robustness and versatility of our platform.
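A minimal sketch of how these numbers fit together, assuming the standard two-level orbital model for group IV centers in which the ground-state splitting is the quadrature sum of the spin-orbit splitting and the total strain-induced orbital coupling; both the model and the derived effective susceptibility are illustrative, not the DFT treatment of the SI:

```python
# Two-level orbital model (assumption): Delta_gs = sqrt(lambda_SO**2 + d**2),
# where lambda_SO is the spin-orbit splitting and d the total strain-induced
# orbital coupling.  Numbers below are the values quoted in the text.

import math

lambda_so = 850.0        # GHz, unstrained SnV spin-orbit splitting
delta_gs = 1500.0        # GHz, typical measured splitting in the membranes

d = math.sqrt(delta_gs**2 - lambda_so**2)
print(f"strain-induced orbital coupling d ~ {d:.0f} GHz")   # ~1236 GHz

# With the DFT-calibrated strain of 0.075% quoted above, the implied
# effective susceptibility under this toy model would be d / strain:
strain = 0.075e-2
print(f"effective susceptibility ~ {d / strain / 1e6:.2f} PHz per unit strain")
```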
### Optical properties of SnV under strain
To investigate the potential of strained SnV as a spin-photon interface, we
first verify that the symmetry of the defect is preserved even under
considerable strain by characterizing the optical transitions as a function of
the magnetic ($B$) field orientation. Using the $\langle 111\rangle$
crystallographic axis – the high symmetry axis of the SnV as the reference, we
rotate the $B$ field in both polar ($\theta$) and azimuthal ($\phi$) angles at
the same magnitude ($0.2\text{\,}\mathrm{T}$). The absolute energy splitting
between the two spin-conserving transitions (A1-B2) with respect to $\theta$
and $\phi$ is shown in Figure 2 (a), indicating that large splittings at
moderate values of magnetic field are achievable which is ideal for later SnV
spin initialization and control. Similarly to the unstrained case, we observe
a $\phi$ rotational symmetry of the splitting with respect to $\langle
111\rangle$, which corresponds to the intrinsic spin quantization axis. We
further verify that the polarization of the SnV transitions (i.e. dipole
operator matrix elements) remain along the $\langle 111\rangle$ direction (see
section 3.1 of the SI), as in the unstrained case [18].
Figure 2: Optical properties of the strained SnV center under applied magnetic
fields at $1.7\text{\,}\mathrm{K}$. (a) The energy splitting rate between the
A1-B2 spin conserving transitions with respect to the polar angle $\theta$ of
the applied magnetic field at different azimuthal angle $\phi$. The aligned
field is highlighted with a black arrow. (b) PLE scan, averaged over
$20\text{\,}\mathrm{s}$, of the {A1, B2} transitions at an aligned $B$-field
with a magnitude of $81.5\text{\,}\mathrm{mT}$. The average linewidth for both
transitions is below $48\text{\,}\mathrm{MHz}$, which is less than 1.5 times
of the lifetime limited value ($32.26(19)\text{\,}\mathrm{MHz}$). (c) The
initialization curve of the A1 transition, showing a time constant of
$24.2(3)\text{\,}\mathrm{\SIUnitSymbolMicro s}$ and an initialization fidelity
of $98.82\text{\,}\mathrm{\%}$.
From the $B$-field scan of the strained SnV, we note that besides the normal
A1-B2 splitting maximum along the quantization axis, an additional local
maximum at $\theta=90\text{\,}\mathrm{\SIUnitSymbolDegree}$ – the equatorial
plane perpendicular to the quantization axis – is observed, with the relative
A1-B2 position being inverted, as verified by coherent population trapping
measurements (see SI). This differs from the unstrained case. The novel
feature arises from the moderate crystal strain (comparable in magnitude to
the spin-orbit coupling) which increases the difference in effective Zeeman
shift between ground and excited states, mostly visible for a magnetic field
orthogonal to the spin-orbit-dictated quantization axis. As is the case for
moderately strained SiV centers [22], for MW-based control we roughly align
the $B$-field towards the quantization axis to achieve highly cycling optical
transitions with cyclicity reaching $\eta\approx 2500$ (see section 4.2 of
SI). We note that $\eta$ can be as low as 6 when the $B$ field is
perpendicular to the quantization axis, which is ideal for Raman-based all-
optical control of strained SnV (see section 4.3 of SI). Moreover, by
comparing the dependence on $\theta$ of the A1-B2 splitting with calculated
results, we are able to determine the Stevens reduction factor $g_{L}$ for
ground and excited states mentioned in [32]. This model is then used to
explain the optically detected magnetic resonance (ODMR) frequency of the
strained SnV discussed below.
Additionally, our measurements reveal near-transform limited optical
linewidths, thereby showing that the application of strain does not alter the
excellent coherence properties of the optical transitions, as previously
demonstrated with unstrained centers [25, 11]. As shown in Figure 2 (b), the
$20\text{\,}\mathrm{s}$ average scan returns a mean linewidth of
$47.4(16)\text{\,}\mathrm{MHz}$, only $40\text{\,}\mathrm{\%}$ more than the
lifetime-limited value of $32.26(19)\text{\,}\mathrm{MHz}$
($4.933(190)\text{\,}\mathrm{ns}$ optical lifetime, see section 3.2 of SI).
The long-term frequency stability of the {A1, B2} transitions returns a center
frequency standard deviation of $\sigma_{c}=23.8(1)\text{\,}\mathrm{MHz}$
and an A1-B2 splitting standard deviation of
$\sigma_{s}=13.28(6)\text{\,}\mathrm{MHz}$ (see section 3.4 of SI). This
linewidth and peak stability are comparable to those of other measurements of
group IV color centers in nanostructures [13, 3, 33] and thus confirm the
excellent potential of these defects for quantum photonic applications.
The resolvable splitting and narrow optical transitions are crucial for the
spin initialization and readout of the SnV qubit. The spin initialization
curve with subtracted background is shown in Figure 2 (c), indicating a fitted
exponential decay constant of $24.2(3)\text{\,}\mathrm{\SIUnitSymbolMicro s}$.
The initialization pulse duration was set to
$200\text{\,}\mathrm{\SIUnitSymbolMicro s}$, allowing us to reach a fidelity of
$98.8\text{\,}\mathrm{\%}$. We note that with a cyclicity of over
$2500$, this platform is a prime candidate for single-shot readout if
the signal counts can be improved via on-chip structures (nanophotonics, fiber
couplers or grating couplers, solid immersion lenses) [34, 35, 33, 36, 37, 38]
or external methods (microcavities) [39, 40, 41].
### Efficient MW control of the SnV spin
Figure 3: MW control of the strained SnV center at $1.7\text{\,}\mathrm{K}$.
(a) Pulsed ODMR spectrum with scanned MW frequency. The data (purple dots) is
fitted with two Lorentzian functions (dashed line) split by
$628(182)\text{\,}\mathrm{kHz}$ and with a linewidth of
$1047(208)\text{\,}\mathrm{kHz}$ and $891(197)\text{\,}\mathrm{kHz}$,
respectively. (b) Rabi oscillation of the SnV at zero detuning, indicating a
Rabi frequency of $4.50(2)\text{\,}\mathrm{MHz}$ with a fidelity of
$99.36(9)\text{\,}\mathrm{\%}$. (c) Rabi oscillation as a function of the MW
driving frequency. (d) Randomized benchmarking at $1.7\text{\,}\mathrm{K}$,
showing an average gate fidelity of $97.7(1)\text{\,}\mathrm{\%}$. The Rabi
frequency is set to $2.8\text{\,}\mathrm{MHz}$ to avoid excess heating
effects.
A critical component of a spin-photon interface is high-fidelity spin control,
commonly achieved through MW driving of the electron spin. In the case of
group IV centers, a MW field can only drive the spin transition in the
presence of strain [42, 23]. This arises due to the orthogonality of orbital
states associated with the electron spin qubit of group IV centers [18].
Strain that is comparable in strength to spin-orbit coupling relaxes this
orthogonality, enabling microwave control. The SnV, with larger spin-orbit
coupling ($850\text{\,}\mathrm{GHz}$) and smaller strain susceptibility than
the SiV and GeV, requires large crystal strain to meet this criterion. This strain
requirement goes beyond the magnitudes demonstrated via active
strain tuning [23] or implantation-induced strain [3].
To demonstrate efficient MW control, we utilize the nominal
$0.1\text{\,}\mathrm{\%}$ crystal strain in the diamond membrane. We estimate
an effective Landé factor $g$ of $1.62$ for the transverse microwave
field with the external magnetic field roughly aligned to the SnV quantization
axis (see section 2.1 in SI). This value is relatively high compared with the
spin-orbit-dominated regime of unstrained centers ($\leq 0.3$) and
is close to the free electron value ($g=2$). In addition, we tapered the MW
waveguide around the measurement area by shrinking its width to
$6\text{\,}\mathrm{\SIUnitSymbolMicro m}$ to enhance the microwave amplitude,
as shown in Figure 1 (b). The distance between the target SnV and the
waveguide is $\approx 4\text{\,}\mathrm{\SIUnitSymbolMicro m}$, ensuring an
efficient exposure to the MW driving field (see section 2.1 - 2.3 in SI for
details).
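For orientation, a simple magnetostatic estimate (our assumption, not the paper's analysis) bounds the achievable Rabi rate from the quoted power, wire distance, and effective $g$-factor:

```python
# Rough upper-bound estimate of the Rabi rate from the waveguide geometry,
# a sketch with simplifying assumptions: all 24 dBm (~250 mW) is delivered
# into a 50-ohm line, the field of a thin wire at r = 4 um fully projects
# onto the driving axis, and the effective g ~ 1.62 quoted above applies.

import math

P, Z = 0.25, 50.0                    # W, ohm
I = math.sqrt(P / Z)                 # ~71 mA current scale
r = 4e-6                             # m, SnV-waveguide distance
mu0 = 4e-7 * math.pi
B1 = mu0 * I / (2 * math.pi * r)     # ~3.5 mT AC field at the emitter

g_eff, muB, h = 1.62, 9.274e-24, 6.626e-34
f_rabi = g_eff * muB * B1 / (2 * h)  # two-level Rabi rate, gamma * B1 / 2
print(f"B1 ~ {B1 * 1e3:.1f} mT, f_Rabi upper bound ~ {f_rabi / 1e6:.0f} MHz")
```

The measured $4.50(2)\text{\,}\mathrm{MHz}$ sits well below this idealized bound of roughly $40\text{\,}\mathrm{MHz}$, as expected once cable and impedance losses, the finite waveguide geometry, and the partial projection of the field onto the driving axis are accounted for.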
We begin the MW control characterization by initializing the spin via optical
pumping and scanning the frequency of a MW field across the expected spin
resonance while monitoring the fluorescence intensity of the spin readout at
$1.7\text{\,}\mathrm{K}$. In Figure 3 (a) we observe a clear signature of
optically detected magnetic resonance (ODMR) for the target SnV center. The
$81.5\text{\,}\mathrm{mT}$ external magnetic field is aligned to the
quantization axis by polarisation measurements and a 3D field scan. The ODMR
shows a profile with two overlapping peaks separated by
$628(182)\text{\,}\mathrm{kHz}$, indicating an interaction between the
electronic spin of the SnV and another system in the vicinity, likely a
${}^{13}\mathrm{C}$ nuclear spin or the electron spin of a P1 center. Further investigation
is needed to understand the nature of this interaction. By driving both power-
broadened ODMR transitions, we are able to resonantly manipulate the spin
state of the SnV with a Rabi frequency of $4.50(2)\text{\,}\mathrm{MHz}$. The
Rabi oscillation curve and the chevrons (Rabi oscillations with varied driving
frequency) are shown in Figure 3 (b) and (c). We observe a long-time averaged
Rabi $\pi$-gate fidelity of $99.36(9)\text{\,}\mathrm{\%}$, improving
significantly on the previously demonstrated optical Raman-based spin control
fidelity [26]. We note that the MW power delivered to the device is approximately
24 dBm ($250\text{\,}\mathrm{mW}$) which is comparable to previous
demonstrations on strained SiV [3]. We also characterized the power dependence
of the Rabi rate. The dependence is initially linear but becomes
sub-linear when the power surpasses 24 dBm due to excessive heating (see
section 2.4 in SI); this could be mitigated by replacing gold with
superconducting metals (such as niobium or NbTiN) to deliver the MW signal.
We further characterize the single qubit gate fidelity of MW control via
randomized benchmarking. For this, we use the following set of Clifford gates:
{$I$, $\pi_{x}$, $\pi_{y}$, $\pi_{x}/2$, $-\pi_{x}/2$, $\pi_{y}/2$,
$-\pi_{y}/2$} (see section 5.1 in SI). To prevent excessive heating
during benchmarking, which would lead to undesired spin decoherence, we apply a
slightly slower Rabi rate ($2.8\text{\,}\mathrm{MHz}$, 18 dBm), which requires
no time buffer between gates. The benchmarking result is shown in Figure 3
(d). We extract an average Clifford gate fidelity of
$97.7(1)\text{\,}\mathrm{\%}$, indicating power efficient MW control with high
fidelity under stringent randomized benchmarking.
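For readers unfamiliar with the procedure, the sketch below shows the standard randomized-benchmarking analysis: fit the survival probability to $F(m) = A p^{m} + B$ and convert the depolarizing parameter $p$ to an average Clifford fidelity. The synthetic data are illustrative; only the reported $97.7(1)\text{\,}\mathrm{\%}$ anchors the numbers:

```python
# Standard RB analysis sketch: fit F(m) = A*p**m + B and convert p to an
# average Clifford fidelity via F_avg = 1 - (1 - p)*(d - 1)/d, d = 2 for a
# qubit.  The synthetic data below are illustrative, not measured.

import numpy as np
from scipy.optimize import curve_fit

def rb_decay(m, A, p, B):
    return A * p**m + B

rng = np.random.default_rng(0)
m = np.arange(1, 60, 4)                       # Clifford sequence lengths
p_true = 0.954                                # assumed depolarizing parameter
data = rb_decay(m, 0.5, p_true, 0.5) + rng.normal(0, 0.01, m.size)

(A, p, B), _ = curve_fit(rb_decay, m, data, p0=[0.5, 0.95, 0.5])
f_avg = 1 - (1 - p) * (2 - 1) / 2             # qubit: d = 2
print(f"p = {p:.3f}, average Clifford fidelity = {f_avg:.3%}")
# p ~ 0.954 corresponds to F_avg ~ 97.7%, the value reported at 1.7 K.
```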
### SnV spin coherence properties
We next utilize microwave control to characterize the SnV coherence at
$1.7\text{\,}\mathrm{K}$. We perform a Ramsey measurement as shown in Figure 4
(a). The Gaussian envelope of the Ramsey oscillations corresponds to a spin
dephasing time $T_{2}^{*}$ of $2.5(1)\text{\,}\mathrm{\SIUnitSymbolMicro s}$.
Similar to ODMR, we observe an interaction with a proximal spin in the Ramsey
measurement, and we verify via phase-dependent readout that this does not originate from detuning of
the MW signal (see section 5.2 in SI). Possible
decoherence sources could be nearby vacancies and defects in the diamond
membrane, as well as surface spins from both sides of the membrane [43].
Figure 4: Spin coherence of the strained SnV at $1.7\text{\,}\mathrm{K}$. (a)
$T_{2}^{*}$ Ramsey of the SnV center, showing a dephasing time of
$2.5(1)\text{\,}\mathrm{\SIUnitSymbolMicro s}$. The extra beating pattern of
$554(5)\text{\,}\mathrm{kHz}$ is attributed to an interaction with an
electron or nuclear spin in the vicinity. (b) Dynamical decoupling of the SnV
via CPMG pulses. The CPMG-1 (spin-echo) returns a $T_{2,echo}$ of
$100(1)\text{\,}\mathrm{\SIUnitSymbolMicro s}$, while the CPMG-128 reaches a
$T_{2,CPMG128}$ of $1.57(8)\text{\,}\mathrm{ms}$. (c) The scaling of $T_{2}$
with the number of CPMG and XY pulses, showing a sub-linear dependence.
Advanced pulse sequences, such as dynamical decoupling via CPMG (Carr-Purcell-
Meiboom-Gill) and XY pulse sequences [44, 45], allow us to extend the spin
coherence to millisecond timescales. The CPMG results are shown in Figure 4
(b). The $T_{2,echo}$ returns a value of
$100(1)\text{\,}\mathrm{\SIUnitSymbolMicro s}$, which is already longer than
$35.5(30)\text{\,}\mathrm{\SIUnitSymbolMicro s}$ measured using an all-optical
spin-echo process (see sections 4.3 and 4.4 in SI), owing to the absence of optically
induced dephasing mechanisms. The $T_{2,CPMG128}$, comprising 128 refocusing
microwave pulses, prolongs the SnV spin coherence to
$1.57(8)\text{\,}\mathrm{ms}$. We note that with no signal normalization being
applied, the CPMG figure indicates a high signal fidelity of $\approx
80\text{\,}\mathrm{\%}$ for up to 128 pulses. Future improvements to the MW
driving, including superconducting metals and faster Rabi pulses, can
extend this signal fidelity to higher numbers of pulses. We plot the
relationship between the $T_{2}$ and the number of CPMG or XY pulses $N$ in
Figure 4 (c) and fit it with $T_{2}\sim N^{\beta}$. The fitting curve returns
a sub-linear dependence with a $\beta$ factor of $0.593(8)$. We
observed minimal $T_{2}$ differences between CPMG and XY sequences. XY
sequences are more resilient to control pulse errors compared to CPMG [45],
verifying that the observed coherence is not limited by our control (see
section 5.4 in SI).
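The scaling fit itself is a one-line power law on log-log axes; the sketch below reproduces it with synthetic points anchored to the reported $T_{2,echo}$ and $\beta$ (assumptions for illustration only):

```python
# Sketch of the coherence-scaling fit T2(N) ~ N**beta used in Figure 4 (c).
# The synthetic points follow an exact power law through the reported
# anchor T2(1) ~ 100 us with beta ~ 0.593; they are illustrative only.

import numpy as np

N = np.array([1, 2, 4, 8, 16, 32, 64, 128])             # number of pi pulses
T2_us = 100.0 * N**0.593                                # assumed power law

beta, log_T0 = np.polyfit(np.log(N), np.log(T2_us), 1)  # fit in log-log space
print(f"beta = {beta:.3f}, T2(1) = {np.exp(log_T0):.0f} us")
print(f"extrapolated T2(128) = {np.exp(log_T0) * 128**beta / 1000:.2f} ms")
# The pure power law extrapolates to ~1.78 ms at N = 128; the measured
# 1.57(8) ms sits slightly below, consistent with a saturating trend.
```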
### Spin-photon interface at $4\text{\,}\mathrm{K}$
Finally, we demonstrate that our strained SnV platform shows state-of-the-art
spin coherence for group IV color centers at $4\text{\,}\mathrm{K}$. For group
IV centers, the dominant decoherence source of the electronic spin is the electron-
phonon interaction (phonon-mediated decay) between orbital branches [20, 42].
The electron-phonon interaction rate depends on the temperature-dependent
phonon population and the energy splitting $\Delta_{gs}$ between orbital
branches. Therefore, enhanced coherence of the group IV centers can be
achieved via cooling to millikelvin temperatures [22, 21],
increasing the energy splitting by using heavier group IV elements [25],
engineering the phonon density of states [46], or strain engineering [24].
Here we utilize both a heavy element (Sn as compared to Si and Ge) and crystal
strain in diamond to improve electron spin coherence at elevated temperatures.
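The benefit of the larger splitting can be made quantitative with a minimal occupation argument (a sketch; prefactors from the phonon density of states and coupling are deliberately omitted): the rate of phonon absorption between orbital branches scales with the Bose occupation at $\Delta_{gs}$.

```python
# Why a larger orbital splitting suppresses phonon dephasing: the
# single-phonon upward transition rate scales with the Bose occupation
# n(Delta_gs, T).  Only this scaling is assumed here.

import math

h, kB = 6.626e-34, 1.381e-23

def n_bose(delta_ghz, T):
    """Bose occupation at phonon energy h * delta, temperature T (K)."""
    return 1.0 / math.expm1(h * delta_ghz * 1e9 / (kB * T))

for T in (1.7, 4.0):
    ratio = n_bose(850, T) / n_bose(1300, T)
    print(f"T = {T} K: n(850 GHz) / n(1300 GHz) ~ {ratio:.2g}")
# At 4 K the occupation at the strained splitting of 1300 GHz is over two
# orders of magnitude smaller than at the bulk 850 GHz, consistent with the
# improved coherence reported below.
```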
The Rabi oscillation of the SnV at $4\text{\,}\mathrm{K}$ is shown in Figure 5
(a). The fidelity is characterized to be $97.7(5)\text{\,}\mathrm{\%}$, only
slightly lower than the value at $1.7\text{\,}\mathrm{K}$ due to background
heating limitations. We characterize the average gate fidelity via randomized
benchmarking at $4\text{\,}\mathrm{K}$ using the same
$2.8\text{\,}\mathrm{MHz}$ Rabi rate, returning a gate fidelity of
$95.7(3)\text{\,}\mathrm{\%}$ and confirming that high-performance spin
manipulation of the strained SnV is maintained at $4\text{\,}\mathrm{K}$.
Figure 5: Performance of the strained SnV center at $4\text{\,}\mathrm{K}$.
(a) Rabi oscillation of the SnV center, showing a gate fidelity of
$97.7(5)\text{\,}\mathrm{\%}$. (b) Randomized benchmarking at
$4\text{\,}\mathrm{K}$, showing an average gate fidelity of
$95.7(3)\text{\,}\mathrm{\%}$. (c) Temperature dependence of the spin decay
time $T_{1}^{\text{spin}}$, dephasing times $T_{2}^{*}$, $T_{2,\text{echo}}$,
and $T_{2,\text{2XY8}}$. (d) ZPL linewidths of the two spin conserving
transitions (A1, B2) with respect to the temperature, showing negligible
broadening with the maximum linewidth below $52.0(8)\text{\,}\mathrm{MHz}$.
The transform-limited linewidth is shown with a dashed line.
Equipped with high fidelity Rabi control, we investigate the spin coherence of
the SnV centers at elevated temperatures. Due to the much larger splitting
$\Delta_{gs}$ of the strained SnV ($\approx 1300\text{\,}\mathrm{GHz}$)
compared with bulk SnV ($\approx 850\text{\,}\mathrm{GHz}$), electron-phonon
dephasing sets in at higher temperatures. Figure 5 (c) shows the
$T_{1}^{\text{spin}}$, $T_{2}^{*}$, $T_{2,echo}$ and $T_{2,2XY8}$ versus
temperature. Fitting the same $\beta$ factor in $T_{2}\sim N^{\beta}$ using
Hahn-echo and XY4 coherence times returns a value of $0.391(8)$ at
$4\text{\,}\mathrm{K}$ and $0.014()$ at $4.5\text{\,}\mathrm{K}$,
indicating that the dominant decoherence mechanism becomes phonon-induced
orbital transitions.
From Figure 5 (c) we notice much shorter dephasing times compared with the
decay time $T_{1}^{\text{spin}}$ [47]. This feature originates from the fact
that only spin-flipping transitions between the lower and upper orbital branch
drive $T_{1}^{\text{spin}}$, whereas $T_{2}$ is sensitive to dephasing by the
spin-conserving transitions due to different precession frequencies in the
orbital branches [23]. In our case, the phonon transitions are highly cycling
due to the aligned magnetic field. Nevertheless, $T_{2}^{*}$ at
$4\text{\,}\mathrm{K}$ remains at $2.7(1)\text{\,}\mathrm{\SIUnitSymbolMicro
s}$ – comparable to the $1.7\text{\,}\mathrm{K}$ value, and $T_{2,echo}$ only
decreases slightly to $74(2)\text{\,}\mathrm{\SIUnitSymbolMicro s}$, with
$T_{2,2XY8}$ reaching the depolarization-limited $T_{2}$ –
$223(10)\text{\,}\mathrm{\SIUnitSymbolMicro s}$. It is worth emphasizing that
all of these are record-high values for group IV spin qubits at
$4\text{\,}\mathrm{K}$ to date.
To demonstrate the potential of the strained SnV center as a promising spin-
photon interface at elevated temperature, we investigate the temperature
dependence of the SnV optical coherence. As shown in Figure 5 (d), we observe
that the ZPL linewidth remains unchanged for both A1 and B2 transitions up to
$7\text{\,}\mathrm{K}$, with the maximum linewidth remaining below
$52.0(8)\text{\,}\mathrm{MHz}$, only $60\text{\,}\mathrm{\%}$ higher than the
lifetime-limited values. In the future, modest Purcell enhancement of SnV
emission rates with on-chip nanophotonics or microcavities can generate fully
lifetime-limited photons suitable for efficient entanglement generation.
## Conclusions
In this work, we demonstrate that the SnV center in strained diamond membranes is a
promising platform for quantum technologies. We create simple heterostructures
that leverage differences in thermal expansion to passively generate
significant strain of $0.05\text{\,}\mathrm{\%}$ to $0.1\text{\,}\mathrm{\%}$
in diamond, enabling efficient, high fidelity microwave control of the SnV
spin. The presence of the strain also suppresses the phonon-mediated decay and
improves the spin coherence of the SnV at $4\text{\,}\mathrm{K}$, which
greatly reduces the technological barrier for quantum networking applications.
We reach a Rabi $\pi$ gate fidelity of $99.36(9)\text{\,}\mathrm{\%}$
($97.7(5)\text{\,}\mathrm{\%}$) with a randomized single qubit gate fidelity
of $97.7(1)\text{\,}\mathrm{\%}$ ($95.7(3)\text{\,}\mathrm{\%}$) at
$1.7\text{\,}\mathrm{K}$ ($4\text{\,}\mathrm{K}$). Dynamical decoupling
sequences allow the SnV spin coherence to reach $1.57(8)\text{\,}\mathrm{ms}$
at $1.7\text{\,}\mathrm{K}$ and $223(10)\text{\,}\mathrm{\SIUnitSymbolMicro
s}$ at $4\text{\,}\mathrm{K}$. In the future this value can be further
enhanced by generating higher strain through heterostructure optimization
and/or additional active tuning. Our platform, derived from scalable diamond
membrane generation, is compatible with further on-chip integration, such as
microwave coplanar waveguides, integrated photonics [31], and MEMS. Finally,
$4\text{\,}\mathrm{K}$ cryostats are relatively affordable and less
infrastructure-intensive in comparison to cryogen-free
$1.7\text{\,}\mathrm{K}$ systems and $\mathrm{mK}$ dilution refrigerators.
Therefore, the demonstrated spin-photon interface at $4\text{\,}\mathrm{K}$
can reduce barriers to widespread utilization and deployment of solid-state
quantum technologies.
## Acknowledgements
This work on strain engineering of Group IV color centers is supported by the
Air Force Office of Scientific Research under award number FA9550-22-1-0518.
This work acknowledges funding through Q-NEXT, supported by the U.S.
Department of Energy, Office of Science, National Quantum Information Science
Research Centers. The experiment receives support from the ERC Advanced Grant
PEDESTAL (884745) and the EU Quantum Flagship 2D-SIPC. Membrane integration
research is supported by NSF award AM-2240399. Diamond growth related efforts
were supported by the U.S. Department of Energy, Office of Basic Energy
Sciences, Materials Science and Engineering Division (N.D.). The membrane
bonding work is supported by NSF award AM-2240399. This work made use of the
Pritzker Nanofabrication Facility (Soft and Hybrid Nanotechnology Experimental
Resource, NSF ECCS-2025633) and the Materials Research Science and Engineering
Center (NSF DMR-2011854) at the University of Chicago. A.M.S. acknowledges
support from EPSRC/NQIT, R.A.P. from the General Sir John Monash Foundation
and G-research, J.A.M. from the Winton Programme and EPSRC DTP, C.P.M. from
the EPSRC DTP. B. P. acknowledges funding from the European Union’s Horizon
2020 research and innovation programme under the Marie Skłodowska-Curie Grant
Agreement No. 840968. The authors thank Srujan Meesala, Dorian Gangloff for
insightful discussions, Haoxiong Yan and Ming-Han Chou for experimental help.
∗ These authors contributed equally to this work.
† Correspondence should be addressed to <EMAIL_ADDRESS> <EMAIL_ADDRESS>
## Competing interest
A. A. H., X. G., Z. L., T. L., N. D., and F. J. H. filed a provisional patent
for the strain generation of bonded membranes.
## References
* Sar _et al._ [2012] T. V. D. Sar, Z. H. Wang, M. S. Blok, H. Bernien, T. H. Taminiau, D. M. Toyli, D. A. Lidar, D. D. Awschalom, R. Hanson, and V. V. Dobrovitski, Decoherence-protected quantum gates for a hybrid solid-state spin register, Nature 484, 82 (2012).
* Taminiau _et al._ [2014] T. H. Taminiau, J. Cramer, T. van der Sar, V. V. Dobrovitski, and R. Hanson, Universal control and error correction in multi-qubit spin registers in diamond, Nature nanotechnology 9, 171 (2014).
* Stas _et al._ [2022] P.-J. Stas, Y. Q. Huan, B. Machielse, E. N. Knall, A. Suleymanzade, B. Pingault, M. Sutula, S. W. Ding, C. M. Knaut, D. R. Assumpcao, Y.-C. Wei, M. K. Bhaskar, R. Riedinger, D. D. Sukachev, H. Park, M. Lončar, D. S. Levonian, and M. D. Lukin, Robust multi-qubit quantum network node with integrated error detection, Science 378, 557 (2022), https://www.science.org/doi/pdf/10.1126/science.add9771 .
* Pompili _et al._ [2021] M. Pompili, S. L. N. Hermans, S. Baier, H. K. C. Beukers, P. C. Humphreys, R. N. Schouten, R. F. L. Vermeulen, M. J. Tiggelman, L. dos Santos Martins, B. Dirkse, S. Wehner, and R. Hanson, Realization of a multinode quantum network of remote solid-state qubits, Science 372, 259 (2021), https://www.science.org/doi/pdf/10.1126/science.abg1919 .
* Hermans _et al._ [2022] S. L. Hermans, M. Pompili, H. K. Beukers, S. Baier, J. Borregaard, and R. Hanson, Qubit teleportation between non-neighbouring nodes in a quantum network, Nature 605, 663 (2022).
* Kucsko _et al._ [2013] G. Kucsko, P. C. Maurer, N. Y. Yao, M. Kubo, H. J. Noh, P. K. Lo, H. Park, and M. D. Lukin, Nanometre-scale thermometry in a living cell, Nature 500, 54 (2013).
* Shi _et al._ [2018] F. Shi, F. Kong, P. Zhao, X. Zhang, M. Chen, S. Chen, Q. Zhang, M. Wang, X. Ye, Z. Wang, Z. Qin, X. Rong, J. Su, P. Wang, P. Z. Qin, and J. Du, Single-DNA electron spin resonance spectroscopy in aqueous solutions, Nature Methods 15, 697 (2018), arXiv:2002.08425 .
* Knall _et al._ [2022] E. N. Knall, C. M. Knaut, R. Bekenstein, D. R. Assumpcao, P. L. Stroganov, W. Gong, Y. Q. Huan, P.-J. Stas, B. Machielse, M. Chalupnik, _et al._ , Efficient source of shaped single photons based on an integrated diamond nanophotonic system, Physical Review Letters 129, 053603 (2022).
* Bhaskar _et al._ [2017] M. K. Bhaskar, D. D. Sukachev, A. Sipahigil, R. E. Evans, M. J. Burek, C. T. Nguyen, L. J. Rogers, P. Siyushev, M. H. Metsch, H. Park, _et al._ , Quantum nonlinear optics with a germanium-vacancy color center in a nanoscale diamond waveguide, Physical review letters 118, 223603 (2017).
* Martínez _et al._ [2022] J. A. Martínez, R. A. Parker, K. C. Chen, C. M. Purser, L. Li, C. P. Michaels, A. M. Stramma, R. Debroux, I. B. Harris, M. H. Appel, _et al._ , Photonic indistinguishability of the tin-vacancy center in nanostructured diamond, Physical Review Letters 129, 173603 (2022).
* Narita _et al._ [2023] Y. Narita, P. Wang, K. Ikeda, K. Oba, Y. Miyamoto, T. Taniguchi, S. Onoda, M. Hatano, and T. Iwasaki, Multiple tin-vacancy centers in diamond with nearly identical photon frequency and linewidth, Physical Review Applied 19, 024061 (2023).
* Rugar _et al._ [2020] A. E. Rugar, C. Dory, S. Aghaeimeibodi, H. Lu, S. Sun, S. D. Mishra, Z.-X. Shen, N. A. Melosh, and J. Vuckovic, Narrow-linewidth tin-vacancy centers in a diamond waveguide, ACS Photonics 7, 2356 (2020).
* Wan _et al._ [2020] N. H. Wan, T.-J. Lu, K. C. Chen, M. P. Walsh, M. E. Trusheim, L. De Santis, E. A. Bersin, I. B. Harris, S. L. Mouradian, I. R. Christen, E. S. Bielejec, and D. Englund, Large-scale integration of artificial atoms in hybrid photonic circuits, Nature 583, 226 (2020).
* Görlitz _et al._ [2020] J. Görlitz, D. Herrmann, G. Thiering, P. Fuchs, M. Gandil, T. Iwasaki, T. Taniguchi, M. Kieschnick, J. Meijer, M. Hatano, _et al._ , Spectroscopic investigations of negatively charged tin-vacancy centres in diamond, New Journal of Physics 22, 013048 (2020).
* Iwasaki _et al._ [2017] T. Iwasaki, Y. Miyamoto, T. Taniguchi, P. Siyushev, M. H. Metsch, F. Jelezko, and M. Hatano, Tin-vacancy quantum emitters in diamond, Physical review letters 119, 253601 (2017).
* De Santis _et al._ [2021] L. De Santis, M. E. Trusheim, K. C. Chen, and D. R. Englund, Investigation of the stark effect on a centrosymmetric quantum emitter in diamond, Physical Review Letters 127, 147402 (2021).
* Aghaeimeibodi _et al._ [2021] S. Aghaeimeibodi, D. Riedel, A. E. Rugar, C. Dory, and J. Vučković, Electrical tuning of tin-vacancy centers in diamond, Physical Review Applied 15, 064010 (2021).
* Hepp _et al._ [2014] C. Hepp, T. Müller, V. Waselowski, J. N. Becker, B. Pingault, H. Sternschulte, D. Steinmüller-Nethl, A. Gali, J. R. Maze, M. Atatüre, and C. Becher, Electronic structure of the silicon vacancy color center in diamond, Physical Review Letters 112, 10.1103/PhysRevLett.112.036405 (2014).
* Sipahigil _et al._ [2016] A. Sipahigil, R. E. Evans, D. D. Sukachev, M. J. Burek, J. Borregaard, M. K. Bhaskar, C. T. Nguyen, J. L. Pacheco, H. A. Atikian, C. Meuwly, R. M. Camacho, F. Jelezko, E. Bielejec, H. Park, M. Lončar, and M. D. Lukin, An integrated diamond nanophotonics platform for quantum-optical networks, Science 354, 847 (2016), https://www.science.org/doi/pdf/10.1126/science.aah6875 .
* Jahnke _et al._ [2015] K. D. Jahnke, A. Sipahigil, J. M. Binder, M. W. Doherty, M. Metsch, L. J. Rogers, N. B. Manson, M. D. Lukin, and F. Jelezko, Electron-phonon processes of the silicon-vacancy centre in diamond, New Journal of Physics 17, 10.1088/1367-2630/17/4/043011 (2015).
* Becker _et al._ [2018] J. N. Becker, B. Pingault, D. Groß, M. Gündoğan, N. Kukharchyk, M. Markham, A. Edmonds, M. Atatüre, P. Bushev, and C. Becher, All-optical control of the silicon-vacancy spin in diamond at millikelvin temperatures, Physical review letters 120, 053603 (2018).
* Sukachev _et al._ [2017] D. D. Sukachev, A. Sipahigil, C. T. Nguyen, M. K. Bhaskar, R. E. Evans, F. Jelezko, and M. D. Lukin, Silicon-vacancy spin qubit in diamond: A quantum memory exceeding 10 ms with single-shot state readout, Physical Review Letters 119, 10.1103/PhysRevLett.119.223602 (2017).
* Meesala _et al._ [2018] S. Meesala, Y. I. Sohn, B. Pingault, L. Shao, H. A. Atikian, J. Holzgrafe, M. Gündoǧan, C. Stavrakas, A. Sipahigil, C. Chia, R. Evans, M. J. Burek, M. Zhang, L. Wu, J. L. Pacheco, J. Abraham, E. Bielejec, M. D. Lukin, M. Atatüre, and M. Lončar, Strain engineering of the silicon-vacancy center in diamond, Physical Review B 97, 1 (2018).
* Sohn _et al._ [2018] Y. I. Sohn, S. Meesala, B. Pingault, H. A. Atikian, J. Holzgrafe, M. Gündoǧan, C. Stavrakas, M. J. Stanley, A. Sipahigil, J. Choi, M. Zhang, J. L. Pacheco, J. Abraham, E. Bielejec, M. D. Lukin, M. Atatüre, and M. Lončar, Controlling the coherence of a diamond spin qubit through its strain environment, Nature Communications 9, 17 (2018).
* Trusheim _et al._ [2020] M. E. Trusheim, B. Pingault, N. H. Wan, M. Gündoǧan, L. D. Santis, R. Debroux, D. Gangloff, C. Purser, K. C. Chen, M. Walsh, J. J. Rose, J. N. Becker, B. Lienhard, E. Bersin, I. Paradeisanos, G. Wang, D. Lyzwa, A. R. Montblanch, G. Malladi, H. Bakhru, A. C. Ferrari, I. A. Walmsley, M. Atatüre, and D. Englund, Transform-limited photons from a coherent tin-vacancy spin in diamond, Physical Review Letters 124, 1 (2020).
* Debroux _et al._ [2021] R. Debroux, C. P. Michaels, C. M. Purser, N. Wan, M. E. Trusheim, J. A. Martínez, R. A. Parker, A. M. Stramma, K. C. Chen, L. D. Santis, E. M. Alexeev, A. C. Ferrari, D. Englund, D. A. Gangloff, and M. Atatüre, Quantum control of the tin-vacancy spin qubit in diamond, Physical Review X 11, 10.1103/PhysRevX.11.041041 (2021).
* Guo _et al._ [2021] X. Guo, N. Delegan, J. C. Karsch, Z. Li, T. Liu, R. Shreiner, A. Butcher, D. D. Awschalom, F. J. Heremans, and A. A. High, Tunable and transferable diamond membranes for integrated quantum technologies, Nano Letters 21, 10392 (2021).
* Siew _et al._ [2000] Y. K. Siew, G. Sarkar, X. Hu, J. Hui, A. See, and C. T. Chua, Thermal curing of hydrogen silsesquioxane, Journal of The Electrochemical Society 147, 335 (2000).
* Dang _et al._ [2021] C. Dang, J.-P. Chou, B. Dai, C.-T. Chou, Y. Yang, R. Fan, W. Lin, F. Meng, A. Hu, J. Zhu, J. Han, A. M. Minor, J. Li, and Y. Lu, Achieving large uniform tensile elasticity in microfabricated diamond, Science 371, 76 (2021), https://www.science.org/doi/pdf/10.1126/science.abc4174 .
* Butcher _et al._ [2020] A. Butcher, X. Guo, R. Shreiner, N. Delegan, K. Hao, P. J. Duda, D. D. Awschalom, F. J. Heremans, and A. A. High, High-Q Nanophotonic Resonators on Diamond Membranes using Templated Atomic Layer Deposition of TiO2, Nano Letters 20, 4603 (2020).
* Guo _et al._ [2023] X. Guo, M. Xie, A. Addhya, A. Linder, U. Zvi, T. D. Deshmukh, Y. Liu, I. N. Hammock, Z. Li, C. T. DeVault, A. Butcher, A. P. Esser-Kahn, D. D. Awschalom, N. Delegan, P. C. Maurer, F. J. Heremans, and A. A. High, Direct-bonded diamond membranes for heterogeneous quantum and electronic technologies, arXiv preprint: 2306.04408 (2023), arXiv:2306.04408 [physics.app-ph] .
* Thiering and Gali [2018] G. Thiering and A. Gali, Ab initio magneto-optical spectrum of group-iv vacancy color centers in diamond, Physical Review X 8, 021063 (2018).
* Rugar _et al._ [2021] A. E. Rugar, S. Aghaeimeibodi, D. Riedel, C. Dory, H. Lu, P. J. McQuade, Z.-X. Shen, N. A. Melosh, and J. Vučković, Quantum photonic interface for tin-vacancy centers in diamond, Physical Review X 11, 031021 (2021).
* Parker _et al._ [2023] R. A. Parker, J. A. Martínez, K. C. Chen, A. M. Stramma, I. B. Harris, C. P. Michaels, M. E. Trusheim, M. H. Appel, C. M. Purser, W. G. Roth, D. Englund, and M. Atatüre, A diamond nanophotonic interface with an optically accessible deterministic electronuclear spin register, arXiv preprint: 2305.18923 (2023), arXiv:2305.18923 [quant-ph] .
* Bhaskar _et al._ [2020] M. K. Bhaskar, R. Riedinger, B. Machielse, D. S. Levonian, C. T. Nguyen, E. N. Knall, H. Park, D. Englund, M. Lončar, D. D. Sukachev, and M. D. Lukin, Experimental demonstration of memory-enhanced quantum communication, Nature 580, 60 (2020).
* Fuchs _et al._ [2021] P. Fuchs, T. Jung, M. Kieschnick, J. Meijer, and C. Becher, A cavity-based optical antenna for color centers in diamond, APL Photonics 6 (2021).
* Güney Torun _et al._ [2021] C. Güney Torun, P.-I. Schneider, M. Hammerschmidt, S. Burger, J. H. Munns, and T. Schröder, Optimized diamond inverted nanocones for enhanced color center to fiber coupling, arXiv preprint (2021).
* Kuruma _et al._ [2021] K. Kuruma, B. Pingault, C. Chia, D. Renaud, P. Hoffmann, S. Iwamoto, C. Ronning, and M. Lončar, Coupling of a single tin-vacancy center to a photonic crystal cavity in diamond, Applied Physics Letters 118 (2021).
* Tomm _et al._ [2021] N. Tomm, A. Javadi, N. O. Antoniadis, D. Najer, M. C. Löbl, A. R. Korsch, R. Schott, S. R. Valentin, A. D. Wieck, A. Ludwig, _et al._ , A bright and fast source of coherent single photons, Nature Nanotechnology 16, 399 (2021).
* Riedel _et al._ [2017] D. Riedel, I. Söllner, B. J. Shields, S. Starosielec, P. Appel, E. Neu, P. Maletinsky, and R. J. Warburton, Deterministic enhancement of coherent photon generation from a nitrogen-vacancy center in ultrapure diamond, Physical Review X 7, 031040 (2017).
* Ruf _et al._ [2021] M. Ruf, M. J. Weaver, S. B. van Dam, and R. Hanson, Resonant excitation and purcell enhancement of coherent nitrogen-vacancy centers coupled to a fabry-perot microcavity, Physical Review Applied 15, 024049 (2021).
* Pingault _et al._ [2017] B. Pingault, D. D. Jarausch, C. Hepp, L. Klintberg, J. N. Becker, M. Markham, C. Becher, and M. Atatüre, Coherent control of the silicon-vacancy spin in diamond, Nature Communications 8, 10.1038/ncomms15579 (2017).
* Sangtawesin _et al._ [2019] S. Sangtawesin, B. L. Dwyer, S. Srinivasan, J. J. Allred, L. V. Rodgers, K. De Greve, A. Stacey, N. Dontschuk, K. M. O’Donnell, D. Hu, _et al._ , Origins of diamond surface noise probed by correlating single-spin measurements with surface spectroscopy, Physical Review X 9, 031052 (2019).
* De Lange _et al._ [2010] G. De Lange, Z.-H. Wang, D. Riste, V. Dobrovitski, and R. Hanson, Universal dynamical decoupling of a single solid-state spin from a spin bath, Science 330, 60 (2010).
* Souza _et al._ [2011] A. M. Souza, G. A. Alvarez, and D. Suter, Robust dynamical decoupling for quantum computing and quantum memory, Physical review letters 106, 240501 (2011).
* et al. [2023] K. K. et al., Direct-bonded diamond membranes for heterogeneous quantum and electronic technologies, In preparation (2023), arXiv:2306.04408 [physics.app-ph] .
* Rogers _et al._ [2014] L. J. Rogers, K. D. Jahnke, M. H. Metsch, A. Sipahigil, J. M. Binder, T. Teraji, H. Sumiya, J. Isoya, M. D. Lukin, P. Hemmer, and F. Jelezko, All-optical initialization, readout, and coherent preparation of single silicon-vacancy spins in diamond, Physical Review Letters 113, 1 (2014).
# The occurrence of riddled basins and blowout bifurcations in a parametric
nonlinear system
M. Rabiee Department of Mathematics, Ferdowsi University of Mashhad, Mashhad,
Iran<EMAIL_ADDRESS>, F. H. Ghane Department of
Mathematics, Ferdowsi University of Mashhad, Mashhad, Iran.
<EMAIL_ADDRESS>, M. Zaj Department of Mathematics, Ferdowsi University
of Mashhad, Mashhad, Iran<EMAIL_ADDRESS>and S. Karimi Department of
Mathematics, Ferdowsi University of Mashhad, Mashhad, Iran.
<EMAIL_ADDRESS>
###### Abstract.
In this paper, a two-parameter family $F_{\beta_{1},\beta_{2}}$ of maps of
the plane leaving two different subspaces invariant is studied. We observe
that our model exhibits two chaotic attractors $A_{i}$, $i=0,1$, lying in
these invariant subspaces, and we identify the parameters at which $A_{i}$ has
a locally riddled basin of attraction or becomes a chaotic saddle. Then, the
occurrence of riddled basins in the global sense is investigated in an open
region of the $\beta_{1}\beta_{2}$-plane. We semi-conjugate our system to a
random walk model and define a fractal boundary which separates the basins of
attraction of the two chaotic attractors; this allows us to describe the
riddled basins in detail. We show that the model undergoes a sequence of
bifurcations: “a blowout bifurcation”, “a bifurcation to normal repulsion” and
“a bifurcation creating a new chaotic attractor with an intermingled basin”.
Numerical simulations are presented graphically to confirm the validity of our
results.
###### Key words and phrases:
riddled basin of attraction, blowout bifurcation, skew product, normal
Lyapunov exponent
###### 2010 Mathematics Subject Classification:
37C05,37C40, 37C70, 37H15, 37E05, 37D35
∗Corresponding author
## 1\. Introduction
There has been a lot of recent interest in the global dynamics of systems with
multiple attractors, with the recognition that the structure of basins of
attraction may be very complicated. The notion of multiple attractors, which
allows several attractors to coexist, has appeared in [Buescu(1997),
Ott(1993), Daza et al.(2016), Dudkowskia et al.(2016)]. It is very common for
dynamical systems to have more than one attractor. Among such systems, we
focus on those whose multiple attractors have densely blended basins of
attraction, a phenomenon called _riddling_. This means that for every initial
condition in the basin of one attractor there are arbitrarily nearby initial
conditions which tend to the basin of any of the other attractors. Ott et al.
[Ott et al.(1993)] introduced nonlinear dynamical systems with a simple
symmetry that contain riddled basins. Conditions for the occurrence of riddled
basins were given by Alexander et al. [Alexander et al.(1992)] and then
generalized by Ashwin et al. [Ashwin et al.(1996)].
Riddled basins arise in systems that possess chaotic dynamics in a smooth
invariant manifold of lower dimension than that of the full phase space. This
complexity appears at the transition point between strong and weak stability
of the invariant subspace. When riddled basins occur, nearby initial
conditions may be asymptotic to different attractors. Hence, predicting the
final state of the system becomes difficult. A detailed picture of multiple
attractors with riddled
basins is available through works of several authors [Alexander et al.(1992),
Ashwin et al.(1996), Lai et al.(2005), Mohd Roslan & Ashwin(2016), Schultz et
al.(1993)]. Applications of riddling in complex dynamical systems of physical
and biological interest can be found: for instance, for a forced double-well
Duffing oscillator [Ott & Sommer(1993), Ott et al.(1993), Ott et al.(1994)],
ecological population models [Cazelles(2001), Viana et al.(2009), Karimi &
Ghane(2020)], learning dynamical systems [Nakajima & Ueda(1996)], coupled non-
linear electronic circuits [Ashwin et al.(1994), Heagy et al.(1995)], among
others.
Consider a nonlinear dynamical system possessing a smooth invariant manifold
$N$. Suppose that the restriction of the system to $N$ has an attractor $A$,
so that $A$ is stable to perturbations within $N$. The behaviour of the system
near $A$ is determined by combining the dynamics on $A$ with the dynamics
transverse to $N$. In the case that $A$ is chaotic, owing to the global nature
of the effect of perturbations transverse to $N$, considerable dynamical
complexity is observed [Ashwin et al.(1996), Ashwin et al.(1994),
Cazelles(2001), Viana et al.(2009)]. In this situation, the local dynamic
stability of the chaotic
attractor $A$ may be described in terms of normal Lyapunov exponents. When the
largest normal Lyapunov exponent is negative, there is a set of positive
measure which is forward asymptotic to $A$ [Alexander et al.(1992), Ashwin et
al.(1996)]. In general, for the riddled basin phenomenon to occur for the
attractor $A$, there must be a dense set of points in $A$, of zero Lebesgue
measure, lying in the invariant subspace that are transversely unstable; thus
it is necessary that the attractor $A$ be chaotic. On most chaotic attractors
the ergodic measure is not unique; for instance, they may support Dirac
ergodic measures whose supports are periodic orbits. Each ergodic
measure carries its own Lyapunov exponents, so the stability in transverse
directions can be considered independently for every ergodic measure supported
in that attractor. For example, two different periodic orbits in the attractor
may have normal exponents of different signs. If there exists a natural
measure on $A$, then Lebesgue-almost all points have corresponding normal
exponents and manifolds, but there can still be a dense set in $A$ with the
opposite behaviour [Ashwin et al.(1996)].
When the full space contains two chaotic attractors lying in different
invariant subspaces, the system presents a complex fractal boundary between
the initial conditions leading to each of the two attractors. In a riddled
basin, small variations in initial conditions induce a switch between the
different asymptotic attractors, and the fractal boundary makes it difficult
to predict, from a given initial condition, which trajectory in phase space
the system will follow.
Ott et al. [Ott et al.(1994)] observed the occurrence of riddled basins for a
certain nonlinear model of point-particle motion subject to friction and
periodic forcing in a two-dimensional potential and tested this observation
numerically. They also supported their results theoretically by calculations
based on a simple piecewise-linear model.
In this article, we examine the behavior of a two-parameter family of planar
systems $F_{\beta_{1},\beta_{2}}$. In our setting, each system
$F_{\beta_{1},\beta_{2}}$ exhibits two invariant subspaces $N_{i}$, $i=0,1$,
containing chaotic attractors $A_{i}$. We establish conditions for the
emergence of locally riddled basins, chaotic saddles, and riddled basins in
the global sense. The blowout bifurcations of the chaotic attractors in these
invariant subspaces are illustrated. We give a detailed analysis of the
occurrence of these phenomena. This is done by semi-conjugating the system to
a random walk model and defining a fractal boundary which separates the basins
of attraction of the two chaotic attractors. Using this approach, we
investigate the system at the riddled basin and the blowout bifurcation in
detail.
The setting of this paper is a family $\mathcal{F}$ of two-parameter skew
product maps of the form
$\displaystyle
F_{\beta_{1},\beta_{2}}:\mathbb{I}\times\mathbb{I}\to\mathbb{I}\times\mathbb{I},\
F_{\beta_{1},\beta_{2}}(x,y):=(f(x),g_{\beta_{1},\beta_{2}}(x,y)),$ (1)
where $\mathbb{I}$ is the unit interval $[0,1]$, $f$ is an expanding Markov
map given by
$\displaystyle f(x)=\left\\{\begin{array}[]{cc}2x&\text{for}\quad 0\leq x\leq 1/2,\\\ 2x-1&\text{for}\quad 1/2<x\leq 1\end{array}\right.$ (4)
and
$\displaystyle g_{\beta_{1},\beta_{2}}(x,y):=\left\\{\begin{array}[]{cc}g_{1,\beta_{1}}(y)&\text{for}\quad 0\leq x\leq 1/2,\\\ g_{2,\beta_{2}}(y)&\text{for}\quad 1/2<x\leq 1.\end{array}\right.$ (7)
We assume that the $C^{2}$ diffeomorphisms
$g_{i,\beta_{i}}:\mathbb{I}\to\mathbb{I}$, $i=1,2$, fulfill the following
conditions:
1. (I1)
$g_{i,\beta_{i}}(0)=0,\quad g_{i,\beta_{i}}(1)=1,\quad
g_{i,\beta_{i}}(\beta_{i})=\beta_{i},\quad\text{for}\ i=1,2;$
2. (I2)
$g_{1,\beta_{1}}(y)<y\quad\text{for}\ y<\beta_{1},\quad
g_{1,\beta_{1}}(y)>y\quad\text{for}\ y>\beta_{1};$
3. (I3)
$g_{2,\beta_{2}}(y)>y\quad\text{for}\ y<\beta_{2},\quad
g_{2,\beta_{2}}(y)<y\quad\text{for}\ y>\beta_{2};$
In (1), the subspaces
$N_{0}:=\mathbb{I}\times\\{0\\},\quad N_{1}:=\mathbb{I}\times\\{1\\}$ (8)
play the role of the $F_{\beta_{1},\beta_{2}}$-invariant manifolds with
chaotic dynamics inside, for each $\beta_{1},\beta_{2}\in(0,1)$. Our objective
is to study some types of bifurcations by varying the parameters $\beta_{i}$
and characterize the different possible dynamics.
A particular example is given by
$\displaystyle g_{\beta_{1},\beta_{2}}(x,y):=\left\\{\begin{array}[]{cc}g_{1,\beta_{1}}(y)=y+y(1-y)(y-\beta_{1})&\text{for}\quad 0\leq x\leq 1/2,\\\ g_{2,\beta_{2}}(y)=y-y(1-y)(y-\beta_{2})&\text{for}\quad 1/2<x\leq 1.\end{array}\right.$ (11)
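For concreteness, a minimal numerical sketch of this example system follows; the initial condition and parameter values are illustrative assumptions, and the finite-precision caveat for the doubling map is noted in the comments.

```python
# A minimal sketch of the skew product F_{beta1,beta2} of (1) with the
# particular fiber maps of (11). Parameter values are illustrative.
def doubling(x):
    """Base map f of (4): the doubling map on [0, 1]."""
    return 2 * x if x <= 0.5 else 2 * x - 1

def F(x, y, beta1, beta2):
    """One step of F_{beta1,beta2}(x, y) = (f(x), g_{beta1,beta2}(x, y))."""
    if x <= 0.5:
        y_next = y + y * (1 - y) * (y - beta1)   # g_{1,beta1}
    else:
        y_next = y - y * (1 - y) * (y - beta2)   # g_{2,beta2}
    return doubling(x), y_next

# Example: iterate one initial condition. Note that in double precision the
# doubling map loses one bit per step, so only short orbits are faithful.
x, y = 0.2137, 0.3
for _ in range(40):
    x, y = F(x, y, beta1=0.4, beta2=0.5)
print(x, y)
```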
The parameters $\beta_{i}$, $i=1,2$, vary the transverse dynamics without
changing the dynamics on the invariant subspaces $N_{0}$ and $N_{1}$. We will
show that, for some values of $\beta_{i}$, the parametric family
$F_{\beta_{1},\beta_{2}}$ exhibits two attractors lying in invariant subspaces
$N_{i}$ with a qualitative dynamics which depends on the initial conditions.
Attractors in our model, for some parameter values, exhibit a complex
attracting basin structure that is riddled by holes. This phenomenon produces
an unpredictability qualitatively greater than the traditional sensitive
dependence on initial conditions within a single chaotic attractor. The
dynamics of the system is described by two Lyapunov exponents. The first one
is the parallel Lyapunov exponent which describes the evolution on the
invariant subspaces and must be positive for the emergence of riddled basins. The
second is the normal Lyapunov exponent that characterizes evolution transverse
to the subspaces [Ashwin et al.(1996), Cazelles(2001), Viana et al.(2009)].
In this paper, we estimate the range of values of the parameters $\beta_{i}$,
$i=1,2$, for which the attractors $A_{i}$ have a locally riddled basin or a
riddled basin in the global sense, or become chaotic saddles, and we provide a
rigorous analysis of these complex behaviors. We also show that, when
$\beta_{1}=\beta_{2}$, a new chaotic attractor is born whose basin is
intermingled with the basins of both chaotic attractors $A_{i}$, $i=0,1$. We
show that by varying the parameters $\beta_{i}$, the system undergoes a
sequence of bifurcations: a “blowout bifurcation”, a “bifurcation to normal
repulsion” and a “bifurcation creating a new chaotic attractor with an
intermingled basin”.
Here, we will show that, by varying the parameters $\beta_{i}$, one of the
chaotic sets in the invariant subspaces may lose stability as the parameters
pass through critical values. This happens when the normal Lyapunov exponent
of one of the chaotic attractors crosses zero at these critical values. If the
normal Lyapunov exponent is negative, there is a set of positive measure which
is forward asymptotic to that attractor. However, there can still be an
infinite set of trajectories in the neighborhood of the attractor that are
repelled from it. In particular, when the normal Lyapunov exponent is small
and negative, the system is close to the blowout bifurcation and a riddled
basin can be observed near the bifurcation point.
This paper is organized as follows. In Section 2, we describe precisely the
notions and terminology used in this paper. In Section 3, we concentrate on
studying the two-parameter family $F_{\beta_{1},\beta_{2}}$. Using the
results of [Ashwin et al.(1996)], we investigate the occurrence of locally
riddled basins and chaotic saddles for some values of parameters in an open
region of $\beta_{1}\beta_{2}$-plane. In Section 4, we introduce a random walk
model which is semi-conjugate to the skew product system
$F_{\beta_{1},\beta_{2}}$. This allows us to define a fractal boundary between
the initial conditions leading to each of the two attractors $A_{0}$ and
$A_{1}$. In Section 5, we demonstrate the emergence conditions of riddled
basins in the global sense and indicate the parameter values for which the
family undergoes a sequence of bifurcations: a blowout bifurcation and a
bifurcation creating a new chaotic attractor with an intermingled basin.
## 2\. Terminology
In this section, we introduce the basic concepts and notation used throughout
this paper.
### 2.1. Attractors and riddled basins
Let $M$ be a compact connected smooth Riemannian manifold and let $m$ denote
the normalized Lebesgue measure. First we recall some classical definitions
related to attractors.
Let $F:M\to M$ be a continuous map and let $A\subset M$ be a compact
$F$-invariant set (i.e. $F(A)=A$). We say $A$ is _transitive_ if there exists
$x\in A$ such that $\omega(x)=A$, where $\omega(x)$ is the set of limit points
of the orbit $\\{F^{n}(x)\\}_{n\geq 0}$. _The basin of attraction_ of $A$,
denoted by $\mathcal{B}(A)$, is the set of points whose $\omega$-limit set is
contained in $A$. For non-empty $A$ the basin $\mathcal{B}(A)$ is always
non-empty because it includes $A$. For $A$ to be an attractor, we require that
$\mathcal{B}(A)$ is large in the appropriate sense. The compact invariant set
$A$ is called an _asymptotically stable attractor_ if it is Lyapunov stable
and the basin of attraction $\mathcal{B}(A)$ contains a neighbourhood of $A$.
Many variants of the definition of an attractor can be found in the
literature; see [Milnor(1985)] for a discussion. In a weaker form
[Milnor(1985)], we say that a compact $F$-invariant set $A$ is an attractor in
the sense of Milnor for $F$ if the basin of attraction $\mathcal{B}(A)$ has
positive Lebesgue measure. To be more precise, we say that $A$ is a _Milnor
attractor_ if $\mathcal{B}(A)$ has non-zero Lebesgue measure and there is no
compact proper subset $A^{\prime}$ of $A$ whose basin coincides with
$\mathcal{B}(A)$ up to a set of zero measure. Melbourne introduced a stronger
form of a Milnor attractor
[Melbourne(1999)]. $A$ is called an _essential attractor_ if
$\lim_{\delta\to
0}\frac{m(B_{\delta}(A)\cap\mathcal{B}(A))}{m(B_{\delta}(A))}=1,$
where $B_{\delta}(A)$ is a $\delta$-neighbourhood of $A$ in $M$.
Here, we deal with chaotic attractors. A compact $F$-invariant set $A$ is a
_chaotic attractor_ if $A$ is a transitive Milnor attractor and supports an
ergodic measure $\mu$ but is not uniquely ergodic. In particular, at least one
of the Lyapunov exponents (with respect to $\mu$) is positive.
Some dynamical systems have chaotic attractors with densely intertwined basins
of attraction, which we call _riddled basins_. Riddled basins were introduced
in 1992 by [Alexander et al.(1992)]. In this case, a basin is riddled with
holes (in a measure-theoretical sense) of another basin. To be more precise,
the basin of attraction $\mathcal{B}(A)$ of an attractor $A$ is
more precise, the basin of attraction $\mathcal{B}(A)$ of an attractor $A$ is
_riddled_ if its complement $\mathcal{B}(A)^{c}$ intersects every disk in a
set of positive measure.
This concept is generalized to the “locally riddled basin”. It considers the
case where the basin of $A$ is open but local normal unstable manifolds exist
in a dense set in $A$. Precisely, a Milnor attractor $A$ has a _locally
riddled basin_ if there exists a neighbourhood $U$ of $A$ such that, for all
$x\in A$ and $\varepsilon>0$
$m(B_{\varepsilon}(x)\cap(\bigcap_{n\geq 0}F^{-n}(U))^{c})>0.$ (12)
If there is another Milnor attractor $A^{\prime}$ such that
$\mathcal{B}(A)^{c}$ in the definition of a riddled basin may be replaced with
$\mathcal{B}(A^{\prime})$, then we say that the basin of $A$ is riddled with
the basin of $A^{\prime}$. If $\mathcal{B}(A)$ and
$\mathcal{B}(A^{\prime})$ are riddled with each other, we say that they are
_intermingled_.
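These measure-theoretic definitions suggest a simple numerical probe, sketched below for the example system (11) introduced above: sample small disks around a point in one basin and estimate the Lebesgue fraction of initial conditions escaping toward the other attractor. The branch symbols are drawn i.i.d. with probability $1/2$, which anticipates the random walk reduction used later in the paper; the parameter values, iteration count, and $y>1/2$ classification threshold are illustrative assumptions.

```python
# A numerical probe of riddling (a heuristic sketch, not a proof), using the
# fiber maps of the example (11): estimate, for shrinking disks around a
# point attracted to A0, the fraction of nearby initial conditions that end
# up near the other attractor A1. Branch symbols are i.i.d. fair coin flips.
import numpy as np
rng = np.random.default_rng(0)

def g(y, sym, b1, b2):
    return y + y*(1-y)*(y-b1) if sym == 0 else y - y*(1-y)*(y-b2)

def limit_attractor(y0, b1, b2, n_steps=400):
    y = y0
    for s in rng.integers(0, 2, n_steps):
        y = g(y, int(s), b1, b2)
    return 1 if y > 0.5 else 0      # 1 ~ near A1, 0 ~ near A0 (crude test)

b1, b2 = 0.4, 0.5                   # assumed parameters in the riddled regime
y_center = 0.3
for eps in (0.1, 0.01, 0.001):
    ys = np.clip(y_center + eps*(2*rng.random(2000) - 1), 0.0, 1.0)
    frac = np.mean([limit_attractor(y, b1, b2) for y in ys])
    print(f"eps={eps:g}: fraction escaping toward A1 ~ {frac:.3f}")
```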
The corresponding concepts can be defined for repelling sets. An invariant
transitive set $A$ is a _chaotic saddle_ if there exists a neighborhood $U$ of
$A$ such that $\mathcal{B}(A)\cap U\neq\emptyset$ but $m(\mathcal{B}(A))=0$.
Consider the case that the invariant set $A$ is contained in an invariant
$n$-dimensional submanifold $N$ of $\mathbb{R}^{m}$, with $n<m$. We say that
$A$ is a _normally repelling chaotic saddle_ if $\mathcal{B}(A)\neq A$ and
$\mathcal{B}(A)\subset N$. Note that in this case, however, $A$ is an
attractor in the invariant subspace, but all points not lying on this subspace
eventually leave a neighborhood of $A$.
### 2.2. Lyapunov exponents
Assume $F$ is a smooth map defined on a smooth $m$-dimensional manifold $M$
and let $N\subset M$ be an $n$-dimensional embedded submanifold, forward
invariant by $F$, with $n<m$. Hence, for $x\in N$, one has that
$d_{x}F(T_{x}N)\subset T_{F(x)}N$. We consider the restriction of $F$ to $N$,
denoted by $F_{|N}$.
Moreover, we assume that $A$ is a chaotic attractor for $F$. We denote by
$\mathcal{M}_{F}(A)$ and $\mathcal{E}_{F}(A)$ the sets of invariant
probability measures and ergodic measures supported in $A$, respectively. It
is known that both $\mathcal{M}_{F}(A)$ and $\mathcal{E}_{F}(A)$ are non-empty
(see [Walters(1982)]).
For a vector $v\neq 0$ with base point $x$, _the Lyapunov exponent_
$\lambda(x,v)$ at the point $x$ in the direction of $v$ is defined to be
$\lambda(x,v)=\lim_{n\to\infty}\frac{1}{n}\log\|d_{x}F^{n}(v)\|_{T_{F^{n}(x)}M}$
(13)
whenever the limit exists. Here, we have two kinds of Lyapunov exponents for
the chaotic invariant set $A$: the _parallel Lyapunov exponents_, which
indicate the exponential rate of stretching on $A$ when $F$ is restricted to
$N$, and the _normal Lyapunov exponents_, which describe the exponential rate
of expansion on $A$ in the normal direction, denoted by $\lambda_{\parallel}$
and $\lambda_{\perp}$, respectively. Precisely, we define these Lyapunov exponents
as follows. Since $N$ is an embedded submanifold, we can take a smooth
splitting of the tangent bundle $TM$ in a neighbourhood of $N$ of the form
$T_{x}M=T_{x}N\oplus(T_{x}N)^{\bot}$, when $x\in N$. To simplify the notation,
we write $TM_{n}:=T_{F^{n}(x)}M$.
###### Definition 1.
Given $x\in A$; $v\in T_{x}M=T_{x}N\oplus T_{x}N^{\perp}$, we define the
_parallel Lyapunov exponent_ at $x$ in the direction of $v$ to be
$\displaystyle\lambda_{\parallel}(x,v)=\lim_{n\rightarrow\infty}\dfrac{1}{n}\ln\parallel\pi_{(TN_{n})}\circ
d_{x}F^{(n)}\circ\pi_{TN_{0}}(v)\parallel_{TM_{n}},$ (14)
where $\pi_{V}$ is the orthogonal projection onto a subspace $V$. Similarly,
we define the _normal Lyapunov exponent_ at $x$ in the direction of $v$ to be
$\displaystyle\lambda_{\perp}(x,v)=\lim_{n\rightarrow\infty}\dfrac{1}{n}\ln\parallel\pi_{(TN_{n})^{\perp}}\circ
d_{x}F^{(n)}\circ\pi_{(TN_{0})^{\perp}}(v)\parallel_{TM_{n}}.$ (15)
Here, we are interested in Sinai-Ruelle-Bowen (or SRB) measures which are a
special type of invariant measures, see [Mohd Roslan & Ashwin(2016)].
Let $A$ be an asymptotically stable attractor under $F_{|N}$. An invariant
ergodic probability measure $\mu$ is called an $SRB$ measure for $A$ if its
support is $A$ and has absolutely continuous conditional measures on unstable
manifolds (with respect to the Riemannian measure).
An attractor $A$ is an _SRB attractor_ if it supports an SRB measure. Since
$A$ is an asymptotically stable attractor under $F_{|N}$, it is the closure of
the union of unstable manifolds in $N$. Note that the existence of
an SRB measure supported on $A$ implies the absolute continuity of the stable
foliation of $A$, see [Pugh & Shub(1989)]. By a result from [Pugh &
Shub(1989)], we have the following:
Assume $\mu$ is an SRB measure for $F_{|N}$. Given a neighborhood $U$ of $A$,
there is a set $B(\mu)\subset U$ of positive Lebesgue measure, called the
_basin_ of $\mu$, such that for all $x\in B(\mu)$ and all continuous functions
$\phi:N\to\mathbb{R}$ one has
$\lim_{n\to\infty}\frac{1}{n}\sum_{i=0}^{n-1}\phi(F^{i}|_{N}(x))=\int_{A}\phi\,d\mu.$
###### Remark 2.
Take an open set $D$ with $\mu_{SRB}(D)=\mu_{SRB}(\text{Cl}(D))$, then
$\lim_{n\to\infty}\frac{1}{n}\sum_{i=0}^{n-1}\chi_{D}(F^{i}|_{N}(x))=\mu_{SRB}(D),$
for a.e. $x$ in the basin $B(\mu)$.
Given an ergodic invariant probability measure
$\mu\in\mathcal{E}_{F_{|N}}(A)$, the normal Lyapunov exponents
$\lambda_{\bot}^{1}(\mu)<\cdots<\lambda_{\bot}^{s}(\mu)$ exist and are
constant on a set $B_{\mu}$ of full $\mu$-measure. We define
$\lambda_{min}:=\inf\bigcup_{\mu\in\mathcal{E}_{F_{|N}}(A)}\\{\lambda_{\bot}^{i}(\mu)\\},\
\Lambda_{max}:=\sup\bigcup_{\mu\in\mathcal{E}_{F_{|N}}(A)}\\{\lambda_{\bot}^{i}(\mu)\\}.$
(16)
Let $\mu$ be an $F$-invariant ergodic probability measure supported in $A$,
with normal Lyapunov exponents
$\lambda_{\bot}^{1}(\mu)<\cdots<\lambda_{\bot}^{s}(\mu)$. The _normal
stability index_ $\Lambda_{\mu}$ of $\mu$ is
$\Lambda_{\mu}:=\lambda_{\bot}^{s}(\mu).$ (17)
We recall the next result from [Alexander et al.(1992)].
###### Theorem 3.
Assume $A$ is an SRB attractor for $F_{|N}$ with $\Lambda_{SRB}<0$, where
$\Lambda_{SRB}$ is defined by (17) for SRB measure $\mu_{SRB}$. Then
$m(\mathcal{B}(A))>0$. Furthermore, $A$ is an essential attractor provided
that $A$ is either uniformly hyperbolic or $\mu_{SRB}$ is absolutely
continuous with respect to Riemannian measure on $N$.
### 2.3. Blowout bifurcations
Let $M$ be a smooth Riemannian manifold and $N\subset M$ be an $n$-dimensional
embedded submanifold and forward invariant by $F$, with $n<m$. Assume that
there is a parameter $\beta$ that varies the transverse dynamics without
changing the dynamics on the invariant subspace $N$. We call the parameter
$\beta$ a _normal parameter_ [Ashwin et al.(1996)]. Moreover, we assume that
the normal Lyapunov exponents vary continuously with $\beta$ and that $A$ is
an asymptotically stable attractor for the restriction map $F_{|N}$.
Let $A$ be a chaotic attractor that supports an SRB measure $\mu_{SRB}$. Then,
the sign of $\Lambda_{SRB}$ determines whether $A$ attracts or repels
infinitesimal perturbations in the direction transverse to $N$. If
$\Lambda_{SRB}<0$, $A$ attracts trajectories transversely in the phase space
and hence, $A$ is also an attractor of the whole phase space. When
$\Lambda_{SRB}>0$, trajectories in the vicinity of $A$ are repelled away from
it. That is, $A$ is transversely unstable and is not an attractor of the whole
phase space. Thus a bifurcation occurs when $\Lambda_{SRB}$ crosses zero, the
so-called _blowout bifurcation_.
Blowout bifurcations are classified as either hysteretic (subcritical) or
non-hysteretic (supercritical). In a hysteretic blowout, riddled basins before
the blowout give rise to a hard loss of stability [Ashwin et al.(1998),
Pikovsky(1984)]: after the blowout, almost all points near the invariant
subspace eventually move away, never to return. In contrast, a non-hysteretic
blowout gives a soft loss of stability, producing an on-off intermittent
attractor [Ashwin et al.(1998), Platt et al.(1993)].
It was shown in [Ashwin et al.(1996)] that locally riddled basins occur near
the blowout bifurcation.
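A back-of-the-envelope numerical sketch of such a zero crossing is given below for the example fiber maps (11), anticipating the computation in Section 3: at $y=0$ the branch derivatives are $1-\beta_{1}$ and $1+\beta_{2}$, each visited with frequency $1/2$ for typical points, so the exponent crosses zero at $\beta_{2}=\frac{1}{1-\beta_{1}}-1$.

```python
# A sketch of locating a blowout point for the example maps (11): sweep the
# normal parameter beta2 at fixed beta1 and watch the normal Lyapunov
# exponent of A0 change sign. The closed form anticipates (24) below.
import numpy as np

beta1 = 0.4
for beta2 in np.linspace(0.5, 0.8, 7):
    lam = 0.5*np.log(1 - beta1) + 0.5*np.log(1 + beta2)
    print(f"beta2={beta2:.2f}: Lambda_SRB(A0) = {lam:+.4f}")
print("predicted crossing at beta2 =", 1/(1 - beta1) - 1)   # = 2/3 here
```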
### 2.4. Step skew products
Let $\Sigma_{2}^{+}=\\{1,2\\}^{\mathbb{N}}$ and equip $\Sigma_{2}^{+}$ with
the topology generated by the base of cylinder sets
$[\alpha_{0},\cdots,\alpha_{n-1}]=\\{\omega\in\Sigma_{2}^{+}:\omega_{j}=\alpha_{j},\
\text{for}\ j=0,\cdots,n-1\\}.$
Let $\sigma:\Sigma_{2}^{+}\to\Sigma_{2}^{+}$ be the (left) shift operator
defined by $(\sigma\omega)_{i}=\omega_{i+1}$ for
$\omega=(\omega_{i})_{0}^{\infty}$.
Here, we take the Bernoulli measure $\mathbb{P}^{+}$ on $\Sigma_{2}^{+}$ where
the symbols $1,2$ have probability $p_{1}=p_{2}=1/2$. Indeed, the Bernoulli
measure $\mathbb{P}^{+}$ on $\Sigma_{2}^{+}$ is determined by its values on
cylinders;
$\mathbb{P}^{+}([\alpha_{0},\cdots,\alpha_{n-1}])=\prod_{j=0}^{n-1}p_{\alpha_{j}}.$
(18)
Let $G^{+}$ be a step skew product map with the fiber maps $g_{1}$ and $g_{2}$
defined by
$G^{+}:\Sigma_{2}^{+}\times\mathbb{I}\to\Sigma_{2}^{+}\times\mathbb{I},\quad(\omega,x)\mapsto(\sigma\omega,g_{\omega_{0}}(x)).$
(19)
Denote by $\mathcal{B}$ the Borel sigma-algebra on $\mathbb{I}$. For a skew
product $G^{+}$ with fiber maps $g_{1}$ and $g_{2}$, a measure $m$ on
$\mathbb{I}$ and any $\mathcal{B}$-measurable set $A$, we denote the push-
forward measure of $m$ by $g_{i}m$, $i=1,2$, in which
$g_{i}m(A)=m(g_{i}^{-1}(A)).$
We recall that a probability measure $m$ on the interval $\mathbb{I}$ for
$G^{+}$ is _stationary_ if it satisfies
$m=\sum_{i=1}^{2}p_{i}g_{i}m.$
The natural extension of $G^{+}$ is obtained when the shift acts on two sided
time $\mathbb{Z}$; this yields a skew product system
$G:\Sigma_{2}\times\mathbb{I}\to\Sigma_{2}\times\mathbb{I}$ with
$\Sigma_{2}=\\{1,2\\}^{\mathbb{Z}}$ and given by the same expression
$G(\omega,y)=(\sigma\omega,g_{\omega_{0}}(y)).$ (20)
Write $\pi:\Sigma_{2}\to\Sigma_{2}^{+}$ for the natural projection
$(\omega_{n})_{-\infty}^{\infty}\mapsto(\omega_{n})_{0}^{\infty}$. The Borel
sigma-algebra $\mathcal{F}^{+}$ on $\Sigma_{2}^{+}$ yields a sigma-algebra
$\mathcal{F}=\pi^{-1}\mathcal{F}^{+}$ on $\Sigma_{2}$. Write $\mathbb{P}$ for
the Bernoulli measure on $\Sigma_{2}$, defined analogously to $\mathbb{P}^{+}$
(see (18)).
The invariant measure $\mu^{+}=\mathbb{P}^{+}\times m$ for $G^{+}$ gives rise
to an invariant measure $\mu$ for $G$, with marginal $\mathbb{P}$. Invariant
measures for $G^{+}$ with marginal $\mathbb{P}^{+}$ and invariant measures for
$G$ with marginal $\mathbb{P}$ are in one to one relationship, see
[Arnold(1998)].
A measure $\mu$ on $\Sigma_{2}\times\mathbb{I}$ with marginal $\mathbb{P}$ has
conditional measures $\mu_{\omega}$ on the fibers
$\\{\omega\\}\times\mathbb{I}$ such that
$\mu(A)=\int_{\Sigma_{2}}\mu_{\omega}(A_{\omega})\,d\mathbb{P}(\omega)$ (21)
for each measurable set $A$, where
$A_{\omega}=A\cap(\\{\omega\\}\times\mathbb{I})$. For $\mathbb{P}$-almost all
$\omega$, the conditional measures are given as the weak star limit
$\mu_{\omega}=\lim_{n\to\infty}g_{\sigma^{-n}\omega}^{n}m$ (22)
of push-forwards of the stationary measure.
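A minimal simulation of this construction is sketched below, an illustration under the stated Bernoulli choice $p_{1}=p_{2}=1/2$ that reuses the fiber maps of the example (11): the realized symbol stream drives the fiber coordinate toward $y=0$ or $y=1$.

```python
# A minimal simulation of the step skew product G+ of (19): the base symbols
# omega_i are i.i.d. Bernoulli(1/2), and the fiber coordinate evolves by
# y -> g_{omega_0}(y). Fiber maps are those of the example (11); the
# parameters are illustrative.
import numpy as np
rng = np.random.default_rng(1)

def g(y, sym, b1=0.4, b2=0.5):
    return y + y*(1-y)*(y-b1) if sym == 0 else y - y*(1-y)*(y-b2)

for y0 in (0.2, 0.5, 0.8):
    y = y0
    for s in rng.integers(0, 2, 2000):
        y = g(y, int(s))
    print(f"y0 = {y0}: y after 2000 steps = {y:.6f}")
```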
## 3\. The occurrence of locally riddled basin
In this section, we concentrate on studying the two-parameter family
$F_{\beta_{1},\beta_{2}}:\mathbb{I}\times\mathbb{I}\to\mathbb{I}\times\mathbb{I}$
of skew product maps as defined in (1). By varying the parameters $\beta_{1}$
and $\beta_{2}$, we will investigate the occurrence of locally riddled basins
for the family $F_{\beta_{1},\beta_{2}}$.
We recall the notion of a normal parameter of the system, one that preserves
the dynamics on the invariant submanifold but varies it in the rest of the
phase space, first introduced in [Ashwin et al.(1996)]. We observe that the
parameters $\beta_{i}$ vary the transverse dynamics without changing the
dynamics on the invariant subspaces $N_{0}$ and $N_{1}$, so they are normal
parameters.
In our model, the expanding map $f$ exhibits a chaotic attractor which
supports an absolutely continuous invariant ergodic measure (a.c.i.m.) whose
density is bounded and bounded away from zero (see [Adler & Flatto(1991)]). By
this fact and the invariance of the subspaces $N_{i}$, the restriction of
$F_{\beta_{1},\beta_{2}}$ to these invariant subspaces possesses chaotic
attractors $A_{i}$, $i=0,1$, with basins of attraction
$\mathcal{B}(A_{i})$. In particular, these attractors are SRB attractors.
Note that, in our context, the normal dynamics depends continuously on the
normal parameters $\beta_{i}$. Also, the invariant subspaces have codimension
1 in the phase space $\mathbb{I}\times\mathbb{I}$, so there is only one normal
direction. Hence, we can discuss the transitions of the invariant sets $A_{i}$
between being attractors with locally riddled basins and being chaotic
saddles.
By Theorem 3, since $\mu_{SRB}$ is absolutely continuous with respect to the
Riemannian measure on the subspace $N_{i}$, $i=0,1$, the subset $A_{i}$ is an
essential attractor whenever the normal Lyapunov exponent is negative, see
also [Alexander et al.(1992)].
In general, for the occurrence of a riddled basin, we require that the
parallel Lyapunov exponent $\lambda_{\parallel}$ is positive but the maximal
normal Lyapunov exponent $\Lambda_{SRB}$ is slightly negative.
First, we describe the dynamics of the particular skew product
$F_{\beta_{1},\beta_{2}}$ whose fiber maps $g_{i,\beta_{i}}$,
$i=1,2$, are given by (11). For almost every point in $A_{i}$ the parallel
Lyapunov exponent for the dynamics on $N_{i}$ is
$\displaystyle L_{\|}=1/2\ln 2+1/2\ln 2=\ln 2.$ (23)
In particular, it is positive and does not depend on $\beta_{i}$, $i=1,2$. By
a straightforward computation, the normal Lyapunov exponents at typical
points in $A_{i}$ are given by
$\displaystyle L_{\perp,\beta_{1},\beta_{2}}(0)=1/2\ln(g_{1,\beta_{1}}^{\prime}(0))+1/2\ln(g_{2,\beta_{2}}^{\prime}(0))=1/2\ln(1-\beta_{1})+1/2\ln(1+\beta_{2})$ (24)
and
$\displaystyle L_{\perp,\beta_{1},\beta_{2}}(1)=1/2\ln(g_{1,\beta_{1}}^{\prime}(1))+1/2\ln(g_{2,\beta_{2}}^{\prime}(1))=1/2\ln\beta_{1}+1/2\ln(2-\beta_{2}).$ (25)
We observe that, by (24) and (25), these normal Lyapunov exponents vary
continuously with $\beta_{i}$. In Figure 1 the plots of
$L_{\perp,\beta_{1},\beta_{2}}(i)$, $i=0,1$, are shown as $\beta_{1}$ varies
with $\beta_{2}=1/2$ fixed. They illustrate the continuous dependence of the
normal Lyapunov exponents on $\beta_{1}$ in this case.
Figure 1. Normal Lyapunov exponents vary continuously with $\beta_{i}$. The
blue curve depicts $L_{\perp,\beta_{1},\beta_{2}}(0)$ by varying
$\beta_{1}=\beta$ and for fixed value $\beta_{2}=1/2$, while the red curve
depicts $L_{\perp,\beta_{1},\beta_{2}}(1)$.
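The content of Figure 1 can be checked directly; the sketch below evaluates the closed forms (24)-(25) and, as a sanity check, estimates $L_{\perp}(0)$ by averaging log-derivatives along a random branch itinerary (an illustration with assumed sample sizes).

```python
# A sketch checking (24)-(25) at beta2 = 1/2 as beta1 varies (cf. Figure 1),
# together with a Monte Carlo estimate of L_perp(0) from log-derivatives
# along an i.i.d. branch itinerary.
import numpy as np
rng = np.random.default_rng(2)

beta2 = 0.5
syms = rng.integers(0, 2, 50_000)        # i.i.d. branch choices, p = 1/2
for beta1 in (0.2, 0.5, 0.8):
    L0 = 0.5*np.log(1 - beta1) + 0.5*np.log(1 + beta2)   # (24)
    L1 = 0.5*np.log(beta1) + 0.5*np.log(2 - beta2)       # (25)
    d0 = np.where(syms == 0, 1 - beta1, 1 + beta2)       # |g'(0)| per step
    print(f"beta1={beta1}: L(0)={L0:+.3f} (MC {np.log(d0).mean():+.3f}), "
          f"L(1)={L1:+.3f}")
```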
Given an ergodic measure $\mu$, let $G_{\mu}$ be the set of generic points of
$\mu$. That is
$G_{\mu}=\\{x\in A:\frac{1}{n}\sum_{j=0}^{n-1}\delta_{f^{j}(x)}\to\mu\\}$
where convergence is in the weak∗ topology. For any $\alpha>0$ define
$G_{\alpha}:=\bigcup_{\mu\in\mathcal{E}_{f}(A),\Lambda_{\mu}\geq\alpha}G_{\mu}.$
(26)
###### Proposition 4.
[Ashwin et al.(1996), Proposition 3.19] Suppose $F:M\to M$ is a $C^{1+\alpha}$
map leaving the embedded submanifold $N$ invariant, and that $A$ is an
asymptotically stable chaotic attractor for $F|_{N}$. Let $\Lambda_{max}$,
$\lambda_{min}$ and $\Lambda_{SRB}$ be given by (16) and (17). Then, under
$F:M\to M$
1. $(1)$
If $\Lambda_{SRB}<0<\Lambda_{max}$ then $A$ is a Milnor (essential) attractor.
If in addition there exists $\alpha>0$ with $G_{\alpha}$ dense in $A$, then
$A$ has a locally riddled basin.
2. $(2)$
If $\lambda_{min}<0<\Lambda_{SRB}$, $\mu_{SRB}$-almost all Lyapunov exponents
are non-zero and $m(\bigcup_{\mu\neq\mu_{SRB}}G_{\mu})=0$, where $m$ is the
Riemannian volume on $N$, then $A$ is a chaotic saddle.
Note that, by [Ashwin et al.(1994), Remark 3.4], if $codim(N)=1$, as in our
setting, there is only one normal direction. In this case
$\lambda_{\mu}=\Lambda_{\mu}$ for all ergodic $\mu$, and the normal spectrum
depends smoothly on normal parameters.
Define $\rho:\mathbb{I}\to\\{-1,1\\}$ by
$\displaystyle\rho(x)=\left\\{\begin{array}[]{cc}-1&\text{for}\quad 0\leq x\leq 1/2,\\\ 1&\text{for}\quad 1/2<x\leq 1.\end{array}\right.$ (29)
Let $x$ be a periodic point of $f$ of period $n$ and let
$\mathcal{O}(x)=\\{x_{i}:f(x_{i-1})=x_{i},\ i=1,\dots,n,\quad\text{with}\quad
x_{0}=x\\}$. Take
$Per^{+}(f):=\\{x\in Per(f):\sum_{i=0}^{n-1}\rho(x_{i})>0\\},$ (30)
where $Per(f)$ denotes the set of all periodic orbits of $f$. By
[Gorodnik(2017), Proposition 3.1], there is a semi-conjugacy between the shift
map $\sigma:\Sigma_{2}^{+}\to\Sigma_{2}^{+}$ and the doubling map $f$. By this
fact, the following result is immediate.
###### Lemma 5.
The subset $Per^{+}(f)$ is dense in each chaotic attractor $A_{i}$, $i=0,1$,
within the invariant subspaces $N_{i}$.
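Membership in $Per^{+}(f)$ can be enumerated directly through the binary itineraries supplied by the semi-conjugacy above; a small illustrative sketch follows (representing period-$n$ orbits by binary words of length $n$ is an assumption of the sketch, matching the standard coding of the doubling map).

```python
# A sketch enumerating period-n orbits of the doubling map by their binary
# itineraries and testing membership in Per^+(f) from (30): the orbit sum
# of rho is (#ones - #zeros) over one period, so sum rho > 0 means strictly
# more symbols 1 (time spent in (1/2,1]) than symbols 0.
from itertools import product

def per_plus_itineraries(n):
    orbits = set()
    for w in product((0, 1), repeat=n):
        if 2 * sum(w) > n:                       # sum of rho over the orbit > 0
            rotations = {w[i:] + w[:i] for i in range(n)}
            orbits.add(min(rotations))           # one representative per orbit
    return sorted(orbits)

print(per_plus_itineraries(3))   # [(0, 1, 1), (1, 1, 1)]
```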
###### Theorem 6.
Let $F_{\beta_{1},\beta_{2}}\in\mathcal{F}$ be a skew product of the form (1)
whose fiber maps $g_{i,\beta_{i}}$, $i=1,2$, are given by (11) and
$\beta_{1},\beta_{2}\in(0,1)$. Then the following holds:
1. $(a)$
if $\beta_{1}>1/2$ or ($\beta_{1}\leq 1/2$ $\&$
$\beta_{2}<\frac{1}{1-\beta_{1}}-1$), then $A_{0}$ is a Milnor (essential)
attractor and has a locally riddled basin;
2. $(b)$
if $\beta_{1}<1/2$ or ($\beta_{1}\geq 1/2$ $\&$
$\beta_{2}>2-\frac{1}{\beta_{1}}$), then $A_{1}$ is a Milnor (essential)
attractor and has a locally riddled basin;
3. $(c)$
if $\beta_{1}<1/2$ and $\beta_{2}>\frac{1}{1-\beta_{1}}-1$ then $A_{0}$ is a
chaotic saddle;
4. $(d)$
if $\beta_{1}>1/2$, $\beta_{2}<2-\frac{1}{\beta_{1}}$ then $A_{1}$ is a
chaotic saddle.
###### Proof.
Let $N_{i}$, $i=0,1$, be given by (8) and consider the restriction
$F_{\beta_{1},\beta_{2}}|_{N_{i}}$. By definition, since there is only one
normal direction, the normal stability index $\Lambda_{\mu}(A_{i})$ of an
ergodic invariant probability measure
$\mu\in\mathcal{E}_{F_{\beta_{1},\beta_{2}}|_{N_{i}}}(A_{i})$ is equal to the
normal Lyapunov exponent $\lambda_{\mu}(A_{i})$.
Note that, for $0\leq x\leq 1/2$, we have
$d_{(x,0)}F_{\beta_{1},\beta_{2}}=\begin{pmatrix}2&\quad 0\\\ 0&\quad
dg_{1,\beta_{1}}(0)\\\ \end{pmatrix}=\begin{pmatrix}2&\quad 0\\\ 0&\quad
1-\beta_{1}\\\ \end{pmatrix}$
and for $1/2<x\leq 1$, we have
$d_{(x,0)}F_{\beta_{1},\beta_{2}}=\begin{pmatrix}2&\quad 0\\\ 0&\quad
dg_{2,\beta_{2}}(0)\\\ \end{pmatrix}=\begin{pmatrix}2&\quad 0\\\ 0&\quad
1+\beta_{2}\\\ \end{pmatrix}$
where $g_{i,\beta_{i}}$, $i=1,2$, given by (11).
Hence, the normal stability index is computed as follows:
$\displaystyle\lambda_{\mu}(A_{0})=\Lambda_{\mu}(A_{0})=\int_{A_{0}\cap[0,1/2]}\log(1-\beta_{1})d\mu(x)+\int_{A_{0}\cap(1/2,1]}\log(1+\beta_{2})d\mu(x).$
(31)
Therefore,
$\displaystyle\lambda_{\mu}(A_{0})=\Lambda_{\mu}(A_{0})=\mu(A_{0}\cap[0,1/2])\log(1-\beta_{1})+\mu(A_{0}\cap(1/2,1])\log(1+\beta_{2}).$
(32)
Note that, by (32), for each invariant measure $\mu$, $\Lambda_{\mu}$ is
finite. Additionally, $\Lambda_{\mu}$ depends smoothly on the normal
parameters $\beta_{1}$ and $\beta_{2}$.
The base map $f$ given by (4) is a piecewise expanding map. By definition of
$F_{\beta_{1},\beta_{2}}$ and since $N_{0}$ is one dimensional, we conclude
that $F_{\beta_{1},\beta_{2}}|_{A_{0}}$ is also piecewise expanding. This fact
implies that $F_{\beta_{1},\beta_{2}}|_{A_{0}}$ has a Lebesgue-equivalent
ergodic invariant measure (see [Ashwin et al.(1994), Walters(1982)]); this
corresponds to the desired $\mu_{SRB}(A_{0})$ (see Subsection 4.3 of [Ashwin
et al.(1994)]). By this fact and (32),
$\Lambda_{SRB}(A_{0})=1/2\log(1-\beta_{1})+1/2\log(1+\beta_{2}).$
Note that $\Lambda_{SRB}(A_{0})=L_{\perp,\beta_{1},\beta_{2}}(0)$, where
$L_{\perp,\beta_{1},\beta_{2}}(0)$ is the normal Lyapunov exponent given by
(24). Also, it characterizes the evolution transverse to the $x$-axis
[Alexander et al.(1992)]. If it is negative, the invariant set $A_{0}$ is a Milnor
attractor. Simple computations show that, for $\beta_{1}>1/2$ or
($\beta_{1}\leq 1/2$ $\&$ $\beta_{2}<\frac{1}{1-\beta_{1}}-1$),
$\Lambda_{SRB}(A_{0})<0$ and hence $A_{0}$ is a Milnor (essential) attractor.
Take the invariant Dirac measure $\mu_{1}$ supported on the fixed point at
$(1,0)$. Using (32), $\Lambda_{\mu_{1}}(A_{0})=\log(1+\beta_{2})$, which is
positive. By this fact, $0<\Lambda_{\mu_{1}}(A_{0})\leq\Lambda_{max}(A_{0})$.
These computations show that $\Lambda_{SRB}(A_{0})<0<\Lambda_{max}(A_{0})$.
By Lemma 5, the set $Per^{+}(f)$ is dense in $A_{0}$. By the definition of
$Per^{+}(f)$ and (32), the Dirac measure supported on $\mathcal{O}(x)$, for
$x\in Per^{+}(f)$, has positive normal stability index. By this fact and
taking suitable Dirac measures supported on $\mathcal{O}(x)$, for $x\in
Per^{+}(f)$, we may find $\alpha>0$ such that $G_{\alpha}$ given by (26) is
dense in $A_{0}$. By these observations and statement (1) of Proposition 4,
$A_{0}$ has a locally riddled basin and the proof of $(a)$ is finished.
Let $\mu_{2}$ be the invariant Dirac measure supported on the fixed point at
$(0,0)$. Using (32), $\Lambda_{\mu_{2}}(A_{0})=\log(1-\beta_{1})$, which is
negative. By this fact, $\lambda_{min}(A_{0})\leq\Lambda_{\mu_{2}}(A_{0})<0$.
Also, it is easy to see that for $\beta_{1}<1/2$ and
$\beta_{2}>\frac{1}{1-\beta_{1}}-1$, $\Lambda_{SRB}(A_{0})>0$. By these facts,
for $\beta_{1}<1/2$ and $\beta_{2}>\frac{1}{1-\beta_{1}}-1$, we get
$\lambda_{min}(A_{0})<0<\Lambda_{SRB}(A_{0})$. Since $\mu_{SRB}(A_{0})$ is
equivalent to Lebesgue measure and its support is $A_{0}$, we have
$m(\bigcup_{\mu\neq\mu_{SRB}(A_{0})}G_{\mu})=0$, where $m$ is the Lebesgue
measure on $N_{0}$. Clearly, $\mu_{SRB}(A_{0})$-almost all Lyapunov exponents
are non-zero. By these facts and statement (2) of Proposition 4, $A_{0}$ is a
chaotic saddle which verifies statement $(c)$.
We apply similar arguments to prove $(b)$ and $(d)$. Indeed, for $0\leq x\leq
1/2$, we have
$d_{(x,1)}F_{\beta_{1},\beta_{2}}=\begin{pmatrix}2&\quad 0\\\ 0&\quad
dg_{1,\beta_{1}}(1)\\\ \end{pmatrix}=\begin{pmatrix}2&\quad 0\\\
0&\quad\beta_{1}\\\ \end{pmatrix}$
and for $1/2<x\leq 1$, we have
$d_{(x,1)}F_{\beta_{1},\beta_{2}}=\begin{pmatrix}2&\quad 0\\\ 0&\quad
dg_{2,\beta_{2}}(1)\\\ \end{pmatrix}=\begin{pmatrix}2&\quad 0\\\ 0&\quad
2-\beta_{2}\\\ \end{pmatrix}$
where $g_{i,\beta_{i}}$, $i=1,2$, given by (11). Hence, the normal stability
index is computed as follows:
$\displaystyle\lambda_{\mu}(A_{1})=\Lambda_{\mu}(A_{1})=\int_{A_{1}\cap[0,1/2]}\log(\beta_{1})d\mu(x)+\int_{A_{1}\cap(1/2,1]}\log(2-\beta_{2})d\mu(x).$
(33)
As a consequence,
$\displaystyle\lambda_{\mu}(A_{1})=\Lambda_{\mu}(A_{1})=\mu(A_{1}\cap[0,1/2])\log(\beta_{1})+\mu(A_{1}\cap(1/2,1])\log(2-\beta_{2}).$
(34)
As above, $F_{\beta_{1},\beta_{2}}|_{A_{1}}$ has a Lebesgue-equivalent ergodic
invariant measure; this corresponds to the desired $\mu_{SRB}(A_{1})$. By this
fact and (34), for the attractor $A_{1}$,
$\Lambda_{SRB}(A_{1})=1/2\log(\beta_{1})+1/2\log(2-\beta_{2})$. Note that
$\Lambda_{SRB}(A_{1})=L_{\perp,\beta_{1},\beta_{2}}(1)$, where
$L_{\perp,\beta_{1},\beta_{2}}(1)$ is given by (25). Simple computations show
that, for $\beta_{1}<1/2$ or ($\beta_{1}\geq 1/2$ $\&$
$\beta_{2}>2-\frac{1}{\beta_{1}}$), $\Lambda_{SRB}(A_{1})<0$ and hence $A_{1}$
is a Milnor (essential) attractor. Take the invariant Dirac measure $\nu_{1}$
supported on the fixed point at $(1,1)$. Using (34),
$\Lambda_{\nu_{1}}(A_{1})=\log(2-\beta_{2})$ which is positive. By this fact,
$0<\Lambda_{\nu_{1}}(A_{1})\leq\Lambda_{max}(A_{1})$. These computations show
that $\Lambda_{SRB}(A_{1})<0<\Lambda_{max}(A_{1})$. By the argument applied in
$(a)$, we may find $\alpha>0$ such that $G_{\alpha}$ given by (26) is dense in
$A_{1}$. By these observations and statement (1) of Proposition 4, $A_{1}$ has
a locally riddled basin and the proof of $(b)$ is finished.
Let $\nu_{2}$ be the invariant Dirac measure supported on the fixed point at
$(0,1)$. Using (34), $\Lambda_{\nu_{2}}(A_{1})=\log(\beta_{1})$ which is
negative. By this fact, $\lambda_{min}(A_{1})\leq\Lambda_{\nu_{2}}(A_{1})<0$.
Also, it is easy to see that for $\beta_{1}>1/2$,
$\beta_{2}<2-\frac{1}{\beta_{1}}$, $\Lambda_{SRB}(A_{1})>0$. By these facts,
if $\beta_{1}>1/2$, $\beta_{2}<2-\frac{1}{\beta_{1}}$, then
$\lambda_{min}(A_{1})<0<\Lambda_{SRB}(A_{1})$. Since $\mu_{SRB}(A_{1})$ is
equivalent to Lebesgue measure and its support is $A_{1}$, we have
$m(\bigcup_{\mu\neq\mu_{SRB}(A_{1})}G_{\mu})=0$. By these facts and statement
(2) of Proposition 4, $A_{1}$ is a chaotic saddle which verifies statement
$(d)$. ∎
Figure 2. The basins of attraction for the attractors $A_{0}$ and $A_{1}$. The
blue region in each figure corresponds to the basin of attraction
$\mathcal{B}(A_{0})$ while the yellow region corresponds to the basin of
attraction $\mathcal{B}(A_{1})$. The left frame depicts the basins of
attraction of $A_{0}$ and $A_{1}$, for $\beta_{1}=0.4$, $\beta_{2}=0.5$. In
this case $A_{i}$, $i=0,1$, are Milnor attractors and $A_{0}$ has a locally
riddled basin. The right frame shows the basins of attraction of $A_{0}$ and
$A_{1}$ for $\beta_{1}=0.3$, $\beta_{2}=0.5$. In this case $A_{0}$ is a
chaotic saddle and the basin $\mathcal{B}(A_{0})$ has zero measure, while
$A_{1}$ is a Milnor attractor.
Figure 3. The basins of attraction for the attractors $A_{0}$ and $A_{1}$. The
blue region in each figure corresponds to the basin of attraction
$\mathcal{B}(A_{0})$ while the yellow region corresponds to the basin of
attraction $\mathcal{B}(A_{1})$. The left frame depicts the basins of
attraction of $A_{0}$ and $A_{1}$, for $\beta_{1}=0.7$, $\beta_{2}=0.65$. In
this case $A_{i}$, $i=0,1$, are Milnor attractors and $A_{1}$ has a locally
riddled basin. The right frame shows the basins of attraction of $A_{0}$ and
$A_{1}$ for $\beta_{1}=0.6$, $\beta_{2}=0.25$. In this case $A_{1}$ is a
chaotic saddle and the basin $\mathcal{B}(A_{1})$ has zero measure, while
$A_{0}$ is a Milnor attractor.
Let
$\Gamma_{S}:=\\{(\beta_{1},\beta_{2}):\beta_{1},\beta_{2}\in(0,1),\
2-\frac{1}{\beta_{1}}<\beta_{2}<\frac{1}{1-\beta_{1}}-1\\}.$ (35)
If we set
$\displaystyle\Gamma_{\frac{1}{2}}=\\{(\beta_{1},\beta_{2}):\beta_{2}=\frac{1}{2},\quad\beta_{1}\in(\frac{1}{3},\frac{2}{3})\\}$
(36)
then $\Gamma_{\frac{1}{2}}\subset\Gamma_{S}$: indeed, for $\beta_{2}=\frac{1}{2}$,
the defining inequalities $2-\frac{1}{\beta_{1}}<\frac{1}{2}<\frac{1}{1-\beta_{1}}-1$
reduce precisely to $\frac{1}{3}<\beta_{1}<\frac{2}{3}$. Therefore, $\Gamma_{S}$ is a
nonempty open region.
###### Corollary 7.
For each $(\beta_{1},\beta_{2})\in\Gamma_{S}$, both invariant sets $A_{0}$
and $A_{1}$ are Milnor attractors and have locally riddled basins.
Figure 4. The open region $\Gamma_{S}$, where for each
$(\beta_{1},\beta_{2})\in\Gamma_{S}$, both invariant sets $A_{0}$ and
$A_{1}$ are Milnor attractors.
We extend the above results to the general case. Let
$F_{\beta_{1},\beta_{2}}\in\mathcal{F}$ be a skew product of the form (1)
whose fiber maps $g_{i,\beta_{1},\beta_{2}}$, $i=1,2$, satisfy conditions
$(I1)$-$(I3)$. For $j=0,1$, we take
$\alpha_{\beta_{1}\beta_{2}}^{j}:=dg_{1,\beta_{1}}(j)dg_{2,\beta_{2}}(j),$
(37)
where $dg_{i,\beta_{i}}(j)$, $i=1,2$, are the derivatives of the fiber maps
$g_{i,\beta_{i}}$ at the point $j$.
###### Theorem 8.
Let $F_{\beta_{1},\beta_{2}}\in\mathcal{F}$ be a skew product of the form (1)
whose fiber maps $g_{i,\beta_{1},\beta_{2}}$, $i=1,2$, satisfy conditions
$(I1)$-$(I3)$. Then the following statements hold:
1. $(a)$
If there exists $\beta_{1}^{r}\in(0,1)$ such that for each
$\beta_{1}<\beta_{1}^{r}$, the sign of $\log(\alpha_{\beta_{1}\beta_{2}}^{0})$
changes from negative to positive by varying $\beta_{2}$, then, there exists a
smooth function $\beta_{1}\mapsto\xi(\beta_{1})$ with $\xi(\beta_{1}^{r})=1$
such that
1. $(i)$
if $\beta_{1}<\beta_{1}^{r}$ and $\beta_{2}<\xi(\beta_{1})$, then $A_{0}$ is a
Milnor (essential) attractor and has a locally riddled basin,
2. $(ii)$
if $\beta_{1}<\beta_{1}^{r}$ and $\beta_{2}>\xi(\beta_{1})$, then $A_{0}$ is a
chaotic saddle;
2. $(b)$
If there exists $\beta_{1}^{\ell}\in(0,1)$ such that for each
$\beta_{1}>\beta_{1}^{\ell}$, the sign of
$\log(\alpha_{\beta_{1}\beta_{2}}^{1})$ changes from positive to negative by
varying $\beta_{2}$, then, there exists a smooth function
$\beta_{1}\mapsto\zeta(\beta_{1})$ with $\zeta(\beta_{1}^{\ell})=0$ such that
1. $(i)$
if $\beta_{1}>\beta_{1}^{\ell}$ and $\beta_{2}>\zeta(\beta_{1})$, then $A_{1}$
is a Milnor (essential) attractor and has a locally riddled basin;
2. $(ii)$
if $\beta_{1}>\beta_{1}^{\ell}$ and $\beta_{2}<\zeta(\beta_{1})$, then $A_{1}$
is a chaotic saddle.
###### Proof.
To prove the theorem, we closely follow the proof of Theorem 6 and omit some details. As
we have seen before there is only one normal direction, so the normal
stability index $\Lambda_{\mu^{i}}$, $i=0,1$, of an ergodic invariant
probability measure
$\mu^{i}\in\mathcal{E}_{F_{\beta_{1},\beta_{2}}|_{N_{i}}}(A_{i})$ is equal to
the normal Lyapunov exponent $\lambda_{{\mu}^{i}}$.
For $0\leq x\leq 1/2$,
$d_{(x,y)}F_{\beta_{1},\beta_{2}}=\begin{pmatrix}2&0\\ 0&dg_{1,\beta_{1}}(y)\end{pmatrix}$
and for $1/2<x\leq 1$,
$d_{(x,y)}F_{\beta_{1},\beta_{2}}=\begin{pmatrix}2&0\\ 0&dg_{2,\beta_{2}}(y)\end{pmatrix}$
where $g_{i,\beta_{i}}$, $i=1,2$, are the fiber maps of
$F_{\beta_{1},\beta_{2}}$.
Hence, the normal stability index $\Lambda_{\mu^{i}}$ for the attractor
$A_{i}$, $i=0,1$, is computed as follows:
$\displaystyle\lambda_{\mu^{i}}=\Lambda_{{\mu}^{i}}=\int_{A_{i}\cap[0,1/2]}\log(dg_{1,\beta_{1}}(i))d\mu^{i}(x)+\int_{A_{i}\cap(1/2,1]}\log(dg_{2,\beta_{2}}(i))d\mu^{i}(x).$
(38)
Therefore,
$\displaystyle\lambda_{\mu^{i}}=\Lambda_{{\mu}^{i}}=\mu^{i}(A_{i}\cap[0,1/2])\log(dg_{1,\beta_{1}}(i))+\mu^{i}(A_{i}\cap(1/2,1])\log(dg_{2,\beta_{2}}(i)).$
(39)
Note that $F_{\beta_{1},\beta_{2}}|_{A_{i}}$, $i=0,1$, is piecewise expanding,
hence, it has a Lebesgue-equivalent ergodic invariant measure which is the
desired $\mu_{SRB}^{i}$. By this fact and (39),
$\Lambda_{SRB}^{i}=1/2\log(dg_{1,\beta_{1}}(i))+1/2\log(dg_{2,\beta_{2}}(i)).$
By (39), for each invariant measure $\mu^{i}$, $\Lambda_{{\mu}^{i}}$ is
smoothly dependent on the normal parameters $\beta_{1}$ and $\beta_{2}$. This
fact and the hypothesis of statements $(a)$ and $(b)$ imply that there exist
smooth functions $\beta_{1}\mapsto\xi(\beta_{1})$ and
$\beta_{1}\mapsto\zeta(\beta_{1})$ such that they satisfy the following
properties:
$\Lambda_{SRB}^{0}<0\quad\text{if}\quad\beta_{1}<\beta_{1}^{r}\ \&\
\beta_{2}<\xi(\beta_{1}),\quad\text{and}\quad\Lambda_{SRB}^{0}>0\quad\text{if}\quad\beta_{1}<\beta_{1}^{r}\
\&\ \beta_{2}>\xi(\beta_{1}),$ (40)
$\Lambda_{SRB}^{1}<0\quad\text{if}\quad\beta_{1}>\beta_{1}^{\ell}\ \&\
\beta_{2}>\zeta(\beta_{1}),\quad\text{and}\quad\Lambda_{SRB}^{1}>0\quad\text{if}\quad\beta_{1}>\beta_{1}^{\ell}\
\&\ \beta_{2}<\zeta(\beta_{1}).$ (41)
Take the invariant Dirac measure $\nu_{1}^{0}$ supported on the fixed point at
$(1,0)$. Using (39), $\Lambda_{\nu_{1}^{0}}=\log(dg_{2,\beta_{2}}(0))$ which
is positive by condition $(I3)$. By this fact,
$0<\Lambda_{\nu_{1}^{0}}\leq\Lambda_{max}^{0}$. These computations show that
$\Lambda_{SRB}^{0}<0<\Lambda_{max}^{0}$ for the attractor $A_{0}$. Using (39)
and Lemma 4, we apply the argument used in the proof of Theorem 6 to find
$\alpha^{0}>0$ such that the subset $G_{\alpha^{0}}$ given by (26) is dense in
$A_{0}$. By these observations and statement (1) of Proposition 4, $A_{0}$ has
a locally riddled basin and the proof of the first statement of $(a)$ is
finished.
Similarly, we take the invariant Dirac measure $\nu_{1}^{1}$ supported on the
fixed point at $(1,1)$. Using (39),
$\Lambda_{\nu_{1}^{1}}=\log(dg_{2,\beta_{2}}(1))$, which is positive by
condition $(I3)$. By this fact,
$0<\Lambda_{\nu_{1}^{1}}\leq\Lambda_{max}^{1}$. These computations show that
$\Lambda_{SRB}^{1}<0<\Lambda_{max}^{1}$. As above, using Lemma 4 and (39), we
can apply the argument used in the proof of Theorem 6 to find $\alpha^{1}>0$
such that the subset $G_{\alpha^{1}}$ given by (26) is dense in $A_{1}$. By
these observations and statement (1) of Proposition 4, $A_{1}$ has a locally
riddled basin and the proof of the first statement of $(b)$ is finished.
Let $\nu_{0}^{0}$ be the invariant Dirac measure supported on the fixed point
at $(0,0)$. Using (39), $\Lambda_{\nu_{0}^{0}}=\log(dg_{1,\beta_{1}}(0))$
which is negative by condition $(I2)$. By this fact,
$\lambda_{min}^{0}\leq\Lambda_{\nu_{0}^{0}}<0$. Also, by (40) for
$\beta_{2}>\xi(\beta_{1})$, $\Lambda_{SRB}^{0}>0$. By these facts, for
$\beta_{2}>\xi(\beta_{1})$, we get $\lambda_{min}^{0}<0<\Lambda_{SRB}^{0}$.
Since $\mu_{SRB}^{0}$ is equivalent to the Lebesgue measure and its support is $A_{0}$, we have
$m(\bigcup_{\mu\neq\mu_{SRB}^{0}}G_{\mu})=0$, where $m$ is the Lebesgue
measure on $N_{0}$. Clearly, $\mu_{SRB}^{0}$-almost all Lyapunov exponents are
non-zero. By these facts and statement (2) of Proposition 4, $A_{0}$ is a
chaotic saddle which verifies the second statement of $(a)$.
Let $\nu_{0}^{1}$ be the invariant Dirac measure supported on the fixed point
at $(0,1)$. Using (39), $\Lambda_{\nu_{0}^{1}}=\log(dg_{1,\beta_{1}}(1))$,
which is negative by condition $(I2)$. By this fact,
$\lambda_{min}^{1}\leq\Lambda_{\nu_{0}^{1}}<0$. Also, by (41), for
$\beta_{2}<\zeta(\beta_{1})$, $\Lambda_{SRB}^{1}>0$. By these facts, if
$\beta_{2}<\zeta(\beta_{1})$, then $\lambda_{min}^{1}<0<\Lambda_{SRB}^{1}$.
Since $\mu_{SRB}^{1}$ is equivalent to the Lebesgue measure and its support is $A_{1}$, we have
$m(\bigcup_{\mu\neq\mu_{SRB}^{1}}G_{\mu})=0$. By these facts and statement (2)
of Proposition 4, $A_{1}$ is a chaotic saddle which verifies the second
statement of $(b)$. ∎
Let
$\Gamma_{G}:=\\{(\beta_{1},\beta_{2}):\beta_{1},\beta_{2}\in(0,1),\quad\zeta(\beta_{1})<\beta_{2}<\xi(\beta_{1})\\}.$
(42)
Then, $\Gamma_{S}\subset\Gamma_{G}$, and hence, $\Gamma_{G}$ is a nonempty
open region in the $\beta_{1}\beta_{2}$-plane.
###### Corollary 9.
For each $(\beta_{1},\beta_{2})\in\Gamma_{G}$, both invariant sets $A_{0}$
and $A_{1}$ are Milnor attractors and have locally riddled basins.
## 4\. Random walk model
In this section, we introduce a random walk model which is topologically semi-
conjugate to the skew product system $F_{\beta_{1},\beta_{2}}$ given by (1).
This allows us to define a fractal boundary between the initial conditions
leading to each of the two attractors $A_{0}$ and $A_{1}$.
###### Definition 10.
Let $\mathcal{S}$ be the set of step skew product systems of the form
$G_{\beta_{1},\beta_{2}}^{+}:\Sigma_{2}^{+}\times\mathbb{I}\to\Sigma_{2}^{+}\times\mathbb{I},\
\
G_{\beta_{1},\beta_{2}}^{+}(\omega,y)=(\sigma\omega,g_{\omega_{0},\beta_{\omega_{0}}}(y)),$
(43)
where $\mathbb{I}=[0,1]$, $\beta_{1},\beta_{2}\in(0,1)$ such that the fiber
maps $g_{1,\beta_{1}}$ and $g_{2,\beta_{2}}$ are strictly increasing $C^{2}$
diffeomorphisms given by (7) satisfying conditions $(I1)-(I3)$.
We will pick the diffeomorphisms $g_{1,\beta_{1}}$ and $g_{2,\beta_{2}}$
randomly, independently at each iterate, with positive probabilities $p_{1}$
and $p_{2}=1-p_{1}$. This corresponds to taking a Bernoulli measure on
$\Sigma_{2}^{+}$ from which we pick $\omega$. The obtained random compositions
$g_{\omega,\beta_{1},\beta_{2}}^{n}(y)=g_{\omega_{n-1},\beta_{\omega_{n-1}}}\circ\dots\circ
g_{\omega_{0},\beta_{\omega_{0}}}(y),\quad\text{for}\ n\geq 1,\quad
g_{\omega,\beta_{1},\beta_{2}}^{0}(y)=y$ (44)
form a random walk on the interval.
We define the _standard measure_ $s$ on $\Sigma_{2}^{+}\times\mathbb{I}$ by
the product of Bernoulli measure $\mathbb{P}^{+}$ and the Lebesgue measure on
$\mathbb{I}$.
Given a skew product $G_{\beta_{1},\beta_{2}}^{+}\in\mathcal{S}$, the _normal
Lyapunov exponent_ of $G_{\beta_{1},\beta_{2}}^{+}$ at a point
$(\omega,y)\in\Sigma_{2}^{+}\times\mathbb{I}$ is
$\lim_{n\to\infty}\frac{1}{n}\ln(g^{\prime}_{\omega_{n-1},\beta_{\omega_{n-1}}}(g_{\omega,\beta_{1},\beta_{2}}^{n-1}(y))\dots
g_{\omega_{0},\beta_{\omega_{0}}}^{\prime}(y))=\lim_{n\to\infty}\frac{1}{n}\sum_{i=0}^{n-1}\ln(g_{\omega_{i},\beta_{\omega_{i}}}^{\prime}(g_{\omega,\beta_{1},\beta_{2}}^{i}(y))),$
(45)
in case the limit exists. Since $y=0,1$ are fixed points of $g_{i,\beta_{i}}$,
$i=1,2$, by Birkhoff’s ergodic theorem, we obtain for $y=0,1$ that
$L_{\beta_{1},\beta_{2}}(y)=\lim_{n\to\infty}\frac{1}{n}\sum_{i=0}^{n-1}\ln(g_{\omega_{i},\beta_{\omega_{i}}}^{\prime}(g_{\omega,\beta_{1},\beta_{2}}^{i}(y)))=\int_{\Sigma_{2}^{+}}\ln(g_{\omega_{0},\beta_{\omega_{0}}}^{\prime}(y))d\mathbb{P}^{+}(\omega)=\sum_{i=1}^{2}p_{i}\ln
g_{i,\beta_{i}}^{\prime}(y)$ (46)
for $\mathbb{P}^{+}$-almost all $\omega\in\Sigma_{2}^{+}$.
The subspaces
$\mathbb{A}_{0}:=\Sigma_{2}^{+}\times\\{0\\},\ \ \text{and}\ \
\mathbb{A}_{1}:=\Sigma_{2}^{+}\times\\{1\\}$
are invariant under $G^{+}_{\beta_{1},\beta_{2}}$, for each
$\beta_{1},\beta_{2}\in(0,1)$. The basins are
$\mathcal{B}_{\beta_{1},\beta_{2}}(\mathbb{A}_{0}):=\\{(\omega,y):d((G_{\beta_{1},\beta_{2}}^{+})^{n}(\omega,y),\mathbb{A}_{0})\to
0\ \text{as}\ n\to\infty\\},$
$\mathcal{B}_{\beta_{1},\beta_{2}}(\mathbb{A}_{1}):=\\{(\omega,y):d((G_{\beta_{1},\beta_{2}}^{+})^{n}(\omega,y),\mathbb{A}_{1})\to
0\ \text{as}\ n\to\infty\\},$
where $d(y,A)=\inf_{z\in A}\|y-z\|$, for each subset $A\subset\mathbb{I}$.
First, we consider a specific example of a step skew product system from
$\mathcal{S}$ with the fiber maps
$g_{1,\beta_{1}}(y)=y+y(1-y)(y-\beta_{1}),\ \ g_{2,1/2}(y)=y-y(1-y)(y-1/2),$
(47)
with $\beta_{2}=\frac{1}{2}$ and illustrate time series of random walks for
some values of $\beta_{1}$. Using (46), we get
$L_{\beta_{1},1/2}(0)=\frac{1}{2}\ln(1-\beta_{1})+\frac{1}{2}\ln\frac{3}{2},\ \text{and}\
L_{\beta_{1},1/2}(1)=\frac{1}{2}\ln\beta_{1}+\frac{1}{2}\ln\frac{3}{2}.$ (48)
Note that $L_{\beta_{1},1/2}(0)<0$ and $L_{\beta_{1},1/2}(1)<0$ for each
$1/3<\beta_{1}<2/3$.
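The exponents in (48) and a sample path of the random walk (44) can be reproduced in a few lines of Python (a sketch; the seed and the number of iterates are arbitrary choices):

```python
import math, random

def g1(y, b1): return y + y * (1 - y) * (y - b1)   # fiber map from (47)
def g2(y):     return y - y * (1 - y) * (y - 0.5)  # fiber map from (47), beta2 = 1/2

for b1 in (2/5, 1/2, 3/5):
    L0 = 0.5 * math.log(1 - b1) + 0.5 * math.log(1.5)   # Eq. (48)
    L1 = 0.5 * math.log(b1) + 0.5 * math.log(1.5)       # Eq. (48)
    print(f"beta1 = {b1:.2f}: L(0) = {L0:+.4f}, L(1) = {L1:+.4f}")

# one sample path of the random walk (44): each map picked with probability 1/2
random.seed(1)
b1, y = 2/5, 0.5
for n in range(200):
    y = g1(y, b1) if random.random() < 0.5 else g2(y)
print("y after 200 random iterates:", y)
```

Both exponents are negative for $1/3<\beta_{1}<2/3$, so a typical sample path eventually settles near one of the endpoints; plotting the successive values of $y$ reproduces time series of the kind shown in the right frames of Figures 5-7.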
Figure 5. The left frame depicts the graphs of $g_{1,\beta_{1}}$, $g_{2,1/2}$,
for $\beta_{1}=2/5$. The right frame shows a time series of the random walk
generated by $g_{1,\beta_{1}}$, $g_{2,1/2}$, for $\beta_{1}=2/5$, both picked
with probability $1/2$.
Figure 6. The left frame depicts the graphs of $g_{1,\beta_{1}}$, $g_{2,1/2}$,
for $\beta_{1}=1/2$. The right frame shows a time series of the random walk
generated by $g_{1,\beta_{1}}$, $g_{2,1/2}$, for $\beta_{1}=1/2$, both picked
with probability $1/2$.
Figure 7. The left frame depicts the graphs of $g_{1,\beta_{1}}$, $g_{2,1/2}$,
for $\beta_{1}=3/5$. The right frame shows a time series of the random walk
generated by $g_{1,\beta_{1}}$, $g_{2,1/2}$, for $\beta_{1}=3/5$, both picked
with probability $1/2$.
The graphs of $g_{1,\beta_{1}}$, $g_{2,1/2}$ are illustrated in the left
frames of Fig. 5, Fig. 6 and Fig. 7, for parameter values
$\beta_{1}=2/5,1/2,3/5$, respectively. The right panels of these figures show
time series of the random walk generated by $g_{1,\beta_{1}}$, $g_{2,1/2}$,
for $\beta_{1}=2/5,1/2,3/5$, both picked with probability $1/2$. For these
values of $\beta_{1}$, the normal Lyapunov exponents at $0$ and $1$ are
negative.
###### Theorem 11.
Let $G^{+}_{\beta_{1},\beta_{2}}\in\mathcal{S}$ be a step skew product whose
fiber maps are defined by (11) and let $(\beta_{1},\beta_{2})\in\Gamma_{S}$, where
$\Gamma_{S}$ is given by (35). Then, the sets $\mathbb{A}_{0}$ and
$\mathbb{A}_{1}$ are Milnor attractors for $G^{+}_{\beta_{1},\beta_{2}}$. They
attract sets of positive standard measure and the union of the basins
$\mathcal{B}_{\beta_{1},\beta_{2}}(\mathbb{A}_{i})$, $i=0,1$, has full
standard measure.
###### Proof.
By (46) and the definition of $\Gamma_{S}$, $L_{\beta_{1},\beta_{2}}(0)<0$ and
$L_{\beta_{1},\beta_{2}}(1)<0$.
###### Lemma 12.
Let $(\beta_{1},\beta_{2})\in\Gamma_{S}$ and take
$r_{\beta_{1},\beta_{2}}(\omega)=\sup\\{y\in\mathbb{I}:\lim_{n\to\infty}g_{\omega,\beta_{1},\beta_{2}}^{n}(y)=0\\},\quad
s_{\beta_{1},\beta_{2}}(\omega)=\inf\\{y\in\mathbb{I}:\lim_{n\to\infty}g_{\omega,\beta_{1},\beta_{2}}^{n}(y)=1\\},$
where $g_{\omega,\beta_{1},\beta_{2}}^{n}(y)$ is given by (44). Then
$r_{\beta_{1},\beta_{2}}(\omega)>0$ and $s_{\beta_{1},\beta_{2}}(\omega)<1$
for $\mathbb{P}^{+}$ almost all $\omega\in\Sigma_{2}^{+}$.
###### Proof.
The lemma is a counterpart of [Gharaei & Homburg(2017), Lemma 3.1] in our
setting. In the proof of [Gharaei & Homburg(2017), Lemma 3.1], the authors
only use the fact that the normal Lyapunov exponents at the points 0 and 1 are
negative, together with Birkhoff’s ergodic theorem. Since
$L_{\beta_{1},\beta_{2}}(0)<0$ and $L_{\beta_{1},\beta_{2}}(1)<0$, the lemma
is proved. ∎
Since, by the previous lemma, for each $(\beta_{1},\beta_{2})\in\Gamma_{S}$
the function $r_{\beta_{1},\beta_{2}}$ is positive almost everywhere, the
basin $\mathcal{B}_{\beta_{1},\beta_{2}}(\mathbb{A}_{0})$ has positive
standard measure. Since the function $s_{\beta_{1},\beta_{2}}$ is less than 1
almost everywhere, the same holds for the basin
$\mathcal{B}_{\beta_{1},\beta_{2}}(\mathbb{A}_{1})$. By these facts, the
basins $\mathcal{B}_{\beta_{1},\beta_{2}}(\mathbb{A}_{i})$ of
$G_{\beta_{1},\beta_{2}}^{+}$-invariant sets $\mathbb{A}_{i}$, $i=0,1$, have
positive standard measures. In particular, they are Milnor attractors. This
proves the first statement of the theorem.
Write $\mathcal{P}$ for the space of probability measures on $\mathbb{I}$,
equipped with the weak star topology and define
$\mathcal{T}:\mathcal{P}\to\mathcal{P}$ by
$\mathcal{T}m=\sum_{i=1}^{2}p_{i}g_{i,\beta_{i}}m.$
We recall that a stationary measure is a fixed point of $\mathcal{T}$. Note
that a probability measure $m$ is a stationary measure if and only if
$\mu^{+}=\mathbb{P}^{+}\times m$ is an invariant measure of
$G_{\beta_{1},\beta_{2}}^{+}$ with marginal $\mathbb{P}^{+}$ on
$\Sigma_{2}^{+}$, see [Gharaei & Homburg(2017), Lemma A.2]. We
say that $m$ is ergodic if $\mathbb{P}^{+}\times m$ is ergodic for
$G_{\beta_{1},\beta_{2}}^{+}$.
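For intuition, the stationary measure appearing in Lemma 13 below can be approximated empirically by iterating the inverse fiber maps with random symbols. The sketch below is illustrative only: it assumes cubic fiber maps of the form (47) (with a generic $\beta_{2}$) and inverts them by bisection, which is legitimate because the $g_{i,\beta_{i}}$ are strictly increasing on $\mathbb{I}$:

```python
import random

def g1(y, b1): return y + y * (1 - y) * (y - b1)
def g2(y, b2): return y - y * (1 - y) * (y - b2)

def inverse(g, z, tol=1e-12):
    """Invert a strictly increasing g:[0,1]->[0,1] with g(0)=0, g(1)=1."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < z else (lo, mid)
    return 0.5 * (lo + hi)

# Markov chain y -> h_i(y) = g_i^{-1}(y); its occupation measure approximates
# the stationary measure m (unique by [Gharaei & Homburg(2017), Lemma 3.4])
random.seed(0)
b1, b2, p1 = 0.5, 0.5, 0.5
y, samples = 0.5, []
for n in range(20000):
    g = (lambda t: g1(t, b1)) if random.random() < p1 else (lambda t: g2(t, b2))
    y = inverse(g, y)
    if n >= 1000:                       # discard a transient
        samples.append(y)
print("empirical mean of the stationary measure:", sum(samples) / len(samples))
```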
To proceed with the proof of Theorem 11, we show that there is a boundary
between the basins $\mathcal{B}_{\beta_{1},\beta_{2}}(\mathbb{A}_{0})$ and
$\mathcal{B}_{\beta_{1},\beta_{2}}(\mathbb{A}_{1})$. Indeed, we show that
there is an invariant measurable graph
$\psi_{\beta_{1},\beta_{2}}:\Sigma_{2}\to\mathbb{I}$ that separates the
basins: for $\mathbb{P}$-almost all $\omega$,
$\displaystyle\lim_{n\to\infty}g_{\omega,\beta_{1},\beta_{2}}^{n}(y)=\begin{cases}0&\text{if }y<\psi_{\beta_{1},\beta_{2}}(\omega),\\ 1&\text{if }y>\psi_{\beta_{1},\beta_{2}}(\omega)\end{cases}$ (51)
where $g_{\omega,\beta_{1},\beta_{2}}^{n}(y)$ is given by (44). To prove the
theorem, we closely follow [Gharaei & Homburg(2017), Theorem 3.1] and omit
some details. Let $H_{\beta_{1},\beta_{2}}^{+}$ be the skew product map whose
fiber maps are defined by $h_{i,\beta_{i}}=g_{i,\beta_{i}}^{-1}$. Then, it has
positive normal Lyapunov exponents along $\mathbb{A}_{0}$ and
$\mathbb{A}_{1}$. This means that $L_{\beta_{1},\beta_{2}}(0)>0$ and
$L_{\beta_{1},\beta_{2}}(1)>0$ for the skew product map
$H_{\beta_{1},\beta_{2}}^{+}$.
###### Lemma 13.
For the skew product $H_{\beta_{1},\beta_{2}}^{+}$ defined above, there exists
an ergodic stationary measure $m$ with $m(\\{0\\}\cup\\{1\\})=0$.
###### Proof.
The lemma is just a reformulation of [Gharaei & Homburg(2017), Lemma 3.2] in
our context. Note that to prove [Gharaei & Homburg(2017), Lemma 3.2] the
authors use only the fact that the normal Lyapunov exponents at both fixed
points $0$ and $1$ are positive. So the result holds for our setting. ∎
Take the extension skew product $H_{\beta_{1},\beta_{2}}$ of
$H_{\beta_{1},\beta_{2}}^{+}$ given by (20). Note that the stationary measure
$m$ gives an invariant measure $\mu_{m}$ for the extension skew product system
$H_{\beta_{1},\beta_{2}}$ (see [Arnold(1998)]). Its conditional measures on
fibers $\\{\omega\\}\times\mathbb{I}$ are denoted by $\mu_{m,\omega}$ (see
(21) and (22)). By [Gharaei & Homburg(2017), Lemma 3.3], the conditional
measure $\mu_{m,\omega}$ of $\mu_{m}$ is a $\delta$-measure for
$\mathbb{P}$-almost every $\omega\in\Sigma_{2}$.
By [Gharaei & Homburg(2017), Lemma 3.4], the stationary measure $m$ obtained
by Lemma 13 is unique. Thus, for each $(\beta_{1},\beta_{2})\in\Gamma_{S}$,
there is a unique stationary measure $m_{\beta_{1},\beta_{2}}$ with
$m_{\beta_{1},\beta_{2}}(\\{0\\}\cup\\{1\\})=0$. So, by these facts, there
exists a measurable function
$\psi_{\beta_{1},\beta_{2}}:\Sigma_{2}\to\mathbb{I}$ such that for
$\mathbb{P}$-almost all $\omega$,
$\lim_{n\to\infty}h^{n}_{\sigma^{-n}\omega,\beta_{1},\beta_{2}}m_{\beta_{1},\beta_{2}}=\delta_{\psi_{\beta_{1},\beta_{2}}(\omega)},$
where $h^{n}_{\sigma^{-n}\omega,\beta_{1},\beta_{2}}$ is given by (44). Note
that a function is increasing if and only if its inverse is increasing. As the
convex hull of the support of $m_{\beta_{1},\beta_{2}}$ equals $\mathbb{I}$
and $h_{1,\beta_{1}}$, $h_{2,\beta_{2}}$ are increasing, this implies that for
every $y\in(0,1)$
$\lim_{n\to\infty}h^{n}_{\sigma^{-n}\omega,\beta_{1},\beta_{2}}(y)=\psi_{\beta_{1},\beta_{2}}(\omega).$
By the fact that $h_{1,\beta_{1}}$, $h_{2,\beta_{2}}$ are increasing, we get
$\lim_{n\to\infty}(h^{n}_{\sigma^{-n}\omega,\beta_{1},\beta_{2}})^{-1}(y)=1$
if $y>\psi_{\beta_{1},\beta_{2}}(\omega)$ and
$\lim_{n\to\infty}(h^{n}_{\sigma^{-n}\omega,\beta_{1},\beta_{2}})^{-1}(y)=0$
if $y<\psi_{\beta_{1},\beta_{2}}(\omega)$. Thus,
$\lim_{n\to\infty}g^{n}_{\omega,\beta_{1},\beta_{2}}(y)=1$
if $y>\psi_{\beta_{1},\beta_{2}}(\omega)$ and
$\lim_{n\to\infty}g^{n}_{\omega,\beta_{1},\beta_{2}}(y)=0$
if $y<\psi_{\beta_{1},\beta_{2}}(\omega)$.
This observation shows that the union of the basins of attraction of
$\mathbb{A}_{0}$ and $\mathbb{A}_{1}$ has full standard measure and hence,
Theorem 11 is proved. ∎
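The pullback construction used in this proof can be illustrated numerically. The sketch below approximates $\psi_{\beta_{1},\beta_{2}}(\omega)$ by composing the inverse fiber maps along a finite past of $\omega$; the cubic fiber maps of the form (47) and all parameter choices are assumptions made only for illustration:

```python
import random

def g1(y, b1): return y + y * (1 - y) * (y - b1)
def g2(y, b2): return y - y * (1 - y) * (y - b2)

def inverse(g, z, tol=1e-12):
    lo, hi = 0.0, 1.0                  # bisection; g is strictly increasing
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < z else (lo, mid)
    return 0.5 * (lo + hi)

def psi(past, b1, b2, y0=0.5):
    """Approximate psi(omega) = lim_n h^n_{sigma^{-n}omega}(y0); `past` lists
    the symbols omega_{-n}, ..., omega_{-1}, applied in that order (cf. (44))."""
    y = y0
    for s in past:
        g = (lambda t: g1(t, b1)) if s == 1 else (lambda t: g2(t, b2))
        y = inverse(g, y)
    return y

random.seed(7)
b1, b2 = 0.4, 0.5                      # a point of Gamma_S
past = [random.choice((1, 2)) for _ in range(400)]
print("psi from y0 = 0.1:", psi(past, b1, b2, 0.1))
print("psi from y0 = 0.9:", psi(past, b1, b2, 0.9))
```

For a typical past the two printed values agree to high precision, reflecting the collapse of the conditional measures to a $\delta$-measure on the graph of $\psi_{\beta_{1},\beta_{2}}$.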
The following result describes intermingled basins for step skew product
systems $G^{+}_{\beta_{1},\beta_{2}}\in\mathcal{S}$.
###### Theorem 14.
Let $G^{+}_{\beta_{1},\beta_{2}}\in\mathcal{S}$ be a step skew product whose
fiber maps are defined by (11) and let $(\beta_{1},\beta_{2})\in\Gamma_{S}$, where
$\Gamma_{S}$ is given by (35). Let $\beta_{1}=\beta_{2}=\beta$. Then, the
invariant subset $\mathbb{A}_{\beta}=\Sigma_{2}^{+}\times\\{\beta\\}$ is a
Milnor attractor and the basins of $\mathbb{A}_{i}$ and $\mathbb{A}_{\beta}$
are intermingled, for each $i=0,1$.
###### Proof.
Let $(\beta_{1},\beta_{2})\in\Gamma_{S}$ and $\beta_{1}=\beta_{2}=\beta$.
Then, the subset $\mathbb{A}_{\beta}=\Sigma_{2}^{+}\times\\{\beta\\}$ is
invariant. By (46) and the definition of $\Gamma_{S}$, the normal Lyapunov
exponents at the points 0, 1 and $\beta$ are negative. Thus, by Theorem 11,
$\mathbb{A}_{\beta}$ is a Milnor attractor. Now, we take the restriction skew
products
$G^{+}_{\beta_{1},\beta_{2}}|_{\Sigma_{2}\times[0,\beta]}\quad\text{and}\quad
G^{+}_{\beta_{1},\beta_{2}}|_{\Sigma_{2}\times[\beta,1]}.$ (52)
The restriction skew products given by (52) satisfy the hypothesis of [Gharaei
& Homburg(2017), Theorem 3.1], thus, the basins of $\mathbb{A}_{i}$ and
$\mathbb{A}_{\beta}$ are intermingled, for each $i=0,1$. This finishes the
proof. ∎
Let us define $\tau:\mathbb{I}\to\\{1,2\\}$ by
$\displaystyle\tau(x)=\begin{cases}1&\text{for }0\leq x\leq 1/2,\\ 2&\text{for }1/2<x\leq 1.\end{cases}$ (55)
Let $\Sigma_{12}\subset\Sigma_{2}$ be the set of bi-infinite sequences of 1’s
and 2’s which do not end with a tail of 1’s or 2’s. The metric and measure are
inherited from the space $\Sigma_{2}$. Note that $\Sigma_{12}$ is closed and
invariant by the shift map. By [Gorodnik(2017)], there exists a map $K$ that
semi-conjugates the restrictions $\sigma|_{\Sigma_{12}}$ and
$f|_{\mathbb{I}\setminus D}$, where $D$ is the set of dyadic rationals, i.e.,
rational numbers of the form $k/2^{n}$. By
definition, the subset $\Sigma_{12}$ has full Bernoulli measure and $D$
has zero Lebesgue measure (see [Gorodnik(2017)]). Then, the composition of the
semi-conjugating map $K$ and the map $\psi_{\beta_{1},\beta_{2}}$ given by
(51) yields a measurable map $\Phi_{\beta_{1},\beta_{2}}:\mathbb{I}\to\mathbb{I}$ that separates the
basins.
###### Corollary 15.
There exists a measurable graph map
$\Phi_{\beta_{1},\beta_{2}}:\mathbb{I}\to\mathbb{I}$ such that for Lebesgue-
almost all $x\in\mathbb{I}$,
$\displaystyle\lim_{n\to\infty}g_{x,\beta_{1},\beta_{2}}^{n}(y)=\begin{cases}0&\text{if }y<\Phi_{\beta_{1},\beta_{2}}(x),\\ 1&\text{if }y>\Phi_{\beta_{1},\beta_{2}}(x)\end{cases}$ (58)
where
$g_{x,\beta_{1},\beta_{2}}^{n}(y)=g_{\tau(f^{n-1}(x)),\beta_{\tau(f^{n-1}(x))}}\circ\dots\circ
g_{\tau(x),\beta_{\tau(x)}}(y)$.
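The composition $g_{x,\beta_{1},\beta_{2}}^{n}$ can be simulated directly. In the sketch below the base map $f$ is taken to be the doubling map $x\mapsto 2x\pmod 1$ (an assumption, but one consistent with the base derivative $2$ in the differentials above and with the role played by the dyadic rationals $D$); exact rational arithmetic is used because the doubling map is numerically unstable in binary floating point:

```python
from fractions import Fraction

def g1(y, b1): return y + y * (1 - y) * (y - b1)
def g2(y, b2): return y - y * (1 - y) * (y - b2)

def tau(x):                            # symbol map (55)
    return 1 if x <= Fraction(1, 2) else 2

def walk(x0, y0, b1, b2, n):
    """Iterate y along the base orbit of x0 under the (assumed) doubling map."""
    x, y = Fraction(x0), y0
    for _ in range(n):
        y = g1(y, b1) if tau(x) == 1 else g2(y, b2)
        x = (2 * x) % 1                # exact arithmetic, no binary round-off
    return y

# a non-dyadic initial base point, so the orbit never reaches the cusp set D
print(walk(Fraction(1, 3), 0.5, 0.4, 0.5, 60))
```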
The following is immediate from Theorem 11 and Corollary 15.
###### Corollary 16.
Let $F_{\beta_{1},\beta_{2}}\in\mathcal{F}$ be a skew product of the form (1)
whose fiber maps $g_{i,\beta_{1},\beta_{2}}$, $i=1,2$, are given by (11). If both
normal Lyapunov exponents $L_{\bot,\beta_{1},\beta_{2}}(i)$, $i=0,1$, are
negative, then the attractors $A_{0}$ and $A_{1}$ attract sets of positive
Lebesgue measure and the union of the basins
$\mathcal{B}_{\beta_{1},\beta_{2}}(A_{i})$, $i=0,1$, has full Lebesgue
measure.
## 5\. The existence of riddled basins and blowout bifurcation
In this section, we establish conditions under which riddled basins emerge in
the global sense.
Note that a general set of conditions under which riddled basins can occur is
as follows (see [Ott et al.(1993)], [Alexander et al.(1992)], [Cazelles(2001)]
and [Viana et al.(2009)]):
1. $(H1)$
There exists an invariant subspace $N$ whose dimension is less than the
dimension of the full-phase space.
2. $(H2)$
The dynamics on the invariant subspace $N$ has a chaotic attractor $A$.
3. $(H3)$
For typical orbits on $A$ the Lyapunov exponents for infinitesimal
perturbations in the direction transverse to $N$ are all negative.
4. $(H4)$
There is another attractor $A^{\prime}$ not belonging to $N$.
5. $(H5)$
At least one of the normal Lyapunov exponents, although negative for almost
any orbits on $A$, has finite time fluctuations that are positive.
###### Remark 17.
Note that for a riddled basin to occur, it is necessary that the attractor
lying in the invariant subspace contains a dense set of transversely unstable
points of zero Lebesgue measure; thus it is necessary that this attractor be
chaotic.
Let $F_{\beta_{1},\beta_{2}}\in\mathcal{F}$ be a skew product of the form (1)
whose fiber maps $g_{i,\beta_{1},\beta_{2}}$, $i=1,2$, are given by (11) and
$\beta_{1},\beta_{2}\in(0,1)$. In Theorem 6, we estimated the values of
parameters $\beta_{i}$, $i=1,2$, for which the locally riddled basin occurs
for both attractors $A_{0}$ and $A_{1}$. In particular, for each
$(\beta_{1},\beta_{2})\in\Gamma_{S}$, conditions $(H1)$-$(H4)$ were verified.
Condition $(H5)$ is verified in the next theorem. To do that, we show the
existence of a set of unstable periodic orbits embedded in $A_{i}$ which is
transversely unstable. This implies that at least one of the Lyapunov
exponents along the directions transverse to the invariant subspaces
experiences positive finite-time fluctuations (see [Viana et al.(2009)]).
Here, we prove the riddling in a more direct way. Indeed, in our setting, the
full space contains two chaotic attractors $A_{i}$, $i=0,1$, lying in
different invariant subspaces $N_{i}$. By Corollary 15, the system
$F_{\beta_{1},\beta_{2}}$ presents a complex fractal boundary between the
initial conditions leading to each of the two attractors. This fractal
boundary is the graph of $\Phi_{\beta_{1},\beta_{2}}$, which separates the
basins of attraction. Note that in a riddled basin, small variations in
initial conditions induce a switch between the different chaotic attractors,
while the fractal boundary makes it impossible to predict, from a given
initial condition, which trajectory in phase space the system will follow.
These facts allow us to verify the occurrence of a riddled basin (in the
global sense).
###### Theorem 18.
Let $F_{\beta_{1},\beta_{2}}\in\mathcal{F}$ be a skew product of the form (1)
whose fiber maps $g_{i,\beta_{1},\beta_{2}}$, $i=1,2$, are given by (11) and let
$(\beta_{1},\beta_{2})\in\Gamma_{S}$, where $\Gamma_{S}$ is given by (35).
Consider the chaotic attractors $A_{i}$, $i=0,1$. Then the following holds:
* $(a)$
if $\beta_{1}\leq 1/2$ and $\beta_{2}<\frac{1}{1-\beta_{1}}-1$ then
$\mathcal{B}(A_{0})$ is riddled with $\mathcal{B}(A_{1})$;
* $(b)$
if $\beta_{1}\geq 1/2$ and $\beta_{2}>2-\frac{1}{\beta_{1}}$, then
$\mathcal{B}(A_{1})$ is riddled with $\mathcal{B}(A_{0})$;
where $\mathcal{B}(A_{i})$ is the basin of attraction of $A_{i}$, for $i=0,1$.
###### Proof.
As we have seen in Section 3, the two chaotic attractors $A_{i}$, $i=0,1$, are
$SRB$ attractors for the restriction of $F_{\beta_{1},\beta_{2}}$ to the
invariant subspaces $N_{i}$. Since $(\beta_{1},\beta_{2})\in\Gamma_{S}$, by
Corollary 7, both normal Lyapunov exponents $L_{\perp,\beta_{1},\beta_{2}}(0)$
and $L_{\perp,\beta_{1},\beta_{2}}(1)$ are negative. This fact ensures that
$A_{i}$, $i=0,1$, are (essential) attractors (in the Milnor sense) in the
whole phase space (see Theorem 6).
To prove $(a)$, consider the fixed point $Q=(1,0)\in A_{0}$. We closely follow
[Alexander et al.(1992), Section 4] and construct an open set near the fixed
point $Q$ which is not contained in the basin $\mathcal{B}(A_{0})$. Indeed,
consider the graph $x\mapsto(1-x)^{\log(1+\beta_{2})}$, a subset of
$A_{0}\times[0,1]$ having a cusp at $Q=(1,0)$. This graph is strictly monotone
and concave on each side of its cusp point. Since the mapping $y\mapsto
dg_{2,\beta_{2}}(y)$ is continuous, there exists a real
$\gamma(\beta_{1},\beta_{2})>0$ sufficiently close to 1 such that if we take
$W_{\beta_{1},\beta_{2}}^{+}=\\{(x,y)\in[0,1]\times[0,1]:\gamma(\beta_{1},\beta_{2})<x<1\
\text{and}\ y>(1-x)^{\log(1+\beta_{2})}\\}$
then for $(x,y)\in W_{\beta_{1},\beta_{2}}^{+}$, $dg_{2,\beta_{2}}(y)>1$. Thus
any point in $W_{\beta_{1},\beta_{2}}^{+}$ escapes from the attractor $A_{0}$.
By Corollary 16, the union of the basins
$\mathcal{B}_{\beta_{1},\beta_{2}}(A_{i})$, $i=0,1$, has full Lebesgue
measure. Also, the graph of the map $\Phi_{\beta_{1},\beta_{2}}$ obtained in
Corollary 15 is the fractal boundary which separates the basin of attraction.
By these facts, if $\gamma(\beta_{1},\beta_{2})$ is close enough to 1, for any
point $(x,y)\in W_{\beta_{1},\beta_{2}}^{+}$, there is $n>1$ such that
$g_{x,\beta_{1},\beta_{2}}^{n}(y)>\Phi_{\beta_{1},\beta_{2}}(x)$ and hence
$(x,y)$ escapes to $\mathcal{B}(A_{1})$ after some iterates.
Let $W_{\beta_{1},\beta_{2}}=\cup_{n\geq
0}F_{\beta_{1},\beta_{2}}^{-n}(W_{\beta_{1},\beta_{2}}^{+})$. Then
$W_{\beta_{1},\beta_{2}}$ is an open set and its boundary has a cusp at each
point of the set $\\{f^{-n}(1)\\}$. The set $\\{f^{-n}(1)\\}$ is dense in
$\mathbb{I}$ and thus every neighborhood of any point in $A_{0}$ intersects
$W_{\beta_{1},\beta_{2}}$. Thus the basin $\mathcal{B}(A_{0})$ is riddled with
the basin $\mathcal{B}(A_{1})$.
To prove ($b$), we construct an open set near the fixed point $Z=(1,1)\in
A_{1}$ which is not contained in the basin $\mathcal{B}(A_{1})$. Consider the
graph $x\mapsto x^{\log(2-\beta_{2})}$, a subset of $A_{1}\times[0,1]$ which
has a cusp at $Z=(1,1)$. This graph is strictly monotone and concave on each
side of its cusp point. Since the mapping $y\mapsto dg_{2,\beta_{2}}(y)$ is
continuous, there exists a real $\eta(\beta_{1},\beta_{2})>0$ sufficiently
close to 0 such that if we take
$U_{\beta_{1},\beta_{2}}^{+}=\\{(x,y)\in[0,1]\times[0,1]:0<x<\eta(\beta_{1},\beta_{2})\
\text{and}\ y>x^{\log(2-\beta_{2})}\\},$
then for $(x,y)\in U_{\beta_{1},\beta_{2}}^{+}$, $dg_{2,\beta_{2}}(y)>1$. Thus any point in
$U_{\beta_{1},\beta_{2}}^{+}$ escapes from the attractor $A_{1}$. By applying
an argument similar to statement ($a$), any point $(x,y)\in
U_{\beta_{1},\beta_{2}}^{+}$ escapes to $\mathcal{B}(A_{0})$ after some
iterates. Let $U_{\beta_{1},\beta_{2}}=\cup_{n\geq 0}F_{\beta_{1},\beta_{2}}^{-n}(U_{\beta_{1},\beta_{2}}^{+})$. Then $U_{\beta_{1},\beta_{2}}$ is an open set and its
boundary has a cusp at each point of the set $\\{f^{-n}(1)\\}$. The set
$\\{f^{-n}(1)\\}$ is dense in $[0,1]$ and thus every neighborhood of any point
in $A_{1}$ intersects $U_{\beta_{1},\beta_{2}}$. Thus, $\mathcal{B}(A_{1})$ is riddled with the
basin $\mathcal{B}(A_{0})$. ∎
Figure 8. The left frame depicts the basins of attraction of $A_{0}$ and
$A_{1}$, for $\beta_{1}=0.3$ and $\beta_{2}=0.42$. The basin of $A_{0}$ is
riddled by the basin of $A_{1}$. The right frame shows the basins of
attraction of $A_{0}$ and $A_{1}$ for $\beta_{1}=0.7$ and $\beta_{2}=0.58$. The
basin of $A_{1}$ is riddled by the basin of $A_{0}$. The blue region in both
figures corresponds to the basin of attraction $\mathcal{B}(A_{0})$, while the
yellow region corresponds to the basin of attraction $\mathcal{B}(A_{1})$.
Let $(\beta_{1},\beta_{2})\in\Gamma_{S}$. Take $\beta:=\beta_{1}=\beta_{2}$
and
$N_{\beta}:=\\{(x,y):\quad 0\leq x\leq 1,\quad y=\beta\\}.$ (59)
Then $N_{\beta}$ is invariant under $F_{\beta,\beta}$ and the
restriction of $F_{\beta,\beta}$ to this invariant subspace possesses a
chaotic attractor $A_{\beta}$.
The normal Lyapunov exponent for $F_{\beta,\beta}$ is given by
$\displaystyle
L_{\perp,\beta,\beta}(\beta)=\frac{1}{2}\ln(-\beta^{2}+\beta+1)+\frac{1}{2}\ln(\beta^{2}-\beta+1).$
(60)
Simple computation shows that $L_{\perp,\beta,\beta}(\beta)$ is negative for
each $\beta\in(0,1)$: indeed,
$(1+\beta-\beta^{2})(1-\beta+\beta^{2})=1-\beta^{2}(1-\beta)^{2}<1$. Note that
the intervals $[0,\beta]$ and $[\beta,1]$ are invariant under both fiber maps
$g_{1,\beta}$ and $g_{2,\beta}$. By these facts, Theorem 11 and the comments
before Corollary 15, we get the next result.
###### Corollary 19.
$F_{\beta,\beta}$ exhibits three Milnor attractors $A_{0}$, $A_{1}$ and
$A_{\beta}$. Moreover, the basins of $A_{i}$ and $A_{\beta}$ are intermingled,
for each $i=0,1$.
Figure 9. The basins of attraction for the attractors $A_{0}$, $A_{1}$ and
$A_{\beta}$, for parameter values $\beta_{1}=\beta_{2}=\beta=0.5$. The blue
region in the figure corresponds to the basin of attraction
$\mathcal{B}(A_{0})$, the green region in the figure corresponds to the basin
of attraction $\mathcal{B}(A_{\beta})$, while the yellow region corresponds to the
basin of attraction $\mathcal{B}(A_{1})$. The intermingled basin is observed.
We recall that the blowout bifurcation [Ott & Sommer(1994), Ashwin et
al.(1996)] occurs when $L_{\perp,\beta_{1},\beta_{2}}$ crosses zero.
###### Corollary 20.
Let $F_{\beta_{1},\beta_{2}}\in\mathcal{F}$ be a skew product of the form (1)
whose fiber maps $g_{i,\beta_{1},\beta_{2}}$, $i=1,2$, are given by (11). Then the
following holds:
* $(a)$
$F_{\beta_{1},\beta_{2}}$ exhibits a (subcritical) hysteretic blowout
bifurcation on passing through any $\beta_{1}<1/2$ and
$\beta_{2}=\frac{1}{1-\beta_{1}}-1$;
* $(b)$
$F_{\beta_{1},\beta_{2}}$ exhibits a (subcritical) hysteretic blowout
bifurcation on passing through any $\beta_{1}>1/2$ and
$\beta_{2}=2-\frac{1}{\beta_{1}}$.
###### Proof.
By the proof of Theorem 6, for each $(\beta_{1},\beta_{2})\in\Gamma_{S}$, the
normal Lyapunov exponents $L_{\perp,\beta_{1},\beta_{2}}(i)$, $i=0,1$, are
smoothly dependent on the normal parameters $\beta_{1}$ and $\beta_{2}$. By
this fact and Theorem 6, $L_{\perp,\beta_{1},\beta_{2}}(i)$ crosses zero at
any $\beta_{1}<1/2$ and $\beta_{2}=\frac{1}{1-\beta_{1}}-1$ and at any
$\beta_{1}>1/2$ and $\beta_{2}=2-\frac{1}{\beta_{1}}$. Thus, by definition of
a hysteretic blowout bifurcation and Theorem 6, the result is immediate. ∎
## 6\. Conclusion
We have investigated the formation of locally riddled basins of attraction and
chaotic saddles for a two-parameter family $F_{\beta_{1},\beta_{2}}$,
$\beta_{1},\beta_{2}\in(0,1)$, of skew product systems defined on the plane.
Our model exhibits two distinct chaotic attractors $A_{0}$ and $A_{1}$ lying
in two different invariant subspaces. We have analyzed the model rigorously
using the results of [Ashwin et al.(1996)] and estimated the range of values
of parameters $\beta_{i}$, $i=1,2$, such that the attractor $A_{0}$ or $A_{1}$
has a locally riddled basin, or becomes a chaotic saddle. Then by varying the
parameters $\beta_{i}$, $i=1,2$, in an open region in the
$\beta_{1}\beta_{2}$-plane, we have shown the occurrence of riddled basins (in
the global sense) and of a hysteretic blowout bifurcation. To prove the
riddling of the basins, we have semi-conjugated the system to a random walk
model and obtained a complex fractal boundary between the initial conditions
leading to each of the two attractors. This boundary separates the basins of
attraction and makes it impossible to predict, from a given initial condition,
which trajectory in phase space the system will follow. Moreover, it was shown
that, by varying the parameters in an open region, a new chaotic attractor
appears when $\beta_{1}=\beta_{2}$. Also, the basin of this new attractor is
intermingled with the basins of both attractors $A_{0}$ and $A_{1}$. Numerical
simulations were presented graphically to confirm the validity of our results.
## References
* [Adler & Flatto(1991)] Adler, R. & Flatto, L. [1991] “Geodesic flows, interval maps, and symbolic dynamics,” Bull. Amer. Math. Soc. 25(2), 229–334.
* [Alexander et al.(1992)] Alexander, J. C., Yorke, J. A., You, Z. & Kan, I. [1992] “Riddled basins,” International Journal of Bifurcation and Chaos. 2(04), 795–813.
* [Arnold(1998)] Arnold, L. [1998] Random dynamical systems, (Springer Verlag).
* [Ashwin et al.(1996)] Ashwin, P., Buescu, J. & Stewart, I. [1996] “From attractor to chaotic saddle: a tale of transverse instability,” Nonlinearity. 9(3), 703–738.
* [Ashwin et al.(1994)] Ashwin, P., Buescu, J., & Stewart, I. [1994] “Bubbling of attractors and synchronisation of chaotic oscillators,” Phys. Lett. A. 193 126–139.
* [Ashwin et al.(1998)] Ashwin, P., Aston, P. J., & Nicol, M. [1998] “On the unfolding of a blowout bifurcation” Physica D. 111 81–95.
* [Buescu(1997)] Buescu J. [1997] “Exotic attractors: from Liapunov stability to riddled basins” Progress in Mathematics 153 (Basel: Birkhauser).
* [Cazelles(2001)] Cazelles, B. [2001] “ Dynamics with riddled basins of attraction in models of interacting populations,” Chaos, Solitons and Fractals. 12, 301–311.
* [Daza et al.(2016)] Daza, A., Wagemakers, A., Georgeot, B., Guery-Odelin, D. & Sanjuan, M.A.F [2016] “Basin entropy: a new tool to analyze uncertainty in dynamical systems,” Nature
* [Dudkowskia et al.(2016)] Dudkowski, D., Jafari, S., Kapitaniak, T., Kuznetsov, N. V., Leonov, G. A., & Prasad, A. [2016] “Hidden attractors in dynamical systems,” Physics Reports 637, 1–50.
* [Gharaei & Homburg(2017)] Gharaei, M. & Homburg, A.J. [2017] “Random interval diffeomorphisms,” Discrete Contin. Dyn. Syst. Ser. B. 10(2) 241–272.
* [Gorodnik(2017)] Gorodnik, A. [2017] _Dynamical systems and ergodic theory_ , Teaching Block 1, University of Bristol.
* [Heagy et al.(1995)] Heagy, J.F., Carroll, T.L & Pecora L.M. [1995] “Experimental and numerical evidence for riddled basins in coupled chaotic oscillators,” Phys. Rev. Lett. 73, 3528.
* [Krzyzewski & Szlenk(1969)] Krzyzewski, K. & Szlenk, W. [1969] “On invariant measures for expanding differentiable mappings,” Studia Math. 33 83–92.
* [Karimi & Ghane(2020)] Karimi, S. & Ghane, F. H. [2020] “Analysis of Coexistence and Extinction in a Two-Species Competition Model, ” International Journal of Bifurcation and Chaos. 30 (16) 2050248 (17 pages).
* [Lai et al.(2005)] Lai, Y. C., He, D. R. & Jiang, Y. M. [2005] “Basins of attraction in piecewise smooth Hamiltonian systems,” Physical Review E. 72(2), 025201.
* [Mane(1987)] Mane, R. [1987] Ergodic theory and differentiable dynamics, Ergebnisse der Mathematik und ihrer Grenzgebiete. 8 (3) [Results in Mathematics and Related Areas (3)] (Springer-Verlag, Berlin).
* [Melbourne(1999)] Melbourne, I. [1999] “An example of a non-asymptotically stable attractor,” Nonlinearity. 4, 835–844.
* [Mera et al.(2003)] Mera, M. E., Moran, M., Preiss, D., & Zajicek, L. [2003] “Porosity, $\sigma$-porosity and measures,” Nonlinearity. 16, 247–255.
* [Milnor(1985)] Milnor, J. [1985] “ On the concept of attractor,” Comm. Math. Phys. 99(2), 177–195.
* [Mohd Roslan & Ashwin(2016)] Mohd Roslan, U. A. & Ashwin, P. [2016] “ Local and global stability indices for a riddled basin attractor of a piecewise linear map,” Dynamical Systems. 31(3), 375–392.
* [Nakajima & Ueda(1996)] Nakajima, H. & Ueda, Y. [1996] “Riddled basins of the optimal states in learning dynamical systems,” Physica D: Nonlinear Phenomena 99(1), 35–44
* [Ott(1993)] Ott E. [1993] Chaos in dynamical systems, (Cambridge: Cambridge University Press).
* [Ott et al.(1994)] Ott, E., Alexander, J. C., Kan, I., Sommerer, J. C. & Yorke, J. A.[1994] “ The transition to chaotic attractors with riddled basins,” Physica D: Nonlinear Phenomena, 76(4), 384–410.
* [Ott & Sommer(1993)] Ott, E. & Sommerer, J. C. [1993] “A physical system with qualitatively uncertain dynamics,” Nature 365(6442), 138–140.
* [Ott et al.(1993)] Ott, E., Sommerer, J. C., Alexander, J. C., Kan, I. & Yorke, J. A.[1993] “ Scaling behavior of chaotic systems with riddled basins,” Physical review letters, 71(25), 4134.
* [Ott & Sommer(1994)] Ott, E. & Sommerer, J. C. [1994] “Blowout bifurcations: the occurrence of riddled basins and on-off intermittency,” Physics Letters A, 188(1), 39–47.
* [Pikovsky(1984)] Pikovsky, A. S. [1984] “On the interaction of strange attractors,” Zeitschrift fur Physik B Condensed Matter. 55(2), 149–154.
* [Platt et al.(1993)] Platt, N. S. E. A., Spiegel, E. A., & Tresser, C. [1993] “On-off intermittency: A mechanism for bursting,” Physical Review Letters. 70(3), 279.
* [Pugh & Shub(1989)] Pugh, C. & Shub, M. [1989] Ergodic attractors, Vol 312(1), Transactions of the American Mathematical Society, 1–54.
* [Schultz et al.(1993)] Schultz, P., Menck, P. J., Heitzig, J. & Kurths, J. [1993] “Potentials and limits to basin stability estimation,” New Journal of Physics. 19(2), 023005 2–17.
* [Staiger(2002)] Staiger, L. [2002] “How large is the set of disjunctive sequences?” J. UCS 8 (2) 348–362.
* [Viana et al.(2009)] Viana, R.L., Camargo, S., Pereira, R.F., Verges, M.C., Lopes, S.R. & Pinto, S.E.S. [2009] “ Riddled basins in complex physical and biological systems”, Journal of Computational Interdisciplinary Sciences, 1(2), 73–82.
* [Walters(1982)] Walters, P. [1982] An Introduction to Ergodic Theory, ( Berlin: Springer).
# Laboratory electron screening in nuclear resonant reactions
C. Iliadis<EMAIL_ADDRESS>Department of Physics & Astronomy, University of
North Carolina at Chapel Hill, NC 27599-3255, USA Triangle Universities
Nuclear Laboratory (TUNL), Duke University, Durham, North Carolina 27708, USA
###### Abstract
Both nonresonant and resonance reaction data are subject to laboratory
electron screening effects. For nonresonant reactions, such effects are well
documented and the measured cross sections can be corrected to find the
unscreened ones. Frequently, the procedure and expression to calculate
laboratory electron screening factors for nonresonant reactions are also
applied to isolated narrow resonances, without much theoretical support or
experimental evidence.
A simple model is applied to estimate electron screening factors, lengths, and
potentials for narrow resonances. The corrections to the measured data result
in an enhancement of the unscreened resonance strengths by less than 0.2%,
contrary to published narrow-resonance screening correction factors, which
predict a reduction of the unscreened strengths by up to 25%. Unless it can be
proven otherwise, it is recommended that measured strengths of isolated narrow
resonances not be corrected for laboratory electron screening.
The prospects of investigating laboratory electron screening effects by
measuring almost negligible differences in resonance strengths are not
promising. Instead, the difference of the resonance energy for the unscreened
and screened situation may be measurable. As an example, the case of the
$E_{r}$ $=$ $956$-keV resonance in the 27Al(p,$\gamma$)28Si reaction is
discussed. It is also demonstrated that the claim of a previously reported
detection of a resonance near $800$ keV in the 176Lu(p,n)176Hf reaction is
incorrect.
## I Introduction
Nonresonant charged-particle nuclear reaction measurements at low bombarding
energies are impacted by the presence of electrons in the vicinity of the
interacting nuclei. These electrons, either bound to individual target or
projectile atoms, or freely moving in the conduction band in the case of a
metal, give rise to an attractive potential that effectively reduces the width
of the overall potential barrier to be penetrated by the projectile.
Therefore, the astrophysical $S$ factor extracted from a nonresonant cross
section measured in the laboratory is expected to be larger compared to the
$S$ factor that would have been obtained in the absence of electrons,
especially at the lowest bombarding energies. This effect has been observed in
several experiments (see, e.g., Ref. [1]). It is important to correct the
measured cross section for such laboratory electron screening effects, and,
thereby, determine the cross section applicable to bare interacting nuclei.
The latter quantity can then be used, together with a prescription of stellar
electron screening, to calculate thermonuclear reaction rates, which are an
essential ingredient for models of stellar evolution and explosion. The
electron screening correction factors differ for the laboratory and stellar
environment. The focus of the present work is on the former. The latter have
been calculated, e.g., by Refs. [2, 3].
Many authors (see e.g., Refs. [4, 5], and references therein) have pointed out
that the magnitude of the laboratory electron screening corrections extracted
from low-energy nonresonant cross section data are larger than what is
predicted from theory. Sophisticated theoretical models have been applied to
the problem, but significant inconsistencies between theory and experiment
remain (for a review, see Ref. [5]).
The aim of the present work is not to provide more accurate predictions for
the nonresonant laboratory electron screening corrections, but to investigate
the correction pertaining to isolated narrow resonances. Assenbaum et al. [6]
were the first to suggest that the electron screening correction factors obtained
for nonresonant reactions can be applied equally to narrow resonances. They
also predicted that electron screening effects would result in a shift of the
resonance energy compared to the case of unscreened nuclei. As will be
discussed below, their first claim turns out to be incorrect, while the second
one is confirmed in the present work. Measuring such shifts of the resonance
energy may allow for a detailed study of the interplay between atomic and
nuclear processes.
The effects of atomic electrons on nuclear resonance scattering have been
studied many times before [7, 8, 9, 10]. However, a review of such effects in
nuclear resonance reactions has not been given in any detail. For this reason,
in the literature, the correction factors obtained for nonresonant reactions
are also applied to narrow resonances (see, e.g., Refs. [11, 12, 13, 14]).
Such corrections always result in a bare (unscreened) resonance strength that
is lower, by up to 25%, depending on the reaction, compared to the measured
(screened) strength. However, it is neither obvious why the same laboratory
screening correction factors should be applied to both nonresonant and narrow-
resonance reaction data, nor whether there are compelling reasons to correct
the latter data for laboratory screening effects at all.
In Secs. II and III, laboratory electron screening effects for nonresonant
reactions and narrow resonances, respectively, will be reviewed. Screening
energies and lengths are presented in Sec. IV. Results are provided in Sec. V
and future measurements are discussed in Sec. VI. A concluding summary is
given in Sec. VII.
## II Electron screening in nonresonant reactions
The nonresonant cross section, $\sigma(E)$, at a center-of-mass energy, $E$,
can be parameterized as [15]
$\sigma(E)\equiv\frac{1}{E}S(E)e^{-2\pi\eta(E)}$ (1)
where the astrophysical $S$ factor is frequently a function that varies slowly
with energy; $\eta(E)$ denotes the Sommerfeld parameter, $\eta$ $\equiv$
($Z_{0}Z_{1}e^{2}/\hbar)\sqrt{\mu/(2E)}$; $Z_{0}$, $Z_{1}$, $e$, and $\mu$ are
the charges of the interacting nuclei, elementary charge, and reduced mass,
respectively. The energy-dependent Gamow factor, $e^{-2\pi\eta}$, describes
the $s$-wave transmission through the Coulomb barrier.
The situation is depicted in Fig. 1. The unscreened Coulomb barrier,
$V_{C}(r)$, is shown as the blue curve. A negative screening potential,
$U_{e}$, is represented by the green line. It is depicted here as a constant
potential, which is the usual assumption made in the literature for
nonresonant reactions. The magnitude of $U_{e}$ is highly exaggerated for
illustrative purposes. The screened Coulomb potential, $V_{C}(r)$ $+$ $U_{e}$,
i.e., the sum of the blue and green lines, is shown as the red curve. When a
particle is incident on the unscreened barrier at a center-of-mass energy, $E$
(gray arrow at right), it needs to tunnel through a distance $R_{u}$ $-$
$R_{n}$ to initiate the reaction, where $R_{u}$ and $R_{n}$ denote the
classical turning point for the unscreened barrier and the nuclear radius,
respectively. A particle of energy $E$ incident on the screened barrier will
tunnel through a shorter distance of $R_{s}$ $-$ $R_{n}$, where $R_{s}$ is the
classical turning point of the screened barrier. The increase in the measured
nonresonant cross section is described by the ratio of transmission
probabilities, $T^{\prime}$ and $T$, through the screened and unscreened
barriers, respectively, at energy, $E$,
$f_{nr}\equiv\frac{\sigma_{\mathrm{screen}}}{\sigma_{\mathrm{unscreen}}}=\frac{T^{\prime}(E)}{T(E)}$
(2)
The transmission coefficient in the Wentzel-Kramers-Brillouin (WKB)
approximation for the unscreened Coulomb barrier is given by [16]
$T(E)\approx\exp\left(-\frac{\sqrt{8\mu}}{\hbar}\int_{R_{n}}^{R_{u}}\sqrt{V_{C}(r)-E}\,dr\right)$
(3)
where $\mu$ is the reduced mass, and $V_{C}(r)$ is the (unscreened) Coulomb
potential. The outer turning point is given by $R_{u}$ $=$
$Z_{0}Z_{1}e^{2}/(4\pi\epsilon_{0}E)$, with $\epsilon_{0}$ denoting the vacuum
permittivity. For a particle approaching the screened barrier at energy $E$,
we can write
$T^{\prime}(E)\approx\exp\left(-\frac{\sqrt{8\mu}}{\hbar}\int_{R_{n}}^{R_{u}}\sqrt{V_{C}(r)+U_{e}-E}\,dr\right)$
(4)
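Both integrals can be evaluated by straightforward numerical quadrature. The following sketch (illustrative parameters only; it uses scipy) computes Eqs. (3) and (4) for a constant screening potential and anticipates the observation made next, namely that the screened transmission at $E$ equals the unscreened one at $E+|U_{e}|$:

```python
import numpy as np
from scipy.integrate import quad

HBARC = 197.327            # MeV fm
ALPHA = 1.0 / 137.036      # fine-structure constant; e^2/(4 pi eps0) = ALPHA*HBARC

def wkb_T(E, Z0, Z1, mu, Rn, Ue=0.0):
    """WKB transmission, Eqs. (3)-(4), through V_C(r) + Ue.
    E and Ue in MeV, mu (reduced mass) in MeV/c^2, Rn in fm."""
    VC = lambda r: ALPHA * HBARC * Z0 * Z1 / r
    Ru = ALPHA * HBARC * Z0 * Z1 / (E - Ue)        # classical turning point
    I, _ = quad(lambda r: np.sqrt(max(VC(r) + Ue - E, 0.0)), Rn, Ru)
    return np.exp(-np.sqrt(8.0 * mu) / HBARC * I)

# p + p at E = 10 keV with |Ue| = 30 eV (all numbers purely illustrative)
E, Ue, mu, Rn = 0.010, 30e-6, 469.14, 1.0
print("T  (unscreened, at E)  :", wkb_T(E, 1, 1, mu, Rn))
print("T' (screened, at E)    :", wkb_T(E, 1, 1, mu, Rn, Ue=-Ue))
print("T  (unscreened, E+|Ue|):", wkb_T(E + Ue, 1, 1, mu, Rn))
```

The last two lines print identical values, which is exactly the equivalence discussed next.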
It can be seen that Eq. (4) is equivalent to the transmission of the
unscreened barrier at an energy of $E_{\mathrm{eff}}$ $=$ $E$ $+$ $|U_{e}|$,
i.e., $T^{\prime}(E)$ $=$ $T(E_{\mathrm{eff}})$, as indicated by the blue
arrow in Fig. 1. This is the reason why usually the transmission coefficients,
$T^{\prime}(E)$ and $T(E)$, are not computed numerically. Instead, they are
approximated by the Gamow factors, $T(E)$ $\approx$
$\mathrm{exp}(-2\pi\eta(E))$ and $T^{\prime}(E)$ $\approx$
$\mathrm{exp}(-2\pi\eta(E_{\mathrm{eff}}))$, so that the nonresonant electron
screening correction factor becomes
$f_{nr}\approx\frac{e^{-2\pi\eta(E_{\mathrm{eff}})}}{e^{-2\pi\eta(E)}}\approx
e^{\pi\eta(E)\frac{|U_{e}|}{E}}$ (5)
In the last step, it is assumed that the energy of the incident particle is
large compared to the screening energy, i.e., $E$ $\gg$ $|U_{e}|$. The
electron screening potential, $U_{e}$, is assumed to be independent of energy.
The factor, $f_{nr}$, amounts to unity at higher energies, where $E$ $\gg$
$|U_{e}|$, and increases as the energy decreases. Therefore, its magnitude is
$f_{nr}$ $\geq$ $1$.
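Equation (5) is simple to evaluate. The sketch below uses the commonly quoted numerical form $2\pi\eta\approx 31.29\,Z_{0}Z_{1}\sqrt{\mu/E}$, with $\mu$ in amu and $E$ in keV (this constant, and the sample inputs, are assumptions made here for illustration):

```python
import math

def f_nr(E_keV, Z0, Z1, mu_amu, Ue_eV):
    """Nonresonant screening factor of Eq. (5), f_nr ~ exp(pi*eta*|Ue|/E)."""
    two_pi_eta = 31.29 * Z0 * Z1 * math.sqrt(mu_amu / E_keV)
    return math.exp(0.5 * two_pi_eta * (Ue_eV * 1e-3) / E_keV)

# a d + d - like case (mu ~ 1 amu) with |Ue| = 25 eV, illustrative only
for E in (100.0, 20.0, 10.0, 5.0):
    print(f"E = {E:6.1f} keV : f_nr = {f_nr(E, 1, 1, 1.0, 25.0):.4f}")
```

As expected, $f_{nr}$ approaches unity at high energies and grows as the bombarding energy decreases.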
Figure 1: Schematic representation (not to scale) of electron screening for a
nonresonant charged-particle nuclear reaction in the laboratory, showing the
unscreened Coulomb potential (blue curve), constant negative screening
potential, $U_{e}$ (green line), screened Coulomb potential (red curve), total
energy, $E$ (gray arrows), and effective energy, $E_{\mathrm{eff}}$ $=$ $E$
$+$ $|U_{e}|$ (blue arrow); $R_{n}$, $R_{s}$, and $R_{u}$ denote the nuclear
radius, and the classical turning points at energy $E$ for the screened and
unscreened barrier, respectively. The actual reaction in the laboratory is
represented by the second gray arrow (on the left) extending to the red curve.
Notice that $R_{s}$ is also equal to the classical turning point for the
unscreened barrier (blue curve) at the effective energy, $E_{\mathrm{eff}}$
(blue arrow). No screening potential is shown inside the nucleus, because it
is irrelevant for the derivation of $f_{nr}$ in Eq. (5).
Equation (5) has been applied in Refs. [17, 6] and is the commonly adopted
formalism for nonresonant cross sections. As can be seen from the above
derivation, the incident particle does not actually gain total energy, as is
sometimes stated. Instead, the energy shift, from $E$ to $E_{\mathrm{eff}}$,
facilitates the convenient calculation of $f_{nr}$ by using the Gamow factors
at these two energies (see also Ref. [18]), without the need of computing the
ratio of transmission coefficients at energy $E$ numerically. Also, sometimes
a pre-factor containing the ratio of energies and $S$ factors at $E$ and
$E_{\mathrm{eff}}$ is included in Eq. (5). This is incorrect since the
reaction takes place at energy $E$, not at $E_{\mathrm{eff}}$.
The electron screening potential for nonresonant reactions can be estimated
with a suitable model representing the electronic configuration of the target
and projectile. For example, for gaseous targets and low bombarding energies,
the adiabatic (limit) approximation is frequently used [6]. It assumes that
the electron velocities are much larger than the relative motion between the
target and projectile nuclei. This implies that the electron cloud instantly
adjusts to the ground state of a molecule-like system consisting of the two
approaching nuclei with respective charges of $Z_{0}$ and $Z_{1}$. The
(negative) screening potential, $U_{\mathrm{ad}}$, can then be approximated by
the difference in electron binding energies,
$U_{\mathrm{ad}}\approx B_{e}(Z_{0})+B_{e}(Z_{1})-B_{e}(Z_{0}+Z_{1})$ (6)
where $B_{e}(Z_{0})$, $B_{e}(Z_{1})$, and $B_{e}(Z_{0}+Z_{1})$ denote the
(positive) total electron binding energies in the atoms with charges of
$Z_{0}$, $Z_{1}$, and $Z_{0}+Z_{1}$, respectively (see Eq. (5) in Ref. [6]).
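For a concrete number, Eq. (6) can be evaluated with tabulated total electron binding energies. The sketch below uses the hydrogen and helium values ($\approx 13.6$ eV and $\approx 79.0$ eV); these inputs are quoted from standard atomic data, and the case chosen is only an example:

```python
# adiabatic screening potential, Eq. (6)
B_e = {1: 13.6, 2: 79.0}        # total electron binding energies (eV): H, He

Z0, Z1 = 1, 1                   # e.g., a hydrogen-on-hydrogen system
U_ad = B_e[Z0] + B_e[Z1] - B_e[Z0 + Z1]
print(f"U_ad = {U_ad:.1f} eV")  # -51.8 eV, i.e. |U_ad| of about 52 eV
```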
As already pointed out in Sec. I, the values of $|U_{e}|$ extracted from low-
energy cross section data are, in most cases, significantly larger than those
calculated using the adiabatic approximation, $|U_{\mathrm{ad}}|$, by about a
factor of two. A tabulated comparison between values can be found, e.g., in
Ref. [4].
## III Electron screening for narrow resonances
For an isolated narrow resonance, what is usually measured is not directly the
cross section, but the integrated cross section over the energy region of the
resonance. This quantity is referred to as the resonance strength and can be
extracted in the laboratory from the measured thick-target resonance yield
curve [15]. The resonance strength, $\omega\gamma$, is defined by
$\omega\gamma\equiv\omega\frac{\Gamma_{a}\Gamma_{b}}{\Gamma}$ (7)
where $\Gamma_{a}$, $\Gamma_{b}$, and $\Gamma$ $=$ $\Gamma_{a}$ $+$
$\Gamma_{b}$ $+$ … denote the energy-dependent partial widths of the incoming
channel and the outgoing channel, respectively, and the total resonance width;
$\omega$ $\equiv$ $(2J+1)/[(2j_{p}+1)(2j_{t}+1)]$ is the statistical spin
factor, with $J$, $j_{p}$, and $j_{t}$ representing the spins of the
resonance, projectile, and target, respectively. The general form of the
resonance electron screening correction factor can then be written as
$f_{r}\equiv\frac{\omega\gamma_{\mathrm{screen}}}{\omega\gamma_{\mathrm{unscreen}}}=\frac{\Gamma_{a}^{\prime}}{\Gamma_{a}}\frac{\Gamma_{b}^{\prime}}{\Gamma_{b}}\frac{\Gamma}{\Gamma^{\prime}}$
(8)
where the primed and unprimed quantities refer to the screened and unscreened
widths, respectively. The meaning of a “narrow resonance” in the present
context will be defined at the end of this section.
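Equations (7) and (8) translate directly into code. In the sketch below all widths are invented placeholder numbers; the point is only to show that, for $\Gamma_{a}$ $\ll$ $\Gamma_{b}$, the factor $f_{r}$ tracks the ratio of entrance-channel widths, anticipating Eq. (9):

```python
def omega(J, jp, jt):
    """Statistical spin factor, omega = (2J+1)/((2jp+1)(2jt+1))."""
    return (2 * J + 1) / ((2 * jp + 1) * (2 * jt + 1))

def f_r(Ga, Gb, Ga_s, Gb_s):
    """Resonance screening factor of Eq. (8); *_s denotes screened widths.
    Only two channels are assumed open, so Gamma = Gamma_a + Gamma_b."""
    return (Ga_s / Ga) * (Gb_s / Gb) * ((Ga + Gb) / (Ga_s + Gb_s))

print(omega(3, 0.5, 2.5))                 # e.g., J = 3, j_p = 1/2, j_t = 5/2
# Gamma_a << Gamma_b with a 0.1% screened change in the entrance channel:
print(f_r(1e-6, 1.0, 1.001e-6, 1.0))      # ~1.001, i.e. f_r ~ Ga'/Ga
```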
In resonant charged-particle reactions at sufficiently low bombarding
energies, which are of main interest in nuclear astrophysics measurements, the
entrance channel width is much smaller than the exit channel width, i.e.,
$\Gamma_{a}$ $\ll$ $\Gamma_{b}$. In this case, Eq. (8) reduces to
$f_{r}=\frac{\Gamma_{a}^{\prime}}{\Gamma_{a}}=\frac{P^{\prime}}{P}\approx\frac{T^{\prime}}{T}$
(9)
Here, it is assumed that the main energy dependence of the particle partial
width, $\Gamma_{a}$, arises from the penetration factor, $P_{\ell}$ (see,
e.g., Ref. [15]), and the latter quantity is approximated by the barrier
transmission coefficient, $T$.¹
¹The definition of the transmission coefficient usually contains the ratio of
wave numbers to the left and right of the barrier, whereas the penetration
factor does not [19, 20]. However, the wave numbers are implicitly included in
the WKB wave function normalizations [16]. Therefore, the energy dependencies
of the transmission coefficient and the penetration factor for the same value
of the orbital angular momentum should be nearly equal.
In the opposite case, $\Gamma_{a}$ $\gg$ $\Gamma_{b}$, the resonance electron
screening correction factor reduces to $f_{r}$ $\approx$
$\Gamma_{b}^{\prime}/\Gamma_{b}$. If such a resonance decays by emission of a
$\gamma$ ray or neutron, electron screening will only impact the value of
$f_{r}$ through the weak energy dependence of $\Gamma_{b}$, with the result
that $f_{r}$ $\approx$ $1$. If the emitted particle is charged (e.g., a proton
or $\alpha$ particle), its transmission through the screened barrier must be
considered in addition (see Eq. (8)).
Figure 2 presents the situation for a resonance with $\Gamma_{a}$ $\ll$
$\Gamma_{b}$, which is of primary interest in the present work. The unscreened
Coulomb barrier is shown as the blue curve. The outer turning point for a
particle approaching this barrier at the resonance energy, $E_{r}$,
corresponding to a resonance level (blue horizontal line) inside the nucleus
at the same energy, is denoted by $R_{u}$. The energy $E_{r}$ is a property of
the compound nucleus only. Whereas outside the nuclear radius a constant
screening potential was assumed for the discussion in Sec. II and Fig. 1, this
restriction will now be relaxed by adopting a negative screening potential,
$V_{\mathrm{screen}}(r)$, that varies with distance (depicted in green in Fig.
2). At large radial distances, $r\rightarrow\infty$, the screening potential
will approach zero, $V_{\mathrm{screen}}(r)\rightarrow 0$ (see also Sec. IV).
Furthermore, inside the nucleus, the screening potential, $U_{e}$, is assumed
to be constant (green horizontal line). (If we simplify the problem and
assume that the K-shell electrons (see Sec. V) form a uniformly charged sphere
surrounding the target nucleus, then the screening potential will be nearly
constant over the much smaller nuclear region. A constant screening potential
inside the nucleus was also assumed, e.g., in Refs. [21, 22].)
Figure 2: Schematic representation (not to scale) of electron screening for a
resonance in the laboratory, showing the unscreened Coulomb potential (blue
curve), negative screening potential, $V_{\mathrm{screen}}$ (green), screened
Coulomb potential (red curve), resonance energy, $E_{r}$ (blue arrow), and
shifted energy, $E_{r}^{\prime}$ $=$ $E_{r}$ $-$ $|U_{e}|$ (gray arrow);
$R_{n}$ denotes the nuclear radius, $R_{u}$ is the classical turning point at
energy $E_{r}$ for the unscreened barrier, and $R_{s}$ is the turning point at
energy $E_{r}$ $-$ $|U_{e}|$ for the screened barrier. The actual reaction in
the laboratory is represented by the gray arrow and the red curve. Notice that
the tunneling distance, $R_{s}$ $-$ $R_{n}$, through the screened barrier at
energy $E_{r}$ $-$ $|U_{e}|$ is larger than the distance $R_{u}$ $-$ $R_{n}$
through the unscreened barrier at $E_{r}$. If the screening potential,
$V_{\mathrm{screen}}(r)$, were constant, the tunneling distances would be
the same and no change in either the transmission coefficient or resonance
strength would be expected.
A laboratory measurement of an isolated narrow resonance is impacted by
electron screening in two ways: (i) outside the nucleus, the sum of the
unscreened Coulomb potential (blue line) and screening potential (green line)
gives rise to the screened Coulomb potential, shown in red; (ii) the
attractive screening potential performs work on the projectile approaching the
target atom, and, therefore, the energy at which the narrow resonance will be
excited in the laboratory becomes $E_{r}^{\prime}$ $=$ $E_{r}$ $-$ $|U_{e}|$,
where $E_{r}^{\prime}$ $<$ $E_{r}$ (see the gray arrow in Fig. 2). Or,
expressed differently, the virtual level inside the compound nucleus (red
horizontal line) is lowered by an amount of $|U_{e}|$.
The transmission coefficient for the unscreened barrier is given by Eq. (3),
where the center-of-mass resonance energy, $E_{r}$, replaces the energy, $E$.
But, unlike the nonresonant case in Sec. II, the transmission coefficient in
the presence of electrons is given by
$T^{\prime}\approx\exp\left(-\frac{\sqrt{8\mu}}{\hbar}\int_{R_{n}}^{R_{s}}\sqrt{V_{C}(r)+V_{\mathrm{screen}}(r)-(E_{r}+U_{e})}\,dr\right)$
(10)
where the outer turning point for the screened case, $R_{s}$, is obtained from
$V_{C}(R_{s})$ $+$ $V_{\mathrm{screen}}(R_{s})$ $=$ $E_{r}$ $+$ $U_{e}$. It
can be seen that, for the special case of a constant screening potential over
the region of the outer turning point, i.e., $V_{\mathrm{screen}}(r)$ $=$
$U_{e}$ $=$ const, the two effects discussed above, (i) and (ii), cancel each
other exactly. Consequently, the two turning points for the screened and
unscreened case, $R_{s}$ and $R_{u}$, would coincide and Eq. (10) reduces to
Eq. (3). In other words, the electron screening correction factor, $f_{r}$,
would become unity. This also means, contrary to the claim in Ref. [6], that
it is incorrect to apply the screening factor for nonresonant reactions,
$f_{nr}$ in Eq. (2), to the measured strength of an isolated narrow resonance,
because this procedure disregards the shift down in resonance energy from
$E_{r}$ to $E_{r}$ $-$ $|U_{e}|$ in the calculation of the transmission
coefficient. The possibility of measuring this resonance energy shift will be
addressed in Sec. VI.
When $V_{\mathrm{screen}}(r)$ is not constant, but declines outside the
nuclear radius toward zero, the transmission coefficient for the screened
Coulomb barrier is, in fact, smaller than the transmission through the
unscreened barrier. This can be seen in Fig. 2, where the distance the
particle needs to tunnel through the screened barrier, $R_{s}$ $-$ $R_{n}$, at
$E_{r}$ $-$ $|U_{e}|$ is larger than the distance for tunneling through the
unscreened barrier, $R_{u}$ $-$ $R_{n}$, at the energy $E_{r}$. Therefore, the
unscreened resonance strength is generally larger than the screened value,
which is the opposite of the assumption generally made in the literature for
the laboratory screening correction for a narrow resonance (see Sec. I). In
other words, unlike the correction factor for nonresonant cross sections,
$f_{nr}$ $\geq$ $1$, the magnitude of the narrow-resonance correction factor
is $f_{r}$ $\leq$ $1$, as long as the screening potential,
$V_{\mathrm{screen}}(r)$, is negative. It is assumed here that the screening
potential, $U_{e}$, is constant inside the nucleus and can simply be
subtracted from the unscreened resonance energy. It follows from the above
discussion that the important quantity in this context is not only the
magnitude of the electron screening potential, but also its rate of decline
over the tunneling region.
Arguments similar to the above had been presented earlier in connection with
electron screening in $\alpha$-particle radioactivity [21, 23] and screening
effects for narrow resonances in astrophysical plasmas [3, 22]. The shift in
the energy of the virtual resonance level, caused by electron screening, is
frequently disregarded in the literature (see, e.g., Refs. [17, 24, 25]),
leading to incorrect predictions.
In the present context, a “narrow resonance” is defined by $\Gamma$ $\ll$
$|U_{e}|$, i.e., its total width must be small compared to the shift in the
resonance energy, $U_{e}$ $=$ $E_{r}^{\prime}$ $-$ $E_{r}$, caused by electron
screening. As discussed above, for this condition the reaction occurs at the
screened energy, $E_{r}^{\prime}$, instead of the unscreened one, $E_{r}$. For
a broad resonance, i.e., $\Gamma$ $\gtrsim$ $|U_{e}|$, the reaction can
proceed over an extended range of incident energies, including the unscreened
resonance energy, and the electron screening correction factor must be
computed numerically from an expression more complicated than Eq. (9).
## IV Screening lengths and screening potentials
A simple model is used in this work to estimate numerical values for the
screening effects on the measured strength of a narrow resonance. The
resonance screening factor, $f_{r}$, is found by numerically integrating Eqs.
(3) and (10). A Yukawa-type expression is adopted for the screened Coulomb
potential outside the nuclear radius,
$V_{C}(r)+V_{\mathrm{screen}}(r)=\frac{e^{2}}{4\pi\epsilon_{0}}\frac{Z_{0}Z_{1}}{r}\,e^{-r/L},\quad r\geq R_{n}$ (11)
where $L$ represents the electron screening length scale. The exponential
factor damps the overall potential to nearly zero after a few screening
lengths. For $r$ $\ll$ $L$, and keeping only the linear term in the expansion
of the exponential factor, Eq. (11) reduces to
$V_{C}(r)+V_{\mathrm{screen}}(r)\approx\frac{e^{2}}{4\pi\epsilon_{0}}\frac{Z_{0}Z_{1}}{r}-\frac{e^{2}}{4\pi\epsilon_{0}}\frac{Z_{0}Z_{1}}{L},\quad r\ll L$ (12)
Therefore, and following Refs. [21, 22], the constant screening potential
inside the nucleus, $U_{e}$, can be approximated by
$U_{e}=-\frac{e^{2}}{4\pi\epsilon_{0}}\frac{Z_{0}Z_{1}}{L}$ (13)
For the nuclear radius, a value of
$R_{n}=1.2(A_{0}^{1/3}+A_{1}^{1/3})~{}\mathrm{fm}$ (14)
will be assumed, where $A_{0}$ and $A_{1}$ are the integer mass numbers of the
projectile and target, respectively.
The last task before the electron screening factor for narrow resonances,
$f_{r}$, can be computed numerically is to specify the electron screening
length, $L$. The smaller the screening length, the larger the magnitude of the
screening energy inside the nucleus, $U_{e}$, the faster its decline outside
the nuclear radius, and the larger the deviation of the screening correction
factor, $f_{r}$, from unity. The screening length will depend on the atoms under consideration and
the environment in which the nuclear reaction takes place.
A dominant contribution to the electron screening is provided by the (inner)
core electrons, especially the K electrons. Their contribution will be
estimated by approximating their screening length with the radius of the K
shell,
$L_{KS}=r_{K}$ (15)
The latter values were calculated by Ref. [26] using the electron localization
function (ELF), together with Hartree-Fock wave functions of the neutral
atoms. Typical values of $r_{K}$ range from $0.58a_{0}$ for carbon to
$0.094a_{0}$ for iron, where $a_{0}$ $=$ $5.29\times 10^{4}$ fm denotes the
Bohr radius.
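For quick reference, these radii convert to femtometers as follows (a one-line Python check of the quoted values; the constant name is ours):

```python
A0_FM = 5.29e4  # Bohr radius in fm

print(f"carbon K shell: {0.58 * A0_FM:.3g} fm")   # ~3.07e4 fm
print(f"iron K shell:   {0.094 * A0_FM:.3g} fm")  # ~4.97e3 fm
```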
When the target atoms either form a metal lattice or are embedded in a metal
backing, the screening effect of the (free) conduction-band electrons must be
considered in addition. An approximation of their screening length can be
obtained from the Thomas-Fermi model of a metal [27], which predicts (note
that the numerical value of $3.7\times 10^{-10}$ provided in Eq. (3) of Ref.
[21] is incorrect and should be replaced by $6.1\times 10^{-9}$)
$L_{TF}=\sqrt{\frac{2\epsilon_{0}E_{F}}{3\rho e^{2}}}=6.1\times
10^{4}\sqrt{\frac{E_{F}~{}[\mathrm{eV}]}{\rho~{}[10^{22}~{}\mathrm{cm^{-3}}]}}~{}\mathrm{fm}$
(16)
where $E_{F}$ denotes the Fermi energy and $\rho$ is the electron density.
Typical values for metals are $E_{F}$ $\approx$ $10$ eV and $\rho$ $\approx$
$10\times 10^{22}$ cm-3 [27], giving a shielding length of $L_{TF}$ $\approx$
$6.10\times 10^{4}$ fm.
A number of authors (see, e.g., Refs. [28, 29, 25]) have computed screening
lengths using the Debye-Hückel model, which yields (note that the numerical
value of $2.18\times 10^{-8}$ provided in Eq. (4) of Ref. [21] is incorrect
and should be replaced by $2.18\times 10^{-11}$)
$L_{DH}=\sqrt{\frac{\epsilon_{0}k_{B}T}{\rho e^{2}}}=6.9\times
10^{2}\sqrt{\frac{T~{}[\mathrm{K}]}{\rho~{}[10^{22}~{}\mathrm{cm^{-3}}]}}~{}\mathrm{fm}$
(17)
where $k_{B}$ and $T$ denote the Boltzmann constant and temperature,
respectively. This model gives much smaller screening lengths, resulting in a
stronger electron screening effect. Equation (17) is useful for a plasma [2],
but this formulation does not apply to metals at room temperature, as pointed
out, e.g., by Refs. [18, 30]. For doped semiconductors or electrolytes, the
Debye-Hückel model results in modified expressions [27] compared to Eq. (17).
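To get a feeling for the magnitudes involved, the following short Python sketch (our own illustration; the function names are not from the cited references) evaluates Eqs. (16) and (17) for the typical metal parameters quoted above:

```python
import math

def L_TF(E_F_eV, rho_1e22_cm3):
    """Thomas-Fermi screening length in fm; Eq. (16)."""
    return 6.1e4 * math.sqrt(E_F_eV / rho_1e22_cm3)

def L_DH(T_K, rho_1e22_cm3):
    """Debye-Hueckel screening length in fm; Eq. (17)."""
    return 6.9e2 * math.sqrt(T_K / rho_1e22_cm3)

# Typical metal: E_F ~ 10 eV, rho ~ 10 x 10^22 cm^-3, room temperature
print(f"L_TF = {L_TF(10.0, 10.0):.2e} fm")   # ~6.1e4 fm
print(f"L_DH = {L_DH(293.0, 10.0):.2e} fm")  # ~3.7e3 fm
```

The roughly sixteen-fold shorter Debye-Hückel length illustrates why that model, if (incorrectly) applied to a metal at room temperature, would predict a much stronger screening effect.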
Here, only the dominant contributions to the electron screening, according to
Eqs. (15) and (16), are considered. For a metal target and low bombarding
energies, the velocity of the incident projectile is much smaller than the
Fermi velocity of the electrons, and, therefore, the electron screening effect
is caused by the static polarization of both the surrounding bound and
conduction electrons. When applicable, the effects of K-shell and conduction
electrons will be combined by adopting a screening length $L$ given by
$L^{-1}$ $=$ $r_{K}^{-1}$ $+$ $L_{TF}^{-1}$, which assumes that the total
screening potential is given by the sum of the individual contributions. Numerical
results will be presented in the next section.
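As an illustration of this procedure, the following Python sketch numerically evaluates the WKB transmission coefficients of Eqs. (3) and (10) for the 27Al $+$ $p$ resonance of Table 1. It is a minimal sketch, not the code used for Table 1: the reduced mass is formed from integer mass numbers, Eq. (3) is taken as the unscreened analog of Eq. (10), and scipy is assumed to be available.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

HBARC = 197.327  # hbar*c (MeV fm)
E2 = 1.440       # e^2/(4 pi eps0) (MeV fm)
AMU = 931.494    # atomic mass unit (MeV)

# 27Al + p resonance (approximate values; cf. Table 1)
Z0, Z1, A0, A1 = 1, 13, 1, 27
Er = 0.95768                                 # unscreened resonance energy (MeV)
rK, LTF = 11310.0, 49044.0                   # screening lengths (fm)
L = 1.0 / (1.0 / rK + 1.0 / LTF)             # combined screening length (fm)
Ue = -E2 * Z0 * Z1 / L                       # Eq. (13): ~ -2.0 keV
Rn = 1.2 * (A0 ** (1 / 3) + A1 ** (1 / 3))   # Eq. (14): nuclear radius (fm)
mu = AMU * A0 * A1 / (A0 + A1)               # reduced mass (MeV/c^2)

def transmission(V, E):
    """WKB transmission coefficient of Eqs. (3)/(10); returns (T, turning point)."""
    R_out = brentq(lambda r: V(r) - E, Rn, 1e6)  # outer classical turning point
    integrand = lambda r: np.sqrt(max(8.0 * mu * (V(r) - E), 0.0)) / HBARC
    integral, _ = quad(integrand, Rn, R_out)
    return np.exp(-integral), R_out

V_bare = lambda r: E2 * Z0 * Z1 / r                  # unscreened Coulomb potential
V_scr = lambda r: E2 * Z0 * Z1 / r * np.exp(-r / L)  # screened potential, Eq. (11)

T, R_u = transmission(V_bare, Er)            # unscreened barrier at E_r
T_s, R_s = transmission(V_scr, Er + Ue)      # screened barrier at E_r - |U_e|

print(f"Ue = {1e3 * Ue:.2f} keV, Ru = {R_u:.1f} fm, Rs = {R_s:.1f} fm")
print(f"f_r = T'/T = {T_s / T:.4f}")         # Eq. (9): close to unity
```

Running this reproduces the near-cancellation discussed in Sec. III: the screening potential and the downward shift of the resonance energy almost compensate, and $f_{r}$ comes out close to unity.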
## V Results and discussion
Table 1 gives the main results, including a comparison with values from the
literature. Six narrow resonances are listed in the reactions
17O(p,$\alpha$)14N, 18O(p,$\gamma$)19F, 18O(p,$\alpha$)15N,
22Ne(p,$\gamma$)23Na, 25Mg(p,$\gamma$)26Al, and 27Al(p,$\gamma$)28Si. All of these fulfill the
conditions $\Gamma_{a}$ $\ll$ $\Gamma_{b}$ and $\Gamma$ $\lesssim$ $100$ eV
(see Sec. III). The target compositions are given in column 4. They range from
a wide-gap semiconductor material (Ta2O5) and a gas (22Ne) to metals (Mg, Al). The
screening lengths of the K-shell electrons in the neutral target atoms,
$r_{K}$, which are listed in column 5, were assumed to be approximately equal
to the K-shell radii found in Tab. 1 of Ref. [26]. For the two metals, the
screening lengths, $L_{TF}$, calculated from the Thomas-Fermi model according
to Eq. (16), are given in column 6. The outer turning point radii, $R_{s}$, of
the screened Coulomb potential, calculated from Eq. (10), are listed in column
7. A comparison of length scales indicates that the screening lengths, $r_{K}$
and $L_{TF}$, are much larger than the outer turning-point radii, $R_{s}$.
Consequently, any screening correction factors are expected to be small.
Column 8 provides values for the constant screening potential, $U_{e}$ (see
Eq. (13)), inside the compound nucleus, which are approximately equal to the
energy difference between the unscreened resonance energy, $E_{r}$, and the
screened one. Values of $U_{e}$ range from $-0.5$ to $-2.0$ keV. They are
similar to the adiabatic approximation estimates obtained from Eq. (6), which
are given in column 9.
The present estimates of the screening correction factors for narrow
resonances, $f_{r}$, calculated according to Eqs. (9)–(16), are listed in
column 10. As can be seen, the values of $f_{r}$ are unity within $0.2$%.
Also, the results predict that the screened resonance strengths are slightly
smaller than the unscreened ones, consistent with the discussion in Sec. III.
In comparison, screening “enhancement” factors for narrow resonances from the
literature, $f_{\mathrm{Lit}}$, calculated from Eqs. (5) and (6), are given in
column 11. These factors yield screened resonance strengths that exceed the
unscreened values by 7% to 25%, depending on the reaction. Again, it must be
emphasized that it is not appropriate to calculate electron screening factors
for narrow resonances using Eq. (5), which applies to nonresonant cross
sections and disregards the shift in the resonance energy, as explained in
Sec. III. Notice that the (incorrect) literature “enhancement” factors are
significant, even when the measured resonance strength uncertainties are taken
into account.
A number of tests were performed to investigate the sensitivity of the present
results to parameter variations. Changing the nuclear radius parameter in Eq.
(14) from $1.2$ fm to either $0.5$ fm or $2.0$ fm did not impact the numerical
values of $f_{r}$ noticeably. The inclusion of a centrifugal term,
$\hbar^{2}\ell(\ell+1)/(2\mu r^{2})$, in Eqs. (3) and (10), and varying the
orbital angular momentum, $\ell$, between $0$ and $3$, did not change any of
the results either. Increasing the screening lengths adopted here (i.e., the
values of $r_{K}$ and $L_{TF}$ listed in columns 5 and 6, respectively, of
Tab. 1) will result in values of $f_{r}$ that are even closer to unity. When
the screening lengths are reduced by a factor of two, the electron screening
correction factors, $f_{r}$, are unity within 1%. These changes are negligibly
small, contrary to the correction factors reported in the literature for
narrow resonances (column 11).
The simple procedure for calculating narrow-resonance screening factors
presented here has a number of shortcomings. A static, time-independent model
has been adopted, although a dynamical approach would be more appropriate. A
constant screening potential is assumed inside the compound nucleus, see Eq.
(13), which oversimplifies the actual situation. Similar arguments apply to
approximating the screened potential by the damped, Yukawa-type, expression of
Eq. (11). The numerical results are impacted slightly by the adopted values of
the screening lengths for the K-shell and conduction electrons, for which
rough estimates have been employed here. It is worthwhile to address these
issues in the future using more sophisticated models, e.g., similar to those
developed for the related case of $\alpha$-particle radioactivity [31, 23,
30].
Table 1: Electron screening factors, $f_{r}$, and related quantities, for reported measured narrow resonances.

| Reaction | $E_{r}^{c.m.}$ (keV) | $\Gamma$ (eV) | Target | $r_{K}$ (fm) (a) | $L_{TF}$ (fm) (b) | $R_{s}$ (fm) (c) | $U_{e}$ (keV) (d) | $U_{\mathrm{ad}}$ (keV) (e) | $f_{r}$ (f) | $f_{\mathrm{Lit}}$ | Ref. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 17O(p,$\alpha$)14N | 64.5 | 130$\pm$5 (g) | Ta2O5 (h) | 21160 | | 178.6 | $-$0.54 | $-$0.68 | 0.9996 | 1.15 | [12] |
| 18O(p,$\gamma$)19F | 90.0 | 121$\pm$5 (i) | Ta2O5 (h) | 21160 | | 128.0 | $-$0.54 | $-$0.68 | 0.9998 | 1.10 | [32] |
| 18O(p,$\alpha$)15N | 90.0 | 121$\pm$5 (i) | Ta2O5 (h) | 21160 | | 128.0 | $-$0.54 | $-$0.68 | 0.9998 | 1.09 | [33, 34] |
| 22Ne(p,$\gamma$)23Na | 149.4 | $<$60 (j) | Ne gas | 15870 | | 96.4 | $-$0.91 | $-$0.91 | 0.9998 | 1.07 | [11] |
| 25Mg(p,$\gamma$)26Al | 92.2 | $<$30 (j) | Mg metal | 12484 | 55315 | 187.4 | $-$1.7 | $-$1.2 | 0.9976 | 1.25 | [13] |
| 27Al(p,$\gamma$)28Si | 956 | 70$\pm$14 (k) | Al metal | 11310 | 49044 | 19.6 | $-$2.0 | $-$1.3 | 0.9999 | | |
| 176Lu(p,n)176Hf | 805 (l) | | Lu metal | | | | | | | | [25] |

(a) Electron screening length of K-shell electrons in the neutral target atom; see Tab. 1 of Ref. [26].
(b) Electron screening length of the Thomas-Fermi model for metals; see Eq. (16).
(c) Outer turning point of the screened potential; see Eq. (10).
(d) Constant screening potential inside the compound nucleus, approximately equal to the energy shift down from the unscreened resonance energy to the screened one; see Eq. (13).
(e) Adiabatic approximation estimate of the screening potential; see Eq. (6).
(f) Present estimate of the screening correction factor for narrow resonances; see Eq. (9).
(g) From Refs. [35, 36], using $\Gamma$ $\approx$ $\Gamma_{\alpha}$.
(h) Wide-gap semiconductor.
(i) From the R-matrix fit of Ref. [33] (see their Table 4).
(j) Upper limit using $\Gamma$ $\approx$ $\Gamma_{\gamma}$, with $\Gamma_{\gamma}$ estimated using the Recommended Upper Limits (RUL) [37] for the primary transitions observed by Refs. [11] and [38] for 22Ne $+$ $p$ and 25Mg $+$ $p$, respectively.
(k) From Ref. [39].
(l) This resonance could not have been observed by Ref. [25] because its strength would be far below the present-day detection limit; see Sec. VI.
## VI Resonance energy shifts caused by electron screening
Experimental studies of electron screening effects in resonant reactions face
a number of obstacles.
First, electron screening is expected to impact a resonance strength in a
charged-particle induced reaction only when the entrance channel width is
significantly smaller than the exit-channel one, $\Gamma_{a}$ $\ll$
$\Gamma_{b}$ (see Sec. III). (For the condition $\Gamma_{a}$ $\gg$
$\Gamma_{b}$, i.e., $\omega\gamma$ $\approx$ $\omega\Gamma_{b}$, and assuming
that the resonance decays by emission of a neutron or $\gamma$ ray, electron
screening will impact the exit channel width, $\Gamma_{b}$, only through the
small change in the decay energy; in this case, the value of $f_{r}$ will be
close to unity for an exothermic reaction, cf. Sec. III.) Second, even when the
condition $\Gamma_{a}$ $\ll$ $\Gamma_{b}$ is fulfilled, the ratio of screened
versus unscreened resonance strengths, $f_{r}$, will be close to unity (see
Table 1) because the effects of the screened Coulomb potential and the shift
in the resonance energy compensate each other largely (see Sec. IV).
Consequently, electron screening will not significantly impact the values of
measured resonance strengths, which are frequently extracted from the plateau
height of thick-target yield curves [15].
Because of these difficulties, it is worthwhile to consider, instead,
measuring the shift in the resonance energy, $E_{r}^{\prime}$ $-$ $E_{r}$,
caused by electron screening. Such a shift is expected to occur, in principle,
in a charged-particle resonance reaction regardless of the relative magnitudes
of the entrance and exit channel partial widths ($\Gamma_{a}$ and
$\Gamma_{b}$).
A shift in resonance energy, presumably caused by electron screening, was
reported by Kettner et al. [25]. They measured the thick-target yield curve of
a 176Lu(p,n)176Hf resonance at 805 keV (center-of-mass energy), using three
different target-backing combinations (Lu2O3 insulator, Lu metal, and PdLu
alloy). No other information on this resonance is available in the literature.
They observed an energy difference in the leading edge of the yield curves
between the metal (and alloy) and the insulator target. By assuming that the
insulator target exhibits insignificant screening, the observed downward
energy shift of $32\pm 2$ keV (Tab. 1) was interpreted as the electron screening
potential for the metal (and alloy) target. Huke et al. [18] discussed the
energy shift reported by Ref. [25], but attributed it instead to differences
in target preparation and resulting stopping powers. The Wigner limit for an
$s$-wave proton partial width in the 176Lu(p,n)176Hf reaction at $805$ keV in
the center of mass corresponds to a value of $\approx$ $10^{-16}$ eV, which is
far below the present-day experimental detection limit. Therefore, the claim
of Ref. [25] to have detected a resonance at $805$ keV in the 176Lu(p,n)176Hf
reaction, which is still being discussed in the recent literature [40, 41,
42], is incorrect.
No unambiguous evidence has so far been published demonstrating the existence
of a shift in resonance energy caused by laboratory electron screening. Such
an energy shift could be detected by comparing a resonance energy measured in
the laboratory with the corresponding unscreened value. The latter corresponds
to the resonance energy that would be obtained in the absence of all electrons
surrounding the interacting nuclei. It can be determined from
$E_{r}=E_{x}-Q_{\mathrm{nu}}$ (18)
where $E_{r}$ is the unscreened resonance energy in the center-of-mass system
(same as Sec. III), which is a property of the compound nucleus only; $E_{x}$
denotes the excitation energy of the compound nucleus, and $Q_{\mathrm{nu}}$
represents the $Q$ value computed from nuclear (as opposed to atomic) masses
[43]. This value can be compared to the resonance energy that is obtained from
a laboratory measurement, by using the relativistic expression
$E_{r}^{\prime}=\sqrt{2m_{0}c^{2}E_{r}^{lab}+[(m_{0}+m_{1})c^{2}]^{2}}-(m_{0}+m_{1})c^{2}$
(19)
where $E_{r}^{\prime}$ and $E_{r}^{lab}$ denote the center-of-mass energy of
the resonance in the presence of electrons (same as in Sec. III), and the
measured resonance energy in the laboratory reference frame, respectively;
$m_{0}$ and $m_{1}$ represent the masses of the target and projectile. The
energy shift caused by electron screening contributes to the measured
difference, $E_{r}^{\prime}$ $-$ $E_{r}$.
This procedure requires a careful assessment of all input quantities. The
candidate resonance needs to be narrow (i.e., $\Gamma$ $\lesssim$ $100$ eV),
and the target well characterized and free of surface contamination. The
energy spread of the incident beam must be small (i.e., no more than a few
hundred electron volts). The excitation energy, $E_{x}$, in Eq. (18) needs
to be precisely measured, preferably by $\gamma$-ray spectrometry. The
laboratory resonance energy, $E_{r}^{lab}$, in Eq. (19) must be measured
precisely using methods that do not depend on the energies of other
(calibration) resonances. Finally, additional effects caused by the presence
of atomic electrons in the target need to be accounted for, e.g., the
excitation and ionization of bound electrons in the atom in which the nuclear
reaction is taking place [10, 44], and the Lewis effect [45].
As an example, let us consider the resonance in the 27Al(p,$\gamma$)28Si
reaction near a center-of-mass energy of $956$ keV ($J^{\pi}$ $=$ $3^{+}$;
$\Gamma$ $=$ $70\pm 14$ eV [39]). The corresponding excitation energy, which
was determined from the measured $\gamma$-ray energies of the primary decays,
is reported as $E_{x}$ $=$ $12541.31\pm 0.14$ keV [46]. The nuclear $Q$ value
amounts to $Q_{\mathrm{nu}}$ $=$ $11583.63\pm 0.05$ keV [47]. Consequently,
this yields an unscreened resonance energy of $E_{r}$ $=$ $957.68\pm 0.15$
keV, according to Eq. (18). The laboratory value of the resonance energy is
reported as $E_{r}^{lab}$ $=$ $991.756\pm 0.017$ keV [48]. In that experiment,
an aluminum metal target was used and the energy was determined relative to a
Josephson-derived 1-V standard. Also, the reported value includes corrections
caused by the ionization of atomic electrons (corresponding to an energy shift
of $24\pm 12$ eV). The above laboratory resonance energy results in a screened
resonance energy in the center-of-mass system of $E_{r}^{\prime}$ $=$
$956.032\pm 0.016$ keV, according to Eq. (19). The energy difference,
$E_{r}^{\prime}$ $-$ $E_{r}$, amounts to $-1.65\pm 0.15$ keV. This result is
near the screening energy of $U_{e}$ $=$ $-2.0$ keV (Table 1), which was
estimated using the simple model of the present work, based on a Yukawa-type
screening potential and screening lengths for electrons in the K shell and the
conduction band (Sec. IV). It is also close to the value of $U_{\mathrm{ad}}$
$=$ $-1.3$ keV that is found from the adiabatic approximation (see Eq. (6)).
Although these two estimates of the screening potential roughly agree with the
energy difference, $E_{r}^{\prime}$ $-$ $E_{r}$, estimated above for the
$E_{r}$ $=$ $956$-keV resonance in the 27Al(p,$\gamma$)28Si reaction, further
studies will be needed to confirm this claim.
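The arithmetic of this example is easy to reproduce. The following minimal Python check of Eqs. (18) and (19) uses the quoted input values together with approximate nuclear masses (atomic masses minus $Z$ electron masses, neglecting electron binding energies):

```python
U = 931494.102   # keV per atomic mass unit
ME = 510.99895   # electron mass (keV)

Ex, Q_nu = 12541.31, 11583.63   # excitation energy and nuclear Q value (keV)
Er = Ex - Q_nu                  # Eq. (18): unscreened resonance energy

# Approximate nuclear masses: atomic mass minus Z electron masses
m0 = 26.98153853 * U - 13 * ME  # 27Al target (keV/c^2)
m1 = 1.00782503 * U - ME        # proton projectile (keV/c^2)

E_lab = 991.756                 # measured laboratory resonance energy (keV)
M = m0 + m1
Er_scr = (2.0 * m0 * E_lab + M * M) ** 0.5 - M  # Eq. (19)

print(f"Er  = {Er:.2f} keV")                  # 957.68 keV
print(f"Er' = {Er_scr:.2f} keV")              # ~956.03 keV
print(f"Er' - Er = {Er_scr - Er:.2f} keV")    # ~ -1.65 keV
```

This reproduces the quoted energy difference of $-1.65$ keV to within the rounding of the adopted masses.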
## VII Summary
The present work addressed the estimation of laboratory electron screening
correction factors for isolated narrow resonances. Such corrections are
frequently performed in the literature with the same procedure and expression
used to correct laboratory nonresonant cross sections. It was pointed out that
electron screening affects nonresonant cross sections and resonance strengths
differently, and that it is not appropriate to correct measured resonance
strengths using the same procedure and expression employed for the correction
of measured nonresonant cross sections. The reported literature screening
factors applied to narrow resonances result in unscreened resonance strengths
that are smaller, by 7% to 25% depending on the reaction, than the measured
(screened) ones. On the contrary, the present work demonstrated that
unscreened resonance strengths are equal to the measured ones within $0.2$%.
This small correction is of no practical importance. Unless demonstrated
otherwise, measured resonance strengths do not need to be corrected for
laboratory electron screening effects.
Since electron screening has a negligible impact on the strengths of narrow
resonances, any attempts to study such effects by measuring the thick-target
yield are futile. Instead, and regardless of the relative magnitudes of the
entrance and exit channel partial widths, it may be more promising to detect
the shift in the resonance energy down from the unscreened value (i.e.,
obtained in the absence of any electrons) to the screened one (i.e., measured
in the laboratory). Although no unambiguous evidence has been published so far
demonstrating such an energy shift, it is pointed out that this effect is
likely present in the data for the $E_{r}$ $=$ $956$-keV resonance in the
27Al(p,$\gamma$)28Si reaction. It is also demonstrated that the claim of a
previously reported detection [25] of a resonance in the 176Lu(p,n)176Hf
reaction is incorrect.
###### Acknowledgements.
The comments of Alain Coc, Robert Janssens, Yosuke Kanai, Richard Longland,
Caleb Marshall, and Thanassis Psaltis are highly appreciated. This work is
supported by the DOE, Office of Science, Office of Nuclear Physics, under
grants DE-FG02-97ER41041 (UNC) and DE-FG02-97ER41033 (TUNL).
## References
* Aliotta _et al._ [2001] M. Aliotta, F. Raiola, G. Gyürky, A. Formicola, R. Bonetti, C. Broggini, L. Campajola, P. Corvisiero, H. Costantini, A. D’Onofrio, Z. Fülöp, G. Gervino, L. Gialanella, A. Guglielmetti, C. Gustavino, G. Imbriani, M. Junker, P. Moroni, A. Ordine, P. Prati, V. Roca, D. Rogalla, C. Rolfs, M. Romano, F. Schümann, E. Somorjai, O. Straniero, F. Strieder, F. Terrasi, H. Trautvetter, and S. Zavatarelli, Nucl. Phys. A 690, 790 (2001).
* Salpeter [1954] E. E. Salpeter, Austr. J. Phys. 7, 373 (1954).
* Mitler [1977] H. E. Mitler, Astrophys. J. 212, 513 (1977).
* Spitaleri _et al._ [2016] C. Spitaleri, C. Bertulani, L. Fortunato, and A. Vitturi, Physics Letters B 755, 275 (2016).
* Aliotta and Langanke [2022] M. Aliotta and K. Langanke, Front. Phys. 10 (2022), 10.3389/fphy.2022.942726.
* Assenbaum _et al._ [1987] H. J. Assenbaum, K. Langanke, and C. Rolfs, Zeitschrift für Physik A Atomic Nuclei 327, 461 (1987).
* Benn _et al._ [1967] J. Benn, E. Dally, H. Müller, R. Pixley, H. Staub, and H. Winkler, Nuclear Physics A 106, 296 (1967).
* Thompson _et al._ [1980] W. J. Thompson, J. F. Wilkerson, T. B. Clegg, J. M. Feagin, E. J. Ludwig, and E. Merzbacher, Phys. Rev. Lett. 45, 703 (1980).
* Briggs and Lane [1981] J. Briggs and A. Lane, Physics Letters B 106, 436 (1981).
* Heinz [1987] U. Heinz, Reports on Progress in Physics 50, 145 (1987).
* Depalo _et al._ [2016] R. Depalo, F. Cavanna, M. Aliotta, M. Anders, D. Bemmerer, A. Best, A. Boeltzig, C. Broggini, C. G. Bruno, A. Caciolli, G. F. Ciani, P. Corvisiero, T. Davinson, A. Di Leva, Z. Elekes, F. Ferraro, A. Formicola, Z. Fülöp, G. Gervino, A. Guglielmetti, C. Gustavino, G. Gyürky, G. Imbriani, M. Junker, R. Menegazzo, V. Mossa, F. R. Pantaleo, D. Piatti, P. Prati, O. Straniero, T. Szücs, M. P. Takács, and D. Trezzi (LUNA Collaboration), Phys. Rev. C 94, 055804 (2016).
* Bruno _et al._ [2016] C. G. Bruno, D. A. Scott, M. Aliotta, A. Formicola, A. Best, A. Boeltzig, D. Bemmerer, C. Broggini, A. Caciolli, F. Cavanna, G. F. Ciani, P. Corvisiero, T. Davinson, R. Depalo, A. Di Leva, Z. Elekes, F. Ferraro, Z. Fülöp, G. Gervino, A. Guglielmetti, C. Gustavino, G. Gyürky, G. Imbriani, M. Junker, R. Menegazzo, V. Mossa, F. R. Pantaleo, D. Piatti, P. Prati, E. Somorjai, O. Straniero, F. Strieder, T. Szücs, M. P. Takács, and D. Trezzi (LUNA Collaboration), Phys. Rev. Lett. 117, 142502 (2016).
* Strieder _et al._ [2012] F. Strieder, B. Limata, A. Formicola, G. Imbriani, M. Junker, D. Bemmerer, A. Best, C. Broggini, A. Caciolli, P. Corvisiero, H. Costantini, A. DiLeva, Z. Elekes, Z. Fülöp, G. Gervino, A. Guglielmetti, C. Gustavino, G. Gyürky, A. Lemut, M. Marta, C. Mazzocchi, R. Menegazzo, P. Prati, V. Roca, C. Rolfs, C. Rossi Alvarez, E. Somorjai, O. Straniero, F. Terrasi, and H. P. Trautvetter, Phys. Lett. B 707, 60 (2012).
* Sergi _et al._ [2015] M. L. Sergi, C. Spitaleri, M. La Cognata, L. Lamia, R. G. Pizzone, G. G. Rapisarda, X. D. Tang, B. Bucher, M. Couder, P. Davies, R. deBoer, X. Fang, L. Lamm, C. Ma, M. Notani, S. O’Brien, D. Roberson, W. Tan, M. Wiescher, B. Irgaziev, A. Mukhamedzhanov, J. Mrazek, and V. Kroha, Phys. Rev. C 91, 065803 (2015).
* Iliadis [2015] C. Iliadis, _Nuclear Physics of Stars_ (Wiley-VCH Verlag GmbH & Co. KGaA, 2015).
* Merzbacher [1970] E. Merzbacher, _Quantum Mechanics, 2nd edition_ (Wiley, New York, 1970).
* Erma [1957] V. A. Erma, Phys. Rev. 105, 1784 (1957).
* Huke _et al._ [2008] A. Huke, K. Czerski, P. Heide, G. Ruprecht, N. Targosz, and W. Żebrowski, Phys. Rev. C 78, 015803 (2008).
* Blatt and Weisskopf [1952] J. M. Blatt and V. F. Weisskopf, _Theoretical nuclear physics_ (Springer, New York, 1952).
* Evans [1955] R. Evans, _The Atomic Nucleus_ (McGraw-Hill, New York, 1955).
* Zinner [2007] N. T. Zinner, Nuclear Physics A 781, 81 (2007).
* Cussons _et al._ [2002] R. Cussons, K. Langanke, and T. Liolios, Eur. Phys. J. A 15, 291 (2002).
* Karpeshin [2013] F. F. Karpeshin, Phys. Rev. C 87, 054319 (2013).
* Liolios [2003] T. E. Liolios, Phys. Rev. C 68, 015804 (2003).
* Kettner _et al._ [2006] K. U. Kettner, H. W. Becker, F. Strieder, and C. Rolfs, J. Phys. G 32, 489 (2006).
* Kohout and Savin [1996] M. Kohout and A. Savin, International Journal of Quantum Chemistry 60, 875 (1996).
* Ashcroft and Mermin [1976] N. Ashcroft and N. Mermin, _Solid State Physics_ (Saunders College, Philadelphia, 1976).
* Bonomo _et al._ [2003] C. Bonomo, G. Fiorentini, Z. Fülöp, L. Gang, G. Gyürky, K. Langanke, F. Raiola, C. Rolfs, E. Somorjai, F. Strieder, J. Winter, and M. Aliotta, Nucl. Phys. A 719, C37 (2003).
* Raiola _et al._ [2004] F. Raiola, L. Gang, C. Bonomo, G. Gyürky, M. Aliotta, H. W. Becker, R. Bonetti, C. Broggini, P. Corvisiero, A. D’Onofrio, Z. Fülöp, G. Gervino, L. Gialanella, M. Junker, P. Prati, V. Roca, C. Rolfs, M. Romano, E. Somorjai, F. Strieder, F. Terrasi, G. Fiorentini, K. Langanke, and J. Winter, Eur. Phys. J. A 19, 283 (2004).
* Dzyublik [2014] A. Y. Dzyublik, Phys. Rev. C 90, 054619 (2014).
* Patyk _et al._ [2008] Z. Patyk, H. Geissel, Y. A. Litvinov, A. Musumarra, and C. Nociforo, Phys. Rev. C 78, 054317 (2008).
* Best _et al._ [2019] A. Best, F. Pantaleo, A. Boeltzig, G. Imbriani, M. Aliotta, J. Balibrea-Correa, D. Bemmerer, C. Broggini, C. Bruno, R. Buompane, A. Caciolli, F. Cavanna, T. Chillery, G. Ciani, P. Corvisiero, L. Csedreki, T. Davinson, R. deBoer, R. Depalo, A. Di Leva, Z. Elekes, F. Ferraro, E. Fiore, A. Formicola, Z. Fülöp, G. Gervino, A. Guglielmetti, C. Gustavino, G. Gyürky, M. Junker, I. Kochanek, M. Lugaro, P. Marigo, R. Menegazzo, V. Mossa, V. Paticchio, R. Perrino, D. Piatti, P. Prati, L. Schiavulli, K. Stöckel, O. Straniero, F. Strieder, T. Szücs, M. Takács, D. Trezzi, M. Wiescher, and S. Zavatarelli, Phys. Lett. B 797, 134900 (2019).
* Bruno _et al._ [2019] C. Bruno, M. Aliotta, P. Descouvemont, A. Best, T. Davinson, D. Bemmerer, A. Boeltzig, C. Broggini, A. Caciolli, F. Cavanna, T. Chillery, G. Ciani, P. Corvisiero, R. Depalo, A. Di Leva, Z. Elekes, F. Ferraro, A. Formicola, Z. Fülöp, G. Gervino, A. Guglielmetti, C. Gustavino, G. Gyürky, G. Imbriani, M. Junker, M. Lugaro, P. Marigo, R. Menegazzo, V. Mossa, F. Pantaleo, D. Piatti, P. Prati, K. Stöckel, O. Straniero, F. Strieder, T. Szücs, M. Takács, and D. Trezzi, Phys. Lett. B 790, 237 (2019).
* Bruno [2017] C. G. Bruno, _Underground measurement of hydrogen-burning reactions on 17,18O at energies of astrophysical interest_, Ph.D. thesis, The University of Edinburgh, Edinburgh (2017).
* Mak _et al._ [1980] H.-B. Mak, G. Ewan, H. Evans, J. MacArthur, W. McLatchie, and R. Azuma, Nuclear Physics A 343, 79 (1980).
* Fox _et al._ [2005] C. Fox, C. Iliadis, A. E. Champagne, R. P. Fitzgerald, R. Longland, J. Newton, J. Pollanen, and R. Runkle, Phys. Rev. C 71, 055801 (2005).
* Endt [1993] P. Endt, Atomic Data and Nuclear Data Tables 55, 171 (1993).
* Lotay _et al._ [2022] G. Lotay, D. T. Doherty, R. V. F. Janssens, D. Seweryniak, H. M. Albers, S. Almaraz-Calderon, M. P. Carpenter, A. E. Champagne, C. J. Chiara, C. R. Hoffman, C. Iliadis, A. Kankainen, T. Lauritsen, and S. Zhu, Phys. Rev. C 105, L042801 (2022).
* Endt [1990] P. Endt, Nuclear Physics A 521, 1 (1990).
* Gajević _et al._ [2013] J. Gajević, A. Cvetinović, A. Likar, M. Lipoglavšek, P. Pelicon, T. Petrovič, and A. Sánchez Ortiz, Eur. Phys. J. A 49, 70 (2013).
* Lipoglavšek [2018] M. Lipoglavšek, in _European Physical Journal Web of Conferences_, European Physical Journal Web of Conferences, Vol. 165 (2018) p. 01035.
* Cvetinović _et al._ [2023] A. Cvetinović, D. Đeorđić, G. Guardo, M. Kelemen, M. La Cognata, L. Lamia, S. Markelj, U. Mikac, R. Pizzone, T. Schwarz-Selinger, I. Tišma, M. Vencelj, J. Vesić, and M. Lipoglavšek, Phys. Lett. B 838, 137684 (2023).
* Iliadis [2019] C. Iliadis, Phys. Rev. C 99, 065809 (2019).
* Amundsen and Barker [1994] P. A. Amundsen and P. H. Barker, Phys. Rev. C 50, 2466 (1994).
* Lewis [1962] H. W. Lewis, Phys. Rev. 125, 937 (1962).
* Endt _et al._ [1990] P. Endt, C. Alderliesten, F. Zijderhand, A. Wolters, and A. Van Hees, Nuclear Physics A 510, 209 (1990).
* Wang _et al._ [2021] M. Wang, W. Huang, F. Kondev, G. Audi, and S. Naimi, Chinese Physics C 45, 030003 (2021).
* Brindhaban _et al._ [1994] S. Brindhaban, P. Barker, M. Keeling, and W. Wood, Nucl. Instr. Meth. A 340, 436 (1994).
# Can we learn from developer mistakes?
Learning to localize and repair real bugs from real bug fixes
Cedric Richter, University of Oldenburg, Ammerländer Heerstraße 114-118,
26129 Oldenburg, Germany,<EMAIL_ADDRESS>and Heike Wehrheim, University of
Oldenburg, Ammerländer Heerstraße 114-118, 26129 Oldenburg, Germany,
<EMAIL_ADDRESS>
###### Abstract.
Real bug fixes found in open source repositories seem to be the perfect source
for learning to localize and repair real bugs. However, the absence of large
scale bug fix collections has made it difficult to effectively exploit real
bug fixes in the training of larger neural models in the past. In contrast,
artificial bugs – produced by mutating existing source code – can be easily
obtained at a sufficient scale and are therefore often preferred in the
training of existing approaches. Still, localization and repair models that
are trained on artificial bugs usually underperform when faced with real bugs.
This raises the question whether bug localization and repair models trained on
real bug fixes are more effective in localizing and repairing real bugs.
We address this question by introducing RealiT, a pre-train-and-fine-tune
approach for effectively learning to localize and repair real bugs from real
bug fixes. RealiT is first pre-trained on a large number of artificial bugs
produced by traditional mutation operators and then fine-tuned on a smaller
set of real bug fixes. Fine-tuning does not require any modifications of the
learning algorithm and hence can be easily adopted in various training
scenarios for bug localization or repair (even when real training data is
scarce). In addition, we found that training on real bug fixes with RealiT is
empirically powerful by nearly doubling the localization performance of an
existing model on real bugs while maintaining or even improving the repair
performance.
program repair, bug detection, bug fixes, learn to debug
## 1\. Introduction
Figure 1. Training on real bug fixes improves localization and repair of real
bugs.
Finding and fixing software bugs is one of the most common challenges in
software engineering (Alaboudi and LaToza, 2021). Developers are often faced
with these tasks and spend a considerable amount of time fixing software
bugs. Still, some bugs find their way into open software repositories and then
have to be fixed by a bug-fixing code change. This raises the question whether
we can relieve the developer from the debugging process by learning from
common developer mistakes and their fixes found in open source projects.
Previous work (Pradel and Sen, 2018; Hellendoorn et al., 2019; Vasic et al.,
2019; Allamanis et al., 2017; Richter and Wehrheim, 2022a; Allamanis et al.,
2021) addressed this question by designing automatic learning-based methods
for bug localization and repair. However, to obtain the necessary amount of
data needed for training, they often employed code mutants instead of real bug
fixes. Mutants are generated by automatically injecting small code changes
into existing code. As this process is automated, mutants can easily be
obtained in large quantities, which is necessary for training effective
learning-based models. However, a mutant may not represent a real bug, which
could ultimately bottleneck the performance of learning-based bug localization
and repair.
In contrast, in this work we aim to explore the effect of real bug fixes
obtained from open source repositories on the training of learning-based bug
localization and repair methods. For this, we employ a novel dataset of 33k
real-world bug fixes obtained from public Python projects. Since this dataset
is still comparably small relative to the datasets typically used for training
localization and repair models (Allamanis et al., 2021; Hellendoorn et al.,
2019) (which oftentimes contain millions of artificial bugs), we propose
RealiT (pronounced “reality”), a novel training scheme for learning to Repair
and localize with Transformers. RealiT is designed to combine the strengths of
both training on mutants and real bug fixes by first pre-training on a high
number of mutants and then fine-tuning on a smaller set of real bug fixes.
This design allows us to not only evaluate the impact of real bug fixes and
mutants together on the training process, but also individually (by skipping
either pre-training or fine-tuning phase).
To evaluate the impact of RealiT’s training on the localization and repair of
real bugs, we implement RealiT for fixing a variety of single token bugs in
Python. Our implementation considers four common types of single token bugs,
i.e., bugs that can be fixed by changing a single program token. We evaluate
RealiT together with several baselines on over 2000 real-world bugs collected
in the PyPIBugs benchmark (Allamanis et al., 2021).
By integrating real bug fixes in the training process with RealiT, we observe
significant performance gains over a training that solely focuses on mutants
(as in previous works). In fact, training with real bug fixes nearly doubles
the number of successfully localized real bugs (x-axis in Figure 1) while
maintaining or improving the repair performance.
Our main contributions can be summarized as follows:
* •
For investigating the effect of real bug fixes on the training of neural bug
localization and repair models, we propose a simple pre-train-and-fine-tune
approach. We find that training both on mutants and real bugs with our method
significantly improves the performance over models solely trained on either
mutants or real bugs when evaluated on real single token bugs in Python.
* •
We show that data quality and quantity has a significant impact on neural bug
localization and repair models. By pre-training on a large number of mutants
(up to 20x larger than in previous work), RealiT already significantly
improves localization and repair performance both on mutants and real bugs.
Combined with fine-tuning on real bug fixes, RealiT is the first model to
repair a significant portion of a real-bug benchmark.
* •
For adopting RealiT in future projects, we show that even training on smaller
subsets of real bug fixes can yield performance improvements for the
localization and repair of real bugs. Larger sets of bug fixes are, however,
even more beneficial.
We plan to release all trained models and the pre-training and fine-tuning
code (https://github.com/cedricrupb/nbfbaselines).
Table 1. Examples of single token bug types taken from PyPIBugs (Allamanis et al., 2021)

VarMisuse (applied instead of patch).

```python
1  # VarMisuse: applied instead of patch
2  applied = self.db.applied_patches()
3  for patch in applied:
4      if patch in patches:
5          patches.remove(applied)
```

All applied patches should be removed from the patches list. However, the developer mistakenly tries to remove applied instead of a single patch. Fix: replace applied in Line 5 by patch defined in Line 3.

BinOp (!= instead of ==).

```python
1  # BinOp: != instead of ==
2  def updateRefractionParameters(self):
3      ...
4      if self.ui.checkRefracNone.isChecked():
5          return False
6      if self.checkRefracNoTrack.isChecked():
7          if self.app.mount.status != 0:
8              return False
9      ...
```

The function updateRefractionParameters performs an update and returns true if the update was successful. Prior to the update, the function checks some preconditions and should abort if the mount is not ready; hence, one would expect it to abort if the status is zero. However, the code checks whether the status is not zero. Fix: replace != in Line 7 by ==.

Negation (namespace instead of not namespace).

```python
1  # Negation: namespace instead of not namespace
2  if namespace:
3      self.namespacesFilter = ["prymatex", "user"]
4  else:
5      self.namespacesFilter = namespace.split()
```

A default namespacesFilter should be used if no namespace is given. However, the condition checks the inverse. Fix: replace namespace in Line 2 by not namespace.
## 2\. Background
In this section, we introduce the necessary background for our approach. We
begin by describing the single token localization and repair task tackled by
RealiT and how previous techniques addressed this task by predicting token
replacements and learning from mutants.
### 2.1. Single token bug localization and repair
In this work, we focus on the localization and repair of single token bugs.
Single token bugs are bugs that can be repaired by replacing only a single
program token (e.g. a variable or binary operator). For this reason, they are
often easy to repair – as only a single token has to be changed – but hard to
identify. Examples for single token bugs are given in Table 1. Interestingly,
single token bug localization and repair has previously only been addressed
through training with mutants (Pradel and Sen, 2018; Hellendoorn et al., 2019;
Vasic et al., 2019; Allamanis et al., 2017; Richter and Wehrheim, 2022a;
Allamanis et al., 2021). Nevertheless, real bug fixes for single token bugs –
which can be employed for training or testing – are available in bug fix
collections such as ManySStuBs4J (Karampatsis and Sutton, 2020) or TSSB-3M
(Richter and Wehrheim, 2022b).
Task description. Throughout this work, we view source code as a sequence of
tokens $\mathcal{T}=t_{0},t_{1},t_{2},\dots,t_{n}$. A single token bug can
then be fixed by replacing a single token $t_{l}$ with another token $r$ in
the same scope ($r=t_{l^{\prime}}$) or coming from an external vocabulary
($r\in V$). To effectively localize and repair a single token bug, the
following three tasks have to be performed: (1) the program $\mathcal{T}$ has
to be classified to contain a bug, (2) the bug location $t_{l}$ has to be
localized and then (3) the correct repair $r$ has to be identified. In
practice, these three tasks are often modeled as token replacement operations.
Let $\mathcal{T}$ be a program containing a single token bug and
$\mathcal{T}^{\prime}$ be the corrected bug-free version, then the
localization and repair model is trained to perform the following operations:
(1) $\mathcal{T}\xrightarrow{\text{replace}(t_{l},r)}\mathcal{T}^{\prime}$ (2)
$\mathcal{T}^{\prime}\xrightarrow{\text{noop}()}\mathcal{T}^{\prime}$
Here, we fix the buggy program $\mathcal{T}$ by replacing $t_{l}$ with $r$ and
therefore translating it into $\mathcal{T}^{\prime}$. Since
$\mathcal{T}^{\prime}$ is bug-free, a change is not required (noop).
In practice, we train models to estimate the likelihood of each possible token
replacement and select the most likely replacement to fix $\mathcal{T}$.
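To make the token-replacement view concrete, the following minimal Python sketch (our own illustration, using the VarMisuse example of Table 1) applies a single replace operation to a token sequence:

```python
def replace(tokens, l, r):
    """Return a copy of `tokens` with the token at position l replaced by r."""
    fixed = list(tokens)
    fixed[l] = r
    return fixed

# Buggy token sequence from the VarMisuse example in Table 1
buggy = ["patches", ".", "remove", "(", "applied", ")"]

# The repair replace(t_4, "patch") turns the buggy program into the fixed one
fixed = replace(buggy, 4, "patch")
assert fixed == ["patches", ".", "remove", "(", "patch", ")"]
```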
### 2.2. Mutation
Motivated by the general absence of real bug fixes at a sufficient scale,
previous learning-based localization and repair approaches (Hellendoorn et
al., 2019; Vasic et al., 2019; Richter and Wehrheim, 2022a; Allamanis et al.,
2021) mainly focused on training on mutants. Mutants are artificially
generated (pseudo-)bugs that are introduced into a correct program via a
mutation operator. For single token bugs, the mutation operator can be seen as
a token replacement operator which can be inverted by a localization and
repair model:
(3)
$\mathcal{T}\xrightarrow{\text{mutate}(t_{l},r)}\mathcal{T}^{\prime}\xrightarrow{\text{replace}(t_{l},r^{-1})}\mathcal{T}$
For a dataset of bug-free programs (e.g. mined from open source projects), the
mutation operator first introduces a token mutation by replacing a random
token with a random other token. The token types are often constrained (e.g.
to binary operators) such that the program remains interpretable after the
transformation. Afterwards, the localization and repair model is trained to
invert the mutation process to obtain the original program.
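As a concrete illustration (a deliberate simplification, with our own function names), a random single-token mutation operator over binary operators could look as follows; the returned pair is exactly the replace operation the model is later trained to predict:

```python
import random

BIN_OPS = ["==", "!=", "<", "<=", ">", ">="]  # one possible token class

def mutate(tokens, rng=random):
    """Replace one random binary operator by another operator of the same
    class; returns the mutant and the inverting repair (location, original)."""
    sites = [i for i, tok in enumerate(tokens) if tok in BIN_OPS]
    if not sites:
        return None  # nothing to mutate in this snippet
    l = rng.choice(sites)
    r = rng.choice([op for op in BIN_OPS if op != tokens[l]])
    mutant = list(tokens)
    mutant[l] = r
    return mutant, (l, tokens[l])
```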
While mutation operators are traditionally designed as a random process,
previous work also tried to design more realistic mutation operators by
learning from real bug fixes (Patra and Pradel, 2021), by training an
adversary to the repair model (Allamanis et al., 2021) or by finding
replacements that naturally fit the context (Richter and Wehrheim, 2022a).
### 2.3. Real bug fixes
Real bug fixes are often obtained by scraping the commit history of public
open source projects. During this process, commits are often classified as bug
fixing based on certain keywords in the commit message (Karampatsis and
Sutton, 2020). Even though this process cannot guarantee that every collected
commit is a real bug fix, it has been empirically shown (Karampatsis and
Sutton, 2020) that the process is highly precise (e.g. over 90% of all
collected code changes were real bug fixes). In this work, we are interested
in special types of single token bug fixes. Here, a bug is fixed by replacing
only a single token:
(4) $\mathcal{T}_{i}\xrightarrow{\text{replace}(t_{l},r)}\mathcal{T}_{i+1}$
Note that a (bug fixing) commit only represents a snapshot of the project at
time $i$. Therefore, while it is highly likely that $\mathcal{T}_{i}$ contains
a single token bug which can be fixed by $\text{replace}(t_{l},r)$, we cannot
guarantee that the bug fix is complete and that $\mathcal{T}_{i+1}$ is bug-free.
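A minimal sketch of such a keyword-based commit filter is shown below; the keyword list is purely illustrative and not the exact one used by Karampatsis and Sutton (2020):

```python
BUGFIX_KEYWORDS = ("fix", "bug", "fault", "defect", "error", "repair")

def is_bugfix_commit(message):
    """Heuristically classify a commit as bug-fixing from its message."""
    msg = message.lower()
    return any(keyword in msg for keyword in BUGFIX_KEYWORDS)

assert is_bugfix_commit("Fix off-by-one in parser")
assert not is_bugfix_commit("Add documentation for the CLI")
```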
## 3\. Methodology
In this section, we introduce RealiT as an effective training technique for
bug localization and repair with Transformers. We start by giving a general
overview of the training process for transferring and improving the
performance of a localization and repair model trained solely on mutants.
Afterwards, we discuss the Transformer-based architecture used during training
and the inference strategy we apply for localizing and repairing single token
bugs in more detail.
Figure 2. Overview of the RealiT training and evaluation process: (1) training
on mutants generated from a Github code corpus (with supersampling of correct
snippets), (2) fine-tuning on real bug fixes, and (3) evaluation of
localization and repair on a real test set.
### 3.1. RealiT: Training on mutants and real bugs
To exploit both mutants and real bug fixes, we design RealiT as a
pre-train-and-fine-tune approach. We therefore perform the training in two phases
(as shown in Figure 2). In the first pre-training phase, we train RealiT on
artificially generated mutants introduced into source code obtained by mining
public open source repositories. Afterwards, in the second fine-tuning phase,
we employ the pre-trained version of RealiT to further train it on real bug
fixes.
Pre-training with code mutants. During pre-training, we train our model
similar to the way current localization and repair models are trained
(Hellendoorn et al., 2019). Here, the training objective is not to identify
real bugs but rather to identify and transform mutated code snippets back into
the real code snippets. For this task, we naturally start with a general
corpus of Github code snippets (e.g. function implementations). This corpus
can often easily be obtained by mining the recent version of popular open
source projects. Since bugs are scarce in open source projects, we can safely
assume that most of the code snippets are likely correct. For training, we
mutate each code snippet in our corpus at most $k$ times, which produces a
dataset of at most $k$ unique mutants per code snippet. (The number of code
rewrites applicable to a code snippet, and hence the number of unique mutants
per code snippet, is limited by design and might be lower than $k$. We never
introduce mutation duplicates.) During our experiments, we decided to employ
an unusually large number of mutants per code snippet ($k$ = 100) since we
observed that this improves the performance after fine-tuning. We employ the
original code corpus as training examples of unmutated correct code. Based on
the two datasets, RealiT is then trained to (1) distinguish mutants from real
code, (2) identify the mutated location (if any) and (3) find the original
replaced token. Since the dataset of mutants is up to $k$ times larger than
the set of correct code snippets (by construction), we additionally
supersample each correct code snippet such that RealiT is trained on correct
and mutated code snippets at the same frequency. (During training, mutants and
correct programs are sampled at the same frequency; as the set of mutants is
up to $k$ times larger, we uniformly (super-)sample correct programs multiple
times to match the number of mutants seen during training.) This avoids
biasing the model towards detecting mutants.
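A sketch of this data construction (our own simplification; mutate_fn stands for any single-token mutation operator such as the one sketched in Sec. 2.2) could look as follows:

```python
def build_pretraining_set(corpus, mutate_fn, k=100):
    """Build (snippet, target) training pairs: up to k unique mutants per
    snippet, plus the correct snippet supersampled to the same frequency."""
    examples = []
    for tokens in corpus:
        mutants, seen = [], set()
        for _ in range(k):
            result = mutate_fn(tokens)
            if result is None:
                break
            mutant, target = result
            key = tuple(mutant)
            if key not in seen:          # never introduce mutation duplicates
                seen.add(key)
                mutants.append((mutant, target))
        examples.extend(mutants)
        # Supersample the correct snippet: one NOOP example per mutant
        examples.extend([(tokens, None)] * len(mutants))
    return examples
```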
Learning to fix real bugs with bug fixes. In the second phase, we aim to
further optimize the performance of RealiT for localization and repair of real
bugs by fine-tuning on real bug fixes. For the fine-tuning process, we adopt a
pre-trained version of RealiT obtained from the previous phase. Then, we
continue the training, now on realistic buggy and bug-free code. As examples
of realistic buggy code, we employ the code related to a real bug fix before
the fix is applied. Bug-free code is again obtained from the original
Github corpus. During training, RealiT is now fine-tuned to (1) distinguish
real buggy code from bug-free code, (2) identify the bug location (if any) and
(3) imitate the original bug fix. Since the code corpus is now usually much
larger than the set of bug fixes, we supersample the buggy programs to match
the correct programs in frequency.
We believe that pre-training and fine-tuning can have an orthogonal effect on
the localization and repair model (which we aim to explore during our
evaluation). Due to the mutation process, the pre-training phase is more
tailored towards identifying correct programs and deviations from them. In
contrast, the fine-tuning phase aims to teach the model the difference between
real correct programs and real buggy programs (and how they can be fixed).
### 3.2. Model architecture
In this section, we discuss the neural model employed by RealiT to learn
localization and repair of single token bugs. Since our main focus is to study
the effect of mutants and real bug fixes on the training process, we employ a
fairly standard Transformer-based model (Hellendoorn et al., 2019) for
learning bug localization and repair.
Probabilistic model. For the task of single token localization and repair,
programs are represented as a sequence of tokens
$\mathcal{T}=t_{0},t_{1},t_{2},\dots,t_{n}$ where each token $t_{l}$
represents a potential bug location. Single token bugs are fixed only by
replacing single tokens $t_{l}$ with another token $r$
($\text{replace}(t_{l},r)$). In the following, we model localization and
repair as a joint probability distribution over all potential bug locations
and repairs $\\{\langle l,r\rangle\mid
t_{l}\in\mathcal{T}\cup\\{\mathtt{NOOP}\\}\text{ and }r\in\mathcal{T}\cup
V\\}$:
(5) $p(\langle
l,r\rangle\mid\mathcal{T})=p_{\text{loc}}(l\mid\mathcal{T})\cdot
p_{\text{repair}}(r\mid l,\mathcal{T})$
Here, localization and repair is factorized into first localizing a bug
location ($p_{\text{loc}}(l\mid\mathcal{T})$) and then finding a repair
dependent on the bug location ($p_{\text{repair}}(r\mid l,\mathcal{T})$). For
localization, we include a special NOOP location that indicates that
$\mathcal{T}$ is bug-free and no repair is necessary. In practice, we
implement the probability distributions similar to pointer networks (Gu et
al., 2016) (with the addition of an external vocabulary for repair).
Neural code representation. To obtain a neural code representation, we learn a
neural encoding function $\textbf{e}(t_{i})$ that maps each token $t_{i}$ to a
vector representation. For this, we employ a BPE subtoken encoder (Sennrich et
al., 2016) (with a vocabulary of 10K subtokens) to obtain an initial token
embedding by averaging the embeddings of its subtokens. Afterwards, we encode
the sequence of token embeddings via a Transformer encoder (Devlin et al.,
2019) (with relative position encodings (Shaw et al., 2018)) to obtain a
contextualized vector representation $\mathbf{r}_{i}=\textbf{e}(t_{i})$.
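A hedged PyTorch sketch of this encoding pipeline is shown below. It omits the relative position encodings for brevity, assumes subtoken id 0 is padding, and uses illustrative module names rather than the authors' implementation:

```python
import torch
import torch.nn as nn

class TokenEncoder(nn.Module):
    def __init__(self, vocab_size=10_000, d_model=512, n_layers=6, n_heads=8):
        super().__init__()
        self.subtoken_emb = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, subtoken_ids):
        # subtoken_ids: (batch, n_tokens, max_subtokens); id 0 is padding
        mask = (subtoken_ids != 0).unsqueeze(-1).float()
        emb = self.subtoken_emb(subtoken_ids) * mask
        token_emb = emb.sum(2) / mask.sum(2).clamp(min=1)  # average subtoken embeddings
        return self.encoder(token_emb)  # contextualized r_i per token
```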
Localization & Repair module. To finally compute the probability distribution
over bug locations and repairs, we employ individual modules for localization
and repair based on the computed vector representation $\mathbf{r}_{i}$. The
localization module is a multilayer perceptron that computes a bugginess score
for each potential bug location based on the vector representation
$\mathbf{r}_{i}$ and the original token embedding. The objective of the
localization module is to learn how likely the token $t_{i}$ does not fit its
surrounding context (represented by $\mathbf{r}_{i}$). The localization
probability is computed as a softmax distribution over all potential bug
locations. The repair module is designed similar to CopyNet (Gu et al., 2016).
Given the vector representation $\mathbf{r}_{i}$ of a potential bug location,
the repair module computes a repairing score between the bug location and each
repair candidate at token $t_{j}$ (represented by $\mathbf{r}_{j}$). In
addition, a similar score is obtained based on token embeddings of an external
vocabulary $V$ (e.g. other binary operators). The repair probability score is
then computed as softmax distribution between all repair candidates.
### 3.3. Finding and repairing real bugs
After training, the localization and repair model is typically confronted
with new, unseen programs with the objective of identifying a potential bug
and repairing it. This is typically done (Vasic et al., 2019) by finding the
most likely repair for the most likely bug location (according to the model).
However, the most likely repair at the most likely bug location might not
always be meaningful. For example, while the model might be confident that a
bug is located at a certain location, there might not be a suitable repair
candidate that can actually fix the bug. For this reason, we propose an
alternative strategy: instead of taking the most likely repair for the most
likely bug location, we search for the most likely meaningful combination of
bug location and repair (thus ignoring bug localizations that cannot be fixed
by the model).
Beam search decoding. RealiT implements this search strategy via beam search
decoding (Kalchbrenner and Blunsom, 2013). Here, we iterate over the top-$k$
bug locations according to the model, and for each bug location we again
search for the top-$k$ token repairs. During this process, we only keep the
pairs with the highest joint likelihood. Afterwards, we filter the candidate
pairs down to meaningful repairs: if the model predicts a bug location, then
it should be fixed by a different token of the same type. Combinations of the
special NOOP location and repair are always meaningful (since nothing will be
changed). Finally, as a result, RealiT computes the most likely meaningful
repair operation:
(6) $\text{replace}(t_{l^{\prime}},r^{\prime})\text{ with }\langle l^{\prime},r^{\prime}\rangle=\text{argmax }p(\langle l,r\rangle\mid\mathcal{T})$
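The following sketch illustrates this decoding strategy; `is_meaningful` is a hypothetical stand-in for the type-compatibility filter described above, and the probability tensors are assumed to be precomputed:

```python
import torch

def decode(p_loc, p_repair, tokens, is_meaningful, k=5):
    """Return the most likely meaningful <location, repair> pair (Eq. 6)."""
    candidates = []
    top_loc = torch.topk(p_loc, k)
    for l, pl in zip(top_loc.indices.tolist(), top_loc.values.tolist()):
        top_rep = torch.topk(p_repair[l], k)
        for r, pr in zip(top_rep.indices.tolist(), top_rep.values.tolist()):
            if is_meaningful(tokens, l, r):  # e.g. NOOP pair, or same-type token
                candidates.append((pl * pr, l, r))  # joint likelihood
    if not candidates:
        return None  # the model cannot produce a meaningful repair
    _, l_best, r_best = max(candidates)
    return l_best, r_best
```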
### 3.4. Implementation
To effectively measure the impact of mutants and real bug fixes on the
training process and to exclude variance due to implementation details, we
implemented RealiT together with several baselines considered during our
evaluation in a unified framework
(https://github.com/cedricrupb/nbfbaselines). In particular, we followed the
design of Hellendoorn et al. (Hellendoorn et al., 2019) by implementing a
common localization and repair framework with exchangeable components (e.g.
token encoder, localization and/or repair modules). In the process, we
reimplemented or reused state-of-the-art components for all employed
subcomponents. For example, RealiT and all Transformer-based baselines are
built upon the official BERT implementation from the transformers library
(Wolf et al., 2020). The localization and repair modules, together with the
graph-based baselines, closely follow the implementation of the PyBugsLab
model (Allamanis et al., 2021). In addition, we reimplemented the code
preprocessing pipelines for tokenization (Hellendoorn et al., 2019) and graph
construction (Allamanis et al., 2021) (used for the graph-based baselines) in
independent libraries to facilitate reuse. Finally, we plan to release all
trained checkpoints of RealiT and all evaluated models. We think that these
are not only valuable for reproducing our results but also provide easy
access to effective models for neural bug localization and repair.
## 4\. Evaluation
We evaluate RealiT on localization and repair of single token bugs in Python.
To guide our evaluation, we specifically designed individual experiments to
address the following research questions:
RQ1:
Can RealiT improve the single token bug localization and repair performance in
comparison to techniques purely trained on mutants?
RQ2:
Is pre-training on mutants necessary for achieving a high localization and
repair performance?
RQ3:
Can training with mutants alone be sufficient for achieving a high
performance?
RQ4:
Are real bug fixes still helpful if the number of real bug fixes available for
training is further limited?
In RQ1, we compare RealiT with various techniques purely trained on mutants.
RQ2 and RQ3 are designed to explore the effect of mutation and real bug fixes
on the training process. In particular, since real bugs are hard to obtain, we
are interested in whether they are really necessary for training effective bug
localizers and repairers. Finally, in RQ4, we explore how many real bug fixes
are necessary in practice to improve the localization and repair performance.
### 4.1. Bug types
To make use of both mutants and real bug fixes, we require bug types that can
be introduced by existing mutation operators and for which examples of real
bug fixes are available. For this reason, we focus on the four single token
bug types in Python that can be generated by existing mutation operators
(Derezińska and Hałas, 2014) and for which we can obtain real bug fixes for
training and evaluation (Richter and Wehrheim, 2022b). In the following, we
describe each bug type together with the employed mutation operator in more
detail.
Variable Misuse. As the main carrier of the program state, variables are
abundant in source code. Therefore, variable misuses easily occur when a
developer accidentally uses the wrong variable name instead of the intended
one. As specified by Allamanis et al. (Allamanis et al., 2017), the usage of a
wrong variable is considered a variable misuse if the wrong usage refers to a
local variable and can be fixed by replacing it with another locally defined
variable.
Mutator: For generating variable misuses, the mutator replaces a usage of a
locally defined variable with another random variable defined in the same
context.
Wrong Binary Operator. As a traditional example of a mutation type in
mutation testing (Derezińska and Hałas, 2014), wrong binary operator bugs
appear when a binary operator is corrupted with a type-equivalent operator
(e.g. == is replaced by !=, but not by <<). For training RealiT, we consider
all types of binary operators, including Boolean, arithmetic, comparison and
bitvector operators.
Mutator: Wrong binary operator mutants are generated by replacing a binary
operator with another random binary operator of the same type.
Wrong Unary Operator. In addition to binary operators, we also consider two
types of wrong unary operator bugs: logical and arithmetic negation. In
contrast to binary operators, wrong unary operators are often not replaced but
primarily occur when a unary operator is missing or accidentally added. This
includes for example a forgotten logical negation in a condition or an
accidentally added arithmetic inversion during a calculation.
Mutator: Wrong unary operator mutants are generated by randomly dropping or
inserting negation operators (either arithmetic or logical) in front of an
identifier. To ensure that inserted negations are semantically meaningful,
negations are inserted depending on the context (e.g. logical negations in
conditions and arithmetic negations in arithmetic expressions).
Wrong Literal. Another common bug type produced by mutation operators is
literal replacement. Similar to mutation testing (Derezińska and Hałas,
2014), we are limited to literal replacements from finite sets of common
literal types. This naturally includes Boolean literal replacement by
replacing True with False and vice versa, but also integer replacements from
the set {-2, -1, 0, 1, 2}.
Mutator: Mutants are generated by replacing literals from a set of predefined
literals with another random literal of the same set and type.
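To illustrate, a minimal mutation operator of this kind for comparison operators could look as follows; this is a sketch based on Python's `ast` module, not the exact operators of Derezińska and Hałas (2014):

```python
import ast
import random

# Type-equivalent comparison operators (== may become !=, but not <<).
COMPARISON_OPS = [ast.Eq, ast.NotEq, ast.Lt, ast.LtE, ast.Gt, ast.GtE]

def mutate_comparison(source):
    """Return a mutant with one comparison operator randomly replaced."""
    tree = ast.parse(source)
    compares = [n for n in ast.walk(tree) if isinstance(n, ast.Compare)]
    if not compares:
        return None  # nothing to mutate
    node = random.choice(compares)
    old_op = node.ops[0]
    choices = [op for op in COMPARISON_OPS if not isinstance(old_op, op)]
    node.ops[0] = random.choice(choices)()  # e.g. turn == into !=
    return ast.unparse(tree)  # requires Python >= 3.9
```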
### 4.2. Datasets
For training and evaluating RealiT, we require two types of datasets: a
general Github corpus of code snippets and a dataset of real bug fixes. To
achieve comparable results, we employ existing datasets (if available). For
the same reason, we also decided to focus on bugs in Python function
implementations.
Github code corpus. As a general corpus of Python code, we employ the ETH
Py150k dataset (Raychev et al., 2016) containing over 150k program files from
popular Python projects. The dataset is split into 100k files for training and
50k files for testing. During our evaluation, we employ the same split and,
hence, train only on Python functions obtained from train split. The test
split is used for evaluating the performance on mutants. We extract all top
level functions and deduplicate the datasets such that the Python functions
used for training only occur once in the training dataset and do not occur in
the test split. In total, our training corpus contains more than 360k Python
function implementations (after filtering).
Real bug fixes. For obtaining real bug fixes at a sufficient scale, we employ
the SSB-9M dataset (Richter and Wehrheim, 2022b) of over 9M general single
statement bug fixes in Python. The dataset does not include the necessary
implementation code itself but references the original commits in the
repositories in addition to other useful metadata. This includes information
about the code change as a Unix diff and whether the code change appears
inside a function. Based on this information, we first pre-filtered the
dataset for bug fixes that likely fall into one of our bug categories. After
the filtering process, we then mined the function code from the original
repositories. Since not all bug types can be identified purely from the code
difference (e.g. a variable misuse requires that all variables are defined in
scope), we filtered and deduplicated the resulting dataset of buggy Python
functions a second time. This process led to around 35k examples of real bug
fixes that match at least one of our bug types. Finally, we use 33k
examples for training and hold out around 2k examples as a validation set used
during training.
Test benchmarks. We employ two test benchmarks to evaluate the performance on
the localization and repair task. To evaluate the localization and repair
performance on real bugs, we employ the PyPIBugs benchmark (Allamanis et al.,
2021). The benchmark is a dataset of 2374 real-world single statement bugs and
their fixes, derived from open source projects. The benchmark is
hand-filtered, and therefore it is likely that each included bug represents a
real-world bug. We only considered single token bugs (which excludes argument
swaps) in functions where the implementation is still publicly accessible (the
benchmark consists of references to bug fixing commits, and we found that not
all of them were publicly accessible at the time of writing). This produced a
test benchmark of 2028 real-world bugs. To avoid an
overlap between train and test set, we excluded all near duplicates
(Allamanis, 2019) from our training datasets. Additionally, we also employ the
test portion of the Github corpus as a mutant benchmark. For this, we extend
the corpus of correct code snippets with up to 9 mutants per snippet.
Table 2. Evaluation results for bug detection and repair on mutants and real bug fixes

| Model | FPR% | Real Bugs (PyPIBugs): Joint | Loc. | Repair | Mutants: Joint | Loc. | Repair |
|---|---|---|---|---|---|---|---|
| RNN (Vasic et al., 2019) | 33.92 | 9.47 | 13.36 | 47.88 | 52.39 | 61.49 | 80.06 |
| Transformer (Hellendoorn et al., 2019) | 25.59 | 18.98 | 23.02 | 59.52 | 74.04 | 81.26 | 88.74 |
| GNN (Allamanis et al., 2021) | 25.29 | 18.24 | 23.82 | 53.74 | 66.11 | 75.08 | 84.59 |
| GREAT (Hellendoorn et al., 2019) | 29.98 | 19.03 | 23.62 | 56.31 | 70.76 | 78.84 | 86.99 |
| RealiT (ours) | 29.53 | 39.00 | 44.23 | 73.52 | 67.09 | 75.95 | 85.66 |
| RealiT - without beam search decoding | 22.96 | 36.69 | 41.86 | 73.52 | 65.27 | 74.08 | 85.66 |
| RealiT - without pre-training on mutants | 19.32 | 12.67 | 16.07 | 40.38 | 2.37 | 8.58 | 31.34 |
| RealiT - without fine-tuning on real bug fixes | 33.41 | 25.10 | 30.92 | 65.53 | 77.41 | 84.49 | 90.18 |
| RealiT - with fine-tuning on postfix mutants | 27.72 | 27.66 | 32.64 | 67.40 | 78.95 | 85.12 | 90.55 |
| RealiT - with reduced mutant frequency (5x) | 31.75 | 33.28 | 38.56 | 68.59 | 65.32 | 74.74 | 83.90 |
## 5\. Results
In this section, we discuss our evaluation results with the ultimate goal of
answering our research questions.
### 5.1. RQ1: RealiT in comparison?
For answering the first research question, we evaluate whether RealiT improves
the single token bug localization and repair performance by training with real
bug fixes. Since we are interested in the impact of real bug fixes on the
training process, we compare our results with several baseline algorithms
trained purely on mutants. For the comparison, we consider bug localization
and repair models based on recurrent neural networks (RNN) (Vasic et al.,
2019), Transformers (absolute positions) (Hellendoorn et al., 2019), graph
neural networks (GNN) (Allamanis et al., 2021) and GREAT (Hellendoorn et al.,
2019). (Note: our evaluation setup differs slightly from (Allamanis et al.,
2021) in that only individual function implementations are considered;
therefore, graph-level information that would require access to the
implementation context cannot be computed.) All baseline models are trained in
a supervised setting purely on mutants. The training dataset is constructed
similarly to the pre-training dataset used for RealiT (with $k$ = 5 mutants
injected). The baselines are trained for 300 epochs (200k examples per epoch)
with early stopping on our validation set. For RealiT, we skip the last epoch
and instead fine-tune on real bug fixes.
Real-world performance. We start by considering the performance of RealiT on
our real-world benchmark. Table 2 provides an overview of our evaluation
results. We measured the joint accuracy of localizing and repairing real bugs
(Joint), in addition to the localization accuracy (Loc.) of finding the bug
location and the repair accuracy (Repair) of finding the real bug fix given
the bug location. In this section, we focus on the upper part of the table and
consider the results on our real-world benchmark.
We observe that RealiT significantly outperforms all baseline algorithms
trained purely on mutants, both in localization and repair of real single
token bugs. Interestingly enough, we find that the highest relative gain
obtained from fine-tuning on real bug fixes is achieved for the localization
performance (with a nearly 2x improvement). This indicates that for effective
localization of human-made bugs we actually need to learn from human-made bug
fixes. Still, bug localization remains harder than bug repair, as RealiT can
fix more than 73% of all bugs when the bug location is given. Therefore, it is
important to investigate better strategies for bug localization (potentially
by integrating techniques from static analysis).
Finally, the fact that significant performance improvements are observable in
both localization and repair suggests that real bug fixes exhibit exploitable
statistics for localization and repair. We will further explore this in RQ2
and RQ3.
Localization and repair of mutants. While our ultimate goal is to find and fix
real bugs, we also measured the localization and repair accuracy of RealiT for
artificial mutants. Surprisingly, we observe that RealiT performs worse than
most baseline models both in localization and repair after fine-tuning on real
bug fixes. Interestingly, this is not a limitation of the RealiT model, as the
version of RealiT trained purely on mutants performs competitively with or
even better than all baselines in localizing and repairing mutants. Therefore,
fine-tuning on real bugs encourages RealiT to “forget” some (potentially
spurious) patterns that were used to detect mutants but do not help in
identifying real bugs. In addition, this provides further evidence that there
might exist mutants that either do not represent real bugs or represent bugs
that are highly unlikely to appear in reality. Finally, this observation is
also interesting for the evaluation of localization and repair models. Since
real-world performance and performance on mutants are clearly not correlated
once the model is fine-tuned on real bug fixes, performance gains on mutants
that are independent of real-world performance become difficult to interpret.
False positives. We also measured the false positive rate (FPR) of RealiT on
bug-free code snippets. Here, we employ the original test set of our Github
corpus. Our results are also summarized in Table 2. We observe that RealiT has
a false positive rate comparable to the other baselines – only outperformed by
the Transformer and GNN. However, we believe that an increase of 3% in false
positives is still acceptable, as RealiT localizes and fixes nearly twice as
many real bugs. In addition, we also evaluate a version of RealiT without beam
search decoding (i.e. using the repair with the highest likelihood). The
results are shown in the lower part of Table 2. We observe that while beam
search decoding improves the localization performance by up to 3%, it also
induces a worse false positive rate compared to the model without beam search
decoding. This is a common trade-off: a higher localization performance comes
at the cost of a worse false positive rate.
### 5.2. RQ2: Are mutants necessary?
As we have seen in RQ1, fine-tuning on real bug fixes does improve
localization and repair of real bugs. This raises the question of whether
mutants are necessary for the performance gain or whether the same performance
can be achieved with real bugs alone. To answer this question, and therefore
RQ2, we trained two additional versions of RealiT: (1) a version of RealiT
that is not pre-trained on mutants and (2) a version of RealiT that is not
fine-tuned on real bug fixes. We evaluate both versions again on all
benchmarks.
Mutants vs real bugs. We start by comparing the two new versions of RealiT.
Our evaluation results are summarized with all other results in Table 2. We
observe that training on mutants outperforms training on real bug fixes only.
It is likely that RealiT overfits the smaller training dataset of real bug
fixes and therefore fails to generalize to completely new, unseen bugs. In
contrast, the version purely trained on mutants has learned from a variety of
mutants during training (some of which are likely similar to real bugs).
However, when evaluated on bug-free code snippets only, we see that RealiT
trained only on real bug fixes clearly outperforms all other techniques in
terms of false positive rate. This could again indicate that some mutants in
the training dataset are not bug inducing (e.g. a mutation that replaces <=
with != without changing the function behavior), which biases the model
towards detecting these structures in bug-free code.
Training with mutants and real bugs. We now compare the two variants of RealiT
with our original RealiT model. We find that fine-tuning on real bug fixes
significantly improves the performance of RealiT over the already strong
baseline of training on mutants alone. Interestingly enough, this holds not
only for localization and repair of real bugs but also for the false positive
rate on bug-free code. This shows that pre-training on mutants and fine-tuning
on real bugs combines the strengths of both: high localization and repair
performance (from training on mutants) and bug detection accuracy (from
training on real bugs). Therefore, we see that mutants are necessary to
achieve the high performance of RealiT, but fine-tuning on real bugs provides
additional improvements.
Effect on individual bug types. Since the effects of pre-training and
fine-tuning seem to be complementary, we are also interested in how the
training affects the performance on individual bug types. Table 3 summarizes
our results on the real-world test benchmark, divided into single token bug
types. First of all, we again find that training on both mutants and real bugs
does improve performance in both localization and repair on all bug types.
However, the margin of the performance gain depends on the bug type. For
example, we see the highest improvement for Wrong Binary Op, where training on
real bugs alone already yields high performance. Regarding our research
question for individual bug types: pre-training on mutants can also be crucial
for the performance on individual bug types (we observe a significant
improvement for at least three bug types: Wrong Assign Op, Wrong Literal and
Variable Misuse).
Table 3. Evaluation results for bug detection and repair on different bug types

| Bug type | RealiT: Joint | Loc. | Repair | Mutants only: Joint | Loc. | Repair | No Mutants: Joint | Loc. | Repair |
|---|---|---|---|---|---|---|---|---|---|
| Wrong Assign Op | 20.45 | 29.54 | 70.45 | 9.10 | 13.64 | 52.27 | 2.27 | 2.27 | 65.91 |
| Wrong Binary Op | 56.34 | 59.15 | 84.51 | 14.08 | 28.17 | 39.44 | 30.99 | 35.21 | 70.42 |
| Wrong Boolean Op | 42.31 | 42.31 | 95.05 | 23.08 | 24.18 | 93.41 | 21.43 | 21.97 | 81.87 |
| Wrong Comparison Op | 36.95 | 51.47 | 67.00 | 19.70 | 35.22 | 57.64 | 23.40 | 33.74 | 57.14 |
| Wrong Literal | 24.42 | 32.56 | 76.74 | 19.77 | 22.09 | 77.91 | 9.30 | 12.79 | 46.51 |
| Variable Misuse | 39.87 | 42.62 | 71.75 | 28.73 | 31.88 | 65.13 | 7.43 | 9.04 | 25.75 |
### 5.3. RQ3: Are mutants sufficient?
Our evaluation for RQ2 has shown that training on mutants is crucial for
obtaining high-performing RealiT models. Still, it is not clear whether
mutants on their own can be sufficient for training RealiT. In other words,
there might exist a mutation configuration that achieves the same performance
when training on mutants alone, based on the same base datasets. To answer
RQ3, we designed several experiments.
Mutation frequency. We trained several versions of RealiT by varying the
mutation frequency (up to 1x, 3x, 5x, 10x, 100x and 1000x unique mutants per
code snippet). For the comparison, we measured the performance of each trained
model before and after fine-tuning on real bug fixes. The models are evaluated
on our real bugs validation set. Figure 3 gives an overview of our results for
bug localization and repair accuracy independently. The configuration 0x
represents a version of RealiT trained only on real bug fixes. First of all,
in contrast to common belief (Hellendoorn et al., 2019), we observe that
increasing the number of mutants up to 100x generated mutants per code snippet
leads to a performance improvement for both localization and repair (we
observe the same trend for joint localization and repair, which is not shown
here for brevity). This is surprising, as the number of unique mutants per
code snippet is limited (with an average of 85 unique mutants per code
snippet) and, hence, buggy programs with more mutant candidates are
oversampled. Still, we found that increasing the limit of mutants beyond 100x
(and thereby oversampling code snippets in our dataset that provide up to 200k
unique mutants) actually decreases the localization and repair performance.
Now, when also considering the performance on the validation set after fine-
tuning, we find that fine-tuning always provides a significant boost over the
models trained solely on mutants for both localization and repair accuracy.
However, we still observe that the performance gain for localization is higher
than for repair (especially as we increase the number of mutants).
Surprisingly, we also observe that the gap between the model performances
before and after fine-tuning on real bug fixes shrinks as we increase the
number of mutants generated per code snippet (up to 100x). While this could
indicate that simply scaling the number of mutants can be sufficient for
achieving a high repair accuracy, the gap actually starts increasing again
after scaling beyond 100x mutants per code snippet. Therefore, we can conclude
that while mutants alone can significantly improve performance, they are not
sufficient in our training setup to achieve the same high-performing
localization and repair models as obtained by fine-tuning on real bug fixes
with RealiT.
Figure 3. Effect of mutant frequency during training on the real-world
validation performance: (a) localization accuracy, (b) repair accuracy. The
gray dotted line represents the average number of unique mutants that can be
generated per code snippet.
Training with postfix mutants. While it seems that mutants introduced in
arbitrary code from our code corpus are not sufficient for closing the
performance gap to fine-tuned models, it is unclear whether mutants can be
sufficient when introduced in an implementation context that is more typical
for a real bug. To test this, we designed an additional experiment where we
fine-tuned RealiT on postfix mutants (i.e. mutants that are produced by first
applying the real bug fix and then reintroducing a bug with a mutation
operator). Our results are also shown in Table 2. We observe that even
fine-tuning on postfix mutants provides a slight boost in performance, both on
localization and repair of mutants and of real bugs. Surprisingly, the boost
is slightly higher for real bugs than for mutants even though we only
increased the number of mutants. Still, we find that a model trained on real
bug fixes clearly outperforms a model trained on postfix mutants when
evaluated on real bug fixes. Since the performance of detecting mutants does
not decrease when training on postfix mutants (similar to scaling the number
of mutants generated), we conclude that the performance gain is likely a
scaling effect and cannot necessarily be attributed to the mutation context.
In total, we find that training on mutants alone is not sufficient for
achieving a high-performing RealiT model. In addition, our results show that
training on real bug fixes is especially helpful for the localization of real
bugs, which is hard to obtain by training on mutants alone.
### 5.4. RQ4: How many bug fixes are necessary?
Obtaining a dataset of multiple thousand real bug fixes can be challenging,
especially if the considered bug type occurs less frequently in open source
projects. For this reason, we aim to explore how much the size of the
fine-tuning dataset (the number of real bug fixes) influences the final
performance of RealiT. Therefore, we evaluate three variants of RealiT
pre-trained on 1x, 5x and 100x mutants per code snippet, which we then
fine-tune on several subsamples of our real bug fix dataset. We consider
subsamples of 1% (334), 3% (996), 5% (1,658), 10% (3,314), 30% (9,936), 50%
(16,559), 70% (23,180) and 90% (29,802) of all bug fixes. To obtain more
stable results, we fine-tune our models on three subsamples per sample size
and evaluate the fine-tuned models on our validation set. Averaged results for
all three RealiT variants fine-tuned on the generated subsamples are reported
in Figure 4.
Impact of the real bug fix dataset size. We can observe a clear trend that
more real bug fixes lead to improved performance across all RealiT variants.
This holds true even for small fine-tuning datasets of around 1000 bug fixes
(less than 5% of the original dataset size). As reported by the authors of
ManySStuBs4J (Karampatsis and Sutton, 2020) or PySStuBs (Kamienski et al.,
2021), real bug fix collections of this size can be obtained by mining the top
1000 most popular Github projects for the respective language.
Scaling real bug fixes vs. scaling mutants. Although we have seen that
fine-tuning on more real bug fixes increases the performance, it is actually
difficult to scale up the number of real bug fixes (as the number of publicly
accessible projects to mine real bug fixes from is limited). In contrast,
generating more mutants per code snippet is more cost-effective. For example,
to achieve the same performance gain obtained from scaling the number of
mutants generated from 5x to 100x, we would have to fine-tune on at least 10%
of our real bug fix dataset (3,314 bugs). Still, although scaling the number
of mutants is preferable, the scaling effect is limited, as we have seen in
RQ3.
Figure 4. Effect of real bug fixes on the fine-tuning performance on the
validation set. The x-axis is the percentage of the bug fix dataset used for
fine-tuning. Gray dotted lines mark datasets that exceed 1k and 10k examples
respectively.
## 6\. Threats to validity
Although a variety of learning-based bug localization and repair models have
been developed in recent years, there is no universally accepted setup for
training and benchmarking these models. Therefore, even though we implemented
our baselines close to the reference implementations, the resulting trained
models might behave differently than in the setup they were originally
developed for. To still achieve comparable results, we designed our evaluation
to replicate prior studies (Hellendoorn et al., 2019) on neural bug
localization and repair models as closely as possible. For example, we adopted
the same publicly accessible Github corpus ETH Py150k, a similar architectural
design and similar baselines as employed by Hellendoorn et al. (Hellendoorn et
al., 2019). To further support a wider range of bug types, which allowed us to
exploit the hand-validated benchmark PyPIBugs, we adjusted the architecture
and mutation process similar to Allamanis et al. (Allamanis et al., 2021).
Still, our evaluation results for the baseline algorithms (in Table 2) differ
slightly from the results of prior studies. For example, we found that while
graph-based models such as GNNs and GREAT still perform better in localizing
and repairing real bugs, Transformers show a surprisingly strong performance
on mutants. Note, however, that Hellendoorn et al. (Hellendoorn et al., 2019)
anticipated this result (even though they only evaluated on variable misuses)
when the models are trained for a longer duration – which we did by training
on approximately 2.4x more training examples. In contrast to the results of
Allamanis et al. (Allamanis et al., 2021), we observe that graph-based models
underperform in our evaluation setup, which we attribute to two main
differences: (1) for a fair comparison, all models only have access to the
function implementation without the implementation context, which prohibits
the computation of type-related or call-structure-related information
exploited by the graph-based models, and (2) we trained all models on a
different (potentially smaller) dataset. Although integrating this type of
information and training on a larger dataset would potentially benefit all
baselines, the performance ranking between architectures might differ.
However, since our experiments showed that the performance gain due to
training on real bug fixes is unique and the effect could not be replicated by
training on mutants, we expect that adapting our evaluation setup would have
little to no influence on our evaluation outcome.
## 7\. Related Work
We discuss the most closely related previous or concurrent work that (1)
tackles single token bug localization and repair with alternative training
strategies, (2) exploits real bug fixes for automatic program repair or code
mutations, and (3) considers alternative pre-train-and-fine-tune techniques.
Single token bug localization and repair. The detection and repair of single
token bugs have been explored in previous work (Allamanis et al., 2017; Pradel
and Sen, 2018; Vasic et al., 2019; Hellendoorn et al., 2019; Richter and
Wehrheim, 2022a; Patra and Pradel, 2021; Allamanis et al., 2021). Allamanis et
al. (Allamanis et al., 2017) addressed the detection and repair of variable
misuse bugs (which we also considered in this work) by representing programs
as graphs. Vasic et al. (Vasic et al., 2019) proposed a joint model for the
same task, and Hellendoorn et al. (Hellendoorn et al., 2019) explored alternative
program representations. These techniques all have in common that they do not
learn from real bug fixes but from artificially mutated code. In contrast,
while RealiT employs a similar Transformer-based architecture as discussed by
Hellendoorn et al. (Hellendoorn et al., 2019), we showed that integrating real
bug fixes in the training process is crucial for the localization and repair
of real bugs. More recent work (Richter and Wehrheim, 2022a; Patra and Pradel,
2021; Allamanis et al., 2021) also showed that the quality of training data is
important for effective bug localization and repair. For example, employing a
more realistic mutator (Richter and Wehrheim, 2022a; Patra and Pradel, 2021)
(i.e. a mutator that is more likely to reproduce a real bug) or learning to
inject hard to find bugs (Allamanis et al., 2021) can both improve the
localization and repair performance. However, integrating these approaches
often increases complexity by requiring a mutation operator to be learned
either prior to or concurrently with the training process. With RealiT, we
showed that integrating real bug fixes, while relying on simpler and
easier-to-implement mutation operators, can be sufficient to obtain a
significant improvement in real bug localization and repair performance. Interestingly
enough, a concurrent work (He et al., 2022) also explored whether real bug
fixes have an impact on the performance of learning-based bug detectors.
Similar to RealiT, their model is pre-trained on mutants and then fine-tuned
on real bug fixes. Surprisingly, while the authors found that fine-tuning on real
bug fixes improves precision (i.e. the number of correct programs classified
as buggy), the recall (i.e. the number of real bugs detected and repaired)
actually suffers. In contrast, we find that RealiT improves the number of bugs
detected and repaired significantly while training on real bug fixes can also
decrease the false positive rate. We attribute the difference in our findings
to significant differences in the RealiT training process: (1) the number of
real bug fixes we fine-tune on is several orders of magnitude larger, (2) the
number of mutants generated per code snippet is significantly higher, and (3)
the distribution of buggy and bug-free programs is balanced during both
pre-training and fine-tuning. We believe that especially (3) is key to the
success of RealiT. Training on an unbalanced dataset (with significantly more
bug-free than buggy code) risks that the model defaults to not detecting bugs
(which by design would result in higher precision and lower recall).
Learning from real bug fixes. Real bug fixes are not only a valuable resource
for learning to localize and repair single token bugs but they can also be
effectively exploited for automatic program repair (Tufano et al., 2019b; Chen
et al., 2019; Li et al., 2020; Lutellier et al., 2020; Bader et al., 2019) or
code mutations (Tufano et al., 2019a; Patra and Pradel, 2021). SequenceR (Chen
et al., 2019), for example, learns from thousands of bug fixes to predict one-
line bug patches. Dlfix (Li et al., 2020) and CoCoNuT (Lutellier et al., 2020)
improved the repair performance by proposing more effective learning
strategies. In contrast to RealiT, however, these techniques are designed to
only repair a given program location and, hence, whether a program is buggy
and where the bug has to be fixed has to be known beforehand. In addition,
these techniques are often trained on real bug fixes only without considering
mutants for the training process. We showed that learning from mutants is
actually crucial to achieve high performing models. This observation is also
supported by DrRepair (Yasunaga and Liang, 2020) which showed that pre-
training a repair models on artificial errors improved the repair performance
on syntactic errors. Still, their approach rely on a compiler to detect this
type of bugs. The type of single token bugs which we considered in this work
are typically missed by a compiler.
Code mutation addresses the inverse problem of injecting a bug into a correct
program. Tufano et al. (Tufano et al., 2019a) and Patra and Pradel (Patra and
Pradel, 2021) showed that bug fixes can be effectively leveraged to learn code
mutations by learning to replicate the original bug. Interestingly, Yasunaga
and Liang (Yasunaga and Liang, 2021) showed that repeatedly training a breaker
and a fixer that initially learn from real bug fixes but then provide training
data for each other actually improves the ability of the fixer to repair
syntactic bugs. While our work showed that real bug fixes are also crucial for
bug detection, we believe that exploiting real bug fixes in the mutation
process for training bug detection and repair models can be a promising
direction for future work.
Pre-training and fine-tuning. Pre-training on large corpora of fuzzy data and
then fine-tuning on a specific task with a smaller dataset has been shown to
be highly successful in domains such as natural language processing (Devlin et
al., 2019; Raffel et al., 2020), image processing (Kolesnikov et al., 2020)
and most recently programming language processing (Feng et al., 2020; Kanade
et al., 2020). In contrast to RealiT, these techniques are often pre-trained
on a generic, unrelated task for which data is plentiful before being
fine-tuned on a specific task. RealiT, however, is trained and fine-tuned with
the same architecture and largely the same objective of identifying and
repairing buggy (or mutated) code.
CuBERT (Kanade et al., 2020) showed that pre-training on a generic corpus of
Python code can improve the detection performance on variable misuses.
However, the authors employed mutants instead of real bug fixes in the fine-
tuning phase. In contrast, RealiT is pre-trained on mutants and then fine-
tuned on real bug fixes. A combination of the two approaches, applying RealiT
on top of a pre-trained model, would be interesting, and we leave this open
for future work.
## 8\. Conclusion
In this work, we explore the effect of training on real bug fixes and mutants
on the performance of bug localization and repair models. For this, we propose
RealiT, a novel pre-train-and-fine-tune approach for learning to localize and
repair bugs with Transformers. RealiT can effectively utilize both mutants and
real bug fixes during training by first pre-training on mutants and then
fine-tuning on real bug fixes. Our evaluation on thousands of real bugs
obtained from real Python projects showed that RealiT can significantly
improve the localization and repair of real bugs in contrast to models solely
trained on mutants. In addition, our experiments showed (1) that pre-training
on mutants plays an important role in achieving this performance level, (2)
that mutants alone are, however, not sufficient to unlock the potential of
RealiT and (3) that a high number of real bug fixes is actually necessary for
achieving a high-performing model.
Based on these observations, we see the integration of more realistic data
into the training process of neural bug localization and repair models as
future work. For example, training on more realistic mutants could boost the
performance even before fine-tuning on real bug fixes. In addition, it might
also be interesting to explore the effect of other – even unrelated – types of
bug fixes on the training process of neural bug localization and repair
approaches. Supporting more bug types would also allow us to exploit more of
the real bug fixes found in open source projects.
Finally, RealiT demonstrates that neural bug localization and repair models
can effectively learn from developer mistakes, in the form of real bug fixes,
to localize and repair real bugs.
## References
* Alaboudi and LaToza (2021) Abdulaziz Alaboudi and Thomas D. LaToza. 2021. An Exploratory Study of Debugging Episodes. _CoRR_ abs/2105.02162 (2021). https://arxiv.org/abs/2105.02162
* Allamanis (2019) Miltiadis Allamanis. 2019\. The adverse effects of code duplication in machine learning models of code. In _Proceedings of the 2019 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software_. 143–153.
* Allamanis et al. (2017) Miltiadis Allamanis, Marc Brockschmidt, and Mahmoud Khademi. 2017\. Learning to represent programs with graphs. _arXiv preprint arXiv:1711.00740_ (2017).
* Allamanis et al. (2021) Miltiadis Allamanis, Henry Jackson-Flux, and Marc Brockschmidt. 2021\. Self-supervised bug detection and repair. _Advances in Neural Information Processing Systems_ 34 (2021), 27865–27876.
* Bader et al. (2019) Johannes Bader, Andrew Scott, Michael Pradel, and Satish Chandra. 2019. Getafix: Learning to fix bugs automatically. _Proceedings of the ACM on Programming Languages_ 3, OOPSLA (2019), 1–27.
* Chen et al. (2019) Zimin Chen, Steve Kommrusch, Michele Tufano, Louis-Noël Pouchet, Denys Poshyvanyk, and Martin Monperrus. 2019. Sequencer: Sequence-to-sequence learning for end-to-end program repair. _IEEE Transactions on Software Engineering_ 47, 9 (2019), 1943–1959.
* Derezińska and Hałas (2014) Anna Derezińska and Konrad Hałas. 2014. Analysis of mutation operators for the python language. In _Proceedings of the Ninth International Conference on Dependability and Complex Systems DepCoS-RELCOMEX. June 30–July 4, 2014, Brunów, Poland_. Springer, 155–164.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_. Association for Computational Linguistics, Minneapolis, Minnesota, 4171–4186. https://doi.org/10.18653/v1/N19-1423
* Feng et al. (2020) Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, et al. 2020\. Codebert: A pre-trained model for programming and natural languages. _arXiv preprint arXiv:2002.08155_ (2020).
* Gu et al. (2016) Jiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li. 2016\. Incorporating copying mechanism in sequence-to-sequence learning. _arXiv preprint arXiv:1603.06393_ (2016).
* He et al. (2022) Jingxuan He, Luca Beurer-Kellner, and Martin Vechev. 2022\. On Distribution Shift in Learning-based Bug Detectors. _arXiv preprint arXiv:2204.10049_ (2022).
* Hellendoorn et al. (2019) Vincent J Hellendoorn, Charles Sutton, Rishabh Singh, Petros Maniatis, and David Bieber. 2019\. Global relational models of source code. In _International conference on learning representations_.
* Kalchbrenner and Blunsom (2013) Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent convolutional neural networks for discourse compositionality. _arXiv preprint arXiv:1306.3584_ (2013).
* Kamienski et al. (2021) Arthur V Kamienski, Luisa Palechor, Cor-Paul Bezemer, and Abram Hindle. 2021. Pysstubs: Characterizing single-statement bugs in popular open-source python projects. In _2021 IEEE/ACM 18th International Conference on Mining Software Repositories (MSR)_. IEEE, 520–524.
* Kanade et al. (2020) Aditya Kanade, Petros Maniatis, Gogul Balakrishnan, and Kensen Shi. 2020. Pre-trained Contextual Embedding of Source Code. _CoRR_ abs/2001.00059 (2020). arXiv:2001.00059 http://arxiv.org/abs/2001.00059
* Karampatsis and Sutton (2020) Rafael-Michael Karampatsis and Charles Sutton. 2020. How Often Do Single-Statement Bugs Occur?: The ManySStuBs4J Dataset. In _MSR_. ACM, 573–577. https://doi.org/10.1145/3379597.3387491
* Kolesnikov et al. (2020) Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, and Neil Houlsby. 2020\. Big transfer (bit): General visual representation learning. In _European conference on computer vision_. Springer, 491–507.
* Li et al. (2020) Yi Li, Shaohua Wang, and Tien N Nguyen. 2020. Dlfix: Context-based code transformation learning for automated program repair. In _Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering_. 602–614.
* Lutellier et al. (2020) Thibaud Lutellier, Hung Viet Pham, Lawrence Pang, Yitong Li, Moshi Wei, and Lin Tan. 2020. Coconut: combining context-aware neural translation models using ensemble for program repair. In _Proceedings of the 29th ACM SIGSOFT international symposium on software testing and analysis_. 101–114.
* Merity et al. (2016) Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture models. _arXiv preprint arXiv:1609.07843_ (2016).
* Patra and Pradel (2021) Jibesh Patra and Michael Pradel. 2021. Semantic bug seeding: a learning-based approach for creating realistic bugs. In _Proceedings of the 29th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering_. 906–918.
* Pradel and Sen (2018) Michael Pradel and Koushik Sen. 2018. Deepbugs: A learning approach to name-based bug detection. _Proceedings of the ACM on Programming Languages_ 2, OOPSLA (2018), 1–25.
* Raffel et al. (2020) Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020\. Exploring the limits of transfer learning with a unified text-to-text transformer. _J. Mach. Learn. Res._ 21, 140 (2020), 1–67.
* Raychev et al. (2016) Veselin Raychev, Pavol Bielik, and Martin Vechev. 2016\. Probabilistic model for code with decision trees. _ACM SIGPLAN Notices_ 51, 10 (2016), 731–747.
* Richter and Wehrheim (2022a) Cedric Richter and Heike Wehrheim. 2022a. Learning Realistic Mutations: Bug Creation for Neural Bug Detectors. In _2022 IEEE Conference on Software Testing, Verification and Validation (ICST)_. IEEE, 162–173.
* Richter and Wehrheim (2022b) Cedric Richter and Heike Wehrheim. 2022b. TSSB-3M: Mining single statement bugs at massive scale. _arXiv preprint arXiv:2201.12046_ (2022).
* Sennrich et al. (2016) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016\. Neural Machine Translation of Rare Words with Subword Units. In _Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_. Association for Computational Linguistics, Berlin, Germany, 1715–1725. https://doi.org/10.18653/v1/P16-1162
* Shaw et al. (2018) Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018\. Self-attention with relative position representations. _arXiv preprint arXiv:1803.02155_ (2018).
* Tufano et al. (2019a) Michele Tufano, Cody Watson, Gabriele Bavota, Massimiliano Di Penta, Martin White, and Denys Poshyvanyk. 2019a. Learning how to mutate source code from bug-fixes. In _2019 IEEE International Conference on Software Maintenance and Evolution (ICSME)_. IEEE, 301–312.
* Tufano et al. (2019b) Michele Tufano, Cody Watson, Gabriele Bavota, Massimiliano Di Penta, Martin White, and Denys Poshyvanyk. 2019b. An empirical study on learning bug-fixing patches in the wild via neural machine translation. _ACM Transactions on Software Engineering and Methodology (TOSEM)_ 28, 4 (2019), 1–29.
* Vasic et al. (2019) Marko Vasic, Aditya Kanade, Petros Maniatis, David Bieber, and Rishabh Singh. 2019. Neural program repair by jointly learning to localize and repair. _arXiv preprint arXiv:1904.01720_ (2019).
* Wolf et al. (2020) Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020\. Transformers: State-of-the-Art Natural Language Processing. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations_. Association for Computational Linguistics, Online, 38–45. https://www.aclweb.org/anthology/2020.emnlp-demos.6
* Yasunaga and Liang (2020) Michihiro Yasunaga and Percy Liang. 2020. Graph-based, self-supervised program repair from diagnostic feedback. In _International Conference on Machine Learning_. PMLR, 10799–10808.
* Yasunaga and Liang (2021) Michihiro Yasunaga and Percy Liang. 2021. Break-it-fix-it: Unsupervised learning for program repair. In _International Conference on Machine Learning_. PMLR, 11941–11952.
## Appendix A Model Architectures
For our evaluation, we implemented our baselines in a common code base. All
neural network modules are implemented in PyTorch. In the following, we
discuss the general architecture used for neural bug localization and repair
and the design and hyperparameters individually for all baseline models.
General. All our models follow the general structure proposed by Hellendoorn
et al. (Hellendoorn et al., 2019). The architecture consists of an input
module (for mapping tokens to vectors), a central encoding model, and
localization and repair heads. For constructing our baselines, we change the
central encoding model. The remaining structure stays the same (if not
specified otherwise). For the input module, we use a BPE subtoken encoder with
a vocabulary of 10k subtokens and embed each token by averaging its subtoken
representations.
Similar to Allamanis et al. (Allamanis et al., 2021), we employ dedicated
heads for localization and repair.
For localization, we use an architecture similar to pointer networks (Merity
et al., 2016). Given a program $\mathcal{T}=t_{0},t_{1},\dots,t_{n}$, let
$t_{l}$ be a potential bug location; we compute the initial token embedding
$e_{l}$ and the contextual vector representation $\mathbf{r}_{l}$ coming from
the encoding model. Based on these representations, we compute a bugginess
score for each potential bug location with a simple MLP:
$s_{l}=\mathbf{W}_{2}\sigma(\mathbf{W}_{1}(\mathbf{r}_{l}\,||\,e_{l}\,||\,(\mathbf{r}_{l}-e_{l})))$
Here, $\mathbf{W}_{2}\in\mathbb{R}^{1\times d}$ and
$\mathbf{W}_{1}\in\mathbb{R}^{d\times 3d}$ are learnable projections of the
MLP. The intuition is that the MLP should learn the correct token
representation $\mathbf{r}_{l}$, which would then disagree with the initial
token embedding $e_{l}$ if $t_{l}$ is buggy. We model the distribution
$p_{\text{loc}}$ by a softmax over all bugginess scores.
Based on the same intuition used for localization, we designed our repair
module. Given a potential bug location represented by $\mathbf{r}_{l}$, the
repair module computes a repair score for all other tokens (represented by
$\mathbf{r}_{j}$), similar to an attention mechanism:
$rep_{lj}=\frac{\mathbf{W}_{q}(\mathbf{r}_{l})(\mathbf{W}_{k}(\mathbf{r}_{j}))^{T}}{\sqrt{d}}$
Here, $\mathbf{W}_{q}\in\mathbb{R}^{d\times d}$ and
$\mathbf{W}_{k}\in\mathbb{R}^{d\times d}$ are learnable projections. To
include an external vocabulary $V$, we represent each vocabulary entry by a
learnable vector $v_{j}\in\mathbb{R}^{d}$ and compute a repair score in a
similar way:
$rep_{lj}=\frac{\mathbf{W}_{q}(\mathbf{r}_{l})(v_{j})^{T}}{\sqrt{d}}$
Finally, $p_{\text{repair}}$ is computed by a softmax over all repair scores
(token-based and vocabulary-based together).
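The two heads translate almost directly into PyTorch. The following sketch assumes $\sigma$ is a ReLU (the text does not specify the activation) and operates on a single program; module and method names are illustrative:

```python
import torch
import torch.nn as nn

class LocRepairHeads(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.W1 = nn.Linear(3 * d, d)  # W_1 in R^{d x 3d}
        self.W2 = nn.Linear(d, 1)      # W_2 in R^{1 x d}
        self.Wq = nn.Linear(d, d)      # W_q
        self.Wk = nn.Linear(d, d)      # W_k
        self.d = d

    def loc_scores(self, r, e):
        # s_l = W2 sigma(W1 (r_l || e_l || (r_l - e_l))); sigma assumed ReLU
        h = torch.cat([r, e, r - e], dim=-1)           # (n_tokens, 3d)
        return self.W2(torch.relu(self.W1(h))).squeeze(-1)

    def repair_scores(self, r_l, r_all, vocab_emb):
        # rep_lj = Wq(r_l) Wk(r_j)^T / sqrt(d), likewise for vocabulary entries
        q = self.Wq(r_l)                               # (d,)
        tok = (self.Wk(r_all) @ q) / self.d ** 0.5     # (n_tokens,)
        voc = (vocab_emb @ q) / self.d ** 0.5          # (|V|,)
        return torch.cat([tok, voc])  # softmax over both gives p_repair
```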
We train all models using the Adam optimizer with a learning rate of $10^{-4}$
and a linear warm-up of 800 steps, additionally clipping gradient norms at 1.0
(0.5 for the GNN). Models are trained with a weight decay of 0.1 for
regularization. During training, we consider function implementations with up
to 1024 tokens (1536 nodes for the GNN) and train with minibatch sizes of up
to 12.5K tokens (nodes).
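A hedged sketch of this optimization setup is given below; the exact shape of the warm-up schedule is an assumption, and the function names are illustrative:

```python
import torch

def make_optimizer(model, warmup_steps=800):
    """Adam with weight decay 0.1 and a linear warm-up, as described above."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=0.1)
    sched = torch.optim.lr_scheduler.LambdaLR(
        opt, lambda step: min(1.0, (step + 1) / warmup_steps))
    return opt, sched

def training_step(model, loss, opt, sched, max_norm=1.0):  # 0.5 for the GNN
    opt.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
    opt.step()
    sched.step()
```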
RealiT. We follow the same architectural design for RealiT. As an encoding
model, we employ a 6-layer Transformer encoder (Devlin et al., 2019) with a
hidden size of 512, an intermediate size of 2048 and 8 attention heads. During
training, we use a dropout of 0.1. For encoding the positions of tokens, we
employ relative position encodings (Shaw et al., 2018), as we found that this
performed better. A comparison of the Transformer with and without relative
position encodings can be found in Table 2 ("RealiT - without fine-tuning on
real bug fixes" vs "Transformer").
GNN. For the graph neural network baseline, we followed the design of
Allamanis et al. (Allamanis et al., 2021) as closely as possible. We
reimplemented the GNN based on the reference implementation provided by the
authors. The GNN consists of 8 message propagation layers with a skip
connection between the first and the fourth layer and between the fourth and
the eighth layer. The node hidden size is set to 256.
In addition, we also adapted the general architecture to match the reference
implementation. Instead of averaging the subtoken embeddings, we employ max
pooling, as the authors found that this performed better. In addition, we
reimplemented the same localization and repair heads.
Remaining baselines. The remaining baselines employ the same hyperparameters
as specified by Hellendoorn et al. (Hellendoorn et al., 2019). The Transformer
is a 6-layer encoder with absolute position embeddings (512 hidden size, 2048
intermediate size, 8 attention heads). GREAT uses a similar 6-layer
architecture with the addition of an edge bias. The RNN is a 2-layer
bidirectional recurrent neural network with a hidden size of 512.
[1]Roy Beck
1]School of Physics and Astronomy, the center for Nanoscience and
Nanotechnology, and the center of Physics and Chemistry of Living Systems, Tel
Aviv University, Tel Aviv, Israel
2]Materials Department, Biomolecular Sciences and Engineering Program, and
Physics Department, University of California, Santa Barbara, USA
# From isolated polyelectrolytes to star-like assemblies:
The role of sequence heterogeneity on the statistical structure of the
intrinsically disordered Neurofilament-low tail domain
Mathar Kravikass Gil Koren Omar A. Saleh<EMAIL_ADDRESS>[ [
###### Abstract
Intrinsically disordered proteins (IDPs) are a subset of proteins that lack
stable secondary structure. Given their polymeric nature, previous mean-field
approximations have been used to describe the statistical structure of IDPs.
However, the amino-acid sequence heterogeneity and complex intermolecular
interaction network have significantly impeded the ability to get proper
approximations. One such case is the intrinsically disordered tail domain of
Neurofilament low (NFLt), which comprises a 50 residue-long uncharged domain
followed by a 96 residue-long negatively charged domain. Here, we measure two
NFLt variants to identify the impact of the NFLt two main subdomains on its
complex interactions and statistical structure. Using synchrotron small-angle
x-ray scattering, we find that the uncharged domain of the NFLt induces
attractive interactions that cause it to self-assemble into star-like polymer
brushes. On the other hand, when the uncharged domain is truncated, the
remaining charged N-terminal domains remain isolated in solution with typical
polyelectrolyte characteristics. We further discuss how competing long- and
short-ranged interactions within the polymer brushes dominate their ensemble
structure, and, in turn, their implications for previously observed phenomena
in NFL native and diseased states.
###### keywords:
Intrinsically disordered proteins, Small angle X-ray scattering,
Neurofilament, Polymer physics, Polymer Brushes
## Introduction
Intrinsically disordered proteins (IDPs) are a subset of proteins that,
instead of forming a rigid singular structure, fluctuate between different
conformations in their native form [1, 2]. Nonetheless, IDPs serve significant
biological functions and account for about 44% of the human genome [3]. The
lack of fixed structure provides IDPs many advantages in regulatory systems in
which they often play a crucial role in mediating protein interaction [4, 5].
These roles often come into play from intrinsically disordered regions (IDRs)
of folded proteins interacting with other IDRs. For example, in the
Neurofilament proteins, tails emanating from the self–assembled filament
backbone domains bind together and form a network of filaments [6, 7, 8, 9,
10].
The ensemble statistics of IDPs stem from their sequence composition and the
surrounding solution [2]. For example, previous studies showed that IDPs
comprising mostly negatively charged amino acids (polyelectrolytes) are
locally stretched due to electrostatic repulsion between the monomers [11].
Moreover, different properties, such as hydrophobicity, were shown to be
linked with local IDP domain collapse [12]. The complex interactions that
arise from sequence heterogeneity allow IDPs to form specific complexes
without losing their disordered properties [13]. For example, Khatun et al.
recently showed how, under limited conditions, the human amylin protein
self–assembles into fractal structures [14].
As IDPs are disordered chains, polymer theories are prime candidates to relate
the measured structural statistics to known models, which can help link the
sequence composition of the IDP to its conformations [15, 16, 17, 18].
Specifically, polymer scaling theories allow us to derive the statistical
structure of IDPs given sequence–derived parameters, such as charge density
and hydrophobicity [11, 19, 12, 20, 21]. However, due to the heterogeneity of
the IDP primary structure (i.e., the amino acid sequence), some systems showed
contradictions with the behavior theorized by standard heterogeneous polymer
physics [22, 23, 17, 24, 19].
The unique biological properties of IDPs have given rise to numerous attempts
to use them as building blocks for self–assembled structures [25]. For
example, IDPs were proposed as brush–like surface modifiers, due to their
enhanced structural plasticity to environmental conditions [26, 27]. Another
example of an IDP brush system is the Neurofilament (NF) protein system [6,
28, 29], described as interacting bottle–brushes. NF subunit proteins form
mature filaments with protruding disordered C–terminus IDR known as ‘tails.’
NF tails were shown to mediate NF network formation and act as shock
absorbents in high–stress conditions [29]. Moreover, NF aggregations are known
to accumulate alongside other proteins in several neurodegenerative diseases,
such as Alzheimer’s, Parkinson’s, etc. [30].
The NF low disordered tail domain (NFLt) sequence can be divided into two
unique regions: an uncharged region (residues 1–50) starting from its N
terminal and a negatively charged region (residues 51–146). The NFLt can be
described as a polyelectrolyte with a net charge per residue (NCPR) of -0.24.
Furthermore, the statistical structures of segments within the NFLt are
influenced by the amount, type, and dispersion of the charged amino acids within
a segment [22]. Nonetheless, other structural constraints, particularly
long–range contacts, impact the local statistical structures. Additionally,
NFLt was shown to have glassy dynamics in its response to tension [31]. Such
dynamics were associated with multiple weakly interacting domains and
structural heterogeneity.
In this paper, we revisit NFLt as a model system for charged IDPs and focus on
the contribution of its neutral and hydrophobic N–terminal domain. We will
show that increased salt concentration causes NFLt to form star–like brushes
with increased aggregation number ($Z$). We will further quantify the
competition between hydrophobic attraction and electrostatic and steric
repulsion in the formation of the structures of NFLt.
## Results
To study the N–terminal domain contribution to the structure of NFLt, we
designed two variants and measured them at various buffer conditions. The
first construct is the entire 146 residues of the NFLt chain, which we term WT (NCPR = -0.24), and the second isolates the 104 negatively charged residues from the C-terminal of NFLt (NCPR = -0.33), termed $\rm{\Delta}$N42. We expressed the variants in E. coli and purified them to a purity of up to 96% (see Methods).
We assessed the variants in solution using small–angle X–ray scattering
(SAXS), a technique extensively used to characterize the statistical
structures of IDPs [32]. From the raw SAXS data, measured at various
salinities, we can already discern pronounced structural differences between the two
variants (Fig. 1a). Dominantly at the low wave-vector ($q$) region, the WT
variant scattering ($I$) rises with added NaCl salt. Such an increase at low
$q$ implies the presence of high-molecular-mass particles due to aggregation of the WT variant.
In contrast, $\rm{\Delta}$N42 shows a separated Gaussian polymer profile
(Figs. 1a, S1), nearly insensitive to total salinity ($C_{s}=20-520$ mM).
Similarly, the data presented in Kratky format ($q^{2}I$ vs. $q$, Fig. 1b) shows that $\rm{\Delta}$N42 has the signature of a disordered polymer. In contrast, the WT variant, in particular at high salinity, shows a combination of a collapsed domain (the peak below $q=0.25\,\rm{nm}^{-1}$) and a disordered polymeric structure (the scattering rise at higher $q$, Fig. 1b).
Figure 1: SAXS measurements of WT and $\rm{\Delta}$N42 at different salinity
($C_{s}$). a. For increasing $C_{s}$, the WT variant shows increased small
angle scattering, a signature for aggregation. In contrast, $\rm{\Delta}$N42
remains structurally intrinsically disordered as $C_{s}$ varies. Data points are
shifted for clarity. Lines are form-factor fittings, as described in the text.
b. Normalized Kratky plot of the same SAXS measurements. The $\rm{\Delta}$N42
variant remains disordered and unchanged with salinity, while the WT variant
shows a hump at low $q$, typical of a collapsed region. With increasing
$C_{s}$, the hump at the lower $q$ range becomes a sharper peak accompanied by
a scattering rise at the higher $q$ range. Such behavior indicates that the
aggregation coexists with the WT variant’s highly dynamic and disordered
regions. Both variants shown are at the highest measured concentration (Table
S1, S3). WT measurements are in 20 mM Tris pH 8.0 with 0, 150, 250, and 500 mM
added NaCl (from bottom to top). Likewise, for $\rm{\Delta}$N42, measurements
are in 20 mM Tris pH 8.0 with 0 and 150 mM added NaCl (bottom to top).
Being completely disordered, $\rm{\Delta}$N42 lacks a stable structure and can be described using a statistical ensemble of polymeric conformations [33], where:
$I(q)=I_{0}\exp\left\{-\frac{1}{3}(qR_{\rm G})^{2}+0.0479(\nu-0.212)(qR_{\rm G})^{4}\right\}.$ (1)
Here, $I_{0}$ is the scattering at $q=0$, $\nu$ is the Flory scaling exponent, and
$R_{\rm G}$ is the radius of gyration defined by:
$R_{\rm G}=\sqrt{\frac{\gamma(\gamma+1)}{2(\gamma+2\nu)(\gamma+2\nu+1)}}\,bN^{\nu},$ (2)
where $\gamma=1.615$ and $b=0.55\,\rm{nm}$ (see [33]), and the analysis is valid up to $qR_{\rm G}\sim 2$ (Figs. S2, S3). In all $\rm{\Delta}$N42 cases, the scattering profile fits Eq. 1, with $\nu$ ranging between 0.63 and 0.69
depending on the buffer salinity (Table S1). In ‘infinite dilution’ conditions
(zero polymer concentration), we find $\nu$ to decrease monotonically from
0.73 to 0.62 with added salt (Table S2).
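As an illustration of this analysis, a minimal sketch using scipy's curve_fit (the routine named in the Methods) is given below; $\gamma$, $b$, and the Eq. 1 coefficients follow the text, $N=104$ is the $\rm{\Delta}$N42 chain length, and the data arrays are synthetic placeholders rather than measured profiles.

```python
import numpy as np
from scipy.optimize import curve_fit

GAMMA, B, N = 1.615, 0.55, 104      # gamma, b (nm) from [33]; N for DeltaN42

def r_g(nu):
    """Radius of gyration from Eq. 2."""
    pref = GAMMA * (GAMMA + 1) / (2 * (GAMMA + 2 * nu) * (GAMMA + 2 * nu + 1))
    return np.sqrt(pref) * B * N ** nu

def i_ext_guinier(q, i0, nu):
    """Extended Guinier profile, Eq. 1 (valid up to q*R_G ~ 2)."""
    x = q * r_g(nu)
    return i0 * np.exp(-x ** 2 / 3 + 0.0479 * (nu - 0.212) * x ** 4)

# Synthetic stand-in for a measured profile; real data would be restricted
# to 0.7 < q*R_G < 2 as in the Methods.
q = np.linspace(0.08, 0.40, 50)                     # nm^-1
i_q = i_ext_guinier(q, 0.025, 0.65)
popt, _ = curve_fit(i_ext_guinier, q, i_q, p0=(0.02, 0.6))
print("I0 = %.4f, nu = %.3f" % tuple(popt))         # recovers 0.025, 0.65
```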
Given the noticeable aggregation for the WT variant, alternative form factors
were considered to match the scattering profiles (lines in Fig. 1). The
absence of structural motifs at high $q$ values ($q>0.3\,\,\rm{nm}^{-1}$)
indicates a disordered nature for WT at shorter length scales. Conversely, in
the lower $q$ region ($q<0.3\,\,\rm{nm}^{-1}$), the scattering suggests stable structural motifs or larger molecular-weight particles. Such a SAXS profile resembles that of self–assembled decorated spherical micelles [34]. Variations of micelle models are shown to fit the data (Figs. 1, S4-S6). A sufficiently low aggregation number and core size reduce the description of the spherical micelle to that of a ‘star–like’ brush. Alternative attempts to fit the scattering profiles to other form-factor models, including vesicles and lamellae, were unsuccessful.
For the star–like model, the aggregated variants form a small spherical core
of volume $V_{\rm core}$ made out of $n\cdot Z$ monomers (comparison with
different cores described in [35] and in Fig. S4), where $n$ denotes the
peptide length per polypeptide within the core, and $Z$ is the aggregation
number, i.e. the number of polypeptides per ‘star.’ The remainder of the WT
variant then protrudes from the core as the star polymer brush (Figs. 2a,
S4-S6).
Figure 2: a. Schematic of the system’s structure variation with salinity
($C_{s}$). While $\rm{\Delta}$N42 remains disordered and segregated, the WT
variant aggregates to a star–like polymer with a higher aggregation number at
higher $C_{s}$. b–e. Structural parameters for WT (blue symbols) and
$\rm{\Delta}$N42 (black symbols) variants extracted from fitting the SAXS
data. Full and hollow circles represent the spherical and cylindrical core
fitted parameters, respectively. d. In all cases, the brush heights ($h$) are
much larger than the corresponding grafting length ($\rho$), indicative of a
brush regime. e. The structurally intrinsically disordered $\rm{\Delta}$N42 variant compacts at higher $C_{s}$ values and remains more compact than the projected brushes of the WT variant. All values are the extrapolated ‘zero concentration’ fitting parameters (see Fig. S7).
The star–like scattering form factor is described as a combination of four
terms [34]: the self–correlation term of the core $F_{\rm c}$, the self-
correlation term of the tails $F_{\rm t}$, the cross-correlation term of the
core and the tails $S_{\rm ct}$ and the cross-correlation term of the tails
$S_{\rm tt}$:
$F_{\text{total}}(q)=Z^{2}\beta_{\rm c}^{2}F_{\rm c}(q)+Z\beta_{\rm t}^{2}F_{\rm t}(q)+2Z^{2}\beta_{\rm c}\beta_{\rm t}S_{\rm ct}(q)+Z(Z-1)\beta_{\rm t}^{2}S_{\rm tt}(q).$ (3)
Here, $\beta_{\rm c}$ and $\beta_{\rm t}$ are the excess scattering length of
the core and the tails, respectively. From fitting the scattering data, we
extracted the height of the tails $h=2R_{\rm G}$, the aggregation number $Z$,
and the relevant core’s parameters (e.g., core radius $R$ for a spherical
core, cylinder radius $R$ and length $L$ for a cylindrical core [35]),
schematically illustrated in Fig. 2a. All fitting parameters are found in
Table S3.
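To make the model concrete, the sketch below assembles Eq. 3 for a spherical core decorated with Gaussian tails, using the standard term expressions of the Pedersen-type micelle model [34, 35] (hard-sphere amplitude, Debye function, and tails displaced from the core surface by $d\,R_{\rm G}$ with $d\approx 1$). It is an illustrative reimplementation under these assumptions, not the fitting code used for Table S3.

```python
import numpy as np

def phi_sphere(q, R):
    """Scattering amplitude of a homogeneous sphere of radius R (core)."""
    x = q * R
    return 3 * (np.sin(x) - x * np.cos(x)) / x ** 3

def debye(q, rg):
    """Gaussian-chain (Debye) form factor F_t for the tails."""
    x = (q * rg) ** 2
    return 2 * (np.exp(-x) - 1 + x) / x ** 2

def psi(q, rg):
    """Gaussian-chain amplitude entering the cross terms."""
    x = (q * rg) ** 2
    return (1 - np.exp(-x)) / x

def f_total(q, Z, R, rg, beta_c, beta_t, d=1.0):
    """Eq. 3: core, tail, core-tail, and tail-tail contributions."""
    sinc = np.sin(q * (R + d * rg)) / (q * (R + d * rg))
    Fc = phi_sphere(q, R) ** 2
    Ft = debye(q, rg)
    Sct = phi_sphere(q, R) * psi(q, rg) * sinc
    Stt = psi(q, rg) ** 2 * sinc ** 2
    return (Z ** 2 * beta_c ** 2 * Fc + Z * beta_t ** 2 * Ft
            + 2 * Z ** 2 * beta_c * beta_t * Sct
            + Z * (Z - 1) * beta_t ** 2 * Stt)

# Roughly WT-like high-salt values (Tables S3-S4): Z ~ 6, R ~ 0.7 nm, h/2 ~ 9.5 nm.
q = np.linspace(0.02, 3.0, 300)                     # nm^-1
I = f_total(q, Z=6, R=0.7, rg=9.5, beta_c=4.0e3, beta_t=0.02e3)
```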
To avoid misinterpretation and to minimize intermolecular interaction effects,
we present the fitting results at the ‘infinitely diluted regime’ by
extrapolating the relevant parameters measured at various protein
concentrations to their values at zero protein concentration (Fig. S7, Table S4). The
parameters are mostly independent of the concentration unless explicitly
mentioned.
At low salinity (20 mM), the aggregation number for the WT variant corresponds to a dimer ($Z\approx 2$), and the core’s shape is that of a cylinder (with a
radius $R=0.89$ nm and length $L=1.19$ nm). At higher salt conditions (170-520
mM), the form factor fits spherical core aggregates with increasingly higher
$Z$’s (Fig. 2a).
Given the relatively small core volume ($V_{\rm core}\approx 1-2\rm{nm}^{3}$,
Fig. 2c), it is crucial to evaluate the ‘grafting’ distance between
neighboring chains, $\rho$, on the core surface ($S=4\pi R^{2}=Z\rho^{2}$) and
the brush extension, $h$, outside the core. As shown in Fig. 2d, in all cases $h/\rho\gg 1$, indicating a ‘brush regime’ in which neighboring chains repel each other and extend the tails’ height [36].
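As a quick numerical check of this criterion (ours, using approximate zero-concentration values from Table S4 at $C_{s}=270$ mM):

```python
import numpy as np

# S = 4*pi*R^2 = Z*rho^2  =>  rho = sqrt(4*pi*R^2 / Z)
R, Z, h = 0.70, 3.52, 2 * 9.84          # nm, -, nm (approximate, Table S4)
rho = np.sqrt(4 * np.pi * R ** 2 / Z)
print(f"rho = {rho:.2f} nm, h/rho = {h / rho:.1f}")  # h/rho >> 1: brush regime
```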
The repulsion between the grafted tails is further emphasized when comparing $h/2$ for WT to the equivalent $\rm{\Delta}$N42 length-scale ($R_{\rm G}$), showing a significant extension for WT (Fig. 2e). We notice that the WT tails’ length ($h$) increases at low salt (during the transition from a dimer to a trimer), followed by a steady, mild decrease as $C_{s}$, and consequently $Z$, increase. A similar compaction with increasing $C_{s}$ is seen for $\rm{\Delta}$N42 and is expected for a polyelectrolyte due to the reduction in electrostatic repulsion [37]. To better compare the statistical structure of
two variants of disordered regions, we followed the polymeric scaling notation
$\nu$ that quantifies the compactness of the chain. For $\rm{\Delta}$N42, we
extracted $\nu$ from Eqs. 1 and 2 and found a significant decrease in its
value as 50 mM of NaCl is added to the 20 mM Tris buffer (Fig. 3a). The
following monotonic decline is in line with polyelectrolytic models and
electrostatic screening effects [38], shown in a solid red line in Fig. 3a.
Interestingly, previous measurements of segments within the NFLt charged domain were shown to have $\nu$ values similar to those of $\rm{\Delta}$N42. However, the same decline with salinity was not observed (Fig. 3a) [22].
Figure 3: Deduced structural parameters from the SAXS data fitting. a. Flory
exponent ($\nu$) of WT tails and $\rm{\Delta}$N42 variants showing extended
disordered scaling. The red line refers to the theoretical brush model [39],
and the blue line refers to the theoretical polyelectrolyte model [38].
$\rm{\Delta}$N42 shows a decrease in the protein extension due to the decline
in intermolecular electrostatic repulsion (see also Fig. 4). WT shows an
increase in the extension when shifting from a dimer to a trimer, followed by
a slight decline with a further increase in salinity. In gray, average $\nu$
is obtained from measuring separate NFLt segments with an NCPR of -0.3 to -0.6
[22]. b. The core (aggregated) peptide length per polypeptide as a function of
salinity. At high salinity, each polypeptide aggregates via 2–3 amino acids
that form the star–like polymer core. Both panels’ values are the extrapolated
‘zero concentration’ parameters (supplementary Fig. S8).
For the WT variant, the scaling factor ($\nu$) of the ‘star–like polymer’
brushes is extracted from Eq. 2. Here, we use $R_{\rm G}=h/2$, where $h$ is
obtained from Eq. 3. For $C_{s}=20$ mM, we find that $\nu$ is of a similar scale as for $\rm{\Delta}$N42. This similarity can be attributed to the nature of the
dimer, where the intermolecular electrostatic interactions dominate the
expansion of each of the two tails. As $C_{s}$ increases by $150$ mM, $\nu$
exhibits a considerable increase, presumably due to neighboring tail
repulsion. Above $C_{s}=170$ mM, $\nu$ shows a weak decrease. We attribute
this weak decline to the salt–brush regime of polyelectrolyte brushes [39]
shown in solid blue in Fig. 3a. In this regime, $h\propto C_{s}^{-1/3}$, and consequently $\nu\propto-\frac{1}{3}\log(C_{s})$.
We note that the cores of the star–like polymers are relatively small and that
each polypeptide aggregates through only a few, most likely hydrophobic, amino
acids. From the tabulated amino-acid partial volume, $\langle\phi_{aa}\rangle$
[40], we evaluated the average number of amino-acids per polypeptide inside
the core $n=V_{\rm core}/(\langle\phi_{aa}\rangle\cdot Z)$.
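For example (a back-of-the-envelope evaluation; the average partial volume $\langle\phi_{aa}\rangle\approx 0.13\,\rm{nm}^{3}$ is an assumed typical value from the tabulated data of [40]):

```python
phi_aa = 0.13                 # nm^3 per residue (assumed average from [40])
V_core, Z = 1.10, 2.83        # zero-concentration values at 170 mM (Table S4)
n = V_core / (phi_aa * Z)
print(f"n = {n:.1f} residues per polypeptide in the core")  # ~ 3
```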
In Fig. 3b, we indeed see that the most significant change occurs in the low salt regime, where $n$ drops from an average of 7 to 3 amino acids ($C_{s}=20,170$ mM, respectively). We presume this behavior to be due to a salting–in effect, as is known to occur within globular proteins [41] and was recently suggested to impact IDPs [42].
decrease in $n$, albeit much weaker, which results in a final average $n$ of
about two as the salinity reaches $C_{s}=520$ mM.
Last, in Fig. 4, we quantify the intermolecular interactions by evaluating the
second virial coefficient, $A_{2}$, using a Zimm analysis [43] (Table S5).
Here, $A_{2}$ describes the deviation of the statistical ensemble from an
ideal gas. In agreement with our previous data, we find that the
inter–molecular interactions of $\rm{\Delta}$N42 change from repulsive
($A_{2}>0$) to weakly attractive ($A_{2}\leq 0$) as the salinity increases. In
contrast, for WT, $A_{2}$ changes from a nearly neutral state of
intermolecular interactions (i.e., ideal gas regime) to mildly attractive
($A_{2}<0$). These findings are reflected in the dependence of the variants’ Flory exponent $\nu$ on concentration. While at the lowest salinity $\rm{\Delta}$N42 is shown to expand as the protein concentration decreases, for higher salinities and for the WT measurements $\nu$ remains largely unchanged (Fig. S8a).
Combining our results for both variants, we find a clear example of long–range electrostatic interactions tuning the statistical structure of IDPs. Without the uncharged N-terminal domain, the NFLt exhibited a significant change as the electrostatic interactions were screened, causing it to condense further. In contrast, the presence of the uncharged domain induced aggregation of the proteins, bringing the tails much closer to each other. The increased proximity was reflected in a significantly larger expansion compared to the truncated variant, and in a much weaker contraction with salinity.
Figure 4: The osmotic second virial coefficient $A_{2}$ as a function of salinity ($C_{s}$) for the two variants. $\rm{\Delta}$N42 intermolecular interactions
transition from repulsive to attractive as $C_{s}$ increases. WT changes from
a nearly neutral state of intermolecular interactions to attractive. Inset: A
demonstration (WT variant, 20 mM Tris and 500 mM NaCl pH 8.0) for the Zimm
analysis used to extract $A_{2}$ from SAXS data measured at various protein
concentrations ($C$). Values shown in the graph are in mg/ml units. The dashed
lines show the extrapolation from the measured data (colored lines) to the
fitted $q\rightarrow 0$ and $C\rightarrow 0$ yellow lines, where $\alpha=0.01$
is an arbitrary constant used in this analysis.
## Discussion and Conclusions
We investigated the effects of structural heterogeneity on the interactions of
NFLt, an IDP model system. For NFLt, the N-terminal region consisting of the
first $\sim 50$ residues is hydrophobic and charge neutral, while the
remaining chain is highly charged. We found that the sequence heterogeneity
differentiates between the structures of the entire WT NFLt and a variant
lacking the N-terminal domain. In particular, the WT variant self-assembles
into star-like structures while the $\rm{\Delta}$N42 one remains isolated in
all measured cases.
Since $\rm{\Delta}$N42 can be regarded as a charged polymer, weakly
attractive interactions take center stage as the electrostatic repulsion
diminishes with charge screening (Fig. 4). These interactions could be
attributed to monomer-monomer attractions that arise from the sequence
heterogeneity of the IDP, such as weak hydrophobic attraction from scattered
hydropathic sites [22, 29, 28, 44, 45, 46].
For the WT variant, the intermolecular interactions started from a near-
neutral state and transitioned to weakly attractive. However, as the WT
measurements describe self-assembling complexes, the interpretation of these results differs from that of $\rm{\Delta}$N42. As such, we interpret the
intermolecular interactions as the ‘aggregation propensity,’ i.e., the ability of the protein complex to grow. The aggregation propensity grows as the attraction between the complex and the other polypeptides in the solution increases. This behavior can be observed when examining the responsiveness of
the aggregation number $Z$ to protein concentration $C$ (Fig. S7). In the
lowest measured screening, the dependence of $Z$ on protein concentration was minimal. As the screening effects increase, this dependence becomes more substantial. Such characterization is also found in folded proteins, where
inter-molecular interactions were shown to indicate aggregation propensity
[47].
In our previous study [22], Flory exponents ($\nu$) of shorter segments from
the same NFLt were measured independently and in the context of the whole NFLt
using SAXS and time-resolved Förster resonance energy transfer (trFRET).
There, regardless of the peptide sequence, in the context of the entire NFLt,
the segments’ structural statistics were more expanded (i.e., with larger
$\nu$ values) than when measured independently. Similarly, these short
segments measured with SAXS have smaller $\nu$ values (i.e., with a compacted
statistical structure) than those measured here for $\rm{\Delta}$N42 in all
salt conditions (Fig. 3a, grey symbols).
The expansion of segments in the context of a longer chain corroborates that
long-range contacts contribute to the overall disordered ensemble [22].
Interestingly, at $C_{s}=520$ mM salinity, we found similar $\nu$ values of
the $\rm{\Delta}$N42 and the previous short segment measurements, indicating a
comparable expansion. We suggest that at higher salinities, the significance
of electrostatic long-range contacts diminishes, aligning the expansion
‘scaling laws’ regardless of the chain length. Importantly, comparisons of our $\rm{\Delta}$N42 results (rather than the WT ones) with the previous segment measurements are more appropriate, as the chains did not aggregate in those cases.
Compared to $\rm{\Delta}$N42, WT exhibits a mild contraction with salt,
resembling the behavior of the ‘salt–brush’ regime observed in polyelectrolyte
brushes, as demonstrated in Fig. 3. Similar salt–brush behavior was previously
observed in Neurofilament high tail domain brushes grafted onto a substrate
[26], and in a recent polyelectrolytic brush scaling theory [48]. In the salt-brush regime, Pincus showed that brush mechanics resemble those of neutral brushes, determined by steric inter-chain interactions [49]. In this interpretation,
the effective excluded volume per monomer enlarges and is proportional to $1/\kappa_{\rm s}^{2}$, where $\kappa_{\rm s}^{-1}$ is the Debye screening length attributed to the added salt. Consequently, we suggest that the heightened charge
screening in the WT solution allows steric interactions between brushes to
play a more significant role in determining the brush ensemble. Additionally,
we deduce that the increased prevalence of steric repulsion counteracts the
attractive forces responsible for aggregation, thereby preventing brush
collapse.
The NFLt contraction aligns with previous studies of native NFL hydrogel
networks [29, 28]. At high osmotic pressure, the NFL network showed weak
responsiveness to salinity higher than $C_{s}=100$ mM, in agreement with
theory [48]. With the observed salt–brush behavior for WT, we suggest that
weak salt response in NFL hydrogels coincides with the increase in steric
repulsion shown for the star-like structures (Fig. 3a, blue line).
Additionally, our measurements show that the hydrophobic N-terminal regime of
the NFLt domain aggregates. This result is consistent with the findings of
Morgan et al. [31], where single-molecule pulling experiments were performed
on WT NFLt, and slow aging effects were observed, likely due to collapse (and
potential aggregation) of the neutral domain. Indeed, follow–up studies by
Truong et al. [50] used single-molecule stretching to show that added
denaturant led to a swelling of the chain (increased $\nu$), demonstrating
that the WT chain has hydrophobic aggregation that can be disrupted by the
denaturant. These observations suggest that at higher salt, the loss of
repulsion may lead to attractive hydrophobic interactions growing more
prominent in the NFL network. However, the steric repulsion from the remaining
NFL tail may shield such an unwanted effect. Nonetheless, such effects may
grow more prominent as the native filament assembly is disrupted.
In summary, we showed how the sequence composition of the NFLt IDP caused
structural deviation from a disordered polyelectrolyte to a self–assembled
star–like polymer brush. Together with the self–regulatory properties of the
brushes, such behavior can be exploited to design structures that can resist
specific environmental conditions. Additionally, our results showed possible
implications on NFL aggregates that could shed light on the underlying
correlations between the complex structure and the conditions driving it.
While IDPs resemble polymers in many aspects, as we showed here, it is critical to assess their sequence to determine where and how to apply the appropriate theoretical arguments describing their statistical properties and structure.
## Methods
Protein purification Protein purification followed Koren et al. [22]. Variant $\rm{\Delta}$N42 included two cysteine residues at the C- and N-termini.
After purification, $\rm{\Delta}$N42 variants were first reduced by 20 mM
2-Mercaptoethanol. Next, 2-Mercaptoethanol was dialysed out with 1 L of 50 mM
HEPES at pH 7.2. To block the cysteine sulfhydryl group, we reacted
$\rm{\Delta}$N42 variants with 2-Iodoacetamide at a molar ratio of 1:20. During the reaction, the variants’ concentrations were $\sim$2 mg/ml. The reaction
solution was kept under dark and slow stirring for 5 hr and stopped by adding
50 mM 2-Mercaptoethanol followed by overnight dialysis against 1 L of 20 mM
Tris at pH 8.0 with 0.1% 2-Mercaptoethanol. Final purity was $>$95% as
determined by SDS-PAGE (Fig. S9).
SAXS measurement and analysis Protein samples were dialyzed overnight in the
appropriate solution and measured with a Nanodrop 2000 spectrophotometer
(Thermo Scientific) for concentration determination. Buffers were prepared
with 1 mM of TCEP to reduce radiation damage and 0.2% of sodium azide to prevent sample infection. The samples were prepared at a final concentration of 2 mg/ml and measured in a series of 4 dilutions. Preliminary measurements were performed at Tel-Aviv University with a Xenocs GeniX Low Divergence CuK$\alpha$
radiation source setup with scatterless slits [51] and a Pilatus 300K
detector. All samples were measured at three synchrotron facilities: beamline
B21, Diamond Light Source, Didcot, UK [52], beamline P12, EMBL, DESY, Hamburg,
Germany [53], and beamline BM 29 ESRF, Grenoble, France [54]. Measurements at
ESRF were done using a robotic sample changer [55].
Integrated SAXS data were obtained from the beamline pipelines by 2D integration using the ”pyFAI” Python library [56]. Extended Guinier analyses
for the $\rm{\Delta}$N42 variant were done with the ”curve_fit” function from
the ”Scipy” Python library [57]. To extract $R_{g}$ and $\nu$, extended
Guinier analysis was conducted for $0.7<qR_{g}<2$. Error calculation was done
from the covariance of the fitting.
Model fittings for the WT variant were done using the ”lmfit” Python library
[58] using the model described in [34, 35]. Due to the complexity of the
model, cylindrical core fittings were done by binning the data in 100
logarithmic bins to reduce computation time. Within the same model, core parameters (cylinder radius $R$ and cylinder length $L$) were held constant to offset fitting errors. Initial values of $R$ and $L$ were calculated from the
highest measured concentration. Physical boundary conditions were imposed on
the fitting, and scattering length (SL) values were set to be unchanged by the
fitting process. SL values of both the core and the tail domains were
determined by tabulated values of amino acid SLD in 100% H2O [59] (Table S3).
Fitting parameter error evaluation was done by finding the covariance of the returned fitting parameters. Error calculation of the volume was done using:
$\frac{dV}{V}=\sqrt{3\left(\frac{dR}{R}\right)^{2}}$. In addition, $\nu$
values of WT were found by a recursive search of the corresponding tail height
$h/2$ over Eq. 2. Errors of $\nu$ were then found by assuming a simple case of
$R_{g}=bN^{\nu}$, from which:
$d\nu\sim\frac{\ln{(1+dR/R)}}{\ln{N}}\sim(\ln{N})^{-1}\frac{dR}{R}.$
Zimm analysis Zimm analysis was performed as described in [43]. Data
normalization was done by first determining $I_{0}$ by fitting a linear curve
over the Guinier plot ($\ln{I(q)}$ vs $q^{2}$). Normalized $1/I(q)$ linear
fitting was done starting with the earliest possible data point until a
deviation from the linear behavior occurs. Data points were then binned for
visual clarity without impacting the result.
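A minimal sketch of this construction is given below (synthetic arrays; the contrast and molar-mass prefactors are folded into the normalization, so the returned slope is only proportional to $A_{2}$, with its sign carrying the repulsive/attractive information):

```python
import numpy as np

def zimm_a2_slope(q, I_curves, concentrations, qmax=0.4):
    """Extrapolate 1/I(q) to q = 0 per concentration, then fit C/I(0) vs C.
    The slope is proportional to the second virial coefficient A2."""
    mask = q < qmax
    inv_i0 = np.array([np.polyfit(q[mask] ** 2, 1.0 / I[mask], 1)[1]
                       for I in I_curves])
    slope, _ = np.polyfit(concentrations, concentrations * inv_i0, 1)
    return slope / 2.0

# Synthetic example with built-in repulsion (positive A2-like slope).
q = np.linspace(0.05, 1.0, 100)                       # nm^-1
conc = np.array([0.5, 1.0, 1.5, 2.0])                 # mg/ml
I_curves = [c / (1.0 + (q * 4.5) ** 2 / 3 + 0.1 * c) for c in conc]
print(zimm_a2_slope(q, I_curves, conc))               # > 0: repulsive
```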
Brush model fitting The brush height model described in [39] was fitted with a prefactor $c=0.33$ to match the data. The resulting heights were converted to $\nu$ via $h=bN^{\nu}$, where $b=0.38$ nm and $N=146$. To accommodate the change in grafting density, a linear curve was fitted to the grafting density’s change with salinity and used to obtain a continuous plot.
Polyelectrolyte fitting The fitting model was as described in [38], with a prefactor $c=1.24$ to match the data.
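For illustration, the conversion behind the model curves in Fig. 3a can be sketched as follows (the salt-brush amplitude is arbitrary; $b$ and $N$ are the stated values):

```python
import numpy as np

def height_to_nu(h, b=0.38, N=146):
    """Invert h = b * N**nu for nu."""
    return np.log(h / b) / np.log(N)

Cs = np.array([170.0, 270.0, 370.0, 520.0])       # mM
h = 21.0 * (Cs / 170.0) ** (-1.0 / 3.0)           # nm; salt-brush h ~ Cs^(-1/3)
print(np.round(height_to_nu(h), 3))               # slowly decreasing nu
```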
## Acknowledgments
R.B. and O.A.S. dedicate this article to Fyl Pincus, for his continuous
leadership and friendship over the years. His past works on charged polymer
brushes, and polymers’ scaling laws, inspired much research in the field,
including this work. The synchrotron SAXS data were collected at beamline P12,
operated by EMBL Hamburg at the PETRA III storage ring (DESY, Hamburg,
Germany), at beamline B21, operated by Diamond Light Source (Didcot, UK), and
at beamline BM29, operated by ESRF (Grenoble, France). We would like to thank
Cy M. Jefferies (DESY), Katsuaki Inoue (DLS), and Mark Tully (ESRF) for their
assistance in using the beamlines. This work has been supported by the NSF
(MCB-2113302), the NSF–BSF program (2020787), the Israel Science Foundation
(1454/20), and by iNEXT-Discovery (15410), funded by the Horizon 2020 program
of the European Commission. We also acknowledge the fruitful discussion and
help from Yacov Kantor, Uri Raviv, and Sagi Meir.
### Statements and Declarations
Conflicting interests The authors claim no conflicting interests.
Data availability The raw SAXS data is available in the Small–Angle Scattering
Biological Data Bank (SASBDB) with the identifier: XXXX.
Author contribution M.K., G.K., and R.B. designed the project. M.K. conducted
experiments and analysis with G.K.’s and O.A.S.’s assistance. M.K., G.K.,
R.B., and O.A.S. wrote the paper.
## References
* [1] Holehouse, A. S. & Kragelund, B. B. The molecular basis for cellular function of intrinsically disordered protein regions. _Nature Reviews Molecular Cell Biology_ 1–25 (2023).
* [2] Chowdhury, A., Nettels, D. & Schuler, B. Interaction dynamics of intrinsically disordered proteins from single-molecule spectroscopy. _Annual Review of Biophysics_ 52, 433–462 (2023).
* [3] Xue, B., Dunker, A. K. & Uversky, V. N. Orderly order in protein intrinsic disorder distribution: disorder in 3500 proteomes from viruses and the three domains of life. _Journal of Biomolecular Structure and Dynamics_ 30, 137–149 (2012).
* [4] Uversky, V. N. Intrinsic disorder-based protein interactions and their modulators. _Current pharmaceutical design_ 19, 4191–4213 (2013).
* [5] Ehm, T. _et al._ Self-assembly of tunable intrinsically disordered peptide amphiphiles. _Biomacromolecules_ 24, 98–108 (2022).
* [6] Laser-Azogui, A., Kornreich, M., Malka-Gibor, E. & Beck, R. Neurofilament assembly and function during neuronal development. _Current opinion in cell biology_ 32, 92–101 (2015).
* [7] Chernyatina, A. A., Nicolet, S., Aebi, U., Herrmann, H. & Strelkov, S. V. Atomic structure of the vimentin central $\alpha$-helical domain and its implications for intermediate filament assembly. _Proceedings of the National Academy of Sciences_ 109, 13620–13625 (2012).
* [8] Malka-Gibor, E. _et al._ Phosphorylation-induced mechanical regulation of intrinsically disordered neurofilament proteins. _Biophysical journal_ 112, 892–900 (2017).
* [9] Hirokawa, N., Glicksman, M. A. & Willard, M. B. Organization of mammalian neurofilament polypeptides within the neuronal cytoskeleton. _The Journal of cell biology_ 98, 1523–1536 (1984).
* [10] Safinya, C. R., Deek, J., Beck, R., Jones, J. B. & Li, Y. Assembly of biological nanostructures: isotropic and liquid crystalline phases of neurofilament hydrogels. _Annu. Rev. Condens. Matter Phys._ 6, 113–136 (2015).
* [11] Müller-Späth, S. _et al._ Charge interactions can dominate the dimensions of intrinsically disordered proteins. _Proceedings of the National Academy of Sciences_ 107, 14609–14614 (2010).
* [12] Milles, S. & Lemke, E. A. Single molecule study of the intrinsically disordered fg-repeat nucleoporin 153. _Biophysical Journal_ 102, 10a (2012).
* [13] Sekiyama, N., Kobayashi, R. & Kodama, T. S. Toward a high-resolution mechanism of intrinsically disordered protein self-assembly. _The Journal of Biochemistry_ 174, 391–398 (2023).
* [14] Khatun, S. _et al._ Fractal self-assembly and aggregation of human amylin. _Soft Matter_ 16, 3143–3153 (2020).
* [15] Shea, J.-E., Best, R. B. & Mittal, J. Physics-based computational and theoretical approaches to intrinsically disordered proteins. _Current opinion in structural biology_ 67, 219–225 (2021).
* [16] Van Der Lee, R. _et al._ Classification of intrinsically disordered regions and proteins. _Chemical reviews_ 114, 6589–6631 (2014).
* [17] Baul, U., Chakraborty, D., Mugnai, M. L., Straub, J. E. & Thirumalai, D. Sequence effects on size, shape, and structural heterogeneity in intrinsically disordered proteins. _The Journal of Physical Chemistry B_ 123, 3462–3474 (2019).
* [18] Das, R. K. & Pappu, R. V. Conformations of intrinsically disordered proteins are influenced by linear sequence distributions of oppositely charged residues. _Proceedings of the National Academy of Sciences_ 110, 13392–13397 (2013).
* [19] Hofmann, H. _et al._ Polymer scaling laws of unfolded and intrinsically disordered proteins quantified with single-molecule spectroscopy. _Proceedings of the National Academy of Sciences_ 109, 16155–16160 (2012).
* [20] Zheng, W., Dignon, G., Brown, M., Kim, Y. C. & Mittal, J. Hydropathy patterning complements charge patterning to describe conformational preferences of disordered proteins. _The journal of physical chemistry letters_ 11, 3408–3415 (2020).
* [21] Maltseva, D. _et al._ Fibril formation and ordering of disordered fus lc driven by hydrophobic interactions. _Nature Chemistry_ 1–9 (2023).
* [22] Koren, G. _et al._ Intramolecular structural heterogeneity altered by long-range contacts in an intrinsically disordered protein. _Proceedings of the National Academy of Sciences_ 120, e2220180120 (2023).
* [23] Riback, J. A. _et al._ Innovative scattering analysis shows that hydrophobic disordered proteins are expanded in water. _Science_ 358, 238–241 (2017).
* [24] Zeng, X., Ruff, K. M. & Pappu, R. V. Competing interactions give rise to two-state behavior and switch-like transitions in charge-rich intrinsically disordered proteins. _Proceedings of the National Academy of Sciences_ 119, e2200559119 (2022).
* [25] Argudo, P. G. & Giner-Casares, J. J. Folding and self-assembly of short intrinsically disordered peptides and protein regions. _Nanoscale Advances_ 3, 1789–1812 (2021).
* [26] Srinivasan, N., Bhagawati, M., Ananthanarayanan, B. & Kumar, S. Stimuli-sensitive intrinsically disordered protein brushes. _Nature communications_ 5, 5145 (2014).
* [27] Pregent, S. _et al._ Probing the interactions of intrinsically disordered proteins using nanoparticle tags. _Nano letters_ 15, 3080–3087 (2015).
* [28] Beck, R., Deek, J., Jones, J. B. & Safinya, C. R. Gel-expanded to gel-condensed transition in neurofilament networks revealed by direct force measurements. _Nature materials_ 9, 40–46 (2010).
* [29] Kornreich, M., Malka-Gibor, E., Zuker, B., Laser-Azogui, A. & Beck, R. Neurofilaments function as shock absorbers: compression response arising from disordered proteins. _Physical review letters_ 117, 148101 (2016).
* [30] Didonna, A. & Opal, P. The role of neurofilament aggregation in neurodegeneration: lessons from rare inherited neurological disorders. _Molecular neurodegeneration_ 14, 1–10 (2019).
* [31] Morgan, I. L., Avinery, R., Rahamim, G., Beck, R. & Saleh, O. A. Glassy dynamics and memory effects in an intrinsically disordered protein construct. _Physical Review Letters_ 125, 058001 (2020).
* [32] Tria, G., Mertens, H. D., Kachala, M. & Svergun, D. I. Advanced ensemble modelling of flexible macromolecules using x-ray solution scattering. _IUCrJ_ 2, 207–217 (2015).
* [33] Zheng, W. & Best, R. B. An extended guinier analysis for intrinsically disordered proteins. _Journal of molecular biology_ 430, 2540–2553 (2018).
* [34] Pedersen, J. S. & Svaneborg, C. Scattering from block copolymer micelles. _Current opinion in colloid & interface science_ 7, 158–166 (2002).
* [35] Pedersen, J. S. Form factors of block copolymer micelles with spherical, ellipsoidal and cylindrical cores. _Journal of Applied Crystallography_ 33, 637–640 (2000).
* [36] Chen, W.-L., Cordero, R., Tran, H. & Ober, C. K. 50th anniversary perspective: Polymer brushes: Novel surfaces for future materials. _Macromolecules_ 50, 4089–4113 (2017).
* [37] Wang, C.-H., Luo, M.-B., Xu, X., Wang, C. & Sun, L.-Z. Effects of salt concentration on the polyelectrolyte translocation through a cylinder nanopore. _European Polymer Journal_ 121, 109332 (2019).
* [38] Ha, B.-Y. & Thirumalai, D. Conformations of a polyelectrolyte chain. _Physical Review A_ 46, R3012 (1992).
* [39] Kumar, N. A. & Seidel, C. Polyelectrolyte brushes with added salt. _Macromolecules_ 38, 9341–9350 (2005).
* [40] Zamyatnin, A. Protein volume in solution. _Progress in biophysics and molecular biology_ 24, 107–123 (1972).
* [41] Okur, H. I. _et al._ Beyond the hofmeister series: Ion-specific effects on proteins and their biological functions. _The Journal of Physical Chemistry B_ 121, 1997–2014 (2017).
* [42] Wohl, S., Jakubowski, M. & Zheng, W. Salt-dependent conformational changes of intrinsically disordered proteins. _The Journal of Physical Chemistry Letters_ 12, 6684–6691 (2021).
* [43] Zimm, B. H. The scattering of light and the radial distribution function of high polymer solutions. _The Journal of chemical physics_ 16, 1093–1099 (1948).
* [44] Uversky, V. N. _et al._ Natively unfolded human prothymosin $\alpha$ adopts partially folded collapsed conformation at acidic ph. _Biochemistry_ 38, 15009–15016 (1999).
* [45] Möglich, A., Joder, K. & Kiefhaber, T. End-to-end distance distributions and intrachain diffusion constants in unfolded polypeptide chains indicate intramolecular hydrogen bond formation. _Proceedings of the National Academy of Sciences_ 103, 12394–12399 (2006).
* [46] Pappu, R. V., Srinivasan, R. & Rose, G. D. The flory isolated-pair hypothesis is not valid for polypeptide chains: implications for protein folding. _Proceedings of the National Academy of Sciences_ 97, 12565–12570 (2000).
* [47] Quigley, A. & Williams, D. The second virial coefficient as a predictor of protein aggregation propensity: a self-interaction chromatography study. _European Journal of Pharmaceutics and Biopharmaceutics_ 96, 282–290 (2015).
* [48] Zhulina, E. B. & Borisov, O. V. Cylindrical brushes with ionized side chains: Scaling theory revisited. _Soft Matter_ (2023).
* [49] Pincus, P. Colloid stabilization with grafted polyelectrolytes. _Macromolecules_ 24, 2912–2919 (1991).
* [50] Truong, H. P. _et al._ Pincus blob elasticity in an intrinsically disordered protein. _The European Physical Journal E_ 46, 100 (2023).
* [51] Li, Y., Beck, R., Huang, T., Choi, M. C. & Divinagracia, M. Scatterless hybrid metal–single-crystal slit for small-angle x-ray scattering and high-resolution x-ray diffraction. _Journal of Applied Crystallography_ 41, 1134–1139 (2008).
* [52] Cowieson, N. P. _et al._ Beamline b21: high-throughput small-angle x-ray scattering at diamond light source. _Journal of Synchrotron Radiation_ 27, 1438–1446 (2020).
* [53] Blanchet, C. E. _et al._ Versatile sample environments and automation for biological solution x-ray scattering experiments at the p12 beamline (petra iii, desy). _Journal of applied crystallography_ 48, 431–443 (2015).
* [54] Pernot, P. _et al._ Upgraded esrf bm29 beamline for saxs on macromolecules in solution. _Journal of synchrotron radiation_ 20, 660–664 (2013).
* [55] Round, A. _et al._ Biosaxs sample changer: a robotic sample changer for rapid and reliable high-throughput x-ray solution scattering experiments. _Acta Crystallographica Section D: Biological Crystallography_ 71, 67–75 (2015).
* [56] Kieffer, J., Valls, V., Blanc, N. & Hennig, C. New tools for calibrating diffraction setups. _Journal of synchrotron radiation_ 27, 558–566 (2020).
* [57] Virtanen, P. _et al._ Scipy 1.0: fundamental algorithms for scientific computing in python. _Nature methods_ 17, 261–272 (2020).
* [58] Newville, M. _et al._ Lmfit: Non-linear least-square minimization and curve-fitting for python. _Astrophysics Source Code Library_ ascl–1606 (2016).
* [59] Jacrot, B. The study of biological structures by neutron scattering from solution. _Reports on progress in physics_ 39, 911 (1976).
## Supplementary Information
From isolated polyelectrolyte to star-like assemblies:
the role of sequence heterogeneity on the statistical structure of the
intrinsically disordered Neurofilament-low tail domain
Mathar Kravikass, Gil Koren, Omar Saleh, Roy Beck
Corresponding<EMAIL_ADDRESS>
Contents:
Table S1-S5,
Figure S1-S9
Figure S1: $\rm{\Delta}$N42 SAXS measurements with Gaussian form factor fittings for all salinity concentrations $C_{s}$. The $r_{G}$ used for the Gaussian form factor is as obtained by the extended Guinier analysis (see Table S1).
Figure S2: $\rm{\Delta}$N42 SAXS measurement with the corresponding extended Guinier curve. Bottom: deviation from fit, $\sigma_{fit}=(Y_{data}-Y_{fit})/\sigma_{data}$. The dashed line marks the maximum analysis point $qr_{G}=2$, beyond which deviation starts. Displayed data: $20$ mM Tris pH $8.0$ at $1.1$ mg/ml.
Figure S3: $\rm{\Delta}$N42 SAXS measurements with corresponding extended Guinier curves. Dashed lines mark the maximum analysis point $qr_{G}=2$, beyond which deviation starts. Protein concentrations were offset for clarity, with the lowest (blue) being of the highest concentration.
$C_{s}$ (mM) | C (mg/ml) | $r_{G}$ (nm) | $\nu$ | $I_{0}$ (cm-1)
---|---|---|---|---
20 | 1.1 | 4.23 $\pm$ 0.05 | 0.642 $\pm$ 0.002 | 0.0228
20 | 0.8 | 4.56 $\pm$ 0.07 | 0.660 $\pm$ 0.66 | 0.0254
20 | 0.6 | 5.11 $\pm$ 0.13 | 0.689 $\pm$ 0.007 | 0.027
70 | 1 | 4.41 $\pm$ 0.12 | 0.652 $\pm$ 0.007 | 0.0259
70 | 0.5 | 4.53 $\pm$ 0.22 | 0.659 $\pm$ 0.012 | 0.0257
70 | 0.3 | 4.99 $\pm$ 0.46 | 0.683 $\pm$ 0.023 | 0.032
170 | 1.5 | 4.51 $\pm$ 0.05 | 0.657 $\pm$ 0.003 | 0.19
170 | 0.8 | 4.59 $\pm$ 0.11 | 0.662 $\pm$ 0.006 | 0.18
170 | 0.3 | 4.56 $\pm$ 0.27 | 0.661 $\pm$ 0.015 | 0.18
270 | 1 | 4.43 $\pm$ 0.06 | 0.653 $\pm$ 0.003 | 0.026
270 | 0.5 | 4.45 $\pm$ 0.1 | 0.654 $\pm$ 0.006 | 0.026
270 | 0.3 | 4.64 $\pm$ 0.16 | 0.664 $\pm$ 0.009 | 0.028
520 | 1.5 | 4.14 $\pm$ 0.02 | 0.636 $\pm$ 0.001 | 0.026
520 | 0.78 | 4.00 $\pm$ 0.08 | 0.628 $\pm$ 0.005 | 0.023
520 | 0.38 | 4.05 $\pm$ 0.35 | 0.630 $\pm$ 0.02 | 0.023
Table S1: $\rm{\Delta}$N42 extended Guinier analysis data. Analysis parameters (radius of gyration $r_{G}$, scaling exponent $\nu$, and scattering intensity at $q=0$ ($I_{0}$)) obtained for different salt concentrations ($C_{s}$) and protein concentrations ($C$).
$C_{s}$ (mM) | $r_{G}$ (nm) | $\nu$
---|---|---
20 | 5.76 $\pm$ 0.31 | 0.729 $\pm$ 0.015
70 | 4.84 $\pm$ 0.27 | 0.677 $\pm$ 0.015
170 | 4.71 $\pm$ 0.04 | 0.669 $\pm$ 0.003
270 | 4.61 $\pm$ 0.16 | 0.663 $\pm$ 0.009
520 | 3.88 $\pm$ 0.04 | 0.620 $\pm$ 0.003
Table S2: Zero concentration extended Guinier analysis data. Analysis parameters (radius of gyration $r_{G}$ and scaling exponent $\nu$) were extrapolated to zero protein concentration at various salt concentrations ($C_{s}$).
Figure S4: SAXS measurements of WT and its fittings to different form factors. Both form factors are of the same model but use a different core: spherical or ellipsoidal. The spherical core fitting yields a core radius of $R=0.66\pm 0.016$ nm, and the ellipsoidal core yields a core radius of $R=1.335\pm 0.23$ nm and a secondary radius of $\epsilon R$, where $\epsilon=0.153\pm 0.08$. Both fittings yield close values of the aggregation number Z ($3.046\pm 0.04$ for spherical and $3.562\pm 0.07$ for ellipsoidal) and tail height $h/2$ ($9.838\pm 0.04$ nm for spherical and $9.584\pm 0.12$ nm for ellipsoidal). Below: fitting error $\sigma_{fit}=(Y_{fit}-Y_{data})/\sigma_{data}$. Both curves show similar error profiles. The spherical model was chosen to describe the data due to its simplicity. Displayed data: WT in $20$ mM Tris pH=8.0 and $170$ mM NaCl at a concentration of $1.3$ mg/ml.
Figure S5: SAXS measurements and spherical form factor fittings for all salinity concentrations ($C_{s}$). The $C_{s}=20$ mM data is fit to a cylindrical core. Dashed lines represent the Gaussian form factor of the structure tails. Protein concentrations were offset for clarity, with the lowest (blue) being the highest concentration.
Figure S6: WT SAXS measurement with cylindrical fitting. Measurements at the highest concentration of $C=2.68$ mg/ml, in a $20$ mM Tris buffer at pH=$8.0$. To alleviate fitting inconsistencies, subsequent fittings of measurements with lower protein concentrations in the same buffer were done using the obtained core parameters: core radius $R=0.89\pm 0.03$ nm and core length $L=1.19\pm 0.09$ nm.
$C_{s}$ | $C$ | $h/2$ | $\nu$ | $Z$ | $n$ | $R$ | $L$ | $V$ | $\beta_{t}$ | $\beta_{c}$
---|---|---|---|---|---|---|---|---|---|---
(mM) | (mg/ml) | (nm) | | | | (nm) | (nm) | (nm${}^{3}$) | ($10^{3}$ nm) | ($10^{3}$ nm)
20 | 2.68 | 9.16±0.15 | 0.786±0.0033 | 1.60±0.03 | 10.11±0.711 | 0.89±0.028 | 1.19±0.09 | 2.26±0.027 | 0.227 | 3.826
20 | 1.8 | 8.20±0.13 | 0.758±0.0032 | 1.83±0.04 | 8.52±0.059 | 0.89 | 1.19 | 2.26 | 0.195 | 3.558
20 | 1 | 8.57±0.17 | 0.768±0.0041 | 1.87±0.05 | 8.12±0.060 | 0.89 | 1.19 | 2.26 | 0.195 | 3.558
20 | 0.5 | 8.27±0.16 | 0.759±0.0040 | 1.91±0.05 | 7.78±0.057 | 0.89 | 1.19 | 2.07±0.006 | 0.17 | 3.881
170 | 1.3 | 9.96±0.03 | 0.796±0.0006 | 3.34±0.02 | 2.52±0.009 | 0.66±0.005 | X | 1.18±0.007 | 0.039 | 4.014
170 | 0.73 | 10.11±0.05 | 0.799±0.0009 | 3.27±0.03 | 2.32±0.014 | 0.63±0.007 | X | 1.06±0.010 | 0.039 | 4.014
170 | 0.57 | 10.37±0.08 | 0.806±0.0015 | 2.13±0.02 | 3.10±0.037 | 0.60±0.006 | X | 0.93±0.008 | 0.074 | 3.98
170 | 0.24 | 10.78±0.10 | 0.815±0.0019 | 2.71±0.05 | 3.23±0.025 | 0.66±0.011 | X | 1.23±0.017 | 0.074 | 3.98
270 | 2 | 9.40±0.02 | 0.781±0.0003 | 5.37±0.02 | 1.89±0.007 | 0.70±0.004 | X | 1.42±0.008 | 0.018 | 4.03
270 | 1.5 | 9.43±0.02 | 0.782±0.0004 | 5.02±0.02 | 1.93±0.009 | 0.69±0.005 | X | 1.36±0.009 | 0.018 | 4.03
270 | 1.5 | 9.43±0.02 | 0.782±0.0004 | 5.02±0.02 | 1.93±0.009 | 0.69±0.005 | X | 1.36±0.009 | 0.018 | 4.03
270 | 0.69 | 9.85±0.04 | 0.793±0.0009 | 4.05±0.04 | 2.70±0.016 | 0.71±0.009 | X | 1.53±0.016 | 0.039 | 4.014
370 | 2.3 | 9.36±0.01 | 0.780±0.0003 | 6.68±0.03 | 1.48±0.008 | 0.69±0.005 | X | 1.38±0.009 | 0.018 | 4.036
370 | 1.5 | 9.44±0.02 | 0.782±0.0003 | 6.43±0.03 | 1.65±0.009 | 0.71±0.006 | X | 1.49±0.010 | 0.018 | 4.036
370 | 0.96 | 9.53±0.02 | 0.785±0.0004 | 5.85±0.03 | 2.08±0.010 | 0.74±0.006 | X | 1.70±0.012 | 0.039 | 4.014
370 | 0.6 | 9.81±0.03 | 0.792±0.0007 | 5.33±0.05 | 2.13±0.017 | 0.72±0.010 | X | 1.59±0.019 | 0.039 | 4.014
520 | 2.5 | 9.57±0.01 | 0.786±0.0003 | 8.15±0.04 | 1.82±0.008 | 0.79±0.005 | X | 2.07±0.012 | 0.018 | 4.036
520 | 1.19 | 9.28±0.02 | 0.778±0.0004 | 6.66±0.04 | 1.57±0.010 | 0.70±0.006 | X | 1.47±0.012 | 0.018 | 4.036
520 | 0.66 | 8.90±0.03 | 0.769±0.0006 | 4.05±0.02 | 2.03±0.012 | 0.65±0.007 | X | 1.15±0.010 | 0.039 | 4.014
520 | 0.45 | 9.82±0.03 | 0.792±0.0005 | 6.36±0.06 | 1.94±0.016 | 0.74±0.010 | X | 1.72±0.020 | 0.039 | 4.014
Table S3: WT spherical and cylindrical fitting analysis data. Analysis
parameters (brush height ($h$), scaling exponent ($\nu$), aggregation number
($Z$), core peptide length ($n$), core radius ($R$), cylindrical core length
($L$), core volume ($V$), tail scattering length ($\beta_{t}$) and core
scattering length ($\beta_{c}$)) obtained for different salt concentrations
($C_{s}$) and protein concentrations ($C$). Cylinder length $L$ values are only relevant for $C_{s}=20$ mM, where a cylindrical core fit was used. For the
cylindrical core, the same values of $L$ and $R$ were used for all
concentrations to alleviate fitting errors (see Methods).
$C_{s}$ | $h/2$ | $\nu$ | $Z$ | $n$ | $R$ | $L$ | $V$
---|---|---|---|---|---|---|---
(mM) | (nm) | | | | (nm) | (nm) | (nm${}^{3}$)
20 | 8.01±0.46 | 0.751±0.013 | 2.03±0.08 | 7.08±0.39 | 0.89 | 1.19 | 2.26
170 | 10.60±0.33 | 0.811±0.008 | 2.83±0.31 | 3.17±0.55 | 0.64±0.03 | X | 1.10±0.16
270 | 9.84±0.21 | 0.793±0.006 | 3.52±0.22 | 3.06±0.30 | 0.70±0.03 | X | 1.49±0.20
370 | 9.73±0.12 | 0.790±0.003 | 5.19±0.29 | 2.39±0.12 | 0.76±0.02 | X | 1.67±0.01
520 | 9.47±0.44 | 0.783±0.011 | 5.67±0.35 | 1.82±0.29 | 0.68±0.06 | X | 1.28±0.38
Table S4: Zero concentration WT spherical and cylindrical fitting analysis data. Analysis parameters (brush height ($h$), scaling exponent ($\nu$), aggregation number ($Z$), core peptide length ($n$), core radius ($R$), cylindrical core length ($L$) and core volume ($V$)) were extrapolated to zero protein concentration at various salt concentrations ($C_{s}$). Cylinder length $L$ values are only relevant for $C_{s}=20$ mM, where a cylindrical core was used.
Figure S7: Structural parameters for WT (circles) and $\rm{\Delta}$N42 (triangles) variants extracted from fitting the SAXS data. Dashed lines demonstrate the linear fitting of the data used to obtain the zero concentration extrapolations. a. The dependence of the aggregation number ($Z$) on protein concentration ($C$) increases with increasing salt. b. Core volume $V_{s}$ against protein concentration ($C$). At $C_{s}=20$ mM, the $V_{s}$ values are constant due to fitting constraints (see Methods). c. In all cases, the tail heights ($h$) are much larger than the corresponding grafting length ($\rho$), indicative of a brush regime. d. The structurally intrinsically disordered $\rm{\Delta}$N42 variant compacts at higher $C_{s}$ values and remains more compact than the projected tails of the WT variant. For the $\rm{\Delta}$N42 variant, $r_{G}$ changes drastically as a function of the protein concentration ($C$).
$C_{s}$ | $A_{2}^{WT}$ | $A_{2}^{\Delta N42}$
---|---|---
(mM) | (cm${}^{3}$mol/g${}^{2}\times 10^{3}$) | (cm${}^{3}$mol/g${}^{2}\times 10^{3}$)
20 | -0.295±1.346 | 13.264±0.466
70 | X | 3.978±1.248
170 | -2.072±2.091 | 0.169±1.544
270 | -3.328±0.508 | -1.152±0.756
370 | -2.020±0.563 | X
520 | -1.933±3.582 | -4.417±1.514
Table S5: Second virial coefficient $A_{2}$ values for both variants at salt concentrations $C_{s}$.
Figure S8: a. Flory exponent ($\nu$) of WT tails and $\rm{\Delta}$N42 variants as a function of the concentration. $\rm{\Delta}$N42 changes radically as a function of the concentration at the lowest salinities. This effect is reduced as the salinity $C_{s}$ reaches $170$ mM. The WT and the remaining $\rm{\Delta}$N42 $\nu$ data show little change as a function of the protein concentration. b. The core (aggregated) peptide length per polypeptide as a function of the concentrations. The large drop observed from $C_{s}=20$ mM to $C_{s}=170$ mM can be attributed to the shift from a dimer to a trimer. The core peptide length difference diminishes with increasing salinity; however, the values still remain largely similar.
Figure S9: a. SDS-PAGE Tris-Glycine 15% of both $\rm{\Delta}$N42 and NFLt (WT), showing purity above 95%. White dashed lines indicate where image lanes were edited closer for clarity. Both show a higher molecular weight reading in the gel, which is common for IDPs. b-c. Deconvoluted ESI-TOF MS spectra of $\rm{\Delta}$N42 and NFLt, respectively. Theoretical molecular weight values are 12423.57 and 16233.79 Da for $\rm{\Delta}$N42 and NFLt, respectively.
# Rigidity of acute triangulations of the plane
Tianqi Wu Department of Mathematics, Clark University, 950 Main St,
Worcester, MA 01610, USA<EMAIL_ADDRESS>
###### Abstract.
We show that a uniformly acute triangulation of the plane is rigid under Luo’s
discrete conformal change, extending previous results on hexagonal
triangulations. Our result is a discrete analogue of the conformal rigidity of
the plane. We follow He’s analytical approach from his work on the rigidity of disk patterns. The main tools include maximum principles, a discrete Liouville theorem, and smooth and discrete extremal lengths on networks. The key step is
relating the Euclidean discrete conformality to the hyperbolic discrete
conformality, to obtain an $L^{\infty}$ bound on the discrete conformal
factor.
###### Contents
1. 1 Introduction
1. 1.1 Other Related Works
2. 1.2 Notations and Conventions
3. 1.3 Organization of the Paper
4. 1.4 Acknowledgement
2. 2 Preparations for the Proof
1. 2.1 Extremal Length and Modulus of Annuli
2. 2.2 Discrete Harmonic Functions
3. 2.3 Differential of the Curvature Map
4. 2.4 Maximum Principles
5. 2.5 Key Estimates on the Conformal Factors for Geodesic Embeddings
3. 3 Proof of Theorem 1.2
1. 3.1 Proof of Theorem 1.2 Assuming the Boundedness of $\bar{u}$
2. 3.2 Boundedness of the Conformal Factor
4. 4 Discrete Extremal Length and the Discrete Liouville Theorem
1. 4.1 Electrical Networks and Discrete Extremal Length
2. 4.2 Proof of the Discrete Liouville Theorem
5. 5 Hyperbolic Maximum Principles and Proof of Lemma 2.9
1. 5.1 Proof of the Hyperbolic Maximum Principle
6. A Proof of Lemma 5.1
## 1\. Introduction
A fundamental property in conformal geometry is that a conformal embedding of
the plane $\mathbb{R}^{2}$ into itself must be a similarity transformation. In this
paper we discretize the plane by triangulations and prove a similar rigidity
result under the notion of discrete conformal change introduced by Luo
[Luo04].
Let $T=(V,E,F)$ be an (infinite) simplicial topological triangulation of the
Euclidean plane $\mathbb{R}^{2}$, where $V$ is the set of vertices, $E$ is the
set of edges and $F$ is the set of faces. Given a subcomplex
$T_{0}=(V_{0},E_{0},F_{0})$ of $T$, denote $|T_{0}|$ as the underlying space
of $T$. An embedding (_resp._ homeomorphism)
$\phi:|T_{0}|\rightarrow\mathbb{R}^{2}$ is called _geodesic_ if $\phi$ maps
each edge of $T_{0}$ to a geodesic arc, i.e., a straight closed line segment.
A _piecewise linear metric_ (_PL metric_ for short) on $T_{0}$ is represented
by an edge length function $l\in\mathbb{R}^{E_{0}}_{>0}$ satisfying the
triangle inequalities. A geodesic embedding $\phi$ of $T_{0}$ naturally
induces a PL metric $l=l(\phi)$ on $T_{0}$ by letting
$l_{ij}=|\phi(i)-\phi(j)|_{2}$. Luo [Luo04] introduced the following notion of
discrete conformality.
###### Definition 1.1 (Luo [Luo04]).
Two PL metrics $l,l^{\prime}$ on $T_{0}=(V_{0},E_{0},F_{0})$ are _discretely
conformal_ if there exists some $u\in\mathbb{R}^{V_{0}}$ such that for any
edge $ij\in E_{0}$
$l^{\prime}_{ij}=e^{\frac{1}{2}(u_{i}+u_{j})}l_{ij}.$
In this case, $u$ is called a _discrete conformal factor_ , and we denote
$l^{\prime}=u*l$.
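For example (an illustration, not from [Luo04]): taking a constant factor $u\equiv c$ on $V_{0}$ gives
$l^{\prime}_{ij}=e^{\frac{1}{2}(c+c)}l_{ij}=e^{c}l_{ij}$
for every edge, so constant discrete conformal factors correspond exactly to global scalings of the PL metric. This is the sense in which the metrics in Theorem 1.2 below ‘differ by a constant scaling.’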
Given a PL metric $l$ on $T_{0}$, let $\theta^{i}_{jk}$ denote the inner angle
at the vertex $i$ in the triangle $\triangle ijk$ under the metric $l$. Then
$l$ is called
1. (a)
_uniformly nondegenerate_ if there exists a constant $\epsilon>0$ such that
$\theta^{i}_{jk}\geq\epsilon$ for all $\triangle ijk$ in $T_{0}$, and
2. (b)
_uniformly acute_ if there exists a constant $\epsilon>0$ such that
$\theta^{i}_{jk}\leq\pi/2-\epsilon$ for all $\triangle ijk$ in $T_{0}$, and
3. (c)
_Delaunay_ if $\theta^{k}_{ij}+\theta^{k^{\prime}}_{ij}\leq\pi$ for any pair
of adjacent triangles $\triangle ijk$ and $\triangle ijk^{\prime}$ in $T_{0}$.
A uniformly acute PL metric is clearly uniformly nondegenerate and Delaunay.
The main result of the paper is the following.
###### Theorem 1.2.
Suppose $\phi$ is a geodesic homeomorphism of $T$ and $\psi$ is a geodesic
embedding of $T$. If $l(\phi),l(\psi)$ are discretely conformal and both
uniformly acute, then they differ by a constant scaling.
Wu-Gu-Sun [WGS15] first proved Theorem 1.2 for the special case where
$\phi(T)$ is a regular hexagonal triangulation. Dai-Ge-Ma [DGM22] and Luo-Sun-
Wu [LSW20] generalized Wu-Gu-Sun’s result by allowing $l(\psi)$ to be only
Delaunay rather than uniformly acute. All these works essentially rely on the
lattice structure of the embedded vertices $\phi(V)$, and apparently cannot be
generalized to triangulations without translational invariance. To prove
Theorem 1.2, we adopt a different approach, developed by He [He99] in his state-of-the-art work on the rigidity of disk patterns.
### 1.1. Other Related Works
After Luo introduced Definition 1.1, various properties regarding the rigidity and convergence of discrete conformality were discussed in [BPS15][WGS15][GLW19][WZ20][LSW20][LWZ21a][DGM22]. To solve the problem of
singularity in the discrete Yamabe flow, Gu et al. [GLSW18][GGL+18] proposed a
revised notion of discrete conformality for piecewise Euclidean (or
hyperbolic) metrics on closed surfaces with marked points, and completely
solved the prescribed curvature problem. This major improvement in the theory
of discrete conformality inspired new advanced numerical methods in computing
conformal maps [SWGL15][GSC21][CCS+21], as well as further theoretical
investigations [Spr19][LW19]. Gu et al. [GLSW18][GGL+18] proposed to use the
discrete Yamabe flow to numerically compute the target metric in the
prescribed curvature problem. Since the discrete Yamabe flow may pass through
different combinatorial triangulations, diagonal switches might be needed
along the flow. In [Wu14] it is proved that only finitely many diagonal
switches are needed in a Yamabe flow. Other works on discrete geometric flows
or deformations of triangle meshes can be found in [ZGZ+14][GH18]
[ZX19][FLZ20][WX21][LWZ21b][LWZ21c][LWZ22][Luo22].
### 1.2. Notations and Conventions
In the remainder of the paper, we will identify the plane $\mathbb{R}^{2}$ with
the complex plane $\mathbb{C}$. Given $0<r<r^{\prime}$, denote
$D_{r}=\\{z\in\mathbb{C}:|z|<r\\}$ and
$A_{r,r^{\prime}}=\\{z\in\mathbb{C}:r<|z|<r^{\prime}\\}$. We also denote
$D=D_{1}$ as the unit open disk. Given a subset $A$ of $\mathbb{C}$, $A^{c}$
denotes the complement $\mathbb{C}\backslash A$ and $\partial A$ denotes the
boundary of $A$ in $\mathbb{C}$. Given two subsets $A,B$ of $\mathbb{C}$, the
diameter of $A$ is denoted by
$\text{diam}(A)=\sup\\{|z-z^{\prime}|:z,z^{\prime}\in A\\},$
and the distance between $A,B$ is denoted by
$d(A,B)=\inf\\{|z-z^{\prime}|:z\in A,z^{\prime}\in B\\}.$
Given a subset $V_{0}$ of $V$, we use the following notations and conventions.
1. (a)
The _complement_ of $V_{0}$ is denoted as $V_{0}^{c}=V\backslash V_{0}$.
2. (b)
The _boundary_ of $V_{0}$ is denoted as
$\partial V_{0}=\\{i\in V_{0}:\text{there exists $j\in V_{0}^{c}$ such that
$ij\in E$}\\}.$
3. (c)
The _interior_ of $V_{0}$ is denoted as
$int(V_{0})=V_{0}\backslash\partial V_{0}=\\{i\in V_{0}:j\in V_{0}\text{ if
}ij\in E\\}.$
4. (d)
The _closure_ of $V_{0}$ is denoted as
$\overline{V_{0}}=V_{0}\cup\partial(V_{0}^{c})=(int(V_{0}^{c}))^{c}.$
5. (e)
The subcomplex generated by $V_{0}$ is denoted as $T(V_{0})$.
6. (f)
Denote $E(V_{0})=\\{ij\in E:i\in int(V_{0})\text{ or }j\in int(V_{0})\\}$.
Notice that $E(V_{0})$ generally is not the set of edges in $T(V_{0})$.
7. (g)
A real-valued function on $V_{0}$ is often identified with a vector in
$\mathbb{R}^{V_{0}}$.
Given $i\in V$, the _1-ring neighborhood_ of $i$ is the subcomplex generated
by $i$ and its neighbors. In other words, the 1-ring neighborhood of $i$ is
$T(\\{i\\}\cup\\{j\in V:ij\in E\\}).$
Furthermore, we denote $R_{i}$ as the underlying space of the 1-ring
neighborhood of $i$. Given a subcomplex $T_{0}=(V_{0},E_{0},F_{0})$ of $T$ and
$l\in\mathbb{R}^{E_{0}}$ and $u\in\mathbb{R}^{V_{0}}$, if $u*l$ is a PL metric
then
1. (a)
$\theta^{i}_{jk}(u)=\theta^{i}_{jk}(u,l)$ denotes the inner angle of
$\triangle ijk$ at $i$ under $u*l$, and
2. (b)
$K_{i}(u)=K_{i}(u*l)$ denotes the discrete curvature
$K_{i}(u)=2\pi-\sum_{jk:\triangle ijk\in F}\theta^{i}_{jk}(u).$
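Both quantities are directly computable from the edge lengths. The following Python sketch (with illustrative data) evaluates the inner angles by the law of cosines and the discrete curvature at an interior vertex.

```python
# Hedged sketch: inner angles via the law of cosines, and the discrete
# curvature K_i = 2*pi minus the angle sum at an interior vertex i.
import math

def inner_angle(l_ij, l_ik, l_jk):
    """Angle at vertex i in triangle ijk under the PL metric l."""
    return math.acos((l_ij**2 + l_ik**2 - l_jk**2) / (2 * l_ij * l_ik))

# Star of i: list of (l_ij, l_ik, l_jk) for each triangle ijk containing i.
star = [(1.0, 1.0, 1.0)] * 6  # six equilateral triangles around i
K_i = 2 * math.pi - sum(inner_angle(*t) for t in star)
print(K_i)  # 0.0 up to rounding: the regular hexagonal star is flat
```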
### 1.3. Organization of the Paper
In Section 2 we introduce necessary properties and tools for the proof of the
main theorem. The proof of the main Theorem 1.2 is given in Section 3. Section 4
gives a proof of a discrete Liouville theorem, which is used in proving
Theorem 1.2. Section 5 proves a key estimate for the discrete conformal factor
by relating it to the hyperbolic discrete conformality.
### 1.4. Acknowledgement
The work is supported in part by NSF 1760471.
## 2\. Preparations for the Proof
### 2.1. Extremal Length and Modulus of Annuli
We briefly review the notions of extremal length and conformal modulus. The
definitions and properties discussed here are mostly well-known. One may refer
to [Ahl10] and [LV73] for more comprehensive introductions.
A _closed annulus_ is a subset of $\mathbb{C}$ that is homeomorphic to
$\\{z\in\mathbb{C}:1\leq|z|\leq 2\\}$. An _(open) annulus_ is the interior of
a closed annulus. Given an annulus $A$, denote $\Gamma=\Gamma(A)$ as the set
of smooth simple closed curves in $A$ separating the two boundary components
of $A$. A real-valued Borel measurable function $f$ on $A$ is called
_admissible_ if $\int_{\gamma}fds\geq 1$ for all $\gamma\in\Gamma$. Here $ds$
denotes the element of arc length. The _(conformal) modulus_ of $A$ is defined
as
$\text{Mod}(A)=\inf\\{\int_{A}f^{2}:f\text{ is admissible}\\},$
where $\int_{A}f^{2}$ denotes the integral of $f(z)^{2}$ against the
2-dimensional Lebesgue measure on $A$. From the definition it is straightforward to verify
that $\text{Mod}(A)$ is conformally invariant. Furthermore, if $f:A\rightarrow
A^{\prime}$ is a $K$-quasiconformal homeomorphism between two annuli, then
$\frac{1}{K}\cdot\text{Mod}(A)\leq\text{Mod}(A^{\prime})\leq{K}\cdot\text{Mod}(A).$
Given $0<r<r^{\prime}$, denote $A_{r,r^{\prime}}$ as the annulus
$\\{z\in\mathbb{C}:r<|z|<r^{\prime}\\}$. It is well-known that
$\text{Mod}(A_{r,r^{\prime}})=\frac{1}{2\pi}\log\frac{r^{\prime}}{r}.$
Intuitively, the conformal modulus measures the relative thickness of an
annulus. If an annulus $A$ in $\mathbb{C}\backslash\\{0\\}$ contains
$A_{r,r^{\prime}}$, then it is “thicker” than $A_{r,r^{\prime}}$ and one can
show that
$\text{Mod}(A)\geq\text{Mod}(A_{r,r^{\prime}})=\frac{1}{2\pi}\log\frac{r^{\prime}}{r}.$
On the other hand, we have that
###### Lemma 2.1.
Suppose $A\subseteq\mathbb{C}\backslash\\{0\\}$ is an annulus separating $0$
from the infinity. If $\text{Mod}(A)\geq 100$, then $A\supseteq A_{r,2r}$ for
some $r>0$.
###### Proof.
Denote $B$ as the bounded component of $\mathbb{C}\backslash A$, and let
$r=\max\{|z|:z\in B\}$ and $R=\min\{|z|:z\in(B\cup A)^{c}\}$. If $R\geq 2r$ we are
done. So we may assume $R<2r$.
Then $D_{2r}\cap\gamma\neq\emptyset$ for all $\gamma\in\Gamma(A)$. Let $f$ be
a function on $A$ such that $f(z)=1/r$ on $A\cap D_{3r}$ and $f(z)=0$ on
$A\backslash D_{3r}$. If $\gamma\in\Gamma$ and $\gamma\subseteq D_{3r}$,
$\int_{\gamma}fds=s(\gamma)\cdot\frac{1}{r}\geq
2\cdot\text{diam}(B)\cdot\frac{1}{r}\geq 2r\cdot\frac{1}{r}=2>1,$
since the length of a closed curve enclosing $B$ is at least $2\,\text{diam}(B)$
and $\text{diam}(B)\geq r$.
If $\gamma\in\Gamma$ and $\gamma\not\subseteq D_{3r}$, then $\gamma$ is a
connected curve joining $D_{2r}$ and $D_{3r}^{c}$ and
$\int_{\gamma}fds\geq
d(D_{2r},D_{3r}^{c})\cdot\frac{1}{r}=r\cdot\frac{1}{r}=1.$
So $f$ is admissible and
$\text{Mod}(A)\leq\int_{A}f^{2}=\frac{1}{r^{2}}\cdot\text{Area}(A\cap
D_{3r})\leq\frac{\pi(3r)^{2}}{r^{2}}=9\pi<100.$
This contradicts our assumption. ∎
###### Remark 2.2.
To some extent, Lemma 2.1 is a consequence of Teichmüller's result on extremal
annuli (see Theorem 4-7 in [Ahl10]). The constant $100$ is chosen for
convenience and is not meant to be optimal.
### 2.2. Discrete Harmonic Functions
Given $V_{0}\subset V$ and $\eta\in\mathbb{R}^{E(V_{0})}_{>0}$, a discrete
function $f:V_{0}\rightarrow\mathbb{R}$, or equivalently a vector
$f\in\mathbb{R}^{V_{0}}$, is called _harmonic at_ $i\in int(V_{0})$ if
$\sum_{j:ij\in E}\eta_{ij}(f_{j}-f_{i})=0.$
The following result is well-known and easy to prove.
###### Proposition 2.3.
Suppose $V_{0}$ is a finite subset of $V$ and
$\eta\in\mathbb{R}^{E(V_{0})}_{>0}$.
1. (a)
If $f\in\mathbb{R}^{V_{0}}$ is harmonic at $i$ for all $i\in int(V_{0})$, then
for all $i\in V_{0}$
$|f_{i}|\leq\max_{j\in\partial V_{0}}|f_{j}|.$
2. (b)
Given $g:\partial V_{0}\rightarrow\mathbb{R}$, there exists a unique function
$f:V_{0}\rightarrow\mathbb{R}$ such that
1. (i)
$f_{i}=g_{i}$ on $\partial V_{0}$, and
2. (ii)
$f$ is harmonic at any $i\in int(V_{0})$.
Furthermore, such a map $(\eta,g)\mapsto f$ is smooth from
$\mathbb{R}^{E(V_{0})}_{>0}\times\mathbb{R}^{\partial V_{0}}$ to
$\mathbb{R}^{V_{0}}$.
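Part (b) amounts to solving a linear system: one equation per interior vertex. The following Python sketch assembles and solves these harmonicity conditions on a small illustrative graph (a path $0$-$1$-$2$-$3$ with boundary $\{0,3\}$).

```python
# Hedged sketch: discrete Dirichlet problem of part (b) on a path graph.
import numpy as np

edges = {(0, 1): 1.0, (1, 2): 2.0, (2, 3): 1.0}   # conductances eta_ij > 0
g = {0: 0.0, 3: 1.0}                               # boundary data on {0, 3}

interior = [1, 2]
A = np.zeros((2, 2))
b = np.zeros(2)
for (i, j), eta in edges.items():
    for row, v in enumerate(interior):
        if v in (i, j):
            w = j if v == i else i                 # the other endpoint of ij
            A[row, row] += eta                     # coefficient of f_v
            if w in interior:
                A[row, interior.index(w)] -= eta   # coefficient of f_w
            else:
                b[row] += eta * g[w]               # known boundary term

f_int = np.linalg.solve(A, b)
print(f_int)  # [0.4, 0.6]: harmonic interpolation of the boundary values
```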
Given $\eta\in\mathbb{R}^{E}_{>0}$, $f\in\mathbb{R}^{V}$ is called _harmonic_
if it is harmonic at all points in $V$. It is well-known by Liouville’s
Theorem that any bounded smooth harmonic function on the plane is constant.
Here we have a discrete version of Liouville’s Theorem.
###### Theorem 2.4.
Suppose $\phi$ is a geodesic embedding of $T$ and $l(\phi)$ is uniformly
nondegenerate. If $\eta\in\mathbb{R}^{E}_{>0}$ satisfies
$|\eta|_{\infty}<\infty$, then any bounded harmonic function on $(T,\eta)$ is
constant.
The proof of Theorem 2.4 is postponed to Section 4.
### 2.3. Differential of the Curvature Map
The differential of $K_{i}(u)$ has the following elegant formula, first
proposed by Luo [Luo04].
###### Proposition 2.5 (Adapted from Theorem 2.1 in [Luo04]).
Suppose $T_{0}=(V_{0},E_{0},F_{0})$ is a 1-ring neighborhood of $i\in V$ and
$l\in\mathbb{R}^{E_{0}}$. Then $K_{i}=K_{i}(u)$ is a smooth function on an
open set in $\mathbb{R}^{V_{0}}$, and
$dK_{i}=\sum_{j:ij\in E}\eta_{ij}(du_{i}-du_{j}),$
where $\eta_{ij}=\eta_{ij}(u)$ is defined to be
(2.1)
$\eta_{ij}(u)=\frac{1}{2}\left(\cot\theta^{k}_{ij}(u)+\cot\theta^{k^{\prime}}_{ij}(u)\right),$
where $\triangle ijk,\triangle ijk^{\prime}$ are the two triangles in $F$
containing edge $ij$.
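For a concrete feel of the weights (2.1), the following Python sketch (illustrative data only) evaluates $\eta_{ij}(u)$ for an edge shared by two triangles under the conformally changed metric $u*l$.

```python
# Hedged sketch: evaluate the cotangent weight (2.1) for an edge ij shared
# by triangles ijk and ijk', under the conformally changed metric u*l.
import math

def angle(l_a, l_b, l_c):
    """Angle between the sides of lengths l_a, l_b (opposite side l_c)."""
    return math.acos((l_a**2 + l_b**2 - l_c**2) / (2 * l_a * l_b))

def eta_ij(l, u, i, j, k, kp):
    """Weight eta_ij(u) from the two triangles ijk and ijk' under u*l."""
    s = lambda a, b: math.exp(0.5 * (u[a] + u[b])) * l[frozenset((a, b))]
    theta_k = angle(s(k, i), s(k, j), s(i, j))     # angle at k in ijk
    theta_kp = angle(s(kp, i), s(kp, j), s(i, j))  # angle at k' in ijk'
    return 0.5 * (1 / math.tan(theta_k) + 1 / math.tan(theta_kp))

l = {frozenset(e): 1.0 for e in [(0, 1), (0, 2), (1, 2), (0, 3), (1, 3)]}
u = {v: 0.0 for v in range(4)}
print(eta_ij(l, u, 0, 1, 2, 3))  # cot(pi/3) for two equilateral faces
```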
### 2.4. Maximum Principles
We need the following maximum principle.
###### Lemma 2.6.
Suppose $V_{0}$ is a finite subset of $V$, and $u*l,u^{\prime}*l$ are Delaunay
PL metrics on $T(V_{0})$. If $K_{i}(u)=K_{i}(u^{\prime})=0$ for all $i\in
int(V_{0})$, then for all $i\in V_{0}$
$|u_{i}^{\prime}-u_{i}|\leq\max_{j\in\partial V_{0}}|u_{j}^{\prime}-u_{j}|.$
Lemma 2.6 is a standard consequence of the following local maximum principle,
which is adapted from Lemma 2.12 in [DGM22] (or Theorem 3.1 in [LSW20]).
###### Lemma 2.7.
Suppose $i\in V$ and $T_{0}=(V_{0},E_{0},F_{0})$ is the 1-ring neighborhood of
$i$ in $V$. Given $l\in\mathbb{R}^{E_{0}}$, if $u*l,u^{\prime}*l$ are two
Delaunay PL metrics on $T_{0}$ and $K_{i}(u)=K_{i}(u^{\prime})=0$, then
$u_{i}^{\prime}-u_{i}\leq\max_{j:ij\in E}(u_{j}^{\prime}-u_{j})$
and the equality holds if and only if $(u_{i}^{\prime}-u_{i})=(u_{j}^{\prime}-u_{j})$ for
every neighbor $j$ of $i$.
###### Remark 2.8.
Lemma 2.12 in [DGM22] is a special case of our Lemma 2.7, where
$u_{i}=u_{i}^{\prime}=0$ is further assumed. However, by scaling
invariance the two lemmas are equivalent.
### 2.5. Key Estimates on the Conformal Factors for Geodesic Embeddings
###### Lemma 2.9.
Suppose $\epsilon>0$ and $\phi,\psi$ are two geodesic embeddings of a
subcomplex $T_{0}=(V_{0},E_{0},F_{0})$ of $T$, such that
1. (i)
$l(\psi)=u*l(\phi)$ for some $u\in\mathbb{R}^{V_{0}}$, and
2. (ii)
the inner angles in both PL metrics $l(\phi)$ and $l(\psi)$ are at most
$\pi/2-\epsilon$.
Given $r,r^{\prime}>0$ and $i\in V_{0}$, if
$\phi(|T_{0}|)\subseteq D_{r}$
and
$\psi(i)\in D_{r^{\prime}/2}\subseteq D_{r^{\prime}}\subseteq\psi(|T_{0}|),$
then
$u_{i}\geq\log(r^{\prime}/r)-M$
for some constant $M=M(\epsilon)>0$.
## 3\. Proof of Theorem 1.2
Assume $l(\psi)=\bar{u}*l(\phi)$, and all the inner angles in
$l(\phi),l(\psi)$ are at most $\pi/2-\epsilon$ for a constant $\epsilon>0$. We
will first prove Theorem 1.2 assuming $\bar{u}:V\rightarrow\mathbb{R}$ is
bounded in Section 3.1, and then prove $\bar{u}$ is bounded in Section 3.2.
### 3.1. Proof of Theorem 1.2 Assuming the Boundedness of $\bar{u}$
We argue by contradiction and assume that $\bar{u}$ is not constant.
Without loss of generality, we can do a scaling and assume
$\inf_{i\in V}\bar{u}_{i}<0<\sup_{i\in V}\bar{u}_{i}$
and
$-\inf_{i\in V}\bar{u}_{i}=\sup_{i\in V}\bar{u}_{i}=|\bar{u}|_{\infty}.$
By a standard compactness argument, it is not difficult to see that there
exists a small constant
$\delta=\delta(\epsilon,\bar{u})\in(0,|\bar{u}|_{\infty})$ such that if
$|u|_{\infty}<2\delta$,
$\theta^{i}_{jk}(u)=\theta^{i}_{jk}(u,l(\phi))\leq\pi/2-\epsilon/2$
for all $\triangle ijk\in F$. Pick an increasing sequence of finite subsets $V_{n}$ of
$V$ such that $\cup_{n=1}^{\infty}V_{n}=V$. For each $n\in\mathbb{Z}_{>0}$, we
will construct a smooth $\mathbb{R}^{V_{n}}$-valued function
$u^{(n)}(t)=[u_{i}^{(n)}(t)]_{i\in V_{n}}$ on $(-2\delta,2\delta)$ such that
1. (a)
$u^{(n)}(0)=0$, and
2. (b)
$\dot{u}_{i}^{(n)}(t)=\bar{u}_{i}/|\bar{u}|_{\infty}$ if $i\in\partial V_{n}$,
and
3. (c)
if $i\in\text{int}(V_{n})$ then
(3.1) $\sum_{j:ij\in
E}\eta_{ij}(u^{(n)}(t))(\dot{u}_{i}^{(n)}(t)-\dot{u}_{j}^{(n)}(t))=0$
where $\eta_{ij}(u)$ is defined for all $ij\in E(V_{n})$ as in equation (2.1).
The conditions (b) and (c) give an autonomous ODE system on
$\mathcal{U}_{n}=\\{u\in\mathbb{R}^{V_{n}}:|u|_{\infty}<2\delta\\}.$
Notice that $\eta_{ij}(u)>0$ if $u\in\mathcal{U}_{n}$. Then by part (b) of
Proposition 2.3, $\dot{u}^{(n)}(t)$ is smoothly determined by $u^{(n)}(t)$ on
$\mathcal{U}_{n}$. Given the initial condition $u^{(n)}(0)=0$, assume the
maximum existence interval for this ODE system on $\mathcal{U}_{n}$ is
$(t_{\min},t_{\max})$ where $t_{\min}\in[-\infty,0)$ and
$t_{\max}\in(0,\infty]$. By the maximum principle (part (a) of Proposition 2.3), for
all $i\in V_{n}$
$|\dot{u}_{i}^{(n)}|\leq\max_{j\in\partial
V_{n}}|\dot{u}_{j}^{(n)}|=\max_{j\in\partial
V_{n}}|\bar{u}_{j}|/|\bar{u}|_{\infty}\leq 1.$
So $|u^{(n)}(t)|_{\infty}\leq t$ for all $t\in[0,t_{\max})$. By
the maximality of $t_{\max}$, $t_{\max}=\infty$ or
$|u^{(n)}(t)|_{\infty}\rightarrow 2\delta\quad\text{ as }\quad t\rightarrow
t_{\max}.$
So $t_{\max}\geq 2\delta$ and, by a similar argument, $t_{\min}\leq-2\delta$.
Hence $u^{(n)}(t)$ is indeed well-defined on $(-2\delta,2\delta)$. By Proposition
2.5 and equation (3.1), $K_{i}(u^{(n)}(t))=0$ for all $i\in int(V_{n})$. Then
by Lemma 2.6, for all $i\in V_{n}$
(3.2) $\displaystyle|\bar{u}_{i}-u_{i}^{(n)}(\delta)|\leq\max_{j\in\partial
V_{n}}|\bar{u}_{j}-u_{j}^{(n)}(\delta)|=\max_{j\in\partial
V_{n}}\left(\bar{u}_{j}-\delta\cdot\frac{\bar{u}_{j}}{|\bar{u}|_{\infty}}\right)$
$\displaystyle\leq$
$\displaystyle(1-\frac{\delta}{|\bar{u}|_{\infty}})|\bar{u}|_{\infty}=|\bar{u}|_{\infty}-\delta.$
By picking a subsequence, we may assume that $u^{(n)}_{i}$ converges to
$u_{i}^{*}$ on $[0,\delta]$ uniformly for all $i\in V$. Then
$u^{*}=[u_{i}^{*}]_{i\in V}$ satisfies the following.
(a) $u^{*}_{i}(t)$ is 1-Lipschitz for all $i\in V$. As a consequence, for all
$i\in V$, $u_{i}^{*}(t)$ is differentiable at a.e. $t\in[0,\delta]$.
(b) For all $\triangle ijk\in F$,
$\theta^{i}_{jk}(u^{*}(t))\leq\frac{\pi}{2}-\frac{\epsilon}{2}$. As a
consequence $\theta^{i}_{jk}(u^{*}(t))\geq\epsilon$ for all $\triangle ijk\in
F$ and $\eta_{ij}(u^{*}(t))\leq 2\cot\epsilon$ for all $ij\in E$.
(c) For all $i\in V$, $K_{i}(u^{*}(t))=0$. As a consequence for a.e.
$t\in[0,\delta]$,
$0=\frac{d}{dt}K_{i}(u^{*}(t))=\sum_{j:ij\in
E}\eta_{ij}(u^{*}(t))(\dot{u}^{*}_{i}(t)-\dot{u}^{*}_{j}(t)),$
for all $i\in V$.
(d) By Theorem 2.4, $\dot{u}^{*}(t)$ is constant on $V$ for a.e.
$t\in[0,\delta]$. As a consequence, $u_{i}^{*}(\delta)$ equals a constant
$c$ independent of $i\in V$.
(e) By equation (3.2),
$|\bar{u}_{i}-c|=|\bar{u}_{i}-u^{*}_{i}(\delta)|\leq|\bar{u}|_{\infty}-\delta$
for all $i\in V$. As a consequence we get the following contradiction
$2|\bar{u}|_{\infty}=|\sup_{i\in V}\bar{u}_{i}-\inf_{i\in
V}\bar{u}_{i}|\leq|\sup_{i\in V}\bar{u}_{i}-c|+|\inf_{i\in
V}\bar{u}_{i}-c|\leq 2|\bar{u}|_{\infty}-2\delta.$
### 3.2. Boundedness of the Conformal Factor
Without loss of generality, we may assume that $\psi\circ\phi^{-1}$ is linear
on each triangle $\phi(\triangle ijk)$. Then $\psi\circ\phi^{-1}$ is
$K$-quasiconformal for some constant $K=K(\epsilon)>0$. We will prove the
boundedness of $\bar{u}$ by showing that for any $j,j^{\prime}\in V$,
$|\bar{u}_{j}-\bar{u}_{j^{\prime}}|\leq 2M+2\log C+\log C^{\prime}-\log 2,$
where $M=M(\epsilon)$ is the constant given in Lemma 2.9 and $C=C(\epsilon)$
is the constant given in Lemma 4.3 and
$C^{\prime}=C^{\prime}(\epsilon)=e^{200\pi K}$.
Assume $j,j^{\prime}\in V$. For convenience, let us assume $\phi(j)=\psi(j)=0$
by translations. Pick $r>0$ sufficiently large such that
$|\phi(j^{\prime})|<r/(2C)$ and $\phi(R_{j})\subseteq D_{r}$. Let
$V_{1}=\\{i\in V:\phi(i)\in D_{r}\\}$ and $V_{2}=\\{i\in V:\phi(i)\in
D_{CC^{\prime}r}\\}$ and $T_{1}=T(V_{1})$ and $T_{2}=T(V_{2})$. Then by Lemma
4.3 we have
(3.3) $\\{\phi(j),\phi(j^{\prime})\\}\subseteq D_{r/(2C)}\subseteq
D_{r/C}\subseteq\phi(|T_{1}|),$
and
$\phi(|T_{1}|)\subseteq D_{r}\subseteq D_{C^{\prime}r}\subseteq\phi(|T_{2}|)$
and
(3.4) $\phi(|T_{2}|)\subseteq D_{CC^{\prime}r}.$
So $A=A_{r,C^{\prime}r}$ separates $\phi(|T_{1}|)$ and $\phi(|T_{2}|)^{c}$,
and then $A^{\prime}=\psi\circ\phi^{-1}(A)$ separates
$\psi(|T_{1}|)\ni\psi(j)=0$ and $\psi(|T_{2}|)^{c}$; in particular,
$A^{\prime}$ separates $0$ from infinity. Furthermore
$\text{Mod}(A^{\prime})\geq\frac{1}{K}\cdot\text{Mod}(A)=\frac{1}{K}\cdot\frac{1}{2\pi}\log\frac{C^{\prime}r}{r}=100.$
Then by Lemma 2.1 there exists $r^{\prime}>0$ such that
$A_{r^{\prime},2r^{\prime}}\subseteq A^{\prime}$. So
$A_{r^{\prime},2r^{\prime}}$ separates $\psi(|T_{1}|)$ and $\psi(|T_{2}|)^{c}$, and
then
(3.5) $\psi(|T_{1}|)\subseteq D_{r^{\prime}}$
and
(3.6) $\\{\psi(j),\psi(j^{\prime})\\}\subseteq D_{r^{\prime}}\subseteq
D_{2r^{\prime}}\subseteq\psi(|T_{2}|).$
By Lemma 2.9 and equations (3.4) and (3.6), both
$\bar{u}_{j},\bar{u}_{j^{\prime}}$ are at least
$\log\frac{2r^{\prime}}{CC^{\prime}r}-M=\log\frac{r^{\prime}}{r}+\log\frac{2}{CC^{\prime}}-M.$
Again by Lemma 2.9 and equations (3.5) and (3.3), both $-\bar{u}_{j}$ and
$-\bar{u}_{j^{\prime}}$ are at least
$\log\frac{r/C}{r^{\prime}}-M=\log\frac{r}{r^{\prime}}-\log C-M.$
So both $\bar{u}_{j}$ and $\bar{u}_{j^{\prime}}$ are in the interval
$[\log\frac{r^{\prime}}{r}+\log\frac{2}{CC^{\prime}}-M,\log\frac{r^{\prime}}{r}+\log
C+M],$
and $|\bar{u}_{j}-\bar{u}_{j^{\prime}}|$ is bounded by the length of this
interval
$2M+\log C-\log\frac{2}{CC^{\prime}}=2M+2\log C+\log C^{\prime}-\log 2.$
## 4\. Discrete Extremal Length and the Discrete Liouville Theorem
### 4.1. Electrical Networks and Discrete Extremal Length
Discrete harmonic functions are closely related to the theory of electrical
networks. Here the 1-skeleton $(V,E)$ of the triangulation $T$ can be viewed
as an electrical network, and $\eta_{ij}$ denotes the conductance of the edge
$ij$, and the function $f$ denotes the electric potentials at the vertices.
Then $f$ is harmonic at $i$ if and only if the outward electric flux at $i$ is
$0$. The theory of electrical networks is closely related to discrete (edge)
extremal length, originally introduced by Duffin [Duf62]. Here we briefly
review the theory of discrete (edge) extremal length, adapted to our setting.
All the definitions and properties here are well-known; one may consult
[Duf62][He99] for references.
Assume $V_{1},V_{2}$ are two nonempty disjoint subsets of $V$ such that
$V_{0}=(V_{1}\cup V_{2})^{c}$ is finite. A _path_ $p$ between $V_{1}$ and
$V_{2}$ is a finite set of edges in
$E_{0}=E_{0}(V_{1},V_{2})=\\{ij\in E:i\in V_{0}\text{ or }j\in V_{0}\\}$
such that $\gamma_{p}=\cup\\{e:e\in p\\}$ is a simple curve connecting $V_{1}$
and $V_{2}$. Denote $P=P(V_{1},V_{2})$ as the set of paths between $V_{1}$ and
$V_{2}$. A _cut_ $q$ between $V_{1}$ and $V_{2}$ is a finite set of edges in
$E_{0}$ such that $q$ separates $V_{1}$ and $V_{2}$, i.e., for any path $p\in
P$, $p\cap q\neq\emptyset$. Denote $Q=Q(V_{1},V_{2})$ as the set of cuts
between $V_{1}$ and $V_{2}$.
Given $\mu\in\mathbb{R}^{E}_{>0}$, the _discrete (edge) extremal length_
$EL=EL(V_{1},V_{2},\mu)$ is defined as
(4.1) $EL=\min\\{\sum_{e\in
E_{0}}\mu_{e}w_{e}^{2}:w\in\mathbb{R}^{E_{0}},\sum_{e\in q}w_{e}\geq 1\text{
for all }q\in Q\\},$
and the _discrete (edge) extremal width_ $EW=EW(V_{1},V_{2},\mu)$ is defined
as
$EW=\min\\{\sum_{e\in E_{0}}\mu_{e}w_{e}^{2}:w\in\mathbb{R}^{E_{0}},\sum_{e\in
p}\mu_{e}w_{e}\geq 1\text{ for all }p\in P\\}.$
Here $\mu_{e}$ should be viewed as the resistance of edge $e\in E$. Then the
conductance of edge $e\in E$ should be $\eta_{e}=1/\mu_{e}$. If
$f:V\rightarrow\mathbb{R}$ is harmonic on $V_{0}$ with respect to $\eta$ and
$f|_{V_{1}}=0$ and $f|_{V_{2}}=1$, then $w_{ij}=|f_{j}-f_{i}|/\mu_{ij}$ gives
the unique minimizer in the quadratic minimization problem in equation (4.1).
If we view such $f$ as an electric potential, then $w_{e}$ represents the
current on edge $e\in E_{0}$ and $EL=\sum_{e\in E_{0}}\mu_{e}w_{e}^{2}$ is the
electrical power in the network, which is equal to the _(equivalent)
resistance_ between $V_{1}$ and $V_{2}$. The discrete extremal length and
width satisfy the following reciprocal theorem.
###### Theorem 4.1 (Adapted from Corollary 1 in [Duf62]).
$EL(V_{1},V_{2},\mu)\cdot EW(V_{1},V_{2},\mu)=1$.
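Numerically, the extremal length of a small network can be computed from the harmonic potential as described above. The following Python sketch (an illustrative three-edge network, not from the paper) recovers the equivalent resistance between two vertex sets from the dissipated power at unit potential drop.

```python
# Hedged sketch: equivalent resistance between V1={0} and V2={3}, from the
# harmonic potential with f=0 on V1 and f=1 on V2. At unit potential drop
# the dissipated power equals 1/R, so R = 1/power.

# Two parallel branches between 0 and 3: resistances 1+1 (via vertex 1)
# and 2, so R = (2*2)/(2+2) = 1 by the series/parallel rules.
edges = {(0, 1): 1.0, (1, 3): 1.0, (0, 3): 2.0}  # mu_e = edge resistances

f = {0: 0.0, 3: 1.0}
# Harmonicity at the single interior vertex 1 (conductances eta = 1/mu):
eta01, eta13 = 1 / edges[(0, 1)], 1 / edges[(1, 3)]
f[1] = (eta01 * f[0] + eta13 * f[3]) / (eta01 + eta13)

power = sum((f[j] - f[i]) ** 2 / mu for (i, j), mu in edges.items())
print(1.0 / power)  # prints 1.0, the equivalent resistance EL(V1, V2, mu)
```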
Now assume $\emptyset\neq V_{0}\subseteq V_{1}\subseteq V_{2}\subseteq\cdots$ is an
increasing sequence of finite subsets of $V$ with $\cup_{k=0}^{\infty}V_{k}=V$. Then the electric network
$(T,\mu)$ is called _recurrent_ if
$EL(V_{0},V_{n}^{c},\mu)\rightarrow\infty$
as $n\rightarrow\infty$. The recurrence of the network does not depend on the
choice of the $V_{n}$'s. Intuitively, recurrence means that the equivalent
resistance between a finite set and infinity is infinite. Discrete
extremal length is a useful tool to prove the discrete Liouville theorem,
since recurrence implies the discrete Liouville property.
###### Lemma 4.2 (Lemma 5.5 in [He99]).
Assume $(T,\mu)$ is recurrent, and let $\eta_{e}=1/\mu_{e}$ for all $e\in E$.
Then any bounded harmonic function on $(T,\eta)$ is constant.
### 4.2. Proof of the Discrete Liouville Theorem
We need the following lemma for the proof.
###### Lemma 4.3.
Suppose $\phi:|T|\rightarrow\mathbb{R}^{2}$ is a geodesic homeomorphism and
any inner angle in $l(\phi)$ is at least $\epsilon>0$. Let $a\in V$ be a
vertex and assume $\phi(a)=0$. Given $r>0$, denote $V_{r}=\\{i\in
V:|\phi(i)|<r\\}$ and $T_{r}=T(V_{r})$. Then there exists a constant
$C=C(\epsilon)>0$ such that if $\phi(R_{a})\subseteq D_{r}$,
1. (a)
$D_{r/C}\subseteq\phi(|T_{r}|),$ and
2. (b)
as a consequence $|\phi(i)|\geq r/C$ for all $i\in\partial V_{r}$.
###### Proof.
By a standard compactness argument, it is not difficult to show that there
exists a constant $\delta=\delta(\epsilon)>0$ such that for all $\triangle
ijk\in F$,
$d(U_{ijk}^{c},\phi(\triangle ijk))\geq\delta\cdot\text{diam}(\phi(\triangle
ijk))$
where
$U_{ijk}=int(\phi(R_{i}))\cup int(\phi(R_{j}))\cup
int(\phi(R_{k}))\supseteq\phi(\triangle ijk).$
We claim that $C=1+2/\delta$ is a desired constant. Let us prove by
contradiction. Suppose $r>\max\\{|\phi(i)|:ai\in E\\}$ and
$D_{r/C}\not\subseteq\phi(|T_{r}|).$ Then there exists $z\in
D_{r/C}\backslash\phi(|T_{r}|)$. Since $\phi$ is a geodesic homeomorphism,
there exists a triangle $\triangle ijk\in F$ such that $z\in\phi(\triangle
ijk)$. Then $\triangle ijk$ is not a triangle in $T_{r}$ and we may assume
$i\notin V_{r}$. So $|\phi(i)|\geq r$ and $ai\notin E$ and $0=\phi(a)\notin
U_{ijk}$. Then
${r}/{C}\geq|0-z|\geq d(U_{ijk}^{c},\phi(\triangle
ijk))\geq\delta\cdot\text{diam}(\phi(\triangle ijk))$
$\geq\delta\cdot|\phi(i)-z|\geq\delta\cdot(r-r/C)=(r/C)\cdot\delta(C-1)=2r/C$
and we get a contradiction. ∎
###### Proof of Theorem 2.4.
By replacing $\eta$ by $\eta/|\eta|_{\infty}$ we may assume that
$|\eta|_{\infty}=1$. Define $\mu\in\mathbb{R}^{E}$ by
$\mu_{e}=1/\eta_{e}\geq 1$ for all $e\in E$. Then by Lemma 4.2 we only need to
show that $(T,\mu)$ is recurrent. Let
$\mathbf{1}=(1,1,...,1)\in\mathbb{R}^{E}$. Then by the definition (equation
(4.1)) $EL(V_{1},V_{2},\mu)\geq EL(V_{1},V_{2},\mathbf{1})$ whenever well-
defined. So we only need to show that $(T,\mathbf{1})$ is recurrent.
Suppose $a\in V$ is a vertex and without loss of generality we may assume that
$\phi(a)=0$. Let $\epsilon>0$ be the infimum of the inner angles in the PL
metric $l(\phi)$, and $C=C(\epsilon)>1$ be the constant given in Lemma 4.3.
Let $r_{0}=\max\\{|\phi(i)|:ai\in E\\}$ and $r_{n}=(2C)^{n}r_{0}$ and
$V_{n}=\\{i\in V:\phi(i)\in D_{r_{n}}\\}$ for all $n\in\mathbb{Z}_{\geq 0}$.
Clearly $V_{n}$ is an increasing sequence of subsets of $V$ and
$\cup_{n=1}^{\infty}V_{n}=V$. We will prove the recurrence of $(T,\mathbf{1})$
by showing that $EL(V_{0},V_{n}^{c},\mathbf{1})\rightarrow\infty$ as
$n\rightarrow\infty$.
By Lemma 4.3 (b), $|\phi(i)|\geq r_{n}/C=2r_{n-1}$ if $i\in\partial V_{n}\cup
V_{n}^{c}=\overline{V_{n}^{c}}$. So
$V_{n-1}\cap\overline{V_{n}^{c}}=\emptyset$, i.e.,
$V_{n-1}\subseteq(\overline{V_{n}^{c}})^{c}=int(V_{n})$. It is easy to see
(4.2) $E_{0}(V_{n-1},\overline{V_{n}^{c}})\subseteq E(V_{n})\backslash
E(V_{n-1}).$
From the definition of extremal length, we have
$EL(V_{0},V_{n}^{c})\geq
EL(V_{0},\overline{V_{1}^{c}})+EL(V_{1},\overline{V_{2}^{c}})+...+EL(V_{n-1},\overline{V_{n}^{c}})$
since
1. (1)
$E_{0}(V_{0},\overline{V_{1}^{c}}),E_{0}(V_{1},\overline{V_{2}^{c}}),...,E_{0}(V_{n-1},\overline{V_{n}^{c}})$
are disjoint by equation (4.2), and
2. (2)
$Q(V_{0},\overline{V_{1}^{c}}),Q(V_{1},\overline{V_{2}^{c}}),...Q(V_{n-1},\overline{V_{n}^{c}})$
are all subsets of $Q(V_{0},V_{n}^{c})$.
So it suffices to show that for all $n$,
$EL(V_{n-1},\overline{V_{n}^{c}},\mathbf{1})\geq\frac{\sin^{2}\epsilon}{12\pi
C^{2}},$
which by Theorem 4.1 is equivalent to
$EW(V_{n-1},\overline{V_{n}^{c}},\mathbf{1})\leq\frac{12\pi
C^{2}}{\sin^{2}\epsilon}.$
In the remainder of the proof we denote
$E_{0}=E_{0}(V_{n-1},\overline{V_{n}^{c}})$. Pick $w_{e}=l_{e}/r_{n-1}$, and
then for any $p\in P=P(V_{n-1},\overline{V_{n}^{c}})$,
$\sum_{e\in p}w_{e}=\frac{1}{r_{n-1}}\sum_{e\in
p}l_{e}\geq\frac{1}{r_{n-1}}\cdot
d(\phi(V_{n-1}),\phi(\overline{V_{n}^{c}}))\geq\frac{1}{r_{n-1}}\cdot(2r_{n-1}-r_{n-1})=1.$
So
$EW(V_{n-1},\overline{V_{n}^{c}},\mathbf{1})\leq\sum_{e\in
E_{0}}w_{e}^{2}=\frac{1}{r_{n-1}^{2}}\sum_{e\in E_{0}}l_{e}^{2}$
and it remains to show
$\sum_{e\in E_{0}}l_{e}^{2}\leq\frac{12\pi C^{2}}{\sin^{2}\epsilon}\cdot
r_{n-1}^{2}.$
Given $e\in E$, denote $\triangle_{e},\triangle_{e}^{\prime}$ as the two
triangles in $T$ containing $e$. If $e\in E_{0}$, then $e$ contains at least one
vertex in $(\overline{V_{n}^{c}})^{c}=int(V_{n})$ and
$\triangle_{e},\triangle_{e}^{\prime}$ are both triangles in $T_{n}$, i.e.,
$\phi(\triangle_{e}),\phi(\triangle_{e}^{\prime})$ are both in $D_{r_{n}}$.
Given a triangle $\triangle\in F$, we denote $|\triangle|$ as the area of
$\phi(\triangle)$. Then by the sine law
$|\triangle
ijk|=\frac{1}{2}l_{ij}l_{jk}\sin\theta^{j}_{ik}\geq\frac{1}{2}l_{ij}^{2}\cdot\frac{\sin\theta^{i}_{jk}}{\sin\theta^{k}_{ij}}\cdot\sin\theta^{j}_{ik}\geq
l_{ij}^{2}\cdot\frac{\sin^{2}\epsilon}{2}.$
Notice that a triangle $\triangle\in F$ is counted at most $3$ times in
$\sum_{e\in E_{0}}(|\triangle_{e}|+|\triangle_{e}^{\prime}|)$ and then
$\sum_{e\in E_{0}}l_{e}^{2}\leq\frac{1}{\sin^{2}\epsilon}\sum_{e\in
E_{0}}(|\triangle_{e}|+|\triangle_{e}^{\prime}|)\leq\frac{1}{\sin^{2}\epsilon}\sum_{\triangle:\phi(\triangle)\subseteq
D_{r_{n}}}3|\triangle|=\frac{3\pi r_{n}^{2}}{\sin^{2}\epsilon}=\frac{12\pi
C^{2}}{\sin^{2}\epsilon}\cdot r_{n-1}^{2}.$
∎
## 5\. Hyperbolic Maximum Principles and Proof of Lemma 2.9
Given $z_{1},z_{2}\in D$, we denote $d_{h}(z_{1},z_{2})$ as the hyperbolic
distance between $z_{1},z_{2}$ in the Poincaré disk model. The (Euclidean)
discrete conformal change is related to the hyperbolic discrete conformal
change as follows.
###### Lemma 5.1.
Suppose $z_{1},z_{2},z_{1}^{\prime},z_{2}^{\prime}\in D$ and
$u_{1},u_{2},u_{1}^{h},u_{2}^{h}\in\mathbb{R}$ are such that
$u_{i}^{h}=u_{i}+\log\frac{1-|z_{i}|^{2}}{1-|z_{i}^{\prime}|^{2}}$
for $i=1,2$. Then
$|z_{1}^{\prime}-z_{2}^{\prime}|=e^{\frac{1}{2}(u_{1}+u_{2})}|z_{1}-z_{2}|$
if and only if
(5.1)
$\sinh\frac{d_{h}(z_{1}^{\prime},z_{2}^{\prime})}{2}=e^{\frac{1}{2}(u_{1}^{h}+u_{2}^{h})}\sinh\frac{d_{h}(z_{1},z_{2})}{2}.$
###### Remark 5.2.
Equation (5.1) is indeed the formula of the discrete conformal change for
piecewise hyperbolic metrics. This formula was first proposed by Bobenko-
Pinkall-Springborn [BPS15], and $u^{h}_{i}$ in the formula is called the
hyperbolic discrete conformal factor at $i$.
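Lemma 5.1 is also easy to test numerically using the closed form $\sinh\frac{d_{h}(z,w)}{2}=\frac{|z-w|}{\sqrt{(1-|z|^{2})(1-|w|^{2})}}$ established in the Appendix; the following Python sketch (with random illustrative data) performs such a check.

```python
# Hedged numerical check of Lemma 5.1, using the closed form for
# sinh(d_h/2) proved in the Appendix.
import math
import random

def sinh_half_dh(z, w):
    return abs(z - w) / math.sqrt((1 - abs(z) ** 2) * (1 - abs(w) ** 2))

rng = random.Random(0)
rand_pt = lambda: complex(rng.uniform(-0.7, 0.7), rng.uniform(-0.7, 0.7))

for _ in range(1000):
    z1, z2, z1p, z2p = (rand_pt() for _ in range(4))
    # Choose u1, u2 so that the Euclidean relation
    # |z1' - z2'| = e^{(u1+u2)/2} |z1 - z2| holds by construction.
    u1 = rng.uniform(-1, 1)
    u2 = 2 * math.log(abs(z1p - z2p) / abs(z1 - z2)) - u1
    u1h = u1 + math.log((1 - abs(z1) ** 2) / (1 - abs(z1p) ** 2))
    u2h = u2 + math.log((1 - abs(z2) ** 2) / (1 - abs(z2p) ** 2))
    lhs = sinh_half_dh(z1p, z2p)
    rhs = math.exp(0.5 * (u1h + u2h)) * sinh_half_dh(z1, z2)
    assert abs(lhs - rhs) < 1e-9 * max(1.0, lhs)  # (5.1) holds
```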
Lemma 5.1 can be verified by elementary computations; the proof is given in
the Appendix. The hyperbolic discrete conformal factor $u^{h}$ also satisfies a
maximum principle.
###### Lemma 5.3.
Suppose $V_{0}$ is a subset of $V$ and $u\in\mathbb{R}^{V_{0}}$ and
$\phi,\psi$ are Euclidean geodesic embeddings of $T(V_{0})$, such that
$\phi(|T(V_{0})|),\psi(|T(V_{0})|)\subseteq D$ and $l(\phi),l(\psi)$ are both
uniformly acute and $l(\psi)=u*l(\phi)$. For all $i\in V_{0}$, denote
$z_{i}=\phi(i)$ and $z_{i}^{\prime}=\psi(i)$ and
$u_{i}^{h}=u_{i}+\log\frac{1-|z_{i}|^{2}}{1-|z_{i}^{\prime}|^{2}}.$
1. (a)
If $i\in int(V_{0})$ and $u_{i}^{h}<0$, then there exists a neighbor $j$ of
$i$ such that
$u_{j}^{h}<u_{i}^{h}.$
2. (b)
If $u_{i}^{h}\geq 0$ for all $i\in\partial V_{0}$, then $u_{i}^{h}\geq 0$ for
all $i\in V_{0}$.
We first prove Lemma 2.9 using the hyperbolic maximum principle and then prove
Lemma 5.3.
###### Proof of Lemma 2.9.
For any $\triangle ijk\in F$,
$e^{\frac{1}{2}(u_{j}-u_{i})}=\frac{e^{(u_{j}+u_{k})/2}}{e^{(u_{i}+u_{k})/2}}=\frac{l_{jk}(\psi)/l_{jk}(\phi)}{l_{ik}(\psi)/l_{ik}(\phi)}=\frac{l_{jk}(\psi)}{l_{ik}(\psi)}\cdot\frac{l_{ik}(\phi)}{l_{jk}(\phi)}\geq\sin^{2}\epsilon.$
So there exists a constant $C=C(\epsilon)>0$ such that $|u_{j}-u_{i}|\leq 2C$
for all $ij\in E$. We will show that $M(\epsilon)=C(\epsilon)+3$ is a
satisfactory constant. By a scaling, we only need to prove for the special
case where $r^{\prime}=1$ and $r=e^{-C-2}$.
Denote $V_{1}=\{i\in V_{0}:\psi(i)\in D\}$ and $z_{i}=\phi(i)$ and
$z_{i}^{\prime}=\psi(i)$. Define $u^{h}\in\mathbb{R}^{V_{1}}$ as
$u_{i}^{h}=u_{i}+\log\frac{1-|z_{i}|^{2}}{1-|z_{i}^{\prime}|^{2}}$
for all $i\in V_{1}$. Assume $i\in\partial V_{1}$, then there exists $j\in
V_{0}-V_{1}$ such that $ij\in E$. We claim that $u_{i}^{h}\geq 0$, i.e.,
$e^{u_{i}}\cdot\frac{1-|z_{i}|^{2}}{1-|z_{i}^{\prime}|^{2}}\geq 1.$
Notice that
$1-|z_{i}^{\prime}|\leq|z_{i}^{\prime}-z_{j}^{\prime}|=e^{\frac{1}{2}(u_{i}+u_{j})}|z_{i}-z_{j}|\leq
e^{u_{i}+C}\cdot 2r=2e^{-2}e^{u_{i}}.$
So
$e^{u_{i}}\cdot\frac{1-|z_{i}|^{2}}{1-|z_{i}^{\prime}|^{2}}\geq\frac{e^{2}}{2}\cdot\frac{1-|z_{i}|^{2}}{1+|z_{i}^{\prime}|}\geq\frac{e^{2}}{2}\cdot\frac{1-r^{2}}{2}\geq\frac{e^{2}}{2}\cdot\frac{1-(e^{-2})^{2}}{2}>1.$
By the hyperbolic maximum principle Lemma 5.3 (b), $u_{i}^{h}\geq 0$ for all
$i\in V_{1}$. Then for all $i\in V_{0}$ with $|z_{i}^{\prime}|<1/2$,
$u_{i}=u_{i}^{h}-\log\frac{1-|z_{i}|^{2}}{1-|z_{i}^{\prime}|^{2}}\geq-\log\frac{1-|z_{i}|^{2}}{1-|z_{i}^{\prime}|^{2}}\geq\log(1-|z_{i}^{\prime}|^{2})\geq-1=\log(r^{\prime}/r)-M.$
∎
### 5.1. Proof of the Hyperbolic Maximum Principle
For the proof of Lemma 5.3, we need to briefly review the notion of hyperbolic
Delaunay. Given a subcomplex $T_{0}$ of $T$, an embedding
$\phi:|T_{0}|\rightarrow D$ is called a _hyperbolic geodesic embedding_ if
$\phi$ maps each edge of $T_{0}$ to a hyperbolic geodesic arc in
$(D,d_{h})$. Given a triangle $\triangle{ijk}$ in $T_{0}$ and a Euclidean or
hyperbolic geodesic embedding $\phi$ of $T_{0}$, denote
$C_{ijk}=C_{ijk}(\phi)$ as the circumcircle of $\phi(\triangle ijk)$, i.e., a
round circle in the Riemann sphere $\hat{\mathbb{C}}$ passing through the
three vertices of $\phi(\triangle ijk)$. Furthermore, we denote
$D_{ijk}=D_{ijk}(\phi)$ as the circumdisk of $\phi(\triangle ijk)$, i.e., the
closed round disk in $\hat{\mathbb{C}}$ such that $\partial D_{ijk}=C_{ijk}$
and $\phi(\triangle ijk)\subseteq D_{ijk}$. For a Euclidean geodesic embedding
$\phi$ of $T_{0}$, it is well-known that $l(\phi)$ is Delaunay if and only if
that for any pair of adjacent triangles $\triangle ijk,\triangle ijk^{\prime}$
in $T_{0}$,
$\phi(k^{\prime})\notin int(D_{ijk}).$
So here we naturally call a Euclidean or hyperbolic geodesic embedding $\phi$
_Delaunay_ if
$\phi(k^{\prime})\notin int(D_{ijk})$
for any pair of adjacent triangles $\triangle ijk,\triangle ijk^{\prime}$ in
$T_{0}$.
###### Proof of Lemma 5.3 (a).
Assume $i\in int(V_{0})$ and $T_{1}=(V_{1},E_{1},F_{1})$ is the 1-ring
neighborhood of $i$. Then by Lemma 5.4 below there exists a hyperbolic
Delaunay geodesic embedding $\phi_{h}$ (_resp._ $\psi_{h}$) of $T_{1}$ such
that $\phi_{h}(j)=z_{j}$ (_resp._ $\psi_{h}(j)=z_{j}^{\prime}$) for all $j\in
V_{1}$. By Lemma 5.1,
$\sinh\frac{d_{h}(z_{j}^{\prime},z_{k}^{\prime})}{2}=e^{\frac{1}{2}(u_{j}^{h}+u_{k}^{h})}\sinh\frac{d_{h}(z_{j},z_{k})}{2}$
for all $jk\in E_{1}$. Suppose $f_{1},f_{2}:D\rightarrow D$ are hyperbolic
isometries such that $f_{1}(z_{i})=0$ and $f_{2}(z_{i}^{\prime})=0$. Then
$\tilde{\phi}_{h}=f_{1}\circ\phi_{h}$ (_resp._
$\tilde{\psi}_{h}=f_{2}\circ\psi_{h}$) is a hyperbolic Delaunay geodesic
embedding. Denote $\tilde{z}_{j}=\tilde{\phi}_{h}(j)$ (_resp._
$\tilde{z}_{j}^{\prime}=\tilde{\psi}_{h}(j)$) for all $j\in V_{1}$. Then
$\tilde{z}_{i}=\tilde{z}_{i}^{\prime}=0$ and
$\sinh\frac{d_{h}(\tilde{z}_{j}^{\prime},\tilde{z}_{k}^{\prime})}{2}=e^{\frac{1}{2}(u_{j}^{h}+u_{k}^{h})}\sinh\frac{d_{h}(\tilde{z}_{j},\tilde{z}_{k})}{2}$
for all $jk\in E_{1}$. It is not hard to see that there exists a Euclidean
Delaunay geodesic embedding $\tilde{\phi}$ (_resp._ $\tilde{\psi}$) of $T_{1}$
such that $\tilde{\phi}(j)=\tilde{z}_{j}$ (_resp._
$\tilde{\psi}(j)=\tilde{z}_{j}^{\prime}$). By Lemma 5.1
$l(\tilde{\psi})=\tilde{u}*l(\tilde{\phi})$ where
$\tilde{u}_{j}=u_{j}^{h}-\log\frac{1-|\tilde{z}_{j}|^{2}}{1-|\tilde{z}_{j}^{\prime}|^{2}}.$
By the Euclidean maximum principle Lemma 2.7,
$\tilde{u}_{j}\leq\tilde{u}_{i}<0$ for some neighbor $j$ of $i$. Then
$|\tilde{z}_{j}^{\prime}|=l_{ij}(\tilde{\psi})=e^{\frac{1}{2}(\tilde{u}_{i}+\tilde{u}_{j})}l_{ij}(\tilde{\phi})=e^{\frac{1}{2}(\tilde{u}_{i}+\tilde{u}_{j})}|\tilde{z}_{j}|<|\tilde{z}_{j}|$
and
$u_{j}^{h}=\tilde{u}_{j}+\log\frac{1-|\tilde{z}_{j}|^{2}}{1-|\tilde{z}_{j}^{\prime}|^{2}}<\tilde{u}_{j}\leq\tilde{u}_{i}=u_{i}^{h}-\log\frac{1-|\tilde{z}_{i}|^{2}}{1-|\tilde{z}_{i}^{\prime}|^{2}}=u_{i}^{h}.$
∎
###### Proof of Lemma 5.3 (b).
If not, assume $u_{i}^{h}=\min_{j\in V_{0}}u_{j}^{h}<0$; then $i\in
int(V_{0})$. By the minimality of $u_{i}^{h}$, $u_{j}^{h}\geq u_{i}^{h}$ for
any neighbor $j$ of $i$. This contradicts part (a). ∎
###### Lemma 5.4.
Suppose $i\in V$ and $T_{1}=(V_{1},E_{1},F_{1})$ is a 1-ring neighborhood of
$i$. If $\phi$ is a geodesic embedding of $T_{1}$ such that
$\phi(|T_{1}|)\subseteq D$ and $l(\phi)$ is uniformly acute, then there exists
a hyperbolic geodesic embedding $\phi_{h}$ of $T_{1}$ such that
$\phi_{h}(j)=\phi(j)$ for all $j\in V_{1}$. Furthermore, such $\phi_{h}$ is
Delaunay.
###### Proof.
Let $j_{1},j_{2},...,j_{m}$ be the neighbors of $i$ listed counterclockwise in
$\phi(|T_{1}|)$. Denote $z_{0}=\phi(i)$ and $z_{k}=\phi(j_{k})$ for
$k=1,...,m$. If $\gamma(t):[0,1]\rightarrow D$ is a smooth curve such that
$\gamma(0)=z_{0}$, then $\dot{\gamma}(0)$ can be viewed not only as a
complex number but also as a vector in the tangent space $T_{z_{0}}D$ of
$(D,d_{h})$ at $z_{0}$. In this way we naturally identify $T_{z_{0}}D$ with
$\mathbb{C}$.
Given $z\in D$, let $v(z)=\exp_{z_{0}}^{-1}z\in T_{z_{0}}D=\mathbb{C}$ where
$\exp_{z_{0}}:T_{z_{0}}D\rightarrow D$ is the exponential map at $z_{0}$ on
the hyperbolic plane $D$. We first show that $v(z_{1}),...,v(z_{m})$ are
counterclockwise around 0 and wrap around $0$ once. More specifically, we will
show that
(5.2) $\arg\left(\frac{v(z_{k+1})}{v(z_{k})}\right)\in(0,\pi)$
and
(5.3) $\sum_{k=1}^{m}\arg\left(\frac{v(z_{k+1})}{v(z_{k})}\right)=2\pi$
where $z_{m+1}=z_{1}$ and $\arg(z)$ denotes the argument of $z$.
Assume $k\in\{1,...,m\}$. Denote $\gamma$ (_resp._ $\gamma_{h}$) as the
Euclidean straight line in $\mathbb{C}$ (_resp._ the hyperbolic geodesic in $D$)
containing $z_{0},z_{k}$. Then $\gamma$ (_resp._ $\gamma_{h}$) cuts
$\mathbb{C}$ (_resp._ $D$) into two open subsets $P,P^{\prime}$ (_resp._
$P_{h},P^{\prime}_{h}$). We may assume
$P=\\{z\in\mathbb{C}:\arg\left(\frac{z-z_{0}}{z_{k}-z_{0}}\right)\in(0,\pi)\\}$
and
$P_{h}=\\{z\in D:\arg\left(\frac{v(z)}{v(z_{k})}\right)\in(0,\pi)\\}.$
Then $z_{k+1}\in P$. If $\gamma_{h}$ is a straight line, then $P_{h}=P\cap D\ni z_{k+1}$
and we have proved equation (5.2). If $\gamma_{h}$ is a round circular arc
orthogonal to $\{|z|=1\}$, there are two different cases.
Case 1: assume $z_{0},z_{k}$ are counterclockwise on $\gamma_{h}$ (see Figure
1 (A)). If $z_{k+1}\in P\backslash P_{h}$, then $\angle z_{0}z_{k}z_{k+1}>\pi/2$ or
$\angle z_{k}z_{0}z_{k+1}>\pi/2$, contradicting the acuteness
assumption. So $z_{k+1}\in P_{h}$.
Case 2: assume $z_{0},z_{k}$ are clockwise on $\gamma_{h}$ (see Figure 1 (B)).
If $z_{k+1}\in P\backslash P_{h}$, then $\angle z_{0}z_{k+1}z_{k}>\pi/2$,
contradicting the acuteness assumption. So $z_{k+1}\in P_{h}$.
Figure 1. (A) Case 1. (B) Case 2.
So we have proved equation (5.2); now we prove equation (5.3). It is easy to see
that
$\arg\left(\frac{v(z_{k})}{z_{k}-z_{0}}\right)\in(-\frac{\pi}{2},\frac{\pi}{2})$
for all $k=1,...,m$. We claim that
(5.4)
$\arg\left(\frac{v({z_{k+1}})}{v({z_{k})}}\right)+\arg\left(\frac{v(z_{k})}{z_{k}-z_{0}}\right)=\arg\left(\frac{z_{k+1}-z_{0}}{z_{k}-z_{0}}\right)+\arg\left(\frac{v(z_{k+1})}{z_{k+1}-z_{0}}\right).$
Since $LHS$ and $RHS$ are both arguments of
$\frac{v(z_{k+1})}{z_{k}-z_{0}}$ modulo $2\pi$,
we have that
$LHS=RHS+2n\pi$
for some integer $n$. On the other hand $LHS$ and $RHS$ are both bounded in
$(0-\frac{\pi}{2},\pi+\frac{\pi}{2})=(-\frac{\pi}{2},\frac{3\pi}{2}),$
so $LHS=RHS$. Now by adding up equation (5.4) for $k=1,...,m$ we have that
$\sum_{k=1}^{m}\arg\left(\frac{v(z_{k+1})}{v(z_{k})}\right)=\sum_{k=1}^{m}\arg\left(\frac{z_{k+1}-z_{0}}{z_{k}-z_{0}}\right)=2\pi$
since $\phi$ is a geodesic embedding. So we have proved equations (5.2) and (5.3),
and as a consequence there exists a hyperbolic geodesic embedding $\phi_{h}$ of
$T_{1}$ such that $\phi_{h}(j)=z_{j}$ for all $j\in V_{1}$.
By equation (5.2) it is not difficult to see that the two circumdisks
$D_{ij_{k}j_{k+1}}(\phi)$ and $D_{ij_{k}j_{k+1}}(\phi_{h})$ are the same for
$k=1,...,m$. So $\phi_{h}$ is Delaunay since $\phi$ is Delaunay.
∎
## Appendix A Proof of Lemma 5.1
###### Proof of Lemma 5.1.
It suffices to show that for all $z_{1},z_{2}\in D$,
$\sinh\frac{d_{h}(z_{1},z_{2})}{2}=\frac{|z_{1}-z_{2}|}{\sqrt{(1-|z_{1}|^{2})(1-|z_{2}|^{2})}}.$
We first consider a special case where $z_{1}=0$ and $z_{2}=r\in(0,1)$ is
real. Then
$d_{h}(z_{1},z_{2})=\ln\frac{1+r}{1-r}$
and
$\sinh\frac{d_{h}(z_{1},z_{2})}{2}=\frac{1}{2}\sqrt{\frac{1+r}{1-r}}-\frac{1}{2}\sqrt{\frac{1-r}{1+r}}=\frac{r}{\sqrt{1-r^{2}}}=\frac{|z_{1}-z_{2}|}{\sqrt{(1-|z_{1}|^{2})(1-|z_{2}|^{2})}}.$
For general $z_{1},z_{2}\in D$, we can find a hyperbolic isometry
$f(z)=e^{\sqrt{-1}\theta}\frac{z-a}{1-\bar{a}z}$ such that $f(z_{1})=0$ and $f(z_{2})$ is a
positive real number. Since the rotation factor does not change any of the
moduli involved, we only need to verify that
$\frac{|f(z_{1})-f(z_{2})|^{2}}{{(1-|f(z_{1})|^{2})(1-|f(z_{2})|^{2})}}=\frac{|z_{1}-z_{2}|^{2}}{{(1-|z_{1}|^{2})(1-|z_{2}|^{2})}}.$
This equality can be derived from
$\displaystyle\frac{|f(z_{1})-f(z_{2})|^{2}}{{(1-|f(z_{1})|^{2})(1-|f(z_{2})|^{2})}}$
$\displaystyle=$
$\displaystyle\frac{\big{|}(z_{1}-a)(1-\bar{a}z_{2})-(1-\bar{a}z_{1})(z_{2}-a)\big{|}^{2}}{\big{(}|1-\bar{a}z_{1}|^{2}-|z_{1}-a|^{2}\big{)}\cdot\big{(}|1-\bar{a}z_{2}|^{2}-|z_{2}-a|^{2}\big{)}}$
$\displaystyle=$
$\displaystyle\frac{\big{|}(1-a\bar{a})(z_{1}-z_{2})\big{|}^{2}}{\big{(}|1-\bar{a}z_{1}|^{2}-|z_{1}-a|^{2}\big{)}\cdot\big{(}|1-\bar{a}z_{2}|^{2}-|z_{2}-a|^{2}\big{)}}$
and
$\displaystyle|1-\bar{a}z_{1}|^{2}-|z_{1}-a|^{2}=(1-\bar{a}z_{1})(1-a\bar{z}_{1})-(z_{1}-a)(\bar{z}_{1}-\bar{a})$
$\displaystyle=$
$\displaystyle(1-a\bar{a})(1-z_{1}\bar{z}_{1})=(1-a\bar{a})(1-|z_{1}|^{2}).$
and similarly
$|1-\bar{a}z_{2}|^{2}-|z_{2}-a|^{2}=(1-a\bar{a})(1-|z_{2}|^{2}).$
∎
## References
* [Ahl10] Lars Valerian Ahlfors. Conformal invariants: topics in geometric function theory, volume 371. American Mathematical Soc., 2010.
* [BPS15] Alexander I Bobenko, Ulrich Pinkall, and Boris A Springborn. Discrete conformal maps and ideal hyperbolic polyhedra. Geometry & Topology, 19(4):2155–2215, 2015.
* [CCS+21] Marcel Campen, Ryan Capouellez, Hanxiao Shen, Leyi Zhu, Daniele Panozzo, and Denis Zorin. Efficient and robust discrete conformal equivalence with boundary. ACM Transactions on Graphics (TOG), 40(6):1–16, 2021.
* [DGM22] Song Dai, Huabin Ge, and Shiguang Ma. Rigidity of the hexagonal Delaunay triangulated plane. Peking Mathematical Journal, 5(1):1–20, 2022.
* [Duf62] RJ Duffin. The extremal length of a network. Journal of Mathematical Analysis and Applications, 5(2):200–215, 1962.
* [FLZ20] Ke Feng, Aijin Lin, and Xiaoxiao Zhang. Combinatorial $p$-th Calabi flows for discrete conformal factors on surfaces. The Journal of Geometric Analysis, 30(4):3979–3994, 2020.
* [GGL+18] Xianfeng Gu, Ren Guo, Feng Luo, Jian Sun, and Tianqi Wu. A discrete uniformization theorem for polyhedral surfaces II. Journal of Differential Geometry, 109(3):431–466, 2018.
* [GH18] Huabin Ge and Bobo Hua. On combinatorial Calabi flow with hyperbolic circle patterns. Advances in Mathematics, 333:523–538, 2018.
* [GLSW18] Xianfeng David Gu, Feng Luo, Jian Sun, and Tianqi Wu. A discrete uniformization theorem for polyhedral surfaces. Journal of Differential Geometry, 109(2):223–256, 2018.
* [GLW19] David Gu, Feng Luo, and Tianqi Wu. Convergence of discrete conformal geometry and computation of uniformization maps. Asian Journal of Mathematics, 23(1):21–34, 2019.
* [GSC21] Mark Gillespie, Boris Springborn, and Keenan Crane. Discrete conformal equivalence of polyhedral surfaces. ACM Transactions on Graphics (TOG), 40(4):1–20, 2021.
* [He99] Zheng-Xu He. Rigidity of infinite disk patterns. Annals of Mathematics, pages 1–33, 1999.
* [LSW20] Feng Luo, Jian Sun, and Tianqi Wu. Discrete conformal geometry of polyhedral surfaces and its convergence. arXiv preprint arXiv:2009.12706, 2020.
* [Luo04] Feng Luo. Combinatorial Yamabe flow on surfaces. Communications in Contemporary Mathematics, 6(05):765–780, 2004.
* [Luo22] Yanwen Luo. Spaces of geodesic triangulations of surfaces. Discrete & Computational Geometry, pages 1–19, 2022.
* [LV73] Olli Lehto and Kaarlo Ilmari Virtanen. Quasiconformal mappings in the plane, volume 126. Springer, 1973.
* [LW19] Feng Luo and Tianqi Wu. Koebe conjecture and the Weyl problem for convex surfaces in hyperbolic 3-space. arXiv preprint arXiv:1910.08001, 2019.
* [LWZ21a] Yanwen Luo, Tianqi Wu, and Xiaoping Zhu. The convergence of discrete uniformizations for genus zero surfaces. arXiv preprint arXiv:2110.08208, 2021.
* [LWZ21b] Yanwen Luo, Tianqi Wu, and Xiaoping Zhu. The deformation space of geodesic triangulations and generalized Tutte's embedding theorem. arXiv preprint arXiv:2105.00612, 2021.
* [LWZ21c] Yanwen Luo, Tianqi Wu, and Xiaoping Zhu. The deformation spaces of geodesic triangulations of flat tori. arXiv preprint arXiv:2107.05159, 2021.
* [LWZ22] Yanwen Luo, Tianqi Wu, and Xiaoping Zhu. The deformation space of delaunay triangulations of the sphere. arXiv preprint arXiv:2202.06402, 2022.
* [Spr19] Boris Springborn. Ideal hyperbolic polyhedra and discrete uniformization. Discrete & Computational Geometry, pages 1–46, 2019.
* [SWGL15] Jian Sun, Tianqi Wu, Xianfeng Gu, and Feng Luo. Discrete conformal deformation: algorithm and experiments. SIAM Journal on Imaging Sciences, 8(3):1421–1456, 2015.
* [WGS15] Tianqi Wu, Xianfeng Gu, and Jian Sun. Rigidity of infinite hexagonal triangulation of the plane. Transactions of the American Mathematical Society, 367(9):6539–6555, 2015.
* [Wu14] Tianqi Wu. Finiteness of switches in discrete Yamabe flow. Master's thesis, Tsinghua University, Beijing, 2014.
* [WX21] Tianqi Wu and Xu Xu. Fractional combinatorial calabi flow on surfaces. arXiv preprint arXiv:2107.14102, 2021.
* [WZ20] Tianqi Wu and Xiaoping Zhu. The convergence of discrete uniformizations for closed surfaces, 2020.
* [ZGZ+14] Min Zhang, Ren Guo, Wei Zeng, Feng Luo, Shing-Tung Yau, and Xianfeng Gu. The unified discrete surface Ricci flow. Graphical Models, 76(5):321–339, 2014.
* [ZX19] Xiang Zhu and Xu Xu. Combinatorial Calabi flow with surgery on surfaces. Calculus of Variations and Partial Differential Equations, 58(6):1–20, 2019.
# Quantum revivals in HgTe/CdTe quantum wells and topological phase
transitions
Alberto Mayorgas<EMAIL_ADDRESS>Department of Applied Mathematics,
University of Granada, Fuentenueva s/n, 18071 Granada, Spain Manuel Calixto
Department of Applied Mathematics, University of Granada, Fuentenueva s/n,
18071 Granada, Spain Institute Carlos I for Theoretical and Computational
Physics (iC1), Fuentenueva s/n, 18071 Granada, Spain Nicolás A. Cordero
Department of Physics, University of Burgos, 09001 Burgos, Spain
International Research Center in Critical Raw Materials for Advanced
Industrial Technologies (ICCRAM), University of Burgos, 09001 Burgos, Spain
Institute Carlos I for Theoretical and Computational Physics (iC1),
Fuentenueva s/n, 18071 Granada, Spain Elvira Romera Department of Atomic,
Molecular and Nuclear Physics, University of Granada, Fuentenueva s/n, 18071
Granada, Spain Institute Carlos I for Theoretical and Computational Physics
(iC1), Fuentenueva s/n, 18071 Granada, Spain Octavio Castaños Institute of
Nuclear Sciences, National Autonomous University of Mexico, Apdo. Postal
70-543, 04510, CDMX, Mexico
###### Abstract
The time evolution of a wave packet is a tool to detect topological phase
transitions in two-dimensional Dirac materials, such as graphene and silicene.
Here we extend the analysis to HgTe/CdTe quantum wells and study the evolution
of their electron current wave packet, using 2D effective Dirac Hamiltonians
and different layer thicknesses. We show that the two different periodicities
that appear in this temporal evolution reach a minimum near the critical
thickness, where the system goes from the normal to the inverted regime. Moreover,
the maximum of the electron current amplitude changes with the layer thickness,
and the current maxima reach their highest value at the critical thickness.
Thus, we can characterize the topological phase transitions in terms of the
periodicity and amplitude of the electron currents.
###### pacs:
03.65.Vf, 03.65.Pm
## I Introduction
The time evolution of wave packets can have interesting behaviors due to
quantum interference. Revivals occur when a well-localized wave-packet evolves
in time to recover, at least approximately, its initial waveform. This event
occurs periodically and the period is known as the revival period. The
phenomenon of quantum wave packet revivals has been investigated theoretically
in atomic systems, molecules, many-body systems, and 2D materials [1, 2, 3, 4,
5, 6, 7, 8, 9, 10, 11], and observed experimentally in, among others, Rydberg
atoms and molecular systems [12, 13, 14, 15, 16]. Recently, it has been shown
how revival and classical periods reveal quantum phase transitions in many-
body systems [5, 6]. Furthermore, it has also been seen how both periods are
capable of detecting topological phase transitions (TPTs for short) in two-
dimensional materials such as graphene [4] and silicene [7].
In this work, we focus on a particular zincblende heterostructure, the mercury
telluride-cadmium telluride (HgTe/CdTe) quantum wells (QWs). They have been
widely used to study the quantum spin Hall effect and new types of topological
phases [17, 18, 19, 20], and traditionally are part of optical and transport
experiments involving spin-related observations [21, 22, 23]. At present,
HgTe/CdTe QWs appear together with other topological insulators to construct
low-dimensional quantum devices, which can experimentally realize quantum
anomalous Hall effects [24, 25, 26, 27, 28]. One of the most interesting
properties of these materials is that we can switch between normal or inverted
band structures by simply changing the QW width (layer thickness in our
jargon). In particular, we study the time evolution of electron current wave
packets in HgTe/CdTe QWs in magnetic fields, for different values of the HgTe
layer thickness to characterize TPTs. We analyze the periodicities in the
dynamics of the wave packets and the amplitude of the electron currents. There
are other ways to detect topological-band insulator phase transitions, such as
information or entropic measures [29, 30, 31, 32, 33], or magneto-optical
properties [34, 35, 36, 37, 38, 39]. In contrast to these methods, quantum
revivals provide a straightforward approach to TPTs that has not been applied
to HgTe/CdTe QWs so far.
This paper is organized as follows. In the next section we will describe the
2D effective Dirac Hamiltonian for surface states in HgTe/CdTe QWs. In the
third section we will study the relation of wave-packet revival and
classical periodicities to the topological phase transition. The relation
between the evolution of the electron currents and the TPTs will be described
in Section IV. Finally, we will present some conclusions.
## II $\text{HgTe}/\text{CdTe}$ quantum wells low-energy Hamiltonian
We shall use a 2D effective Dirac Hamiltonian to describe the surface states
in HgTe/CdTe QWs, following the prescription of the references [17, 18, 19,
20],
$H=\left(\begin{array}{cc}H_{+1}&0\\ 0&H_{-1}\end{array}\right),\quad H_{s}(\bm{k})=\epsilon_{0}(\bm{k})\tau_{0}+\bm{d}_{s}(\bm{k})\cdot\bm{\tau},$
(1)
where $\tau_{i}$ are the Pauli matrices, $s=\pm 1$ is the spin, and
$H_{-1}(\bm{k})=H_{+1}^{*}(-\bm{k})$ (its time-reversed partner). It is convenient
to expand the Hamiltonian $H_{s}(\bm{k})$ around the center $\Gamma$ of the
first Brillouin zone [18],
$\epsilon_{0}(\bm{k})=\gamma-\delta\bm{k}^{2},\quad\bm{d}_{s}(\bm{k})=(\alpha
sk_{x},\alpha k_{y},\mu-\beta\bm{k}^{2}),$ (2)
where $\alpha,\beta,\gamma,\delta$ and $\mu$ are expansion parameters that
depend on the HgTe layer thickness $\lambda$, as can be found in [40] and in
Table 1. Among all these parameters, we highlight the mass or gap term $\mu$
related to the magnetic moment, and the Wilson term $\beta\bm{k}^{2}$
(introduced to avoid the Fermion doubling problem [41]).
$\lambda$ (nm) | $\alpha$ (meV$\cdot$nm) | $\beta$ (meV$\cdot$nm${}^{2}$) | $\delta$ (meV$\cdot$nm${}^{2}$) | $\mu$ (meV)
---|---|---|---|---
5.5 | 387 | $-480$ | $-306$ | 9
6.1 | 378 | $-553$ | $-378$ | $-0.15$
7.0 | 365 | $-686$ | $-512$ | $-10$
Table 1: Values of the HgTe/CdTe QW expansion parameters for different layer
thicknesses $\lambda$ [40].
For $s=\pm 1$, the energy of the valence and conduction bands is
$\epsilon_{\pm}(\bm{k})=\epsilon_{0}(\bm{k})\pm\sqrt{\alpha^{2}\bm{k}^{2}+(\mu-\beta\bm{k}^{2})^{2}}.$
(3)
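As a numerical illustration (ours, not from the paper; $\gamma$ is not listed in Table 1, so we set it to zero), the following Python sketch evaluates the bulk bands (3) for the $\lambda=7.0$ nm parameters.

```python
# Hedged sketch: bulk bands (3) at B=0 for the lambda = 7.0 nm row of
# Table 1; gamma is an assumption (set to 0) since Table 1 omits it.
import numpy as np

alpha, beta, delta, gamma, mu = 365.0, -686.0, -512.0, 0.0, -10.0
k = np.linspace(-0.3, 0.3, 7)  # k_x in nm^-1, with k_y = 0
k2 = k**2
eps0 = gamma - delta * k2
gap = np.sqrt(alpha**2 * k2 + (mu - beta * k2) ** 2)
print(eps0 + gap)  # conduction band epsilon_+(k), in meV
print(eps0 - gap)  # valence band epsilon_-(k), in meV
```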
To differentiate between band insulator and topological insulator phases, one
can use the Thouless-Kohmoto-Nightingale-den Nijs (TKNN) formula [42], which
provides the Chern-Pontryagin number $\mathcal{C}$. In the case of the HgTe QWs (see
[39] for more details),
$\mathcal{C}_{s}=s[\mathrm{sign}(\mu)+\mathrm{sign}(\beta)].$ (4)
The Chern number depends on the sign of the material parameters $\mu$ and
$\beta$, and on the spin $s$. Considering Table 1, only $\mu$ changes sign for
different layer thicknesses $\lambda$, thus, the TPT is governed by
$\mathrm{sign}(\mu)$, or by $\mathrm{sign}(\mu/\beta)$ as can be found in the
literature [40]. Namely, around the critical HgTe layer thickness
$\lambda_{\textrm{c}}\approx 6.1$ nm, the system goes from normal
($\lambda<\lambda_{\textrm{c}}$ or $\mu/\beta<0$) to the inverted
($\lambda>\lambda_{\textrm{c}}$ or $\mu/\beta>0$) regimes.
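As a quick worked example (ours), the following Python snippet evaluates formula (4) for the Table 1 parameters and confirms that only the sign of $\mu$ switches across the transition.

```python
# Hedged sketch: evaluate the Chern number formula (4) for Table 1.
import math

params = {5.5: {"mu": 9.0, "beta": -480.0},
          6.1: {"mu": -0.15, "beta": -553.0},
          7.0: {"mu": -10.0, "beta": -686.0}}

sign = lambda x: math.copysign(1.0, x)
for lam, p in params.items():
    for s in (+1, -1):
        C = s * (sign(p["mu"]) + sign(p["beta"]))
        print(f"lambda={lam} nm, s={s:+d}: C_s={C:+.0f}")
# lambda=5.5 nm gives C_s=0 (normal); 6.1 and 7.0 nm give C_s=-2s (inverted).
```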
We introduce the interaction with a perpendicular magnetic field $B$ along the
$z$-axis using minimal coupling $\bm{p}\to\bm{P}=\bm{p}+e\bm{A}$, where
$\bm{A}=(A_{x},A_{y})=(-By,0)$ is the electromagnetic potential in the Landau
gauge, $e$ the electron charge, and $\bm{p}$ the momentum operator
($\bm{k}\to\bm{p}/\hbar$). Using Peierls' substitution [43, 44], the
Hamiltonian (1) is written in terms of creation $a^{\dagger}$ and annihilation
$a$ operators [39],
$\displaystyle H_{+1}=\left(\begin{array}{cc}\gamma+\mu-\frac{(\delta+\beta)(2N+1)}{\ell_{B}^{2}}&\frac{\sqrt{2}\alpha}{\ell_{B}}a\\ \frac{\sqrt{2}\alpha}{\ell_{B}}a^{\dagger}&\gamma-\mu-\frac{(\delta-\beta)(2N+1)}{\ell_{B}^{2}}\end{array}\right),$ (7)
$\displaystyle H_{-1}=\left(\begin{array}{cc}\gamma+\mu-\frac{(\delta+\beta)(2N+1)}{\ell_{B}^{2}}&-\frac{\sqrt{2}\alpha}{\ell_{B}}a^{\dagger}\\ -\frac{\sqrt{2}\alpha}{\ell_{B}}a&\gamma-\mu-\frac{(\delta-\beta)(2N+1)}{\ell_{B}^{2}}\end{array}\right),$ (10)
with $N=a^{\dagger}a$ and $\ell_{B}=\sqrt{\hbar/(eB)}$ the magnetic length.
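As a consistency check (ours; we set $\gamma=0$ since Table 1 does not list it, and use $\ell_{B}^{2}=\hbar/(eB)\approx 658.2/B$ nm${}^{2}$), the following Python sketch diagonalizes one $2\times 2$ block of $H_{+1}$, spanned by the Fock pair $(|n-1\rangle,|n\rangle)$, and compares the result with the closed-form Landau levels of eq. (12).

```python
# Hedged numerical check of eq. (12): for s=+1 the Hamiltonian (7)
# decouples into 2x2 blocks on the Fock pair (|n-1>, |n>).
import numpy as np

hbar_over_e = 658.212  # meV*nm^2/T, so l_B^2 = 658.212/B in nm^2
alpha, beta, delta, gamma, mu = 365.0, -686.0, -512.0, 0.0, -10.0  # 7.0 nm
B, n = 0.05, 3
lB2 = hbar_over_e / B

H = np.array([
    [gamma + mu - (delta + beta) * (2 * (n - 1) + 1) / lB2,
     np.sqrt(2 * n) * alpha / np.sqrt(lB2)],
    [np.sqrt(2 * n) * alpha / np.sqrt(lB2),
     gamma - mu - (delta - beta) * (2 * n + 1) / lB2],
])

Delta = np.sqrt(2 * alpha**2 * n / lB2
                + (mu - (2 * beta * n - delta) / lB2) ** 2)
E_analytic = gamma - (2 * delta * n - beta) / lB2 + np.array([+1, -1]) * Delta
print(np.sort(np.linalg.eigvalsh(H)), np.sort(E_analytic))  # should agree
```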
Figure 1: HgTe/CdTe quantum well low-energy spectrum $E_{n}^{s}$ for $B=0.05$
T, as a function of the HgTe layer thickness $\lambda$. The thin solid lines
represent Landau levels $n=\pm 1,\pm 2,\pm 3$ (valence $(-)$ and conduction
$(+)$) for spin $s=-1$ (blue) and $s=+1$ (red), and the thick lines represent
edge states ($n=0$). A vertical dashed black line indicates the HgTe thickness
$\lambda_{\mathrm{inv}}(0.05)=6.173$ nm $\simeq\lambda_{\textrm{c}}$ where the
band inversion for edge states occurs for $B=0.05$ T according to (28).
The eigenvalues of both Hamiltonians $H_{+1}$ and $H_{-1}$ are
$E_{n}^{s}=\gamma-\tfrac{2\delta|n|-s\beta}{\ell_{B}^{2}}+\mathrm{sgn}(n)\Delta_{n}^{s}$
(12)
with
$\Delta_{n}^{s}=\sqrt{\tfrac{2\alpha^{2}|n|}{\ell_{B}^{2}}+\left(\mu-\tfrac{2\beta|n|-s\delta}{\ell_{B}^{2}}\right)^{2}}\,,$
(13)
for Landau level (LL) index $n=\pm 1,\pm 2,\pm 3,\dots$ [valence $(-)$ and
conduction $(+)$], and
$E_{0}^{s}=\gamma-s\mu-\frac{\delta-s\beta}{\ell_{B}^{2}}\,,$ (14)
for the edge states $n=0$ [45, 46, 34]. The associated eigenvectors are
spinors containing Fock states $||n|\rangle$, that is,
$|\bm{n}\rangle_{s}=\left(\begin{array}{l}A_{n}^{s}\left||n|-\frac{s+1}{2}\right\rangle\\ B_{n}^{s}\left||n|+\frac{s-1}{2}\right\rangle\end{array}\right),$ (15)
with coefficients
$\displaystyle A_{n}^{s}=\left\{\begin{array}{ll}\frac{\mathrm{sgn}(n)}{\sqrt{2}}\sqrt{1+\mathrm{sgn}(n)\cos\theta_{n}^{s}},&n\neq 0,\\ (1-s)/2,&n=0,\end{array}\right.$ (18)
$\displaystyle B_{n}^{s}=\left\{\begin{array}{ll}\frac{s}{\sqrt{2}}\sqrt{1-\mathrm{sgn}(n)\cos\theta_{n}^{s}},&n\neq 0,\\ (1+s)/2,&n=0,\end{array}\right.$ (23)
where
$\theta_{n}^{s}=\arctan\left(\frac{\sqrt{2|n|}\,\alpha/\ell_{B}}{\mu-\frac{2\beta|n|-s\delta}{\ell_{B}^{2}}}\right).$
(24)
Depending on $\mathrm{sgn}(n)$, the coefficients $A_{n}^{s}$ and $B_{n}^{s}$
can be written as $\cos(\theta_{n}^{s}/2)$ and $\sin(\theta_{n}^{s}/2)$ [47].
The two zero Landau levels $E_{0}^{+1}$ and $E_{0}^{-1}$ belong to different
Hamiltonians, that is, to spin $s=+1$ and $s=-1$, respectively. The level-crossing
condition
$E_{0}^{+1}=E_{0}^{-1}\Rightarrow B_{\text{inv}}=\frac{\hbar\mu}{e\beta},$
(25)
gives the critical magnetic field $B_{\text{inv}}$ which separates the quantum
spin Hall and quantum Hall regimes [46]. For instance, for a QW thickness
$\lambda=7.0$ nm (see Table 1), one obtains $B_{\mathrm{inv}}\simeq 9.60$ T.
This band inversion is also graphically represented in Figure 1.
It is convenient to perform a linear fit of the parameters in Table 1, in
order to analyze the HgTe QW spectrum and properties for a continuous range
of thicknesses $\lambda$,
$\displaystyle\mu(\lambda)=77.31-12.53\lambda,\quad\alpha(\lambda)=467.49-14.65\lambda,$
$\displaystyle\beta(\lambda)=283.58-138.16\lambda,\quad\delta(\lambda)=458.46-138.25\lambda,$ (26)
where we use the units of Table 1 and $\lambda$ is in nanometers. The coefficient
of determination is $R^{2}>0.99$ in all cases. Using $\mu(\lambda)$ in (26),
we can estimate the critical HgTe thickness where the TPT occurs in the
absence of a magnetic field, according to the criterion in eq. (4),
$\mu=0\Rightarrow\lambda_{\textrm{c}}=6.17~\mathrm{nm}.$ (27)
In addition, the linear fit (26) lets us plot the low-energy spectra (12,14) as
a function of the HgTe layer thickness $\lambda$. Namely, in Figure 1, we
extrapolate the linear fit (26) to the interval [4 nm, 8 nm]. The band
inversion formula (25) together with the linear fit (26) yields the relation
$\lambda_{\mathrm{inv}}(B)=\frac{368.31-2.05B}{59.7-B}$ (28)
between the applied magnetic field $B$ (in Tesla) and the HgTe layer thickness
$\lambda_{\mathrm{inv}}(B)$ (in nanometers) at which the band inversion
$E_{0}^{+1}=E_{0}^{-1}$ takes place. Note that
$\lambda_{\mathrm{inv}}(B)\simeq\lambda_{\textrm{c}}=6.17$ nm for low fields $B\ll 1$
T, and that $E_{0}^{+1}=E_{0}^{-1}\simeq 0$ meV at this point, as Figure 1
shows. The thickness $\lambda_{\mathrm{inv}}(B)$, at which the band inversion
happens, deviates from the critical thickness $\lambda_{\textrm{c}}$ of
eq. (27) for $B>0$, so it provides a way to characterize TPTs when the external
magnetic field is nonzero.
Higher Landau levels with $|n|>0$ undergo a structural change when the spinor
components in eqs. (18) and (23) have equal modulus, $|A_{n}^{s}|=|B_{n}^{s}|$,
that is, when the angle (24) is $\theta_{n}^{s}=\pi/2$, which implies
$\mu=(2\beta|n|-s\delta)/\ell_{B}^{2}$. The valence and conduction band
contributions interchange their roles at this point, hence this is a way to
define a band inversion for higher Landau levels; equivalently, we can
introduce the concept of a higher Landau level topological phase transition (HTPT,
see [47] for more details). The condition $\theta_{n}^{s}=\pi/2$ fixes a
relationship between the layer thickness and the magnetic field as it happens
in eq.(28),
$\lambda_{\textrm{HTPT}}(B,n,s)=\frac{77.31-0.86B|n|+0.7Bs}{12.53-0.42B|n|+0.21Bs}\,\quad
n\neq 0\,.$ (29)
This layer thickness is larger than $\lambda_{\text{inv}}(B)$ in eq.(28) for
all $B,\>n$ and $s$, and for low magnetic fields tends to
$\lambda_{\textrm{HTPT}}(B\ll 1,n,s)\simeq\lambda_{\mathrm{c}}$. In Ref.[47] it
has been shown that quantum fluctuations and entanglement in higher Landau
levels grow at the layer thickness $\lambda_{\textrm{HTPT}}$. The scope of the
next section is to use the periodicities in the wave packet evolution as TPT
and HTPT markers, and to compare the critical thicknesses obtained by this method
with the ones in eq.(29).
## III Classical and revival times in the detection of topological phase transitions
Figure 2: Classical period $T_{\text{Cl}}$ (top, eq.(33)) and revival time
$T_{\text{R}}$ (bottom, eq.(34)) as a function of the layer thickness
$\lambda$, for three different initial wave packets $n_{0}=5,10,15$. In both
panels, we set $B=0.05$ T and $s=+1$, and use a log-lin scale. The vertical dashed
line indicates the critical thickness $\lambda_{\textrm{c}}~{}=~{}6.17~{}$nm.
Figure 3: Layer thicknesses $\lambda_{\textrm{Cl}}(B,n)$ and
$\lambda_{\textrm{R}}(B,n)$ at which $T_{\textrm{Cl}}$ and $T_{\textrm{R}}$
achieve their minimum value, respectively, as a function of the external
magnetic field $B$ and for different initial wave packets $n_{0}\in[0,30]$. In
both panels, we set $s=+1$, and a horizontal dashed line indicates the
critical thickness $\lambda_{\textrm{c}}=6.17~{}$nm.
Figure 4: Layer thickness $\lambda_{\textrm{Cl}}(B,n)$ at which $T_{\textrm{Cl}}$
achieves its minimum value (solid lines) and layer thickness of the HTPT (dashed
lines, see eq.(29)), as functions of the external magnetic field $B$ and for
different initial wave packets $n_{0}\in[1,10]$. In both panels, we set $s=+1$
and we mark the critical thickness $\lambda_{\textrm{c}}=6.17~{}$nm on the
vertical axis.
The time evolution of a wave packet for the time-independent Hamiltonian of
the HgTe QW is given by
$|\Psi(t)\rangle_{s}=\sum_{n=-\infty}^{\infty}c_{n}^{s}|\bm{n}\rangle_{s}e^{-iE_{n}^{s}t/\hbar}\,,$
(30)
where $|n\rangle_{s}$ are the eigenvectors in (15), $E_{n}^{s}$ the energies
in (12), and $c_{n}^{s}={}_{s}\langle n|\Psi(0)\rangle$ with $|\Psi(0)\rangle$
the initial wave packet. For the sake of simplicity, we shall take $s$ fixed,
and $|\Psi(t)\rangle_{s}$, $E_{n}^{s}$, $c_{n}^{s}$ and
$\lambda_{\text{HTPT}}(B,n,s)$ will be referred to as $|\Psi(t)\rangle$,
$E_{n}$, $c_{n}$ and $\lambda_{\text{HTPT}}(B,n)$. We also select a Gaussian-
like initial wave packet, distributed around a given energy $E_{n_{0}}$ of the
spectrum $E_{n}$, so that
$c_{n}=\frac{1}{\sigma\sqrt{2\pi}}e^{-(n-n_{0})^{2}/2\sigma^{2}}\,,$ (31)
and we can Taylor expand the energy $E_{n}$ around the energy level $n_{0}$
[3]. Therefore, the exponential $\exp(-iE_{n}^{s}t/\hbar)$ in (30) yields
$\displaystyle\exp\left(-iE_{n_{0}}\frac{t}{\hbar}-2\pi
i(n-n_{0})\frac{t}{T_{\text{Cl}}}-2\pi
i(n-n_{0})^{2}\frac{t}{T_{\text{R}}}+\ldots\right)\,$ (32)
obtaining different time scales characterized by the classical period
$T_{\text{Cl}}=2\pi\hbar/|E^{\prime}_{n_{0}}|$ and the revival time
$T_{\text{R}}=4\pi\hbar/|E^{\prime\prime}_{n_{0}}|$ up to second order in the
series (the first term $\exp(-iE_{n_{0}}t/\hbar)$ becomes an irrelevant global
phase in eq.(30)). In fact, the classical period is the time that the wave
packet needs to follow the expected semiclassical trajectory, and the revival
time is the time that the wave packet needs to return approximately to its
initial shape [3]. Quantum revivals are a consequence of the quantum beats
[48], representing interference effects of the terms in (30). Notice that
$T_{\text{Cl}}\ll T_{\text{R}}$, and thus, a signal can be analyzed in these
different regimes. Both periods have been previously studied in 2D gapped
Dirac materials [4], and now we shall put the spotlight on HgTe QWs. In
particular, for the energies in (12), the classical and revival periods are
$T_{\text{Cl}}=2\pi\hbar\ell_{B}^{2}\left[-2\,\mathrm{sgn}(n_{0})\delta+\tfrac{1}{\Delta_{n_{0}}^{s}}(\alpha^{2}-2\beta\chi_{n_{0}}^{s})\right]^{-1}\,,$ (33)
$T_{\text{R}}=4\pi\hbar\ell_{B}^{4}\,\mathrm{sgn}(n_{0})\left[-\tfrac{4\beta^{2}}{\Delta_{n_{0}}^{s}}+\tfrac{1}{(\Delta_{n_{0}}^{s})^{3}}(\alpha^{2}-2\beta\chi_{n_{0}}^{s})^{2}\right]^{-1}\,,$ (34)
where $\chi_{n}^{s}=\mu-(2\beta|n|-s\delta)/\ell_{B}^{2}$ is the denominator
in (24) and $\Delta_{n}^{s}$ is defined in eq.(13). Both periods are functions
of the magnetic field $B$, the wave packet center $n_{0}$, the spin (fixed to
$s=+1$ in this section), and the layer thickness $\lambda$ through the fit
parameters in eq.(26).
The time evolution of the wave packets is visualized with the autocorrelation
function $A(t)=\langle\Psi(0)|\Psi(t)\rangle$, which turns into
$A(t)=\sum_{n=-\infty}^{\infty}|c_{n}|^{2}e^{-iE_{n}t/\hbar}$ (35)
for a wave packet with the Gaussian coefficients in eq.(31).
Figure 5: Autocorrelation function amplitude $|A(t)|^{2}$ as a function of
time in $T_{\text{Cl}}=3.75$ ps (top) and $T_{\text{R}}=79.80$ ps (bottom)
units, for an initial wave packet with $n_{0}=5$ and
$\sigma=\sqrt{n_{0}}/5\simeq 0.45$. We set the HgTe parameters
$\lambda=\lambda_{\textrm{c}}$, $B=0.05$ T, and $s=+1$.
Throughout the article, we have selected an initial wave packet localized
around $n_{0}=5$ and with standard deviation $\sigma=\sqrt{n_{0}}/5\simeq
0.45$ in order to analyze the wave packet evolution in (30). We have also set
an external magnetic field of $B=0.05$ T in order to observe TPT phenomena
around $\lambda_{\text{inv}}(B)\simeq\lambda_{\textrm{c}}=6.17$ nm [39]. The
reason for this last choice will become more evident later, when we present Figures 3 and 9.
In Figure 2, we plot $T_{\text{Cl}}$ and $T_{\text{R}}$ as a function of the
layer thickness $\lambda$ for spin $s=+1$. Both periods reach a minimum near
the critical thickness $\lambda_{\textrm{c}}=6.17$ nm, hence they are useful
magnitudes to identify TPTs. These minima move away from $\lambda_{\textrm{c}}$ for
larger values of the magnetic field $B$, as Figure 3 shows, where we present
the values of the thickness $\lambda_{\textrm{Cl}}(B,n_{0})$ and
$\lambda_{\textrm{R}}(B,n_{0})$ in which $T_{\textrm{Cl}}$ and
$T_{\textrm{R}}$ achieve their minimum value respectively, as a function of
the external magnetic field $B$ and the center of the wave packet $n_{0}$
(spin $s=+1$ fixed). For instance, for $n_{0}=5$ we obtain the bounds
$|\lambda_{\textrm{Cl}}(B,n_{0})-\lambda_{\textrm{c}}|<0.14$ nm and
$|\lambda_{\textrm{R}}(B,n_{0})-\lambda_{\textrm{c}}|<0.25$ nm in a magnetic
field range $B\leq 0.5$ T. For wave packets centered in high-energy states
(blue lines in Figure 3), the deviation from $\lambda_{\textrm{c}}$ is even
larger, representing a criticality of the system which differs from the band-
insulator phase transition. In order to characterize the criticality of
$T_{\text{Cl}}$ for magnetic fields $B>0.5$ T, in Figure 4 we compare the
minimum thicknesses $\lambda_{\textrm{Cl}}(B,n_{0})$ (solid lines) with the
thicknesses $\lambda_{\textrm{HTPT}}(B,n_{0})$ in eq.(29) (dashed lines), where the
HTPTs occur. Both solid and dashed lines exhibit different behaviors when varying
the magnetic field $B$ and the wave packet center $n_{0}$. Therefore, it seems
that there is no correlation between the minimum thicknesses
$\lambda_{\textrm{Cl}}$ and the HTPT at $\lambda_{\textrm{HTPT}}$. Nevertheless, when
the magnetic field approaches zero, both thicknesses tend to
$\lambda_{\mathrm{c}}=6.17$ nm, that is,
$\lambda_{\mathrm{HTPT}}(B,n_{0})\simeq\lambda_{\textrm{Cl}}(B,n_{0})\simeq\lambda_{\mathrm{c}}$
for all $n_{0}\neq 0$ and $B\ll 1$ T.
In Figure 5, we present the squared modulus of the autocorrelation function
for $\lambda=\lambda_{\textrm{c}}$ and $s=+1$ in two different time scales.
The top panel displays the time in units of the classical period
$T_{\text{Cl}}=3.75$ ps, where each oscillation corresponds to one unit of the
scale; whereas the bottom panel shows the wave packet revivals at half of the
revival time $T_{\text{R}}/2=39.90$ ps, and the time scale is in
$T_{\text{R}}$ units.
## IV Electron current revivals and topological phase transitions
We have also identified topological phase transitions by analyzing how the
electron current changes with the layer thickness and the time evolution. The
electron currents of the HgTe QWs have been previously studied in reference
[39], where the current operators are
$j_{x}^{s}=\frac{e}{\hbar}\left(s\alpha\tau_{x}-\sqrt{2}\,\frac{a^{\dagger}+a}{\ell_{B}}(\beta\tau_{z}+\delta\tau_{0})\right),\qquad j_{y}^{s}=\frac{e}{\hbar}\left(\alpha\tau_{y}+\mathrm{i}\sqrt{2}\,\frac{a^{\dagger}-a}{\ell_{B}}(\beta\tau_{z}+\delta\tau_{0})\right)\,,$ (36)
Figure 6: Expected values of the currents $\langle j_{a}\rangle$ as a function of
time in $T_{\text{Cl}}=3.75$ ps (top) and $T_{\text{R}}=79.80$ ps (bottom)
units, for an initial wave packet with $n_{0}=5$ and
$\sigma=\sqrt{n_{0}}/5\simeq 0.45$. The red (blue) line corresponds to the current
along the $x$ ($y$) axis. We set the HgTe parameters $\lambda=\lambda_{\textrm{c}}$,
$B=0.05$ T, and $s=+1$.
and the matrix elements in the eigenstate basis (15) are
$\langle\bm{m}|j_{x}^{s}|\bm{n}\rangle_{s}=\frac{es\alpha}{\hbar}\Xi_{m,n}^{s,+}-\frac{\sqrt{2}e}{\hbar\ell_{B}}\Phi_{m,n}^{s,+}\,,\qquad \langle\bm{m}|j_{y}^{s}|\bm{n}\rangle_{s}=-\mathrm{i}\frac{e\alpha}{\hbar}\Xi_{m,n}^{s,-}+\mathrm{i}\frac{\sqrt{2}e}{\hbar\ell_{B}}\Phi_{m,n}^{s,-}\,,$ (37)
where
$\Xi_{m,n}^{s,\pm}=A_{m}^{s}B_{n}^{s}\delta_{|m|-s,|n|}\pm A_{n}^{s}B_{m}^{s}\delta_{|m|+s,|n|}\,,$ (38)
$\Phi_{m,n}^{s,\pm}=\left((\delta+\beta)A_{m}^{s}A_{n}^{s}+(\delta-\beta)B_{m}^{s}B_{n}^{s}\right)\left(\sqrt{|n|+1+\tfrac{s-1}{2}}\,\delta_{|m|-1,|n|}\pm\sqrt{|n|-\tfrac{s+1}{2}}\,\delta_{|m|+1,|n|}\right)\,.$
For a Gaussian wave packet (30)–(31), the electron current expected value is
${}_{s}\langle\Psi(t)|j_{a}^{s}|\Psi(t)\rangle_{s}=\sum_{m,n=-\infty}^{\infty}\overline{c_{m}^{s}}c_{n}^{s}e^{-i(E_{n}^{s}-E_{m}^{s})t/\hbar}\langle\bm{m}|j_{a}^{s}|\bm{n}\rangle\,,$
(39)
where $a=x,y$ and the bar indicates complex conjugation. From now on we
identify $j_{a}^{s}\equiv j_{a}$ and choose $s=+1$ for simplicity. We plot
both currents in Figure 6, for the same values $\lambda=\lambda_{\textrm{c}}$,
$B=0.05$ T, $s=+1$, $n_{0}=5$, $\sigma=0.47$, as in the previous section. The
results are similar to the autocorrelation in Figure 5. We observe
oscillations on two different time scales, the classical ones (top panel) and
the revivals (bottom panel). After half of the revival time, $T_{\mathrm{R}}/2$,
in the bottom panel, the electron currents reach their maximum initial
values again, revealing the quantum revival phenomenon. This is more evident in the
phase space plot of Figure 7, where both currents decrease to zero at
$t=T_{\mathrm{R}}/4$, and then they grow, reaching their initial value at
$t=T_{\mathrm{R}}/2$. Notice that there is a phase difference of $\pi/2$ rad
between the currents $\langle j_{x}\rangle$ and $\langle j_{y}\rangle$, which
is also depicted in Figure 7. The behavior shown in Figure 6 is also found in
graphene [4, 49], and in 2D gapped Dirac materials under magnetic fields [39],
such as silicene [50].
Figure 7: Parametric plot of the currents expected values $(\langle
j_{x}\rangle$,$\langle j_{y}\rangle)$ in the time intervals
$t\in[0,T_{\text{R}}/4]$ (blue) and $t\in[T_{\text{R}}/4,T_{\text{R}}/2]$
(yellow), for an initial wave packet with $n_{0}=5$ and
$\sigma=\sqrt{n_{0}}/5\simeq 0.45$. We set the HgTe parameters
$\lambda=\lambda_{\textrm{c}}$, $B=0.05$ T, and $s=+1$, so that the revival
time is $T_{\text{R}}=79.80$ ps. Figure 8: Maximum values of the current
amplitudes $\text{Max}_{t\in[0,T_{\text{R}}/2]}|\langle j_{a}\rangle|$ as a
function of the layer thickness $\lambda$, for an initial wave packet with
$n_{0}=5$ and $\sigma=\sqrt{n_{0}}/5\simeq 0.45$. The red (blue) dots correspond to
the current in the $x$ ($y$) axis. We set the parameters $B=0.05$ T, and
$s=+1$.
We have repeated the calculations of Figure 6 for different values of the
layer thickness $\lambda$, in order to study the impact of this parameter on
the electric currents. We select the maximum of the electron current
amplitudes $\text{Max}_{t\in[0,T_{\text{R}}/2]}|\langle j_{a}\rangle|$
(maximum in the time domain) for different thicknesses $\lambda$, and plot
them in Figure 8. Both current maxima reach their highest value at the critical
thickness. Therefore, measuring the amplitudes of the electron currents is
another way to characterize TPTs in HgTe QWs. For higher magnetic fields, this
maximal behavior deviates from the critical thickness $\lambda_{\textrm{c}}$.
In Figure 9, we plot the layer thickness $\lambda_{j_{a}}(B,n_{0})$ in which
the dots $\text{Max}_{t\in[0,T_{\text{R}}/2]}|\langle j_{a}\rangle|$ of Figure
8 achieve a maximum in the $\lambda$ domain, against the external magnetic
field $B$ and for an initial wave packet with $n_{0}=5$. The maxima
$\lambda_{j_{a}}$ are close to the critical thickness $\lambda_{\textrm{c}}$
in a region of the magnetic field, i.e.
$|\lambda_{j_{a}}(B,n_{0})-\lambda_{\textrm{c}}|<0.1$ nm for all $B<0.5$ T and
$n_{0}=5$. When increasing the magnetic field above $B\simeq 0.5$ T, the
maxima $\lambda_{j_{a}}$ (red and blue dots in Figure 9) start growing in a
similar way to the thickness $\lambda_{\mathrm{Cl}}$ where $T_{\mathrm{Cl}}$
achieves its minimum in Figure 3.
Figure 9: Layer thickness $\lambda_{j_{a}}(B,n_{0})$ in which the dots
$\text{Max}_{t\in[0,T_{\text{R}}/2]}|\langle j_{a}\rangle|$ of Figure 8
achieve a maximum in the $\lambda$ domain, as a function of the external
magnetic field $B$. The red and blue dots correspond to the directions $a=x$
and $a=y$ respectively, and the yellow line depicts the thicknesses where
$T_{\mathrm{Cl}}$ achieves its minimum (retrieved from Figure 3). We set
$s=+1$ and an initial wave packet with $n_{0}=5$ and
$\sigma=\sqrt{n_{0}}/5\simeq 0.45$. The horizontal dashed line indicates the
critical thickness $\lambda_{\textrm{c}}=6.17~{}$nm.
## V Conclusions
In summary, we have shown that the time evolution of a wave packet is useful
to detect TPTs in HgTe QWs, which corroborates the results previously found in
[7] for other 2D materials (silicene, germanene, tinene and indinene). Using
the 2D effective Dirac Hamiltonian for surface states in HgTe/CdTe QWs, it is
possible to analyze the time evolution of electron current wave packets. As a
general result, the classical and revival times appear as two different
periodicities in this temporal evolution, and they reach their minima at different
values of the layer thickness, depending on the external magnetic field and
the Landau level at which the packet is centered. In addition, we have
investigated how the maximum of the electron current amplitude changes with
the thickness $\lambda$, identifying that the current maxima reach their highest
value at the critical thickness, so we can characterize the TPTs in terms of
the amplitude of the electron currents. As a proposal for future work, this
quantum revival analysis could be extended to non-topological anisotropic
materials like phosphorene, which also presents criticality when its energy gap
is closed by an external electric field [39].
## Acknowledgments
We thank the support of Junta de Andalucía through the project
PID2022-138144NB-I00. AM thanks the Spanish MIU for the FPU19/06376
predoctoral fellowship. OC is on sabbatical leave at Granada University,
Spain, since the 1st of September 2023. OC thanks support from the program
PASPA from DGAPA-UNAM.
## References
* Averbukh and Perelman [1989] I. S. Averbukh and J. F. Perelman, Fractional revivals: Universality in the long-term evolution of quantum wave packets beyond the correspondence principle dynamics, Phys. Lett. A 139, 449 (1989).
* Aronstein and Stroud [1997] D. L. Aronstein and C. R. Stroud, Fractional wave-function revivals in the infinite square well, Phys. Rev. A 55, 4526 (1997).
* Robinett [2004] R. Robinett, Quantum wave packet revivals, Physics Reports 392, 1 (2004).
* Romera and de los Santos [2009] E. Romera and F. de los Santos, Revivals, classical periodicity, and zitterbewegung of electron currents in monolayer graphene, Phys. Rev. B 80, 165416 (2009).
* de los Santos and Romera [2013] F. de los Santos and E. Romera, Revival times at quantum phase transitions, Phys. Rev. A 87, 013424 (2013).
* de los Santos _et al._ [2015] F. de los Santos, E. Romera, and O. Castaños, Time scales at quantum phase transitions in the Lipkin-Meshkov-Glick model, Phys. Rev. A 91, 043409 (2015).
* Romera, E. _et al._ [2016] Romera, E., Bolívar, J. C., Roldán, J. B., and de los Santos, F., Revivals of electron currents and topological-band insulator transitions in 2d gapped dirac materials, EPL 115, 20008 (2016).
* García _et al._ [2013] T. García, S. Rodríguez-Bolívar, N. A. Cordero, and E. Romera, Wavepacket revivals in monolayer and bilayer graphene rings, Journal of Physics: Condensed Matter 25, 235301 (2013).
* de-la Huerta-Sainz _et al._ [2022] S. de-la Huerta-Sainz, A. Ballesteros, and N. A. Cordero, Quantum revivals in curved graphene nanoflakes, Nanomaterials 12, 1953 (2022).
* de-la Huerta-Sainz _et al._ [2023a] S. de-la Huerta-Sainz, A. Ballesteros, and N. A. Cordero, Gaussian curvature effects on graphene quantum dots, Nanomaterials 13, 95 (2023a).
* de-la Huerta-Sainz _et al._ [2023b] S. de-la Huerta-Sainz, A. Ballesteros, and N. A. Cordero, Electric field effects on curved graphene quantum dots, Micromachines 14, 2035 (2023b).
* Rempe _et al._ [1987] G. Rempe, H. Walther, and N. Klein, Observation of quantum collapse and revival in a one-atom maser, Phys. Rev. Lett. 58, 353 (1987).
* Yeazell _et al._ [1990] J. A. Yeazell, M. Mallalieu, and C. R. Stroud, Observation of the collapse and revival of a Rydberg electronic wave packet, Phys. Rev. Lett. 64, 2007 (1990).
* Wals _et al._ [1994] J. Wals, H. H. Fielding, J. F. Christian, L. C. Snoek, W. J. van der Zande, and H. B. van Linden van den Heuvell, Observation of Rydberg wave packet dynamics in a Coulombic and magnetic field, Phys. Rev. Lett. 72, 3783 (1994).
* Vrakking _et al._ [1996] M. J. J. Vrakking, D. M. Villeneuve, and A. Stolow, Observation of fractional revivals of a molecular wave packet, Phys. Rev. A 54, R37 (1996).
* Kirchmair _et al._ [2013] G. Kirchmair, B. Vlastakis, Z. Leghtas, S. E. Nigg, H. Paik, E. Ginossar, M. Mirrahimi, L. Frunzio, S. M. Girvin, and R. J. Schoelkopf, Observation of quantum state collapse and revival due to the single-photon Kerr effect, Nature 495, 205 (2013).
* Novik _et al._ [2005] E. G. Novik, A. Pfeuffer-Jeschke, T. Jungwirth, V. Latussek, C. R. Becker, G. Landwehr, H. Buhmann, and L. W. Molenkamp, Band structure of semimagnetic $\mathrm{Hg}_{1-y}\mathrm{Mn}_{y}\mathrm{Te}$ quantum wells, Phys. Rev. B 72, 035321 (2005).
* Bernevig _et al._ [2006] B. A. Bernevig, T. L. Hughes, and S.-C. Zhang, Quantum spin Hall effect and topological phase transition in HgTe quantum wells, Science 314, 1757 (2006).
* König _et al._ [2007] M. König, S. Wiedmann, C. Brüne, A. Roth, H. Buhmann, L. W. Molenkamp, X.-L. Qi, and S.-C. Zhang, Quantum spin Hall insulator state in HgTe quantum wells, Science 318, 766 (2007).
* König _et al._ [2008] M. König, H. Buhmann, L. W. Molenkamp, T. Hughes, C.-X. Liu, X.-L. Qi, and S.-C. Zhang, The quantum spin Hall effect: Theory and experiment, Journal of the Physical Society of Japan 77, 031007 (2008).
* Szuszkiewicz _et al._ [2003] W. Szuszkiewicz, E. Dynowska, F. Ott, B. Hennion, M. Jouanne, J. F. Morhange, and J. Sadowski, Short-period GaMnAs/GaAs superlattices: Optical and magnetic characterization, Journal of Superconductivity 16, 209 (2003).
* Scherbakov _et al._ [2001] A. V. Scherbakov, D. R. Yakovlev, A. V. Akimov, I. A. Merkulov, B. König, W. Ossau, L. W. Molenkamp, T. Wojtowicz, G. Karczewski, G. Cywinski, and J. Kossut, Acceleration of the spin-lattice relaxation in diluted magnetic quantum wells in the presence of a two-dimensional electron gas, Phys. Rev. B 64, 155205 (2001).
* Camilleri _et al._ [2001] C. Camilleri, F. Teppe, D. Scalbert, Y. G. Semenov, M. Nawrocki, M. Dyakonov, J. Cibert, S. Tatarenko, and T. Wojtowicz, Electron and hole spin relaxation in modulation-doped CdMnTe quantum wells, Phys. Rev. B 64, 085331 (2001).
* Zhang _et al._ [2021] T.-Y. Zhang, Q. Yan, and Q.-F. Sun, Constructing low-dimensional quantum devices based on the surface state of topological insulators, Chinese Physics Letters 38, 077303 (2021).
* Deng _et al._ [2020] Y. Deng, Y. Yu, M. Z. Shi, Z. Guo, Z. Xu, J. Wang, X. H. Chen, and Y. Zhang, Quantum anomalous Hall effect in intrinsic magnetic topological insulator MnBi${}_{2}$Te${}_{4}$, Science 367, 895 (2020).
* Li _et al._ [2019] J. Li, Y. Li, S. Du, Z. Wang, B.-L. Gu, S.-C. Zhang, K. He, W. Duan, and Y. Xu, Intrinsic magnetic topological insulators in van der Waals layered MnBi${}_{2}$Te${}_{4}$-family materials, Science Advances 5, eaaw5685 (2019).
* Kou _et al._ [2014] X. Kou, S.-T. Guo, Y. Fan, L. Pan, M. Lang, Y. Jiang, Q. Shao, T. Nie, K. Murata, J. Tang, Y. Wang, L. He, T.-K. Lee, W.-L. Lee, and K. L. Wang, Scale-invariant quantum anomalous Hall effect in magnetic topological insulators beyond the two-dimensional limit, Phys. Rev. Lett. 113, 137201 (2014).
* Chang _et al._ [2013] C.-Z. Chang, J. Zhang, X. Feng, J. Shen, Z. Zhang, M. Guo, K. Li, Y. Ou, P. Wei, L.-L. Wang, Z.-Q. Ji, Y. Feng, S. Ji, X. Chen, J. Jia, X. Dai, Z. Fang, S.-C. Zhang, K. He, Y. Wang, L. Lu, X.-C. Ma, and Q.-K. Xue, Experimental observation of the quantum anomalous Hall effect in a magnetic topological insulator, Science 340, 167 (2013).
* Calixto and Romera [2015a] M. Calixto and E. Romera, Identifying topological-band insulator transitions in silicene and other 2d gapped Dirac materials by means of Rényi-Wehrl entropy, EPL (Europhysics Letters) 109, 40003 (2015a).
* Romera and Calixto [2015a] E. Romera and M. Calixto, Uncertainty relations and topological-band insulator transitions in 2D gapped Dirac materials, Journal of Physics: Condensed Matter 27, 175003 (2015a).
* Calixto and Romera [2015b] M. Calixto and E. Romera, Inverse participation ratio and localization in topological insulator phase transitions, Journal of Statistical Mechanics: Theory and Experiment 2015, P06029 (2015b).
* Romera and Calixto [2015b] E. Romera and M. Calixto, Band inversion at critical magnetic fields in a silicene quantum dot, EPL (Europhysics Letters) 111, 37006 (2015b).
* Castaños _et al._ [2019] O. Castaños, E. Romera, and M. Calixto, Information theoretic analysis of landau levels in monolayer phosphorene under magnetic and electric fields, Materials Research Express 6, 106316 (2019).
* Scharf _et al._ [2015] B. Scharf, A. Matos-Abiague, I. Žutić, and J. Fabian, Probing topological transitions in HgTe/CdTe quantum wells by magneto-optical measurements, Phys. Rev. B 91, 235433 (2015).
* Stille _et al._ [2012] L. Stille, C. J. Tabert, and E. J. Nicol, Optical signatures of the tunable band gap and valley-spin coupling in silicene, Phys. Rev. B 86, 195405 (2012).
* Tabert and Nicol [2013a] C. J. Tabert and E. J. Nicol, Valley-spin polarization in the magneto-optical response of silicene and other similar 2d crystals, Phys. Rev. Lett. 110, 197402 (2013a).
* Tabert and Nicol [2013b] C. J. Tabert and E. J. Nicol, Magneto-optical conductivity of silicene and other buckled honeycomb lattices, Phys. Rev. B 88, 085434 (2013b).
* Zhou _et al._ [2015] X. Y. Zhou, R. Zhang, J. P. Sun, Y. L. Zou, D. Zhang, W. K. Lou, F. Cheng, G. H. Zhou, F. Zhai, and K. Chang, Landau levels and magneto-transport property of monolayer phosphorene, Scientific Reports 5, 12295 (2015).
* Calixto _et al._ [2023] M. Calixto, A. Mayorgas, N. A. Cordero, E. Romera, and O. Castaños, Faraday rotation and transmittance as markers of topological phase transitions in 2D materials, arXiv:2305.14923v2 (2023).
* Qi and Zhang [2011] X.-L. Qi and S.-C. Zhang, Topological insulators and superconductors, Rev. Mod. Phys. 83, 1057 (2011).
* Zhou _et al._ [2017] Y.-F. Zhou, H. Jiang, X. C. Xie, and Q.-F. Sun, Two-dimensional lattice model for the surface states of topological insulators, Phys. Rev. B 95, 245137 (2017).
* Thouless _et al._ [1982] D. J. Thouless, M. Kohmoto, M. P. Nightingale, and M. den Nijs, Quantized Hall conductance in a two-dimensional periodic potential, Phys. Rev. Lett. 49, 405 (1982).
* Peierls [1933] R. Peierls, Zur theorie des diamagnetismus von leitungselektronen, Zeitschrift für Physik 80, 763 (1933).
* Ezawa [2013] Z. Ezawa, _Quantum Hall Effects_ (World Scientific, 2013).
* Büttner _et al._ [2011] B. Büttner, C. X. Liu, G. Tkachov, E. G. Novik, C. Brüne, H. Buhmann, E. M. Hankiewicz, P. Recher, B. Trauzettel, S. C. Zhang, and L. W. Molenkamp, Single valley Dirac fermions in zero-gap HgTe quantum wells, Nature Physics 7, 418 (2011).
* Scharf _et al._ [2012] B. Scharf, A. Matos-Abiague, and J. Fabian, Magnetic properties of HgTe quantum wells, Phys. Rev. B 86, 075418 (2012).
* Calixto _et al._ [2022] M. Calixto, N. A. Cordero, E. Romera, and O. Castaños, Signatures of topological phase transitions in higher Landau levels of HgTe/CdTe quantum wells from an information theory perspective, Physica A: Statistical Mechanics and its Applications 605, 128057 (2022).
* Forrester _et al._ [1947] A. T. Forrester, W. E. Parkins, and E. Gerjuoy, On the possibility of observing beat frequencies between lines in the visible spectrum, Phys. Rev. 72, 728 (1947).
* García _et al._ [2014] T. García, N. A. Cordero, and E. Romera, Zitterbewegung and quantum revivals in monolayer graphene quantum dots in magnetic fields, Phys. Rev. B 89, 075416 (2014).
* Romera _et al._ [2014] E. Romera, J. Roldán, and F. de los Santos, Zitterbewegung in monolayer silicene in a magnetic field, Physics Letters A 378, 2582 (2014).
# Approximating a branch of solutions to the Navier–Stokes equations by
reduced-order modeling
Maxim A. Olshanskii Department of Mathematics, University of Houston, Houston,
Texas 77204 ([email protected]). Leo G. Rebholz School of Mathematical and
Statistical Sciences, Clemson University, Clemson SC 29634
([email protected]).
###### Abstract
This paper extends a low-rank tensor decomposition (LRTD) reduced order model
(ROM) methodology to simulate viscous flows and in particular to predict a
smooth branch of solutions for the incompressible Navier-Stokes equations.
Additionally, it enhances the LRTD-ROM methodology by introducing a non-
interpolatory variant, which demonstrates improved accuracy compared to the
interpolatory method utilized in previous LRTD-ROM studies. After presenting
the interpolatory and non-interpolatory LRTD-ROM, we demonstrate that with
snapshots from a few different viscosities, the proposed method is able to
accurately predict flow statistics in the Reynolds number range $[25,400]$.
This range is significantly wider and higher than the ranges over which
state-of-the-art ROMs of similar size, built for varying Reynolds number, have
been shown to be successful. The paper also discusses how LRTD may offer new insights into
the properties of parametric solutions.
###### keywords:
Model order reduction, variable Reynolds number, flow around cylinder, low-
rank tensor decomposition, proper orthogonal decomposition
## 1 Introduction
We are interested in reduced order modeling of the incompressible Navier-
Stokes equations (NSE), which are given by
(1) $\left\\{\begin{aligned} \frac{\partial\mathbf{u}}{\partial
t}+(\mathbf{u}\cdot\nabla)\mathbf{u}-\nu\Delta\mathbf{u}+\nabla p&=0,\\\
\mbox{div}\mathbf{u}&=0,\end{aligned}\right.$
in a bounded Lipschitz domain $\Omega$ and for $t\in(0,T)$ with a final time
$T>0$, and suitable initial and boundary conditions. Here $\mathbf{u}$ and $p$
are the unknown fluid velocity and pressure, and $\nu>0$ is the kinematic
viscosity. We treat $\nu$ as a positive constant parameter that can take
values from the range $[\nu_{\min},\nu_{\max}]$. The problem addressed in the
paper consists of an effective prediction of flow statistics for the entire
range $[\nu_{\min},\nu_{\max}]$, based on the information learned from a set
of flow states $\mathbf{u}(t^{n},\nu^{k}),\,p(t^{n},\nu^{k})$ (further called
snapshots) computed for a finite sample of parameters (training set)
$\mathcal{A}=\\{\nu^{k}\\}_{k=1}^{K}\subset[\nu_{\min},\nu_{\max}]$ at given
time instances $\\{t^{n}\\}_{n=1}^{N}\subset(0,T]$.
Assuming $\mathbf{u},p$ depend smoothly on $\nu$, the problem outlined above
can be, of course, addressed by numerically solving (1) for a sufficiently
dense sample $\mathcal{A}$ and proceeding with interpolation. This strategy,
however, may entail prohibitive computational costs of solving the full order
model multiple times for a large set of parameters and impractical data
storage. Furthermore, for long-time simulations, such an interpolation
strategy may fail to be sustainable given that solution trajectories may
(locally) diverge exponentially fast (although in 2D the system has a finite
dimensional attractor [5] which could be captured by a ROM).
For fast and accurate computation of flow statistics for any
$\nu\in[\nu_{\min},\nu_{\max}]$, the present paper considers a reduced order
model (ROM) which uses a low-rank tensor decomposition (LRTD) in the
space–time–parameter space as the core dimension reduction technique. As such,
LRTD replaces SVD/POD, the standard reduction method in more traditional
POD–ROMs. This allows one to recover information about the parameter-dependence
of reduced spaces from a smaller set of pre-computed snapshots and
to exploit this information for building parameter-specific ROMs. The LRTD–ROM
was recently introduced in [18] and further developed and analyzed in [19,
20].
This is the first time LRTD–ROM is applied to the system (1) and, more
generally, to predict the dynamics of a viscous fluid flow. The paper extends
the approach to the parameterized incompressible Navier–Stokes equations. We
introduce a non-interpolatory variant of LRTD–ROM. The method is applied to
predict drag and lift coefficients for a 2D flow passing a cylinder at
Reynolds numbers $\text{Re}\in[25,400]$. This branch of solutions contains the
first bifurcation point at around $\text{Re}=50$ [4, 27], where the steady
state flow gives way to an unsteady periodic flow.
Predicting flow dynamics along a parameterized branch of solutions is a
challenging task for traditional ROMs, since building a universal low-
dimensional space for a range of parameter(s) may be computationally expensive,
if possible at all. Recent studies that develop or apply reduced order
modeling to parameterized fluid problems include [16, 22, 13, 23]. In
particular, several papers addressed the very problem of applying ROMs to
predict a flow passing a 2D cylinder for varying Re number. The authors of [8]
applied dynamic mode decomposition (DMD) with interpolation between pre-
computed solutions for 16 values of viscosity to predict flow for
$\text{Re}\in[85,100]$. In [1] the DMD with 14 viscosity values in the
training set was applied to forecast the flow around the first bifurcation
point $\text{Re}=50$ (the actual Re numbers are not specified in the paper). A
stabilized POD–ROM was tested in [25] to predict the same flow for
$\text{Re}\in[100,200]$. In [10] the same problem of the 2D flow around a
circular cylinder for varying Re numbers was approached with a POD–ROM based
on greedy sampling of the parameter domain. Such a POD–ROM then required offline
computation of FOM solutions for 51 values of Re to predict flow statistics
for $\text{Re}\in[75,100]$. Compared to these studies, the LRTD–ROM is able to
handle significantly larger parameter variations with nearly the same or
smaller training sets. For example, we found 13 values of Re log-uniformly
sampled to be sufficient for LRTD–ROM with reduced dimension of 20 to
reasonably predict the same flow statistics for $\text{Re}\in[25,400]$. This
exemplifies the prediction capability of LRTD-based projection ROMs for fluid
problems.
The remainder of the paper is organized as follows. Section 2 describes the
FOM, which is a second-order in time Scott-Vogelius finite element method on a
sequence of barycenter refined triangulations. Section 3 introduces the
reduced order model. In section 4 the model is applied to predict the 2D flow
along a smooth branch of solutions.
## 2 Full order model
To define a full order model for our problem of interest, we consider a
conforming finite element Galerkin method: Denote by $\mathbf{V}_{h}\subset
H^{1}(\Omega)^{d}$ and $Q_{h}\subset L^{2}_{0}(\Omega)$ velocity and pressure
finite element spaces with respect to a regular triangulation
$\mathcal{T}_{h}$ of $\Omega$. For $\mathbf{V}_{h}$ and $Q_{h}$ we choose the
lowest order Scott-Vogelius finite element pair:
(2) $\begin{split}\mathbf{V}_{h}&=\\{\mathbf{v}\in
C(\Omega)^{2}:\,\mathbf{v}\in\left[\mathbb{P}_{2}(T)\right]^{2}~{}\forall\,T\in\mathcal{T}_{h}\\},\\\
Q_{h}&=\\{q\in
L^{2}(\Omega):\,q\in\mathbb{P}_{1}(T)~{}~{}~{}~{}\forall\,T\in\mathcal{T}_{h}\\}.\end{split}$
The lowest order Scott-Vogelius (SV) element is known [2] to be LBB stable in
2D on barycenter refined meshes (also sometimes referred to as Alfeld split
meshes). Hence we consider $\mathcal{T}_{h}$ such that it is obtained by one
step of barycenter refinement applied to a coarser triangulation. Since
$\mbox{div}(\mathbf{V}_{h})\subseteq Q_{h}$, it is an example of a stable
element which enforces the divergence free constraint for the finite element
velocity pointwise.
Denote by $I_{h}(\cdot)$ any suitable interpolation operator of velocity
boundary values. We use $(f,g):=\int_{\Omega}f\cdot g\,dx$ notation for both
scalar and vector functions $f,g$. We also adopt the notation
$\mathbf{u}_{h}^{n}$ and $p_{h}^{n}$ for the finite element approximations of
velocity and pressure at time $t_{n}=n\Delta t$, with $\Delta t=T/N$ and
$n=0,1,2,\dots,N$.
The second order in time FE Galerkin formulation of (1) with $\mathbf{u}={\bf
g}$ on $\partial\Omega$ reads: Find $\mathbf{u}_{h}^{n}\in\mathbf{V}_{h}$,
$\mathbf{u}_{h}^{n}=I_{h}(\mathbf{g}(t_{n}))$ on $\partial\Omega$ and
$p_{h}^{n}\in Q_{h}\cap L^{2}_{0}(\Omega)$, for $n=1,2,\dots,N$, such that
(3)
$\Big{(}\frac{3\mathbf{u}_{h}^{n}-4\mathbf{u}_{h}^{n-1}+\mathbf{u}_{h}^{n-2}}{2\Delta
t},\mathbf{v}_{h}\Big{)}+((2\mathbf{u}_{h}^{n-1}-\mathbf{u}_{h}^{n-2})\cdot\nabla\mathbf{u}_{h}^{n},\mathbf{v}_{h})\\\
+\nu(\nabla\mathbf{u}_{h}^{n},\nabla\mathbf{v}_{h})-(p_{h}^{n},\mbox{div}\mathbf{v}_{h})+(\mbox{div}\mathbf{u}_{h}^{n},q_{h})=0,$
for all $\mathbf{v}_{h}\in\mathbf{V}_{h}$, s.t. $\mathbf{v}_{h}={\bf 0}$ on
$\partial\Omega$, $q_{h}\in Q_{h}$, and $\mathbf{u}_{h}^{0}=\mathbf{u}(0)$.
The first step for $n=1$ is done by the first order implicit Euler method.
The stability and convergence of the method can be analyzed following textbook
arguments (e.g. [9, 7]), implying the estimate
(4)
$\max_{n=1,\dots,N}\|\mathbf{u}^{n}_{h}-\mathbf{u}(t^{n})\|^{2}_{L^{2}(\Omega)}+\Delta
t\nu\sum_{n=1}^{N}\|\nabla(\mathbf{u}^{n}_{h}-\mathbf{u}(t^{n}))\|^{2}_{L^{2}(\Omega)}\leq
C(\mathbf{u},p,\nu)(|\Delta t|^{4}+h^{4}),$
where $h=\max_{T\in\mathcal{T}_{h}}\text{diam}(T)$, and $C(\mathbf{u},p,\nu)$
is independent of the mesh parameters but depends on the regularity
(smoothness) of $\mathbf{u}$ and $p$. Under extra regularity assumptions, the
optimal order velocity and pressure estimates follow in the
$L^{\infty}(L^{2})$-norms [14]:
(5)
$\max_{n=1,\dots,N}(\|\mathbf{u}^{n}_{h}-\mathbf{u}(t^{n})\|_{L^{2}(\Omega)}+h\|p^{n}_{h}-p(t^{n})\|_{L^{2}(\Omega)})\leq
C(\mathbf{u},p,\nu)(|\Delta t|^{2}+h^{3}).$
## 3 Reduced order model
The LRTD–ROM is a projection based ROM, where the solution is sought in a
parameter dependent low dimensional space. Since the divergence-free finite
elements are used for the FOM model, the low dimensional ROM space is a
velocity space $\mathbf{V}^{\ell}(\nu)\subset\mathbf{V}_{h}$,
$\mbox{dim}(\mathbf{V}^{\ell}(\nu))=\ell\ll M$, such that
(6) $\mbox{div}\mathbf{v}_{\ell}=0\quad\text{for
all}~{}\mathbf{v}_{\ell}\in\mathbf{V}^{\ell}(\nu).$
Thanks to (6), the pressure does not enter the projected equations and the
reduced order model reads: Find
$\mathbf{u}_{\ell}^{n}\in\mathbf{V}_{\ell}(\nu)$, for $n=1,2,\dots,N$, such that
(7)
$\Big{(}\frac{3\mathbf{u}_{\ell}^{n}-4\mathbf{u}_{\ell}^{n-1}+\mathbf{u}_{\ell}^{n-2}}{2\Delta
t},\mathbf{v}_{\ell}\Big{)}+((2\mathbf{u}_{\ell}^{n-1}-\mathbf{u}_{\ell}^{n-2})\cdot\nabla\mathbf{u}_{\ell}^{n},\mathbf{v}_{\ell})+\nu(\nabla\mathbf{u}_{\ell}^{n},\nabla\mathbf{v}_{\ell})=0,$
for all $\mathbf{v}_{\ell}\in\mathbf{V}_{\ell}(\nu)$, and
$\mathbf{u}_{\ell}^{0}=\mathbf{P}_{\ell}\mathbf{u}(0)$, where
$\mathbf{P}_{\ell}$ is a projector into $\mathbf{V}_{\ell}(\nu)$. Similar to
the FOM, the implicit Euler method is used for $n=1$. Once the velocities
$\mathbf{u}_{\ell}^{n}$ are known, the corresponding pressure functions
$p_{\ell}^{n}\in Q_{h}$ can be recovered by a straightforward post-processing
step, see, e.g. [3].
The critical part of the ROM is the design of a parameter-specific low-
dimensional space $\mathbf{V}_{\ell}(\nu)$. This is done within a framework of
a LRTD–ROM (sometimes referred to as Tensor ROM or TROM), which replaces the
matrix SVD – a traditional dimension reduction technique – by a low-rank
tensor decomposition. The application of tensor techniques is motivated by the
natural space–time–parameter structure of the system. This opens up the
possibility of quickly finding, online, an efficient low-dimensional
$\nu$-specific ROM space for an arbitrary incoming viscosity parameter $\nu$. The
resulting LRTD–ROM consists of (7), an offline part applying the LRTD, and an online
part using fast low-dimensional linear algebra to determine $\mathbf{V}_{\ell}(\nu)$.
Further details of the LRTD–ROM are provided next.
### 3.1 LRTD–ROM
Similar to the conventional POD, at the offline stage a representative
collection of flow velocity states, referred to as snapshots, is computed at
times $t_{j}$ and for pre-selected values of the viscosity parameter:
$\mathbf{u}_{h}(t_{j},\nu_{k})\in\mathbf{V}_{h},\quad j=1,\ldots,N,\quad
k=1,\ldots,K.$
Here $\mathbf{u}_{h}$ are solutions of the full order model (3) for a set of
$K$ viscosity parameters $\nu_{k}\in[\nu_{\min},\nu_{\max}]$.
A standard POD dimension reduction consists then in finding a subspace
$\mathbf{V}^{\rm pod}_{\ell}\subset\mathbf{V}_{h}$ that approximates the space
spanned by all observed snapshots in the best possible way (subject to the
choice of the norm). This way, the POD reduced order space captures
_cumulative_ information regarding the snapshots’ dependence on the viscosity
parameter. Lacking parameter specificity, $\mathbf{V}^{\rm pod}_{\ell}$ and so
POD–ROM may lack robustness for parameter values outside the sampling set and
may necessitate $\ell$ and $K$ to be large to accurately represent the whole
branch of solutions. This limitation motivates the application of a tensor
technique based on low-rank tensor decomposition to preserve information about
parameter dependence in reduced-order spaces.
Denote by the upright symbol $\mathrm{u}^{j}(\nu)\in\mathbb{R}^{M}$ the coefficient
vector representing $\mathbf{u}_{h}(t_{j},\nu)$ in the nodal basis.
Recalling that the POD basis can be defined from the low-rank approximation
(given by a truncated SVD) of the snapshot matrix, one can interpret LRTD as a
multi-linear extension of POD: Instead of arranging snapshots in a matrix
$\Phi_{\rm pod}$, one seeks to exploit the tensor structure of the snapshots
domain and to utilize the LRTD instead of the matrix SVD for solving a tensor
analogue of the low-rank approximation problem.
For LRTD–ROM, the coefficient vectors of velocity snapshots are organized in
the _multi-dimensional_ array
(8) $(\boldsymbol{\Phi})_{:,k,j}=\mathrm{u}^{j}(\nu_{k}),$
which is a 3D tensor of size $M\times K\times N$. The first and the last
indices of $\boldsymbol{\Phi}$ correspond to the spatial and temporal
dimensions, respectively.
Unfolding of $\boldsymbol{\Phi}$ along its first mode into a $M\times NK$
matrix and proceeding with its truncated SVD constitutes the traditional POD
approach. In the tensor ROM the truncated SVD of the unfolded matrix is
replaced with a truncated LRTD of $\boldsymbol{\Phi}$.
Although the concept of tensor rank is somewhat ambiguous, there is an
extensive literature addressing the issue of defining tensor rank(s) and LRTD;
see e.g. [11]. In [18, 20], the tensor ROM has been introduced for three
common rank-revealing tensor formats: canonical polyadic, Tucker, and tensor
train. The LRTD in any of these formats can be seen as an extension of the SVD
to multi-dimensional arrays. While each format has distinct numerical and
compression properties and any of them would be suitable, we use the Tucker
format for the purpose of this paper.
We note that the LRTD approach is effectively applicable to multi-parameter
problems. In the case of a higher parameter space dimension one may prefer a
hierarchical Tucker format [12], such as tensor train, to avoid exponential
growth of LRTD complexity with respect to the parameter space dimension.
In the Tucker format [26, 17] one represents $\boldsymbol{\Phi}$ by the
following sum of direct products of three vectors:
(9)
$\boldsymbol{\Phi}\approx\widetilde{\boldsymbol{\Phi}}=\sum_{m=1}^{\widetilde{M}}\sum_{k=1}^{\widetilde{K}}\sum_{n=1}^{\widetilde{N}}(\mathbf{C})_{m,k,n}\mathbf{w}^{m}\otimes\boldsymbol{\sigma}^{k}\otimes\mathbf{v}^{n},$
with $\mathbf{w}^{m}\in\mathbb{R}^{M}$,
$\boldsymbol{\sigma}^{k}\in\mathbb{R}^{K}$, and
$\mathbf{v}^{n}\in\mathbb{R}^{N}$. Here $\otimes$ denotes the outer vector
product. The numbers $\widetilde{M}$, $\widetilde{K}$, and $\widetilde{N}$ are
referred to as Tucker ranks of $\widetilde{\boldsymbol{\Phi}}$. The Tucker
format delivers an efficient compression of the snapshot tensor, provided the
size of the _core tensor_ $\mathbf{C}$ is (much) smaller than the size of
$\boldsymbol{\Phi}$, i.e., $\widetilde{M}\ll M$, $\widetilde{K}\ll K$, and
$\widetilde{N}\ll N$.
Denote by $\|\boldsymbol{\Phi}\|_{F}$ the tensor Frobenius norm, which is the
square root of the sum of the squares of all entries of $\boldsymbol{\Phi}$.
Finding the best approximation of a tensor in the Frobenius norm by a fixed-rank
Tucker tensor is a well-posed problem with a constructive algorithm
known to deliver quasi-optimal solutions [17]. Furthermore, using this
algorithm, which is based on the truncated SVD for a sequence of unfolding
matrices, one finds $\widetilde{\boldsymbol{\Phi}}$ in the Tucker format that
satisfies
(10)
$\big{\|}\widetilde{\boldsymbol{\Phi}}-\boldsymbol{\Phi}\big{\|}_{F}\leq\widetilde{\varepsilon}\big{\|}\boldsymbol{\Phi}\big{\|}_{F}$
for a given $\widetilde{\varepsilon}>0$ and the sets $\\{\mathbf{w}^{m}\\}$,
$\\{\boldsymbol{\sigma}^{k}\\}$, $\\{\mathbf{v}^{n}\\}$ are orthogonal.
Corresponding Tucker ranks are then recovered in the course of factorization.
The resulting decomposition for $\widetilde{\varepsilon}=0$ is also known as
Higher Order SVD (HOSVD) of $\boldsymbol{\Phi}$ [6].
For arbitrary but fixed $\nu\in[\nu_{\min},\nu_{\max}]$, one can ‘extract’
from $\widetilde{\boldsymbol{\Phi}}$ specific (local) information for building
$\mathbf{V}_{\ell}(\nu)$. We consider two approaches herein: The first one
adopts interpolation between available snapshots, carried out directly in the
low-rank format, while the other avoids the interpolation step.
#### 3.1.1 Interpolatory LRTD–ROM
To formulate interpolatory LRTD–ROM, we need several further notations. We
assume an interpolation procedure
(11) ${\boldsymbol{\chi}}\,:\,[\nu_{\min},\nu_{\max}]\to\mathbb{R}^{K}$
such that for any smooth function $g:\,[\nu_{\min},\nu_{\max}]\to\mathbb{R}$,
$I(g):=\sum_{k=1}^{K}{\boldsymbol{\chi}}(\nu)_{k}g(\nu_{k})$ defines an
interpolant for $g$. Our choice is the Lagrange interpolation of order $p$:
(12)
${\boldsymbol{\chi}}(\nu)_{k}=\begin{cases}\prod\limits_{\begin{subarray}{c}m=1,\\ i_{m}\neq k\end{subarray}}^{p}\dfrac{\nu_{i_{m}}-\nu}{\nu_{i_{m}}-\nu_{k}},&\text{if }k\in\\{i_{1},\ldots,i_{p}\\},\\ 0,&\text{otherwise},\end{cases}$
where $\nu_{i_{1}},\ldots,\nu_{i_{p}}\in[\nu_{\min},\nu_{\max}]$ are the $p$
viscosity values from the training set closest to $\nu$.
The $\nu$-specific _local_ reduced space $V^{\ell}(\nu)$ is the space spanned
by the first $\ell$ left singular vectors of the matrix
$\widetilde{\Phi}(\nu)$, defined through the in-tensor interpolation procedure
for $\widetilde{\boldsymbol{\Phi}}$:
(13)
$\widetilde{\Phi}(\nu)=\widetilde{\boldsymbol{\Phi}}\times_{2}{\boldsymbol{\chi}}(\nu)\in\mathbb{R}^{M\times
N},$
where $\times_{2}$ denotes the tensor-vector multiplication along the second
mode.
Consider a nodal basis $\\{\xi_{h}^{j}\\}$ of the finite element
velocity space
$\mathbf{V}_{h}=\text{span}\\{\xi_{h}^{1},\dots,\xi_{h}^{M}\\}$. The
corresponding finite element LRTD–ROM space is then
(14)
$\begin{split}\mathbf{V}^{\ell}(\nu)&=\\{\mathbf{v}_{h}\in\mathbf{V}_{h}\,:\,\mathbf{v}_{h}=\sum_{i=1}^{M}\xi^{i}_{h}(\mathbf{x})v_{i},~{}~{}\text{for}~{}(v_{1},\dots,v_{M})^{T}\in
V^{\ell}(\nu)\\},\\\ &\text{where}\quad
V^{\ell}(\nu)=\mbox{range}(\mathrm{S}(\nu)(1:\ell)),~{}\text{for}~{}\\{\mathrm{S}(\nu),\Sigma(\nu),\mathrm{V}(\nu)\\}=\text{SVD}(\widetilde{\Phi}(\nu)).\end{split}$
In section 3.1.3 we will discuss implementation details omitted here.
#### 3.1.2 Non-interpolatory LRTD–ROM
In non-interpolatory LRTD–ROM, the basis of the local ROM space is built as an
optimal $\ell$-dimensional space approximating the space spanned by snapshots
corresponding to several nearest in-sample viscosity values. For this we need
the extraction procedure
(15)
$\widetilde{\Phi}_{k}=\widetilde{\boldsymbol{\Phi}}\times_{2}\mathbf{e}_{k}\in\mathbb{R}^{M\times
N},$
so that $\widetilde{\Phi}_{k}$ is the $k$-th space-time slice of
$\widetilde{\boldsymbol{\Phi}}$.
As in the interpolatory LRTD–ROM, let
$\nu_{i_{1}},\ldots,\nu_{i_{p}}\in[\nu_{\min},\nu_{\max}]$ be the $p$ closest
to $\nu$ sampled viscosity values. Then the $\nu$-specific _local_ reduced
space $V^{\ell}(\nu)$ is the space spanned by the first $\ell$ left singular
vectors of the following low-rank matrix $\widetilde{\Phi}(\nu)$:
(16)
$\widetilde{\Phi}(\nu)=[\widetilde{\Phi}_{i_{1}},\dots,\widetilde{\Phi}_{i_{p}}]\in\mathbb{R}^{M\times
pN}.$
The corresponding finite element LRTD–ROM space is defined in the same way as
in (14).
A remarkable feature of the LRTD–ROM is that finding the basis of
$V^{\ell}(\nu)$, i.e. finding the left singular vectors of
$\widetilde{\Phi}(\nu)$, does not require building or working with the ‘large’
matrix $\widetilde{\Phi}(\nu)$. For any given $\nu\in[\nu_{\min},\nu_{\max}]$
it involves calculations with lower-dimensional objects only, and so it can be
effectively done online. This implementation aspect of the LRTD–ROM is recalled
below.
#### 3.1.3 Implementation
The implementation of the Galerkin LRTD-ROM follows a two-stage procedure.
Offline stage. For a set of sampled viscosity parameters, the snapshot tensor
$\boldsymbol{\Phi}$ is computed and for chosen $\varepsilon>0$ the truncated
HOSVD is used to find $\widetilde{\boldsymbol{\Phi}}$ satisfying (10). This
first stage defines the _universal reduced space_ $\widetilde{V}$, which is
the span of all $\mathbf{w}$-vectors from the Tucker decomposition (9):
(17)
$\widetilde{V}=\text{span}\big{\\{}\mathbf{w}_{1},\dots,\mathbf{w}_{\widetilde{M}}\big{\\}}\subset\mathbb{R}^{M}.$
Hence the dimension of $\widetilde{V}$ is equal to the first Tucker rank
$\widetilde{M}$ and $\mathbf{w}_{1},\dots,\mathbf{w}_{\widetilde{M}}$ is an
orthonormal basis. At this point the system (3) is ‘projected’ into
$\widetilde{V}$, i.e. the projected velocity mass, stiffness matrices, initial
velocity, and the projected inertia term are passed to the online stage.
###### Remark 1 (Nonlinear terms).
To handle the inertia term, we benefit from its quadratic non-linearity. More
precisely, we compute a sparse 3D array
$\mathrm{N}\in\mathbb{R}^{M\times M\times M},\quad\text{with
entries}~{}\mathrm{N}_{ijk}=(\xi_{i}\cdot\nabla\xi_{j},\xi_{k}),$
and project it into $\widetilde{V}$ by computing
$\widetilde{\mathrm{N}}=\mathrm{N}\times_{1}\mathrm{W}\times_{2}\mathrm{W}\times_{3}\mathrm{W},$
where
$\mathrm{W}=[\mathbf{w}_{1},\cdots,\mathbf{w}_{\widetilde{M}}]\in\mathbb{R}^{M\times\widetilde{M}}$
and $\times_{i}$ is now the tensor-matrix product along $i$-th mode. The
${\widetilde{M}\times\widetilde{M}\times\widetilde{M}}$ array
$\widetilde{\mathrm{N}}$ is passed to the online stage.
An alternative, which we do not pursue here, would be the application of an
LRTD–DEIM technique [20] to handle the nonlinear terms.
Online stage. The online stage receives the projected matrices, and the 3D
array $\widetilde{\mathrm{N}}$. From the LRTD (9) it receives the core tensor
$\mathbf{C}$ and the matrix
$\mathrm{S}=[\boldsymbol{\sigma}^{1},\dots,\boldsymbol{\sigma}^{\widetilde{K}}]^{T}$.
To find $V^{\ell}(\nu)$ for any $\nu\in[\nu_{\min},\nu_{\max}]$, one first
computes a local core matrix
(18)
$\mathrm{C}(\nu)=\begin{cases}\mathbf{C}\times_{2}\left(\mathrm{S}{\boldsymbol{\chi}}(\nu)\right)\in\mathbb{R}^{\widetilde{M}\times\widetilde{N}},&\text{interpolatory case},\\ \left[\mathbf{C}\times_{2}\left(\mathrm{S}\mathbf{e}_{i_{1}}\right),\dots,\mathbf{C}\times_{2}\left(\mathrm{S}\mathbf{e}_{i_{p}}\right)\right]\in\mathbb{R}^{\widetilde{M}\times p\widetilde{N}},&\text{non-interpolatory case},\end{cases}$
and its thin SVD,
$\mathrm{C}(\nu)=\mathrm{U}_{c}\Sigma_{c}\mathrm{V}_{c}^{T}.$ It can be easily
seen that
(19)
$\widetilde{\Phi}(\nu)=\left({\mathrm{W}}\mathrm{U}_{c}\right)\Sigma_{c}\mathrm{Y}^{T},$
with an orthogonal matrix $\mathrm{Y}$. Since the local ROM space
$V^{\ell}(\nu)$ is spanned by the first $\ell$ left singular vectors of
$\widetilde{\Phi}(\nu)$ and (19) is the SVD of
$\widetilde{\Phi}(\nu)$, _the coordinates_ of the local reduced basis in the
universal basis $\\{\mathbf{w}_{i}\\}_{i=1}^{\widetilde{M}}$ are the first
$\ell$ left singular vectors of $\mathrm{C}(\nu)$, i.e. the first $\ell$
columns of $\mathrm{U}_{c}$. The pre-projected initial velocity, mass and
stiffness matrices and $\widetilde{\mathrm{N}}$ are projected further into
$V^{\ell}(\nu)$. The projection is done through multiplication with the matrix
$\mathrm{U}_{c}$. This allows the execution of the proposed ROM (7).
If the ROM needs to be rerun for a different value of $\nu$, only calculations
starting with (18) need to be redone, without any reference to the offline
stage data.
| | Offline part | | Online part | |
---|---|---|---|---|---
Spaces | $\mathbf{V}_{h}$ | $\supset$ ($-$LRTD$\rightarrow$) | $\widetilde{\mathbf{V}}_{h}$ | $\supset$ ($-$LRLA$\rightarrow$) | $\mathbf{V}^{\ell}(\nu)$
| $\mbox{span}\\{\xi_{h}^{i}\\}_{i=1}^{M}$ | | $\mbox{span}\\{w_{h}^{i}\\}_{i=1}^{\widetilde{M}}$ | | $\mbox{span}\\{u_{h}^{i}(\nu)\\}_{i=1}^{\ell}$
Matrices | FOM matrices | | Projected matrices | | Double-projected matrices
Table 1: Data structure of the LRTD–ROM. LRLA stands for “low-rank linear
algebra”, meaning that all online calculations are done with low-dimensional
objects.
We summarize the structure of LRTD-ROM in Table 1. The intermediate finite
element space $\widetilde{\mathbf{V}}_{h}$ is the finite element counterpart
of the universal space $\widetilde{V}$ from (17). The basis
$\\{w_{h}^{i}\\}_{i=1}^{\widetilde{M}}$ of $\widetilde{\mathbf{V}}_{h}$ is
given in terms of its coordinates in the nodal basis
$\\{\xi_{h}^{i}\\}_{i=1}^{M}$. In turn, the basis
$\\{u_{h}^{i}(\nu)\\}_{i=1}^{\ell}$ of the $\nu$-specific local space
$\mathbf{V}^{\ell}(\nu)$ is given by its coordinates in
$\\{w_{h}^{i}\\}_{i=1}^{\widetilde{M}}$. Hence, FE matrices are first
projected on $\widetilde{\mathbf{V}}_{h}$ during the offline phase. They are
stored online and double-projected for any incoming $\nu$ before executing the
ROM (7). In general, it holds
$\mbox{dim}(\mathbf{V}_{h})\gg\mbox{dim}(\widetilde{\mathbf{V}}_{h})\gg\mbox{dim}(\mathbf{V}^{\ell}(\nu))$,
e.g. in the example from the next section we have
$\mbox{dim}(\mathbf{V}_{h})=121,064$,
$\mbox{dim}(\widetilde{\mathbf{V}}_{h})=404$, and
$\mbox{dim}(\mathbf{V}^{\ell}(\nu))=20$.
## 4 Numerical tests
We now test the proposed LRTD–ROM on a benchmark test for incompressible
Navier-Stokes flow. After describing the test problem setup, FOM and ROM
construction details, we test the proposed ROM’s accuracy in predicting a
branch of solutions of the Navier-Stokes equations for $Re$ in [25,400], using
snapshots from solutions computed for 13 and 25 different viscosities.
### 4.1 Test problem description
The test problem we consider is 2D channel flow past a cylinder [24]. The
domain is [0, 2.2]$\times$[0, 0.41], which represents a rectangular channel,
with a cylinder of radius $0.05$ centered at $(0.2,0.2)$; see Figure 1.
Figure 1: Shown above is the domain for the flow past a cylinder test
problem.
There is no external forcing for this test, no-slip boundary conditions are
enforced on the walls and cylinder, and an inflow/outflow profile
$u_{1}(0,y,t)=u_{1}(2.2,y,t)=\frac{6}{0.41^{2}}y(0.41-y),\qquad u_{2}(0,y,t)=u_{2}(2.2,y,t)=0$
is enforced as a Dirichlet boundary condition. Of interest for comparisons and
accuracy testing are the predicted lift and drag, and for these quantities we
use the definitions
$c_{d}(t)=20\int_{S}\left(\nu\frac{\partial u_{t_{S}}(t)}{\partial n}n_{y}-p(t)n_{x}\right)dS,\qquad c_{l}(t)=20\int_{S}\left(\nu\frac{\partial u_{t_{S}}(t)}{\partial n}n_{x}-p(t)n_{y}\right)dS,$
where $u_{t_{S}}$ is the tangential velocity, $S$ the cylinder, and $n=\langle
n_{x},n_{y}\rangle$ is the outward unit normal vector. For the calculations,
we used the global integral formulation from [15].
### 4.2 Full order model simulations
To study the performance of the LRTD–ROM with respect to the spatial mesh
refinement, we consider three regular triangulations $\mathcal{T}_{h}$ of
$\Omega$. The finest triangulation consists of 62,805 triangles, while the
inter-medium and the coarsest meshes have 30,078 and 8,658 triangles; the
coarsest mesh is illustrated in Figure 2. We note the triangulations are
constructed by first creating a Delaunay triangulation followed by a
barycenter refinement (Alfeld split). All FOM simulations used the scheme (3)
with time step $\Delta t=0.002$, and lowest order Scott-Vogelius elements as
described in section 2. With this choice of elements, the three meshes
provided 252,306, 121,064 and 35,020 total spatial degrees of freedom(dof).
For a given viscosity $\nu$, the corresponding Stokes solution was found with
this element choice and mesh to generate the initial condition.
Figure 2: Shown above is the coarsest mesh used for the flow past a cylinder
test simulations.
The viscosity parameter sampling set consists of $K$ viscosity values log-
uniformly distributed over the interval [$2.5\cdot 10^{-4},4\cdot 10^{-3}$],
which corresponds to $25\leq Re\leq 400$. $K=25$ was the maximum value we used
for training the ROM, and results are shown below for varying $K$.
All FOM simulations were run for $t\in[0,6]$, and by $t=5$ the von Karman
vortex street was fully developed behind the cylinder for $Re\gtrapprox 50$. For
$Re<50$, the flows had reached a steady state by $t=5$. For each $\nu$ from
the sampling set, 251 velocity snapshots were collected for $t\in[5,6]$ at
uniformly distributed time instances. This resulted in a $M\times K\times N$
snapshot tensor $\boldsymbol{\Phi}$, with $M$=dof for each mesh, $K$ different
viscosities, and $N=251$. We note that the Stokes extension of the boundary
conditions (which is very close to the snapshot average but preserves an
energy balance [21]) was subtracted from each snapshot used to build
$\boldsymbol{\Phi}$.
### 4.3 ROM basis and construction
Table 2 shows the dependence of the ranks of the tensor
$\widetilde{\boldsymbol{\Phi}}$ on the targeted compression accuracy
$\varepsilon$ and the FOM spatial resolution. The first rank determines the
universal space dimension. As expected, higher ranks are needed for better
accuracy. At the same time, the dependence on spatial resolution is marginal.
| Mesh 1 | Mesh 2 | Mesh 3
---|---|---|---
target accuracy / $M$ | 35,020 | 121,064 | 252,306
$\varepsilon=10^{-1}$ | [15, 12, 7] | [15, 11, 7] | [18, 13, 8]
$\varepsilon=10^{-2}$ | [74, 21, 40] | [78, 21, 40] | [89, 21, 45]
$\varepsilon=10^{-3}$ | [190, 22, 80] | [213, 22, 85] | [239, 22, 93]
$\varepsilon=10^{-4}$ | [365, 23, 113] | [404, 23, 124] | [444, 23, 135]
Table 2: HOSVD ranks of the $\varepsilon$-truncated LRTD for the snapshot
tensor.
Figure 3: Singular value decay for the POD matrix and the local LRTD matrix
for 10 parameter values. Left and right panels show results for interpolatory
and non-interpolatory LRTD–ROMs, respectively.
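To make the truncation behind Table 2 concrete, the following minimal Python/numpy sketch computes the mode ranks of an $\varepsilon$-truncated HOSVD on a toy random tensor; the per-mode tolerance $\varepsilon/\sqrt{3}\cdot\|\boldsymbol{\Phi}\|_{F}$ is an assumed convention for a 3-way tensor, not a detail taken from the paper.

```python
import numpy as np

# Minimal sketch of the epsilon-truncated HOSVD rank computation behind
# Table 2 (toy tensor; the per-mode tolerance eps/sqrt(3)*||Phi||_F is an
# assumed convention for a 3-way tensor, not taken from the paper).
def hosvd_ranks(Phi, eps):
    d = Phi.ndim
    tol = eps / np.sqrt(d) * np.linalg.norm(Phi)
    ranks = []
    for mode in range(d):
        unf = np.moveaxis(Phi, mode, 0).reshape(Phi.shape[mode], -1)
        s = np.linalg.svd(unf, compute_uv=False)
        res = np.sqrt(np.cumsum(s[::-1] ** 2))[::-1]    # res[r] = ||s[r:]||
        ranks.append(int(np.searchsorted(-res, -tol)))  # smallest admissible rank
    return ranks

Phi = np.random.rand(200, 25, 50)  # stands in for the M x K x N snapshot tensor
print(hosvd_ranks(Phi, 1e-2))
```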
Figure 3 illustrates why finding $\nu$-dependent local ROM spaces through
LRTD is beneficial compared to employing a POD space that is universal for
all parameter values. The faster decay of the singular values
$\sigma(\tilde{\Phi}(\nu))$ allows the desired accuracy to be attained with
lower ROM dimensions than the POD–ROM requires. At the same time, the decay
rate of $\sigma(\tilde{\Phi}(\nu))$ depends on $\nu$, with faster decay
observed for larger viscosity values, i.e. smaller Reynolds numbers.
Unsurprisingly, the snapshots collected for $\text{Re}<50$ show very little
variability, indicated by the abrupt decrease of $\sigma_{n}$ for $n>1$, since
the flow approaches an equilibrium state in these cases.
A plot of the first 7 modes of the non-interpolatory LRTD–ROM using
$\varepsilon=10^{-4}$ with $Re=110$ and $380$, and of the full POD constructed
with data from all the parameter values, is shown in Figure 4. We observe
that for the LRTD–ROM cases, the modes quickly go from larger scales in the
first few modes to much finer scales by mode 7, whereas for the full POD, the
first 7 modes are all still at larger scales. This is consistent with Figure
3, which shows that the singular values of the POD matrix decay much more
slowly, meaning more modes are needed to characterize the space.
Figure 4: Modes 2,3,4,5,6,7 (from top to bottom) for (left) the Re=110
LRTD–ROM, (center) the Re=380 LRTD–ROM, and (right) the universal POD basis.
### 4.4 ROM accuracy
Table 3: Relative $L^{2}$ norms of the errors between FOM and ROM solutions for three values of $Re$ that are not in the training set. The results are for $\ell=20$.
| Re=30 | | Re=110 | | Re=380 |
---|---|---|---|---|---|---
K | 13 | 25 | 13 | 25 | 13 | 25
interp. LRTD–ROM | 4.1e-7 | 2.1e-8 | 1.7e-3 | 1.7e-3 | 1.9e-2 | 1.6e-2
non-interp. LRTD–ROM | 1.6e-6 | 2.2e-7 | 2.8e-3 | 2.2e-3 | 1.8e-2 | 1.5e-2
POD–ROM | 7.6e-5 | 5.5e-5 | 5.6e-2 | 5.2e-2 | 1.6e-1 | 9.0e-2
We next study the dependence of the LRTD–ROMs’ solution accuracy on the
parameter sampling and the ROM design. We consider two training sets with
$K=13$ and $K=25$ viscosity parameters log-uniformly sampled in the parameter
domain, i.e.
$\nu_{i}=\nu_{\min}\big{(}\frac{\nu_{\max}}{\nu_{\min}}\big{)}^{(i-1)/K}$ for
$i=1,\dots,K$. Other parameters of the ROMs were $\varepsilon=10^{-4}$ and
$\ell=20$ (dimension of the ROM space). We run ROM simulations for $Re=100$,
which corresponds to $\nu=10^{-3}$ from the training sets, and for
$Re\in\\{30,\,110,\,380\\}$, which are viscosity parameters not in the
training sets. For the initial flow velocity we use linear interpolation
between known velocity values at the two closest points from the training set
at $t=5$.
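For concreteness, here is a minimal numpy sketch of the log-uniform sampling rule above; the relation $Re=0.1/\nu$ used for the printout is an inferred scaling, consistent with the stated correspondence between $[2.5\cdot 10^{-4},4\cdot 10^{-3}]$ and $25\leq Re\leq 400$.

```python
import numpy as np

# Log-uniform sampling of the viscosity parameter, mirroring the rule
# nu_i = nu_min * (nu_max/nu_min)^((i-1)/K) stated above; the endpoints
# match the training interval [2.5e-4, 4e-3].
def sample_viscosities(K, nu_min=2.5e-4, nu_max=4e-3):
    i = np.arange(1, K + 1)
    return nu_min * (nu_max / nu_min) ** ((i - 1) / K)

nu = sample_viscosities(13)
print(0.1 / nu)  # Re = 0.1 / nu, consistent with 25 <= Re <= 400
```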
Table 3 shows the relative $L^{2}((5,6)\times\Omega)$ errors for the three
Reynolds numbers, for both interpolatory and non-interpolatory versions of
the LRTD–ROM (for both $K=13$ and $K=25$), and compares both to the POD–ROM.
We observe that the POD–ROM is worse in all cases, often by an order of
magnitude or more. The interpolatory LRTD–ROM is somewhat more accurate in
this measure than the non-interpolatory one for $Re=110$ and $Re=30$, but has
similar accuracy at $Re=380$.
Figure 5: Prediction of lift and drag coefficients for Re=100 (which is in
the training set). Number of parameters in the training set is $K=13$, and
$\ell=20$.
Figure 6: Prediction of lift and drag coefficients for Re=110 and Re=380 not
from the training set. Number of parameters in the training set is $K=13$, and
$\ell=20$.
Figure 7: Prediction of lift and drag coefficients for Re=110 and Re=380 not
from the training set. Number of parameters in the training set is $K=25$, and
$\ell=20$.
In addition to accuracy in norms, we are interested in the ability of the
tensor ROMs to predict critical statistics such as the drag and lift
coefficients for the cylinder. We are interested in the prediction accuracy
of the method both outside the training set and beyond the time interval used
to collect the snapshots. The results for $Re=100$ and $K=13$ are shown in
Figure 5. We note that $Re=100$ is in the training set, and observe that the
interpolatory and non-interpolatory LRTD-ROM results were quite good,
matching the FOM lift and drag quite well. The POD-ROM results, however, were
very inaccurate. As discussed above, the POD-ROM may need many more modes to
capture the finer-scale detail that the LRTD-ROMs resolve.
The results for $Re=110$ and $Re=380$ are presented in Figure 6 for $13$
parameters in the training set and in Figure 7 for $25$ parameters in the
training set. The plots are for $6\leq t\leq 8$, since we are not starting
with the “correct” flow state and the system may take some time to reach the
quasi-equilibrium (periodic) state. We observe that for both $K=13$ and
$K=25$, the POD-ROM results are inaccurate. For $K=25$, both interpolatory
and non-interpolatory results are reasonably accurate, although for $Re=380$
the drag predictions show some slight error. For $K=13$, results are less
accurate; in this case, we observe the non-interpolatory LRTD-ROM results to
be slightly better than the interpolatory ones (similar accuracy is found for
total kinetic energy plots, which are omitted).
Figure 8: Prediction of drag and lift coefficients for $Re=380$ with non-
interpolatory LRTD–ROM dimensions $\ell=20$ and $\ell=30$. Number of
parameters in the training set is $K=25$.
Increasing the dimension of the LRTD–ROM improves the accuracy, as should be
expected. The effect is illustrated in Figure 8, which shows the results for
the non-interpolatory LRTD–ROM with $\ell=20$ and $\ell=30$. While the lift
predictions are almost indistinguishable, for the drag coefficient $\ell=30$
brings the ROM and FOM values into agreement, while $\ell=20$ slightly
overshoots the minimal and maximal values for $Re=380$. Thus for further
simulations we chose the non-interpolatory LRTD–ROM with $\ell=30$, trained
on the set of $25$ parameters.
Figure 9: Minimal and the maximal values of drag and lift coefficients along
the smooth branch of solutions.
### 4.5 Predicting an entire branch of solutions
We are interested in applying the ROM to approximate the flow statistics
along the whole branch of solutions; for these tests we use the
non-interpolatory LRTD–ROM with $\ell=30$ and $K=25$. To this end, we run the
LRTD-ROM for 99 viscosity values log-uniformly sampled in our parameter
domain and calculate the solution up to the final time $T=50$, starting from
an initial condition interpolated from snapshots at $t_{0}=5$. Figure 9 shows
the variation of the predicted lift and drag coefficients with $Re$, after a
quasi-periodic state is reached in each flow. We find the transition point
from steady-state to periodic flow to be near $Re=48$, which agrees closely
with the literature [4, 27].
Figure 10: Shown below are spectra of the lift coefficients for varying
$Re$.
We next consider the spectrum of the flow statistics by computing the Fourier
transform of the lift coefficient over the time interval $t\in[10,50]$. In
Figure 10, this is shown for $Re$=50, 100, 200 and 400. For $Re=50$, only one
spike is observed, indicating a single dominant frequency. For $Re=100$, some
smaller spikes appear in the plot, but they are nearly 3 orders of magnitude
smaller than the largest spike and have minimal effect on the solution's
periodicity. By $Re=200$, the second-biggest spike is a little over two
orders of magnitude smaller than the biggest one, and by $Re=400$ there is
less than two orders of magnitude difference, suggesting that this flow is
moving away from a purely periodic flow to one with more complex behavior in
time.
Besides building more effective ROMs, the LRTD may offer new insights into
the properties of parametric solutions. As an example, consider the HOSVD
singular vectors of $\boldsymbol{\Phi}$. Figure 11 shows several dominant
vectors in the time and parameter directions, namely the first several HOSVD
singular vectors of the snapshot tensor in the time and parameter modes. The
growth in amplitude of the parameter singular vectors with increasing Re
suggests a higher sensitivity of the flow patterns to variation of the
viscosity parameter for flows with larger Reynolds numbers.
Figure 11: First four HOSVD singular vectors in time and parameter modes.
The first singular vectors in the time and parameter directions are
approximately constant, cf. Fig. 11.
Figure 12: Frobenius norms of the space–time structures $\Phi_{k}$ from
decomposition (20).
This indicates that the parametric solution possesses dominant
space–parameter and space–time states which are persistent in time and
Reynolds number, respectively. Let us focus on persistence in Reynolds
number. For HOSVD, the $\boldsymbol{\sigma}$ vectors from (9) are the first
$\widetilde{K}$ singular vectors of the second-mode unfolding of
$\boldsymbol{\Phi}$, and so $\boldsymbol{\Phi}$ can be written as the sum of
$K$ direct products:
$\displaystyle\boldsymbol{\Phi}=\sum_{k=1}^{K}\Phi_{k}\otimes\boldsymbol{\sigma}^{k},$ (20)
where $\Phi_{k}\in\mathbb{R}^{M\times N}$ are space–time states (note that
these are not actual physical states) whose evolution in Reynolds number is
determined by $\boldsymbol{\sigma}^{k}$. The matrices $\Phi_{k}$ are mutually
orthogonal in the sense of the Frobenius inner product,
$\mathrm{tr}(\Phi_{k}\Phi_{j}^{T})=0$ for $k\neq j$, and since
$\|\Phi_{k}\|_{F}$ equals the $k$-th singular value of the second-mode
unfolding of $\boldsymbol{\Phi}$, it also holds that
$\|\Phi_{1}\|_{F}\geq\|\Phi_{2}\|_{F}\geq\dots\geq\|\Phi_{K}\|_{F}$.
Figure 12 shows the norms of the persistent space–time states. We see that
$\Phi_{1}$ is not overly dominant, and about 20 persistent space–time states
contribute to the parametric solution. Using the orthogonality of the
$\boldsymbol{\sigma}^{k}$, one finds from (20) that
$\displaystyle\Phi_{k}=\boldsymbol{\Phi}\times_{2}\boldsymbol{\sigma}^{k}.$ (21)
Therefore, the $\Phi_{k}$ are easily recovered for $k=1,\dots,\widetilde{K}$
once the $\boldsymbol{\sigma}^{k}$ are provided by the HOSVD LRTD. From (21)
and the observation that $\boldsymbol{\sigma}^{1}$ is nearly constant in Re,
we conclude that the dominant space–time state $\Phi_{1}$ is close to a
scaled average of the parametric solution in Reynolds number (a similar
conclusion holds for the dominant space–parameter state: it is close to a
scaled time-averaged solution).
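A small numpy sketch (toy dimensions, random data standing in for the snapshot tensor) illustrates how the space–time states of (20) are recovered via the mode-2 contraction (21), and how their Frobenius norms reproduce the second-mode singular values:

```python
import numpy as np

# Toy illustration of (20)-(21): recover the space-time states Phi_k by a
# mode-2 contraction of the snapshot tensor with the singular vectors of
# its second-mode unfolding (random data stands in for real snapshots).
M, K, N = 50, 25, 40
Phi = np.random.rand(M, K, N)

unfold2 = np.transpose(Phi, (1, 0, 2)).reshape(K, M * N)  # second-mode unfolding
U, s, _ = np.linalg.svd(unfold2, full_matrices=False)     # columns of U are sigma^k

# eq. (21): Phi_k = Phi x_2 sigma^k; their Frobenius norms are the singular values
Phi_k = [np.einsum('mkn,k->mn', Phi, U[:, k]) for k in range(K)]
assert np.allclose([np.linalg.norm(P) for P in Phi_k], s)

# eq. (20): Phi is recovered as the sum of direct products Phi_k (x) sigma^k
Phi_rec = sum(np.einsum('mn,k->mkn', Phi_k[k], U[:, k]) for k in range(K))
assert np.allclose(Phi_rec, Phi)
```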
To gain further insight into the structure of $\Phi_{1}$, we display in Figure
13 the dominant spatial modes of $\Phi_{1}$. These are obtained by computing
the SVD of $\Phi_{1}$. Singular values of $\Phi_{1}$ drop rapidly so that the
first spatial mode, shown in Figure 13, captures nearly 99.9% of the energy.
Figure 13: Shown above are the first four spatial modes, taken from
$\Phi_{1}$, the first space-time persistent state.
## 5 Conclusions
The LRTD-ROM, an extension of a POD-ROM for parametric problems, retains
essential information about model variation in the parameter domain in a
reduced order format. When applied to the incompressible Navier-Stokes
equations parameterized with the viscosity coefficient, the LRTD-ROM
facilitates accurate prediction of flow statistics along a smooth branch of
solutions. Moreover, it enables the identification of parameter structures
that may not be apparent through standard POD analysis.
Previously, LRTD-ROMs have demonstrated success in addressing multi-parameter
linear and scalar nonlinear problems. A natural next step is to extend the
approach to multi-parameter problems of fluid dynamics. Additionally, current
research efforts are directed towards developing LRTD-ROMs based on sparse
sampling of the parametric domain.
## Acknowledgments
The author M.O. was supported in part by the U.S. National Science Foundation
under award DMS-2309197. The author L.R. was supported by the U.S. National
Science Foundation under award DMS-2152623.
This material is based upon work supported by the National Science Foundation
under Grant No. DMS-1929284 while the authors were in residence at the
Institute for Computational and Experimental Research in Mathematics in
Providence, RI, during the semester program.
## References
* [1] F. Andreuzzi, N. Demo, and G. Rozza, A dynamic mode decomposition extension for the forecasting of parametric dynamical systems, SIAM Journal on Applied Dynamical Systems, 22 (2023), pp. 2432–2458.
* [2] D. N. Arnold and J. Qin, Quadratic velocity/linear pressure Stokes elements, Advances in Computer Methods for Partial Differential Equations, 7 (1992), pp. 28–34.
* [3] A. Caiazzo, T. Iliescu, V. John, and S. Schyschlowa, A numerical investigation of velocity–pressure reduced order models for incompressible flows, Journal of Computational Physics, 259 (2014), pp. 598–616.
* [4] J.-H. Chen, W. Pritchard, and S. Tavener, Bifurcation for flow past a cylinder between parallel planes, Journal of Fluid Mechanics, 284 (1995), pp. 23–41.
* [5] P. Constantin and C. Foias, Global Lyapunov exponents, Kaplan–Yorke formulas and the dimension of the attractors for 2D Navier–Stokes equations, Communications on Pure and Applied Mathematics, 38 (1985), pp. 1–27.
* [6] L. De Lathauwer, B. De Moor, and J. Vandewalle, A multilinear singular value decomposition, SIAM Journal on Matrix Analysis and Applications, 21 (2000), pp. 1253–1278.
* [7] A. Ern and J.-L. Guermond, Theory and practice of finite elements, vol. 159, Springer, 2004.
* [8] Z. Gao, Y. Lin, X. Sun, and X. Zeng, A reduced order method for nonlinear parameterized partial differential equations using dynamic mode decomposition coupled with k-nearest-neighbors regression, Journal of Computational Physics, 452 (2022), p. 110907.
* [9] V. Girault and P.-A. Raviart, Finite element methods for Navier–Stokes equations: theory and algorithms, vol. 5, Springer Science & Business Media, 2012.
* [10] M. Guo and J. S. Hesthaven, Data-driven reduced order modeling for time-dependent problems, Computer Methods in Applied Mechanics and Engineering, 345 (2019), pp. 75–99.
* [11] W. Hackbusch, Tensor spaces and numerical tensor calculus, vol. 42, Springer, 2012.
* [12] W. Hackbusch and S. Kühn, A new scheme for the tensor representation, Journal of Fourier analysis and applications, 15 (2009), pp. 706–722.
* [13] M. W. Hess, A. Quaini, and G. Rozza, A data-driven surrogate modeling approach for time-dependent incompressible Navier–Stokes equations with dynamic mode decomposition and manifold interpolation, Advances in Computational Mathematics, 49 (2023), p. 22.
* [14] J. G. Heywood and R. Rannacher, Finite-element approximation of the nonstationary Navier–Stokes problem. part iv: error analysis for second-order time discretization, SIAM Journal on Numerical Analysis, 27 (1990), pp. 353–384.
* [15] V. John, Reference values for drag and lift of a two dimensional time-dependent flow around a cylinder, International Journal for Numerical Methods in Fluids, 44 (2004), pp. 777–788.
* [16] E. N. Karatzas, M. Nonino, F. Ballarin, and G. Rozza, A reduced order cut finite element method for geometrically parametrized steady and unsteady Navier–Stokes problems, Computers & Mathematics with Applications, 116 (2022), pp. 140–160.
* [17] T. G. Kolda and B. W. Bader, Tensor decompositions and applications, SIAM Review, 51 (2009), pp. 455–500.
* [18] A. V. Mamonov and M. A. Olshanskii, Interpolatory tensorial reduced order models for parametric dynamical systems, Computer Methods in Applied Mechanics and Engineering, 397 (2022), p. 115122.
* [19] A. V. Mamonov and M. A. Olshanskii, Analysis of a tensor POD–ROM for parameter dependent parabolic problems, arXiv preprint arXiv:2311.07883, (2023).
* [20] A. V. Mamonov and M. A. Olshanskii, Tensorial parametric model order reduction of nonlinear dynamical systems, arXiv preprint arXiv:2302.08490 (to appear in SIAM Journal on Scientific Computing), (2023).
* [21] M. Mohebujjaman, L. Rebholz, X. Xie, and T. Iliescu, Energy balance and mass conservation in reduced order models of fluid flows, Journal of Computational Physics, 346 (2017), pp. 262–277.
* [22] F. Pichi, F. Ballarin, G. Rozza, and J. S. Hesthaven, An artificial neural network approach to bifurcating phenomena in computational fluid dynamics, Computers & Fluids, 254 (2023), p. 105813.
* [23] R. Reyes, O. Ruz, C. Bayona-Roa, E. Castillo, and A. Tello, Reduced order modeling for parametrized generalized Newtonian fluid flows, Journal of Computational Physics, 484 (2023), p. 112086.
* [24] M. Schäfer and S. Turek, The benchmark problem ‘flow around a cylinder’: flow simulation with high performance computers II, in E.H. Hirschel (Ed.), Notes on Numerical Fluid Mechanics, 52, Braunschweig, Vieweg (1996), pp. 547–566.
* [25] G. Stabile and G. Rozza, Finite volume POD-Galerkin stabilised reduced order methods for the parametrised incompressible Navier–Stokes equations, Computers & Fluids, 173 (2018), pp. 273–284.
* [26] L. R. Tucker, Some mathematical notes on three-mode factor analysis, Psychometrika, 31 (1966), pp. 279–311.
* [27] C. H. Williamson, Vortex dynamics in the cylinder wake, Annual Review of Fluid Mechanics, 28 (1996), pp. 477–539.
# Grover’s Algorithm Offers No Quantum Advantage
E.M. Stoudenmire, Center for Computational Quantum Physics, Flatiron
Institute, 162 5th Avenue, New York, NY 10010, USA
Xavier Waintal, PHELIQS, Université Grenoble Alpes, CEA, Grenoble INP, IRIG,
Grenoble 38000, France
###### Abstract
Grover’s algorithm is one of the primary algorithms offered as evidence that
quantum computers can provide an advantage over classical computers. It
involves an “oracle” (external quantum subroutine) which must be specified for
a given application and whose internal structure is not part of the formal
scaling of the quantum speedup guaranteed by the algorithm. Grover’s algorithm
also requires exponentially many steps to succeed, raising the question of its
implementation on near-term, non-error-corrected hardware and indeed even on
error-corrected quantum computers. In this work, we construct a quantum-
inspired algorithm, executable on a classical computer, that performs
Grover's task in a linear number of calls to the oracle (an exponentially
smaller number than Grover's algorithm requires) and demonstrate this
algorithm explicitly for boolean satisfiability problems (3-SAT). Our finding
implies that there is no a priori theoretical quantum speed-up associated
with Grover's algorithm. We
critically examine the possibility of a practical speed-up, a possibility that
depends on the nature of the quantum circuit associated with the oracle. We
argue that the unfavorable scaling of the success probability of Grover’s
algorithm, which in the presence of noise decays as the exponential of the
exponential of the number of qubits, makes a practical speedup unrealistic
even under extremely optimistic assumptions on both hardware quality and
availability.
## I Introduction
Two classes of algorithms dominate the landscape of possible applications for
quantum computing. The first class computes a non-trivial result then extracts
this result using the quantum Fourier transform. This class includes the
seminal Shor’s algorithm for integer factorization [1, 2, 3] as well as the
quantum phase estimation algorithm [3] proposed for solving quantum chemistry
problems and several other algorithms [4]. Some of these algorithms, in
particular Shor’s, offer an exponential speedup over any known classical
methods, though only for a handful of rather specific applications.
The second class includes Grover’s algorithm (GA) and its generalizations,
such as amplitude amplification [5, 6, 7]. Grover’s algorithm promises a less
spectacular quadratic speedup, but in return enjoys wide popularity due to its
many possible use cases. Theoretically, quite a large number of problems could
be accelerated by simply replacing the critical part of the classical
algorithm by a call to a Grover’s routine implemented on a quantum computer.
It is also appealing that the quadratic speedup of Grover’s algorithm can be
put on firm mathematical grounds and is provably optimal under certain
assumptions [8], in contrast to Shor’s where the speedup is only conjectured.
This quadratic speedup is very convenient theoretically as it does not require
any knowledge of the oracle which encodes the problem into the quantum
algorithm. The class of problems for which Grover’s algorithm can be applied
include instances of NP-complete problems which are extremely challenging
computationally. Applications where Grover's algorithm is the main subroutine
range from optimizing functions [9], e.g. for analyzing high energy physics
data [10], to solving various graph problems [11], to option pricing [12], to
pattern recognition (finding a string in a text) [13], to various forms of
machine learning (including supervised learning [14], perceptrons [15],
active learning agents [15] and reinforcement learning [16]). Specific
versions have been implemented on small quantum processors with up to $n=5$
qubits [17, 18, 19, 20], but the success probability is still low for the
largest systems.
Grover’s algorithm solves the problem of inverting an unknown function. Given
a function $y=f(b)$ where $b$ can take $N=2^{n}$ values, Grover’s algorithm
finds the value $b=f^{-1}(y)$ for a given $y$ in only $\sqrt{N}$ calls to the
quantum “oracle” function which implements $f(b)$ while a naive exhaustive
search would require about $N$ calls to the corresponding classical function.
Grover’s theoretical speedup is of a peculiar nature: it is an abstract
speedup that considers the oracle as a black box function and counts the
computational cost solely in terms of the number of calls to the oracle. In
any actual implementation, however, the oracle must be realized as a specific
quantum circuit, hence the internal structure of $f(b)$ must be exposed. In
particular, the computational cost of one call to the $f(b)$ oracle may (and
almost always will) depend on $N$ in some way.
In this work, we argue that there is no generic theoretical speed-up in
Grover’s algorithm. To this end, we construct a quantum inspired Grover
algorithm (QiGA) that solves Grover’s problem on a classical computer. In
contrast to the usual assumptions made to discuss quantum advantage in this
context, the input of QiGA is not the classical oracle $f(b)$ but the quantum
oracle, which is the same quantum circuit that would be given to the quantum
computer. The comparison with a quantum computer is fairer: both computers are
given the same input. We find that QiGA solves the problem in at most $O(\log
N)$ calls and in many cases a single call to the quantum oracle. In other
words, if one is capable of simulating the quantum circuit of the oracle once,
then one can obtain the same result that would require a quantum computer to
make exponentially many calls to the oracle. Our findings provide a new
comparison point for the discussion of any possible advantage.
In the second half of this work we discuss the implications of our work for
the possibility of a practical quantum advantage using Grover. We argue that
if the oracle is too hard to simulate classically, so that a quantum computer
becomes necessary, then generic, practical problems arise. We show Grover’s
algorithm to be very sensitive to noise with the success probability decaying
as the exponential of an exponential. Beyond ruling out near-term
implementations for more than a few qubits, such a rapid accumulation of noise
would overwhelm any known quantum error correction protocol.
This is why we say Grover’s algorithm offers no quantum advantage. By _quantum
advantage_ we mean for a specific problem of a fixed size, that a quantum
algorithm running on actual hardware would reach the solution faster than the
best classical strategy. For a given problem, if the oracle is easy to
simulate then a quantum computer is not needed, while if it is hard to
simulate the quantum implementation will be, as we shall see, beyond the reach
of foreseeable hardware.
This article starts with a summary of the main results (Section II) for
readers mostly interested in the implications of our work. The rest of the
article is split into two parts. In the first (Sections III and IV), we
construct the quantum inspired algorithm that mimics Grover’s algorithm in a
classical computer but takes advantage of its internal structure to require
exponentially fewer calls to the oracle. We explicitly demonstrate and test
this algorithm using both random and quasi-one-dimensional instances of NP-
complete random boolean satisfiability problems (3-SAT). In the second part of
the article, we examine the implication of our findings on the possibility of
a quantum speed-up (Sections VI and VII). In particular, Section VI
establishes the lack of resilience of Grover’s algorithm to noise, an
important aspect needed for the discussion of a possible quantum advantage.
## II Summary of the main results
Figure 1: Schematic showing the main implications of our quantum-inspired
Grover’s algorithm (QiGA) for the possibility of Grover’s algorithm offering a
genuine quantum advantage.
Figure 2: Entanglement entropy, in units of $\log(2)$, of the quantum state
during a simulation of Grover’s algorithm for $n=10$ qubits for the optimal
number $r=25$ of iterations. Substeps between each complete iteration show the
properties of the state after each layer of the quantum circuits described in
Appendix A which implement the oracle and diffusion operators. The entropy is
dominated by the oracle step for the first half of the algorithm, then becomes
dominated by the diffusion step for the second half of the algorithm. During
the simulation the matrix product state (MPS) rank (not shown) goes up and
down in a sawtooth shape with maxima of $\chi=11$ during an iteration and
minima of $\chi=2$ between each iteration. The dashed line along the lower
envelope is the theoretical prediction for the entanglement between iterations
of the algorithm (the oscillations are a small $1/N$ effect). The lower panel
shows a zoom of the region in the box in the upper panel, labeling substeps
where the oracle and diffuser circuits act.
The logic behind our main claim is summarized in Fig. 1. The same “oracle”
circuit that defines an implementation of GA on a quantum computer is given to
a classical simulator which is limited only by entanglement (a tensor
network). If this simulator can calculate a single oracle circuit with a
complexity better than $2^{n/2}$, our QiGA algorithm solves the GA problem
parametrically faster than a quantum computer would. Even when the simulator
scales as a less favourable exponential, we explicitly demonstrate cases where
simulating the oracle can be done in just hours on a desktop computer for
$n\lesssim 40$ qubits and likely on a supercomputer for $n\lesssim 80$ qubits.
For larger numbers of qubits there must be some instances of NP-complete
problems where simulating the oracle circuit remains infeasible. Yet for such
problems, GA would require at least $2^{80/2}=2^{40}\approx 10^{12}$
applications of the oracle circuit on a quantum computer, which translates
into an astronomically large time-to-solution even under favourable hardware
assumptions.
At the root of this work is the observation that, in between the calls to the
oracle, the level of entanglement present in the internal state of a quantum
computer running GA is extremely low. In between each iteration of GA, the
entanglement entropy between any two subgroups of qubits is at most $\log(2)$.
In other words, GA strongly leverages the ability of quantum mechanics to
create superpositions (quantum parallelism) but it barely uses the possibility
to entangle states.
Before describing our “quantum inspired” GA (where we use the resources
available in a classical computer in the most efficient way), we start with a
“simulation” of GA (where we only use operations implementable on a quantum
computer). Figure 2 shows the entanglement entropy of the quantum state during
a simulation of Grover’s algorithm for $n=10$ qubits, using the quantum
circuit described in Appendix A to implement the oracle and diffusion steps.
As claimed, the entanglement in between iterations never exceeds $\log(2)$, a
value which is reached when the algorithm is halfway done. The entanglement
entropy does become higher during the partial application of the oracle and
diffusion operator circuits. The value of this intra-oracle entanglement
barrier is strongly problem dependent and will determine the performance of
the quantum inspired GA.
As we shall see, the low-entanglement states occurring in GA at integer and
half-integer iteration numbers have a simple classical representation in
terms of a rank-2 “matrix product state” [21, 22, 23] (MPS), which can easily
be kept
on a classical computer at a linear cost in the number $n$ of qubits just by
storing $2n$ matrices of size $2\times 2$. Similarly, the oracle itself is a
rank-2 “matrix product operator” [24, 25] (MPO), another standard object of
tensor network theory.
A second, very important observation of this work is that being able to
compute the action of the oracle circuit on the initial product state (the
post-oracle state) is always sufficient to complete GA in a _single_ call to
the oracle. In fact, it is sufficient to even just compute $\sim\log(N)$
amplitudes of this state in the $|\pm\rangle$ basis. In this way the quantum
inspired algorithm differs from a mere _simulation_ of GA that would still
require an exponentially large number $\sqrt{N}=2^{n/2}$ of applications of
the oracle circuit. Hence, the initial problem is mapped onto the following
question: given a quantum circuit that implements a given oracle, how can one
compute its action on an MPS? There exist standard techniques for computing
the outputs of quantum circuits as tensor networks [26, 24, 27, 28]. We
illustrate them in two cases: in the first we take an explicit problem (known
as 3-SAT) and show that the explicit knowledge of the oracle’s quantum circuit
can be exploited to enact the oracle efficiently in many cases. In the second,
we consider the generic case of an arbitrary quantum circuit and show that the
full post-oracle state can be constructed from $O(n)$ amplitudes of this state
using the so-called tensor cross interpolation algorithm.
It remains to discuss the implications of our findings for the possibility of
quantum advantage using GA. On a theoretical level, our quantum inspired GA
requires exponentially fewer calls to the oracle. In turn, a classical
calculation of the quantum oracle may be exponentially harder than the
corresponding task on a quantum computer, depending on the quantum circuit
structure and depth. Hence, which of the two approaches (classical or
quantum) is faster depends on the actual problem and cannot be decided a
priori. In particular, oracles whose quantum circuits have small depth (and
therefore more of a chance of eventually being implemented in quantum
hardware) can be solved trivially using our quantum inspired approach.
An interesting corollary of the quantum inspired GA relates to classical
complexity theory. It is widely believed that $P\neq NP$, meaning the
so-called NP-complete problems are exponentially difficult in the hardest
cases. Since NP-complete problems, such as the random 3-SAT examples we
consider below, can be solved with GA, it follows that $P\neq NP$ implies
that in at least one instance the entanglement barrier of the 3-SAT oracle
must be exponentially high. Otherwise, because the cost of MPS simulations is
determined by the entanglement, a single call to the oracle could be
simulated in polynomial time, and hence the full problem as well, which would
imply $P=NP$. Thus there must be a direct connection between the complexity
of a classical problem and the entanglement level of the associated quantum
circuit.
On a practical level, our quantum inspired GA puts very stringent conditions
on putative quantum hardware that could be used to implement GA. In the
presence of noise or imperfections, the probability of success of GA
decreases exponentially with the number of applied gates in the circuit.
Since the number of oracle calls, hence the number of gates, also scales
exponentially with $n$ in GA, it follows that the overall success rate decays
as an “exponential of an exponential”, i.e. very quickly. We estimate the
associated constraints on the hardware in terms of qubit number $n$, noise
level, and quantum computer clock frequency, and conclude that none of the
requirements are realistic, even if fault-tolerant quantum technologies were
available.
## III Problem formulation
Grover’s algorithm [5, 6] aims to harness the potential of quantum
parallelism by gradually extracting the result of a parallel function
evaluation through a series of _Grover iterations_. For a problem with a
search space of size $N$, for which $n$ qubits are needed such that
$N=2^{n}$, the number of Grover iterations needed to extract the result
scales as $\sqrt{N}$, which is to be compared to worst-case classical
strategies, such as guessing solutions at random or performing an exhaustive
search, whose cost scales as $N$.
### III.1 Notation and Problem Setting
Let $b\in\\{0,1,\cdots,N-1\\}$ be an integer whose binary representation is
$b_{n-1}\cdots b_{1}b_{0}$, that is $b=\sum_{i=0}^{n-1}b_{i}2^{i}$ with
$b_{i}\in\\{0,1\\}$. We denote by $|b\rangle=|b_{n-1}\cdots b_{1}b_{0}\rangle$
the corresponding $n$-qubit state in the computational basis.
Let $f(b)$ be a function that takes a bitstring $b$ as input and returns
$\displaystyle f(b)=\begin{cases}1,&\text{if}\ b=w\\ 0,&\text{if}\ b\neq w\end{cases}\ \ .$ (1)
Here $w$ is a (unknown) specific bitstring. GA aims to solve the problem of
finding the value of $w$ in as few calls to the function $f$ as possible. This
problem can be viewed as inverting $f$, that is, computing $w=f^{-1}(1)$.
GA also assumes one can implement an _oracle operator_ $U_{w}$ such that for
states $|b\rangle$ in the computational basis
$\displaystyle U_{w}|b\rangle=(-1)^{f(b)}|b\rangle.$ (2)
Since quantum computers can perform classical logic (at the price of adding
ancilla qubits to ensure reversibility), a classical algorithm that computes
$f(b)$ can be turned into a quantum circuit that implements Eq. (2). The
explicit form of $U_{w}$ reads,
explicit form of $U_{w}$ reads,
$\displaystyle U_{w}|b\rangle=\begin{cases}-|b\rangle,&\text{if}\ b=w\\\
+|b\rangle,&\text{if}\ b\neq w\end{cases}$ (3)
therefore $U_{w}$ is equivalent to the operator
$\displaystyle U_{w}=1-2|w\rangle\langle w|\ .$ (4)
However, for any real application of practical interest, one does not know the
value of $w$ ahead of time and only knows how to implement $U_{w}$ as a
quantum circuit based on the function $f$.
GA straightforwardly generalizes to the case of multiple solutions
$\\{w^{\alpha}\\}_{\alpha=1}^{S}$ such that $f(w^{\alpha})\equiv 1$. One
defines the oracle as
$\displaystyle U_{w}=1-2\sum_{\alpha=1}^{S}|w^{\alpha}\rangle\langle
w^{\alpha}|.$ (5)
Each solution $w^{\alpha}$ has a binary representation
$w^{\alpha}_{n-1}\cdots w^{\alpha}_{1}w^{\alpha}_{0}$. In this article, we
focus on the
case where the problem has a fixed number of solutions $S$ (or more
generically where $S$ grows at most polynomially with $n$). For problems that
have an exponential number of solutions, our algorithm would have to be
revisited, but we conjecture that easy classical solutions exist in that case.
For each qubit, we define the two states $|+\rangle$ and $|-\rangle$ as,
$\displaystyle|\pm\rangle=\frac{|0\rangle\pm|1\rangle}{\sqrt{2}}$ (6)
and the equal weight superposition state $|s\rangle$ as,
$\displaystyle|s\rangle$ $\displaystyle=|+++\cdots+\rangle$ (7)
$\displaystyle=\frac{1}{\sqrt{2^{n}}}\sum_{x_{n-1}\cdots
x_{0}\in\\{0,1\\}^{n}}|x_{n-1}\cdots x_{0}\rangle\ .$ (8)
Last, GA requires a second operator, the diffusion operator $U_{s}$ that has a
structure similar to the oracle but with respect to the known state
$|s\rangle$:
$\displaystyle U_{s}=1-2|s\rangle\langle s|\ .$ (9)
### III.2 Definition of the Grover Algorithm
Given an oracle $U_{w}$, GA proceeds as follows:
1. initiate the qubits in state $|000\cdots 0\rangle$
2. apply a Hadamard gate on each qubit to obtain $|s\rangle$
3. apply the oracle operator $U_{w}$
4. apply the diffusion operator $U_{s}$
5. repeat steps 3 and 4 $q$ times
6. measure the qubits in the computational basis and find $|w\rangle$ with a probability very close to one
The optimal number of steps of GA can be estimated to be about $q\approx r$
with $r\equiv\frac{\pi}{4}\sqrt{N}=\frac{\pi}{4}2^{n/2}$. In the case where
there are multiple solutions, the measurement at the end produces one of the
$w^{\alpha}$ with uniform probability. GA has an appealing geometrical
interpretation [29]: $U_{w}$ and $U_{s}$ are mirror operators with respect to
the hyper-planes perpendicular to $w$ and $s$. It follows that the product
$U_{s}U_{w}$ is a rotation inside the ($|s\rangle,|w\rangle$) plane that
gradually rotates the state from $|s\rangle$ to $|w\rangle$.
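As a concrete illustration of steps 1–6 and of the iteration count $r$, here is a minimal dense-statevector sketch in Python/numpy. It is emphatically not a quantum implementation (it flips the amplitude of a hypothetical $w$ directly, whereas a device applies $U_{w}$ as a circuit without knowing $w$); it only makes the rotation picture and the $\frac{\pi}{4}\sqrt{N}$ scaling concrete.

```python
import numpy as np

# Minimal dense-statevector sketch of Grover's algorithm (illustration
# only; the global sign of the diffusion step is immaterial for
# measurement probabilities).
n = 10
N = 2 ** n
w = 618                                   # hypothetical marked bitstring

s = np.full(N, 1 / np.sqrt(N))            # steps 1-2: prepare |s>
r = int(round(np.pi / 4 * np.sqrt(N)))    # optimal iteration count, r = 25 here
psi = s.copy()
for _ in range(r):
    psi[w] *= -1                          # step 3: U_w = 1 - 2|w><w|
    psi = 2 * s.dot(psi) * s - psi        # step 4: 2|s><s| - 1 = -U_s
print(r, abs(psi[w]) ** 2)                # success probability close to one
```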
### III.3 On the level of entanglement inside Grover’s algorithm
The type of classical calculation we will consider involves representing the
quantum state as a tensor network, specifically a matrix product state (MPS).
An MPS compresses a quantum state by factoring it into a network of smaller
tensors contracted in a one-dimensional chain like structure. For states
having low to moderate entanglement, the MPS rank or dimension $\chi$ of the
“bond” indices connecting the MPS tensors can be chosen relatively small while
representing the state to very high or even perfect accuracy. Quantum states
such as GHZ or W states are exactly MPS of rank $\chi=2$ and product states
such as the initial state $|s\rangle$ of Grover’s algorithm are $\chi=1$ MPS.
An important fact we will use below is that any state which is a sum of $P$
product states can be explicitly written as an MPS of rank $\chi=P$ [24].
In the context of GA, one finds that after any application of $U_{w}$ or
$U_{s}$, the internal state $|\Psi\rangle$ of the quantum computer lies in a
superposition of $|s\rangle$ and $|w\rangle$ [29],
$\displaystyle|\Psi\rangle=\alpha|s\rangle+\beta|w\rangle$ (10)
with $|\alpha|^{2}+|\beta|^{2}=1$, i.e. in the superposition of two
unentangled states ($1+S$ states in the general case). It follows that
$|\Psi\rangle$ can be cast into the form
$\displaystyle|\Psi\rangle=\begin{bmatrix}\alpha&\beta\end{bmatrix}\begin{bmatrix}|w_{1}\rangle&0\\ 0&|+\rangle\end{bmatrix}\begin{bmatrix}|w_{2}\rangle&0\\ 0&|+\rangle\end{bmatrix}\cdots\begin{bmatrix}|w_{n}\rangle&0\\ 0&|+\rangle\end{bmatrix}\begin{bmatrix}1\\ 1\end{bmatrix}$ (11)
which is manifestly a rank $\chi=2$ MPS ($\chi=1+S$ in the general case).
Such a state is minimally entangled and can easily be kept inside the memory
of a classical computer at a linear cost in the number of qubits. In other
words, while Grover’s algorithm takes advantage of quantum parallelism (i.e.
superposition), it uses very little entanglement for most of the algorithm.
The only possible exception is while the oracle circuit has only been
partially applied.
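The rank-2 structure of Eq. (11) is easy to verify numerically. The sketch below (Python/numpy, hypothetical $w$, small $n$ so a dense check is cheap) builds a $\chi=2$ MPS of this form for the state $\alpha|s\rangle+\beta|w\rangle$ of Eq. (10) and contracts it back to a dense vector:

```python
import numpy as np

# Build a chi=2 MPS of the form of Eq. (11) for |Psi> = alpha|s> + beta|w>
# and check it against the dense vector (hypothetical w, small n).
n, w = 8, 0b10110001
alpha, beta = 0.6, 0.8
plus = np.array([1.0, 1.0]) / np.sqrt(2)
wbits = [(w >> i) & 1 for i in reversed(range(n))]   # w_{n-1} ... w_0

tensors = []
for bit in wbits:
    A = np.zeros((2, 2, 2))   # (left bond, physical, right bond)
    A[0, :, 0] = plus         # branch 0 carries |+> on every site -> |s>
    A[1, bit, 1] = 1.0        # branch 1 carries |w_i> on every site -> |w>
    tensors.append(A)

state = np.einsum('a,asb->sb', np.array([alpha, beta]), tensors[0])
for A in tensors[1:]:
    state = np.einsum('...a,asb->...sb', state, A)
state = state.sum(axis=-1).reshape(-1)    # close with the boundary [1, 1]

dense = alpha * np.full(2 ** n, 2 ** (-n / 2))
dense[w] += beta
assert np.allclose(state, dense)
```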
## IV A quantum inspired algorithm for simulating Grover’s algorithm in a
single call to the oracle
We now detail the different steps of our quantum inspired Grover’s algorithm
(QiGA ). Although we use MPS and MPO technology for both QiGA and mere
simulations of GA, we emphasize that the goals are very different. In the
first, we aim at solving the Grover problem with as few computations as
possible while in the latter we want to mimic what would happen in a (possibly
noisy) actual quantum computer.
### IV.1 A low rank Matrix Product Operator (MPO) for Grover’s oracle
A crucial observation that makes QiGA possible is that the oracle $U_{w}$ can
be cast into the form of a rank-$2$ MPO (rank $1+S$ in the case of multiple
solutions). The explicit form of this MPO is
$\displaystyle U_{w}=\begin{bmatrix}1&1\end{bmatrix}\left(\prod_{i=1}^{n}M_{i}\right)\begin{bmatrix}1\\ -2\end{bmatrix}$ (12)
with
$\displaystyle M_{i}=\begin{bmatrix}I_{i}&0&\cdots&0\\ 0&|w^{1}_{i}\rangle\langle w^{1}_{i}|&&\\ \vdots&&\ddots&\\ 0&&&|w^{S}_{i}\rangle\langle w^{S}_{i}|\end{bmatrix}$ (13)
where $I_{i}$ is the $2\times 2$ identity matrix acting on qubit $i$ and
$|w^{\alpha}_{i}\rangle\langle w^{\alpha}_{i}|$ is the projector onto bit $i$
of solution $\alpha$; for $S>1$ the boundary vectors in Eq. (12) are extended
to $\begin{bmatrix}1&1&\cdots&1\end{bmatrix}$ and
$\begin{bmatrix}1&-2&\cdots&-2\end{bmatrix}^{T}$. We emphasize that this MPO
exists, but its construction is not necessarily easy since, by definition,
one does not have access to the solutions $w^{\alpha}$. A similar MPO can be
written for the diffusion operator $U_{s}$ with the replacement of $M_{i}$ by
$\displaystyle M^{\prime}_{i}=\begin{bmatrix}I_{i}&0\\ 0&|+\rangle\langle+|\end{bmatrix}$ (14)
in Eq. (12).
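For a single solution, the MPO (12)–(13) expands into the sum of two product operators, $1-2\prod_{i}|w_{i}\rangle\langle w_{i}|$; the following small numpy check (toy $n$, hypothetical $w$) confirms that this equals $U_{w}$, which is why the MPO rank is 2:

```python
import numpy as np

# The MPO (12)-(13) with a single solution expands into two product
# operators, I - 2 * prod_i |w_i><w_i|; check against U_w = 1 - 2|w><w|
# (toy n, hypothetical w).
n, w = 6, 0b010110
wbits = [(w >> i) & 1 for i in reversed(range(n))]

proj = np.ones((1, 1))
for bit in wbits:
    P = np.zeros((2, 2))
    P[bit, bit] = 1.0               # |w_i><w_i|
    proj = np.kron(proj, P)
U_w = np.eye(2 ** n) - 2 * proj     # boundary vectors [1, 1] and [1, -2]^T

dense = np.eye(2 ** n)
dense[w, w] = -1.0
assert np.allclose(U_w, dense)
```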
### IV.2 Solving the problem in a single call to the oracle
Assuming one has access to an explicit form of $U_{w}$, such as the MPO form
above, a product of a small number of MPOs, or an efficiently simulable
quantum circuit, one can construct the MPS state
$|\Psi_{w}\rangle=U_{w}|s\rangle$. Using the definition of the oracle Eq. (5)
and the fact that $\langle w^{\alpha}|s\rangle=\frac{1}{\sqrt{2^{n}}}$, we
obtain
$\displaystyle|\Psi_{w}\rangle=|s\rangle-\frac{2}{\sqrt{2^{n}}}\sum_{i=1}^{S}|w_{i}\rangle\
.$ (15)
This expression is explicitly the sum of $1+S$ product states, thus the state
$|\Psi_{w}\rangle=U_{w}|s\rangle$ is exactly an MPS of bond dimension
$\chi=1+S$.
The form of $|\Psi_{w}\rangle$ as a sum of product states in Eq. (15)
immediately presents a classical strategy to extract the solution states
$\\{|w_{i}\rangle\\}$ in a single step: one simply subtracts the state
$|s\rangle$. Subtracting a product state such as $|s\rangle$ from an MPS is a
straightforward operation with a cost scaling as $n\chi^{3}\propto\log{N}$.
For example, in the case of $n=100$ qubits and $\chi=50$ the subtraction takes
about 0.2 seconds.
It is important to note that this subtraction operation has no quantum
equivalent. This can be seen easily with an argument analogous to the no-
cloning theorem: if there existed a unitary matrix that maps
$|\Psi\rangle\otimes|s\rangle$ to
$(|\Psi\rangle-|s\rangle)\otimes|\Phi\rangle$ for all $|\Psi\rangle$, then
this matrix would map $|s\rangle\otimes|s\rangle$ to the null vector which
contradicts the assumption that the matrix is unitary. It follows that our
algorithm cannot be used as a “classical preconditioner” for amplitude
amplification [7]. See the associated discussion in section VII.1.
In summary the different steps of QiGA are:
1. 1.
Use classical simulation techniques to compute
$|\Psi_{w}\rangle=U_{w}|s\rangle$ as an MPS of rank $\chi=1+S$
2. 2.
Compute $|\tilde{W}\rangle=|\Psi_{w}\rangle-|s\rangle$ and normalize it to
obtain $|W\rangle$, an MPS of rank $\chi=S$
3. 3.
Sample from $|W\rangle$ to obtain the states $|w^{\alpha}\rangle$ with uniform
probability, using the fact that perfect sampling of MPS can be performed with
a cost $n\chi^{2}$ [30, 31]
If $S$ is small enough, the states $|w^{\alpha}\rangle$ can also be fully
enumerated.
One can modify the classical approach described above not only to sample
individual solutions $w^{\alpha}$ but even to count the number of solutions.
To do so, one acts with $U_{w}$ on the _unnormalized_ state
$\sum_{b}|b\rangle$ and subtracts this input state from the result; the
squared norm of the difference, $\|(U_{w}-1)\sum_{b}|b\rangle\|^{2}=4S$, then
gives the number of solutions.
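A dense toy version of these steps (Python/numpy, hypothetical solution set; a real implementation keeps everything in MPS form throughout) shows both the subtraction step and the counting variant:

```python
import numpy as np

# Dense toy version of the QiGA steps (hypothetical solution set; at
# scale, |Psi_w>, |s> and |W> would all be MPS).
n = 12
N = 2 ** n
solutions = {137, 2049, 3333}

s = np.full(N, 1 / np.sqrt(N))
psi_w = s.copy()
for w in solutions:
    psi_w[w] *= -1                    # step 1: post-oracle state U_w|s>

W = psi_w - s                         # step 2: subtract |s> ...
W /= np.linalg.norm(W)                # ... and normalize
assert set(np.flatnonzero(np.abs(W) > 1e-12)) == solutions   # step 3

# Counting variant: apply U_w to the unnormalized sum_b |b>; the squared
# norm of the difference with the input is 4S.
ones = np.ones(N)
out = ones.copy()
for w in solutions:
    out[w] *= -1
assert np.dot(out - ones, out - ones) / 4 == len(solutions)
```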
### IV.3 Obtaining the MPS using tensor cross interpolation
The problem is therefore reduced to the construction of the explicit MPS form
of $|\Psi_{w}\rangle=U_{w}|s\rangle$ which is known to have a finite rank
$\chi=1+S$.
Suppose for a specific bitstring $|b\rangle$ one is capable of classically
calculating the amplitude $\langle b|\Psi_{w}\rangle$. Such a simulation is
known as a closed simulation and is much easier [32] than a so-called open
simulation which provides the full state $|\Psi_{w}\rangle$. We will comment
in section VII on the practical limitations to these kind of simulations.
There has been recent progress in algorithms able to construct an MPS
representation of the state $|\Psi_{w}\rangle$ in $O(\chi^{2}n)$ calls to a
routine that calculates $\langle b|\Psi_{w}\rangle$. Here, we make use of
tensor cross interpolation [33, 34, 35, 36, 37], following the implementation
described in [37]. The advantage of tensor cross interpolation is that it is
agnostic to the details of the quantum circuit $U_{w}$ and only requires an
external classical subroutine that computes $\langle b|\Psi_{w}\rangle$. The
algorithm requests the value of $\langle b|\Psi_{w}\rangle$ for certain
values of $b$ using an active learning strategy. It follows that it is
directly compatible with the most advanced methods that have been developed
to calculate amplitudes of quantum circuits (including those that leverage
the high level of parallelism available in supercomputers).
Before we can use tensor cross interpolation effectively, a small adjustment
must be performed. A direct calculation of $\langle b|\Psi_{w}\rangle$
provides
$\displaystyle\langle
b|\Psi_{w}\rangle=\frac{1}{2^{n/2}}\left[1-2\sum_{\alpha=1}^{S}\delta_{b,w^{\alpha}}\right]\
\ .$ (16)
For a random bitstring $b$, one has $\langle b|\Psi_{w}\rangle=1/\sqrt{2^{n}}$
since it is exponentially unlikely that $b$ matches one of the solutions
$w^{\alpha}$. It follows that the tensor cross interpolation algorithm would
fail to reconstruct the MPS, as its exploration of the $\langle
b|\Psi_{w}\rangle$ function would have an exponentially low probability of
finding the relevant part (the second term on the right-hand side of
Eq. (16)).
Another way to see the same problem is to write $\langle b|\Psi_{w}\rangle$ in
terms of the calls to the function $f(b)$. It reads
$\displaystyle\langle b|\Psi_{w}\rangle=\frac{1}{2^{n/2}}(-1)^{f(b)}$ (17)
i.e. the amplitudes can be calculated in a single call to $f(b)$. Hence, if
the $|\Psi_{w}\rangle$ MPS could be reconstructed from $O(n)=O(\log N)$ calls
to $\langle b|\Psi_{w}\rangle$, it would mean that the original problem could
be solved in $O(n)$ calls to the function $f(b)$, hence that NP-complete
problems could be solved in polynomial time, an unlikely situation.
To solve this issue, we turn to the $|\pm\rangle$ basis and calculate
$\langle\beta|\Psi_{w}\rangle$ instead of $\langle b|\Psi_{w}\rangle$ where
$|\beta\rangle$ is a product of $|\pm\rangle$ states (e.g.
$|+-+-...-+\rangle$). Denoting the binary representation of $\beta$ as
$\beta_{0},\beta_{1},\dots,\beta_{n-1}$, with $\beta_{i}=0$ for a state
$|+\rangle$ and $\beta_{i}=1$ for a state $|-\rangle$, we find
$\displaystyle\langle\beta|\Psi_{w}\rangle=\delta_{\beta,0}-\frac{2}{2^{n}}\sum_{\alpha=1}^{S}(-1)^{\sum_{i=0}^{n-1}\beta_{i}w_{i}^{\alpha}}$
(18)
This form is directly suitable for tensor cross interpolation since
information about the solutions $w^{\alpha}$ is now present for any bitstring
$\beta$. We emphasize that QiGA itself knows nothing about the solution
$w^{\alpha}$ and only uses the amplitudes $\langle\beta|\Psi_{w}\rangle$.
Calculating these amplitudes $\langle\beta|\Psi_{w}\rangle$ has the same
complexity as calculating $\langle b|\Psi_{w}\rangle$ since the two quantum
circuits only differ by a layer of Hadamard gates at the end. Similarly, when
the MPS is known in the $|\beta\rangle$ basis, it is simply a matter of
applying local Hadamard gates to get it back in the $|b\rangle$ basis. We have
checked in explicit numerical calculations that our implementation of tensor
cross interpolation can reconstruct the MPS in $O(n)$ calls to the
$\langle\beta|\Psi_{w}\rangle$ subroutine up to at least $n=1000$.
In terms of calls to the function $f(b)$, the amplitudes
$\langle\beta|\Psi_{w}\rangle$ take the form
$\displaystyle\langle\beta|\Psi_{w}\rangle=\frac{1}{2^{n}}\sum_{b=0}^{2^{n}-1}(-1)^{f(b)+\sum_{i=0}^{n-1}b_{i}\beta_{i}}$
(19)
which takes $O(2^{n})$ calls to the classical function if one does not take
advantage of its quantum circuit form. Hence we recover the expected $O(N)$
classical scaling for the search problem when no insight about the function
$f$ is used. When using the quantum circuit to compute the oracle amplitudes,
the QiGA complexity will depend on the entanglement barrier present in a
single application of the quantum oracle, as illustrated in Fig. 2.
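Equations (18) and (19) can be checked by brute force for small $n$: up to normalization, the $|\pm\rangle$-basis amplitudes are the Walsh–Hadamard transform of $(-1)^{f(b)}$. A minimal numpy sketch with a hypothetical solution set:

```python
import numpy as np

# Brute-force check of Eqs. (18)-(19) for small n (hypothetical solutions).
n = 8
N = 2 ** n
solutions = [23, 200]

f = np.zeros(N, dtype=int)
f[solutions] = 1
signs = (-1.0) ** f                       # (-1)^{f(b)}

def amplitude(beta):                      # Eq. (19): O(2^n) calls to f
    parity = np.array([bin(b & beta).count('1') % 2 for b in range(N)])
    return np.sum(signs * (-1.0) ** parity) / N

for beta in [0, 1, 77, 255]:
    rhs = (beta == 0) - (2 / N) * sum(    # Eq. (18)
        (-1.0) ** bin(beta & w).count('1') for w in solutions)
    assert np.isclose(amplitude(beta), rhs)
```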
We emphasize that the approach outlined above is only feasible for a closed,
classical simulation of the oracle circuit; it cannot be attempted on a
quantum computer. Indeed, a quantum computer only provides bitstrings $\beta$
distributed according to the probability $|\langle\beta|\Psi_{w}\rangle|^{2}$
but it does not provide the actual value $\langle\beta|\Psi_{w}\rangle$ (nor
can one choose the value of $\beta$).
## V Illustration with an explicit construction for the 3-SAT problem
To illustrate our quantum inspired GA with a practical application, we have
implemented a simulation of the oracle for the 3-SAT boolean satisfiability
problem. 3-SAT is an NP-complete problem and finding fast, possibly heuristic,
algorithms for solving it is the subject of active research, with applications
including cryptanalysis [38, 39], industrial operations research [40], and
computational biology [41].
In a SAT problem, the function $f(b)$ is given by a set of $p$ clauses that
must all be satisfied for $b$ to be a solution. In 3-SAT, each clause $\delta$
is constructed out of 3 variables $b_{i_{\delta}}$, $b_{j_{\delta}}$ and
$b_{k_{\delta}}$ from the binary representation of $b$. $f(b)$ takes the form,
$\displaystyle f(b)=(\tilde{b}_{i_{1}}\lor\tilde{b}_{j_{1}}\lor\tilde{b}_{k_{1}})\land(\tilde{b}_{i_{2}}\lor\tilde{b}_{j_{2}}\lor\tilde{b}_{k_{2}})\land\cdots\land(\tilde{b}_{i_{p}}\lor\tilde{b}_{j_{p}}\lor\tilde{b}_{k_{p}})$ (20)
where $\lor$ means logical “or”, $\land$ logical “and”, and
$\tilde{b}_{a}=b_{a}$ or $\tilde{b}_{a}=\lnot b_{a}$ (not $b_{a}$) depending
on the clause.
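A minimal Python sketch of evaluating $f(b)$ for a 3-SAT instance; each clause is encoded as a triple of (variable index, negated?) pairs, and the instance below is a made-up example:

```python
# Sketch of evaluating f(b) for a 3-SAT instance (made-up clause list).
clauses = [((0, False), (3, True), (5, False)),
           ((1, True), (2, False), (4, False))]

def f(b, clauses):
    for clause in clauses:
        ok = False
        for i, negated in clause:
            bit = (b >> i) & 1
            ok |= (bit == 0) if negated else (bit == 1)
        if not ok:
            return 0       # a single unsatisfied clause falsifies f
    return 1

print(f(0b101001, clauses))   # 1: this assignment satisfies both clauses
```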
The problems we consider are defined by fixing the ratio $\alpha=p/n$ of
clauses to variables (or qubits), usually with $4<\alpha<5$, since in this
range the number of satisfying solutions $S$ becomes small. Otherwise, the
choice of which variables enter each clause, and whether a variable is
negated, is made with uniform probability. Below we consider totally random
SAT clauses in section V.2, then clauses with quasi-one-dimensional structure
in section V.3.
### V.1 Tensor Network SAT Approach
To explicitly implement the Grover’s oracle operator for 3-SAT and construct
the post-oracle state $|\Psi_{w}\rangle$, first prepare the state of the
system to be
$\displaystyle|+\rangle_{1}|+\rangle_{2}|+\rangle_{3}\cdots|+\rangle_{n}|1\rangle_{A}=|s\rangle|1\rangle_{A}$
(21)
where the extra qubit in the $|1\rangle_{A}$ state acts as an ancilla whose
role is to record which states of the previous $n$ qubits either satisfy
($|1\rangle_{A}$) or do not satisfy ($|0\rangle_{A}$) all of the SAT clauses
applied so far.
Next, each SAT clause $C$ such as $C=(b_{3}\lor\lnot b_{7}\lor b_{8})$ is
mapped to an operator by noting that there is only one bitstring which _fails
to satisfy_ the clause. In the example above, this bitstring is
$0_{3},1_{7},0_{8}$. Using this bitstring, one defines an operator
$\displaystyle O_{C}=P^{0}_{3}\otimes P^{1}_{7}\otimes P^{0}_{8}\otimes P^{0}_{A}+(1-P^{0}_{3}\otimes P^{1}_{7}\otimes P^{0}_{8})\otimes 1_{A}$ (22)
which projects the ancilla qubit into the $|0\rangle_{A}$ state for any state
containing the unsatisfying bitstring. Otherwise it leaves the ancilla
unchanged. Here $P^{0}_{i}=|0\rangle\langle 0|$ and
$P^{1}_{i}=|1\rangle\langle 1|$ are projectors onto the $|0\rangle$ or
$|1\rangle$ states for qubit $i$.
In our classical implementation, the operator $O_{C}$ can be applied in its
above form using straightforward tensor network methods. We used the approach
of implementing each $O_{C}$ as an MPO and applying these MPOs to the quantum
state represented as an MPS. As an MPO, $O_{C}$ has a rank $\chi=3$, which can
be understood from the fact that when one expands all the terms it is the sum
of three product operators [24].
After applying the $O_{C}$ operators for every SAT clause, the state of the
system becomes
$\displaystyle\frac{1}{\sqrt{2^{n}}}\sum_{i=1}^{S}|w_{i}\rangle|1\rangle_{A}+\frac{1}{\sqrt{2^{n}}}\sum_{j=1}^{2^{n}-S}|\tilde{w}_{j}\rangle|0\rangle_{A}$
(23)
where the $\\{w_{i}\\}$ are the satisfying bitstrings and the
$\\{\tilde{w}_{j}\\}$ are the unsatisfying ones. To convert this state to a
conventional Grover’s post-oracle state Eq. (15), one can perform a post-
selected, or forced, measurement of the ancilla qubit to be in the
$|-\rangle=H|1\rangle$ state. (Note that for a tensor network such a forced
measurement always succeeds on the first try and can be done by just acting
with a single-qubit projection operator.) After dropping the now unentangled
ancilla qubit, the state will take the form of Eq. (15). If one is only
interested in solving the Grover problem rather than constructing the post-
oracle state, one simply projects the ancilla of Eq.(23) onto the state
$|1\rangle_{A}$.
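A dense toy emulation of this construction (Python/numpy, made-up clauses; MPOs are only needed at scale) tracks the $|1\rangle_{A}$ branch of the state, which each $O_{C}$ empties for the bitstrings failing its clause, and reads off the solutions by projecting the ancilla onto $|1\rangle_{A}$:

```python
import numpy as np

# Dense toy emulation of the clause-operator construction (made-up
# clauses). We track the amplitudes of the |1>_A branch; each O_C removes
# from it the bitstrings failing its clause, so what survives in the
# |1>_A branch of eq. (23) are exactly the solutions.
n = 6
clauses = [((0, False), (3, True), (5, False)),
           ((1, True), (2, False), (4, False)),
           ((0, True), (1, False), (5, True))]

branch1 = np.full(2 ** n, 2 ** (-n / 2))    # amplitudes of |b>|1>_A
for clause in clauses:
    for b in range(2 ** n):
        # the unique local bit pattern that fails this clause
        fails = all(((b >> i) & 1) == (1 if neg else 0) for i, neg in clause)
        if fails:
            branch1[b] = 0.0                # ancilla leaves |1>_A

solutions = np.flatnonzero(branch1)         # project onto |1>_A and read off
print(len(solutions), solutions)
```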
### V.2 Random SAT Experiments
We have tested this classical oracle implementation on fully random SAT
clauses (variables $b_{i_{p}}$ chosen by drawing $i_{p}$ randomly from
$1,2,..,n$ and with each variable negated with probability $1/2$) for up to
$n=40$ variables or qubits, using the ITensor software [42, 43] running on a
single workstation with four Xeon 3.6 GHz processors and 256 GB of RAM. For
all experiments we used a ratio of clauses to variables of $p/n=4.2$. The
results shown in Table 1 are for various experiments over a range of $n$ and
different random instances for the same $n$ with different numbers $S$ of
satisfying bitstrings. We report both the maximum MPS rank
$\chi_{\text{max}}$, which was the largest rank encountered during application
of the $O_{C}$ operators, and the total time required to apply all of the
operators and construct the post-oracle state.
n | S | $\chi_{\text{max}}$ | time
---|---|---|---
30 | 4 | 467 | 21 s
32 | 2 | 954 | 1.8 minutes
34 | 48 | 1162 | 3.2 minutes
36 | 16 | 1994 | 8.3 minutes
38 | 8 | 5867 | 1.6 hours
40 | 0 | 1402 | 4.2 minutes
40 | 28 | 2926 | 21 minutes
40 | 161 | 5690 | 1.65 hours
40 | 174 | 10374 | 6.5 hours
Table 1: Maximum MPS ranks $\chi_{\text{max}}$ and execution times to compute
the post-oracle state corresponding to random 3-SAT problem instances
involving $n$ variables ($n$ qubits). The table also shows the number of
satisfying assignments or bitstrings $S$ for each problem instance.
After each post-oracle state was prepared, its quality was checked by
projecting (post-selecting) the ancilla qubit into the state $|1\rangle_{A}$
then sampling 5000 bitstrings from the other $n$ qubits to verify that all
samples satisfied the SAT clauses. To count the number $S$ of satisfying
bitstrings (#SAT problem) we applied the MPOs $O_{C}$ to an _unnormalized_
state with each qubit (except the ancilla) initialized to
$(|0\rangle+|1\rangle)$. Afterward, we projected the ancilla into the
$|1\rangle_{A}$ state and computed the squared norm of the resulting state,
which is equal to $S$. For smaller systems these counts were verified to be
correct by checking with exact enumeration. The resulting counts $S$ are
shown in the second column of Table 1.
These results indicate the post-Grover’s-oracle state can be prepared
classically for typical 3-SAT instances for at least $n=40$ qubits on a single
workstation. For problems of this size, the optimal number of iterations of
Grover’s algorithm would be $r=823,500$ in contrast to the single application
of the oracle used here. The largest MPS rank encountered across the
experiments we performed was $\chi=10,374$ which is a small fraction ($1\%$)
of the maximum possible rank $2^{40/2}\approx 10^{6}$. The entanglement
barrier in the random 3-SAT problem is not only relatively modest for finite
problem sizes, but was typically observed to be lower for the case of fewer
satisfying solutions $S$. Hence QiGA appears to perform better on problems
with few solutions.
It is important to note that the approach above can be straightforwardly
parallelized through a “slicing” approach, similar to other recent high-
performance quantum circuit simulations [44, 45, 46]. Instead of initializing
all the $n$ input qubits to the $|+\rangle$ state, a subset of $p$ of these
qubits can be initialized to either the $|0\rangle$ or $|1\rangle$ states. By
running $2^{p}$ separate calculations for each setting of these computational-
basis qubits one can obtain the same final result by combining the solutions
obtained from $2^{p}$ computers working with no communication overhead. When
trying this in practice, we found that the total computational effort (sum of
running times of each of the parallel computers) was comparable to the
original (serial) running time, while the maximum time for any one computer
was significantly less than the original time. Because the oracle acts
classically on the computational basis qubits, the maximum entanglement during
the oracle application is generally much lower for each sliced input so that
the parallel approach results in a faster time to solution.
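A schematic of the slicing loop is sketched below, with brute force standing in for the per-slice tensor-network contraction (the function names and the toy clause are illustrative, not from our implementation):

```python
from itertools import product

def solve_slice(n, p, fixed, clauses):
    # One independent job: qubits 0..p-1 pinned to computational-basis
    # values, brute force over the remaining n - p qubits (standing in for
    # one tensor-network run).
    hits = []
    for rest in product([0, 1], repeat=n - p):
        bits = list(fixed) + list(rest)
        if all(any(bits[i] != neg for i, neg in c) for c in clauses):
            hits.append(tuple(bits))
    return hits

n, p = 6, 2
clauses = [[(0, False), (2, True), (4, False)]]
solutions = []
for fixed in product([0, 1], repeat=p):    # 2^p jobs, no communication
    solutions += solve_slice(n, p, fixed, clauses)
print(len(solutions), "satisfying bitstrings")
```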
Note that our implementation of a SAT solver as a Grover oracle operator is
nevertheless far slower than the most efficient classical SAT solving
algorithms, some of which also happen to use tensor networks [47] and can
solve typical $n=40$ instances of 3-SAT in under one second.
### V.3 Quasi-One-Dimensional SAT Experiments
In this section, we design instances of the 3-SAT problem where the QiGA
approach has _provably linear scaling_ in the number of qubits $n$, that is,
logarithmic scaling in the problem size $N=2^{n}$. This is in sharp contrast to
the $2^{n}$ scaling of an unstructured classical problem. The goal of the
construction and associated experiments we perform below is to illustrate two
points. First, it shows that there are classes of problems for which QiGA is
always exponentially faster than GA. Second, the underlying structure that
makes the problem easier for QiGA need not be known a priori: QiGA discovers
this structure and takes advantage of it automatically.
We consider a quasi-1D case that involves grouping variables into blocks of
size $B$ along a 1D path, with the first block $(1,2,\ldots,B)$, second block
$(B+1,\ldots,2B)$, etc. The SAT problem is then defined by two sets of SAT
clauses required to be satisfied altogether:
1. 1.
$N_{\text{intra}}$ fully random SAT clauses where variables in each clause
only act within each block
2. 2.
$L_{\text{inter}}$ layers of random SAT clauses where variables span across
two neighboring blocks
The cases we consider will allow $N_{\text{intra}}$ to be any size while
$L_{\text{inter}}$ is fixed to a small value such as $L_{\text{inter}}=1,2,3$.
The proof of linear scaling consists of bounding the cost of applying the
constraints of clauses in sets (1) and (2) above separately. We will use a
similar approach as in Section V.1 above, with the slight modification that we
will project out any unsatisfying bitstrings for a clause
$C_{p}=(b_{i_{p}}\lor b_{j_{p}}\lor b_{k_{p}})$ by acting with an operator
$\displaystyle
O_{C_{p}}=(1-P^{b_{i_{p}}}_{i_{p}}P^{b_{j_{p}}}_{j_{p}}P^{b_{k_{p}}}_{k_{p}})$
(24)
that sets the state containing the single bitstring not satisfying $C_{p}$ to
zero. There is no ancilla qubit in this approach. The process of applying the
MPOs $O_{C_{p}}$ is depicted in Fig. 3 and explained in detail below.
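In dense form, the action of the ancilla-free projector of Eq. (24) can be sketched as follows (brute force over a full statevector, with the bond-dimension-2 MPO replaced by an explicit loop; the clause encoding is our convention):

```python
import numpy as np

def apply_clause_projector(psi, n, clause):
    # Act with O_{C_p} = 1 - P of Eq. (24): zero the amplitude of every
    # bitstring whose clause variables take the single falsifying assignment
    # (bit = 1 for a negated literal, 0 otherwise).
    bad = {i: (1 if neg else 0) for i, neg in clause}
    for b in range(2 ** n):
        bits = [(b >> (n - 1 - k)) & 1 for k in range(n)]
        if all(bits[i] == v for i, v in bad.items()):
            psi[b] = 0.0
    return psi

n = 4
psi = np.full(2 ** n, 1.0 / np.sqrt(2 ** n))   # the state |s> of Eq. (25) below
psi = apply_clause_projector(psi, n, [(0, False), (1, False), (2, False)])
print(np.count_nonzero(psi))                   # 14 of 16 amplitudes survive
```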
Figure 3: Schematic of the process of applying the operators (MPOs)
$O_{C_{p}}$ enforcing the SAT clauses defining the quasi-1D SAT problem. The
qubits are organized into blocks of size $B$ and a number of random, intra-
block SAT clauses are enforced which act only inside each block. Then a fixed
number of layers of inter-block clauses are enforced which act between
neighboring blocks.
Starting from the product superposition state
$\displaystyle|s\rangle=|+\rangle_{1}|+\rangle_{2}|+\rangle_{3}\cdots|+\rangle_{n}$
(25)
and applying the operators for the set (1) of intra-block clauses, the cost
scales as $(N_{\text{intra}}\ n\ 2^{3B/2})$ in the worst case. To see why,
note that the state after applying the operators will be a product of MPS for
each block, and the maximum bond dimension of an MPS with $B$ sites is
$\chi=2^{B/2}$. Algorithms for applying operators to MPS of this size scale
as $B\chi^{3}=B\,2^{3B/2}$. One has to apply each of the operators $O_{C_{p}}$
and there are $N_{\text{intra}}$ of these. Finally there are $n/B$ blocks.
Multiplying each of these costs gives the above scaling. Thus the cost of
enforcing the set (1) clauses scales only linearly with number of qubits $n$.
Next one enforces the $L_{\text{inter}}$ layers inter-block clauses in set
(2). For each layer of such clauses, one can group the clauses into two sets,
one acting across blocks 1 and 2, then 3 and 4, then 5 and 6, etc. and the
next set acting across blocks 2 and 3, 4 and 5, 6 and 7, etc. The cost of
applying the first layer scales at most as $n\ 2^{3B/2}$ and doubles the bond
dimension between blocks in the worst case, since the bond dimension of the
$O_{C_{p}}$ MPOs is $2$. The cost of applying the next layer will then scale
at most as $(n\ 8\ 2^{3B/2})$, the extra factor of $8=2^{3}$ coming from the
doubling of bond dimension due to the first layer. In general the cost of
enforcing the $(2L_{\text{inter}})$ layers of inter-block clauses will be $(n\
2^{2L_{\text{inter}}-1}\ 2^{3B/2})$.
Therefore the overall cost of enforcing all of the 1D 3-SAT clauses scales as
$\displaystyle(n\ N_{\text{intra}}\ \ 2^{3B/2})+(n\
2^{2L_{\text{inter}}-1}2^{3B/2})$ (26)
which is manifestly linear in $n$, assuming $B$ and $L_{\text{inter}}$ are
held constant, and that $N_{\text{intra}}$ depends only on $B$ (is chosen
proportional to $B$).
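For concreteness, a hypothetical generator for such instances is sketched below (with, for simplicity, one inter-block clause per pair of neighboring blocks per layer, matching the parameters of Table 2):

```python
import random

def random_clause(variables):
    # Three distinct variables, each negated with probability 1/2.
    idx = random.sample(variables, 3)
    return [(i, random.random() < 0.5) for i in idx]

def quasi_1d_instance(n, B, N_intra, L_inter):
    clauses = []
    blocks = [list(range(s, s + B)) for s in range(0, n, B)]
    for block in blocks:                        # set (1): intra-block clauses
        clauses += [random_clause(block) for _ in range(N_intra)]
    for _ in range(L_inter):                    # set (2): inter-block layers
        for b1, b2 in zip(blocks, blocks[1:]):  # one clause per pair of
            clauses.append(random_clause(b1 + b2))  # neighboring blocks
    return clauses

clauses = quasi_1d_instance(n=40, B=10, N_intra=37, L_inter=2)
print(len(clauses))   # 4 * 37 intra-block + 2 * 3 inter-block = 154
```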
n | S | $\chi_{\text{max}}$ | time
---|---|---|---
40 | 99 | 21 | 0.973s
40 | 50 | 22 | 0.973s
40 | 108 | 16 | 0.989s
40 | 0 | 19 | 0.985s
60 | 4530 | 22 | 2.00s
60 | 0 | 19 | 1.98s
60 | 17920 | 19 | 1.96s
Table 2: Maximum MPS ranks $\chi_{\text{max}}$ and execution times to compute
the post-oracle state corresponding to blocked 1D 3-SAT problem instances
involving $n$ variables ($n$ qubits). In all cases the block size was chosen
as $B=10$ and $N_{\text{intra}}=37$ random clauses were applied within each
block while $L_{\text{inter}}=2$ layers of clauses were applied between
neighboring blocks. The table also shows the number of satisfying assignments
or bitstrings $S$ for each problem instance.
In practice, when implementing the above method on test systems of $n=40$
qubits, using a fixed block size of $B=10$, taking $L_{\text{inter}}=2$ and
choosing $N_{\text{intra}}=3.7\cdot B$, we find that all the clauses can be
enforced in a running time of under 1 second. The maximum MPS bond dimension
observed is $\chi=22$. Systems with $n=60$ qubits are just as easy for the
same block size and number of clauses per block, with similar maximum bond
dimension and running times just under 2 seconds. See Table 2 for detailed
results.
Figure 4: (a) Probability of success versus number of iterations of Grover’s
algorithm for $n=30$ qubits and different levels of noise $\lambda$. Also
shown are theoretical fits to the cases $\lambda=10^{-5},10^{-4}$. For $n=30$
the theoretically optimal number of iterations is $r=25,736$. The final
success probability reaches a reasonable value for noise $\lambda<5\times
10^{-5}$, but once the noise becomes as large as $\lambda=10^{-4}$, the success
probability reaches a maximum of only 0.04 at the 10,000th iteration and then
falls to about 0.006 by the final iteration. (b) Final probability of success
after the optimal number of iterations $r$ as a function of total noise
$\Lambda=\lambda r$ where $\lambda$ is the amount of depolarizing noise per
iteration.
## VI Scaling of Errors in Grover’s Algorithm
We now turn to the second part of this work where we discuss the possibility
of quantum advantage for GA. Before we can do that, we need to study the
sensitivity of GA to the presence of imperfections such as gate errors or
decoherence. There has been some previous literature on this subject [48]. The
advantage of our tensor network approach is the capability to study relatively
large systems, from which we can extract the actual scaling of the errors in GA.
At the end of GA, the quantum computer is supposed to be in the state
$|w\rangle$ so that upon measuring the qubits one finds the solution $w$
almost certainly (up to negligible exponentially small $1/N$ corrections).
However, due to imperfections in the gates or to environmental coupling and
decoherence, the system will instead be in some other state, described by the
density matrix $\rho$. The probability of obtaining the correct output $w$ when
one measures the qubits is $F=\langle w|\rho|w\rangle$. $F$ is known as the
fidelity of the calculation, which for GA coincides with the success probability
of the algorithm. Generically, the fidelity decays exponentially with
the number of gates applied in the calculation as well as with any idle time
(decoherence). This exponential behavior has been observed ubiquitously in
experiments and was established on a large scale in the seminal “quantum
supremacy” experiment of Google [49]. Denoting by $\epsilon$ the typical error
per gate and by $N_{g}$ the total number of gates used, we therefore expect the
fidelity to decay as $F\approx e^{-\epsilon N_{g}}$ with $N_{g}=r(N_{o}+N_{d})$,
where $N_{o}$ (resp. $N_{d}$) is the number of gates in the quantum circuit of
the oracle (resp. diffusion operator). In other words, the success probability
of GA is expected to follow a rather unfavorable double exponential decay with
$n$,
$\displaystyle
F\approx\exp\big{[}-\frac{\pi\epsilon}{4}(\sqrt{2})^{n}\,(N_{o}+N_{d})\big{]}.$
(27)
To substantiate the scaling of GA with noise and number of qubits, we have
performed classical simulations using an MPO state representation capable of
representing mixed states. For these simulations we also implement the oracle
and diffusion operators as MPO tensor networks as described in Section IV.1.
As discussed earlier, without noise the MPS bond dimension $\chi$ of the state
in between the oracle and diffusion steps (so at every step of these
simulations) is $\chi=2$, leading to an MPO-mixed-state representation of only
$\chi=4$ so that each step can be performed rather quickly and it is possible
to run the whole algorithm up to about $n=30$ qubits in under an hour. Adding
noise to the simulations only modifies the bond dimension of the state very
slightly and observed bond dimensions always remain less than $\chi\lesssim
10$.
To model the effects of noise, we apply a depolarizing noise channel
$\Delta_{\lambda}(\rho)=(1-\lambda)\rho+\frac{\lambda}{2^{n}}I$ in between
Grover iterations, that is once per iteration. By not including any noise
during the application of the oracle or the diffuser, our simulations are
generous toward the success of GA. The noise per iteration $\lambda$
relates simply to the noise per gate $\lambda=\epsilon(N_{o}+N_{d})$.
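For small $n$, this noise model can be checked directly against the closed-form results derived below (Eqs. (28) and (31)). The following dense density-matrix sketch uses arbitrary illustrative values of $n$, $\lambda$ and the marked bitstring $w$, unlike the MPO mixed-state representation used in our actual simulations:

```python
import numpy as np

n, lam = 8, 1e-2                               # illustrative values
N = 2 ** n
w = 3                                          # marked bitstring (arbitrary)

s = np.full(N, 1.0 / np.sqrt(N))               # uniform superposition |s>
oracle = np.eye(N)
oracle[w, w] = -1.0                            # U_w = 1 - 2|w><w|
diffuser = 2.0 * np.outer(s, s) - np.eye(N)    # U_s = 2|s><s| - 1
G = diffuser @ oracle                          # one Grover iteration

theta = np.arcsin(1.0 / np.sqrt(N))
r = int(round(np.pi / (4 * theta)))            # near-optimal iteration count
rho = np.outer(s, s)
for q in range(1, r + 1):
    rho = G @ rho @ G.T                        # noiseless oracle + diffuser
    rho = (1 - lam) * rho + lam * np.eye(N) / N  # depolarizing channel
    exact = ((1 - lam) ** q * np.sin((2 * q + 1) * theta) ** 2
             + (1 - (1 - lam) ** q) / N)       # <w|rho_q|w> from Eq. (28)
    assert abs(rho[w, w] - exact) < 1e-9
print("success probability after", r, "iterations:", rho[w, w])
```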
The results of our simulations for $n=30$ qubits and various levels of noise
$\lambda$ are shown in Fig. 4(a). As long as the noise $\lambda\lesssim
5\times 10^{-5}$ the final probability of success reaches a reasonable value
after the optimal number of iterations, which for $n=30$ is $r=25,736$.
However, the probability of success after $r$ iterations falls below $1\%$
once the noise becomes larger than about $10^{-4}$. Note that non-zero noise
leads to a maximum in the success probability at an earlier iteration
$q^{*}<r$. We analyze the height and location of this maximum further below.
Due to the transparent nature of GA and the depolarizing noise model, one can
actually work out an exact result for the state after $q$ steps of the
algorithm. By a recursive procedure, one finds:
$\displaystyle\rho_{q}=(1-\lambda)^{q}|\Psi_{q}\rangle\langle\Psi_{q}|+(1-(1-\lambda)^{q})I\frac{1}{2^{n}}$
(28)
where $|\Psi_{q}\rangle$ is the ideal pure state after $q$ noiseless Grover
iterations. Using a well-known result for the probability of success of the
noiseless GA after $q$ iterations [50]
$\displaystyle|\langle w|\Psi_{q}\rangle|^{2}$
$\displaystyle=\sin^{2}((2q+1)\theta)$ (29) $\displaystyle\theta$
$\displaystyle=\arcsin\Big{(}\frac{1}{\sqrt{2^{n}}}\Big{)}$ (30)
it follows that the probability of success after iteration $q$ is given by
$\displaystyle p_{q}=\langle
w|\rho_{q}|w\rangle=(1-\lambda)^{q}\sin^{2}((2q+1)\theta)\ \ .$ (31)
Since $q\gg 1$, ignoring exponentially small corrections, we have:
$\displaystyle p_{q}=e^{-\lambda q}\sin^{2}(2q\theta)\ \ .$ (32)
We show in Fig. 4 that this fit works well, though there is a slight
disagreement which we attribute not to the fit, but to a loss of precision in
the early iterations of the numerical simulations due to the use of double-
precision floating point numbers and the very small signal of GA through the
early iterations [51].
Interestingly, the form of $p_{q}$ Eq. (32) means that if one defines the
noise level in terms of a parameter $\Lambda$ such that $\lambda=\Lambda/r$
then the final success probability is
$\displaystyle p_{r}=e^{-\Lambda}$ (33)
regardless of the number of qubits, using the fact that
$\sin^{2}(2r\theta)\approx 1$ up to exponentially small corrections. One can
interpret $\Lambda=\lambda\cdot r$ as the total accumulated noise throughout a
complete run of GA. In Fig. 4 we show how the final success probability
$p_{r}$ observed in noisy simulations fits very well to $e^{-\Lambda}$.
In the presence of large noise, due to a maximum that develops in the fidelity
curve, it is advantageous for GA to stop the iteration at a certain smaller
value of $q=q^{*}$. An explicit calculation of the optimum of Eq. (32) provides
$\displaystyle\tan[2(r-q^{*})\theta]=\frac{\Lambda}{\pi}$ (34)
from which we arrive at the optimum success probability,
$\displaystyle p_{\rm
success}=\frac{e^{\frac{2\Lambda}{\pi}\arctan(\frac{\Lambda}{\pi})}}{1+(\Lambda/\pi)^{2}}e^{-\Lambda}.$
(35)
This formula behaves as $p_{\rm success}\approx e^{-\Lambda}$ for small
$\Lambda$ (approaching 1.0 as $\Lambda\rightarrow 0$) and behaves as $p_{\rm
success}\approx e^{-2}\cdot(\pi/\Lambda)^{2}$ for large $\Lambda$.
Because $p_{\rm success}$ depends only on $\Lambda$, an important conclusion
is that for GA to retain a sizable success probability at large values
of $n$, the total noise $\Lambda$ must be held independent of $n$. The noise
per iteration $\lambda$ must therefore scale as
$\displaystyle\lambda=\frac{\Lambda}{r}\propto\frac{\Lambda}{(\sqrt{2})^{n}}$
(36)
showing the noise per iteration must be reduced exponentially with $n$.
## VII On the possibility of quantum advantage in Grover’s algorithm
There are two kinds of quantum advantage: the theoretical one, i.e. the
possibility that in an idealized world a perfect quantum computer could
perform parametrically better than a classical one for a given task; and the
practical one, i.e. the possibility that an actual device does something
useful faster than a classical machine.
With respect to the first question, our QiGA merely reverses the burden of
proof: there is no theoretical quantum advantage unless proven otherwise, and
the question has to be decided in a case-by-case manner.
With respect to the second kind of quantum advantage involving an actual
machine, the existence of QiGA and the demands for implementing GA place
drastic bounds on the hardware needed which we will argue are simply out of
reach. When discussing hardware, there is a long list of specifications that we
could consider, including heat management, control electronics, classical data
bandwidth (e.g. for syndrome analysis), device variability, and power
consumption. Here we limit ourselves to discussing three aspects: the total
qubit count, the error budget per gate and the time to solution.
### VII.1 Absence of generic theoretical quantum advantage
QiGA implies that if the quantum circuit for the oracle can be simulated
classically, then $O(n)$ such calculations are sufficient to solve the problem
while a quantum computer needs $O(2^{n/2})$ calls to the oracle. An immediate
consequence is that no theoretical quantum advantage can be claimed
generically, i.e. irrespectively of the nature of the underlying quantum
circuit for the oracle. This is an important point to make with respect to the
large literature which assumes, implicitly or explicitly, the existence of a
quantum speed-up every time GA replaces its classical counterpart [52].
If the complexity for calculating one amplitude of the oracle is smaller than
$(\sqrt{2})^{n}$, then QiGA is parametrically faster than its quantum
counterpart. Constructing an oracle whose classical simulation is provably
harder than $(\sqrt{2})^{n}$ can most likely be done. Indeed, in the large
depth limit classical simulations of quantum circuits have a generic
complexity of $2^{n}$. Yet, we are not aware of such a construction for an
actual oracle (i.e. a circuit whose output amplitudes are only $\pm 1$).
Conversely, there are clear cases where classical algorithms win. For
instance, if the oracle can be simulated with a fixed depth, then the problem
can be solved in linear time using MPS technology while GA would require an
exponential time. The Quasi-1D SAT is another example.
We emphasize that our work does not contradict previous work that formally
proves that the quantum speed-up of GA is optimal [8]. While the proof is
certainly valid technically, its usefulness for a given problem requires the
best known classical strategy to scale as $2^{n}$ (i.e. worst-case classical
scaling) for that problem. But for any specific problem, if Grover’s algorithm
can be applied, there must exist an explicit circuit for the oracle. So there is
always at least some structure to the problem: the structure of the oracle
circuit. One can always try to simulate this oracle circuit by applying it to
a tensor network. Until such a “simulability check” has been done, the
applicability of the proof remains in doubt because the problem might not
satisfy the proof’s assumptions. In other words, one must be very careful with
using the concept of an “oracle” which, however appealing theoretically, might
not be relevant to practical applications.
A corollary of the existence of QiGA is that the quantum circuit associated
with the oracle of an NP-complete problem must be exponentially difficult to
simulate in the general case, i.e. it presents an exponentially high
entanglement barrier. Indeed, otherwise one could simulate it in polynomial
time, which would prove $P=NP$, a statement widely believed to be untrue. Hence QiGA
provides a direct connection between classical complexity theory and the
entanglement level of a quantum circuit.
Lastly, we would like to discuss the relation of this work to amplitude
amplification [7], a direct generalisation of GA. In some cases, there exist
fast classical heuristic algorithms that can solve NP-hard problems faster
than GA, though still scaling exponentially. For instance, there exist very
fast classical algorithms for the 3-SAT problem which we considered earlier in
our QiGA demonstrations (incidentally, among the best are tensor network
approaches [47]). Amplitude amplification is claimed to recover the quadratic
Grover speedup over such fast classical algorithms by combining these
algorithms with a slight modification of GA. Below, we show that QiGA applies
in this context as well. We again argue that the question of whether the
oracle can be simulated classically is a crucial one.
A classical heuristic algorithm takes the form of a function $b=G(r)$ that
proposes a bitstring $b$ as a possible solution. Here the variable $r$
captures the probabilistic nature of the heuristic, e.g. it can be the seed of
the pseudo-random number generator used in the heuristic. In a good heuristic,
$r$ needs to span a much smaller number of values than $b$. For instance, in
the context of 3-SAT, the Schoning algorithm [53] solves 3-SAT (with a
complexity $(4/3)^{n}$) by an iterative process over bitstrings where at each
stage an unsatisfied clause is chosen randomly and then a bit of the clause is
flipped in order for the clause to become satisfied. To transform this
heuristic into a quantum algorithm, amplitude amplification does not search
for the bitstring $b$ that satisfies $f(b)=1$ but instead uses GA to search
for the value of $r$ that satisfies $f(G(r))=1$ (see the discussion around
Theorem 6 of [7]), i.e. one needs to use GA with the oracle
$U_{w}|r\rangle=(-1)^{f(G(r))}|r\rangle$. It follows that our QiGA approach
applies directly to the amplitude amplification of these heuristics: one only
needs to modify the oracle in the same way it would be modified for the
quantum computer. Hence, the question of the existence of a theoretical
quantum advantage translates again into the one of an entanglement barrier,
here of the new oracle $f(G(r))$. Anticipating the discussion of the next
section, these heuristics make classical problems tractable up to very large
values of $n$. For instance, for 3-SAT, problems with $n\sim 10,000$ are well
within range. It follows that the entry point at which GA could compete on
these problems would be much higher than for problems for which no heuristic
is available. As we shall see in the estimates below, this would translate
into an inaccessible number of required qubits and astronomically long times
to solution.
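A hedged sketch of this construction is given below: a classical heuristic $b=G(r)$ (here a Schoning-style random walk whose randomness is entirely fixed by the seed $r$) is wrapped into the phase oracle $U_{w}|r\rangle=(-1)^{f(G(r))}|r\rangle$. The walk length and clause encoding are illustrative choices, not taken from [7] or [53]:

```python
import random

def G(r, n, clauses, steps=None):
    # Schoning-style walk: start from a random assignment, then repeatedly
    # pick an unsatisfied clause and flip one of its bits. The seed r fixes
    # all randomness, so G is a deterministic function of r.
    rng = random.Random(r)
    bits = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(steps or 3 * n):
        unsat = [c for c in clauses
                 if not any(bits[i] != neg for i, neg in c)]
        if not unsat:
            break
        i, _ = rng.choice(rng.choice(unsat))
        bits[i] ^= 1
    return bits

def oracle_sign(r, n, clauses):
    # The phase oracle U_w|r> = (-1)^{f(G(r))}|r> used by amplitude
    # amplification: apply the heuristic, then check its output.
    bits = G(r, n, clauses)
    sat = all(any(bits[i] != neg for i, neg in c) for c in clauses)
    return -1 if sat else +1

print([oracle_sign(r, 4, [[(0, False), (1, True), (2, False)]]) for r in range(4)])
```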
### VII.2 Absence of practical quantum advantage on a noisy quantum computer
We now turn to the question of a practical advantage and provide some
estimates of the specifications a quantum hardware must fulfill to solve a
task better than what one can do classically. We start by estimating the total
gate count $N_{g}$. The diffusion operator typically requires $2n$ Toffoli
gates and the oracle at least the same (a simpler oracle would likely be
rather easy to solve classically). Each Toffoli gate must be decomposed into
15 gates (including $6$ Control-NOT and $7$ T gates). We arrive at a total
gate count for GA of $N_{g}\geq 60\,n\,2^{n/2}$ assuming perfect connectivity
(i.e. that the two-qubit gates can be applied between any pairs of qubits). In
order for the final success probability to be of order unity (here we choose
$p_{\rm success}=1/2$) we need $\Lambda\approx 0.8$ which translates into
$\epsilon\leq 1/(60\,n\,2^{n/2})$.
It follows that in order to apply GA on just $5$ qubits, one needs
$\epsilon\leq 5\times 10^{-4}$, which is much better than any existing hardware.
Indeed, the experimental value of the error per gate $\epsilon$ has been
mostly stable in the last ten years, typically around $\epsilon\approx 0.01$
for the state of the art [49] and slightly better for some systems with few
($<10$) qubits. Previous applications of GA for a few qubits used a much
smaller gate count in order to retain a large enough fidelity. This is
possible for contrived examples where one uses Eq. (3) instead of Eq. (2) i.e.
one explicitly uses information about the solution $w$ instead of performing
the classical logic of computing $f(b)$. While this is perfectly acceptable
for proof of principle experiments, this does not correspond to the full
application of GA to actually solve a problem. Going to $n=40$, which we can
easily solve on a desktop computer using QiGA, leads to $\epsilon\leq
4\times 10^{-10}$. Manipulating tens of qubits with precisions better than one
part in a billion for a total of billions of gates is in our view completely
unrealistic. Using the best algorithms available on supercomputers
(see [32] for a recent review) to perform QiGA, we estimate that $n=80$
problems are accessible (most probably even $n\geq 100$). Solving
such a problem would require $\epsilon\leq 2\times 10^{-16}$ as well as a time to
solution (assuming very fast $10$ ns gates) of more than one year of
uninterrupted running time.
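These figures follow from simple arithmetic; the sketch below reproduces them under the stated assumptions ($N_{g}=60\,n\,2^{n/2}$ total gates, a total-noise budget $\Lambda\approx 0.8$ for a final success probability of $1/2$, and 10 ns per gate):

```python
# Back-of-the-envelope reproduction of the hardware estimates above.
SECONDS_PER_YEAR = 3.15e7
for n in (5, 40, 80):
    N_g = 60 * n * 2 ** (n / 2)                # total gate count for GA
    eps = 0.8 / N_g                            # required error per gate
    years = N_g * 10e-9 / SECONDS_PER_YEAR     # time to solution at 10 ns/gate
    print(f"n = {n:2d}: eps <= {eps:.0e}, time ~ {years:.1e} years")
```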
### VII.3 Absence of practical quantum advantage on a fault tolerant quantum
computer
The problem of limited qubit fidelity can in principle be solved by quantum
error correction, which should make it possible to lower the effective error
level per gate $\epsilon_{L}$ by constructing robust logical qubits out of
several physical qubits. However, quantum error correction trades a better
effective $\epsilon_{L}$ for a much higher qubit count $n_{L}$ as well as a
much longer time to solution, since many physical qubit operations are
required to perform a single logical one. Hence, we can already anticipate that
quantum error correction is unlikely to help given the above already high
estimate of the time to solution.
To make quantitative estimates, we focus on the example of the surface code,
one of the most robust QEC codes with a clear path to implementation [54]. We
ignore non-correctable errors [55] for simplicity. We also ignore the so-
called syndrome analysis although this would be quite problematic in practice.
In the surface code, the error $\epsilon_{\rm L}$ per logical gate on a
logical qubit scales as
$\displaystyle\epsilon_{\rm L}\propto\epsilon_{\rm
ph}\left(\frac{\epsilon_{\rm ph}}{\epsilon_{\rm th}}\right)^{\sqrt{N_{c}}/2}$
(37)
where $N_{c}$ is the number of physical qubits per logical qubit,
$\epsilon_{\rm ph}$ is the error per gate on the physical qubits and
$\epsilon_{\rm th}$ is the threshold of the code, around $\epsilon_{\rm
th}\approx 0.01$ for the surface code [54]. The exponentially long running
time implies that one must have $\epsilon_{\rm L}(\sqrt{2})^{n}\leq 1$, i.e.
$(\sqrt{N_{c}}/2)\log(\epsilon_{\rm th}/\epsilon_{\rm ph})\gtrsim(n/2)\log 2$.
Ignoring logarithmic corrections, this translates into
$\displaystyle N_{c}\propto n^{2}.$ (38)
i.e. the number of physical qubits per logical one increases quadratically
with $n$ in sharp contrast to e.g. Shor’s algorithm where $N_{c}\propto(\log
n)^{2}$ has a weak logarithmic increase. We can already surmise that, since
the estimates for algorithms with exponential speed-up already involve
millions, or more realistically billions, of very high quality physical qubits
to address $n=100$, the added complexity in Grover would move these estimates
far beyond anything reasonable [56].
To continue, we need to modify our gate count estimate and consider separately
the Control-NOT gates and the T gates. In the surface code, Control-NOT gates
are performed through “braiding”, a process that requires making a loop of one
qubit around the other, which costs an extra $\sqrt{n}\sqrt{N_{c}}$ factor.
Also, the time scale is no longer limited by the time it takes to execute one
gate but by the time to make a projective measurement of the so-called
stabilizers. For instance, in superconducting transmon qubits, the former can
be as low as $10$ ns while the latter takes around $1\,\mu$s. Systems based on
atomic physics such as cold ions are typically much slower. Also, the
effective error per gate is likely to be limited by the measurement errors,
which are typically much worse than the gate errors themselves. Considering
only Control-NOT operations, we arrive at a time to solution of $10^{5}$ years
for $n=80$.
Applying T gates on a surface code requires the availability of special states
that can be obtained through “magic state distillation”, a lengthy process
that involves several logical qubits for each T gate that one wants to apply.
In order for the already rather large time to solution not to be slowed down
further by the T gates, one would need to incorporate a large number of “T gate
factories” on the quantum chip, thereby raising the total qubit count
dramatically. We need not elaborate further; it should be clear at this
stage that the prospects of quantum error correction for the implementation of
GA are extremely doubtful.
## VIII Conclusion
Grover’s algorithm is an elegant intellectual construction. Unfortunately our
analysis indicates that it will remain so for the foreseeable future.
We have constructed a quantum-inspired version of Grover’s algorithm that can
solve the Grover problem in a single call to the oracle, requiring exponentially
fewer steps than Grover’s algorithm, provided one is able to compute
individual amplitudes of the oracle. We have also provided specific cases
where this “classical advantage” can be realized.
Since our classical algorithm is fully general, it provides a clear benchmark
against which one can evaluate the potential speed-up of Grover’s algorithm both
theoretically and practically. While we cannot exclude a _theoretical_ quantum
scaling advantage for every problem, assuming an idealized quantum
implementation, we estimate that practical quantum implementations will be
associated with astronomically large computing times. On the other hand,
problems for which a quantum implementation may seem necessary could have
hidden structure revealed by our classical algorithm in the form of low
entanglement barriers on the way to a solution. And even if the entanglement
barrier does grow with problem size and produces an exponential scaling, it
remains possible this scaling could be better than $2^{n/2}$ for specific
classes of problems.
A work which has some overlap with ours is the proposal of Chamon and Mucciolo
[57] to use MPS tensor networks as a classical computing platform for solving
Grover’s algorithm problems. An important technical difference, however, is
that their algorithm depends on computing and comparing exponentially small
numbers, which could become challenging when working with fixed-precision
floating-point arithmetic. Also, while their work discusses the growth of entanglement in
Grover oracles, only worst-case bounds are stated.
A separate line of work based on the problem size needed for a
quantum/classical scaling crossover has also cast doubt on the usefulness of
Grover’s algorithm since it only offers at best a quadratic speedup, while the
estimated speeds of error-corrected quantum operations are expected to remain
much slower than classical operations [58, 59]. This is an entirely distinct
argument from ours which further casts doubt on the usefulness of Grover’s
algorithm as a practical tool.
Beyond the above rather negative results, our quantum inspired algorithm may
also lead to more positive results. For instance, while we have focused on
exact calculations of the quantum circuit amplitudes, an interesting
possibility would be to construct the MPS from approximate calculations of the
amplitudes $\langle\beta|\Psi_{w}\rangle$ using standard MPS compression
techniques. It is unclear if the resulting MPS would provide an efficient
heuristic for solving the Grover problem.
###### Acknowledgements.
EMS acknowledges insightful discussions with Jielun Chris Chen, Matthew
Fishman, Roger Mong, Vadim Oganesyan, Nicola Pancotti, Olivier Parcollet,
Dries Sels, Simon Trebst, and Steve White. The Flatiron Institute is a
division of the Simons Foundation. XW acknowledges funding from the French ANR
QPEG and the Plan France 2030 ANR-22-PETQ-0007 “EPIQ”.
## References
* Shor [1994] P. Shor, Algorithms for quantum computation: discrete logarithms and factoring, in _Proceedings 35th Annual Symposium on Foundations of Computer Science_ (1994) pp. 124–134.
* Beauregard [2002] S. Beauregard, Circuit for shor’s algorithm using 2n+3 qubits 10.48550/ARXIV.QUANT-PH/0205095 (2002).
* Kitaev [1995] A. Y. Kitaev, Quantum measurements and the abelian stabilizer problem (1995).
* Harrow _et al._ [2009] A. W. Harrow, A. Hassidim, and S. Lloyd, Quantum algorithm for linear systems of equations, Phys. Rev. Lett. 103, 150502 (2009).
* Grover [1997] L. K. Grover, Quantum mechanics helps in searching for a needle in a haystack, Phys. Rev. Lett. 79, 325 (1997).
* Grover [2001] L. K. Grover, From Schrödinger’s equation to the quantum search algorithm, Pramana - J. Phys. 56, 333 (2001).
* Brassard _et al._ [2002] G. Brassard, P. Hoyer, M. Mosca, and A. Tapp, Quantum amplitude amplification and estimation, Contemporary Mathematics 305, 53 (2002).
* Bennett _et al._ [1997] C. H. Bennett, E. Bernstein, G. Brassard, and U. Vazirani, Strengths and weaknesses of quantum computing, SIAM Journal on Computing 26, 1510 (1997), https://doi.org/10.1137/S0097539796300933 .
* Baritompa _et al._ [2005] W. P. Baritompa, D. W. Bulger, and G. R. Wood, Grover’s quantum algorithm applied to global optimization, SIAM Journal on Optimization 15, 1170 (2005).
* Wei _et al._ [2020] A. Y. Wei, P. Naik, A. W. Harrow, and J. Thaler, Quantum algorithms for jet clustering, Phys. Rev. D 101, 094015 (2020).
* Dürr _et al._ [2006] C. Dürr, M. Heiligman, P. HOyer, and M. Mhalla, Quantum query complexity of some graph problems, SIAM Journal on Computing 35, 1310 (2006).
* Stamatopoulos _et al._ [2020] N. Stamatopoulos, D. J. Egger, Y. Sun, C. Zoufal, R. Iten, N. Shen, and S. Woerner, Option Pricing using Quantum Computers, Quantum 4, 291 (2020).
* Ramesh and Vinay [2003] H. Ramesh and V. Vinay, String matching in o(n+m) quantum time, Journal of Discrete Algorithms 1, 103 (2003), combinatorial Algorithms.
* Aïmeur _et al._ [2013] E. Aïmeur, G. Brassard, and S. Gambs, Quantum speed-up for unsupervised learning, Machine Learning 90, 261 (2013).
* Kapoor _et al._ [2016] A. Kapoor, N. Wiebe, and K. Svore, Quantum perceptron models, in _Advances in Neural Information Processing Systems_, Vol. 29, edited by D. Lee, M. Sugiyama, U. Luxburg, I. Guyon, and R. Garnett (Curran Associates, Inc., 2016).
* Dong _et al._ [2008] D. Dong, C. Chen, H. Li, and T.-J. Tarn, Quantum reinforcement learning, IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 38, 1207 (2008).
* Dewes _et al._ [2012] A. Dewes, R. Lauro, F. R. Ong, V. Schmitt, P. Milman, P. Bertet, D. Vion, and D. Esteve, Quantum speeding-up of computation demonstrated in a superconducting two-qubit processor, Phys. Rev. B 85, 140503 (2012).
* Figgatt _et al._ [2017] C. Figgatt, D. Maslov, K. A. Landsman, N. M. Linke, S. Debnath, and C. Monroe, Complete 3-qubit grover search on a programmable quantum computer, Nature communications 8, 1 (2017).
* Mandviwalla _et al._ [2018] A. Mandviwalla, K. Ohshiro, and B. Ji, Implementing Grover’s Algorithm on the IBM Quantum Computers, in _2018 IEEE International Conference on Big Data (Big Data)_ (2018) pp. 2531–2537.
* Pokharel and Lidar [2022] B. Pokharel and D. Lidar, Better-than-classical grover search via quantum error detection and suppression (2022).
* Östlund and Rommer [1995] S. Östlund and S. Rommer, Thermodynamic limit of density matrix renormalization, Phys. Rev. Lett. 75, 3537 (1995).
* Vidal [2003] G. Vidal, Efficient classical simulation of slightly entangled quantum computations, Phys. Rev. Lett. 91, 147902 (2003).
* Perez-Garcia _et al._ [2007] D. Perez-Garcia, F. Verstraete, M. M. Wolf, and J. I. Cirac, Matrix product state representations, Quantum Info. Comput. 7, 401–430 (2007).
* McCulloch [2007] I. P. McCulloch, From density-matrix renormalization group to matrix product states, Journal of Statistical Mechanics: Theory and Experiment 2007, P10014 (2007).
* Verstraete _et al._ [2004] F. Verstraete, J. J. García-Ripoll, and J. I. Cirac, Matrix product density operators: Simulation of finite-temperature and dissipative systems, Phys. Rev. Lett. 93, 207204 (2004).
* Crosswhite and Bacon [2008] G. M. Crosswhite and D. Bacon, Finite automata for caching in matrix product algorithms, Phys. Rev. A 78, 012356 (2008).
* Pirvu _et al._ [2010] B. Pirvu, V. Murg, J. I. Cirac, and F. Verstraete, Matrix product operator representations, New Journal of Physics 12, 025012 (2010).
* Zaletel _et al._ [2015] M. P. Zaletel, R. S. K. Mong, C. Karrasch, J. E. Moore, and F. Pollmann, Time-evolving a matrix product state with long-ranged interactions, Phys. Rev. B 91, 165112 (2015).
* Nielsen and Chuang [2016] M. A. Nielsen and I. L. Chuang, _Quantum Computation and Quantum Information (10th Anniversary edition)_ (Cambridge University Press, 2016).
* Ferris and Vidal [2012] A. J. Ferris and G. Vidal, Perfect sampling with unitary tensor networks, Phys. Rev. B 85, 165146 (2012).
* Stoudenmire and White [2010] E. M. Stoudenmire and S. R. White, Minimally entangled typical thermal state algorithms, New Journal of Physics 12, 055026 (2010).
* Ayral _et al._ [2022] T. Ayral, T. Louvet, Y. Zhou, C. Lambert, E. M. Stoudenmire, and X. Waintal, A density-matrix renormalization group algorithm for simulating quantum circuits with a finite fidelity (2022).
* Oseledets and Tyrtyshnikov [2010] I. Oseledets and E. Tyrtyshnikov, Tt-cross approximation for multidimensional arrays, Linear Algebra and its Applications 432, 70 (2010).
* Savostyanov and Oseledets [2011] D. Savostyanov and I. Oseledets, Fast adaptive interpolation of multi-dimensional arrays in tensor train format, in _The 2011 International Workshop on Multidimensional (nD) Systems_ (IEEE, 2011) pp. 1–8.
* Savostyanov [2014] D. V. Savostyanov, Quasioptimality of maximum-volume cross interpolation of tensors, Linear Algebra and its Applications 458, 217 (2014).
* Dolgov and Savostyanov [2020] S. Dolgov and D. Savostyanov, Parallel cross interpolation for high-precision calculation of high-dimensional integrals, Computer Physics Communications 246, 106869 (2020).
* Núñez Fernández _et al._ [2022] Y. Núñez Fernández, M. Jeannin, P. T. Dumitrescu, T. Kloss, J. Kaye, O. Parcollet, and X. Waintal, Learning feynman diagrams with tensor trains, Phys. Rev. X 12, 041018 (2022).
* Massacci and Marraro [2000] F. Massacci and L. Marraro, Logical cryptanalysis as a sat problem, Journal of Automated Reasoning 24, 165 (2000).
* Mironov and Zhang [2006] I. Mironov and L. Zhang, Applications of sat solvers to cryptanalysis of hash functions, in _International Conference on Theory and Applications of Satisfiability Testing_ (Springer, 2006) pp. 102–115.
* [40] L. Perron and V. Furnon, Or-tools.
* Corblin _et al._ [2007] F. Corblin, L. Bordeaux, Y. Hamadi, E. Fanchon, and L. Trilling, A sat-based approach to decipher gene regulatory networks, Integrative Post-Genomics, RIAMS, Lyon (2007).
* Fishman _et al._ [2022a] M. Fishman, S. R. White, and E. M. Stoudenmire, The ITensor Software Library for Tensor Network Calculations, SciPost Phys. Codebases , 4 (2022a).
* Fishman _et al._ [2022b] M. Fishman, S. R. White, and E. M. Stoudenmire, Codebase release 0.3 for ITensor, SciPost Phys. Codebases , 4 (2022b).
* Chen _et al._ [2018] J. Chen, F. Zhang, C. Huang, M. Newman, and Y. Shi, Classical simulation of intermediate-size quantum circuits (2018), arXiv:1805.01450 [quant-ph] .
* Gray and Kourtis [2021] J. Gray and S. Kourtis, Hyper-optimized tensor network contraction, Quantum 5, 410 (2021).
* Pan _et al._ [2022] F. Pan, K. Chen, and P. Zhang, Solving the sampling problem of the sycamore quantum circuits, Phys. Rev. Lett. 129, 090502 (2022).
* Kourtis _et al._ [2019] S. Kourtis, C. Chamon, E. R. Mucciolo, and A. E. Ruckenstein, Fast counting with tensor networks, SciPost Phys. 7, 060 (2019).
* Koch _et al._ [2019] D. Koch, A. Torrance, D. Kinghorn, S. Patel, L. Wessing, and P. M. Alsing, Simulating quantum algorithms using fidelity and coherence time as principle models for error (2019), arxiv:1098.04229 .
* Arute _et al._ [2019] F. Arute, K. Arya, R. Babbush, D. Bacon, J. C. Bardin, R. Barends, R. Biswas, S. Boixo, F. G. Brandao, D. A. Buell, _et al._ , Quantum supremacy using a programmable superconducting processor, Nature 574, 505 (2019).
* Zhang and Korepin [2020] K. Zhang and V. E. Korepin, Depth optimization of quantum search algorithms beyond grover’s algorithm, Phys. Rev. A 101, 032346 (2020).
* SaiToh [2013] A. SaiToh, A multiprecision C++ library for matrix-product-state simulation of quantum computing: Evaluation of numerical errors, in _Journal of Physics: Conference Series_ , Vol. 454 (IOP Publishing, 2013) p. 012064.
* Montanaro [2016] A. Montanaro, Quantum algorithms: an overview, npj Quantum Information 2, 1 (2016).
* Schoning [1999] T. Schoning, A probabilistic algorithm for k-sat and constraint satisfaction problems, in _40th Annual Symposium on Foundations of Computer Science (Cat. No. 99CB37039)_ (IEEE, 1999) pp. 410–414.
* Fowler _et al._ [2012] A. G. Fowler, M. Mariantoni, J. M. Martinis, and A. N. Cleland, Surface codes: Towards practical large-scale quantum computation, Phys. Rev. A 86, 032324 (2012).
* Waintal [2019] X. Waintal, What determines the ultimate precision of a quantum computer, Phys. Rev. A 99, 042318 (2019).
* Reiher _et al._ [2017] M. Reiher, N. Wiebe, K. M. Svore, D. Wecker, and M. Troyer, Elucidating reaction mechanisms on quantum computers, Proceedings of the National Academy of Sciences 114, 7555 (2017), https://www.pnas.org/doi/pdf/10.1073/pnas.1619152114 .
* Chamon and Mucciolo [2012] C. Chamon and E. R. Mucciolo, Virtual parallel computing and a search algorithm using matrix product states, Phys. Rev. Lett. 109, 030503 (2012).
* Babbush _et al._ [2021] R. Babbush, J. R. McClean, M. Newman, C. Gidney, S. Boixo, and H. Neven, Focus beyond quadratic speedups for error-corrected quantum advantage, PRX Quantum 2, 010103 (2021).
* [59] M. Troyer, Disentangling hype from reality: Achieving practical quantum advantage., Q2B Practical Quantum Computing Conference, 2020.
* Aaronson and Gottesman [2004] S. Aaronson and D. Gottesman, Improved simulation of stabilizer circuits, Phys. Rev. A 70, 052328 (2004).
* Chen _et al._ [2022] S. Chen, J. Cotler, H.-Y. Huang, and J. Li, The complexity of nisq (2022).
* Gottesman [1998] D. Gottesman, The heisenberg representation of quantum computers 10.48550/ARXIV.QUANT-PH/9807006 (1998).
* Garcia-Saez and Latorre [2011] A. Garcia-Saez and J. I. Latorre, An exact tensor network for the 3SAT problem 10.48550/ARXIV.1105.3201 (2011).
* Ma and Yang [2022] L. Ma and C. Yang, Low rank approximation in simulations of quantum algorithms, Journal of Computational Science 59, 101561 (2022).
* Regev and Schiff [2008] O. Regev and L. Schiff, Impossibility of a quantum speed-up with a faulty oracle, in _International Colloquium on Automata, Languages, and Programming_ (Springer, 2008) pp. 773–781.
* Nest [2008] M. V. d. Nest, Classical simulation of quantum computation, the gottesman-knill theorem, and slightly beyond 10.48550/ARXIV.0811.0898 (2008).
* Yang _et al._ [2021] S. Yang, W. Zi, B. Wu, C. Guo, J. Zhang, and X. Sun, Efficient quantum circuit synthesis for sat-oracle with limited ancillary qubit (2021), arXiv:2101.05430 .
## Appendix A Quantum Circuits for Oracle and Diffusion Operators
We discuss here some details of the quantum circuit used in the _simulations_
of GA, not to be confused with the QiGA calculations.
For obtaining the entanglement entropy plot Fig. 2, which shows the
entanglement not only between Grover iterations but also inside of the
substeps of the oracle and diffusion circuits, we used the circuits shown in
the figure below. These circuits are for the case where the target bitstring
$w$ is known. (The target for the diffusion operator is always known, since it
can be implemented as the oracle which targets $|000...0\rangle$ pre- and
post-processed by a Hadamard gate on each qubit.) The circuit pattern below
uses at most three-qubit gates. This is in contrast to the implementation
sometimes seen where the oracle is implemented by a “multi-controlled” gate,
which is equivalent to our observation in the main text that the oracle can
always in principle be implemented by a rank $\chi=2$ MPO (for the case of a
single targeted bitstring $w$).
For the specific case of four qubits and a target bitstring $w=1011$, the
oracle circuit pattern used for the Fig. 2 simulation was:
For Grover’s algorithm on $n$ qubits, the operator above is initialized by
preparing $n+1$ additional ancilla qubits, $n$ in the $|0\rangle$ state and
the $(n+1)$-th in the $|-\rangle=H|1\rangle$ state. By using Toffoli gates
acting on the upper and lower registers, the ancillas are flipped to indicate
that each qubit of the target bitstring has been found (upper control) and
that all previous bits have been found (lower control). If so, the next
ancilla qubit is flipped to 1.
The notation of the white circle for the upper control of the second Toffoli
gate stands for an “anti-control” meaning the gate acts only if that qubit is
$|0\rangle$. This kind of control can be viewed as shorthand for:
that is, NOT gates on either side of a regular control.
At the center of the circuit, if the $n^{\text{th}}$ ancilla qubit is 1, then
the $(n+1)$-th ancilla is acted on by a NOT gate, which results in a minus sign
(“phase kickback mechanism”) for the amplitude of the state with the target
bitstring in the upper register. Lastly, the Toffoli gates are acted in
reverse order to “uncompute” the ancilla register, restoring all of the
ancillas to their initial product state. It is easy to check by inspection
that applying the above circuit to $|b\rangle|0000-\rangle_{A}$ leads to
$\pm|b\rangle|0000-\rangle_{A}$ depending on whether $b=w$ or not.
# UniTSA: A Universal Reinforcement Learning Framework for V2X Traffic Signal
Control
Maonan Wang, Xi Xiong, Yuheng Kan, Chengcheng Xu, Man-On Pun
School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, China, and Shanghai AI Laboratory, Shanghai, China. Key Laboratory of Road and Traffic Engineering, Ministry of Education, Tongji University, Shanghai, China. SenseTime Group Limited and Shanghai AI Laboratory, Shanghai, China. School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen 518172, China. Corresponding author<EMAIL_ADDRESS>
###### Abstract
Traffic congestion is a persistent problem in urban areas, which calls for the
development of effective traffic signal control (TSC) systems. While existing
Reinforcement Learning (RL)-based methods have shown promising performance in
optimizing TSC, it is challenging to generalize these methods across
intersections of different structures. In this work, a universal RL-based TSC
framework is proposed for Vehicle-to-Everything (V2X) environments. The
proposed framework introduces a novel agent design that incorporates a
junction matrix to characterize intersection states, making the proposed model
applicable to diverse intersections. To equip the proposed RL-based framework
with enhanced capability of handling various intersection structures, novel
traffic state augmentation methods are tailor-made for signal light control
systems. Finally, extensive experimental results derived from multiple
intersection configurations confirm the effectiveness of the proposed
framework. The source code in this work is available at
https://github.com/wmn7/Universal_Light
_Keywords_ Traffic signal control $\cdot$ Universal models $\cdot$
Reinforcement learning $\cdot$ Traffic state augmentation
## 1 Introduction
Traffic congestion presents a critical challenge in urban areas worldwide,
leading to wasted time for individuals, excessive fuel consumption, and
increased greenhouse gas emissions [1]. To alleviate congestion, conventional
traffic signal control (TSC) methods such as fixed-cycle traffic control [2],
the Webster method [3], and Self-Organizing Traffic Light Control (SOTL) [4]
have been developed. However, as cities continue to grow, these traditional
traffic management approaches often prove insufficient to handle increasing
traffic volumes and dynamic road conditions [5]. The emergence of V2X
technologies in transportation systems offers a promising solution by
facilitating communication and data exchange among vehicles, infrastructure,
and other road users [6]. By leveraging real-time data derived from vehicles,
the traffic management infrastructure can efficiently regulate vehicle and
pedestrian movements at intersections by dynamically adapting to the changing
traffic conditions [7].
To control signal lights based on real-time traffic conditions, various RL-
based methods have been proposed. These methods employ three primary
approaches for adjusting traffic lights, namely “Choose the next phase", “Keep
or change the phase" and “Set the phase duration". More specifically, in
“Choose the next phase", the RL agent determines the next phase to activate,
allowing for a flexible phase sequence rather than a predetermined one [8, 9,
10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22]. While this approach
offers flexibility, it has the potential to confuse drivers since the phase
selection may appear random, which could increase the risk of traffic
accidents. For the “Keep or change the phase" approach, the RL agent decides
whether to maintain or change the current phases [23, 24, 25]. Finally, the
“Set the phase duration" approach selects the optimal duration for the current
phase from a set of predefined options [26, 27, 28]. Through direct
interaction with the environment, the RL agent effectively learns to adapt to
traffic condition changes based on real-world experiences.
Despite the significant improvements achieved by the aforementioned RL-based
methods, a major limitation is that they are designed for intersections of
designated structures. In other words, these RL models have to be redesigned
and re-trained from scratch when dealing with intersections of different
structures including approaching roads, lanes, and phases, which incurs
substantial resources in terms of data collection, model development, and
testing [29]. Thus, it is crucial to develop universal models that can be
easily adapted and deployed across a wide range of intersections. As a result,
the implementation of V2X can be efficiently scaled up without requiring
extensive customization or redevelopment for each individual intersection
[30]. In the literature, several generalized models were proposed for
different junctions [11, 18, 10, 12]. Despite their good performance, these
generalized models are only applicable to those specific intersection
configurations considered in their design. Furthermore, these models exhibit
performance degradation when encountering intersections of unseen
configurations.
Motivated by the discussions above, we present a Universal Traffic State
Augmentation RL-based framework (UniTSA). The proposed framework enables the
training of a universal agent using augmented data for TSC. In order to handle
intersections of diverse configurations, a junction matrix is developed to
describe the intersection state. By using this newly proposed matrix,
intersections of different structures can be characterized by matrices of the
same size. In addition, the “Keep or change the current phase” approach is
employed as the action design in this work, ensuring consistent model
structures across intersections of different configurations. To cope with
unseen intersection structures, five traffic state augmentation methods are
developed to enrich the agent’s data collection during the training process.
These augmentation methods provide more comprehensive training, which
ultimately leads to improved performance in intersections of configurations
not available in the training set. Furthermore, the Low-Rank Adaptation (LoRA)
[31] is employed to further fine-tune the model for crucial intersections.
Finally, extensive experiments using the Simulation of Urban MObility (SUMO)
[32] platform are conducted by taking into account intersections of various
approaching roads, lanes, and phases. The experimental results demonstrate
that the proposed UniTSA model achieves excellent performance even in unseen
intersections.
Our contributions can be summarized as follows:
* •
An adaptive TSC framework called UniTSA is proposed for V2X by leveraging a
universal RL model built upon novel agent designs that can handle diverse
intersection structures. In addition, a fine-tuning mechanism is devised to
further enhance the performance in key intersections;
* •
Traffic state augmentation methods are developed for the proposed TSC
framework, enhancing the agent’s understanding of diverse intersections with
improved performance in both training and testing sets;
* •
Extensive experiments on $12$ intersections of diverse structures confirm that
the proposed UniTSA model substantially outperforms conventional universal
models. Moreover, given a new intersection, UniTSA can fine-tune its pre-
trained model to achieve comparable or even better performance with
significantly reduced training time as compared to training a new model from
scratch.
The remainder of the paper is structured as follows: Section 2 provides a
summary on the TSC-related work whereas Section 3 introduces the terminology
related to road and traffic signals. After that, Section 4 presents the
proposed UniTSA framework and the five traffic state augmentation methods
before Section 5 elaborates on the experimental setup and results on UniTSA.
Finally, Section 6 concludes the paper.
## 2 Related Work
Extensive research has been conducted in the field of transportation to study
TSC. Conventionally, fixed-time signal control is one of the earliest and
widely used approaches in TSC [33]. These methods rely on pre-determined
signal timings based on historical traffic patterns or engineering guidelines.
Various optimization techniques have been proposed to determine optimal fixed-
time plans for specific intersection configurations, where the Webster [3]
method is one of the most successful TSC methods for the single intersection
case. It can calculate the cycle length and phase split for a single
intersection according to the traffic volume during a certain period (i.e.,
past $15$ minutes or $30$ minutes). However, such fixed-time control methods
often incur suboptimal performance due to their lack of adaptability to
dynamically changing traffic conditions. To cope with this problem, actuated
control methods such as Sydney Coordinated Adaptive Traffic System (SCATS)
[34], Max-pressure control [35] and Self-Organizing Traffic Light Control
(SOTL) [4] were designed to adaptively adjust signal timings based on real-
time traffic demand. Despite their many advantages, these actuated control
methods are handicapped by the need for expert-tuned settings at
each intersection. Furthermore, the performance of these actuated control
methods degrades in complex scenarios.
Recently, RL-based TSC methods have attracted substantial attention due to
their outstanding adaptability to real-time traffic conditions and impressive
capability of learning the optimal control policies in complex scenarios [36].
Generally speaking, these RL-based TSC methods can be categorized into three
categories, namely the valued-based methods [8, 23, 25, 9, 10, 11, 12, 13,
14], the policy-based methods [15, 16, 17, 18, 19] and the actor-critic
methods [20, 28, 21, 22]. Despite their good performance, most existing RL-
based TSC methods focus on training models for specific intersection
configurations or scenarios. A few attempts on training generalized TSC models
have been made in the literature. For instance, MetaLight [11] trains a more
universal model by incorporating the meta-learning strategy proposed in [37].
However, MetaLight requires re-training of its model parameters for each new
intersection encountered. To overcome this shortcoming, [18, 10, 12]
established universal models through parameter sharing. However, these methods
do not preserve the original phase structure of traffic lights. In contrast,
our proposed method can maintain the original signal light structure while
fine-tuning crucial intersections to achieve significantly improved
performance.
Figure 1: A standard $4$-way intersection illustration.
## 3 Preliminary
In this section, some concepts used in this work are defined using a standard
four-way intersection as depicted in Fig. 1 as an example. These concepts can
be easily extended to the intersections of different structures.
* •
Lane: A lane refers to a roadway that provides a defined path for vehicles to
travel in a specific direction. At a typical intersection, there are two types
of lanes: incoming lanes $l_{in}$ (where vehicles enter) and outgoing lanes
$l_{out}$ (where vehicles exit);
* •
Traffic Movement: A traffic movement refers to the connection between an
incoming lane $l_{in}$ to an outgoing lane $l_{out}$. For the common $4$-way
intersection in the left side of Fig. 1, there are $12$ movements in total,
including right turns, left turns, and through movements in each of the four
directions;
* •
Movement Signal: A movement signal is defined on the traffic movement. The
green signal means the corresponding movement is allowed whereas the red
signal means it is prohibited. As right-turn traffic can move regardless of the
signal, only eight movement signals out of the $12$ possible movements in a
$4$-way intersection are used. More specifically, these eight movements
denoted by $m_{1},m_{2},\cdots,m_{8}$ are Northbound (N), Northbound Left-turn
(NL), Eastbound (E), Eastbound Left-turn (EL), Westbound(W), Westbound Left-
turn (WL), Southbound (S), Southbound Left-turn (SL) movements, respectively.
For instance, $m_{8}$ indicates that the vehicles can travel from west to
north;
* •
Phase: A phase is a combination of movement signals. Each phase allows a
specific group of traffic movements to occur while restricting others. The
top-right portion of Fig. 1 illustrates the four phases of a $4$-way
intersection. For instance, Phase-1 involves $M_{1}$ and $M_{5}$, enabling
vehicles traveling north-south to proceed while simultaneously prohibiting
other movements;
* •
Signal Plan: A signal plan represents a prearranged sequence and duration of
phases used to control the traffic signals at an intersection. Mathematically,
a signal plan is denoted by
$\\{(p_{1},t_{1}),(p_{2},t_{2}),\cdots,(p_{i},t_{i}),\cdots\\}$, where $p_{i}$
and $t_{i}$ represent a phase and its duration, respectively. Usually, the
phase sequence is in a cyclic order. For instance, the bottom-right portion of
Fig. 1 shows a cycle-based signal plan and the duration of each phase is
$t_{1}=50$, $t_{2}=20$, $t_{3}=50$ and $t_{4}=20$ as an example.
Figure 2: The overall framework of UniTSA.
## 4 Methodology
### 4.1 Framework
As shown in Fig. 2, the proposed UniTSA framework consists of two modules, one
designed to train a universal agent through different intersections and one
for fine-tuning the model for key intersections. More specifically, the first
module capitalizes on a novel RL agent design that characterizes intersections
of various topologies and signal schemes with a single network structure by
building features on traffic movements, adopting a keep-or-change action
design, and exploiting five novel traffic state augmentation methods. After
that, the second module fine-tunes the model derived by the first module for
some specific key intersections. More details about each step above will be
elaborated in the following sections.
### 4.2 Agent Design and Junction Matrix
State: Intersections may possess different numbers of lanes, which results in
state spaces of different dimensions when features are recorded in lane units. As
discussed in Section 3, it is observed that there are only eight valid
movement signals in a $4$-way intersection, regardless of the number of lanes.
Inspired by this observation, we propose to represent the information of an
intersection at time $t$ as a junction matrix denoted as
${\bm{J}}_{t}=\left[{\bm{m}}_{1}^{t},{\bm{m}}_{2}^{t},\cdots,{\bm{m}}_{8}^{t}\right]^{T},$
(1)
where $\left[\cdot\right]^{T}$ stands for the transpose of the enclosed matrix
while vector ${\bm{m}}_{i}^{t}$ of length eight represents information
extracted from the $i$-th movement at time $t$ for $i=1,2,\cdots,8$. More
specifically, ${\bm{m}}_{i}^{t}$ encompasses three components: traffic
characteristics, movement characteristics, and traffic signal characteristics.
The traffic characteristics quantify the congestion level of the movement
using parameters such as the average traffic flow $F^{i,t}$, the maximum
occupancy $O^{i,t}_{max}$, and the average occupancy $O^{i,t}_{mean}$ between
two consecutive actions. In addition, the movement characteristics provide
specific details about the movement itself, including the direction
($I^{i}_{s}$) indicating whether it is a straight movement or not and the
number of lanes ($L_{i}$) it occupies. Finally, the traffic signal
characteristics comprise three binary parameters, namely $I^{i,t}_{cg}$,
$I^{i,t}_{ng}$ and $I^{i,t}_{mg}$. These three parameters indicate whether the
movement signal is currently green or not, whether it will be green in the
next signal phase or not, and whether the current green duration has reached
the minimum duration or not, respectively. These eight features can be readily
obtained, making this design suitable for practical deployment. Therefore,
vector ${\bm{m}}^{t}_{i}$ is defined as follows:
${\bm{m}}_{i}^{t}=\left[F^{i,t},O^{i,t}_{max},O^{i,t}_{mean},I^{i}_{s},L_{i},I^{i,t}_{cg},I^{i,t}_{ng},I^{i,t}_{mg}\right]^{T}.$
(2)
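For concreteness, the following minimal NumPy sketch (not from the paper's codebase; the feature values and function names are illustrative assumptions) assembles a movement vector of Eq. (2) and the junction matrix of Eq. (1):

```python
import numpy as np

def movement_vector(flow, occ_max, occ_mean, is_straight, n_lanes,
                    is_green, next_green, min_green_reached):
    """The eight features of one movement, in the order of Eq. (2)."""
    return np.array([flow, occ_max, occ_mean, float(is_straight),
                     float(n_lanes), float(is_green), float(next_green),
                     float(min_green_reached)], dtype=np.float32)

# Junction matrix J_t of Eq. (1): one row per movement M_1, ..., M_8.
J_t = np.stack([movement_vector(0.3, 0.8, 0.5, True, 3, 1, 0, 1)
                for _ in range(8)])          # shape (8, 8)
```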
Figure 3: (a) A $3$-way intersection with (b) its junction matrix with zero
padding.
It has been observed that ${\bm{J}}_{t}$ at one time instance is not
sufficient to provide a comprehensive understanding of the traffic dynamics
for RL-based signal light controllers. To circumvent this obstacle, we propose
to incorporate temporal traffic information by exploiting the $K$ latest
observations, which enables the agent to better capture traffic patterns and
trends in traffic behaviors. As a result, the agent can more effectively adapt
its decision-making process in response to evolving traffic conditions.
Mathematically, we define the state ${\bm{S}}_{t}\in\mathbb{R}^{K\times
8\times 8}$ of the proposed agent at time $t$ as shown in Eq. (3):
${\bm{S}}_{t}=\left[{\bm{J}}_{t-K+1},{\bm{J}}_{t-K+2},\cdots,{\bm{J}}_{t}\right].$
(3)
Finally, it is worth pointing out that zero padding is employed when the
number of movements at an intersection is less than $8$, e.g. a $3$-way
intersection. For instance, Fig. 3a shows a common $3$-way intersection in
which only E, EL, W, and SL movement signals are in use. As shown in Fig. 3b,
zero padding is applied to the rows corresponding to the $4$ unused movement
signals, keeping the matrix size identical to that of a $4$-way
intersection.
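A hedged sketch of the state construction of Eq. (3), assuming for simplicity that the padded rows are the trailing ones (the paper pads exactly the rows of the unused signals):

```python
from collections import deque
import numpy as np

K = 8                                        # length of the history window
history = deque(maxlen=K)

def pad_junction(J, n_movements=8):
    """Zero-pad rows of unused movement signals (e.g. a 3-way intersection)."""
    out = np.zeros((n_movements, J.shape[1]), dtype=np.float32)
    out[:J.shape[0]] = J
    return out

def build_state(J_t):
    """Stack the K latest junction matrices into S_t of shape (K, 8, 8)."""
    history.append(pad_junction(J_t))
    while len(history) < K:                  # bootstrap at episode start
        history.appendleft(np.zeros((8, 8), dtype=np.float32))
    return np.stack(list(history))           # Eq. (3)
```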
Action: A realistic and implementable action design has to take into account
the safety of all traffic participants. While the “choose next phase” action
design [10, 18, 12] can significantly improve intersection efficiency, it
disrupts the original cyclic sequence of signal lights, thereby compromising
driver safety. In sharp contrast, this work adopts the “keep or change” action
design [23, 25, 9]. This design adheres to the concept of cycles,
executing each phase sequentially (e.g., Phase 1, Phase 2, Phase 3, Phase 4,
Phase 1, Phase 2, and so on). The agent determines whether to keep the current
phase or change to the next phase based on the state ${\bm{S}}_{t}$. Due to
the availability of current phase information $I_{cg}$ and next phase
information $I_{ng}$ in the junction matrix, it is later shown that this
action design exhibits excellent scalability for intersections with different
signal plans.
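The following minimal sketch illustrates the keep-or-change semantics; the minimum-green guard mirrors the $I^{i,t}_{mg}$ feature, and its threshold value is an assumption:

```python
N_PHASES = 4          # cyclic plan: Phase 1 -> 2 -> 3 -> 4 -> 1 -> ...

def apply_action(phase_idx, action, green_time, min_green=5):
    """Binary 'keep or change' action: 0 keeps the current phase, 1 advances
    to the next phase in the fixed cyclic order (guarded by a minimum green
    duration, an illustrative assumption)."""
    if action == 1 and green_time >= min_green:
        return (phase_idx + 1) % N_PHASES
    return phase_idx
```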
Reward: The negative of the total queue length, summed over the queue lengths
$q_{i}$ of all movements, is adopted as the reward. Metrics such as waiting
time, travel time, and delay are not used since it is impractical to obtain
them from real-world traffic detection devices. Consequently, the proposed
reward function is defined as follows:
$r_{t}=\frac{\left(-\displaystyle\sum_{i=1}^{8}{q_{i}}\right)-\mu}{\sigma+\epsilon},$
(4)
where $\epsilon$ is a small number to prevent division by zero. Furthermore,
$\mu$ and $\sigma$ represent the mean and standard deviation of the first
$R-1$ rewards, respectively. Mathematically, $\mu$ and $\sigma$ take the
following form:
$\mu=\frac{1}{R-1}\displaystyle\sum_{j=1}^{R-1}{r_{j}},$ (5)
$\sigma=\sqrt{\frac{1}{R-1}\displaystyle\sum_{j=1}^{R-1}{(r_{j}-\mu)^{2}}}.$ (6)
The reward is normalized to facilitate a faster training process.
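A minimal sketch of Eqs. (4)-(6), collecting the first $R-1$ raw rewards to fix $\mu$ and $\sigma$; the default value of $R$ is an assumption:

```python
import numpy as np

class NormalizedQueueReward:
    """Negative total queue length, normalized by the mean and standard
    deviation of the first R-1 raw rewards (Eqs. (4)-(6))."""

    def __init__(self, R=100, eps=1e-8):
        self.R, self.eps, self.raw = R, eps, []

    def __call__(self, queue_lengths):       # one queue length q_i per movement
        r = -float(np.sum(queue_lengths))    # raw reward before normalization
        if len(self.raw) < self.R - 1:       # collect the first R-1 rewards
            self.raw.append(r)
        mu = float(np.mean(self.raw))        # Eq. (5)
        sigma = float(np.std(self.raw))      # Eq. (6); divides by len(raw)
        return (r - mu) / (sigma + self.eps) # Eq. (4)
```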
Figure 4: Illustration of three traffic state augmentation methods applied to both $4$-way and $3$-way intersections.
Figure 5: Detailed steps in the traffic state augmentation block.
### 4.3 Traffic State Augmentation
In recent studies, data augmentation techniques have demonstrated their
effectiveness in enhancing the generalization capabilities of RL models [38,
39, 40]. By training on a more diverse set of augmented samples, RL agents can
improve their capability of handling unseen tasks. The most common data
augmentation techniques reported in the literature are Gaussian noise
addition and masking. In this work, we propose three additional novel traffic
state augmentation methods specifically designed for RL-based TSC tasks,
namely movement shuffling, change of lane numbers and traffic flow scaling as
illustrated in Fig. 4.
Fig. 5 provides a detailed illustration of the step-by-step process involving
the traffic state ${\bm{S}}_{t}$ as it undergoes the five data augmentation
methods, leading to the final output $\tilde{\bm{S}}_{t}$. During training, a
minibatch of data is randomly sampled from the replay buffer or recently
augmented trajectories. While augmentation across the minibatch is stochastic,
it is consistent across the stacked frames. It is also worth noting that these
five traffic state augmentation methods can be directly applied to the
junction matrix ${\bm{J}}$, enabling the agent to learn and adapt to different
traffic scenarios and intersection structures.
Movement Shuffling: This method shuffles the rows of the junction matrix,
simulating rotations and flips of the same intersection. Intuitively,
shuffling can be interpreted as an effective rotation of the original
intersection, as depicted in Fig. 4. The assumption behind shuffling
is that the action taken by the agent should not change after the rotation of
the intersection. Mathematically, the movement shuffling operation can be
modeled as follows:
$\tilde{\bm{J}}_{t}^{\textrm{ms}}={\bm{P}}\cdot{\bm{J}}_{t},$ (7)
where ${\bm{J}}_{t}$ and $\tilde{\bm{J}}_{t}^{\textrm{ms}}$ represent the
original and augmented junction matrices, respectively. Furthermore,
${\bm{P}}$ is a permutation matrix designed to exchange rows in
${\bm{J}}_{t}$. After “movement shuffling” is applied to all junction matrices
in ${\bm{S}}_{t}$ in Eq. (3), the new state can be represented as
$\tilde{\bm{S}}_{t}^{\textrm{ms}}$:
$\tilde{\bm{S}}_{t}^{\textrm{ms}}=\left[\tilde{\bm{J}}_{t-K+1}^{\textrm{ms}},\tilde{\bm{J}}_{t-K+2}^{\textrm{ms}},\cdots,\tilde{\bm{J}}_{t}^{\textrm{ms}}\right].$
(8)
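Eqs. (7)-(8) amount to one shared row permutation applied across all stacked frames; a minimal sketch:

```python
import numpy as np

def movement_shuffle(S, rng=None):
    """Apply one shared row permutation P (Eq. (7)) to every junction matrix
    in the stacked state S of shape (K, 8, 8), yielding Eq. (8)."""
    if rng is None:
        rng = np.random.default_rng()
    perm = rng.permutation(S.shape[1])       # the same P across all K frames
    return S[:, perm, :]
```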
Change of Lane Numbers: To expose the agent to a wider range of road structure
combinations, we propose to randomly modify the number of lanes $L_{i}$ in
each movement vector
$\tilde{\bm{m}}_{i}^{\textrm{ms}}\in\tilde{\bm{J}}_{t}^{\textrm{ms}}$. This
augmentation method allows the agent to encounter various lane configurations
during training, enhancing its capability of handling diverse intersection
layouts. In addition, the traffic characteristics (e.g., traffic flow and
occupancy) are multiplied by the corresponding coefficients to maintain
relative values.
An example of this traffic state augmentation method is shown in Fig. 4. In
this example, a $4$-way intersection is modified to have two lanes in all
directions by reducing the number of lanes. Similarly, for the $3$-way
intersection, we increase the number of west- and east-approaching lanes to
$4$. While modifying
the number of lanes, we maintain the relative number of vehicles in each
direction. Thus, it is reasonable to assume that the actions taken by the
agent before and after such modifications should be identical. Mathematically,
the operation to change the lane number can be expressed as:
$\tilde{\bm{m}}_{i}^{\textrm{cln}}=f(\tilde{\bm{m}}_{i}^{\textrm{ms}},\tilde{L}_{i}),\
i=1,2,\cdots,8,$ (9)
where $L_{i}$ is the original number of lanes and $\tilde{L}_{i}$ is a
uniformly distributed random variable representing the modified number of
lanes.
Furthermore, $\tilde{\bm{m}}_{i}^{\textrm{ms}}$ stands for an individual
movement vector within the junction matrix after the “movement shuffling”
method whereas $f\left(\cdot,\cdot\right)$ denotes the function that adjusts
the traffic characteristics. Specifically, $f\left(\cdot,\cdot\right)$ can
take the following form:
$f(\tilde{\bm{m}}_{i}^{\textrm{ms}},\tilde{L}_{i})=\left\\{\begin{aligned}
&\left[\tilde{\bm{m}}_{i}^{\textrm{ms}}\right]_{k}\times\left(\frac{\tilde{L}_{i}}{L_{i}}\right),&\text{if}\
k=1,2,3,5\\\
&\left[\tilde{\bm{m}}_{i}^{\textrm{ms}}\right]_{k},&\text{otherwise}\end{aligned}\right.$
(10)
where $\left[\tilde{\bm{m}}_{i}^{\textrm{ms}}\right]_{k}$ represents the
$k$-th entry of $\tilde{\bm{m}}_{i}^{\textrm{ms}}$. Note that $f$ is designed
to ensure that the traffic characteristics $F$, $O_{max}$, and $O_{mean}$
maintain relative values regardless of the variations in the lane
configuration. After applying the “change of lane numbers” method to all
junction matrices in $\tilde{\bm{S}}_{t}^{\textrm{ms}}$, the new state can be
represented as $\tilde{\bm{S}}_{t}^{\textrm{cln}}$:
$\tilde{\bm{S}}_{t}^{\textrm{cln}}=\left[\tilde{\bm{J}}_{t-K+1}^{\textrm{cln}},\tilde{\bm{J}}_{t-K+2}^{\textrm{cln}},\cdots,\tilde{\bm{J}}_{t}^{\textrm{cln}}\right].$
(11)
where
$\tilde{\bm{J}}_{t}^{\textrm{cln}}=\left[\tilde{\bm{m}}_{1}^{\textrm{cln}},\tilde{\bm{m}}_{2}^{\textrm{cln}},\cdots,\tilde{\bm{m}}_{8}^{\textrm{cln}}\right]^{T}.$
(12)
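A sketch of Eqs. (9)-(10); the lane-count range is an assumption, and note that rescaling entry $k=5$ (the lane count itself) by $\tilde{L}_{i}/L_{i}$ simply replaces it with $\tilde{L}_{i}$:

```python
import numpy as np

SCALED_IDX = [0, 1, 2, 4]    # F, O_max, O_mean and the lane count L (k = 1,2,3,5)

def change_lane_numbers(S, max_lanes=5, rng=None):
    """Draw a random lane count per movement and rescale the entries
    k = 1,2,3,5 by L_tilde / L as in Eqs. (9)-(10); S has shape (K, 8, 8)."""
    if rng is None:
        rng = np.random.default_rng()
    S = S.copy()
    L = S[0, :, 4].copy()                            # current lane counts
    L_tilde = rng.integers(1, max_lanes + 1, size=S.shape[1]).astype(np.float32)
    ratio = np.where(L > 0, L_tilde / np.maximum(L, 1e-8), 0.0)  # 0 on padded rows
    for k in SCALED_IDX:
        S[:, :, k] *= ratio                          # same ratio across all frames
    return S
```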
Traffic Flow Scaling: To shift the agent’s focus from absolute car numbers to
relative car distributions, we introduce a flow scaling factor. By multiplying
the flow and occupancy values in the junction matrix with a uniformly
distributed random number $\alpha$, we can create variations in the relative
traffic volume for each movement. Notably, $\alpha$ remains consistent across
all movements within the same traffic state, ensuring that the relative
vehicle proportions between movements remain unchanged. This augmentation
method encourages the agent to prioritize the relative significance of
different movements, thereby reducing its reliance on absolute values. It is
worth noting that traffic flow scaling does not alter the number of lanes in
each movement.
Fig. 4 illustrates two plausible scaling methods by proportionally increasing
or decreasing the number of vehicles on each approach or adding small changes
to the traffic volume in two scenarios. Mathematically, the flow scaling
operation can be defined as:
$\tilde{\bm{m}}_{i}^{\textrm{tfs}}=g(\tilde{\bm{m}}_{i}^{\textrm{cln}},\alpha),\
i=1,2,\cdots,8,$ (13)
where $\alpha$ is the flow scaling factor. Furthermore,
$g\left(\cdot,\cdot\right)$ denotes a function that scales the flow and
occupancy values and takes the following form:
$g(\tilde{\bm{m}}_{i}^{\textrm{cln}},\alpha)=\left\\{\begin{aligned}
&\left[\tilde{\bm{m}}_{i}^{\textrm{cln}}\right]_{k}\times\alpha,&\text{if}\
k=1,2,3\\\
&\left[\tilde{\bm{m}}_{i}^{\textrm{cln}}\right]_{k},&\text{otherwise}\end{aligned}\right.$
(14)
After replacing each $\tilde{\bm{m}}_{i}^{\textrm{cln}}$ in Eq. (12) with
$\tilde{\bm{m}}_{i}^{\textrm{tfs}}$, the resulting state from the “traffic
flow scaling” method is denoted as $\tilde{\bm{S}}_{t}^{\textrm{tfs}}$.
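A sketch of Eqs. (13)-(14); the range of the uniform factor $\alpha$ is an assumption:

```python
import numpy as np

FLOW_IDX = [0, 1, 2]          # F, O_max, O_mean (k = 1, 2, 3 in Eq. (14))

def traffic_flow_scale(S, low=0.5, high=1.5, rng=None):
    """Multiply flow and occupancy entries by one shared factor alpha,
    identical for all movements and frames (Eq. (13))."""
    if rng is None:
        rng = np.random.default_rng()
    alpha = rng.uniform(low, high)
    S = S.copy()
    S[:, :, FLOW_IDX] *= alpha
    return S
```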
Gaussian Noise Addition: Gaussian noise is added directly to the junction
matrix to introduce randomness into the training data. The additive noise can
affect all components of the junction matrix, including traffic
characteristics, movement characteristics, and traffic signal characteristics.
This augmentation method allows the agent to adapt to noisy and uncertain
traffic conditions, improving its robustness during inference. Mathematically,
the Gaussian noise addition operation can be modeled as:
$\tilde{\bm{J}}_{t}^{\textrm{gna}}=\tilde{\bm{J}}_{t}^{\textrm{tfs}}+{\bm{\Psi}}_{t},$
(15)
where ${\bm{\Psi}}_{t}\sim\mathcal{N}({\bm{0}},{\bm{I}})$ denotes the Gaussian
noise matrix, and
$\tilde{\bm{J}}_{t}^{\textrm{tfs}}\in\tilde{\bm{S}}_{t}^{\textrm{tfs}}$
corresponds to the junction matrix after applying the “traffic flow scaling”
method. After applying “Gaussian noise addition”, the new state can be
represented as $\tilde{\bm{S}}_{t}^{\textrm{gna}}$.
Masking: To encourage the agent to learn the traffic flow changes, we randomly
set the values in the junction matrix to zero at a specific time instance. By
masking certain components of the junction matrix, we create situations where
the agent must rely on the information before and after the masked period to
infer traffic dynamics. This augmentation method promotes the agent’s
capability of understanding and responding to traffic fluctuations.
Finally, a novel state denoted as $\tilde{\bm{S}}_{t}$ is obtained after the
application of the five data augmentation methods discussed above.
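The remaining two operations and the composed pipeline can be sketched as follows; the masking rate is an assumption, the ordering follows Fig. 5, and the helper functions are the sketches given above:

```python
import numpy as np

def gaussian_noise(S, std=1.0, rng=None):
    """Additive Gaussian noise on every junction matrix (Eq. (15))."""
    if rng is None:
        rng = np.random.default_rng()
    return S + rng.normal(0.0, std, size=S.shape).astype(S.dtype)

def random_mask(S, p=0.1, rng=None):
    """Zero out randomly chosen time steps so the agent must infer the masked
    dynamics from the surrounding frames (masking rate p is an assumption)."""
    if rng is None:
        rng = np.random.default_rng()
    S = S.copy()
    S[rng.random(S.shape[0]) < p] = 0.0
    return S

def augment(S, rng=None):
    """Compose the five sketches above to produce S_tilde from S (Fig. 5)."""
    if rng is None:
        rng = np.random.default_rng()
    S = movement_shuffle(S, rng)
    S = change_lane_numbers(S, rng=rng)
    S = traffic_flow_scale(S, rng=rng)
    S = gaussian_noise(S, rng=rng)
    return random_mask(S, rng=rng)
```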
Figure 6: Three kinds of intersection feature extraction block (a) CNN-based
Structure; (b) RNN-based Structure; (c) Transformer-based Structure.
### 4.4 Intersection Feature Extraction
In this section, three neural network structures are utilized to extract
intersection information from the augmented traffic states, namely the
Convolutional Neural Network (CNN), the Recurrent Neural Network (RNN), and
the transformer.
CNN-based Structure: As shown in Fig. 6(a), a CNN-based structure equipped
with two 2D convolutional layers is utilized to extract time series
information within a road junction, resulting in a hidden representation
${\bm{C}}_{t}$ given as:
${\bm{C}}_{t}=\text{ReLU}\left(\mathbf{W}^{2d}_{2}\mathbf{W}^{2d}_{1}{\tilde{\bm{S}}_{t}}\right),$
(16)
where $\mathbf{W}^{2d}_{1}$ and $\mathbf{W}^{2d}_{2}$ represent the learnable
parameters of the two 2D convolutional blocks, respectively. More
specifically, the first convolutional block, $\mathbf{W}^{2d}_{1}$, extracts
information regarding the traffic movement, and subsequently, the second
convolutional block, $\mathbf{W}^{2d}_{2}$, captures information specific to
the junction based on the movement information. Furthermore,
$\text{ReLU}(\cdot)$ denotes the ReLU function.
Next, the resulting ${\bm{C}}_{t}$ is passed through a multilayer perceptron
(MLP) layer, producing a feature vector ${\bm{O}}_{t}$ as follows:
${\bm{O}}_{t}=\mathbf{W}_{u}{\bm{C}}_{t}+\mathbf{b}_{u},$ (17)
where $\mathbf{W}_{u}$ and $\mathbf{b}_{u}$ are learnable parameters of the
MLP layer.
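A possible PyTorch realization of Eqs. (16)-(17); the channel sizes and kernel shapes are illustrative assumptions, not taken from the paper:

```python
import torch
import torch.nn as nn

class CNNExtractor(nn.Module):
    """Two 2D convolutional blocks over the (K, 8, 8) state followed by an
    MLP head (Eqs. (16)-(17))."""

    def __init__(self, K=8, hidden=64, out_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(K, 32, kernel_size=(1, 8)),       # W1: per-movement features
            nn.Conv2d(32, hidden, kernel_size=(8, 1)),  # W2: junction-level features
            nn.ReLU(),                                  # Eq. (16)
        )
        self.head = nn.Linear(hidden, out_dim)          # Eq. (17)

    def forward(self, S):                               # S: (batch, K, 8, 8)
        C = self.conv(S).flatten(1)                     # (batch, hidden)
        return self.head(C)                             # O_t
```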
RNN-based Structure: In contrast to the CNN-based approach, the RNN-based
structure illustrated in Fig. 6(b) employs a parameter-sharing 1D
convolutional layer to extract information from each junction matrix. The 1D
convolutional layer operates on each junction matrix $\tilde{\bm{J}}_{t-K+i}$
for $i=1,2,\cdots,K$, which is the augmented state at a particular time step
within the history window. The resulting outputs, denoted by
$\bm{c}_{1},\bm{c}_{2},\ldots,\bm{c}_{K}$, capture the information from the
intersection at $K$ different time instances and can be expressed as:
$\bm{c}_{i}=\text{ReLU}\left(\mathbf{W}^{1d}_{2}\mathbf{W}^{1d}_{1}\tilde{\bm{J}}_{t-K+i}\right),\
i=1,2,\cdots,K,$ (18)
where $\mathbf{W}^{1d}_{1}$ and $\mathbf{W}^{1d}_{2}$ represent the learnable
parameters of the two 1D convolutional layers. These outputs are then fed into
the RNN module:
$\bm{h}_{i}=\tanh(\mathbf{W}_{x}\bm{c}_{i}+\bm{h}_{i-1}\mathbf{W}_{h}+\mathbf{b}_{h}),\
i=1,2,\cdots,K,$ (19)
where $\mathbf{W}_{x}$, $\mathbf{W}_{h}$, and $\mathbf{b}_{h}$ represent the
weights for the hidden layer in the RNN structure. The final hidden state
$\bm{h}_{K}$ derived from the last RNN output is used to calculate the
features of the entire intersection over a period of time, denoted as
${\bm{O}}_{t}$:
${\bm{O}}_{t}=\mathbf{W}_{v}\bm{h}_{K}+\mathbf{b}_{v},$ (20)
where $\mathbf{W}_{v}$ and $\mathbf{b}_{v}$ are the weights for the output
layer in the RNN structure.
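A possible PyTorch realization of Eqs. (18)-(20); the layer sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class RNNExtractor(nn.Module):
    """A weight-shared 1D convolutional encoder applied to each junction
    matrix, followed by a vanilla (tanh) RNN over the K time steps
    (Eqs. (18)-(20))."""

    def __init__(self, hidden=64, out_dim=64):
        super().__init__()
        self.conv = nn.Sequential(                      # shared across time steps
            nn.Conv1d(8, 32, kernel_size=1),            # W1: mix the 8 movement rows
            nn.Conv1d(32, hidden, kernel_size=8),       # W2: aggregate the feature axis
            nn.ReLU(),                                  # Eq. (18)
        )
        self.rnn = nn.RNN(hidden, hidden, batch_first=True)  # Eq. (19)
        self.head = nn.Linear(hidden, out_dim)               # Eq. (20)

    def forward(self, S):                               # S: (batch, K, 8, 8)
        B, K = S.shape[:2]
        c = self.conv(S.reshape(B * K, 8, 8)).reshape(B, K, -1)
        _, h_K = self.rnn(c)                            # final hidden state h_K
        return self.head(h_K.squeeze(0))                # O_t
```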
Transformer-based Structure: Fig. 6(c) shows the transformer-based approach.
We employ a weight-shared CNN network to extract features from the junction
matrix at each time step, as in Eq. (18). However, instead of using
an RNN block, we utilize a transformer encoder to capture temporal
dependencies in the sequence of features
$\tilde{\bm{C}}_{t}=[\bm{c}_{1},\bm{c}_{2},\ldots,\bm{c}_{K}]$. To aggregate
information across time steps, we prepend a learnable embedding denoted as
$\bm{c}_{\text{class}}$ to $\tilde{\bm{C}}_{t}$ as follows:
$\bm{C}_{t}=\left[\bm{c}_{\text{class}},\tilde{\bm{C}}_{t}\right]=\left[\bm{c}_{\text{class}},\bm{c}_{1},\bm{c}_{2},\ldots,\bm{c}_{K}\right].$
(21)
The output state of the transformer encoder, based on this modified input
sequence, serves as the traffic state representation $\bm{O}_{t}$. In the
transformer encoder block, self-attention is calculated as follows:
$\bm{Q}_{C}=\bm{C}_{t}\bm{W}_{Q},\bm{K}_{C}=\bm{C}_{t}\bm{W}_{K},\bm{V}_{C}=\bm{C}_{t}\bm{W}_{V},$
(22)
and
$\bm{Z}_{t}=\phi\left(\frac{\bm{Q}_{C}\bm{K}_{C}^{T}}{\sqrt{d}}\right)\bm{V}_{C},$
(23)
where $\bm{Q}_{C}$, $\bm{K}_{C}$, and $\bm{V}_{C}$ represent the projected
query, key and value features, respectively, while $\bm{W}_{Q}$, $\bm{W}_{K}$,
and $\bm{W}_{V}$ are the corresponding parameter matrices. In addition,
$\bm{Z}_{t}$ represents the output of self-attention, and $\phi(\cdot)$
denotes the softmax function. The final output $\bm{O}_{t}$ is the first row
of $\bm{Z}_{t}$, i.e., the output at the position of $\bm{c}_{\text{class}}$.
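A possible PyTorch realization of Eqs. (21)-(23), delegating the attention of Eqs. (22)-(23) to a standard transformer encoder; all sizes are assumptions, and `frame_encoder` is assumed to map an $(8, 8)$ junction matrix to `d_model` features:

```python
import torch
import torch.nn as nn

class TransformerExtractor(nn.Module):
    """Weight-shared per-frame encoder plus a learnable class token; the
    transformer-encoder output at the token position is O_t (Eqs. (21)-(23))."""

    def __init__(self, frame_encoder, d_model=64, nhead=4, nlayers=2):
        super().__init__()
        self.frame_encoder = frame_encoder
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))   # c_class
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=nlayers)

    def forward(self, S):                               # S: (batch, K, 8, 8)
        B, K = S.shape[:2]
        c = self.frame_encoder(S.reshape(B * K, 8, 8)).reshape(B, K, -1)
        c = torch.cat([self.cls.expand(B, -1, -1), c], dim=1)  # Eq. (21)
        z = self.encoder(c)                             # self-attention, Eqs. (22)-(23)
        return z[:, 0]                                  # class-token output -> O_t
```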
### 4.5 RL Training and Fine-tuning
The Proximal Policy Optimization (PPO) algorithm [41] is adopted to train our
UniTSA model. As depicted in Fig. 2, the agent gathers trajectories consisting
of observations, actions, and rewards during the interactions with diverse
traffic scenarios of different structures. These trajectories serve as the
basis for computing the policy loss and value loss utilized to update the
weights of the Actor and Critic networks.
The policy loss quantifies the difference between the current policy and the
updated policy derived from the collected trajectories. It encourages the
agent to increase the probabilities of actions that lead to higher rewards
while reducing the probabilities of actions that lead to lower rewards.
Mathematically, the policy loss can be formulated as:
$\mathcal{L}_{\text{pf}}(\theta)=\hat{\mathbb{E}}_{t}\left[\min\left(\rho_{t}A_{t},\text{clip}\left(\rho_{t},1-\epsilon,1+\epsilon\right)A_{t}\right)\right],$
(24)
where $\hat{\mathbb{E}}_{t}$ stands for the expectation operator and $\rho_{t}$
represents the ratio between the new policy $\pi_{\theta}(a_{t}|s_{t})$ and
the old policy $\pi_{\theta_{\text{old}}}(a_{t}|s_{t})$:
$\rho_{t}=\frac{\pi_{\theta}(a_{t}|s_{t})}{\pi_{\theta_{\text{old}}}(a_{t}|s_{t})}.$
(25)
Furthermore, $A_{t}=r_{t}+\gamma V(s_{t+1})-V(s_{t})$ denotes the advantage
function, where $r_{t}$ is the reward of Eq. (4), whereas the clip function
ensures stable policy updates.
The value loss measures the discrepancy between the estimated value function
and the actual rewards obtained during the interaction, driving the value
function to better approximate the expected cumulative rewards. The value loss
can be defined as:
can be defined as:
$\mathcal{L}_{\text{vf}}(\theta)=\hat{\mathbb{E}}_{t}\left[\left(V_{\theta}(s_{t})-\hat{R}_{t}\right)^{2}\right],$
(26)
where $V_{\theta}(s_{t})$ is the estimated value function under policy
$\theta$, and $\hat{R}_{t}=\sum_{k=0}^{\infty}{\gamma^{k}r_{t+k}}$ denotes the
reward-to-go. Once the policy loss and the value loss have been calculated,
the final objective function is expressed as Eq. (27):
$\mathcal{L}(\theta)=-\mathcal{L}_{\text{pf}}(\theta)+\lambda\mathcal{L}_{\text{vf}}(\theta),$
(27)
where $\lambda$ is the coefficient of value loss.
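A minimal sketch of Eqs. (24)-(27); the entropy bonus used by many PPO implementations is omitted, matching the objective stated above:

```python
import torch

def ppo_objective(new_logp, old_logp, advantages, values, returns,
                  clip_eps=0.2, vf_coef=0.9):
    """Clipped surrogate plus value loss (Eqs. (24)-(27)); inputs are
    tensors of shape (batch,)."""
    rho = torch.exp(new_logp - old_logp)                     # ratio, Eq. (25)
    surrogate = torch.minimum(
        rho * advantages,
        torch.clamp(rho, 1.0 - clip_eps, 1.0 + clip_eps) * advantages)
    policy_term = surrogate.mean()                           # L_pf, Eq. (24)
    value_term = ((values - returns) ** 2).mean()            # L_vf, Eq. (26)
    return -policy_term + vf_coef * value_term               # Eq. (27)
```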
This work proposes an effective universal model to control the traffic signals
by optimizing this objective function through PPO. Furthermore, since certain
intersections are more critical than others in practice, LoRA [31] is adopted
in the proposed model to fine-tune the performance on these important
intersections. More specifically, the LoRA modules are added to the weights of
dense layers in the Actor and Critic networks as shown in Fig. 2. During the
fine-tuning process, the original pretrained weights are kept constant while
only the LoRA modules are updated. This design allows the model to adapt to
intersection-specific features without significantly increasing the number
of parameters, while maintaining training efficiency.
Figure 7: All intersection topologies with their available phases, used for
training and testing.
Let us consider a pretrained weight matrix ${\bm{W}}\in\mathbb{R}^{n\times m}$
in the network accompanied by a LoRA module
$\Delta{\bm{W}}={\bm{W}}_{A}{\bm{W}}_{B}^{T}$, where
${\bm{W}}_{A}\in\mathbb{R}^{n\times d}$, ${\bm{W}}_{B}\in\mathbb{R}^{m\times
d}$ with $d\ll n$. The output of this layer can be obtained as:
$z={\bm{W}}x+\Delta{\bm{W}}x={\bm{W}}x+\frac{\alpha}{d}{\bm{W}}_{A}{\bm{W}}_{B}^{T}x,$
(28)
where ${\bm{W}}_{A}$ and ${\bm{W}}_{B}$ are initialized as a zero matrix and a
zero-mean Gaussian matrix, respectively. Furthermore, $\alpha$ is a constant
scale hyperparameter whereas $d$ is the rank of the LoRA module.
Through the combination of RL training using PPO and fine-tuning with LoRA,
the proposed universal model can effectively address the challenges posed by
important intersections of varying structures.
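A sketch of a LoRA-augmented dense layer implementing Eq. (28); the initialization scale for ${\bm{W}}_{B}$ is an assumption:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen dense layer plus a rank-d update (alpha/d) W_A W_B^T x as in
    Eq. (28); W_A starts at zero, so fine-tuning starts exactly from the
    pretrained behaviour."""

    def __init__(self, base: nn.Linear, rank=8, alpha=1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                 # keep pretrained weights fixed
        n, m = base.out_features, base.in_features
        self.W_A = nn.Parameter(torch.zeros(n, rank))          # zero init
        self.W_B = nn.Parameter(torch.randn(m, rank) * 0.01)   # Gaussian init
        self.scale = alpha / rank

    def forward(self, x):
        delta = (x @ self.W_B) @ self.W_A.T         # low-rank path W_A W_B^T x
        return self.base(x) + self.scale * delta
```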
Table 1: Hyperparameter Setting
Hyper-parameter | Value
---|---
Learning rate | $0.0001$
Trajectory memory size | $3000$
Clipping range $\epsilon$ | $0.2$
Discount factor $\gamma$ | $0.99$
Value function coefficient $\lambda$ | $0.9$
Scale hyperparameter $\alpha$ | $1$
Rank of LoRA module | $8$
## 5 Experiments
### 5.1 Experiment Settings
In this section, extensive experiments are conducted to validate the proposed
model using the SUMO software package [32]. SUMO is an open-source microscopic
traffic simulation tool designed for handling large networks. It provides the
Traffic Control Interface (TraCI) for controlling traffic lights and
retrieving traffic condition information for intersections. We calculate the
flow and occupancy of each movement by analyzing the positions and
trajectories of vehicles on the road. It is important to note that, in order
to simulate real-world detection conditions, we consider only vehicles within
$150$ m of the intersection, rather than all vehicles along the entire road.
Moreover, a green light was followed by a yellow light of $3$ s before
transitioning to a red light to ensure driver safety. The waiting time per
vehicle was used as a performance metric to evaluate the effectiveness of the
different methods. A low waiting time indicates that vehicles spent less time
passing through the intersection.
We utilized the PPO implementation provided by the Stable Baselines3 library
[42]. To accelerate training, we employed $30$
parallel processes, and the total number of training environment steps was set
to $10$M. The state representation included the previous $K=8$ snapshots of
the junction matrix. The interval between two consecutive actions was $5$ s.
The hyper-parameters are configured as shown in Table 1. Furthermore, the
actor and critic networks were designed as two-layer fully connected networks.
The layer input sizes were $\\{64,32\\}$, and the corresponding output sizes
were $\\{32,2\\}$ and $\\{32,1\\}$ for the actor and critic networks,
respectively.
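A hedged sketch of the corresponding Stable Baselines3 setup; `make_tsc_env` is a hypothetical factory returning a Gym-compatible SUMO/TraCI environment, and mapping the trajectory memory of Table 1 to a per-worker `n_steps` is an interpretation, not the paper's stated configuration:

```python
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.vec_env import SubprocVecEnv

# Hypothetical environment factory for one training intersection.
env = make_vec_env(make_tsc_env, n_envs=30, vec_env_cls=SubprocVecEnv)

model = PPO(
    "MlpPolicy", env,
    learning_rate=1e-4,      # Table 1
    gamma=0.99,              # Table 1
    clip_range=0.2,          # Table 1
    vf_coef=0.9,             # Table 1
    n_steps=100,             # assumption: 3000-step trajectory memory / 30 workers
    policy_kwargs=dict(net_arch=dict(pi=[64, 32], vf=[64, 32])),
)
model.learn(total_timesteps=10_000_000)  # 10M environment steps
```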
Table 2: All intersection configurations
| Training Dataset | | Test Dataset
---|---|---|---
Intersection ID | INT-1 | INT-2 | INT-3 | INT-4 | INT-5 | INT-6 | INT-7 | INT-8 | | INT-9 | INT-10 | INT-11 | INT-12
roads | 4 | 4 | 4 | 4 | 4 | 4 | 3 | 3 | | 4 | 4 | 3 | 3
lanes per road | (3,3,3,3) | (3,3,3,3) | (3,3,3,3) | (3,4,4,5) | (3,4,4,5) | (3,4,4,5) | (3,3,3) | (3,3,3) | | (3,4,3,4) | (3,3,3,3) | (4,3,3) | (2,3,2)
phases | 4 | 4 | 2 | 4 | 4 | 6 | 3 | 3 | | 4 | 5 | 3 | 3
### 5.2 Datasets
This study considers intersections with diverse structures, aiming to employ
one single universal model to predict actions for all intersections.
Specifically, $12$ intersections of varying numbers of phases, lanes on each
road, and approaching roads (i.e., $3$-way or $4$-way intersections) are
constructed and used for experiments. Among these $12$ intersections, eight
are used for training, while the remaining four are reserved for testing. The
topologies and phases of the $12$ intersections are depicted in Fig. 7, and
Table 2 provides a summary on their configurations. For instance, INT-4 shown
in Fig. 7 consists of four bi-directional approaching roads with three lanes
in the north-south direction, five lanes in the west-east direction, and four
lanes in each of the other two directions. INT-4 includes four phases, each
combining two different movement signals. Consequently, the configurations for
INT-4 in Table 2 indicate the presence of four roads, lanes per road specified
as $(3,4,4,5)$ in clockwise order, and four phases.
There are three different intersection topologies in the training dataset.
INT-1, INT-2 and INT-3 represent a regular $4$-way intersection scenario, with
each road consisting of three lanes; INT-4, INT-5 and INT-6 represent a large
$4$-way intersection scenario, featuring up to five lanes per road; INT-7 and
INT-8 depict the $3$-way intersection scenario. For the
intersections with the same topology, we generate new intersections by
altering the sequence of phases or the number of phases. For instance, INT-1
and INT-2 have identical configurations, but the sequence of phases differs.
Similarly, INT-1 and INT-3 vary in terms of the number of phases.
To assess the performance of the proposed model on unseen intersections, four
testing scenarios are formed, namely INT-9, INT-10, INT-11 and INT-12. For
instance, INT-9 and INT-10 also represent $4$-way intersections, but INT-9
features different lane configurations compared to the training set, while
INT-10 differs in the number and combination of phases from the intersections
in the training set. INT-11 modifies the lane configuration and phases of
INT-8. Finally, INT-12 simulates real-world traffic within the city of
Ingolstadt, Germany [43].
In addition to considering intersections with different structures, we
generate $100$ unique routes for each intersection. Three-quarters of these
routes are utilized for training, while the remaining quarter is reserved for
evaluation. Each route has a duration of $30,000$ seconds, equivalent to
approximately $8$ hours.
### 5.3 Compared Methods
To evaluate the performance of the proposed UniTSA, we compare the resulting
universal model with several classic and state-of-the-art RL-based methods for
TSC.
FixTime [2]: Fixed-time control utilizes a predetermined cycle and phase
duration plan, which is widely used in situations with steady traffic flow. We
consider two versions of FixTime, FixTime-30 (Fix-30) and FixTime-40 (Fix-40).
These variants correspond to fixed-time signal control plans where each phase
has a duration of $30$ seconds and $40$ seconds, respectively.
Webster [3]: The Webster method determines the cycle length and phase split
based on traffic volume during a specific period. It has been proven that when
the traffic is uniform, the Webster method minimizes the travel time of all
vehicles passing the intersection or maximizes the intersection capacity [36].
Additionally, the Webster method can be adapted for real-time applications.
For fairness, we employ the Webster method to adjust the traffic lights based
on real-time traffic in this study.
SOTL [4]: Self-Organizing Traffic Light Control (SOTL) is an actuated signal
control method that dynamically adjusts signal durations based on a manually
determined threshold for the number of waiting vehicles. In this experiment,
we set the threshold based on [44].
MPLight [10]: MPLight incorporates the FRAP structure proposed in [9] and a
pressure-based reward mechanism inspired by [45]. Furthermore, MPLight employs
a sharing-parameter multilayer perceptron (MLP) to enhance adaptability across
different intersection configurations.
AttendLight [18]: This method adopts the attention mechanism to train a
universal model capable of handling intersections with diverse structures and
traffic flow distributions. It employs two attention models: the first
attention model addresses variations in the number of roads and lanes, while
the second attention model enables decision-making across intersections with
different numbers of phases.
In addition to the baseline methods, we also consider several variations of
our method:
UniTSA (Single): This method focuses on training the model within a single
environment. It utilizes the agent design described in Section 4.2 and employs
an RNN-based structure for extracting information from the traffic state.
UniTSA (Multi): In contrast to UniTSA (Single), this method trains the model
across multiple environments simultaneously. We explore the performance of
UniTSA (Multi) using various neural network designs, as discussed in Section
4.4, including UniTSA (Multi+CNN), UniTSA (Multi+RNN), and UniTSA
(Multi+Trans).
UniTSA (Multi+TSA): This variant of UniTSA enhances the training process by
incorporating five traffic state augmentation methods into UniTSA (Multi). It
results in UniTSA (Multi+CNN+TSA), UniTSA (Multi+RNN+TSA), and UniTSA
(Multi+Trans+TSA).
Table 3: Quantitative results (average waiting time per vehicle) of training intersections for universal models. A lower value indicates better performance and the lowest values are highlighted in bold.
 | INT-1 | INT-2 | INT-3 | INT-4 | INT-5 | INT-6 | INT-7 | INT-8
---|---|---|---|---|---|---|---|---
Fix-30 [2] | 39.458 | 38.862 | 8.323 | 35.123 | 35.031 | 51.923 | 18.920 | 18.595
Fix-40 [2] | 50.696 | 52.108 | 10.667 | 44.888 | 45.540 | 61.743 | 23.411 | 24.759
Webster [3] | 26.466 | 26.889 | 5.751 | 25.413 | 24.541 | 40.128 | 12.755 | 13.574
SOTL [4] | 16.048 | 16.561 | 2.764 | 28.169 | 27.902 | 27.404 | 8.639 | 7.925
MPLight [10] | 19.111 | 14.659 | 4.469 | 16.067 | 19.925 | 19.115 | 6.654 | 7.523
AttendLight [18] | 16.483 | 13.893 | 3.860 | 16.903 | 18.915 | 20.795 | 6.532 | 8.104
UniTSA (Multi+CNN) | 13.776 | 13.580 | 3.265 | 14.790 | 15.437 | 18.751 | 6.592 | 6.894
UniTSA (Multi+RNN) | 13.692 | 13.437 | 3.495 | 15.198 | 14.896 | 18.612 | 6.393 | 6.625
UniTSA (Multi+Trans) | 14.976 | 20.459 | 2.811 | 16.489 | 16.481 | 23.793 | 7.552 | 7.175
UniTSA (Multi+CNN+TSA) | 14.071 | 14.150 | 3.036 | 15.990 | 16.472 | 24.383 | 6.329 | 6.328
UniTSA (Multi+RNN+TSA) | 13.450 | 13.578 | 3.007 | 14.470 | 14.462 | 18.689 | 6.242 | 6.164
UniTSA (Multi+Trans+TSA) | 13.335 | 13.314 | 3.311 | 14.648 | 14.566 | 18.579 | 6.643 | 6.571
### 5.4 Results of the training intersections
In this section, the performance of the proposed UniTSA is compared against
that derived from several existing approaches, including the FixTime approach,
Webster model, SOTL, MPLight, and AttendLight, on the training intersections.
In particular, UniTSA models of different network structures were examined.
Table 3 shows the average waiting time per vehicle achieved by UniTSA and
other baseline methods on the training intersections. It is clear that the RL-
based approaches demonstrated superior performance as compared to the
conventional approaches in most intersection scenarios. Although SOTL
achieved the shortest waiting time at INT-3, it requires manually defined
thresholds for different environments, limiting its generalization in large-
scale scenarios.
Among the RL-based universal methods, namely MPLight, AttendLight and UniTSA,
UniTSA demonstrated significant performance improvements as compared to other
RL-based methods. On average, UniTSA achieved $15\%$ and $12\%$ performance
improvement over MPLight and AttendLight, respectively, across the eight
intersections evaluated. When compared to MPLight, the proposed method
incorporates not only parameter sharing techniques before replacing the MLP
with RNN or Transformer block to capture the temporal information of the
traffic state. In contrast to AttendLight, UniTSA simplifies the state design
by representing the traffic state at a specific time instance using the
junction matrix. Furthermore, five methods of traffic state enhancement are
introduced, allowing the agent to observe a wider variety of traffic
intersection states during training, which contributes to the improved
performance of our model.
Finally, we explored the impact of different network structures and traffic
state augmentation. Inspection of Table 3 reveals that an RNN-based structure
yielded superior results compared to a CNN-based structure. Furthermore, in
the absence of traffic state augmentation, UniTSA (Multi+CNN) and UniTSA
(Multi+RNN) outperformed UniTSA (Multi+Trans) across most intersections, which
is rather surprising. This can be attributed to the fact that the Transformer-
based approach necessitates a larger volume of training data. However, in the
presence of traffic state augmentation, the model can interact with a broader
range of intersections with varying structures. As a result, UniTSA
(Multi+Trans+TSA) outperformed UniTSA (Multi+RNN) and UniTSA (Multi+RNN+TSA)
in many scenarios.
Table 4: Quantitative results of test intersections for universal models.
 | INT-9 | INT-10 | INT-11 | INT-12
---|---|---|---|---
Fix-30 | 40.341 | 38.360 | 17.136 | 17.440
Fix-40 | 60.043 | 50.580 | 23.504 | 20.570
Webster | 26.191 | 27.766 | 11.981 | 13.280
SOTL | 23.031 | 28.066 | 7.452 | 8.070
MPLight | 23.674 | 21.475 | 8.447 | 15.047
AttendLight | 18.080 | 18.501 | 7.393 | 12.982
UniTSA (Multi+CNN) | 17.877 | 18.244 | 7.063 | 12.820
UniTSA (Multi+RNN) | 16.771 | 15.450 | 6.807 | 10.140
UniTSA (Multi+Trans) | 17.640 | 16.269 | 6.598 | 9.630
UniTSA (Multi+CNN+TSA) | 13.075 | 14.245 | 6.794 | 7.550
UniTSA (Multi+RNN+TSA) | 12.677 | 14.208 | 5.914 | 6.730
UniTSA (Multi+Trans+TSA) | 13.054 | 16.186 | 5.983 | 6.190
### 5.5 Results of the test intersections
Next, we evaluate the key feature of UniTSA to see whether it can be utilized
for unseen intersections. Four intersections, namely INT-9, INT-10, INT-11 and
INT-12, are specifically used for the testing purposes. As discussed in
Section 5.2, these intersections differ from the scenarios in the training set
in terms of the number of lanes or the number of phases. Table 4 summarizes
the performance achieved by different methods, including baseline approaches
and various variants of our proposed UniTSA method. Consistent with the
results obtained on the training set, the RL-based algorithms significantly
outperformed the traditional traffic control algorithms.
Among the RL-based universal methods, UniTSA models demonstrated superior
performance across all test intersections as shown in Table 4. Among all the
methods, UniTSA (Multi+RNN+TSA) and UniTSA (Multi+Trans+TSA) excelled in terms
of reducing the average waiting time for vehicles passing through the
intersections. UniTSA (Multi+RNN+TSA) achieves an average waiting time
reduction of approximately $41.3\%$ and $32.9\%$ as compared to MPLight and
AttendLight, respectively. These results confirm the effectiveness of our
approach in optimizing TSC and enhancing traffic flow efficiency.
Notably, incorporating traffic state augmentation techniques leads to improved
performance among the different UniTSA variants. For instance, UniTSA
(Multi+CNN+TSA), UniTSA (Multi+RNN+TSA), and UniTSA (Multi+Trans+TSA) exhibit
enhancements of $23.4\%$, $19.8\%$, and $17.9\%$, respectively, when compared
to UniTSA (Multi+CNN), UniTSA (Multi+RNN), and UniTSA (Multi+Trans). This
improvement can be attributed to the inclusion of a greater variety of
intersection scenarios within the training data through traffic state
augmentation techniques, such as the “change of lane numbers” method, which
enables the generation of diverse combinations of lane configurations.
Figure 8: Cumulative rewards versus environment steps for different methods at the four test intersections. (a) INT-9. (b) INT-10. (c) INT-11. (d) INT-12.
Table 5: Quantitative results of fine-tuning in test intersections.
 | INT-9 | INT-10 | INT-11 | INT-12
---|---|---|---|---
UniTSA (Multi+RNN) | 16.771 | 15.450 | 6.807 | 10.140
UniTSA (Multi+RNN+TSA) | 12.677 | 14.208 | 5.914 | 6.730
1M Environment Steps
UniTSA (Single) | 23.683 | 14.863 | 8.417 | 5.024
UniTSA (Multi+RNN+TSA) + FT | 10.995 | 12.553 | 4.475 | 3.534
10M Environment Steps
UniTSA (Single) | 10.353 | 12.254 | 4.359 | 3.089
UniTSA (Multi+RNN+TSA) + FT | 10.292 | 11.081 | 4.153 | 3.110
### 5.6 Results of fine-tuning in test intersections
In practical scenarios, certain intersections require special attention due to
their significance. To address this, we begin with the universal model trained
by UniTSA (Multi+RNN+TSA). The RNN-based UniTSA is chosen because it
demonstrates superior performance across most intersections compared with
both the CNN-based and transformer-based structures. In comparison to UniTSA
(Single), which
is trained on a single scenario, the resulting model can quickly reach or even
surpass the performance of the single-environment model after only a few
training steps.
Fig. 8 shows the change of cumulative rewards over training steps for
different models in the test intersections. The green dashed line represents
the result of applying the universal model directly to new intersections
without any fine-tuning. Notably, the model already exhibits promising results
without any additional fine-tuning or transfer learning. The blue line
represents the model trained from scratch, whereas the orange line represents
the fine-tuned model based on the universal model. It is observed that the
single-
environment model converges at approximately $3$M training steps. In sharp
contrast, the fine-tuning model achieves comparable performance with only
around $1$M training steps, resulting in approximately $66\%$ reduction in
computation time while maintaining similar performance.
Table 5 provides a detailed analysis of the model’s performance after fine-
tuning. At $1$M training steps, the fine-tuning model demonstrated an average
performance improvement of $36\%$ as compared to UniTSA (Single) across the
four test intersections. Even after $10$M training steps, the fine-tuning
models continued to outperform the models trained from scratch by $3\%$. This
aspect is particularly appealing for large-scale deployments. For instance, in
a road network with over $1000$ junctions, it is possible to significantly
reduce the number of interactions with the environment while maintaining
comparable performance, thereby greatly enhancing training efficiency.
Figure 9: Comparative analysis of traffic state augmentation methods on the
selected training and test intersections (a) INT-1. (b) INT-7. (c) INT-9. (d)
INT-11.
### 5.7 Comparative analysis on traffic state augmentations
In this section, we analyze the effectiveness of traffic state augmentation
methods in improving the performance of the UniTSA (Multi+RNN) model. We
conducted experiments by applying different pairwise combinations of traffic
state augmentation techniques on both the training and test intersections. To
evaluate the impact of these methods, we propose the Average Waiting Time
(AWT) ratio between the models with and without traffic state augmentations,
where an AWT ratio of less than $1$ indicates an improvement achieved by the
corresponding traffic state augmentation methods.
Fig. 9 showcases the outcomes of the comparative analysis on INT-1, INT-7,
INT-9, and INT-11 intersections, which represent common intersection
structures encountered in real-world scenarios. Each bar in the figure
corresponds to the average AWT ratio for a specific combination of traffic
state augmentations, and the error bars represent the $95\%$ confidence
interval. The blue dashed line represents the AWT ratio of UniTSA
(Multi+RNN+TSA), which employs all available traffic state augmentation
methods.
Fig. 9a and Fig. 9b depict the results obtained from the training
intersections INT-1 and INT-7, respectively. Inspection of these figures
reveals that the average performance improvement achieved through traffic
state augmentations was around $2\%$. Furthermore, the inclusion of noise and
mask methods incurred performance degradation in INT-7, which can be
attributed to the fact that the model has already captured the underlying
patterns and characteristics of the training intersections to a large extent.
As a result, introducing additional variations through traffic state
augmentation may not provide substantial benefits in the training set.
However, traffic state augmentation methods demonstrated significant
improvements when confronted with unseen intersection structures.
Fig. 9c and Fig. 9d illustrate the AWT ratios for the test intersections INT-9
and INT-11, respectively. These results clearly demonstrate that most traffic
state augmentation methods enhanced the performance of the base policy on the
test set, which consists of unseen intersections. The diverse
training samples generated through traffic state augmentation contribute to
the improved performance. Among the traffic state augmentation techniques,
movement shuffling and traffic flow scaling were particularly effective in
enhancing the model’s performance. These techniques enable the model to adapt
and learn from a wider range of scenarios, resulting in improved performance
on the test set.
## 6 Conclusion
In this paper, a universal RL-based TSC framework called UniTSA has been
proposed for diverse intersection structures in V2X environments. More
specifically, UniTSA offers the capability to train a universal RL agent by
incorporating a junction matrix to characterize intersection states. To handle
unseen intersections, new traffic state augmentation methods have been
proposed to enrich the agent’s data collection, resulting in improved
performance and generalization for unseen intersection configurations. As a
result, UniTSA eliminates the necessity of extensive customization and
redevelopment for each individual intersection while offering a simple,
efficient, and open-sourced implementation, which makes UniTSA a valuable
framework for future research in data-efficient and generalizable RL-based TSC
methods within the field of V2X. Extensive experimental results have
demonstrated that UniTSA achieves the shortest average waiting time across
various intersection configurations, surpassing the performance of existing
methods and, with fine-tuning, outperforming the models trained from scratch.
## Acknowledgments
This work was supported by the National Key Research and Development Program
of China under Grant No. 2020YFB1807700 and the Shanghai Pujiang Program
under Grant No. 21PJD092.
## References
* [1] Fehda Malik, Hasan Ali Khattak, and Munam Ali Shah. Evaluation of the impact of traffic congestion based on sumo. In 2019 25th International Conference on Automation and Computing (ICAC), pages 1–5. IEEE, 2019.
* [2] Alan J Miller. Settings for fixed-cycle traffic signals. Journal of the Operational Research Society, 14(4):373–386, 1963.
* [3] Thomas Urbanik, Alison Tanaka, Bailey Lozner, Eric Lindstrom, Kevin Lee, Shaun Quayle, Scott Beaird, Shing Tsoi, Paul Ryus, Doug Gettman, et al. Signal timing manual, volume 1. Transportation Research Board Washington, DC, 2015.
* [4] Carlos Gershenson. Self-organizing traffic lights. arXiv preprint nlin/0411066, 2004.
* [5] Ishu Tomar, S Indu, and Neeta Pandey. Traffic signal control methods: Current status, challenges, and emerging trends. Proceedings of Data Analytics and Management: ICDAM 2021, Volume 1, pages 151–163, 2022.
* [6] Wang Tong, Azhar Hussain, Wang Xi Bo, and Sabita Maharjan. Artificial intelligence for vehicle-to-everything: A survey. IEEE Access, 7:10823–10843, 2019.
* [7] Tamás Wágner, Tamás Ormándi, Tamás Tettamanti, and István Varga. Spat/map v2x communication between traffic light and vehicles and a realization with digital twin. Computers and Electrical Engineering, 106:108560, 2023.
* [8] Yit Kwong Chin, Lai Kuan Lee, Nurmin Bolong, Soo Siang Yang, and Kenneth Tze Kin Teo. Exploring q-learning optimization in traffic signal timing plan management. In 2011 third international conference on computational intelligence, communication systems and networks, pages 269–274. IEEE, 2011.
* [9] Guanjie Zheng, Yuanhao Xiong, Xinshi Zang, Jie Feng, Hua Wei, Huichu Zhang, Yong Li, Kai Xu, and Zhenhui Li. Learning phase competition for traffic signal control. In Proceedings of the 28th ACM international conference on information and knowledge management, pages 1963–1972, 2019.
* [10] Chacha Chen, Hua Wei, Nan Xu, Guanjie Zheng, Ming Yang, Yuanhao Xiong, Kai Xu, and Zhenhui Li. Toward a thousand lights: Decentralized deep reinforcement learning for large-scale traffic signal control. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 3414–3421, 2020.
* [11] Xinshi Zang, Huaxiu Yao, Guanjie Zheng, Nan Xu, Kai Xu, and Zhenhui Li. Metalight: Value-based meta-reinforcement learning for traffic signal control. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 1153–1160, 2020.
* [12] Enming Liang, Zicheng Su, Chilin Fang, and Renxin Zhong. Oam: An option-action reinforcement learning framework for universal multi-intersection control. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 4550–4558, 2022.
* [13] Azzedine Boukerche, Dunhao Zhong, and Peng Sun. A novel reinforcement learning-based cooperative traffic signal system through max-pressure control. IEEE Transactions on Vehicular Technology, 71(2):1187–1198, 2022.
* [14] Liang Zhang, Qiang Wu, Jun Shen, Linyuan Lü, Bo Du, and Jianqing Wu. Expression might be enough: representing pressure and demand for reinforcement learning based traffic signal control. In International Conference on Machine Learning, pages 26645–26654. PMLR, 2022.
* [15] Yuanhao Xiong, Guanjie Zheng, Kai Xu, and Zhenhui Li. Learning traffic signal control from demonstrations. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, pages 2289–2292, 2019.
* [16] Stefano Giovanni Rizzo, Giovanna Vantini, and Sanjay Chawla. Time critic policy gradient methods for traffic signal control in complex and congested scenarios. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’19, page 1654–1664, New York, NY, USA, 2019. Association for Computing Machinery.
* [17] Tianshu Chu, Jie Wang, Lara Codecà, and Zhaojian Li. Multi-agent deep reinforcement learning for large-scale traffic signal control. IEEE Transactions on Intelligent Transportation Systems, 21(3):1086–1095, 2019.
* [18] Afshin Oroojlooy, Mohammadreza Nazari, Davood Hajinezhad, and Jorge Silva. Attendlight: Universal attention-based reinforcement learning model for traffic signal control. Advances in Neural Information Processing Systems, 33:4079–4090, 2020.
* [19] Zian Ma, Chengcheng Xu, Yuheng Kan, Maonan Wang, and Wei Wu. Adaptive coordinated traffic control for arterial intersections based on reinforcement learning. In 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), pages 2562–2567. IEEE, 2021.
* [20] Seyed Sajad Mousavi, Michael Schukat, and Enda Howley. Traffic light control using deep policy-gradient and value-function-based reinforcement learning. IET Intelligent Transport Systems, 11(7):417–423, 2017.
* [21] Mohammad Aslani, Mohammad Saadi Mesgari, Stefan Seipel, and Marco Wiering. Developing adaptive traffic signal control by actor–critic and direct exploration methods. In Proceedings of the Institution of Civil Engineers-Transport, volume 172, pages 289–298. Thomas Telford Ltd, 2019.
* [22] Haoran Su, Yaofeng D Zhong, Joseph YJ Chow, Biswadip Dey, and Li Jin. Emvlight: A multi-agent reinforcement learning framework for an emergency vehicle decentralized routing and traffic signal control system. Transportation Research Part C: Emerging Technologies, 146:103955, 2023.
* [23] Elise Van der Pol and Frans A Oliehoek. Coordinated deep reinforcement learners for traffic light control. Proceedings of learning, inference and control of multi-agent systems (at NIPS 2016), 8:21–38, 2016.
* [24] Patrick Mannion, Jim Duggan, and Enda Howley. An experimental review of reinforcement learning algorithms for adaptive traffic signal control. Autonomic road transport support systems, pages 47–66, 2016.
* [25] Hua Wei, Guanjie Zheng, Huaxiu Yao, and Zhenhui Li. Intellilight: A reinforcement learning approach for intelligent traffic light control. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2496–2505, 2018.
* [26] Lun-Hui Xu, Xin-Hai Xia, and Qiang Luo. The study of reinforcement learning for traffic self-adaptive control under multiagent markov game environment. Mathematical Problems in Engineering, 2013, 2013.
* [27] Mohammad Aslani, Mohammad Saadi Mesgari, and Marco Wiering. Adaptive traffic signal control with actor-critic methods in a real-world traffic network with different traffic disruption events. Transportation Research Part C: Emerging Technologies, 85:732–752, 2017.
* [28] Mohammad Aslani, Stefan Seipel, Mohammad Saadi Mesgari, and Marco Wiering. Traffic signal optimization through discrete and continuous reinforcement learning with robustness analysis in downtown tehran. Advanced Engineering Informatics, 38:639–655, 2018.
* [29] Halit Bugra Tulay and Can Emre Koksal. Road state inference via channel state information. IEEE Transactions on Vehicular Technology, pages 1–14, 2023.
* [30] Mohamed MG Farag, Hesham A Rakha, Emadeldin A Mazied, and Jayanthi Rao. Integration large-scale modeling framework of direct cellular vehicle-to-all (c-v2x) applications. Sensors, 21(6):2127, 2021.
* [31] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021.
* [32] Pablo Alvarez Lopez, Michael Behrisch, Laura Bieker-Walz, Jakob Erdmann, Yun-Pang Flötteröd, Robert Hilbrich, Leonhard Lücken, Johannes Rummel, Peter Wagner, and Evamarie Wießner. Microscopic traffic simulation using sumo. In 2018 21st international conference on intelligent transportation systems (ITSC), pages 2575–2582. IEEE, 2018.
* [33] Peter Koonce and Lee Rodegerdts. Traffic signal timing manual. Technical report, United States. Federal Highway Administration, 2008.
* [34] Arthur G Sims and Kenneth W Dobinson. The sydney coordinated adaptive traffic (scat) system philosophy and benefits. IEEE Transactions on vehicular technology, 29(2):130–137, 1980.
* [35] Pravin Varaiya. The max-pressure controller for arbitrary networks of signalized intersections. Advances in dynamic network modeling in complex transportation systems, pages 27–66, 2013.
* [36] Hua Wei, Guanjie Zheng, Vikash Gayah, and Zhenhui Li. A survey on traffic signal control methods. arXiv preprint arXiv:1904.08117, 2019.
* [37] Timothy Hospedales, Antreas Antoniou, Paul Micaelli, and Amos Storkey. Meta-learning in neural networks: A survey. IEEE transactions on pattern analysis and machine intelligence, 44(9):5149–5169, 2021.
* [38] Misha Laskin, Kimin Lee, Adam Stooke, Lerrel Pinto, Pieter Abbeel, and Aravind Srinivas. Reinforcement learning with augmented data. Advances in neural information processing systems, 33:19884–19895, 2020.
* [39] Ilya Kostrikov, Denis Yarats, and Rob Fergus. Image augmentation is all you need: Regularizing deep reinforcement learning from pixels. arXiv preprint arXiv:2004.13649, 2020.
* [40] Nicklas Hansen and Xiaolong Wang. Generalization in reinforcement learning by soft data augmentation. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 13611–13617. IEEE, 2021.
* [41] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
* [42] Antonin Raffin, Ashley Hill, Adam Gleave, Anssi Kanervisto, Maximilian Ernestus, and Noah Dormann. Stable-baselines3: Reliable reinforcement learning implementations. The Journal of Machine Learning Research, 22(1):12348–12355, 2021.
* [43] James Ault and Guni Sharon. Reinforcement learning benchmarks for traffic signal control. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1), 2021.
* [44] Seung-Bae Cools, Carlos Gershenson, and Bart D’Hooghe. Self-organizing traffic lights: A realistic simulation. Advances in applied self-organizing systems, pages 45–55, 2013.
* [45] Hua Wei, Chacha Chen, Guanjie Zheng, Kan Wu, Vikash Gayah, Kai Xu, and Zhenhui Li. Presslight: Learning max pressure control to coordinate traffic signals in arterial network. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1290–1298, 2019.
# Double lines in the quintic del Pezzo fourfold
Kiryong Chung Department of Mathematics Education, Kyungpook National
University, 80 Daehakro, Bukgu, Daegu 41566, Korea<EMAIL_ADDRESS>
###### Abstract.
Let $Y$ be the del Pezzo $4$-fold defined by the linear section
$\textrm{Gr}(2,5)$ by $\mathbb{P}^{7}$. In this paper, we classify the type of
normal bundles of lines in $Y$ and describe its parameter space. As a
corollary, we obtain the desigularized model of the moduli space of stable
maps in $Y$. Also we compute the intersection Poincaré polynomial of the
stable maps space.
###### Key words and phrases:
Rational curves, Fano variety, Desingularization, Intersection cohomology
###### 2020 Mathematics Subject Classification:
14E15, 14E08, 14M15, 32S60.
## 1\. Introduction
### 1.1. Motivation
In a previous series of papers [CK11, CHK12, CM17], the authors completely
solved the comparison problem for different moduli spaces (i.e., the space of
stable maps, the Hilbert scheme of curves, and the space of stable sheaves) of
rational curves in a homogeneous variety $X$ when the degree of the curves is
$\leq 3$. As a result, we obtained the moduli-theoretic birational models (in
the sense of the log minimal model program) and computed the cohomology groups
of the moduli spaces. In this case, the convexity of $X$ guarantees that the
moduli space of stable maps has only mild singularities, so one can take it as
the starting point of the comparison. But many Fano varieties are not convex.
Toy examples are given by the minimal compactifications of $\mathbb{C}^{3}$:
the quintic del Pezzo $3$-fold $W_{5}$ and the Mukai variety $W_{22}$. In the
case of $W_{5}$, our starting point of the comparison is the Hilbert scheme
(which is isomorphic to the moduli space of stable sheaves). In [Chu22], we
obtained the desingularized model of the moduli space of stable maps in
$W_{5}$. In this paper, as a well-known example of the minimal
compactification of $\mathbb{C}^{4}$, we study the rational curves in the
quintic del Pezzo $4$-fold $Y$, which is unique up to isomorphism. We deal
with the first non-trivial case, that is, degree two rational curves in $Y$.
We obtain the desingularized model of the space of stable maps and thus its
intersection cohomology group. As in the $3$-fold case ([Chu22]), the crucial
part is to classify the types of the normal bundle of a line in $Y$. In
general, the geometry of lines in a Fano variety has played an important role
in determining its geometric properties ([PZ16, KPS18, PCS19]).
### 1.2. Results
Unless otherwise stated, we define the quintic del Pezzo $4$-fold $Y$ as the
linear section of $\mathrm{Gr}(2,5)$ by $\\{p_{12}-p_{03}=p_{13}-p_{24}=0\\}$,
where $\\{p_{ij}\\}$ are the Plücker coordinates of $\mathbb{P}^{9}$ under the
Plücker embedding $\mathrm{Gr}(2,5)\subset\mathbb{P}^{9}$. It is known that
the normal bundle $N_{L/Y}$ of a line $L$ in $Y$ is one of the following types
([PZ16, Lemma 1.6])
$N_{L/Y}\cong\mathcal{O}_{L}(1)\oplus\mathcal{O}_{L}^{\oplus
2}\;\mathrm{or}\;\mathcal{O}_{L}(-1)\oplus\mathcal{O}_{L}(1)^{\oplus 2}.$
Let us call the line of the first case (resp. the second case) by free line
(resp. non-free line). Let $\mathbf{H}_{d}(Y)$ be the Hilbert scheme of curves
$C$ with Hilbert polynomial $\chi(\mathcal{O}_{C}(m))=dm+1$ in $Y$. Let us
define the _double line_ $L^{2}$ as the non-split extension sheaf $F$
($\cong\mathcal{O}_{L^{2}}$)
$0\rightarrow{\mathcal{O}_{L}(-1)}\rightarrow{F}\rightarrow{\mathcal{O}_{L}}\rightarrow
0,$
where $L$ is a line. A double line $L^{2}$ in $Y$ supported on $L$ is
classified by
$\mathrm{Ext}^{1}(\mathcal{O}_{L},\mathcal{O}_{L}(-1))\cong\mathrm{H}^{0}(N_{L/Y}(-1))$.
Hence the double line $L^{2}$ supported on a free (resp. non-free) line $L$ in
$Y$ is unique (resp. moves in a family isomorphic to $\mathbb{P}^{1}$). The
main result of this paper
is the following.
###### Theorem 1.1.
Let $\mathbf{D}(Y)$ be the locus of double lines in $\mathbf{H}_{2}(Y)$. Then
$\mathbf{D}(Y)$ is a $4$-dimensional smooth subvariety of $\mathbf{H}_{2}(Y)$.
By combining the result of [CHL18] and Theorem 1.1, it turns out that the
non-free lines in $Y$ are exactly the lines meeting the _dual conic_
$C_{v}^{\vee}$ at a point (Corollary 3.2). Furthermore, we obtain the
desingularized model (i.e., a subvariety of _complete conics_) of the moduli
space of stable maps of degree $2$ in $Y$, which enables us to compute the
intersection cohomology group of the moduli space. For a detailed description,
see Corollary 3.4.
###### Notation 1.2.
* •
Let us denote by $\mathrm{Gr}(k,n)$ the Grassmannian variety parameterizing
$k$-dimensional subspaces of a fixed vector space $V$ with $\dim V=n$.
* •
We sometimes do not distinguish the moduli point $[x]\in\mathcal{M}$ and the
object $x$ parameterized by $[x]$ when no confusion can arise.
* •
For brevity, we denote the projectivized linear subspace
$\mathbb{P}(\text{span}\\{e_{i},\cdots,e_{j}\\})$ in $\mathbb{P}(V_{5})$ by
$\mathbb{P}(e_{i},\cdots,e_{j})$, where $\\{e_{0},e_{1},\cdots,e_{4}\\}$ is the
standard basis of the vector space $V_{5}$ of dimension $5$.
### Acknowledgements
The author gratefully acknowledges the many helpful suggestions of Sang-Bum
Yoo and Joonyeong Won during the preparation of the paper.
## 2\. Preliminaries
In this section we collect some facts about the quintic del Pezzo fourfold
which are mostly taken from [Pro94] and [CHL18].
One can define the Schubert varieties related to lines and planes in
$\mathrm{Gr}(2,5)$ as follows. For a fixed flag
$p\in\mathbb{P}^{1}\subset\mathbb{P}^{2}\subset\mathbb{P}^{3}\subset\mathbb{P}^{4}$,
let
* •
$\sigma_{3,2}=\\{\ell\,|\,p\in\ell\subset\mathbb{P}^{2}\\}$,
* •
$\sigma_{3,1}=\\{\ell\,|\,p\in\ell\subset\mathbb{P}^{3}\\}$,
* •
$\sigma_{2,2}=\\{\ell\,|\,\ell\subset\mathbb{P}^{2}\\}$.
Clearly, $\sigma_{3,2}$ is a line in $\mathrm{Gr}(2,5)$, and thus the lines are
parameterized by $\mathrm{Gr}(1,3,5)$. Also, we note that the planes in
$\text{Gr}(2,5)$ of $\sigma_{3,1}$-type (resp. $\sigma_{2,2}$-type) are
parameterized by the _flag_ variety $\text{Gr}(1,4,5)$ (resp. by
$\text{Gr}(3,5)$). The projection maps
$v_{1}:\mathrm{Gr}(1,3,5)\to\mathrm{Gr}(1,5)$ and
$v_{2}:\mathrm{Gr}(1,4,5)\to\mathrm{Gr}(1,5)$ are called the _vertex maps_.
In [Pro94], the Hilbert schemes of lines and planes in $Y$ are explicitly
described. For a projective variety $X$ with a fixed embedding in
$\mathbb{P}^{N}$, let $\mathbf{H}_{1}(X)$ (resp. $\mathbf{F}_{2}(X)$) be the
Hilbert scheme of lines (resp. planes) in $X$.
###### Proposition 2.1 ([Pro94, Proposition 2.7]).
Let
$i:\mathbf{H}_{1}(Y)\subset\mathbf{H}_{1}(\mathrm{Gr}(2,5))=\mathrm{Gr}(1,3,5)$
be the inclusion map and $v_{1}:\mathrm{Gr}(1,3,5)\rightarrow\mathrm{Gr}(1,5)$
be the vertex map. Then the composition map $v_{1}\circ
i:\mathbf{H}_{1}(Y)\to\mathrm{Gr}(1,5)$ is a smooth blow-up along the smooth
conic $C_{v}\subset\mathrm{Gr}(1,5)$.
We call the conic $C_{v}$ in Proposition 2.1 the _vertex conic_.
###### Proposition 2.2 ([Pro94, Proposition 2.2]).
The Hilbert scheme of planes in $Y$ is isomorphic to
$\mathbf{F}_{2}(Y)\cong C_{v}\sqcup\\{[S]\\}.$
Here each point $t\in C_{v}(\cong\mathbb{P}^{1})$ parameterizes the
$\sigma_{3,1}$-type planes $P_{t}$ such that the vertex of the plane $P_{t}$
is the point $\\{t\\}$ in $C_{v}$. Also the point $[S]$ parameterizes the
$\sigma_{2,2}$-type plane $S$ in $Y$ determined by the linear spanning
$\langle C_{v}\rangle\subset\mathrm{Gr}(1,5)\cong\mathbb{P}^{4}$ of $C_{v}$.
Let $\\{e_{0},e_{1},e_{2},e_{3},e_{4}\\}$ be the standard coordinate vectors
of the space $V_{5}(\cong\mathbb{C}^{5})$, which provides the original
projective space $\mathbb{P}(V_{5})(=\mathbb{P}^{4})$. Let
$\\{p_{ij}\\}_{0\leq i<j\leq 4}$ be the Plücker coordinates of
$\mathbb{P}^{9}$. Let $\mathbb{P}^{7}=H_{1}\cap H_{2}$ be the linear subspace
of $\mathbb{P}^{9}$ defined by $p_{12}-p_{03}=p_{13}-p_{24}=0$. The vertex
conic $C_{v}$ is given by ([CHL18, Lemma 6.3])
$C_{v}=\\{[a_{0}:a_{1}:a_{2}:a_{3}:a_{4}]\mid
a_{0}a_{4}+a_{1}^{2}=a_{2}=a_{3}=0\\}\subset\mathbb{P}(V_{5}).$
###### Remark 2.3.
From the proof of [CHL18, Lemma 6.3], we know that $\sigma_{3,1}$-type planes
$P_{t}$ in $Y$ are $P_{t}=\mathbb{P}(V_{1}\wedge V_{4})$ where
$V_{1}=\text{span}\\{e_{0}+te_{1}-t^{2}e_{4}\\}$ and
$V_{4}=\text{span}\\{e_{0},e_{1},e_{2}+te_{3},e_{4}\\}$. Also the unique plane
$S$ in $Y$ is given by $S=\mathbb{P}(\wedge^{2}V_{3})$ such that
$V_{3}=\text{span}\\{e_{0},e_{1},e_{4}\\}$.
The positional relations of the planes in $Y$ are as follows.
###### Proposition 2.4 ([Pro94, Proposition 2.2]).
Let $P_{t}$ be a $\sigma_{3,1}$-type plane and $S$ be the unique
$\sigma_{2,2}$-type plane in $Y$. Then
1. (1)
the intersection $P_{t}\cap S$ is a tangent line of the dual conic
$C_{v}^{\vee}$ in $Y$ (here $C_{v}^{\vee}$ denotes the curve swept out by the
tangent lines of $C_{v}$).
2. (2)
the intersection $P_{t}\cap P_{t^{\prime}}$ is a point in $S$ for any
$t\neq t^{\prime}\in C_{v}$.
The lines in $Y$ admit a stratification according to their position relative to the planes in $Y$.
###### Proposition 2.5 ([Pro94, Corollary 3.7]).
Let $L$ be a line in $Y$ and $R=\bigcup\limits_{t\in C_{v}}P_{t}$ be the union
of planes in $Y$. Then there are five types of lines in $Y$ such that the
automorphism group $\mathrm{Aut}(Y)$ of $Y$ transitively acts on each stratum.
1. (a)
$L\nsubseteq R\cup S$.
2. (b)
$L\subset R$, $L\cap S=\\{\mathrm{pt}.\\}$ and $L\cap C_{v}^{\vee}=\emptyset$.
3. (c)
$L\subset R$, $L\cap S=L\cap C_{v}^{\vee}=\\{\mathrm{pt}.\\}$.
4. (d)
$L\subset S$ and $L$ is a tangent line of $C_{v}^{\vee}$.
5. (e)
$L\subset S$ and $L\cap C_{v}^{\vee}=\\{p_{1},p_{2}\\}$ for $p_{1}\neq p_{2}$.
In Section $6$ of [CHL18], the authors reproduce the results of Propositions
2.1, 2.2, and 2.4 by specifying the linear subspace
$\mathbb{P}^{7}\subset\mathbb{P}^{9}$.
###### Example 2.6.
Let $P_{t_{0}}$ be the plane determined by the vertex $\mathbb{P}(e_{0})$ and
the three dimensional space $\mathbb{P}(e_{0},e_{1},e_{2},e_{4})$. The
intersection point is $P_{t_{0}}\cap C_{v}^{\vee}=\mathbb{P}(e_{0}\wedge
e_{1})$ which is the tangent line of $C_{v}$ at $\mathbb{P}(e_{0})$.
Furthermore, examples of the lines in Proposition 2.5 are given in Table 1.
Type | Vertex | Plane
---|---|---
(a) | $\mathbb{P}(e_{2})$ | $\mathbb{P}(e_{0},e_{2},e_{3})$
(b) | $\mathbb{P}(e_{0})$ | $\mathbb{P}(e_{0},e_{2},e_{4})$
(c) | $\mathbb{P}(e_{0})$ | $\mathbb{P}(e_{0},e_{1},e_{2})$
(d) | $\mathbb{P}(e_{0})$ | $\mathbb{P}(e_{0},e_{1},e_{4})$
(e) | $\mathbb{P}(e_{1})$ | $\mathbb{P}(e_{0},e_{1},e_{4})$
Table 1. Examples of lines in $Y$
Let $\mathbf{H}_{2}(Y)$ be the Hilbert scheme of conics in $Y$. A general
conic $C$ in $Y$ determines a linear span in two senses: the linear space
$\mathbb{P}^{2}$ containing $C$ in
$\mathbb{P}(\wedge^{2}V_{5})=\mathbb{P}^{9}$, and the linear space
$\mathbb{P}^{3}$ containing two skew lines in
$\mathbb{P}(V_{5})=\mathbb{P}^{4}$. Motivated by this observation, we obtain a
birational model of $\mathbf{H}_{2}(Y)$ as follows. Let $\mathcal{U}$ be the
universal subbundle on $\mathrm{Gr}(4,5)$ and
$\mathcal{K}:=\mathrm{ker}\\{\wedge^{2}\mathcal{U}\subset\wedge^{2}\mathcal{O}^{\oplus
5}\to\mathcal{O}^{\oplus 2}\\}$
be the kernel of the composition map where the arrow is given by
$\\{p_{12}-p_{03},p_{13}-p_{24}\\}$. Let
$\mathbf{S}(Y):=\mathrm{Gr}(3,\mathcal{K})$ be the relative Grassmannian over
$\mathrm{Gr}(4,5)$.
###### Proposition 2.7 ([CHL18, Proposition 6.7 and Remark 6.8]).
Under the above definitions and notation, $\mathbf{H}_{2}(Y)$ is obtained from
$\mathbf{S}(Y)$ by a blow-up followed by a blow-down
$\mathbf{S}(Y)\;\xleftarrow{\;\eta\;}\;\widetilde{\mathbf{S}}(Y)\;\longrightarrow\;\mathbf{H}_{2}(Y),\qquad\Psi\colon\mathbf{S}(Y)\dashrightarrow\mathbf{H}_{2}(Y),$
where
1. (1)
the blow-up center in $\mathbf{S}(Y)$ (resp. $\mathbf{H}_{2}(Y)$) is a
disjoint union $\mathbb{P}^{1}\sqcup\mathbb{P}^{1}$ (resp. $\mathbb{P}^{5}$)
of $\mathbb{P}^{1}$’s and
2. (2)
the space $\widetilde{\mathbf{S}}(Y)$ is a relative conics space over
$\mathrm{Gr}(4,5)$ such the fiber over $\mathrm{Gr}(4,5)$ is the Hilbert
scheme $\mathbf{H}_{2}(\mathrm{Gr}(2,4)\cap H_{1}\cap H_{2})$ of conics in the
quadric surface $\mathrm{Gr}(2,4)\cap H_{1}\cap H_{2}$.
In particular, $\mathbf{H}_{2}(Y)$ is an irreducible smooth variety of
dimension $7$.
###### Remark 2.8.
The relative Grassmannian $\mathbf{S}(Y)=\mathrm{Gr}(3,\mathcal{K})$ in
Proposition 2.7 can be regarded as the incidence variety
(1) $\mathbf{S}(Y)=\\{(U_{3},V_{4})\mid
U_{3}\subset\mathcal{K}_{[V_{4}]}\\}\subset\mathrm{Gr}(3,\wedge^{2}V_{5})\times\mathrm{Gr}(4,V_{5}),$
where
$\mathcal{K}_{[V_{4}]}=\text{ker}\\{\wedge^{2}V_{4}\subset\wedge^{2}V_{5}\stackrel{{\scriptstyle(p_{12}-p_{03})\oplus(p_{13}-p_{24})}}{{\rightarrow}}\mathbb{C}\oplus\mathbb{C}\\}$.
Also the birational correspondence
$\Psi:\mathbf{S}(Y)\dashrightarrow\mathbf{H}_{2}(Y)$ is
$\Psi([(U_{3},V_{4})])=\mathbb{P}(U_{3})\cap\mathrm{Gr}(2,V_{4})$. Note that
the map $\Psi$ is not defined at the two distinct points $[P_{t}]$ and $[S]$
over a linear subspace $\mathbb{P}^{1}(\cong C_{v})$ in $\mathrm{Gr}(4,5)$
([CHL18, Remark 6.8]).
## 3\. Results
In this section we prove Theorem 1.1. As corollaries, we obtain a description
of the locus of non-free lines in $Y$ (Corollary 3.2), as well as the
desingularized model of the space of stable maps in $Y$ and thus its
intersection cohomology (Corollary 3.4).
### 3.1. Proof of Theorem 1.1
First, we describe the closure of the image under the birational inverse
$\Psi^{-1}$ of Proposition 2.7 of the locus of double lines in
$\mathbf{H}_{2}(Y)$. Then we explicitly find the strict transform of this
closure along the blow-up/blow-down maps of Proposition 2.7.
Let $\bar{\mathbf{D}}(Y)$ be the locus of the pairs $(U_{3},V_{4})$ in
$\mathbf{S}(Y)$ such that the restriction $q_{G}|_{\mathbb{P}(U_{3})}$ to
$\mathbb{P}(U_{3})$ of the quadratic form $q_{G}$ associated to
$\mathrm{Gr}(2,V_{4})$ has rank $\leq 1$. Let
$p=p_{2}\circ i:\bar{\mathbf{D}}(Y)\rightarrow\mathrm{Gr}(4,5)$
be the composition of the second projection map
$p_{2}:\mathbf{S}(Y)\rightarrow\mathrm{Gr}(4,5)$ in equation (1) and the
inclusion map $i:\bar{\mathbf{D}}(Y)\subset\mathbf{S}(Y)$.
###### Lemma 3.1.
Under the above definitions and notation, the image
$p(\bar{\mathbf{D}}(Y))=:Q_{3}$ is an irreducible quadric $3$-fold in
$\mathrm{Gr}(4,5)$, with homogeneous coordinates
$x_{0},x_{1},x_{2},x_{3},x_{4}$ in which $Q_{3}$ is defined by
$x_{1}^{2}+4x_{0}x_{2}=0$.
###### Proof.
For the chart $x_{3}\neq 0$, let
$[V_{4}]:=\left(\begin{matrix}1&0&0&a&0\\\ 0&1&0&b&0\\\ 0&0&1&c&0\\\
0&0&0&d&1\\\ \end{matrix}\right)$
be an affine chart of $\mathrm{Gr}(4,5)$ with
$a=x_{0}/x_{3},b=x_{1}/x_{3},c=x_{4}/x_{3},d=x_{2}/x_{3}$ and
$V_{4}=\text{span}\\{e_{0}+ae_{3},e_{1}+be_{3},e_{2}+ce_{3},e_{4}+de_{3}\\}$.
Then for an affine chart of $\mathrm{Gr}(2,V_{4})$
$[V_{2}]=\left(\begin{matrix}1&0&t_{1}&t_{3}\\\ 0&1&t_{2}&t_{4}\\\
\end{matrix}\right),$
the affine chart of $\mathrm{Gr}(2,V_{4})$ in $\mathrm{Gr}(2,5)$ is
$[V_{2}][V_{4}]=\left(\begin{matrix}1&0&t_{1}&a+ct_{1}+dt_{3}&t_{3}\\\
0&1&t_{2}&b+ct_{2}+dt_{4}&t_{4}\end{matrix}\right).$
After eliminating the variables $\\{t_{1},t_{2},t_{3},t_{4}\\}$ with the
computer algebra system Macaulay2 ([GS]), we obtain a defining ideal of
$\mathrm{Gr}(2,V_{4})\cap H_{1}\cap H_{2}$
(2) $\begin{split}\langle
bcp_{01}^{2}+c^{2}p_{01}p_{02}&+cdp_{01}p_{04}-ap_{01}^{2}+bp_{01}p_{04}+cp_{02}p_{04}+dp_{04}^{2}+dp_{01}p_{14}-p_{02}p_{14},\\\
\;h_{1},\;h_{2},\;h_{3},&\;h_{4},\;h_{5},\;h_{6}\rangle\end{split}$
where
$\begin{split}h_{1}&=p_{03}-p_{12},\;h_{2}=p_{12}-bp_{01}-cp_{02}-dp_{04},\;h_{3}=p_{13}-p_{24},\\\
h_{4}&=p_{23}+ap_{02}+bp_{12}-dp_{24},\;h_{5}=p_{24}+ap_{01}-cp_{12}-dp_{14},\;h_{6}=p_{34}-ap_{04}-bp_{14}-cp_{24}\end{split}$
in $\mathbb{P}^{9}\times\mathbb{C}_{(a,b,c,d)}$.
For the chart $x_{4}\neq 0$, let
$a=x_{0}/x_{4},b=x_{1}/x_{4},u=x_{3}/x_{4},d=x_{2}/x_{4}$. By doing the same
calculation as before, we obtain the local equation of
$\mathrm{Gr}(2,V_{4})\cap H_{1}\cap H_{2}$ as follows:
(3) $\begin{split}\langle
ap_{04}^{2}+ap_{01}p_{14}+bp_{04}p_{14}&-dp_{14}^{2}-p_{01}p_{34}-aup_{04}p_{14}-bup_{14}^{2}-up_{04}p_{34}+u^{2}p_{14}p_{34},\\\
\;k_{1},\;k_{2},\;k_{3},&\;k_{4},\;k_{5},\;k_{6}\rangle\end{split}$
where
$\begin{split}k_{1}=&p_{02}+bp_{01}+dp_{04}-up_{12},\;k_{2}=p_{03}-p_{12},\;k_{3}=p_{12}-ap_{01}+dp_{14}-up_{24},\\\
k_{4}=&p_{13}-p_{24},\;k_{5}=p_{23}+ap_{12}+bp_{24}-dp_{34},\;k_{6}=p_{24}+ap_{04}+bp_{14}-up_{34}\end{split}$
in $\mathbb{P}^{9}\times\mathbb{C}_{(a,b,u,d)}$.
Since the restriction of the quadratic form $q_{G}$ associated to
$\mathrm{Gr}(2,V_{4})$ satisfies
$\text{rank}(q_{G}|_{\mathbb{P}(U_{3})})\leq 1$, the quadrics in the defining
equations (2) and (3) have rank $\leq 3$. Therefore the defining
equation of the image $p(\bar{\mathbf{D}}(Y))$ is given by
$\langle b^{2}+4ad\rangle$
in both cases. ∎
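The rank computation above can be reproduced symbolically. The following sketch (our reconstruction in SymPy, not the paper's Macaulay2 session [GS]) forms the Gram matrix of the quadratic part of the ideal (2) in the variables $p_{01},p_{02},p_{04},p_{14}$ and confirms that its determinant equals $(b^{2}+4ad)/16$, so the rank drops to $\leq 3$ exactly on the quadric $b^{2}+4ad=0$:

```python
import sympy as sp

# Sketch (our reconstruction): Gram matrix of the quadratic part of the
# defining ideal (2) in the variables p01, p02, p04, p14, and its
# determinant, which cuts out the quadric b^2 + 4ad = 0 of Lemma 3.1.

a, b, c, d = sp.symbols('a b c d')
p01, p02, p04, p14 = sp.symbols('p01 p02 p04 p14')

Q = (b*c*p01**2 + c**2*p01*p02 + c*d*p01*p04 - a*p01**2
     + b*p01*p04 + c*p02*p04 + d*p04**2 + d*p01*p14 - p02*p14)

xs = [p01, p02, p04, p14]
M = sp.Matrix(4, 4, lambda i, j: Q.diff(xs[i]).diff(xs[j]) / 2)

# rank(Q) <= 3 iff det(M) = 0, and det(M) = (b^2 + 4ad)/16.
assert sp.simplify(16 * M.det() - (b**2 + 4*a*d)) == 0
```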
Obviously, the singular locus $\text{Sing}(Q_{3})(\cong\mathbb{P}^{1})$ of
$Q_{3}$ is defined by $I_{\mathrm{Sing}(Q_{3})}=\langle x_{0},x_{1},x_{2}\rangle$.
###### Proof of Theorem 1.1.
Step 1. For each $[V_{4}]\in Q_{3}\setminus\text{Sing}(Q_{3})$, the quadric
surface $\mathrm{Gr}(2,V_{4})\cap H_{1}\cap H_{2}$ is of rank $3$. Hence the
fiber $p^{-1}([V_{4}])$ is isomorphic to $\mathbb{P}^{1}$ which parameterizes
tangent planes (i.e., lines) of the quadric cone $\mathrm{Gr}(2,V_{4})\cap
H_{1}\cap H_{2}$. If $[V_{4}]\in\text{Sing}(Q_{3})$, the singular quadric
surface $\mathrm{Gr}(2,V_{4})\cap H_{1}\cap H_{2}$ is the union of the plane
$P_{t}$ and $S$. In fact, for the affine chart $x_{3}\neq 0$ (similarly,
$x_{4}\neq 0$), it is defined by the union of $\sigma_{3,1}$-type planes:
$\langle
c^{2}p_{01}+cp_{04}-p_{14},p_{23},cp_{24}-p_{34},cp_{02}-p_{12},-cp_{12}+p_{24},p_{13}-p_{24},p_{03}-p_{12}\rangle$
and the $\sigma_{2,2}$-plane $S$:
$\langle
p_{02},p_{23},cp_{24}-p_{34},cp_{02}-p_{12},-cp_{12}+p_{24},p_{13}-p_{24},p_{03}-p_{12}\rangle$
which matches Remark 2.3 (by letting $c=t$). Hence the fiber
$p^{-1}([V_{4}])$ is isomorphic to $\mathbb{P}^{1}$, which parameterizes planes
containing the intersection line $P_{t}\cap S$. In conclusion,
$\bar{\mathbf{D}}(Y)$ is a $\mathbb{P}^{1}$-fibration over $Q_{3}$.
Step 2. Note that the birational map $\Psi$ in Proposition 2.7 is not defined
for the two points: $\\{[P_{t}],[S]\\}$ over
$\text{Sing}(Q_{3})(\cong\mathbb{P}^{1})$. Hence the blow-up center of
$\eta:\widetilde{\mathbf{S}}(Y)\rightarrow\mathbf{S}(Y)$ is contained in
$\bar{\mathbf{D}}(Y)$ and thus the strict transform of $\bar{\mathbf{D}}(Y)$
by the blow-up map $\eta$ is nothing but the blow-up
$\widetilde{\mathbf{D}}(Y)$ of $\bar{\mathbf{D}}(Y)$ along the center
$\mathbb{P}^{1}\sqcup\mathbb{P}^{1}$. Since the blow-up center of
$\bar{\mathbf{D}}(Y)$ consists of $\mathbb{Z}_{2}$-quotient singularities, one
can easily check that $\widetilde{\mathbf{D}}(Y)$ is smooth and that the
exceptional divisor $\mathbf{E}$ in $\widetilde{\mathbf{D}}(Y)$ is a
$\mathbb{P}(1,2,2)(\cong\mathbb{P}^{2})$-bundle over
$\mathbb{P}^{1}\sqcup\mathbb{P}^{1}$. Each fiber $\mathbb{P}^{2}$
parameterizes the double lines in the corresponding plane, because any flat
family in $\bar{\mathbf{D}}(Y)$ is supported on lines by construction.
Step 3. Since the restriction to each fiber $\mathbb{P}^{1}$ of the normal
bundle $\mathcal{N}_{\mathbf{E}/\widetilde{\mathbf{D}}(Y)}$ of the exceptional
divisor $\mathbf{E}$ is
$\mathcal{N}_{\mathbf{E}/\widetilde{\mathbf{D}}(Y)}|_{\mathbb{P}^{1}}\cong\mathcal{O}_{\mathbb{P}^{1}}(-1)$,
the image $\mathbf{D}(Y)$ of $\widetilde{\mathbf{D}}(Y)$ under the restriction
of the blow-down map $\widetilde{\mathbf{S}}(Y)\rightarrow\mathbf{H}_{2}(Y)$ is
smooth by the Fujiki–Nakano criterion ([FN72]). This finishes the proof. ∎
### 3.2. Non-free lines in $Y$ and the intersection cohomology of stable maps
###### Corollary 3.2.
Let $Z$ be the locus of non-free lines in the Hilbert scheme
$\mathbf{H}_{1}(Y)$ of lines in $Y$. Then $Z$ is a $\mathbb{P}^{1}$-fibration
over the vertex conic $C_{v}(\cong\mathbb{P}^{1})$.
###### Proof.
Geometrically, the non-free lines are the lines in $Y$ meeting the dual conic
$C_{v}^{\vee}$ at a unique point. Since the automorphism group of $Y$ acts
transitively on each stratum of Proposition 2.5, it is enough to check each
case in Table 1. For cases (d) and (e), $L$ is a non-free line if and only if
$L$ is a tangent line of $C_{v}^{\vee}$ by [CHL18, Proposition 6.6]. Thus,
among these, only the lines of case (d) are non-free. For case (c), the line
$L$ is defined by $p_{03}=p_{04}=p_{12}=p_{13}=p_{14}=p_{23}=p_{24}=p_{34}=0$.
Thus, for the affine chart $x_{3}\neq 0$, it lies on the irreducible quadric
cones defined by
$dp_{04}^{2}+(dp_{01}-p_{02})p_{14}=h_{1}=p_{12}-dp_{04}=h_{3}=p_{23}-dp_{24}=p_{24}-dp_{14}=p_{34}=0$
for $d\neq 0$. Hence there exists a one-parameter family of double lines
supported on $L$; that is, $L$ is non-free. For the remaining cases (a) and
(b), a similar computation shows that each line is free. ∎
Let $C$ be a projective connected reduced curve. A map $f:C\to Y$ is
considered _stable_ if $C$ has at worst nodal singularities and
$|\mathrm{Aut}(f)|<\infty$. Let $\mathcal{M}(Y,d)$ be the moduli space of
isomorphism classes of stable maps $f:C\to Y$ with genus $g(C)=0$ and
$\mathrm{deg}(f^{*}\mathcal{O}_{Y}(1))=d$. The moduli space $\mathcal{M}(Y,d)$
might be singular and reducible depending on the geometric property (for
example, convexity) of $Y$.
###### Remark 3.3.
Let $f:C\rightarrow L\subset Y$ be a stable map of degree $\deg(f)=2$ such
that $L$ is non-free. From the tangent bundle sequence of $L\subset Y$,
$\mathrm{H}^{1}(f^{*}T_{Y})\cong\mathrm{H}^{1}(f^{*}N_{L/Y})\cong\mathrm{H}^{1}(f_{*}\mathcal{O}_{C}\otimes
N_{L/Y})\cong\mathbb{C}$. That is, $Y$ is not convex and thus
$\mathcal{M}(Y,2)$ is not a smooth stack ([FP97]).
Let $X$ be a quasi-projective variety. Let $\mathrm{E}_{c}(X)(u,v)$ (resp.
$\mathrm{IE}_{c}(X)(u,v)$) denote the Hodge–Deligne polynomial of the
compactly supported cohomology (resp. compactly supported intersection
cohomology) of $X$, and let
$\mathrm{P}(X)=\mathrm{E}_{c}(X)(-t,-t)\;(\mathrm{resp.}\;\mathrm{IP}(X)=\mathrm{IE}_{c}(X)(-t,-t))$
be the _virtual_ (resp. intersection) Poincaré polynomial of $X$. A map
$\pi:X\rightarrow Y$ is _small_ if there is a locally closed stratification
$Y=\bigsqcup_{i}Y_{i}$ such that the restriction map
$\pi|_{\pi^{-1}(Y_{i})}:\pi^{-1}(Y_{i})\rightarrow Y_{i}$ is étale-locally
trivial and the inequality
$\dim\pi^{-1}(y)<\frac{1}{2}\mathrm{codim}_{Y}(Y_{i})$
holds for each closed point $y\in Y_{i}$ outside a dense open stratum of $Y$.
Let $\pi:X\rightarrow Y$ be a small map such that $X$ has at most finite group
quotient singularities. Then $\mathrm{P}(X)=\mathrm{IP}(Y)$ ([Max18,
Definition 6.6.1 and Theorem 6.6.3]).
###### Corollary 3.4.
The intersection cohomology of the moduli space $\mathcal{M}(Y,2)$ is given by
$\mathrm{IP}(\mathcal{M}(Y,2))=1+4t^{2}+10t^{4}+15t^{6}+15t^{8}+10t^{10}+4t^{12}+t^{14}.$
###### Proof.
By the same method as in the proof of Theorem 1.2 in [Chu22], one can show
that the blow-up $\widetilde{\mathbf{H}}_{2}(Y)$ of $\mathbf{H}_{2}(Y)$ along
$\mathbf{D}(Y)$ is smooth, and thus we have a birational morphism
$\pi:\widetilde{\mathbf{H}}_{2}(Y)\rightarrow\mathcal{M}(Y,2)$
such that the exceptional divisor (i.e., $\mathbb{P}^{2}$-bundle over
$\mathbf{D}(Y)$) contracts to a $\mathbb{P}^{2}$-bundle over
$\mathbf{H}_{1}(Y)$. From Corollary 3.2, the map $\pi$ is a small map and thus
$\mathrm{P}(\widetilde{\mathbf{H}}_{2}(Y))=\mathrm{IP}(\mathcal{M}(Y,2))$. By
the equality
$\mathrm{P}(\widetilde{\mathbf{H}}_{2}(Y))=\mathrm{P}(\mathbf{H}_{2}(Y))+(\mathrm{P}(\mathbb{P}^{2})-1)\cdot\mathrm{P}(\mathbf{D}(Y)),$
Proposition 2.1 and Corollary 3.2, we obtain the result. ∎
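As a quick consistency check (our addition, not part of the proof), the polynomial above is palindromic of degree $14=2\cdot 7$, as Poincaré duality for the intersection cohomology of the $7$-dimensional space $\mathcal{M}(Y,2)$ requires; this can be verified mechanically:

```python
import sympy as sp

# Sanity check (our addition): the intersection Poincare polynomial of
# Corollary 3.4 is palindromic of degree 14, i.e. t^14 * IP(1/t) = IP(t).

t = sp.symbols('t')
IP = 1 + 4*t**2 + 10*t**4 + 15*t**6 + 15*t**8 + 10*t**10 + 4*t**12 + t**14
assert sp.expand(t**14 * IP.subs(t, 1/t) - IP) == 0
```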
## References
* [CHK12] Kiryong Chung, Jaehyun Hong, and Young-Hoon Kiem. Compactified moduli spaces of rational curves in projective homogeneous varieties. J. Math. Soc. Japan, 64(4):1211–1248, 2012.
* [CHL18] Kiryong Chung, Jaehyun Hong, and SangHyeon Lee. Geometry of moduli spaces of rational curves in linear sections of Grassmannian Gr(2,5). Journal of Pure and Applied Algebra, 222(4):868–888, 2018.
* [Chu22] Kiryong Chung. Desingularization of Kontsevich’s compactification of twisted cubics in $V_{5}$. Manuscripta Mathematica, 2022.
* [CK11] Kiryong Chung and Young-Hoon Kiem. Hilbert scheme of rational cubic curves via stable maps. Amer. J. Math., 133(3):797–834, 2011.
* [CM17] Kiryong Chung and Han-Bom Moon. Mori’s Program for the Moduli Space of Conics in Grassmannian. Taiwanese Journal of Mathematics, 21(3):621 – 652, 2017.
* [FN72] Akira Fujiki and Shigeo Nakano. Supplement to “On the inverse of monoidal transformation”. Publ. Res. Inst. Math. Sci., 7:637–644, 1971/72.
* [FP97] W. Fulton and R. Pandharipande. Notes on stable maps and quantum cohomology. In Algebraic geometry—Santa Cruz 1995, volume 62 of Proc. Sympos. Pure Math., pages 45–96. Amer. Math. Soc., Providence, RI, 1997.
* [GS] Daniel R. Grayson and Michael E. Stillman. Macaulay2, a software system for research in algebraic geometry. Available at http://www.math.uiuc.edu/Macaulay2/.
* [KPS18] Alexander G. Kuznetsov, Yuri G. Prokhorov, and Constantin A. Shramov. Hilbert schemes of lines and conics and automorphism groups of Fano threefolds. Japan. J. Math., pages 685–789, 2018.
* [Max18] Laurentiu G. Maxim. Intersection homology and perverse sheaves, with applications to singularities. Book Project, 2018.
* [PCS19] V. V. Przyjalkowski, I. A. Cheltsov, and K. A. Shramov. Fano threefolds with infinite automorphism groups. Izvestiya: Mathematics, 83(4):860–907, 2019.
* [Pro94] Yuri Prokhorov. Compactifications of $\mathbb{C}^{4}$ of index $3$. In Algebraic geometry and its applications (Yaroslavl’ , 1992) Aspects Math., E25:159–169, Vieweg, Braunschweig, 1994.
* [PZ16] Yuri Prokhorov and Mikhail Zaidenberg. Examples of cylindrical Fano fourfolds. European Journal of Mathematics, 2(1):262–282, 2016.
# Anisotropic Special Relativity
Hassan Ganjitabar<EMAIL_ADDRESS>School of Engineering,
Newcastle University, Newcastle upon Tyne, NE1 7RU, UK
###### Abstract
Anisotropic Special Relativity (ASR) is the relativistic theory of nature with
a preferred direction in space-time. By relaxing the full-isotropy constraint
on space-time to the preference of one direction, we get a perturbative
modification of the Minkowski metric as
$\mathscr{g}_{\mu\nu}=\eta_{\mu\nu}+b\epsilon_{\mu\nu}$ leading to an
extension of geometrical objects such as the line element. The symmetry group
in ASR is obtained to be the space-time translations together with the group
of 2-dimensional complex special linear transformations, $SL(2,C)$, generated
by a complexification of $A=K_{x}+K_{y}$, $B=J_{y}-J_{x}$, and $C=K_{z}$ in
the case where the $z$-direction is preferred. A procedure for constructing
the anisotropic quantum field theory is provided, where Lorentz-invariant
Lagrangians are replaced with their ASR versions, in which the inner product
of any pair of covariant/contravariant indices is mediated by the anisotropic
metric $\mathscr{g}_{\mu\nu}$.
###### pacs:
11.10.-z, 03.30.+p, 11.30.Cp
## I Introduction
One of the widely accepted fundamental theories of physics is Special
Relativity (SR), in which Quantum Field Theory (QFT) and the Standard Model
(SM) of the universe are grounded. In SR, some quantities, like the line element
${ds}^{2}=(dt^{2}-dx^{2}-dy^{2}-dz^{2})$, are invariant under the Lorentz
group transformations. Despite the very strict restrictions on departure from
Lorentz symmetry [1, 2, 3], the Lorentz Invariance Violation (LIV) idea [4, 5]
is still popular. One of the scenarios for LIV is Standard Model Extension
(SME) [6] in which the Lorentz violation is allowed by adding non-relativistic
terms to the Lagrangian. On the other side, there are theories wherein the LIV
does not require the complete breakdown of relativistic symmetry. Examples of
the latter are the perturbative approach of Coleman and Glashow [7] and Very
Special Relativity (VSR) proposed by Cohen and Glashow [8].
There are two more theories allowing for LIV, the Finslerian structure of
space-time [9] and Non-Commutative Quantum Field Theory [10], in the context
of which VSR can be realised [11, 12]. In fact, a deformation of VSR symmetry can
be seen in the case of a Finsler space-time in which the fundamental metric
$g$ can be obtained from a function
$F={(\eta_{\mu\nu}dx^{\mu}dx^{\nu})}^{(1-b)/2}{(n_{\alpha}dx^{\alpha})}^{b}$
using $g_{\mu\nu}=\frac{1}{2}{\partial_{\mu}\partial_{\nu}}F^{2}$, where
$\eta_{\mu\nu}$ is the Minkowski metric of SR, $n^{\alpha}=(1,0,0,1)$ is a
null vector and $b$ is a constant parameter. VSR suggests a subgroup of the
Lorentz group, SIM(2), in addition to the space-time translations to be the
maximal symmetry of nature wherein one of the three spatial directions is
preferred. The anisotropy of space is one of the possible scenarios for LIV,
whose upper bound has already been investigated by analysing the cosmological
data [13] as well as performing terrestrial experiments [14, 15].
On the track of building an anisotropic version of SR, Bogoslovsky suggested
the Finslerian line element
$ds={(\eta_{\mu\nu}dx^{\mu}dx^{\nu})}^{(1-b)/2}{(n_{\alpha}dx^{\alpha})}^{b}$
which is not SR-invariant but is VSR-invariant [16, 17]. In this letter, we
ask whether there is any other possible form of the line element that is
invariant under a reduced Lorentz symmetry but not under the transformations
of the full Lorentz group.
## II Results and Discussion
We obtain such a line element by easing the isotropy constraint on the
Minkowski metric and instead allowing a preferred direction, $n^{\mu}$, in
space-time. In an anisotropic theory we need non-zero off-diagonal elements in
the metric. However, the only part of the Minkowski metric we promote here is
its $x$-$y$ block, because we want $n^{\mu}$ to remain invariant. The
promotion of the $x$-$y$ block must be done in a symmetric way, as we are not
looking for a full breakdown of isotropy; the isotropy in the $x$-$y$ plane is
still preserved. A modification of the Minkowski metric as
$\eta_{\mu\nu}\rightarrow\mathscr{g}_{\mu\nu}=\left(\begin{tabular}[]{cccc}$1$&$0$&$0$&$0$\\\
$0$&$-1+b$&$-b$&$0$\\\ $0$&$-b$&$-1+b$&$0$\\\ $0$&$0$&$0$&$-1$\\\
\end{tabular}\right)$ (1)
exhibits the desired anisotropy in space-time, with the $t$\- and
$z$-elements left unchanged, leading to a preferred direction determined by
the null vector $n^{\mu}=(1,0,0,1)$. By Anisotropic Special Relativity (ASR)
we mean a relativistic theory in which the inner product of a pair of
covariant/contravariant indices is mediated by the anisotropic metric
$\mathscr{g}$.
Applying metric (1) gives our version of Lorentz violating line element as
${ds}^{2}=(dt^{2}-dx^{2}-dy^{2}-dz^{2})+b{(dx-dy)}^{2}$ (2)
which can be considered as a perturbation of the Lorentz invariant line
element with $b$ factor to be a constant parameter controlling the
perturbation. Obviously, setting $b=0$ turns the situation back to the full
Lorentz symmetry. The metric $\mathscr{g}$ itself can be also written as a
perturbation of the Minkowski metric
$\mathscr{g}_{\mu\nu}=\eta_{\mu\nu}+b\epsilon_{\mu\nu}$ (3)
where
$\epsilon_{\mu\nu}=\left(\begin{tabular}[]{cccc}$0$&$0$&$0$&$0$\\\
$0$&$1$&$-1$&$0$\\\ $0$&$-1$&$1$&$0$\\\ $0$&$0$&$0$&$0$\\\
\end{tabular}\right),$ (4)
or can be obtained through the Finsler approach using a function of the form
$F=\sqrt{(\eta_{\mu\nu}+b\epsilon_{\mu\nu})dx^{\mu}dx^{\nu}}$ (5)
Moreover, it can be checked that the metric introduced above is compatible
with the $SL(2,C)$ symmetry group. Starting from the line-element invariance
principle
$\mathscr{g}_{\mu\nu}{dx^{\prime}}^{\mu}{dx^{\prime}}^{\nu}=\mathscr{g}_{\mu\nu}dx^{\mu}dx^{\nu}$
and considering $\Lambda$ as the most general transformation of the theory
(excluding the space-time translations), one can write
$\Lambda^{T}\mathscr{g}\Lambda=\mathscr{g}$ in matrix form. Defining
$\Lambda=e^{\omega}$ and expanding to first order in the infinitesimal
transformation $\Lambda\simeq\mathds{1}+\omega$, we have
$\omega^{T}\mathscr{g}+\mathscr{g}\omega=0$, suggesting
$\omega$ of the form
$\omega=\left(\begin{tabular}[]{cccc}$0$&$\alpha$&$\alpha$&$\xi$\\\
$\alpha$&$0$&$\omega_{12}$&$\omega_{13}$\\\
$\alpha$&$-\omega_{12}$&$0$&$\omega_{23}$\\\
$\xi$&$-\omega_{13}$&$-\omega_{23}$&$0$\\\ \end{tabular}\right)$ (6)
where $\alpha$ and $\xi$ are two constant parameters. The metric $\mathscr{g}$
given by (1) has been used in the last step. Eq. (6) can be written as
$\omega=-i\alpha(K_{x}+K_{y})-i\xi
K_{z}+i\omega_{23}J_{x}-i\omega_{13}J_{y}+i\omega_{12}J_{z}$ where $K_{i}$ and
$J_{i}$ are respectively the boost and rotation generators of the Lorentz
group. The appearance of $K_{x}+K_{y}$ and its commutation with $K_{z}$
constrains the parameters $\omega_{13}$ and $\omega_{23}$ to be equal, say
$\omega_{13}=\omega_{23}=\beta$, and sets $\omega_{12}=0$ in order to keep the
algebra closed. Therefore, we can rewrite (6) as $\omega=-i\alpha A-i\beta
B-i\xi C$ with the generators $A=K_{x}+K_{y}$, $B=J_{y}-J_{x}$ and $C=K_{z}$
satisfying the commutator relations $[A,B]=i2C$, $[A,C]=-iB$ and $[B,C]=iA$. A
complexification of these generators in the form of
$G^{+}=\frac{A+iB}{\sqrt{2}},\,\,\,G^{-}=\frac{A-iB}{\sqrt{2}},\,\,\,X=2C$ (7)
yields the algebra
$[G^{+},G^{-}]=X,\,\,\,[X,G^{\pm}]=\pm 2G^{\pm}$ (8)
which is the algebra of the group of complex special linear transformation in
two dimension, $SL(2,C)$.
Since $SL(2,C)$ has no abelian minimal subgroup, we can expect non-trivial
irreducible representations in ASR. As a matter of fact, $SL(2,C)$ is locally
isomorphic to the entire Lorentz group, SO(1,3), which in turn means that the
representations of ASR not only exist but are also the same as those of the
full Lorentz group.
As another consequence of our metric $\mathscr{g}$ given by (1), we can
introduce a Lorentz-violating term into the fermionic Lagrangian to explain
the neutrino mass in a natural way without violating lepton number or adding
sterile right-handed neutrinos. This was first discussed in [18, 19]; here we
ask whether we can add a different Lorentz-violating term, implied by our
metric $\mathscr{g}$, which preserves these features. We consider the
initially massless Majorana Lagrangian of the neutrino and modify it simply by
applying the metric $\mathscr{g}_{\mu\nu}$ to lower the index of one of the
contravariant vectors, i.e.
$\mathcal{L}^{ASR}=i\bar{\nu}_{M}\gamma^{\alpha}\mathscr{g}_{\alpha\beta}\partial^{\beta}\nu_{M}$,
leading to
$\mathcal{L}^{ASR}=\mathcal{L}^{SR}+ib\bar{\nu}_{M}(\gamma^{1}-\gamma^{2})(\partial^{1}-\partial^{2})\nu_{M}$
(9)
where $\mathcal{L}^{SR}$ is the massless Majorana Lagrangian of neutrino in SR
given by
$i\bar{\nu}_{M}(\gamma^{0}\partial^{0}-\gamma^{1}\partial^{1}-\gamma^{2}\partial^{2}-\gamma^{3}\partial^{3})\nu_{M}$
with $\nu_{M}$ the four-component Majorana neutrino and
$\gamma^{\alpha}$, $\alpha=0,\dots,3$, the Dirac matrices. Eq. (9) gives the
dispersion relation as
${(m^{2})}^{ASR}=(p_{\alpha}p^{\alpha})^{SR}+b(p_{x}-p_{y})^{2}$ for which
$(p_{\alpha}p^{\alpha})^{SR}={(m^{2})}^{SR}$ is assumed to be zero; even in
such a case, it is still possible for $(m^{2})^{ASR}$ to be not zero due to
the appearance of the perturbative term $b(p_{x}-p_{y})^{2}$. However, for the
neutrinos moving parallel to the $z$ direction or, more generally, those
carrying the same momentum in $x$\- and $y$-direction, $p_{x}=p_{y}$, the
corrected mass yet remains zero. In total, the neutrino mass in anisotropic
space-time will depend on the direction of its movement which in turn means
that the neutrino flavour oscillation allowed by anisotropic theories like ASR
and VSR [20] could be direction-dependent as well.
It can also be noticed that the field theory (FT) obtained in ASR remains
local in contrast to the non-local FT emerging from the correction term
introduced by Cohen and Glashow in VSR [18]. The term they suggested to
correct the equations of motion with is
$\frac{m_{\nu}^{2}\eta_{\alpha\beta}\gamma^{\alpha}n^{\beta}}{2n.\partial}\nu_{M}$
where the appearance of $\partial$ operator in the denominator makes the
theory non-local. However, ASR-FT remains local as only the lower orders of
the derivative operator appear in the Lagrangian.
In general, the mass of any field in ASR is modified in a way similar to the
line element in Eq. (2), i.e.
${(m^{2})}^{SR}\,\,\rightarrow\,\,{(m^{2})}^{ASR}={(m^{2})}^{SR}+b(p_{x}-p_{y})^{2}$
(10)
where ${(m^{2})}^{SR}$ is the squared mass of the field in SR, given by
$(E^{2}-{p_{x}}^{2}-{p_{y}}^{2}-{p_{z}}^{2})$, with $E$ and $p^{i}$,
$i=1,2,3$, respectively the energy and momentum of the field.
In ASR, we just need to lower the indices of any contravariant vector or
tensor with the anisotropic metric $\mathscr{g}$. This gives the inner product
of two arbitrary vectors $v^{\mu}$ and $u^{\mu}$ as
${(v_{\mu}u^{\mu})}^{ASR}={(v_{\mu}u^{\mu})}^{SR}+b(v^{1}-v^{2})(u^{1}-u^{2})$
(11)
where ${(v_{\mu}u^{\mu})}^{SR}$ is the inner product of the vectors in SR. In
the case that at least one of the two vectors is parallel to the preferred
direction ($v^{1}=v^{2}=0$ or $u^{1}=u^{2}=0$), or its $x$\- and
$y$-components are equal ($v^{1}=v^{2}$ or $u^{1}=u^{2}$), the inner product
reduces to its SR version.
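The following minimal sketch (our illustration; the vectors and the value of $b$ are arbitrary test data) evaluates the ASR inner product of Eq. (11) through the metric $\mathscr{g}$ and checks the reduction to the SR value:

```python
import numpy as np

# Sketch (our illustration): the ASR inner product of Eq. (11), computed
# as v^T g u, and its reduction to the SR value when u^1 = u^2.

def inner(v, u, b):
    g = np.diag([1.0, -1.0 + b, -1.0 + b, -1.0])
    g[1, 2] = g[2, 1] = -b
    return v @ g @ u

def eta(v, u):
    return inner(v, u, 0.0)  # Minkowski inner product

b = 0.1
v = np.array([1.0, 0.4, -0.2, 0.3])
u = np.array([2.0, 0.5, 0.5, 1.0])   # u^1 = u^2: reduces to SR
w = np.array([0.0, 1.0, -1.0, 0.0])  # w^1 != w^2: picks up the b-term

assert np.isclose(inner(v, u, b), eta(v, u))
assert np.isclose(inner(v, w, b),
                  eta(v, w) + b * (v[1] - v[2]) * (w[1] - w[2]))
```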
In order to make an anisotropic quantum field theory, one can promote any
Lagrangian by replacing the isotropic inner product of any pair of
covariant/contravariant indices with its anisotropic version. For example, the
Dirac Lagrangian for the fermions can be modified to
$\mathcal{L}_{D}^{ASR}=\mathcal{L}_{D}^{SR}+ib\bar{\psi}(\gamma^{1}-\gamma^{2})(\partial^{1}-\partial^{2})\psi$
(12)
a consequence of which is the extension of the dispersion relation as (10).
Similarly, in the electrodynamics Lagrangian
$\mathcal{L}_{E}^{SR}=-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}$ we just need to lower
the indices using $\mathscr{g}$ i.e.
$\mathcal{L}_{E}^{ASR}=-\frac{1}{4}\mathscr{g}_{\mu\alpha}\mathscr{g}_{\nu\beta}F^{\alpha\beta}F^{\mu\nu}$
resulting in equations of motion as
$(\partial_{\mu}F^{\mu\nu})^{SR}+b\epsilon_{\mu\alpha}\partial^{\alpha}F^{\mu\nu}=0$
(13)
from which the anisotropically modified Maxwell equations can be derived.
The shape of gauge symmetry in ASR remains the same as in SR
$\left\\{\begin{tabular}[]{c}$\psi\,\,\rightarrow\,\,e^{iq\vartheta}\psi$\\\
$\partial_{\mu}\,\,\rightarrow\,\,D_{\mu}=\partial_{\mu}-iqA_{\mu}$\\\
$A_{\mu}\,\,\rightarrow A_{\mu}+\partial_{\mu}\vartheta$\end{tabular}\right.$
(14)
with the difference that, here, all the covariant indices are made from the
contravariant ones by applying the metric $\mathscr{g}$. For instance, the ASR
version of $\partial_{\mu}$ and $A_{\mu}$ are
$\left\\{\begin{tabular}[]{c}${\partial_{\mu}}^{ASR}={\partial_{\mu}}^{SR}+b(0,\partial^{1}-\partial^{2},\partial^{2}-\partial^{1},0)$\\\
${A_{\mu}}^{ASR}={A_{\mu}}^{SR}+b(0,A^{1}-A^{2},A^{2}-A^{1},0)$\end{tabular}\right.$
(15)
## III Conclusion
In this letter, we proposed an anisotropic relativistic theory, ASR, as an
alternative to the special theory of relativity. ASR can be formalised by
replacing the Minkowski metric with a perturbative anisotropic metric in which
the space-time isotropy constraint has been replaced with the preference of a
direction determined by the null vector $n^{\mu}$. On the one hand, the
partial breakdown of Lorentz symmetry is allowed while on the other hand, the
symmetry group $SL(2,C)$ guarantees the irreducible representations of the
theory to be just the same as those of SR. Despite a direction-dependent
modification to the dispersion relation, the QFT built up in ASR remains
local.
As one interesting application of ASR, we point to the redesign of neutrino
oscillation experiments. ASR suggests that the neutrino mass, hence the
neutrino oscillation, is direction-dependent; so, designing a neutrino
oscillation experiment in which the direction of propagation of the neutrinos
can sweep different spatial directions could be of special interest. In
addition, as the transformations of ASR, represented by the $SL(2,C)$ group,
are direction-dependent, we expect most relativistic phenomena, like length
contraction, time dilation, the Doppler effect, and the aberration of light,
to be direction-dependent too. A possible origin for the space-time anisotropy could
be the anisotropic mass/energy distribution in the universe, which can be
reflected in cosmic microwave background. This means that different locations
in the universe could be expected to experience different level or direction
of anisotropy. In such a case, a universal version of ASR can be achieved by
promoting the perturbative parameter $b$ from a constant to a field.
## References
* [1] V. A. Kostelecky and N. Russell, “Data tables for Lorentz and CPT violation,” Reviews of Modern Physics, vol. 83, no. 1, pp. 11–32, 2011.
* [2] D. Mattingly, “Modern tests of Lorentz invariance,” Living Reviews in Relativity, vol. 8, no. 1, p. 5, 2005.
* [3] S. Liberati, “Tests of Lorentz invariance: a 2013 update,” Classical and Quantum Gravity, vol. 30, no. 13, p. 133001, 2013.
* [4] D. A. Kirzhnits and V. A. Chechin, “The ultra-high-energy cosmic rays and a possible generalization of the relativistic theory,” Yadern Fiz, vol. 15, no. 5, pp. 1051–1059, 1972.
* [5] T. G. Pavlopoulos, “Breakdown of Lorentz invariance,” Physical Review, vol. 159, no. 5, pp. 1106–1110, 1967.
* [6] D. Colladay and V. A. Kostelecký, “Lorentz-violating extension of the standard model,” Physical Review D, vol. 58, no. 11, p. 116002, 1998.
* [7] S. Coleman and S. L. Glashow, “High-energy tests of Lorentz invariance,” Physical Review D, vol. 59, no. 11, p. 116008, 1999.
* [8] A. G. Cohen and S. L. Glashow, “Very special relativity,” Physical Review Letters, vol. 97, no. 2, p. 021601, 2006.
* [9] P. Antonelli, R. Ingarden, and M. Matsumoto, The Theory of Sprays and Finsler Spaces with Applications in Physics and Biology. 1993.
* [10] M. R. Douglas and N. A. Nekrasov, “Noncommutative field theory,” Rev. Mod. Phys., vol. 73, pp. 977–1029, Nov 2001.
* [11] L. Zhang and X. Xue, “The Finsler type of space-time realization of deformed very special relativity,” 2012.
* [12] M. M. Sheikh-Jabbari and A. Tureanu, “Realization of Cohen–Glashow very special relativity on noncommutative space-time,” Physical Review Letters, vol. 101, no. 26, p. 261601, 2008.
* [13] E. Komatsu, J. Dunkley, M. R. Nolta, C. L. Bennett, B. Gold, G. Hinshaw, N. Jarosik, D. Larson, M. Limon, L. Page, D. N. Spergel, M. Halpern, R. S. Hill, A. Kogut, S. S. Meyer, G. S. Tucker, J. L. Weiland, E. Wollack, and E. L. Wright, “Five-year Wilkinson Microwave Anisotropy Probe observations: cosmological interpretation,” The Astrophysical Journal Supplement Series, vol. 180, no. 2, pp. 330–376, 2009.
* [14] C. Eisele, A. Y. Nevsky, and S. Schiller, “Laboratory test of the isotropy of light propagation at the ${10}^{-17}$ level,” Physical Review Letters, vol. 103, no. 9, p. 090401, 2009.
* [15] B. Altschul, “Bounding isotropic Lorentz violation using synchrotron losses at LEP,” Physical Review D, vol. 80, no. 9, p. 091901, 2009.
* [16] G. Y. Bogoslovsky, “A special-relativistic theory of the locally anisotropic space-time,” Il Nuovo Cimento B (1971-1996), vol. 40, no. 1, pp. 99–115, 1977.
* [17] G. Y. Bogoslovsky, “A special-relativistic theory of the locally anisotropic space-time,” Il Nuovo Cimento B (1971-1996), vol. 40, no. 1, pp. 116–134, 1977.
* [18] A. Cohen and S. Glashow, “A Lorentz-violating origin of neutrino mass?,” 2006.
* [19] A. E. Bernardini, “Dirac neutrino mass from the beta-decay end point modified by the dynamics of a Lorentz-violating equation of motion,” Physical Review D, vol. 75, no. 9, p. 097901, 2007.
* [20] J. Alfaro, P. González, and R. Ávila, “Electroweak standard model with very special relativity,” Physical Review D, vol. 91, no. 10, p. 105007, 2015\.
# Strong Laws of Large Numbers for Generalizations of Fréchet Mean Sets
Christof Schötz
###### Abstract
A Fréchet mean of a random variable $Y$ with values in a metric space
$(\mathcal{Q},d)$ is an element of the metric space that minimizes
$q\mapsto\mathbb{E}[d(Y,q)^{2}]$. This minimizer may be non-unique. We study
strong laws of large numbers for sets of generalized Fréchet means. The
following generalizations are considered: the minimizers of
$\mathbb{E}[d(Y,q)^{\alpha}]$ for $\alpha>0$, the minimizers of
$\mathbb{E}[H(d(Y,q))]$ for integrals $H$ of non-decreasing functions, and the
minimizers of $\mathbb{E}[\mathfrak{c}(Y,q)]$ for a quite unrestricted class
of cost functions $\mathfrak{c}$. We show convergence of empirical versions of
these sets in outer limit and in one-sided Hausdorff distance. The derived
results require only minimal assumptions.
## 1 Fréchet Mean Sets
For a random variable $Y$ with values in $\mathbb{R}^{s}$ and
$\mathbb{E}[\|Y\|^{2}]<\infty$, it holds
$\mathbb{E}[Y]=\operatorname*{arg\,min}_{q\in\mathbb{R}^{s}}\mathbb{E}[d(Y,q)^{2}]\,,$
where $d(x,y)=\|x-y\|$ is the Euclidean distance. We can also write
$\mathbb{E}[Y]=\operatorname*{arg\,min}_{q\in\mathbb{R}^{s}}\mathbb{E}[d(Y,q)^{2}-d(Y,0)^{2}]\,,$
as we just add a constant term. In the latter equation, we only require $Y$ to
be once integrable, $\mathbb{E}[\|Y\|]<\infty$, instead of twice as
$|\|y-q\|^{2}-\|y\|^{2}|\leq 2\|y\|\|q\|+\|q\|^{2}$.
The concept of Fréchet mean, proposed in [Fréchet, 1948], builds upon this
minimizing property of the Euclidean mean to generalize the expected value to
random variables with values in a metric space. Let $(\mathcal{Q},d)$ be a
metric space. As a shorthand we may write $\overline{q,\\!p}$ instead of
$d(q,p)$. Let $Y$ be a random variable with values in $\mathcal{Q}$. Fix an
arbitrary element $o\in\mathcal{Q}$. The _Fréchet mean set_ of $Y$ is
$M=\operatorname*{arg\,min}_{q\in\mathcal{Q}}\mathbb{E}[\overline{Y,\\!q}^{2}-\overline{Y,\\!o}^{2}]$
assuming the expectations exist. This definition does not depend on $o$. The
reason for subtracting $\overline{Y,\\!o}^{2}$ is the same as in the Euclidean
case: We need to make less moment assumptions to obtain a meaningful value:
The triangle inequality implies
$\left|\overline{y,\\!q}^{2}-\overline{y,\\!o}^{2}\right|\leq\overline{o,\\!q}\left(\overline{o,\\!q}+2\overline{y,\\!o}\right)$
for all $y,q,o\in\mathcal{Q}$. Thus, if
$\mathbb{E}[\overline{Y,\\!o}]<\infty$, then
$\mathbb{E}[\overline{Y,\\!q}^{2}-\overline{Y,\\!o}^{2}]<\infty$ for all
$q\in\mathcal{Q}$.
In Euclidean spaces and other Hadamard spaces (metric spaces with nonpositive
curvature), the Fréchet mean is always unique [Sturm, 2003, Proposition 4.3].
This is not true in general. On the circle, a uniform distribution on two
antipodal points has two Fréchet means. For a deeper analysis of Fréchet means
on the circle, see [Hotz and Huckemann, 2015]. Similarly, Fréchet means on
many positively curved spaces like (hyper-)spheres may not be unique. For the
metric space $\mathcal{Q}=\mathbb{R}$ with $d(q,p)=\sqrt{\left|q-p\right|}$,
the Fréchet mean set is the set of medians, which may also be non-unique.
These examples underline the importance of considering sets of minimizers in a
general theory instead of assuming uniqueness.
The notion of Fréchet mean can be generalized to cases where the cost function
to be minimized is not a squared metric, e.g. [Huckemann, 2011]. We will not
explicitly write down measurability conditions, but silently demand that all
spaces have the necessary measurable structure and all functions are
measurable when necessary. Let $(\mathcal{Q},d)$ be a metric space and
$\mathcal{Y}$ be a set. Let
$\mathfrak{c}\colon\mathcal{Y}\times\mathcal{Q}\to\mathbb{R}$ be a function.
Let $Y$ be a random variable with values in $\mathcal{Y}$. Let
$M:=\operatorname*{arg\,min}_{q\in\mathcal{Q}}\mathbb{E}[\mathfrak{c}(Y,q)]$
assuming the expectations exist. In this context, $\mathfrak{c}$ is called
cost function, $\mathcal{Y}$ is called data space, $\mathcal{Q}$ is called
descriptor space, $q\mapsto\mathbb{E}[\mathfrak{c}(Y,q)]$ is called objective
function (or Fréchet function), and $M$ is called generalized Fréchet mean set
or $\mathfrak{c}$-Fréchet mean set.
This general scenario contains the setting of general M-estimation. It
includes many important statistical frameworks like maximum likelihood
estimation, where $\mathcal{Q}=\Theta$ parameterizes a family of densities
$(f_{\vartheta})_{\vartheta\in\Theta}$ on $\mathcal{Y}=\mathbb{R}^{p}$ and
$\mathfrak{c}(x,\vartheta)=-\log f_{\vartheta}(x)$, or linear regression,
where $\mathcal{Q}=\mathbb{R}^{s+1}$,
$\mathcal{Y}=(\\{1\\}\times\mathbb{R}^{s})\times\mathbb{R}$,
$\mathfrak{c}((x,y),\beta)=(y-\beta\\!^{\top}\\!x)^{2}$. It also includes
nonstandard settings, e.g. [Huckemann, 2011], where geodesics in $\mathcal{Q}$
are fitted to points in $\mathcal{Y}$.
Fix an arbitrary element $o\in\mathcal{Q}$. We will use cost functions
$\mathfrak{c}(y,q)=H(\overline{y,\\!q})-H(\overline{y,\\!o})$, where
$H(x)=\int_{0}^{x}h(t)\mathrm{d}t$ for a non-decreasing function $h$, and
$\mathfrak{c}(y,q)=\overline{y,\\!q}^{\alpha}-\overline{y,\\!o}^{\alpha}$ with
$\alpha>0$. In both cases the set of minimizers does not depend on $o$. We
call the minimizers of the former cost function $H$-Fréchet means. In the
latter case, we call the minimizers power Fréchet means or $\alpha$-Fréchet
means. We can interpret the different exponents $\alpha=2$, $\alpha=1$,
$\alpha\to 0$, $\alpha\to\infty$ as mean, median, mode, and circumcenter (or
mid-range), respectively, see [MacQueen, 1967]. The minimizers for $\alpha=1$
are sometimes called Fréchet median, e.g. [Arnaudon et al., 2013]. If
$\mathcal{Q}$ is a Banach space, then they are called geometric or spatial
median, e.g. [Kemperman, 1987]. $H$-Fréchet means serve as a generalization of
$\alpha$-Fréchet means for $\alpha>1$ as well as an intermediate result for
proving strong laws of large numbers for $\alpha$-Fréchet mean sets with
$\alpha\in(0,1]$.
For a function $f\colon\mathcal{Q}\to\mathbb{R}$ and $\epsilon\geq 0$, define
$\epsilon\text{-}\operatorname*{arg\,min}_{q\in\mathcal{Q}}f(q):=\\{q\in\mathcal{Q}\nonscript\,|\allowbreak\nonscript\,\mathopen{}f(q)\leq\epsilon+\inf_{q\in\mathcal{Q}}f(q)\\}\,.$
Let $Y_{1},\dots,Y_{n}$ be independent random variables with the same
distribution as $Y$. Choose
$(\epsilon_{n})_{n\in\mathbb{N}}\subseteq[0,\infty)$ with
$\epsilon_{n}\xrightarrow{n\to\infty}0$. Let
$M_{n}:=\epsilon_{n}\text{-}\operatorname*{arg\,min}_{q\in\mathcal{Q}}\frac{1}{n}\sum_{i=1}^{n}\mathfrak{c}(Y_{i},q)$.
Our goal is to show almost sure convergence of elements in $M_{n}$ to elements
in $M$.
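For concreteness, the following minimal sketch (our illustration; the circle example, the noise level, the grid resolution, and the choice of $\epsilon_{n}$ are ours, not prescribed by the text) approximates $M_{n}$ for $\alpha=2$ on the circle with arc-length distance, where the population Fréchet mean set has two elements (cf. the antipodal example above):

```python
import numpy as np

# Sketch (our illustration): the empirical epsilon-argmin set M_n for the
# Frechet functional q -> (1/n) sum_i d(Y_i, q)^2 on the circle S^1,
# approximated on a finite grid.

rng = np.random.default_rng(0)

def arc_dist(s, t):
    """Arc-length distance on the circle parameterized by [0, 2*pi)."""
    d = np.abs(s - t) % (2 * np.pi)
    return np.minimum(d, 2 * np.pi - d)

def empirical_mean_set(sample, alpha=2.0, eps=1e-3, grid_size=2000):
    """Grid approximation of the eps-argmin of q -> mean(d(Y_i, q)^alpha)."""
    grid = np.linspace(0.0, 2 * np.pi, grid_size, endpoint=False)
    obj = np.array([np.mean(arc_dist(sample, q) ** alpha) for q in grid])
    return grid[obj <= obj.min() + eps]

# Y is (almost) a uniform mixture on the two antipodal points 0 and pi,
# whose Frechet mean set consists of the two points pi/2 and 3*pi/2.
n = 500
centers = rng.choice([0.0, np.pi], size=n)
sample = (centers + 0.05 * rng.standard_normal(n)) % (2 * np.pi)

# Depending on sampling noise, M_n clusters near one or both of the two
# population Frechet means.
print(empirical_mean_set(sample))
```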
###### Remark 1.1.
Considering sets of elements that minimize the objective only up to
$\epsilon_{n}$ makes the results more relevant to applications in which
Fréchet mean sets are approximated numerically. Furthermore, it may allow us
to find more elements of $M$ in the limit of $M_{n}$ than for
$\epsilon_{n}=0$, as discussed in 2.5 below and appendix A.
There are different ways in which convergence of the sets $M_{n}$ to a set $M$
can be described.
###### Definition 1.2.
Let $(\mathcal{Q},d)$ be a metric space.
1. (i)
Let $(B_{n})_{n\in\mathbb{N}}$ with $B_{n}\subseteq\mathcal{Q}$ for all
$n\in\mathbb{N}$. Then the _outer limit_ of $(B_{n})_{n\in\mathbb{N}}$ is
$\operatorname*{lim\,\overline{sup}}_{n\to\infty}B_{n}:=\bigcap_{n\in\mathbb{N}}\overline{\bigcup_{k\geq
n}B_{k}}\,,$
where $\overline{B}$ denotes the closure of the set $B$.
2. (ii)
The _one-sided Hausdorff distance_ between $B,B^{\prime}\subseteq\mathcal{Q}$
is
$d_{\subseteq}(B,B^{\prime}):=\sup_{x\in B}\inf_{x^{\prime}\in
B^{\prime}}d(x,x^{\prime})\,.$
3. (iii)
The _Hausdorff distance_ between $B,B^{\prime}\subseteq\mathcal{Q}$ is
$d_{\mathsf{H}}(B,B^{\prime}):=\max(d_{\subseteq}(B,B^{\prime}),d_{\subseteq}(B^{\prime},B))\,.$
###### Remark 1.3.
1. (i)
The outer limit is the set of all points of accumulation of all sequences
$(x_{n})_{n\in\mathbb{N}}$ with $x_{n}\in B_{n}$. We may write
$\operatorname*{lim\,\overline{sup}}_{n\to\infty}B_{n}=\\{q\in\mathcal{Q}\nonscript\,|\allowbreak\nonscript\,\mathopen{}\liminf_{n\to\infty}d(B_{n},q)=0\\}$,
where $d(B,q):=\inf_{p\in B}d(p,q)$ for a subset $B\subseteq\mathcal{Q}$. The
inner limit is dual to the outer limit. It is defined as
$\operatorname*{innerlim}_{n\to\infty}B_{n}:=\\{q\in\mathcal{Q}\nonscript\,|\allowbreak\nonscript\,\mathopen{}\limsup_{n\to\infty}d(B_{n},q)=0\\}$.
Clearly,
$\operatorname*{innerlim}_{n\to\infty}B_{n}\subseteq\operatorname*{lim\,\overline{sup}}_{n\to\infty}B_{n}$.
Thus, results of the form
$\operatorname*{lim\,\overline{sup}}_{n\to\infty}B_{n}\subseteq B$, which we
show below, are stronger than
$\operatorname*{innerlim}_{n\to\infty}B_{n}\subseteq B$.
2. (ii)
It holds $d_{\subseteq}(B,B^{\prime})=0$ if and only if
$B\subseteq\overline{B^{\prime}}$, but $d_{\mathsf{H}}(B,B^{\prime})=0$ if and
only if $\overline{B}=\overline{B}^{\prime}$. The function $d_{\mathsf{H}}$ is
a metric on the set of closed and bounded subsets of $\mathcal{Q}$.
3. (iii)
Elements from a sequence of sets might have sub-sequences that have no point
of accumulation and are bounded away from the outer limit of the sequence of
sets. That cannot happen with the one-sided Hausdorff limit. Here, every sub-
sequence is eventually arbitrarily close to the limiting set. As an example,
the outer limit of the sequence of sets $\\{0,n\\}$, $n\in\mathbb{N}$ on the
Euclidean real line is $\\{0\\}$, but
$d_{\subseteq}(\\{0,n\\},\\{0\\})\xrightarrow{n\to\infty}\infty$. Aside from
an element with diverging distance ($n$ in the example), another cause for the
two limits to not align may be non-compactness of bounded sets: Consider the
space $\ell^{2}$ of all sequences
$(x_{k})_{k\in\mathbb{N}}\subseteq\mathbb{R}$ with
$\sum_{k=1}^{\infty}x_{k}^{2}<\infty$ with distance
$d((x_{k})_{k\in\mathbb{N}},(y_{k})_{k\in\mathbb{N}})=(\sum_{k=1}^{\infty}(x_{k}-y_{k})^{2})^{\frac{1}{2}}$.
Let $\underline{0}\in\ell^{2}$ be the sequence with all entries equal to 0.
Let $e^{n}:=(e^{n}_{k})_{k\in\mathbb{N}}\in\ell^{2}$ with $e_{n}^{n}=1$ and
$e^{n}_{k}=0$ for all $k\neq n$. Then
$d_{\subseteq}(\\{\underline{0},e^{n}\\},\\{\underline{0}\\})=1$ for all
$n\in\mathbb{N}$, but
$\operatorname*{lim\,\overline{sup}}_{n\to\infty}\\{\underline{0},e^{n}\\}=\\{\underline{0}\\}$
(see also the sketch following this remark).
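A toy computation (our illustration, not part of the original text) of the one-sided Hausdorff distance for finite subsets of the Euclidean line reproduces the behaviour of the sets $\\{0,n\\}$ above:

```python
import numpy as np

# Sketch (our illustration): one-sided Hausdorff distance between finite
# subsets of the Euclidean line; for B_n = {0, n} the outer limit is {0},
# yet d_sub(B_n, {0}) = n diverges.

def d_sub(B, Bp):
    """sup_{x in B} inf_{x' in Bp} |x - x'| for finite sets."""
    B, Bp = np.asarray(B, float), np.asarray(Bp, float)
    return max(np.min(np.abs(Bp - x)) for x in B)

for n in (1, 10, 100):
    print(n, d_sub([0, n], [0]))  # prints n: grows without bound
```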
We will state conditions so that
$\operatorname*{lim\,\overline{sup}}_{n\to\infty}M_{n}\subseteq M$ almost
surely or $d_{\subseteq}(M_{n},M)\xrightarrow{n\to\infty}_{\mathsf{a.s.}}0$,
where the index $\mathsf{a.s.}$ indicates almost sure convergence. It is not
easily possible to show
$d_{\mathsf{H}}(M_{n},M)\xrightarrow{n\to\infty}_{\mathsf{a.s.}}0$ if $M$ is
not a singleton, as discussed in 2.5 below and appendix A. These limit
theorems may be called strong laws of large numbers of the Fréchet mean set or
(strong) consistency of the empirical Fréchet mean set. Notably, in [Evans and
Jaffe, 2020] the connection to convergence in the sense of topology is made:
If the set of closed subsets of $\mathcal{Q}$ is equipped with the Kuratowski
upper topology, a sequence of closed subsets $(B_{n})_{n\in\mathbb{N}}$
converges to a closed subset $B$ if and only if
$\operatorname*{lim\,\overline{sup}}_{n\to\infty}B_{n}\subseteq B$. If the set
of nonempty compact subsets of $\mathcal{Q}$ is equipped with the Hausdorff
upper topology, a sequence of nonempty compact subsets
$(B_{n})_{n\in\mathbb{N}}$ converges to a nonempty compact subset $B$ if and
only if $d_{\subseteq}(B_{n},B)\xrightarrow{n\to\infty}0$.
[Ziezold, 1977] shows a strong law in outer limit for Fréchet mean sets with a
second moment condition. [Sverdrup-Thygeson, 1981] shows a strong law in outer
limit for power Fréchet mean sets in compact spaces. [Bhattacharya and
Patrangenaru, 2003] shows almost sure convergence of Fréchet mean sets in
one-sided Hausdorff distance with a second moment condition. The independent
parallel work [Evans and Jaffe, 2020] shows strong laws in outer limit and
one-sided Hausdorff distance for $\alpha$-Fréchet mean sets requiring
$\mathbb{E}[\overline{Y,\\!o}^{\alpha}]<\infty$, which is a second moment
condition for the Fréchet mean. In contrast, we show strong laws of large
numbers for power Fréchet mean sets in outer limit and in one-sided Hausdorff
distance with less moment assumptions: For power $\alpha>1$, we require
$\mathbb{E}[\overline{Y,\\!o}^{\alpha-1}]<\infty$, and for $\alpha\in(0,1]$ no
moment assumption is made, see 5.1 and 5.2. Thus, $\alpha$-Fréchet means may
be of interest in robust statistics. [Huckemann, 2011] shows almost sure
convergence in one-sided Hausdorff distance as well as in outer limit for
generalized Fréchet means. Our results for $\mathfrak{c}$-Fréchet means
require slightly less strict assumptions, see Theorem 3.2 and Theorem 3.5,
which make them applicable in a larger class of settings and allows us to
derive our results for $H$\- and $\alpha$-Fréchet means with minimal moment
assumptions. Results in [Artstein and Wets, 1995, Korf and Wets, 2001, Choirat
et al., 2003] imply strong laws and ergodic theorems in outer limit for
generalized Fréchet means. We restate parts of these results to state Theorem
3.2. Furthermore, we show strong laws of large numbers for $H$-Fréchet means
sets in outer limit, 4.3, and one-sided Hausdorff distance, 4.4. When $M$ is
a singleton, a quantitative version (rates of convergence) of the results
presented in this article is given in [Schötz, 2019].
Before we consider the probabilistic setting, we present theory on convergence
of minimizing sets for deterministic functions in section 2, where we
partially follow [Rockafellar and Wets, 1998]. Thereafter, we derive strong
laws of large numbers for $\mathfrak{c}$-Fréchet mean sets in section 3, for
$H$-Fréchet mean sets in section 4, and for $\alpha$-Fréchet mean sets in
section 5. Appendix A uses the median as a simple example to illustrate some
peculiarities when dealing with sets of Fréchet means. All strong laws in the
main part of this article build upon [Korf and Wets, 2001, Theorem 1.1] – a
deep convergence result for functions $\mathcal{Q}\to\mathbb{R}$. In appendix
B, we show a different route to a strong law in one-sided Hausdorff distance.
This is illustrative, but requires slightly stricter assumptions. In appendix
C some auxiliary results are stated and proven.
## 2 Convergence of Minimizer Sets of Deterministic Functions
Let $(\mathcal{Q},d)$ be a metric space. The diameter of a set
$B\subseteq\mathcal{Q}$ is defined as
$\operatorname{\mathsf{diam}}(B)=\sup_{q,p\in B}d(q,p)$. The following notion
of convergence of functions will be useful to infer convergence results for
their minimizers.
###### Definition 2.1.
Let $f,f_{n}\colon\mathcal{Q}\to\mathbb{R}$, $n\in\mathbb{N}$. The sequence
$(f_{n})_{n\in\mathbb{N}}$ _epi-converges_ to $f$ _at_ $x\in\mathcal{Q}$ if
and only if
$\forall(x_{n})_{n\in\mathbb{N}}\subseteq\mathcal{Q},\,x_{n}\to x\colon\quad\liminf_{n\to\infty}f_{n}(x_{n})\geq f(x)\qquad\text{and}$
$\exists(y_{n})_{n\in\mathbb{N}}\subseteq\mathcal{Q},\,y_{n}\to x\colon\quad\limsup_{n\to\infty}f_{n}(y_{n})\leq f(x)\,.$
The sequence $(f_{n})_{n\in\mathbb{N}}$ _epi-converges_ to $f$ if and only if
it epi-converges at all $x\in\mathcal{Q}$. We then write
$f_{n}\xrightarrow{n\to\infty}_{\mathsf{epi}}f$.
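For illustration, consider $\mathcal{Q}=\mathbb{R}$ and
$f_{n}=-\mathds{1}_{[\frac{1}{n},\frac{2}{n}]}$. Pointwise, $f_{n}$ converges
to the zero function: $f_{n}(0)=0$ for all $n$, and every fixed $x\neq 0$ lies
outside $[\frac{1}{n},\frac{2}{n}]$ eventually. But
$(f_{n})_{n\in\mathbb{N}}$ epi-converges to $f=-\mathds{1}_{\\{0\\}}$: at
$x=0$, the sequence $y_{n}=\frac{1}{n}\to 0$ satisfies $f_{n}(y_{n})=-1$, so
it serves as a recovery sequence for $f(0)=-1$, and
$\liminf_{n\to\infty}f_{n}(x_{n})\geq-1$ holds trivially for every $x_{n}\to
0$. Consistently with Theorem 2.3 below,
$\operatorname*{arg\,min}f_{n}=[\frac{1}{n},\frac{2}{n}]$ has outer limit
$\\{0\\}=\operatorname*{arg\,min}f$, whereas the pointwise limit would
wrongly suggest that every point is a minimizer.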
We introduce some short notation. Let $f\colon\mathcal{Q}\to\mathbb{R}$ and
$\epsilon\geq 0$. Denote $\inf f=\inf_{x\in\mathcal{Q}}f(x)$,
$\operatorname*{arg\,min}f=\\{x\in\mathcal{Q}\mid f(x)=\inf f\\}$, and
$\epsilon\text{-}\operatorname*{arg\,min}f=\\{x\in\mathcal{Q}\mid f(x)\leq\epsilon+\inf f\\}$.
Let $\delta>0$ and $x_{0}\in\mathcal{Q}$. Denote
$\operatorname{\mathrm{B}}_{\delta}(x_{0})=\\{x\in\mathcal{Q}\mid d(x,x_{0})<\delta\\}$.
Furthermore, $f$ is called _lower semi-continuous_ if and only if
$\liminf_{x\to x_{0}}f(x)\geq f(x_{0})$ for all $x_{0}\in\mathcal{Q}$.
To state convergence results for minimizing sets of deterministic functions,
we need one final definition.
###### Definition 2.2.
A sequence $(B_{n})_{n\in\mathbb{N}}$ of sets $B_{n}\subseteq\mathcal{Q}$ is
called _eventually precompact_ if and only if there is $n\in\mathbb{N}$ such
that the set $\bigcup_{k=n}^{\infty}B_{k}$ is precompact, i.e. its closure is
compact.
The first theorem of this section relates epi-convergence of functions to
convergence of their sets of minimizers in outer limit.
###### Theorem 2.3.
Let $f,f_{n}\colon\mathcal{Q}\to\mathbb{R}$. Let
$(\epsilon_{n})_{n\in\mathbb{N}}\subseteq[0,\infty)$ with
$\epsilon_{n}\xrightarrow{n\to\infty}0$. Assume
$f_{n}\xrightarrow{n\to\infty}_{\mathsf{epi}}f$. Then
$\operatorname*{lim\,\overline{sup}}_{n\to\infty}\,\epsilon_{n}\text{-}\operatorname*{arg\,min}f_{n}\subseteq\operatorname*{arg\,min}f$
and
$\limsup_{n\to\infty}\inf f_{n}\leq\inf f\,.$
Large parts of this theorem can be found, e.g., in [Rockafellar and Wets, 1998,
chapter 7]. To make this article more self-contained, we give a proof here.
###### Proof.
Let
$x\in\operatorname*{lim\,\overline{sup}}_{n\to\infty}\,\epsilon_{n}\text{-}\operatorname*{arg\,min}f_{n}$.
Then there is a sequence
$x_{n}\in\epsilon_{n}\text{-}\operatorname*{arg\,min}f_{n}$ with a subsequence
converging to $x$, i.e., $x_{n_{i}}\xrightarrow{i\to\infty}x$, where
$n_{i}\xrightarrow{i\to\infty}\infty$. Let $y\in\mathcal{Q}$ be arbitrary. As
$f_{n}\xrightarrow{n\to\infty}_{\mathsf{epi}}f$, there is a sequence
$(y_{n})_{n\in\mathbb{N}}\subseteq\mathcal{Q}$ with
$y_{n}\xrightarrow{n\to\infty}y$ and $\limsup_{n\to\infty}f_{n}(y_{n})\leq
f(y)$. It holds $f_{n_{i}}(x_{n_{i}})\leq\epsilon_{n_{i}}+\inf
f_{n_{i}}\leq\epsilon_{n_{i}}+f_{n_{i}}(y_{n_{i}})$. Thus, by the definition
of epi-convergence and $\epsilon_{n}\xrightarrow{n\to\infty}0$, we obtain
$f(x)\leq\liminf_{i\to\infty}f_{n_{i}}(x_{n_{i}})\leq\liminf_{i\to\infty}\left(\epsilon_{n_{i}}+f_{n_{i}}(y_{n_{i}})\right)\leq\limsup_{i\to\infty}f_{n_{i}}(y_{n_{i}})\leq
f(y)\,.$
Thus, $x\in\operatorname*{arg\,min}f$. Next, we turn to the inequality of the
infima. For $\epsilon>0$ choose an arbitrary
$x\in\epsilon\text{-}\operatorname*{arg\,min}f$. There is a sequence
$(y_{n})_{n\in\mathbb{N}}\subseteq\mathcal{Q}$ with
$y_{n}\xrightarrow{n\to\infty}x$ and
$f_{n}(y_{n})\xrightarrow{n\to\infty}f(x)$. Thus,
$\limsup_{n\to\infty}\inf f_{n}\leq\limsup_{n\to\infty}f_{n}(y_{n})\leq\inf
f+\epsilon\,.$
∎
It is illustrative to compare this result with Theorem B.2, which shows that a
stronger notion of convergence for functions – convergence uniformly on
bounded sets – yields convergence of sets of minimizers in one-sided Hausdorff
distance, which is a stronger notion of convergence of sets as the next
theorem shows.
###### Theorem 2.4.
Let $(B_{n})_{n\in\mathbb{N}}$ with $B_{n}\subseteq\mathcal{Q}$ for all
$n\in\mathbb{N}$. Let $B\subseteq\mathcal{Q}$.
1. (i)
If $d_{\subseteq}(B_{n},B)\xrightarrow{n\to\infty}0$ then
$\operatorname*{lim\,\overline{sup}}_{n\to\infty}\,B_{n}\subseteq\overline{B}$.
2. (ii)
Assume $(B_{n})_{n\in\mathbb{N}}$ is eventually precompact. If
$\operatorname*{lim\,\overline{sup}}_{n\to\infty}\,B_{n}\subseteq\overline{B}$
then $d_{\subseteq}(B_{n},B)\xrightarrow{n\to\infty}0$.
###### Proof.
1. (i)
Assume $d_{\subseteq}(B_{n},B)\xrightarrow{n\to\infty}0$. Let
$x_{\infty}\in\operatorname*{lim\,\overline{sup}}_{n\to\infty}\,B_{n}$, i.e.,
there is a sequence $(x_{n_{k}})_{k\in\mathbb{N}}\subseteq\mathcal{Q}$ with
$n_{1}<n_{2}<\dots$ and $x_{n_{k}}\in B_{n_{k}}$ such that
$x_{n_{k}}\xrightarrow{k\to\infty}x_{\infty}$. Thus,
$\inf_{x\in B}d(x_{\infty},x)\leq d(x_{\infty},x_{n_{k}})+\inf_{x\in
B}d(x_{n_{k}},x)\xrightarrow{k\to\infty}0\,.$
This shows $\inf_{x\in B}d(x_{\infty},x)=0$. Hence,
$\operatorname*{lim\,\overline{sup}}_{n\to\infty}\,B_{n}\subseteq\\{x_{\infty}\in\mathcal{Q}\mid\inf_{x\in B}d(x_{\infty},x)=0\\}=\overline{B}\,.$
2. (ii)
Assume
$\operatorname*{lim\,\overline{sup}}_{n\to\infty}\,B_{n}\subseteq\overline{B}$.
Further assume the existence of $\epsilon>0$ and a sequence
$(x_{n_{k}})_{k\in\mathbb{N}}\subseteq\mathcal{Q}$ with $n_{1}<n_{2}<\dots$
and $x_{n_{k}}\in B_{n_{k}}$ such that $\inf_{x\in
B}d(x_{n_{k}},x)\geq\epsilon$. As $(B_{n_{k}})_{k\in\mathbb{N}}$ is eventually
precompact, the sequence $(x_{n_{k}})_{k\in\mathbb{N}}$ has an accumulation
point $x_{\infty}$ in $\overline{\bigcup_{k\geq k_{0}}B_{n_{k}}}$ for some
$k_{0}\in\mathbb{N}$ with $\inf_{x\in B}d(x_{\infty},x)\geq\epsilon$. In
particular, $x_{\infty}\not\in\overline{B}$, although
$x_{\infty}\in\operatorname*{lim\,\overline{sup}}_{n\to\infty}\,B_{n}$, which
contradicts the assumption
$\operatorname*{lim\,\overline{sup}}_{n\to\infty}\,B_{n}\subseteq\overline{B}$.
Thus, a sequence $(x_{n_{k}})_{k\in\mathbb{N}}$ with
these properties cannot exist, which implies
$d_{\subseteq}(B_{n},B)\xrightarrow{n\to\infty}0$.
∎
Note that the argument for the second part is essentially the same as in
[Huckemann, 2011, proof of Theorem A.4].
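The eventual precompactness in part (ii) cannot be dropped. For a worked
example, take $\mathcal{Q}=\mathbb{R}$ with the Euclidean metric,
$B_{n}=\\{0,n\\}$, and $B=\\{0\\}$: then
$\operatorname*{lim\,\overline{sup}}_{n\to\infty}\,B_{n}=\\{0\\}\subseteq\overline{B}$,
as no subsequence of $(n)_{n\in\mathbb{N}}$ converges, but
$d_{\subseteq}(B_{n},B)=n\xrightarrow{n\to\infty}\infty$; compare Example B.3
(ii) in the appendix.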
###### Remark 2.5.
Together Theorem 2.3 and Theorem 2.4 may yield convergence of minimizers in
one-sided Hausdorff distance. But even if
$d_{\subseteq}(\epsilon_{n}\text{-}\operatorname*{arg\,min}f_{n},\operatorname*{arg\,min}f)\xrightarrow{n\to\infty}0$,
$d_{\mathsf{H}}(\epsilon_{n}\text{-}\operatorname*{arg\,min}f_{n},\operatorname*{arg\,min}f)$
does not necessarily vanish unless $\operatorname*{arg\,min}f$ is a singleton.
Similarly, for an arbitrary sequence $\epsilon_{n}\xrightarrow{n\to\infty}0$,
the outer limit of $\epsilon_{n}\text{-}\operatorname*{arg\,min}f_{n}$ may be
a strict subset of $\operatorname*{arg\,min}f$. But according to [Rockafellar
and Wets, 1998, Theorem 7.31 (c)], there exists a sequence
$(\epsilon_{n})_{n\in\mathbb{N}}$ with $\epsilon_{n}\xrightarrow{n\to\infty}0$
slow enough such that
$\operatorname*{lim\,\overline{sup}}_{n\to\infty}\epsilon_{n}\text{-}\operatorname*{arg\,min}f_{n}=\operatorname*{arg\,min}f$.
An explicit example of this phenomenon is presented in appendix A.
## 3 Strong Laws for $\mathfrak{c}$-Fréchet Mean Sets
Let $(\mathcal{Q},d)$ be a metric space, the descriptor space. Let
$\mathcal{Y}$ be a set, the data space. Let
$\mathfrak{c}\colon\mathcal{Y}\times\mathcal{Q}\to\mathbb{R}$ be a function,
the cost function. Let $(\Omega,\Sigma,\mathbb{P})$ be a probability space
that is silently underlying all random variables in this section. Let $Y$ be a
random variable with values in $\mathcal{Y}$. Denote the
$\mathfrak{c}$-Fréchet mean set of $Y$ as
$M=\operatorname*{arg\,min}_{q\in\mathcal{Q}}\mathbb{E}[\mathfrak{c}(Y,q)]$.
Let $Y_{1},\dots,Y_{n}$ be independent random variables with the same
distribution as $Y$. Choose
$(\epsilon_{n})_{n\in\mathbb{N}}\subseteq[0,\infty)$ with
$\epsilon_{n}\xrightarrow{n\to\infty}0$. Set
$M_{n}=\epsilon_{n}\text{-}\operatorname*{arg\,min}_{q\in\mathcal{Q}}\frac{1}{n}\sum_{i=1}^{n}\mathfrak{c}(Y_{i},q)$.
###### Assumptions 3.1.
* •
Polish: $(\mathcal{Q},d)$ is separable and complete.
* •
LowerSemiContinuity: $q\mapsto\mathfrak{c}(y,q)$ is lower semi-continuous.
* •
Integrable: $\mathbb{E}[\left|\mathfrak{c}(Y,q)\right|]<\infty$ for all
$q\in\mathcal{Q}$.
* •
IntegrableInf: $\mathbb{E}[\inf_{q\in\mathcal{Q}}\mathfrak{c}(Y,q)]>-\infty$.
###### Theorem 3.2.
Assume Polish, LowerSemiContinuity, Integrable, and IntegrableInf. Then,
almost surely,
$\operatorname*{lim\,\overline{sup}}_{n\to\infty}\,M_{n}\subseteq M\,.$
###### Proof.
Define $F(q)=\mathbb{E}[\mathfrak{c}(Y,q)]$,
$F_{n}(q)=\frac{1}{n}\sum_{i=1}^{n}\mathfrak{c}(Y_{i},q)$. By Integrable,
$F(q)<\infty$. [Korf and Wets, 2001, Theorem 1.1] states that
$F_{n}\xrightarrow{n\to\infty}_{\mathsf{epi}}F$ almost surely if Polish,
LowerSemiContinuity, and IntegrableInf are true. Theorem 2.3 then implies
$\operatorname*{lim\,\overline{sup}}_{n\to\infty}\,M_{n}\subseteq M$ almost
surely. ∎
###### Assumptions 3.3.
* •
HeineBorel: Every closed bounded set in $\mathcal{Q}$ is compact.
* •
SampleHeineBorel: Almost surely, the following is true: There is
$N_{0}\in\mathbb{N}$ such that every closed and bounded subset of
$\bigcup_{n\geq N_{0}}M_{n}$ is compact.
* •
UpperBound: $\mathbb{E}[\sup_{q\in B}\left|\mathfrak{c}(Y,q)\right|]<\infty$
for all bounded sets $B\subseteq\mathcal{Q}$.
* •
LowerBound: There are $o\in\mathcal{Q}$,
$\psi^{+},\psi^{-}\colon[0,\infty)\to[0,\infty)$,
$\mathfrak{a}^{+},\mathfrak{a}^{-}\in(0,\infty)$, and
$\sigma(Y_{1},\dots,Y_{n})$-measurable random variables
$\mathfrak{a}^{+}_{n},\mathfrak{a}^{-}_{n}\in[0,\infty)$ such that
$\mathfrak{a}^{+}\psi^{+}(\overline{q,\\!o})-\mathfrak{a}^{-}\psi^{-}(\overline{q,\\!o})\leq\mathbb{E}[\mathfrak{c}(Y,q)]\,,\qquad\mathfrak{a}^{+}_{n}\psi^{+}(\overline{q,\\!o})-\mathfrak{a}^{-}_{n}\psi^{-}(\overline{q,\\!o})\leq\frac{1}{n}\sum_{i=1}^{n}\mathfrak{c}(Y_{i},q)$
for all $q\in\mathcal{Q}$. Furthermore,
$\mathfrak{a}^{+}_{n}\xrightarrow{n\to\infty}_{\mathsf{a.s.}}\mathfrak{a}^{+}$
and
$\mathfrak{a}^{-}_{n}\xrightarrow{n\to\infty}_{\mathsf{a.s.}}\mathfrak{a}^{-}$.
Lastly,
$\psi^{+}(\delta)/\psi^{-}(\delta)\xrightarrow{\delta\to\infty}\infty$.
###### Remark 3.4.
* •
The following implications hold: HeineBorel $\Rightarrow$ Polish, HeineBorel
$\Rightarrow$ SampleHeineBorel, and UpperBound $\Rightarrow$ Integrable.
* •
On HeineBorel: A space enjoying this property is also called boundedly compact
or proper metric space. The Euclidean spaces $\mathbb{R}^{s}$, finite
dimensional Riemannian manifolds, as well as $\mathcal{C}^{\infty}(U)$ for
open subsets $U\subseteq\mathbb{R}^{s}$ fulfill Heine–Borel [Edwards, 1995,
section 8.4.7]. See [Williamson and Janos, 1987] for a construction of further
spaces where Heine–Borel is true.
* •
On SampleHeineBorel and infinite dimension: If $M_{n}=\\{m_{n}\\}$ and
$M=\\{m\\}$ are singleton sets and $m_{n}\xrightarrow{n\to\infty}m$ almost
surely, then SampleHeineBorel holds. It is less strict than HeineBorel: In
separable Hilbert spaces of infinite dimension HeineBorel does not hold. But
with the metric $d$ induced by the inner product and $\mathfrak{c}=d^{2}$,
$\mathfrak{c}$-Fréchet means are unique and equal to the usual notion of mean.
Furthermore, strong laws of large numbers in Hilbert spaces are well-known,
see e.g. [Kawabe, 1986]. Thus, SampleHeineBorel is true. Let it be noted that
proving SampleHeineBorel in a space where HeineBorel is false may be of
similar difficulty as showing convergence of Fréchet means directly. In the
case of infinite dimensional Banach spaces, results on strong laws of large
numbers for a different notion of mean – the Bochner integral – are well
established, see e.g. [Hoffmann-Jørgensen and Pisier, 1976].
* •
On LowerBound: We illustrate this condition in the linear regression setting
with $\mathcal{Q}=\mathbb{R}^{s+1}$,
$\mathcal{Y}=(\\{1\\}\times\mathbb{R}^{s})\times\mathbb{R}$,
$\mathfrak{c}((x,y),\beta)=(y-\beta\\!^{\top}\\!x)^{2}-y^{2}=-2\beta\\!^{\top}\\!xy+\beta\\!^{\top}\\!xx\\!^{\top}\\!\beta$.
Let $(X,Y)$ be random variables with values in $\mathcal{Y}$. Let
$(X_{1},Y_{1}),\dots,(X_{n},Y_{n})$ be independent with the same distribution
as $(X,Y)$. We can set $o=0\in\mathbb{R}^{s+1}$,
$\mathfrak{a}^{+}=\lambda_{\mathsf{min}}(\mathbb{E}[XX\\!^{\top}\\!])$, where
$\lambda_{\mathsf{min}}$ denotes the smallest eigenvalue,
$\mathfrak{a}^{-}=2\|\mathbb{E}[XY]\|$,
$\mathfrak{a}^{+}_{n}=\lambda_{\mathsf{min}}(\frac{1}{n}\sum_{i=1}^{n}X_{i}X_{i}\\!^{\top}\\!)$,
$\mathfrak{a}^{-}_{n}=2\|\frac{1}{n}\sum_{i=1}^{n}X_{i}Y_{i}\|$,
$\psi^{+}(\delta)=\delta^{2}$ and $\psi^{-}(\delta)=\delta$. If
$\lambda_{\mathsf{min}}(\mathbb{E}[XX\\!^{\top}\\!])>0$, the largest
eigenvalue $\lambda_{\mathsf{max}}(\mathbb{E}[XX\\!^{\top}\\!])<\infty$, and
$\mathbb{E}[\|XY\|]<\infty$, all conditions are fulfilled.
For a further application of LowerBound, see the proof of 4.4 in the next
section; a small numerical sketch of the regression quantities above is given
below.
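The sketch simulates hypothetical regression data (the data-generating
distribution, sample size, and seed are choices made only for this check, not
prescribed above) and verifies the empirical LowerBound inequality for random
$\beta$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, s = 1000, 3
X = np.hstack([np.ones((n, 1)), rng.normal(size=(n, s))])  # rows x_i = (1, ...)
Yv = X @ rng.normal(size=s + 1) + rng.normal(size=n)       # responses y_i

# Empirical quantities from the illustration above (with o = 0).
a_plus_n = np.linalg.eigvalsh(X.T @ X / n).min()   # lambda_min of (1/n) sum x_i x_i^T
a_minus_n = 2 * np.linalg.norm(X.T @ Yv / n)       # 2 * ||(1/n) sum x_i y_i||

def F_n(beta):
    # (1/n) sum of c((x_i, y_i), beta) = (y_i - beta^T x_i)^2 - y_i^2
    return np.mean((Yv - X @ beta) ** 2 - Yv ** 2)

for _ in range(5):
    beta = 10.0 * rng.normal(size=s + 1)
    r = np.linalg.norm(beta)
    assert a_plus_n * r ** 2 - a_minus_n * r <= F_n(beta) + 1e-9
print("empirical LowerBound inequality verified")
```

The inequality holds here by Cauchy–Schwarz and the quadratic-form bound
$\beta\\!^{\top}\\!(\frac{1}{n}\sum_{i}X_{i}X_{i}\\!^{\top}\\!)\beta\geq\mathfrak{a}^{+}_{n}\|\beta\|^{2}$.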
###### Theorem 3.5.
Assume Polish, LowerSemiContinuity, IntegrableInf, SampleHeineBorel,
UpperBound, and LowerBound. Then
$d_{\subseteq}(M_{n},M)\xrightarrow{n\to\infty}_{\mathsf{a.s.}}0\,.$
###### Proof.
The proof consists of the following steps:
1. 1.
Apply Theorem 3.2.
2. 2.
Reduction to a bounded set.
3. 3.
Show that $M_{n}$ is eventually precompact almost surely.
4. 4.
Apply Theorem 2.4.
Step 1. Polish, LowerSemiContinuity, and IntegrableInf are assumptions.
UpperBound implies Integrable. Thus, Theorem 3.2 yields
$\operatorname*{lim\,\overline{sup}}_{n\to\infty}\,M_{n}\subseteq M$ almost
surely.
Step 2. Define $F(q)=\mathbb{E}[\mathfrak{c}(Y,q)]$,
$F_{n}(q)=\frac{1}{n}\sum_{i=1}^{n}\mathfrak{c}(Y_{i},q)$. We want to show
that there is a bounded set $B_{1}\subseteq\mathcal{Q}$ such that $F(q)\geq
F(m)+1$ and $F_{n}(q)\geq F_{n}(m)+1$ for all $q\in\mathcal{Q}\setminus B_{1}$
and $m\in M$. If $\mathcal{Q}$ is bounded, we can take $B_{1}=\mathcal{Q}$.
Assume $\mathcal{Q}$ is not bounded.
Let $m\in M$. By UpperBound, $F(m)<\infty$. Let $o\in\mathcal{Q}$ from
LowerBound. Due to LowerBound,
$F(q)\geq\mathfrak{a}^{+}\psi^{+}(\delta)-\mathfrak{a}^{-}\psi^{-}(\delta)\geq
F(m)+2$ for all
$q\in\mathcal{Q}\setminus\operatorname{\mathrm{B}}_{\delta}(o)$ and $\delta$
large enough. This holds for all $m\in M$ as $F(m)$ does not change with $m$.
We set $B_{1}=\operatorname{\mathrm{B}}_{\delta}(o)$. For $F_{n}$, it holds
$F_{n}(m)\xrightarrow{n\to\infty}_{\mathsf{a.s.}}F(m)$ and
$\inf_{q\in\mathcal{Q}\setminus
B_{1}}F_{n}(q)\geq\mathfrak{a}^{+}_{n}\psi^{+}(\delta)-\mathfrak{a}^{-}_{n}\psi^{-}(\delta)$
with
$\mathfrak{a}^{+}_{n}\xrightarrow{n\to\infty}_{\mathsf{a.s.}}\mathfrak{a}^{+}$
and
$\mathfrak{a}^{-}_{n}\xrightarrow{n\to\infty}_{\mathsf{a.s.}}\mathfrak{a}^{-}$.
Thus, there is a random variable $N_{1}$ such that almost surely $F_{n}(q)\geq
F_{n}(m)+1$ for all $n\geq N_{1}$, $q\in\mathcal{Q}\setminus B_{1}$, and $m\in
M$.
Step 3. Take $N_{0}$ from SampleHeineBorel. Choose
$N_{2}\geq\max(N_{0},N_{1})$ such that $\epsilon_{n}<1$ for all $n\geq N_{2}$.
Then $M_{n}\subseteq B_{1}$ for all $n\geq N_{2}$. Thus, $\bigcup_{n\geq
N_{2}}M_{n}$ is bounded and – due to SampleHeineBorel, Polish, and C.5 –
precompact almost surely.
Step 4. Finally, step 1 and 3 together with Theorem 2.4 yield
$d_{\subseteq}(M_{n},M)\xrightarrow{n\to\infty}_{\mathsf{a.s.}}0$. ∎
## 4 Strong Laws for $H$-Fréchet Mean Sets
Let $(\mathcal{Q},d)$ be a metric space. Let $(\Omega,\Sigma,\mathbb{P})$ be a
probability space that is silently underlying all random variables in this
section. Let $Y$ be a random variable with values in $\mathcal{Q}$. Let
$h\colon[0,\infty)\to[0,\infty)$ be a non-decreasing function. Define
$H\colon[0,\infty)\to[0,\infty),x\mapsto\int_{0}^{x}h(t)\mathrm{d}t$. Fix an
arbitrary element $o\in\mathcal{Q}$. Denote the $H$-Fréchet mean set of $Y$ as
$M=\operatorname*{arg\,min}_{q\in\mathcal{Q}}\mathbb{E}[H(\overline{Y,\\!q})-H(\overline{Y,\\!o})]$.
Let $Y_{1},\dots,Y_{n}$ be independent random variables with the same
distribution as $Y$. Choose
$(\epsilon_{n})_{n\in\mathbb{N}}\subseteq[0,\infty)$ with
$\epsilon_{n}\xrightarrow{n\to\infty}0$. Set
$M_{n}=\epsilon_{n}\text{-}\operatorname*{arg\,min}_{q\in\mathcal{Q}}\frac{1}{n}\sum_{i=1}^{n}(H(\overline{Y_{i},\\!q})-H(\overline{Y_{i},\\!o}))$.
###### Assumptions 4.1.
* •
InfiniteIncrease: $h(x)\xrightarrow{x\to\infty}\infty$.
* •
Additivity: There is $b\in[1,\infty)$ such that $h(2x)\leq bh(x)$ for all
$x\geq 0$.
* •
$h$-Moment: $\mathbb{E}[h(\overline{Y,\\!o})]<\infty$.
###### Remark 4.2.
* •
On Additivity: This implies $h(x+y)\leq b(h(x)+h(y))$ for all $x,y\geq 0$, see
C.2 (appendix). If $h$ is concave, Additivity holds with $b=2$ and we even
have $h(x+y)\leq h(x)+h(y)$. This condition is not very restrictive, but it
excludes functions that grow exponentially: for instance, $h(x)=x^{\beta}$
with $\beta\geq 0$ fulfills Additivity with $b=2^{\beta}$, whereas
$h(x)=\mathrm{e}^{x}$ does not fulfill it for any $b\in[1,\infty)$.
###### Corollary 4.3.
Assume Polish, Additivity, and $h$-Moment. Then, almost surely,
$\operatorname*{lim\,\overline{sup}}_{n\to\infty}\,M_{n}\subseteq M\,.$
###### Proof.
We check the conditions of Theorem 3.2. Polish is an assumption.
LowerSemiContinuity is fulfilled as $(q,p)\mapsto d(q,p)$ and $x\mapsto H(x)$
are continuous. For Integrable, we note that $H$ is non-decreasing and apply
C.2 (i),
$\left|H(\overline{y,\\!q})-H(\overline{y,\\!o})\right|\leq\left|\overline{y,\\!q}-\overline{y,\\!o}\right|h\\!\left(\max(\overline{y,\\!q},\overline{y,\\!o})\right)\leq\overline{q,\\!o}\,h(\overline{q,\\!o}+\overline{y,\\!o})\leq b\,\overline{q,\\!o}\left(h(\overline{q,\\!o})+h(\overline{y,\\!o})\right)\,,$
where the last inequality follows from C.2 (ii) using Additivity. Thus,
$h$-Moment implies Integrable. To show IntegrableInf, we note that $H$ is non-
decreasing and apply C.2 (iii),
$H(\overline{y,\\!q})-H(\overline{y,\\!o})\geq H(\left|\overline{y,\\!o}-\overline{q,\\!o}\right|)-H(\overline{y,\\!o})\geq b^{-1}H(\overline{q,\\!o})-2\,\overline{q,\\!o}\,h(\overline{y,\\!o})$
due to Additivity. Furthermore,
$H(\delta)=\int_{0}^{\delta}h(x)\mathrm{d}x\geq\frac{1}{2}\delta
h(\frac{1}{2}\delta)$. With that, $h$-Moment implies IntegrableInf. Thus,
Theorem 3.2 can be applied. ∎
###### Corollary 4.4.
Assume SampleHeineBorel, Polish, Additivity, InfiniteIncrease, and $h$-Moment.
Then
$d_{\subseteq}(M_{n},M)\xrightarrow{n\to\infty}_{\mathsf{a.s.}}0\,.$
###### Proof.
We check the conditions of Theorem 3.5. SampleHeineBorel and Polish are
assumptions of the corollary. LowerSemiContinuity and IntegrableInf are shown
in the proof of 4.3. Following that proof, we find, due to Additivity,
$\left|H(\overline{y,\\!q})-H(\overline{y,\\!o})\right|\leq b\,\overline{q,\\!o}\left(h(\overline{q,\\!o})+h(\overline{y,\\!o})\right)\,,$
$H(\overline{y,\\!q})-H(\overline{y,\\!o})\geq b^{-1}H(\overline{q,\\!o})-2\,\overline{q,\\!o}\,h(\overline{y,\\!o})\,,$
$H(\delta)\geq\frac{1}{2}\delta h\\!\left(\frac{1}{2}\delta\right)\,.$
The first inequality together with $h$-Moment implies UpperBound. For
LowerBound, we use the second inequality. We set
$\psi^{+}(\delta)=b^{-1}H(\delta)$, $\psi^{-}(\delta)=2\delta$,
$\mathfrak{a}^{+}=\mathfrak{a}_{n}^{+}=1$,
$\mathfrak{a}^{-}=\mathbb{E}[h(\overline{Y,\\!o})]$, and
$\mathfrak{a}^{-}_{n}=\frac{1}{n}\sum_{i=1}^{n}h(\overline{Y_{i},\\!o})$ with
$\mathfrak{a}^{-}_{n}\xrightarrow{n\to\infty}_{\mathsf{a.s.}}\mathfrak{a}^{-}$
due to $h$-Moment. Because of the third inequality,
$\psi^{+}(\delta)/\psi^{-}(\delta)\geq\frac{1}{4}b^{-1}h(\frac{1}{2}\delta)\xrightarrow{\delta\to\infty}\infty$
by InfiniteIncrease. ∎
## 5 Strong Laws for $\alpha$-Fréchet Mean Sets
Let $(\mathcal{Q},d)$ be a metric space. Let $(\Omega,\Sigma,\mathbb{P})$ be a
probability space that is silently underlying all random variables in this
section. Let $Y$ be a random variable with values in $\mathcal{Q}$. Let
$\alpha>0$. Fix an arbitrary element $o\in\mathcal{Q}$. Denote the
$\alpha$-Fréchet mean set of $Y$ as
$M=\operatorname*{arg\,min}_{q\in\mathcal{Q}}\mathbb{E}[\overline{Y,\\!q}^{\alpha}-\overline{Y,\\!o}^{\alpha}]$.
Let $Y_{1},\dots,Y_{n}$ be independent random variables with the same
distribution as $Y$. Choose
$(\epsilon_{n})_{n\in\mathbb{N}}\subseteq[0,\infty)$ with
$\epsilon_{n}\xrightarrow{n\to\infty}0$. Set
$M_{n}=\epsilon_{n}\text{-}\operatorname*{arg\,min}_{q\in\mathcal{Q}}\frac{1}{n}\sum_{i=1}^{n}(\overline{Y_{i},\\!q}^{\alpha}-\overline{Y_{i},\\!o}^{\alpha})$.
###### Corollary 5.1.
Let $\alpha>1$. Assume $\mathbb{E}[\overline{Y,\\!o}^{\alpha-1}]<\infty$ and
Polish.
1. (i)
Then $\operatorname*{lim\,\overline{sup}}_{n\to\infty}\,M_{n}\subseteq M$
almost surely.
2. (ii)
Additionally, assume SampleHeineBorel. Then
$d_{\subseteq}(M_{n},M)\xrightarrow{n\to\infty}_{\mathsf{a.s.}}0$.
###### Proof.
Set $h(x)=\alpha x^{\alpha-1}$. This function is non-decreasing, fulfills
Additivity with $b=2^{\alpha-1}$ and InfiniteIncrease. Due to
$\mathbb{E}[\overline{Y,\\!o}^{\alpha-1}]<\infty$, $h$-Moment is fulfilled.
Furthermore, $H(x)=x^{\alpha}$. Thus, 4.4 and 4.3 imply the claims. ∎
###### Corollary 5.2.
Let $\alpha\in(0,1]$. Assume Polish.
1. (i)
Then $\operatorname*{lim\,\overline{sup}}_{n\to\infty}\,M_{n}\subseteq M$
almost surely.
2. (ii)
Additionally, assume SampleHeineBorel. Then
$d_{\subseteq}(M_{n},M)\xrightarrow{n\to\infty}_{\mathsf{a.s.}}0$.
###### Proof.
First, consider the case $\alpha=1$. Apply C.3 (appendix) to
$\overline{Y,\\!o}$ to obtain a function $h\colon[0,\infty)\to[0,\infty)$
which is strictly increasing, continuous, concave, fulfills InfiniteIncrease,
and $\mathbb{E}[h(\overline{Y,\\!o})]<\infty$. Concavity implies Additivity
with $b=1$. As its derivative is strictly increasing,
$H(x)=\int_{0}^{x}h(t)\mathrm{d}t$ is convex and strictly increasing. Thus,
$H$ has an inverse $H^{-1}$ and $H^{-1}$ is concave. This implies that
$d_{H}(q,p)=H^{-1}(\overline{q,\\!p})$ is a metric.
As $H^{-1}$ is concave, there are $u_{0},u_{1}\in[0,\infty)$ such that
$H^{-1}(x)\leq u_{0}+u_{1}x$ for all $x\geq 0$. As $h$ is concave, there are
$v_{0},v_{1}\in[0,\infty)$ such that $h(u_{0}+u_{1}x)\leq v_{0}+v_{1}h(x)$ for
all $x\geq 0$. Thus,
$\mathbb{E}[h(d_{H}(Y,o))]=\mathbb{E}[h(H^{-1}(\overline{Y,\\!o}))]\leq
v_{0}+v_{1}\mathbb{E}[h(\overline{Y,\\!o})]<\infty$. Hence, $h$-Moment is true
for the metric $d_{H}$.
Moreover, Polish and HeineBorel-type properties of $(\mathcal{Q},d)$ are
preserved in $(\mathcal{Q},d_{H})$, as $H^{-1}$ is strictly increasing,
concave, and continuous, with $H^{-1}(0)=0$ and
$H^{-1}(\delta)\xrightarrow{\delta\to\infty}\infty$, and thus, the properties
boundedness, compactness, separability, and completeness coincide for $d$ and
$d_{H}$. Applying 4.3 and 4.4 on the minimizers of
$\mathbb{E}[H(d_{H}(Y,q))-H(d_{H}(Y,o))]=\mathbb{E}[\overline{Y,\\!q}-\overline{Y,\\!o}]$
now yields the claims for $\alpha=1$.
For $\alpha\in(0,1)$, just note that $\tilde{d}(q,p)=d(q,p)^{\alpha}$ is a
metric (the triangle inequality follows from $(a+b)^{\alpha}\leq
a^{\alpha}+b^{\alpha}$ for $a,b\geq 0$), which preserves Polish and
HeineBorel-type properties, and apply the result for $\alpha=1$ with
$\tilde{d}$ in place of $d$. ∎
###### Remark 5.3.
For convergence, we need the $(\alpha-1)$-moment to be finite in the case of
$\alpha\geq 1$. [Schötz, 2019, Corollary 5] shows that in metric spaces with
nonnegative curvature the typical parametric rate of convergence
$n^{-\frac{1}{2}}$ is obtained for $\alpha$-Fréchet means assuming the
$2(\alpha-1)$-moment to be finite in the case of $\alpha\in[1,2]$ under some
further conditions.
## Appendix A Example: The Set of Medians
Let $s\in\mathbb{N}$. Consider the metric space $(\mathbb{R}^{s},d_{1})$,
where $d_{1}(q,p)=\|q-p\|_{1}=\sum_{j=1}^{s}\left|q_{j}-p_{j}\right|$. The
power Fréchet mean with $\alpha=1$ in this space is equivalent to the standard
($\alpha=2$) Fréchet mean in $(\mathbb{R}^{s},d_{1}^{\frac{1}{2}})$. For $s=1$
it is equal to the median. Let $Y=(Y^{1},\dots,Y^{s})$ be a random vector in
$\mathbb{R}^{s}$ such that
$\mathbb{P}(Y^{k}=0)=\mathbb{P}(Y^{k}=1)=\frac{1}{2}$ for $k=1,\dots,s$ and
$Y^{1},\dots,Y^{s}$ are independent. Let $Y_{1},Y_{2},\dots$ be independent
and identically distributed copies of $Y$. Let
$M=\operatorname*{arg\,min}_{q\in\mathbb{R}^{s}}\mathbb{E}[d_{1}(Y,q)]$ be the
Fréchet mean set of $Y$ and
$M_{n}=\epsilon_{n}\text{-}\operatorname*{arg\,min}_{q\in\mathbb{R}^{s}}\frac{1}{n}\sum_{i=1}^{n}d_{1}(Y_{i},q)$
its sample version.
### A.1 No Convergence in Hausdorff Distance
First consider the case $s=1$ and $\epsilon_{n}=0$. As $s=1$,
$M=\operatorname*{arg\,min}_{q\in\mathbb{R}}\mathbb{E}[\left|Y-q\right|]$
is the median of $Y$, which is $M=[0,1]$ as
$2\mathbb{E}[\left|Y-q\right|]=\left|1-q\right|+\left|q\right|$
achieves its minimal value $1$ precisely for all $q\in[0,1]$. Define
$p_{n}:=\frac{1}{n}\sum_{i=1}^{n}Y_{i}$. Then the empirical objective function
is $F_{n}(q):=p_{n}\left|1-q\right|+(1-p_{n})\left|q\right|$, i.e., the sample
Fréchet mean set is $M_{n}=\operatorname*{arg\,min}_{q\in\mathbb{R}}F_{n}(q)$.
If $n$ is odd, then either $M_{n}=\\{0\\}$ or $M_{n}=\\{1\\}$ holds. The same
is true for an even value of $n$ except when $p_{n}=\frac{1}{2}$, in which
case $M_{n}=[0,1]$. Thus, $d_{\subseteq}(M_{n},M)=0$, but
$d_{\mathsf{H}}(M_{n},M)$ does not converge to $0$ almost surely.
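This behavior is easy to observe in a simulation. The following sketch, an
illustrative addition with $\epsilon_{n}=0$ as above, reports
$d_{\mathsf{H}}(M_{n},M)$, which equals $1$ whenever $M_{n}$ is a singleton
and $0$ when $M_{n}=[0,1]$.

```python
import numpy as np

rng = np.random.default_rng(0)
Y = rng.integers(0, 2, size=100000)
p = Y.cumsum() / np.arange(1, Y.size + 1)  # p_n for n = 1, 2, ...

# M_n = {0} if p_n < 1/2, M_n = {1} if p_n > 1/2, M_n = [0, 1] if p_n = 1/2;
# hence d_H(M_n, [0, 1]) is 1 in the singleton cases and 0 otherwise.
d_H = np.where(p == 0.5, 0.0, 1.0)
print(d_H[-10:])            # typically all 1.0: no convergence to 0
print((d_H == 0.0).mean())  # p_n = 1/2 happens only rarely (and only for even n)
```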
### A.2 The Outer Limit as a Strict Subset
Next, we keep $\epsilon_{n}=0$, but consider the value of
$\operatorname*{lim\,\overline{sup}}_{n\to\infty}M_{n}$ in a multi-dimensional
setting, i.e., $s\in\mathbb{N}$, as this yields a potentially surprising
result: By C.4, $M$ is just the Cartesian product of the median sets in each
dimension, i.e., $M=[0,1]^{s}$ (this is not to be confused with the geometric
median, which is the Fréchet mean with respect to the square root of the
Euclidean norm). Similarly, $M_{n}=\bigtimes_{k=1}^{s}M_{n}^{k}$ decomposes
into the sample Fréchet mean sets $M_{n}^{k}$ of each dimension $k=1,\dots,s$.
It holds $M_{n}^{k}=[0,1]$ if and only if the respective value of
$p_{n}^{k}:=\frac{1}{n}\sum_{i=1}^{n}Y_{i}^{k}$ is equal to $\frac{1}{2}$,
i.e., if and only if the symmetric simple random walk
$S_{n}^{k}:=\sum_{i=1}^{n}(2Y_{i}^{k}-1)$ hits $0$. Let
$N=\\#\\{n\in\mathbb{N}\mid S_{n}^{1}=\dots=S_{n}^{s}=0\\}$.
Let $A\subseteq\mathbb{R}$ and $B_{n}\subseteq\mathbb{R}$ for all
$n\in\mathbb{N}$. If $A\subseteq B_{n}$ for infinitely many $n$, then
$A\subseteq\operatorname*{lim\,\overline{sup}}_{n\to\infty}B_{n}$. Thus, we
want to know whether $N$ is finite or infinite. This is answered by Pólya’s
Recurrence Theorem [Pólya, 1921]. It implies that for $s\in\\{1,2\\}$,
$N=\infty$ almost surely. Furthermore, if $s\geq 3$, then $N<\infty$ almost
surely. To find which points are not elements of the outer limit of sample
Fréchet mean sets, note the following fact: For an open subset
$A\subseteq\mathbb{R}$, if $A\subseteq\mathbb{R}\setminus B_{n}$ for all but
finitely many $n$, then
$A\subseteq\mathbb{R}\setminus\operatorname*{lim\,\overline{sup}}_{n\to\infty}B_{n}$.
We conclude that, in general, a vector $x\in[0,1]^{s}$ is an element of
$\operatorname*{lim\,\overline{sup}}_{n\to\infty}M_{n}$ if and only if at most
two entries are not in $\\{0,1\\}$, i.e., almost surely
$\operatorname*{lim\,\overline{sup}}_{n\to\infty}M_{n}=\\{(x_{1},\dots,x_{s})\in[0,1]^{s}\mid\\#\\{k\in\\{1,\dots,s\\}\mid x_{k}\in(0,1)\\}\leq 2\\}\,.$
Thus, $\operatorname*{lim\,\overline{sup}}_{n\to\infty}M_{n}=M$ for
$s\in\\{1,2\\}$ and
$\operatorname*{lim\,\overline{sup}}_{n\to\infty}M_{n}\subsetneq M$ for $s\geq
3$.
### A.3 Convergence in Hausdorff Distance
Lastly, we use the setting $s=1$ and $\epsilon_{n}\in[0,\infty)$, where we
want to find $\epsilon_{n}$ such that $[0,1]\subseteq M_{n}$. At least one of
$0$ and $1$ is a minimizer of
$F_{n}(q)=p_{n}\left|1-q\right|+(1-p_{n})\left|q\right|$, and $F_{n}$ is
linear on $[0,1]$. Thus, $[0,1]\subseteq M_{n}$ if and only if
$\epsilon_{n}\geq|F_{n}(0)-F_{n}(1)|$, which is equivalent to
$|p_{n}-\frac{1}{2}|\leq\frac{1}{2}\epsilon_{n}$. As $Y\in\\{0,1\\}$,
Hoeffding's inequality yields
$\mathbb{P}\mathopen{}\left(\left|p_{n}-\frac{1}{2}\right|>\frac{1}{2}\epsilon_{n}\right)\mathclose{}\leq 2\exp\\!\left(-\frac{n\epsilon_{n}^{2}}{2}\right)\,.$
For $\epsilon_{n}=n^{-\frac{1}{4}}$, we obtain
$\sum_{n=1}^{\infty}\mathbb{P}\mathopen{}\left(\left|p_{n}-\frac{1}{2}\right|>\frac{1}{2}\epsilon_{n}\right)\mathclose{}\leq\sum_{n=1}^{\infty}2\exp\\!\left(-\frac{\sqrt{n}}{2}\right)<\infty\,.$
The Borel–Cantelli lemma implies that almost surely and for all $n$ large
enough, $|p_{n}-\frac{1}{2}|\leq\frac{1}{2}\epsilon_{n}$ and thus,
$[0,1]\subseteq M_{n}$. Together with 5.2, we obtain
$d_{\mathsf{H}}(M_{n},M)\xrightarrow{n\to\infty}_{\mathsf{a.s.}}0$.
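A quick sketch illustrates the choice $\epsilon_{n}=n^{-\frac{1}{4}}$: in a
typical run, $|p_{n}-\frac{1}{2}|>\frac{1}{2}\epsilon_{n}$ occurs only for
small $n$.

```python
import numpy as np

rng = np.random.default_rng(0)
Y = rng.integers(0, 2, size=100000)
n = np.arange(1, Y.size + 1)
p = Y.cumsum() / n
eps = n ** (-0.25)

bad = np.nonzero(np.abs(p - 0.5) > eps / 2)[0] + 1  # n for which [0,1] may fail to be in M_n
print(bad.size, bad[-1] if bad.size else None)      # violations die out quickly
```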
## Appendix B Alternative Route to One-Sided Hausdorff Convergence
In this section, we show an alternative proof of a strong law of large numbers
for generalized Fréchet mean sets in one-sided Hausdorff distance. Although
the final result, Theorem B.5, is weaker than Theorem 3.5, it is very
illustrative to follow this line of proof: In contrast to the arguments in the
main part of the article, it does not rely on the powerful result [Korf and
Wets, 2001, Theorem 1.1], which seems to be rather complex to prove. Instead
our reasoning here is simpler and more self-contained. Furthermore, a
comparison between convergence in outer limit and in one-sided Hausdorff
distance seems more natural in view of the deterministic results Theorem 2.3
and Theorem B.2, and the stochastic results Theorem 3.2 and Theorem B.5.
### B.1 Convergence of Minimizer Sets of Deterministic Functions
Let $(\mathcal{Q},d)$ be a metric space. For $A\subseteq\mathcal{Q}$ and
$\delta>0$, denote $\operatorname{\mathrm{B}}_{\delta}(A)=\bigcup_{x\in
A}\operatorname{\mathrm{B}}_{\delta}(x)$.
###### Definition B.1.
1. (i)
Let $f,f_{n}\colon\mathcal{Q}\to\mathbb{R}$, $n\in\mathbb{N}$. The sequence
$(f_{n})_{n\in\mathbb{N}}$ _converges_ to $f$ _uniformly on bounded sets_ if
and only if for every $B\subseteq\mathcal{Q}$ with
$\operatorname{\mathsf{diam}}(B)<\infty$,
$\lim_{n\to\infty}\sup_{x\in B}\left|f_{n}(x)-f(x)\right|=0\,.$
We then write $f_{n}\xrightarrow{n\to\infty}_{\mathsf{ubs}}f$.
2. (ii)
A sequence $(B_{n})_{n\in\mathbb{N}}$ of sets $B_{n}\subseteq\mathcal{Q}$ is
called _eventually bounded_ if and only if
$\limsup_{n\to\infty}\operatorname{\mathsf{diam}}\\!\left(\bigcup_{k=n}^{\infty}B_{k}\right)<\infty\,.$
3. (iii)
A function $f$ has _approachable minimizers_ if and only if for all
$\epsilon>0$ there is a $\delta>0$ such that
$\delta\text{-}\operatorname*{arg\,min}f\subseteq\operatorname{\mathrm{B}}_{\epsilon}(\operatorname*{arg\,min}f)$.
The last definition directly implies that
$d_{\subseteq}(\delta\text{-}\operatorname*{arg\,min}f,\operatorname*{arg\,min}f)\xrightarrow{\delta\to
0}0$ is equivalent to $f$ having approachable minimizers. Furthermore, if $f$
has approachable minimizers, then $\operatorname*{arg\,min}f\neq\emptyset$, as
for every $\delta>0$ the set $\delta\text{-}\operatorname*{arg\,min}f$ is non-
empty, but $\operatorname{\mathrm{B}}_{\epsilon}(\emptyset)=\emptyset$.
###### Theorem B.2.
Let $f,f_{n}\colon\mathcal{Q}\to\mathbb{R}$. Let
$(\epsilon_{n})_{n\in\mathbb{N}}\subseteq[0,\infty)$ with
$\epsilon_{n}\xrightarrow{n\to\infty}0$. Assume $f$ has approachable
minimizers, $f_{n}\xrightarrow{n\to\infty}_{\mathsf{ubs}}f$, and
$(\epsilon_{n}\text{-}\operatorname*{arg\,min}f_{n})_{n\in\mathbb{N}}$ is
eventually bounded. Then
$d_{\subseteq}(\epsilon_{n}\text{-}\operatorname*{arg\,min}f_{n},\operatorname*{arg\,min}f)\xrightarrow{n\to\infty}0$
and
$\inf f_{n}\xrightarrow{n\to\infty}\inf f\,.$
###### Proof.
Let $\epsilon>0$. As $f$ has approachable minimizers, there is $\delta>0$ such
that
$(3\delta)\text{-}\operatorname*{arg\,min}f\subseteq\operatorname{\mathrm{B}}_{\epsilon}(\operatorname*{arg\,min}f)$;
also $\operatorname*{arg\,min}f\neq\emptyset$. Let
$y\in\operatorname*{arg\,min}f$. As $f_{n}(y)\xrightarrow{n\to\infty}f(y)$,
there is $n_{1}\in\mathbb{N}$ such that $\inf f_{n}\leq\inf f+\delta$ for all
$n\geq n_{1}$. As $\epsilon_{n}\xrightarrow{n\to\infty}0$, there is
$n_{2}\in\mathbb{N}$ such that $\epsilon_{n}\leq\delta$ for all $n\geq n_{2}$.
As $(\epsilon_{n}\text{-}\operatorname*{arg\,min}f_{n})_{n\in\mathbb{N}}$ is
eventually bounded, there is $n_{3}\in\mathbb{N}$ such that
$\operatorname{\mathsf{diam}}(B)<\infty$ for $B=\bigcup_{n\geq
n_{3}}\epsilon_{n}\text{-}\operatorname*{arg\,min}f_{n}$. As
$f_{n}\xrightarrow{n\to\infty}_{\mathsf{ubs}}f$, there is $n_{4}$ such that
$\sup_{x\in B}\left|f_{n}(x)-f(x)\right|\leq\delta$ for all $n\geq n_{4}$. Let
$n\geq\max(n_{1},n_{2},n_{3},n_{4})$ and
$x\in\epsilon_{n}\text{-}\operatorname*{arg\,min}f_{n}$. Then
$f(x)\leq f_{n}(x)+\delta\leq\inf f_{n}+2\delta\leq\inf f+3\delta\,.$
Thus, $x\in(3\delta)\text{-}\operatorname*{arg\,min}f$. By the choice of
$\epsilon$ and $\delta$, we obtain
$\epsilon_{n}\text{-}\operatorname*{arg\,min}f_{n}\subseteq\operatorname{\mathrm{B}}_{\epsilon}(\operatorname*{arg\,min}f)$
or equivalently
$d_{\subseteq}(\epsilon_{n}\text{-}\operatorname*{arg\,min}f_{n},\operatorname*{arg\,min}f)\leq\epsilon$.
Finally, we show the convergence of the infima. We already know $\inf
f_{n}\leq\inf f+\epsilon$ for all $\epsilon>0$ and $n$ large enough. If $\inf
f_{n}\xrightarrow{n\to\infty}\inf f$ does not hold, there are $\epsilon>0$ and
a sequence $x_{n}\in\epsilon_{n}\text{-}\operatorname*{arg\,min}f_{n}$ such
that $f_{n}(x_{n})<\inf f-\epsilon$ for infinitely many $n$. As before,
because of eventual boundedness and uniform convergence on bounded sets, we
have
$\sup_{k\in\mathbb{N}}\left|f_{n}(x_{k})-f(x_{k})\right|\xrightarrow{n\to\infty}0$.
Therefore, $f(x_{n})\leq f_{n}(x_{n})+\epsilon$ for $n$ large enough, which
together with $f(x_{n})\geq\inf f$ contradicts $f_{n}(x_{n})<\inf f-\epsilon$
for infinitely many $n$. ∎
In the following, we construct examples to show that none of the conditions
for one-sided Hausdorff convergence can be dropped.
###### Example B.3.
1. (i)
Let $f,f_{n}\colon\mathbb{N}_{0}\to\mathbb{R}$,
$f_{n}=1-\mathds{1}_{\\!\left\\{0,n\right\\}}$,
$f=1-\mathds{1}_{\\!\left\\{0\right\\}}$, $d(i,j)=1$ for $i\neq j$. It holds
that $f$ is continuous and has approachable minimizers, and the sequence of
nonempty sets $\operatorname*{arg\,min}f_{n}=\left\\{0,n\right\\}$ is
eventually bounded, as $\operatorname{\mathsf{diam}}(A)\leq 1$ for every
$A\subseteq\mathbb{N}_{0}$. Furthermore, $f_{n}$ converges to $f$ uniformly on
compact sets, which are exactly the finite subsets of $\mathbb{N}_{0}$, but
not uniformly on bounded sets like $\mathbb{N}_{0}$ itself. There is a
subsequence of minimizers $x_{n}=n\in\operatorname*{arg\,min}f_{n}$ that is
always bounded away from $0$, the minimizer of $f$. This shows that uniform
convergence on compact sets (instead of bounded sets) is not enough.
2. (ii)
As above, let $f,f_{n}\colon\mathbb{N}_{0}\to\mathbb{R}$,
$f_{n}=1-\mathds{1}_{\\!\left\\{0,n\right\\}}$,
$f=1-\mathds{1}_{\\!\left\\{0\right\\}}$, but define $d(i,j)=|i-j|$. It holds
that $f$ is continuous and has approachable minimizers, and
$f_{n}\xrightarrow{n\to\infty}_{\mathsf{ubs}}f$, but the sequence of nonempty
sets $\operatorname*{arg\,min}f_{n}=\left\\{0,n\right\\}$ is not eventually
bounded. Again, there is a subsequence of minimizers
$x_{n}=n\in\operatorname*{arg\,min}f_{n}$ that is always bounded away from
$0$, the minimizer of $f$. This shows that eventual boundedness of minimizer
sets cannot be dropped.
3. (iii)
Let $f,f_{n}\colon\mathbb{N}_{0}\to\mathbb{R}$, $f(0)=0$, $f(i)=\frac{1}{i}$,
$f_{n}(i)=f(i)\mathds{1}_{\\!\mathopen{}\left\\{i<n\right\\}\mathclose{}}$,
and set $d(i,j)=1$ for $i\neq j$. It holds that $f$ is continuous, but $f$
does not have approachable minimizers. The sequence of nonempty sets
$\operatorname*{arg\,min}f_{n}=\left\\{0,n,n+1,\dots\right\\}$ is eventually
bounded and $f_{n}\xrightarrow{n\to\infty}_{\mathsf{ubs}}f$. There is a
subsequence of minimizers $x_{n}=n\in\operatorname*{arg\,min}f_{n}$ that is
always bounded away from $0$, the minimizer of $f$. This shows that
approachability of minimizers of $f$ cannot be dropped.
### B.2 Strong Laws for $\mathfrak{c}$-Fréchet Mean Sets
Let $(\mathcal{Q},d)$ be a metric space, the descriptor space. Let
$\mathcal{Y}$ be a set, the data space. Let
$\mathfrak{c}\colon\mathcal{Y}\times\mathcal{Q}\to\mathbb{R}$ be a function,
the cost function. Let $(\Omega,\Sigma,\mathbb{P})$ be a probability space
that is silently underlying all random variables in this section. Let $Y$ be a
random variable with values in $\mathcal{Y}$. Denote the
$\mathfrak{c}$-Fréchet mean set of $Y$ as
$M=\operatorname*{arg\,min}_{q\in\mathcal{Q}}\mathbb{E}[\mathfrak{c}(Y,q)]$.
Let $Y_{1},\dots,Y_{n}$ be independent random variables with the same
distribution as $Y$. Choose
$(\epsilon_{n})_{n\in\mathbb{N}}\subseteq[0,\infty)$ with
$\epsilon_{n}\xrightarrow{n\to\infty}0$. Set
$M_{n}=\epsilon_{n}\text{-}\operatorname*{arg\,min}_{q\in\mathcal{Q}}\frac{1}{n}\sum_{i=1}^{n}\mathfrak{c}(Y_{i},q)$.
###### Assumptions B.4.
* •
Continuity: The function $q\mapsto\mathfrak{c}(Y,q)$ is continuous almost
surely.
###### Theorem B.5.
Assume HeineBorel, Continuity, UpperBound, and LowerBound. Then
$d_{\subseteq}(M_{n},M)\xrightarrow{n\to\infty}_{\mathsf{a.s.}}0\,.$
###### Proof.
Define $F(q)=\mathbb{E}[\mathfrak{c}(Y,q)]$,
$F_{n}(q)=\frac{1}{n}\sum_{i=1}^{n}\mathfrak{c}(Y_{i},q)$. The proof consists
of the following steps:
1. 1.
Show that $F_{n}\xrightarrow{n\to\infty}_{\mathsf{ubs}}F$ almost surely.
2. 2.
Reduction to a bounded set.
3. 3.
Show that $F$ has approachable minimizers.
4. 4.
Show that $M_{n}$ is eventually bounded.
5. 5.
Apply Theorem B.2.
Step 1. To show uniform convergence on bounded sets, we will use the uniform
law of large numbers, Theorem C.1 below. Let $B\subseteq\mathcal{Q}$ be a
bounded set. By HeineBorel, $\overline{B}$ is compact. By Continuity,
$q\mapsto\mathfrak{c}(Y,q)$ is almost surely continuous. By UpperBound,
$\mathbb{E}[\sup_{q\in B}\left|\mathfrak{c}(Y,q)\right|]<\infty$.
Thus, Theorem C.1 implies that $q\mapsto F(q)$ is continuous and
$\sup_{q\in
B}\left|F_{n}(q)-F(q)\right|\xrightarrow{n\to\infty}_{\mathsf{a.s.}}0\,.$
Fix an arbitrary element $o\in\mathcal{Q}$. For all bounded sets $B$, there is
$\delta\in\mathbb{N}$ such that
$B\subseteq\operatorname{\mathrm{B}}_{\delta}(o)$. By the previous
considerations, uniform convergence holds almost surely for all
$(\operatorname{\mathrm{B}}_{\delta}(o))_{\delta\in\mathbb{N}}$. Thus,
$F_{n}\xrightarrow{n\to\infty}_{\mathsf{ubs}}F$ almost surely.
Step 2. Find $B_{1}\subseteq\mathcal{Q}$ and a random variable
$N_{1}\in\mathbb{N}$ as in step 2 in the proof of Theorem 3.5.
Step 3. Clearly, $M\subseteq B_{1}$ is bounded. Furthermore, for all
$\epsilon>0$ small enough the set
$D_{\epsilon}=\overline{B_{1}\setminus\operatorname{\mathrm{B}}_{\epsilon}(M)}$
is not empty (if it is, increase $\delta$), does not contain any element of
$M$ and, by HeineBorel, is compact. Thus, the continuous function $q\mapsto
F(q)$ attains its infimum on $D_{\epsilon}$ where $\inf_{q\in
D_{\epsilon}}F(q)>\inf_{q\in\mathcal{Q}}F(q)$. Take
$\zeta=\min(1,\frac{1}{2}(\inf_{q\in
D_{\epsilon}}F(q)-\inf_{q\in\mathcal{Q}}F(q)))$. Then
$\zeta\text{-}\operatorname*{arg\,min}_{q\in\mathcal{Q}}F(q)\subseteq\operatorname{\mathrm{B}}_{\epsilon}(M)$,
i.e., $F$ has approachable minimizers.
Step 4. For $\epsilon_{n}<1$ and $n\geq N_{1}$, it holds $M_{n}\subseteq
B_{1}$. Thus, $(M_{n})_{n\in\mathbb{N}}$ is eventually bounded almost surely.
Step 5. Finally, Theorem B.2 implies
$d_{\subseteq}(M_{n},M)\xrightarrow{n\to\infty}_{\mathsf{a.s.}}0$. ∎
## Appendix C Auxiliary Results
There are many versions of uniform laws of large numbers in the literature. We
state and prove one version that is tailored to our needs.
###### Theorem C.1.
Let $(\mathcal{Y},\Sigma_{\mathcal{Y}})$ be a measurable space and $Y$ be a
random variable with values in $\mathcal{Y}$. Let $Y_{1},\dots,Y_{n}$ be
independent and have the same distribution as $Y$. Let $(\mathcal{Q},d)$ be a
metric space and $B\subseteq\mathcal{Q}$ compact. Let
$f\colon\mathcal{Y}\times B\to\mathbb{R}$ be such that $q\mapsto f(Y,q)$ is
almost surely continuous. Assume there is a random variable $Z$ such that
$\left|f(Y,q)\right|\leq Z$ for all $q\in B$ with $\mathbb{E}[Z]<\infty$. Then
$q\mapsto\mathbb{E}[f(Y,q)]$ is continuous and
$\sup_{q\in
B}\left|\frac{1}{n}\sum_{i=1}^{n}f(Y_{i},q)-\mathbb{E}[f(Y,q)]\right|\xrightarrow{n\to\infty}_{\mathsf{a.s.}}0\,.$
###### Proof.
Let $\epsilon>0$. As $B$ is compact, there is a finite set
$\left\\{q_{1},\dots,q_{k}\right\\}\subseteq\mathcal{Q}$ such that
$B\subseteq\bigcup_{\ell=1}^{k}\operatorname{\mathrm{B}}_{\epsilon}(q_{\ell})$.
We split the supremum,
$\displaystyle\sup_{q\in
B}\left|\frac{1}{n}\sum_{i=1}^{n}f(Y_{i},q)-\mathbb{E}[f(Y,q)]\right|$
$\displaystyle\leq\sup_{\ell\in\left\\{1,\dots,k\right\\}}\sup_{q\in\operatorname{\mathrm{B}}_{\epsilon}(q_{\ell})}\left|\frac{1}{n}\sum_{i=1}^{n}\left(f(Y_{i},q)-f(Y_{i},q_{\ell})\right)-\mathbb{E}[f(Y,q)-f(Y,q_{\ell})]\right|$
$\displaystyle\quad+\sup_{\ell\in\left\\{1,\dots,k\right\\}}\left|\frac{1}{n}\sum_{i=1}^{n}f(Y_{i},q_{\ell})-\mathbb{E}[f(Y,q_{\ell})]\right|\,.$
For the second summand, by the standard strong law of large numbers applied to
each $\ell\in\\{1,\dots,k\\}$ with $\mathbb{E}[Z]<\infty$,
$\displaystyle\sup_{\ell\in\left\\{1,\dots,k\right\\}}\left|\frac{1}{n}\sum_{i=1}^{n}f(Y_{i},q_{\ell})-\mathbb{E}[f(Y,q_{\ell})]\right|\xrightarrow{n\to\infty}_{\mathsf{a.s.}}0\,.$
For the first summand,
$\displaystyle\sup_{\ell\in\left\\{1,\dots,k\right\\}}\sup_{q\in\operatorname{\mathrm{B}}_{\epsilon}(q_{\ell})}\left|\frac{1}{n}\sum_{i=1}^{n}\left(f(Y_{i},q)-f(Y_{i},q_{\ell})\right)-\mathbb{E}[f(Y,q)-f(Y,q_{\ell})]\right|$
$\displaystyle\leq\frac{1}{n}\sum_{i=1}^{n}\sup_{q,p\in B,\,\overline{q,p}\leq\epsilon}\left|f(Y_{i},q)-f(Y_{i},p)\right|+\mathbb{E}\left[\sup_{q,p\in B,\,\overline{q,p}\leq\epsilon}\left|f(Y,q)-f(Y,p)\right|\right]\,.$
By the standard strong law of large numbers with $\mathbb{E}[Z]<\infty$,
$\displaystyle\frac{1}{n}\sum_{i=1}^{n}\sup_{q,p\in B,\,\overline{q,p}\leq\epsilon}\left|f(Y_{i},q)-f(Y_{i},p)\right|\xrightarrow{n\to\infty}_{\mathsf{a.s.}}\mathbb{E}\left[\sup_{q,p\in B,\,\overline{q,p}\leq\epsilon}\left|f(Y,q)-f(Y,p)\right|\right]\,.$
Thus,
$\mathbb{P}\mathopen{}\left(\limsup_{n\to\infty}\sup_{q\in
B}\left|\frac{1}{n}\sum_{i=1}^{n}f(Y_{i},q)-\mathbb{E}[f(Y,q)]\right|\leq
a_{\epsilon}\right)\mathclose{}=1\,,$ (1)
where $a_{\epsilon}=2\mathbb{E}\left[\sup_{q,p\in B,\,\overline{q,p}\leq\epsilon}\left|f(Y,q)-f(Y,p)\right|\right]$.
As $q\mapsto f(Y,q)$ is almost surely continuous and $B$ is compact, $q\mapsto
f(Y,q)$ is almost surely uniformly continuous, i.e., for all $\delta>0$ there
is $\varepsilon(\delta,Y)>0$ such that $\left|f(Y,q)-f(Y,p)\right|\leq\delta$
for all $\overline{q,\\!p}\leq\varepsilon(\delta,Y)$. As
$\mathbb{E}[Z]<\infty$, we can use dominated convergence to obtain
$\displaystyle\lim_{\epsilon\searrow 0}\mathbb{E}\left[\sup_{q,p\in B,\,\overline{q,p}\leq\epsilon}\left|f(Y,q)-f(Y,p)\right|\right]=\mathbb{E}\left[\lim_{\epsilon\searrow 0}\sup_{q,p\in B,\,\overline{q,p}\leq\epsilon}\left|f(Y,q)-f(Y,p)\right|\right]=0\,.$
Thus, $a_{\epsilon}\xrightarrow{\epsilon\searrow 0}0$. Together with (1), this
implies
$\sup_{q\in
B}\left|\frac{1}{n}\sum_{i=1}^{n}f(Y_{i},q)-\mathbb{E}[f(Y,q)]\right|\xrightarrow{n\to\infty}_{\mathsf{a.s.}}0\,.$
We have also shown that $q\mapsto\mathbb{E}[f(Y,q)]$ is continuous, as
$\left|\mathbb{E}[f(Y,q)]-\mathbb{E}[f(Y,p)]\right|\leq a_{\overline{q,p}}$. ∎
###### Lemma C.2.
Let $h\colon[0,\infty)\to[0,\infty)$ be a non-decreasing function. Define
$H\colon[0,\infty)\to[0,\infty),x\mapsto\int_{0}^{x}h(t)\mathrm{d}t$. Let
$x,y\geq 0$. Then
1. (i)
$\left|H(x)-H(y)\right|\leq\left|x-y\right|h(\max(x,y))$.
Assume, there is $b\in[1,\infty)$ such that $h(2u)\leq bh(u)$ for all $u\geq
0$. Then
1. (ii)
$\frac{1}{2}h(x)+\frac{1}{2}h(y)\leq h(x+y)\leq b\left(h(x)+h(y)\right)$,
2. (iii)
$H(\left|x-y\right|)-H(x)\geq b^{-1}H(y)-2yh(x)$.
###### Proof.
1. (i)
This is a direct consequence of the mean value theorem.
2. (ii)
As $h$ is non-decreasing, $\max(h(x),h(y))\leq h(x+y)\leq\max(h(2x),h(2y))$.
By the definition of $b$ and with $\frac{1}{2}(u+v)\leq\max(u,v)\leq u+v$ for
$u,v\geq 0$ the claim follows.
3. (iii)
First, consider the case $x\geq y$. Define
$f(x,y)=H(x-y)-H(x)-b^{-1}H(y)+2yh(x)$. We want to show $f(x,y)\geq 0$. The
derivative of $f$ with respect to $y$ is
$\partial_{y}f(x,y)=-h(x-y)-b^{-1}h(y)+2h(x)\,.$
By applying the first inequality of (ii) to $h(x)=h((x-y)+y)$, we obtain
$\partial_{y}f(x,y)\geq 0$ as $b^{-1}\leq 1$. Hence, $f(x,y)\geq f(x,0)=0$, as
$H(0)=0$.
Now, consider the case $x\leq y$. Set $g(x,y)=H(y-x)-H(x)-b^{-1}H(y)+2yh(x)$,
which yields
$\partial_{y}g(x,y)=h(y-x)-b^{-1}h(y)+2h(x)\,.$
By applying the second inequality of (ii) to $h(y)=h((y-x)+x)$, we obtain
$\partial_{y}g(x,y)\geq 0$ as $b^{-1}\leq 1$. Thus, $g(x,y)\geq
g(x,x)=-(1+b^{-1})H(x)+2xh(x)$ as $H(0)=0$. By the definition of $H$, as $h$
is non-decreasing, $H(x)\leq xh(x)$. Hence, $g(x,y)\geq 0$ as $1+b^{-1}\leq
2$.
Together, we have shown $H(\left|x-y\right|)-H(x)-b^{-1}H(y)+2yh(x)\geq 0$ for
all $x,y\geq 0$.
∎
###### Lemma C.3.
Let $X$ be a random variable with values in $[0,\infty)$. Then there is a
strictly increasing, continuous, and concave function
$h\colon[0,\infty)\to[0,\infty)$ with
$h(\delta)\xrightarrow{\delta\to\infty}\infty$ such that
$\mathbb{E}[h(X)]<\infty$.
###### Proof.
If there is $K>0$ such that $\mathbb{P}(X<K)=1$ take $h(x)=x$. Now, assume
that $X$ is not almost surely bounded. We first construct a non-decreasing
function $\tilde{h}\colon[0,\infty)\to[0,\infty)$ such that
$\tilde{h}(x)\xrightarrow{x\to\infty}\infty$ with
$\mathbb{E}[\tilde{h}(X)]<\infty$. Then we construct a function $h$ from
$\tilde{h}$ with all desired properties.
Let $F$ be the distribution function of $X$, $F(x)=\mathbb{P}(X\leq x)$. Let
$z_{1}=0$ and $z_{n+1}=\inf\mathopen{}\left\\{x\geq
z_{n}+1\,\big{|}\,1-F(x)\leq\frac{1}{n}\right\\}\mathclose{}$. As
$F(x)\xrightarrow{x\to\infty}1$, $z_{n}<\infty$. Furthermore,
$z_{n+1}-z_{n}\geq 1$. Moreover, as $X$ is not almost surely bounded,
$1-F(x)>0$ for all $x\geq 0$. Set
$g(x)=\sum_{n=1}^{\infty}(z_{n+1}-z_{n})^{-1}n^{-2}\mathds{1}_{[z_{n},z_{n+1})}(x)\,,\qquad\tilde{h}(x)=\int_{0}^{x}\frac{g(t)}{1-F(t)}\mathrm{d}t\,.$
Then
$\displaystyle\lim_{x\to\infty}\tilde{h}(x)=\int_{0}^{\infty}\frac{g(t)}{1-F(t)}\mathrm{d}t\geq\sum_{n=1}^{\infty}n^{-1}=\infty\,.$
Moreover, $\tilde{h}$ is strictly increasing, as $g(t)>0$ and $1-F(t)>0$ for
all $t\geq 0$. The function $\tilde{h}$ is continuously differentiable
everywhere except at the points $z_{n}$, $n\in\mathbb{N}$. Thus,
$\mathbb{E}[\tilde{h}(X)]=\int_{0}^{\infty}\mathbb{P}\mathopen{}\left(\tilde{h}(X)>t\right)\mathclose{}\mathrm{d}t=\int_{0}^{\infty}\mathbb{P}\mathopen{}\left(X>\tilde{h}^{-1}(t)\right)\mathclose{}\mathrm{d}t=\int_{0}^{\infty}\tilde{h}^{\prime}(t)\mathbb{P}\mathopen{}\left(X>t\right)\mathclose{}\mathrm{d}t=\int_{0}^{\infty}g(t)\mathrm{d}t=\sum_{n=1}^{\infty}n^{-2}<\infty\,.$
Let $a_{0}=1$, $x_{0}=0$, $x_{n+1}=\inf\mathopen{}\left\\{x\geq
x_{n}+a_{n}^{-1}\,\big{|}\,\tilde{h}(x)\geq n+1\right\\}\mathclose{}$ and
$a_{n+1}=(x_{n+1}-x_{n})^{-1}$. Let $h\colon[0,\infty)\to[0,\infty)$ be the
linear interpolation of $(x_{n},n)_{n\in\mathbb{N}_{0}}$. As
$\tilde{h}(x)\xrightarrow{x\to\infty}\infty$, all $x_{n}$ are finite. Hence,
$h(x)\xrightarrow{x\to\infty}\infty$. Because of $a_{n}>0$, $h$ is strictly
increasing. Furthermore, $a_{n+1}\leq a_{n}$ as $x_{n+1}\geq
x_{n}+a_{n}^{-1}$. As $h$ is continuous and $a_{n+1}$ is the derivative of $h$
on the interval $(x_{n},x_{n+1})$, $h$ is concave. Lastly,
$h(x)\leq\tilde{h}(x)+1$. Thus, $\mathbb{E}[h(X)]<\infty$. ∎
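The construction can be carried out explicitly. As a hypothetical example,
take $X$ Pareto distributed with $F(x)=1-\frac{1}{x}$ on $[1,\infty)$, so that
$\mathbb{E}[X]=\infty$ and $h(x)=x$ fails. Then $z_{n+1}=n$,
$g(t)=(\lfloor t\rfloor+1)^{-2}$, and $1-F(t)=1/\max(t,1)$; the sketch below
evaluates $\tilde{h}$ by numerical integration and checks by Monte Carlo that
$\mathbb{E}[\tilde{h}(X)]$ stays finite.

```python
import numpy as np

# For F(x) = 1 - 1/x on [1, inf): z_1 = 0 and z_{n+1} = n, hence
# g(t) = (floor(t) + 1)^(-2) and 1 - F(t) = 1 / max(t, 1).
def h_tilde(x, grid=4000):
    t = np.linspace(0.0, x, grid)
    integrand = np.maximum(t, 1.0) / (np.floor(t) + 1.0) ** 2  # g(t) / (1 - F(t))
    return float(integrand.sum() * (t[1] - t[0]))  # simple Riemann sum

rng = np.random.default_rng(0)
X = 1.0 / rng.uniform(size=5000)  # inverse-CDF samples: P(X <= x) = 1 - 1/x
print(np.mean([h_tilde(x) for x in X]))  # finite Monte-Carlo estimate of E[h~(X)]
```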
###### Lemma C.4 (Fréchet means in product spaces).
Let $K\in\mathbb{N}$. Let
$(\mathcal{Q}_{1},d_{1}),\dots,(\mathcal{Q}_{K},d_{K})$ be metric spaces. Let
$\alpha\geq 1$. Set $\mathcal{Q}:=\bigtimes_{k=1}^{K}\mathcal{Q}_{k}$ and
$d\colon\mathcal{Q}\times\mathcal{Q}\to[0,\infty)$,
$d(q,p):=(\sum_{k=1}^{K}d_{k}(q_{k},p_{k})^{\alpha})^{\frac{1}{\alpha}}$. Then
$(\mathcal{Q},d)$ is a metric space. Let $Y=(Y^{1},\dots,Y^{K})$ be a tuple of
random variables such that $Y^{k}$ has values in $\mathcal{Q}_{k}$ and
$\mathbb{E}[d(Y,o)^{\alpha-1}]<\infty$ for an element $o\in\mathcal{Q}$. Let
$M$ be the $\alpha$-Fréchet mean set of $Y$ in $(\mathcal{Q},d)$, and $M^{k}$
be the $\alpha$-Fréchet mean set of $Y^{k}$ in $(\mathcal{Q}_{k},d_{k})$. Then
$M$ is the Cartesian product of the sets $M^{k}$, i.e.,
$M=\bigtimes_{k=1}^{K}M^{k}\,.$
C.4 can be proven by straightforward calculations.
###### Lemma C.5.
Let $(\mathcal{Q},d)$ be a complete metric space and $A\subseteq\mathcal{Q}$.
Assume that all closed subsets $B\subseteq A$ are compact. Then $\overline{A}$
is compact.
###### Proof.
Let $(a_{k})_{k\in\mathbb{N}}\subseteq\overline{A}$. We show that
$(a_{k})_{k\in\mathbb{N}}$ has a converging subsequence with limit
$a\in\overline{A}$, which implies compactness of $\overline{A}$.
Let $\delta_{n}=2^{-n}$. Define
$B_{n}:=\mathcal{Q}\setminus\operatorname{\mathrm{B}}_{\delta_{n}}(\mathcal{Q}\setminus
A)$. The sets $B_{n}$ are closed and subsets of $A$. Thus, they are compact.
Let $b_{k}^{n}\in\operatorname*{arg\,min}_{b\in B_{n}}d(b,a_{k})$. Such an
element exists as $B_{n}$ is compact. Furthermore,
$d(b_{k}^{n},a_{k})\leq\delta_{n}$. Define the subindex sequences
$(k(n,\ell))_{\ell\in\mathbb{N}}\subseteq\mathbb{N}$ such that
$k(0,\ell)=\ell$ and $(b^{n}_{k(n,\ell)})_{\ell\in\mathbb{N}}$ is a converging
subsequence of $(b^{n}_{k(n-1,\ell)})_{\ell\in\mathbb{N}}$ with limit
$b^{n}_{k(n,\ell)}\xrightarrow{\ell\to\infty}b^{n}_{\infty}$ and
$d(b^{n}_{k(n,\ell)},b^{n}_{\infty})\leq\delta_{\ell}$. By the triangle
inequality $d(b^{n_{1}}_{k},b^{n_{2}}_{k})\leq
d(b^{n_{1}}_{k},a_{k})+d(b^{n_{2}}_{k},a_{k})\leq\delta_{n_{1}}+\delta_{n_{2}}$.
Thus,
$d(b^{n_{1}}_{\infty},b^{n_{2}}_{\infty})\leq\delta_{n_{1}}+\delta_{n_{2}}$,
which makes $(b^{n}_{\infty})_{n\in\mathbb{N}}$ a Cauchy-sequence. Define $a$
as its limit, i.e., $b^{n}_{\infty}\xrightarrow{n\to\infty}a$. As
$b^{n}_{\infty}\in B_{n}\subseteq\overline{A}$, also $a\in\overline{A}$.
Finally, the triangle inequality yields
$d(a_{k(n,n)},a)\leq
d(a_{k(n,n)},b^{n}_{k(n,n)})+d(b^{n}_{k(n,n)},b^{n}_{\infty})+d(b^{n}_{\infty},a)\xrightarrow{n\to\infty}0\,,$
i.e., $(a_{k(n,n)})_{n\in\mathbb{N}}$ is a subsequence of
$(a_{k})_{k\in\mathbb{N}}$ which converges in $\overline{A}$. ∎
## References
* [Arnaudon et al., 2013] Arnaudon, M., Barbaresco, F., and Yang, L. (2013). Medians and means in Riemannian geometry: existence, uniqueness and computation. In Matrix information geometry, pages 169–197. Springer, Heidelberg.
* [Artstein and Wets, 1995] Artstein, Z. and Wets, R. J.-B. (1995). Consistency of minimizers and the SLLN for stochastic programs. J. Convex Anal., 2(1-2):1–17.
# DyLiN: Making Light Field Networks Dynamic
Heng Yu1 Joel Julin1 Zoltán Á. Milacski1 Koichiro Niinuma2 László A. Jeni1
1Robotics Institute, Carnegie Mellon University 2Fujitsu Research of America
{hengyu, jjulin<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
Light Field Networks, the re-formulations of radiance fields to oriented rays,
are orders of magnitude faster than their coordinate network counterparts, and provide
higher fidelity with respect to representing 3D structures from 2D
observations. They would be well suited for generic scene representation and
manipulation, but suffer from one problem: they are limited to holistic and
static scenes. In this paper, we propose the Dynamic Light Field Network
(DyLiN) method that can handle non-rigid deformations, including topological
changes. We learn a deformation field from input rays to canonical rays, and
lift them into a higher dimensional space to handle discontinuities. We
further introduce CoDyLiN, which augments DyLiN with controllable attribute
inputs. We train both models via knowledge distillation from pretrained
dynamic radiance fields. We evaluated DyLiN using both synthetic and real
world datasets that include various non-rigid deformations. DyLiN
qualitatively outperformed and quantitatively matched state-of-the-art methods
in terms of visual fidelity, while being $25-71\times$ computationally faster.
We also tested CoDyLiN on attribute annotated data and it surpassed its
teacher model. Project page: https://dylin2023.github.io.
[Figure 1 panels: Ground Truth Video; HyperNeRF [28], render time 3 s; TiNeuVox [8], render time 7 s; Ours, render time 0.1 s]
Figure 1: Our proposed DyLiN for dynamic 3D scene rendering achieves higher
quality than its HyperNeRF teacher model and the state-of-the-art TiNeuVox
model, while being an order of magnitude faster. Right: DyLiN is of moderate
storage size (shown by dot radii). For each method, the relative improvement
in Peak Signal-to-Noise Ratio over NeRF ($\Delta\text{PSNR}$) is measured for
the best-performing scene.
## 1 Introduction
Machine vision has made tremendous progress with respect to reasoning about 3D
structure using 2D observations. Much of this progress can be attributed to
the emergence of coordinate networks [6, 21, 26], such as Neural Radiance
Fields (NeRF) [23] and its variants [2, 22, 39, 20]. They provide an object
agnostic representation for 3D scenes and can be used for high-fidelity
synthesis for unseen views. While NeRFs mainly focus on static scenes, a
series of works[29, 34, 10, 27] extend the idea to dynamic cases via
additional components that map the observed deformations to a canonical space,
supporting moving and shape-evolving objects. It was further shown that by
lifting this canonical space to higher dimensions the method can handle
changes in scene topology as well [28].
However, the applicability of NeRF models is considerably limited by their
computational complexities. From each pixel, one typically casts a ray from
that pixel, and numerically integrates the radiance and color densities
computed by a Multi-Layer Perceptron (MLP) across the ray, approximating the
pixel color. Specifically, the numerical integration involves sampling
hundreds of points across the ray, and evaluating the MLP at all of those
locations. Several works have been proposed for speeding up static NeRFs.
These include employing a compact 3D representation structure [18, 43, 9],
breaking up the MLP into multiple smaller networks [30, 31], leveraging depth
information [24, 7], and using fewer sampling points [17, 24, 42]. Yet, these
methods still rely on integration and suffer from sampling many points, making
them prohibitively slow for real-time applications. Recently, Light Field
Networks (LFNs) [32] proposed replacing integration with a direct ray-to-color
regressor, trained using the same sparse set of images, requiring only a
single forward pass. R2L [36] extended LFNs to use a very deep residual
architecture, trained by distillation from a NeRF teacher model to avoid
overfitting. In contrast to static NeRF acceleration, speeding up dynamic
NeRFs is a much less discussed problem in the literature. This is potentially
due to the much increased difficulty of the task, as one also has to deal with
the high variability of motion. In this direction, [8, 38] greatly reduce the
training time by using well-designed data structures, but their solutions
still rely on integration. LFNs are clearly better suited for acceleration,
yet, to the best of our knowledge, no works have attempted extending LFNs to
the dynamic scenario.
In this paper, we propose 2 schemes extending LFNs to dynamic scene
deformations, topological changes and controllability. First, we introduce
DyLiN, by incorporating a deformation field and a hyperspace representation to
deal with non-rigid transformations, while distilling knowledge from a
pretrained dynamic NeRF. Afterwards, we also propose CoDyLiN, via adding
controllable input attributes, trained with synthetic training data generated
by a pretrained Controllable NeRF (CoNeRF) [13] teacher model. To test the
efficiencies of our proposed schemes, we perform empirical experiments on both
synthetic and real datasets. We show that our DyLiN achieves better image
quality and an order of magnitude faster rendering speed than its original
dynamic NeRF teacher model and the state-of-the-art TiNeuVox [8] method.
Similarly, we also show that CoDyLiN outperforms its CoNeRF teacher. We
further execute ablation studies to verify the individual effectiveness of
different components of our model. Our methods can be also understood as
accelerated versions of their respective teacher models, and we are not aware
of any prior works that attempt speeding up CoNeRF.
Our contributions can be summarized as follows:
* •
We propose DyLiN, an extension of LFNs that can handle dynamic scenes with
topological changes. DyLiN achieves this through non-bending ray deformations,
hyperspace lifting for whole rays, and knowledge distillation from dynamic
NeRFs.
* •
We show that DyLiN achieves state-of-the-art results on both synthetic and
real-world scenes, while being an order of magnitude faster than the
competition. We also include an ablation study to analyze the contributions of
our model components.
* •
We introduce CoDyLiN, further extending our DyLiN to handle controllable input
attributes.
## 2 Related Works
##### Dynamic NeRFs.
NeRFs have demonstrated impressive performances in novel view synthesis for
static scenes. Extending these results to dynamic (deformable) domains has
sparked considerable research interest [29, 34, 10, 27, 28]. Among these
works, the ones that most closely resemble ours are D-NeRF [29] and HyperNeRF
[28]. D-NeRF uses a translational deformation field with temporal positional
encoding. HyperNeRF introduces a hyperspace representation, allowing
topological variations to be effectively captured. Our work expands upon these
works, as we propose DyLiN, a similar method for LFNs. We use the above
dynamic NeRFs as pretrained teacher models for DyLiN, achieving better
fidelity with orders of magnitude shorter rendering times.
##### Accelerated NeRFs.
The high computational complexity of NeRFs has motivated several follow-up
works on speeding up the numerical integration process. The following first
set of works are restricted to static scenarios. NSVF [18] represents the
scene with a set of voxel-bounded MLPs organized in a sparse voxel octree,
allowing voxels without relevant content to be skipped. KiloNeRF [31] divides
the scene into a grid and trains a tiny MLP network for each cell within the
grid, saving on pointwise evaluations. AutoInt [17] reduces the number of
point samples for each ray using learned partial integrals. In contrast to the
above procedures, speeding up dynamic NeRFs is much less discussed in the
literature, as there are only 2 papers published on this subject. Wang et al.
[38] proposed a method based on Fourier plenoctrees for real-time dynamic
rendering, however, the technique requires an expensive rigid scene capturing
setup. TiNeuVox [8] reduces training time by augmenting the MLP with time-
aware voxel features and a tiny deformation network, while using a multi-
distance interpolation method to model temporal variations. Interestingly, all
of the aforementioned methods suffer from sampling hundreds of points during
numerical integration, and none of them support changes in topology, whereas
our proposed DyLiN excels from both perspectives.
##### Light Field Networks (LFNs).
As opposed to the aforementioned techniques that accelerate numerical
integration within NeRFs, some works have attempted completely replacing
numerical integration with direct per-ray color MLP regressors called Light
Field Networks (LFNs). Since these approaches accept rays as inputs, they rely
heavily on the ray representation. Several such representations exist in the
literature. Plenoptic functions [3, 1] encode 3D rays with 5D representations,
i.e., a 3D point on a ray and 2 axis-angle ray directions. Light fields [11,
15] use 4D ray codes most commonly through two-plane parameterization: given 2
parallel planes, rays are encoded by the 2D coordinates of the 2 ray-plane
intersection points. Sadly, these representations are either discontinuous or
cannot represent the full set of rays. Recently, Sitzmann et al. [32] advocate
for the usage of the 6D Plücker coordinate representation, i.e., a 3D point on
a ray coupled with its cross product with a 3D direction. They argue that this
representation covers the whole set of rays and is continuous. Consequently,
they feed it as input to an LFN, and additionally apply Meta-Learning across
scenes to learn a multi-view consistency prior. However, they have not
considered alternative ray representations, MLP architectures or training
procedures, and only tested their method on toy datasets. R2L [36] employs an
even more effective ray encoding by concatenating few points sampled from it,
and proposes a very deep (88 layers) residual MLP network for LFNs. They
resolve the proneness to overfitting by training the MLP with an abundance of
synthetic images generated by a pretrained NeRF having a shallow MLP.
Interestingly, they find that the student LFN model produces significantly
better rendering quality than its teacher NeRF model, while being about 30
times faster. Our work extends LFNs to dynamic deformations, topological
changes and controllability, achieving similar gains over the pretrained
dynamic NeRF teacher models.
##### Knowledge Distillation.
The process of training a student model with synthetic data generated by a
teacher model is called Knowledge Distillation (KD) [4], and it has been
widely used in the vision and language domains [5, 15, 35, 37] as a form of
data augmentation. Like R2L [36], we also use KD for training, however, our
teacher and student models are both dynamic and more complex than their R2L
counterparts.
## 3 Methods
In this section, we present our two solutions for extending LFNs. First, in
Sec. 3.1, we propose DyLiN, supporting dynamic deformations and hyperspace
representations via two respective MLPs. We use KD to train DyLiN with
synthetic data generated by a pretrained dynamic NeRF teacher model. Second,
in Sec. 3.2, we introduce CoDyLiN, which further augments DyLiN with
controllability, via lifting attribute inputs to hyperspace with MLPs, and
masking their hyperspace codes for disentanglement. In this case, we also
train via KD, but the teacher model is a pretrained controllable NeRF.
### 3.1 DyLiN
#### 3.1.1 Network Architecture
Our overall DyLiN architecture $G_{\phi}$ is summarized in Fig. 2. It
processes rays instead of the widely adopted 3D point inputs as follows.
Figure 2: Schematic diagram of our proposed DyLiN architecture. We take a ray
$r=(o,d)$ and time $t$ as input. We deform $r$ into
$r^{\prime}=(o^{\prime},d^{\prime})$, and sample few points $x_{k}$,
$k=1,\dots,K$ along $r^{\prime}$ to encode it (blue). In parallel, we also
lift $r$ and $t$ to the hyperspace code $w$ (green), and concatenate it with
each $x_{k}$. We use the concatenation to regress the RGB color of $r$ at $t$
directly (red).
Specifically, our deformation MLP $T_{\omega}$ maps an input ray $r=(o,d)$ to
canonical space ray $r^{\prime}=(o^{\prime},d^{\prime})$:
$(o^{\prime},d^{\prime})=T_{\omega}(o,d,t).$ (1)
Unlike the pointwise deformation MLP proposed in Nerfies [27], which bends
rays by offsetting their points independently, our MLP outputs rays
explicitly, hence no ray bending occurs. Furthermore, after obtaining
$r^{\prime}$, we encode it by sampling and concatenating $K$ points along it.
Our hyperspace MLP $H_{\psi}$ is similar to $T_{\omega}$, except it outputs a
hyperspace representation $w$:
$w=H_{\psi}(o,d,t).$ (2)
In contrast to HyperNeRF [28], which predicts a hyperspace code $w$ for each
3D point, we use rays and compute a single $w$ for each ray.
Both MLPs further take the index $t$ as input to encode temporal deformations.
Once the $K$ points and $w$ are obtained, we concatenate them and feed the
result into our LFN $R_{\pi}$, which is a deep residual color MLP regressor.
Overall, we can collect the model parameters as $\phi=[\omega,\psi,\pi]$.
Note that without our two MLPs $T_{\omega}$ and $H_{\psi}$, our DyLiN falls
back to the vanilla LFN.
#### 3.1.2 Training Procedure
Our training procedure is composed of 3 phases.
First, we pretrain a dynamic NeRF model $F_{\theta}$ (e.g., D-NeRF [29] or
HyperNeRF [28]) by randomly sampling time $t$ and input ray $r$, and
minimizing the Mean Squared Error (MSE) against the corresponding RGB color of
monocular target video $I$:
$\min_{\theta}\,\mathbb{E}_{t,r=(o,d)}\left[\|F_{\theta}(o,d,t)-I(o,d,t)\|_{2}^{2}\right].$
(3)
Recall, that $F_{\theta}$ is slow, as it performs numerical integration across
the ray $r=(o,d)$.
Second, we employ the newly obtained $F_{\theta^{*}}$ as the teacher model for
our DyLiN student model $G_{\phi}$ via KD. Specifically, we minimize the MSE
loss against the respective pseudo ground truth ray color generated by
$F_{\theta^{*}}$ across $S$ ray samples:
$\min_{\phi}\,\mathbb{E}_{t,r=(o,d)}\left[\|G_{\phi}(o,d,t)-F_{\theta^{*}}(o,d,t)\|_{2}^{2}\right],$
(4)
yielding $G_{\tilde{\phi}}$. Note how this is considerably different from R2L
[36], which uses a static LFN that is distilled from a static NeRF.
Finally, we initialize our student model $G_{\phi}$ with parameters
$\tilde{\phi}$ and fine-tune it using the original real video data:
$\min_{\phi,\,\phi_{0}=\tilde{\phi}}\,\mathbb{E}_{t,r=(o,d)}\left[\|G_{\phi}(o,d,t)-I(o,d,t)\|_{2}^{2}\right],$
(5)
obtaining $\phi^{*}$.
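As a rough sketch of phases two and three, assuming a frozen pretrained teacher `F_teacher`, a random ray/time sampler, and a loader of real video rays (all hypothetical names), the distillation and fine-tuning loops of Eqs. (4) and (5) could look like this:

```python
import torch

def distill_then_finetune(student, F_teacher, sample_rays, real_rays,
                          S=10_000, ft_steps=1_000, lr=5e-4, batch=4_096):
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    # Phase 2 (Eq. 4): regress pseudo ground-truth colors from the teacher.
    for _ in range(max(S // batch, 1)):
        o, d, t = sample_rays(batch)                 # random rays and times
        with torch.no_grad():
            target = F_teacher(o, d, t)              # slow: numerical integration
        loss = ((student(o, d, t) - target) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    # Phase 3 (Eq. 5): fine-tune on the original real video.
    for _ in range(ft_steps):
        o, d, t, rgb = real_rays(batch)              # rays paired with real RGB
        loss = ((student(o, d, t) - rgb) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return student
```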
### 3.2 CoDyLiN
#### 3.2.1 Network Architecture
We further demonstrate that our DyLiN architecture from Sec. 3.1.1 can be
extended to the controllable scenario using attribute inputs with hyperspace
MLPs and attention masks. Our proposed CoDyLiN network $Q_{\tau}$ is depicted
in Fig. 3.
Figure 3: Schematic diagram of our proposed CoDyLiN architecture. We augment
our DyLiN (blue, green, red) by introducing scalar attribute inputs
$\alpha_{i}\in[-1,1]$, $i=1,\dots,n$ and lifting them to their respective
hyperspace codes $w_{i}$ (orange, …, pink MLPs). Next, $M_{i}$ disentangles
$w_{i}$ from $w_{j}$, $j\neq i$ by masking it into $w_{i}^{\prime}$ (orange,
…, pink boxes and bottom insets). We concatenate the sampled points $x_{k}$,
$k=1,\dots,K$ with the $w_{i}^{\prime}$, $i=1,\dots,n$ and predict the RGB
color corresponding to the inputs (red). Arrows from $(o^{\prime},d^{\prime})$
and $w_{0}$ to $M_{i}$ are omitted from the top figure for simplicity. Compare
this with Fig. 2.
Specifically, we start from DyLiN $G_{\phi}$ and add scalar inputs
$\alpha_{i}\in[-1,1]$, $i=1,\dots,n$ next to $o,d,t$. Intuitively, these are
given strength values for specific local attributes, which can be interpolated
continuously. $n$ is the total number of attributes.
Each $\alpha_{i}$ is then processed independently with its own hyperspace MLP
$H_{i,\psi_{i}}$ to yield the hyperspace code $w_{i}$:
$w_{i}=H_{i,\psi_{i}}(o,d,t,\alpha_{i}).$ (6)
Next, we include mask MLP regressors $M_{i,\rho_{i}}$ to generate scalar
attention masks $\hat{m}_{i}\in[0,1]$ for each $w_{i}$ (including $w_{0}=w$):
$\hat{m}_{i}=M_{i,\rho_{i}}(w_{i},w,o,d),\qquad\hat{m}_{0}=1-\sum_{i=1}^{n}{\hat{m}_{i}},\qquad w_{i}^{\prime}=\hat{m}_{i}\cdot w_{i},\quad i=0,\dots,n.$ (7)
This helps the architecture to spatially disentangle (i.e., localize) the
effects of attributes $\alpha_{i}$, while $\hat{m}_{0}$ can be understood as
the space not affected by any attributes.
Finally, we sample $K$ points on the ray similarly to Sec. 3.1.1, concatenate
those with the $w_{i}^{\prime}$ vectors, and process the result further with
LFN $R_{\pi}$. Again, we can use a shorthand for the parameters:
$\tau=[\omega,\psi,\psi_{1},\dots,\psi_{n},\rho_{1},\dots,\rho_{n},\pi]$.
Observe that without our MLPs $H_{i,\psi_{i}}$, $M_{i,\rho_{i}}$,
$i=1,\dots,n$, our CoDyLiN reverts to our simpler DyLiN. Different from CoNeRF
[13], we process rays instead of points, and use the $\alpha_{i}$ as inputs
instead of targets.
#### 3.2.2 Training Procedure
Akin to Sec. 3.1.2, we split training into pretraining and distillation steps,
but omit fine-tuning.
First, we pretrain a CoNeRF model $E_{\nu}$ [13] by randomly sampling
$(t,r,i)$, against 3 ground truths: ray color, attribute values $\alpha_{i}$
and 2D per-attribute masks $m_{2D,i}$. This yields us $E_{\nu^{*}}$. For
brevity, we omit the details of this step, and kindly forward the reader to
Section 3 in [13].
Second, we distill from our teacher CoNeRF model $E_{\nu^{*}}$ into our
student CoDyLiN $Q_{\tau}$ by randomly sampling
$t,r,\alpha_{1},\dots,\alpha_{n}$, and minimizing the MSE against 2 pseudo
ground truths, i.e., ray colors and 2D masks $\bar{m}_{2D,i}$:
$\min_{\tau}\,\mathbb{E}_{t,r=(o,d)}\Big[\|Q_{\tau}(o,d,t,\alpha_{1:n})-\bar{E}_{\nu^{*}}(o,d,t,\alpha_{1:n})\|_{2}^{2}+\lambda_{m}\cdot\sum_{i=0}^{n}{\|\hat{m}_{i}(o,d,t,\alpha_{i})-\bar{m}_{2D}(o,d,t,\alpha_{1:n})_{i}\|_{2}^{2}}\Big],$ (8)
where $\bar{E}_{\nu}$ is identical to $E_{\nu}$ except for taking
$\alpha_{1:n}=[\alpha_{1},\dots,\alpha_{n}]$ as input and outputting the masks
$\bar{m}_{2D,i}$, $i=0,\dots,n$. We denote the result of the optimization as
$Q_{\tau^{*}}$.
We highlight that our teacher and student models are both controllable in this
setup.
## 4 Experimental Setup
### 4.1 Datasets
To test our hypotheses, we performed experiments on three types of dynamic
scenes: synthetic, real and real controllable.
Synthetic Scenes. We utilized the synthetic $360^{\circ}$ dynamic dataset
introduced by [29], which contains 8 animated objects with complicated
geometry and realistic non-Lambertian materials. Each dynamic scene consists
of $50$ to $200$ training images and $20$ testing images. We used $400\times
400$ image resolution. We applied D-NeRF [29] as our teacher model with the
publicly available pretrained weights.
Real Scenes. We collected real dynamic data from $2$ sources. First, we
utilized 5 topologically varying scenes provided by [28] (Broom, 3D Printer,
Chicken, Americano and Banana), which were captured by a rig encompassing a
pole with two Google Pixel 3 phones rigidly attached roughly 16 cm apart. Second, we collected human facial videos using
an iPhone 13 Pro camera. We rendered both sets at $960\times 540$ image
resolution. We pretrained a HyperNeRF [28] teacher model from scratch for each
scene.
Real Controllable Scenes. We borrowed 2 real controllable scenes from [13]
(closing/opening eyes/mouth, and transformer), which are captured either with
a Google Pixel 3a or an Apple iPhone 13 Pro, and contain annotations over
various attributes. We applied image resolution of $480\times 270$ pixels. We
pretrained a CoNeRF [13] teacher model from scratch per scene.
### 4.2 Settings
Throughout our experiments, we use the settings listed below, many of which
follow [36].
In order to retain efficiency, we define $T_{\omega}$ and $H_{\psi}$ to be
small MLPs, with $T_{\omega}$ consisting of $7$ layers of $128$ units with
$r^{\prime}\in\mathbb{R}^{6}$, and $H_{\psi}$ having $6$ layers of $64$ units
with $w\in\mathbb{R}^{8}$. Then, we use $K=16$ sampled points to represent
rays, where sampling is done randomly during training and evenly spaced during
inference.
Contrary to $T_{\omega}$ and $H_{\psi}$, our LFN $R_{\pi}$ is a very deep
residual color MLP regressor, containing $88$ layers with $256$ units per
layer, in order to have enough capacity to learn the video generation process.
We generate rays within Eqs. 3, 4, 5 and 8 by sampling ray origins
$o=(x_{o},y_{o},z_{o})$ and normalized directions $d=(x_{d},y_{d},z_{d})$
randomly from the uniform distribution $U$ as follows:
$x_{o}\sim U(x_{o}^{min},x_{o}^{max}),\qquad x_{d}\sim U(x_{d}^{min},x_{d}^{max}),$ (9)
$y_{o}\sim U(y_{o}^{min},y_{o}^{max}),\qquad y_{d}\sim U(y_{d}^{min},y_{d}^{max}),$ (10)
$z_{o}\sim U(z_{o}^{min},z_{o}^{max}),\qquad z_{d}\sim U(z_{d}^{min},z_{d}^{max}),$ (11)
where the $min,max$ bounds of the 6 intervals are inferred from the original
training video. In addition to uniform sampling, we also apply the hard
example mining strategy suggested in [36] to focus on fine-grained details. We
used $S=10{,}000$ training samples during KD in (4).
Subsequently, we also randomly sample time step $t$ uniformly from the unit
interval: $t\sim U(0,1)$.
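A sketch of this ray generator, assuming the per-axis bounds have already been extracted from the training video, might read:

```python
import torch

def sample_rays(batch, o_lo, o_hi, d_lo, d_hi):
    # o_lo, o_hi, d_lo, d_hi: length-3 tensors of per-axis min/max bounds
    # inferred from the original training video (Eqs. 9-11).
    o = o_lo + (o_hi - o_lo) * torch.rand(batch, 3)
    d = d_lo + (d_hi - d_lo) * torch.rand(batch, 3)
    d = d / d.norm(dim=-1, keepdim=True)   # keep directions normalized
    t = torch.rand(batch, 1)               # t ~ U(0, 1)
    return o, d, t
```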
Optionally, for our CoDyLiN experiments, we define each $H_{i,\psi_{i}}$ to be
a small MLP having $5$ layers of $128$ units with $w_{i}\in\mathbb{R}^{8}$.
During training, we uniformly sample attributes within $[-1,1]$:
$\alpha_{i}\sim U(-1,1)$, and let $\lambda_{m}=0.1$.
During training, we used Adam [14] with learning rate
$5\times 10^{-4}$ and batch size $4{,}096$.
We performed all experiments on single NVIDIA A100 GPUs.
### 4.3 Baseline Models
For testing our methods, we compared quality and speed against several
baseline models, including NeRF [23], NV [19], NSFF [16], Nerfies [27],
HyperNeRF [28], two variants of TiNeuVox [8], DirectVoxGo [33], Plenoxels [9],
T-NeRF and D-NeRF [29], as well as CoNeRF [13].
Figure 4: Our two ablated baseline models, omitting components of our DyLiN.
(a) Without our two proposed MLPs. (b) Pointwise deformation MLP only,
predicting offsets jointly.
In addition, we performed an ablation study by comparing against 2 simplified
versions of our DyLiN architecture. First, we omitted both of our deformation
and hyperspace MLPs and simply concatenated the time step $t$ to the sampled
ray points (essentially resulting in a dynamic R2L). This method is
illustrated in Fig. 4(a). Second, we employed a pointwise deformation MLP ($5$
layers of $256$ units) inspired by [29], which deforms points along a ray by
predicting their offsets jointly, i.e., it can bend rays. This is in contrast to
our DyLiN, which deforms rays explicitly without bending and also applies a
hyperspace MLP. This scheme is depicted in Fig. 4(b). In both baselines, the
deep residual color MLP regressors were kept intact. Next, we also tested the
effects of our fine-tuning procedure from (5) by training all of our models
both with and without it. Lastly, we assessed the dependence on the number of
sampled points along rays $K$ and on the number of training samples $S$ during
KD in (4).
### 4.4 Evaluation Metrics
For quantitatively evaluating the quality of generated images, we calculated
the Peak Signal-to-Noise Ratio (PSNR) [12] in decibels ($\mathrm{dB}$), the
Structural Similarity Index (SSIM) [40, 25], the Multi-Scale SSIM (MS-SSIM)
[41] and the Learned Perceptual Image Patch Similarity (LPIPS) [44] metrics.
Intuitively, PSNR is a pixelwise score, while SSIM and MS-SSIM also take pixel
correlations and multiple scales into account, respectively, yet all of these
tend to favor blurred images. LPIPS compares deep neural representations of
images and is much closer to human perception, promoting semantically better
and sharper images.
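As a quick reference, PSNR is derived directly from the mean squared error; a minimal sketch for images scaled to $[0,1]$:

```python
import torch

def psnr(pred, target, max_val=1.0):
    # Peak Signal-to-Noise Ratio in dB; higher is better.
    mse = ((pred - target) ** 2).mean()
    return 10.0 * torch.log10(max_val ** 2 / mse)
```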
Furthermore, for testing space and time complexity, we computed the storage
size of parameters in megabytes (MB) and measured the wall-clock time in
milliseconds (ms) while rendering the synthetic Lego scene with each model.
## 5 Results
### 5.1 Quantitative Results
Tab. 1 and Tab. 2 contain our quantitative results for reconstruction quality
on synthetic and real dynamic scenes, accordingly. We found that among prior
works, TiNeuVox-B performed the best on synthetic scenes with respect to each
metric. On real scenes, however, NSFF took the lead. Despite having strong
metrics, NSFF is qualitatively poor and slow. Surprisingly, during ablation,
even our most basic model (DyLiN without the two MLPs from Fig. 4(a)) could
generate perceptually better looking images than TiNeuVox-B, thanks to the
increased training dataset size via KD. Incorporating the MLPs $T_{\omega}$
and $H_{\psi}$ into the model each improved results slightly. Interestingly,
fine-tuning on real data as in (5) gave a substantial boost. In addition, our
relative PSNR improvement over the teacher model (Tab. 1: $+1.93$ dB, up to $+3.16$ dB per scene; Tab. 2: $+2.7$ dB, up to $+13.14$ dB) is better than that of R2L [36] ($+1.4$ dB, up to $+2.8$ dB).
Table 1: Quantitative results on synthetic dynamic scenes. Notations: Multi-
Layer Perceptron (MLP), PD (pointwise deformation), FT (fine-tuning). We
utilized D-NeRF as the teacher model for our DyLiNs. The winning numbers are
highlighted in bold.
Method | PSNR$\uparrow$ | SSIM$\uparrow$ | LPIPS$\downarrow$
---|---|---|---
NeRF[23] | 19.00 | 0.8700 | 0.1825
DirectVoxGo[33] | 18.61 | 0.8538 | 0.1688
Plenoxels[9] | 20.24 | 0.8688 | 0.1600
T-NeRF[29] | 29.51 | 0.9513 | 0.0788
D-NeRF[29] | 30.50 | 0.9525 | 0.0663
TiNeuVox-S[8] | 30.75 | 0.9550 | 0.0663
TiNeuVox-B[8] | 32.67 | 0.9725 | 0.0425
DyLiN, w/o two MLPs, w/o FT (ours) | 31.16 | 0.9931 | 0.0281
DyLiN, w/o two MLPs (ours) | 32.07 | 0.9937 | 0.0196
DyLiN, PD MLP only, w/o FT (ours) | 31.26 | 0.9932 | 0.0279
DyLiN, PD MLP only (ours) | 31.24 | 0.9940 | 0.0189
DyLiN, w/o FT (ours) | 31.37 | 0.9933 | 0.0275
DyLiN (ours) | 32.43 | 0.9943 | 0.0184
Table 2: Quantitative results on real dynamic scenes. Notations: Multi-Layer
Perceptron (MLP), PD (pointwise deformation), FT (fine-tuning). We utilized
HyperNeRF as the teacher model for our DyLiNs. The winning numbers are
highlighted in bold.
Method | PSNR$\uparrow$ | MS-SSIM$\uparrow$
---|---|---
NeRF[23] | 20.1 | 0.745
NV[19] | 16.9 | 0.571
NSFF[16] | 26.3 | 0.916
Nerfies[27] | 22.2 | 0.803
HyperNeRF[28] | 22.4 | 0.814
TiNeuVox-S[8] | 23.4 | 0.813
TiNeuVox-B[8] | 24.3 | 0.837
DyLiN, w/o two MLPs, w/o FT (ours) | 23.8 | 0.882
DyLiN, w/o two MLPs (ours) | 24.2 | 0.894
DyLiN, PD MLP only, w/o FT (ours) | 23.9 | 0.885
DyLiN, PD MLP only (ours) | 24.6 | 0.903
DyLiN, w/o FT (ours) | 24.0 | 0.886
DyLiN (ours) | 25.1 | 0.910
Table 3: Quantitative results for space and time complexity on the synthetic
Lego scene. Notations: Multi-Layer Perceptron (MLP), PD (pointwise
deformation), FT (fine-tuning).
Method | Storage (MB) | Wall-clock time (ms)
---|---|---
NeRF[23] | 5.00 | 2,950.0
DirectVoxGo[33] | 205.00 | 1,090.0
Plenoxels[9] | 717.00 | 50.0
NV[19] | 439.00 | 74.9
D-NeRF[29] | 4.00 | 8,150.0
NSFF[16] | 14.17 | 5,450.0
HyperNeRF[28] | 15.36 | 2,900.0
TiNeuVox-S[8] | 23.70 | 3,280.0
TiNeuVox-B[8] | 23.70 | 6,920.0
DyLiN, w/o two MLPs, w/o FT (ours) | 68.04 | 115.4
DyLiN, w/o two MLPs (ours) | 68.04 | 115.4
DyLiN, PD MLP only, w/o FT (ours) | 72.60 | 115.7
DyLiN, PD MLP only (ours) | 72.60 | 115.7
DyLiN, w/o FT (ours) | 70.11 | 116.0
DyLiN (ours) | 70.11 | 116.0
Tab. 3 shows quantitative results for space and time complexity on the
synthetic Lego scene. We found that there is a trade-off between the two
metrics, as prior works are typically optimized for just one of those. In
contrast, all of our proposed DyLiN variants settle at the golden mean between
the two extremes. When compared to the strongest baseline TiNeuVox-B, our
method requires $3$ times as much storage but is nearly 2 orders of magnitude
faster. Plenoxels and NV, the only methods that require less computation than
ours, perform much worse in quality.
Fig. 5 reports quantitative ablation results for the dependence on the number of
sampled points per ray $K$ and on the number of training samples during KD
$S$, performed on the synthetic Standup scene. For the dependence on $K$ (Fig.
5(a)), we found no significant differences between test set
PSNR scores for $K\in\{4,8,16,32\}$, while we encountered overfitting for
$K\in\{64,128\}$. This justified our choice of $K=16$ for the rest of our
experiments. Regarding the effect of $S$ (Fig. 5(b)), overfitting occurred for
smaller sample sizes, including $S\in\{100, 500, 1{,}000, 5{,}000\}$. The test and training
set PSNR scores were much closer for $S=10{,}000$, validating our
general setting.
Figure 5: Quantitative results for ablation on the synthetic Standup scene.
(a) Dependence on the number of sampled points $K$ across ray $r^{\prime}$.
(b) Dependence on the number of training samples $S$ during Knowledge
Distillation (KD).
Our controllable numerical results are collected in Tab. 4. In short, our
CoDyLiN was able to considerably outperform CoNeRF with respect to MS-SSIM and
speed.
Table 4: Quantitative results on real controllable scenes. We utilized CoNeRF
as the teacher model for our CoDyLiN. The winning numbers are highlighted in
bold.
Method | Eyes/Mouth: PSNR$\uparrow$ | MS-SSIM$\uparrow$ | Wall-clock time (ms) | Transformer: PSNR$\uparrow$ | MS-SSIM$\uparrow$ | Wall-clock time (ms)
---|---|---|---|---|---|---
CoNeRF[13] | 21.4658 | 0.7458 | 6,230.0 | 23.0319 | 0.8878 | 4,360.0
CoDyLiN (ours) | 21.4655 | 0.9510 | 116.3 | 23.5882 | 0.9779 | 116.0
### 5.2 Qualitative Results
Fig. 6 and Fig. 7 depict qualitative results for reconstruction quality on
synthetic and real dynamic scenes, respectively. Both show that our full DyLiN
model generated the sharpest, most detailed images, as it was able to capture
cloth wrinkles (Fig. 6(j)) and the eye of the chicken (Fig. 7(e)). The
competing methods tended to oversmooth these features. We also ablated the
effect of omitting fine-tuning (Fig. 6(i), Fig. 7(d)), and results declined
considerably.
For the sake of completeness, Fig. 8 illustrates qualitative ablation results
for our model components on real dynamic scenes. We found that sequentially
adding our two proposed MLPs $T_{\omega}$ and $H_{\psi}$ improves the
reconstruction, e.g., the gum between the teeth (Fig. 8(e)) and the fingers
(Fig. 8(j)) become more and more apparent. Without the MLPs, these parts were
heavily blurred (Fig. 8(c), Fig. 8(h)).
We kindly ask readers to refer to the supplementary material for CoDyLiN’s
qualitative results.
[Figure 6 panels. Hook: (a) Ground Truth, (b) D-NeRF [29], (c) TiNeuVox [8], (d) Ours-1, (e) Ours-2. Jumping Jacks: (f) Ground Truth, (g) D-NeRF [29], (h) TiNeuVox [8], (i) Ours-1, (j) Ours-2]
Figure 6: Qualitative results on synthetic dynamic scenes. We compare our
DyLiN (Ours-1, Ours-2) with the ground truth, the D-NeRF teacher model and
TiNeuVox. Ours-1 and Ours-2 were trained without and with fine-tuning on the
original data, respectively.
[Figure 7 panels. Chicken: (a) Ground Truth, (b) HyperNeRF [28], (c) TiNeuVox [8], (d) Ours-1, (e) Ours-2]
Figure 7: Qualitative results on a real dynamic scene. We compare our DyLiN
(Ours-1, Ours-2) with the ground truth, the HyperNeRF teacher model and
TiNeuVox. Ours-1 and Ours-2 were trained without and with fine-tuning on the
original data, respectively.
[Figure 8 panels. Expression: (a) Ground Truth, (b) HyperNeRF [28], (c) Ours-1, (d) Ours-2, (e) Ours-3. Peel Banana: (f) Ground Truth, (g) HyperNeRF [28], (h) Ours-1, (i) Ours-2, (j) Ours-3]
Figure 8: Qualitative results for ablation on real dynamic scenes. We compare
our DyLiN (Ours-1, Ours-2, Ours-3) with the ground truth and the HyperNeRF
teacher model. Ours-1 was trained without our two MLPs. Ours-2 was trained
with pointwise deformation MLP only. Ours-3 is our full model with both of our
proposed two MLPs.
## 6 Conclusion
We proposed two architectures for extending LFNs to dynamic scenes.
Specifically, we introduced DyLiN, which models ray deformations without
bending and lifts whole rays into a hyperspace, and CoDyLiN, which allows for
controllable attribute inputs. We trained both techniques via knowledge
distillation from various dynamic NeRF teacher models. We found that DyLiN
produces state-of-the-art quality even without ray bending and CoDyLiN
outperforms its teacher model, while both are nearly 2 orders of magnitude
faster than their strongest baselines.
Our methods do not come without limitations, however. Most importantly, they
focus on speeding up inference, as they require pretrained teacher models,
which can be expensive to obtain. In some experiments, our solutions were
outperformed in terms of the PSNR score. Using the winners as teacher models
could improve performance. Additionally, distillation from multiple teacher
models or joint training of the teacher and student models are also yet to be
explored. Moreover, we currently represent rays implicitly by sampling $K$
points along them, but increasing this number can lead to overfitting. An
explicit ray representation may be more effective. Finally, voxelizing and
quantizing our models could improve efficiency.
Our results are encouraging steps towards achieving real-time volumetric
rendering and animation, and we hope that our work will contribute to the
progress in these areas.
## Acknowledgements
This research was supported partially by Fujitsu. We thank Chaoyang Wang from
Carnegie Mellon University for the helpful discussion.
## References
* [1] E.H. Adelson and J.Y.A. Wang. Single Lens Stereo with a Plenoptic Camera. IEEE Trans. Pattern Anal. Mach. Intell., 14(2):99–106, 1992.
* [2] Jonathan T Barron, Ben Mildenhall, Matthew Tancik, Peter Hedman, Ricardo Martin-Brualla, et al. Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields. In Proc. IEEE/CVF ICCV, pages 5855–5864, 2021.
* [3] James R Bergen and Edward H Adelson. The Plenoptic Function and the Elements of Early Vision. Comput. Model. Vis. Process., 1:8, 1991.
* [4] Cristian Buciluǎ, Rich Caruana, and Alexandru Niculescu-Mizil. Model Compression. In Proc. 12th ACM SIGKDD ICKDDM, pages 535–541, 2006.
* [5] Guobin Chen, Wongun Choi, Xiang Yu, Tony Han, and Manmohan Chandraker. Learning Efficient Object Detection Models with Knowledge Distillation. Adv. NeurIPS, 30, 2017.
* [6] Zhiqin Chen. IM-NET: Learning implicit fields for generative shape modeling. PhD thesis, Applied Sciences: School of Computing Science, 2019.
* [7] Kangle Deng, Andrew Liu, Jun-Yan Zhu, and Deva Ramanan. Depth-supervised NeRF: Fewer Views and Faster Training for Free. In Proc. IEEE/CVF CVPR, pages 12882–12891, 2022.
* [8] Jiemin Fang, Taoran Yi, Xinggang Wang, Lingxi Xie, Xiaopeng Zhang, et al. Fast Dynamic Radiance Fields with Time-Aware Neural Voxels. arXiv:2205.15285, 2022.
* [9] Sara Fridovich-Keil, Alex Yu, Matthew Tancik, Qinhong Chen, Benjamin Recht, et al. Plenoxels: Radiance Fields Without Neural Networks. In Proc. IEEE/CVF CVPR, pages 5501–5510, 2022.
* [10] Guy Gafni, Justus Thies, Michael Zollhofer, and Matthias Nießner. Dynamic Neural Radiance Fields for Monocular 4D Facial Avatar Reconstruction. In Proc. IEEE/CVF CVPR, pages 8649–8658, 2021.
* [11] Steven J Gortler, Radek Grzeszczuk, Richard Szeliski, and Michael F Cohen. The Lumigraph. In Proc. 23rd CGIT, pages 43–54, 1996.
* [12] Alain Hore and Djemel Ziou. Image quality metrics: PSNR vs. SSIM. In 20th ICPR, pages 2366–2369. IEEE, 2010.
* [13] Kacper Kania, Kwang Moo Yi, Marek Kowalski, Tomasz Trzciński, and Andrea Tagliasacchi. CoNeRF: Controllable Neural Radiance Fields. In Proc. IEEE/CVF CVPR, pages 18623–18632, 2022.
* [14] Diederik P Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. arXiv:1412.6980, 2014.
* [15] Marc Levoy and Pat Hanrahan. Light Field Rendering. In Proc. 23rd CGIT, pages 31–42, 1996.
* [16] Zhengqi Li, Simon Niklaus, Noah Snavely, and Oliver Wang. Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes. In Proc. IEEE/CVF CVPR, 2021.
* [17] David B Lindell, Julien NP Martel, and Gordon Wetzstein. AutoInt: Automatic Integration for Fast Neural Volume Rendering. In Proc. IEEE/CVF CVPR, pages 14556–14565, 2021.
* [18] Lingjie Liu, Jiatao Gu, Kyaw Zaw Lin, Tat-Seng Chua, and Christian Theobalt. Neural Sparse Voxel Fields. Adv. NeurIPS, 33:15651–15663, 2020.
* [19] Stephen Lombardi, Tomas Simon, Jason Saragih, Gabriel Schwartz, Andreas Lehrmann, et al. Neural Volumes: Learning Dynamic Renderable Volumes from Images. ACM Trans. Graph., 38(4):65:1–65:14, July 2019.
* [20] Ricardo Martin-Brualla, Noha Radwan, Mehdi SM Sajjadi, Jonathan T Barron, Alexey Dosovitskiy, et al. NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections. In Proc. IEEE/CVF CVPR, pages 7210–7219, 2021.
* [21] Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. Occupancy Networks: Learning 3D Reconstruction in Function Space. In Proc. IEEE/CVF CVPR, pages 4460–4470, 2019.
* [22] Ben Mildenhall, Peter Hedman, Ricardo Martin-Brualla, Pratul P Srinivasan, and Jonathan T Barron. NeRF in the Dark: High Dynamic Range View Synthesis from Noisy Raw Images. In Proc. IEEE/CVF CVPR, pages 16190–16199, 2022.
* [23] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, et al. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. Commun. ACM, 65(1):99–106, 2021.
* [24] Thomas Neff, Pascal Stadlbauer, Mathias Parger, Andreas Kurz, Joerg H Mueller, et al. DONeRF: Towards Real-Time Rendering of Compact Neural Radiance Fields Using Depth Oracle Networks. In CGF, volume 40, pages 45–59. Wiley Online Library, 2021.
* [25] Augustus Odena, Christopher Olah, and Jonathon Shlens. Conditional Image Synthesis with Auxiliary Classifier GANs. In ICML, pages 2642–2651. PMLR, 2017.
* [26] Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation. In Proc. IEEE/CVF CVPR, pages 165–174, 2019.
* [27] Keunhong Park, Utkarsh Sinha, Jonathan T Barron, Sofien Bouaziz, Dan B Goldman, et al. Nerfies: Deformable Neural Radiance Fields. In Proc. IEEE/CVF ICCV, pages 5865–5874, 2021.
* [28] Keunhong Park, Utkarsh Sinha, Peter Hedman, Jonathan T Barron, Sofien Bouaziz, et al. HyperNeRF: A Higher-Dimensional Representation for Topologically Varying Neural Radiance Fields. ACM Trans. Graph., 40(6):1–12, 2021.
* [29] Albert Pumarola, Enric Corona, Gerard Pons-Moll, and Francesc Moreno-Noguer. D-NeRF: Neural Radiance Fields for Dynamic Scenes. In Proc. IEEE/CVF CVPR, pages 10318–10327, 2021.
* [30] Daniel Rebain, Wei Jiang, Soroosh Yazdani, Ke Li, Kwang Moo Yi, et al. DeRF: Decomposed Radiance Fields. In Proc. IEEE/CVF CVPR, pages 14153–14161, 2021.
* [31] Christian Reiser, Songyou Peng, Yiyi Liao, and Andreas Geiger. KiloNeRF: Speeding up Neural Radiance Fields with Thousands of Tiny MLPs. In Proc. IEEE/CVF ICCV, pages 14335–14345, 2021.
* [32] Vincent Sitzmann, Semon Rezchikov, William T. Freeman, Joshua B. Tenenbaum, and Fredo Durand. Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering. In Proc. NeurIPS, 2021.
* [33] Cheng Sun, Min Sun, and Hwann-Tzong Chen. Direct Voxel Grid Optimization: Super-fast Convergence for Radiance Fields Reconstruction. In Proc. IEEE/CVF CVPR, 2022.
* [34] Edgar Tretschk, Ayush Tewari, Vladislav Golyanik, Michael Zollhöfer, Christoph Lassner, et al. Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Dynamic Scene From Monocular Video. In Proc. IEEE/CVF ICCV, pages 12959–12970, 2021.
* [35] Huan Wang, Yijun Li, Yuehai Wang, Haoji Hu, and Ming-Hsuan Yang. Collaborative Distillation for Ultra-Resolution Universal Style Transfer. In Proc. IEEE/CVF CVPR, pages 1860–1869, 2020.
* [36] Huan Wang, Jian Ren, Zeng Huang, Kyle Olszewski, Menglei Chai, et al. R2L: Distilling Neural Radiance Field to Neural Light Field for Efficient Novel View Synthesis. arXiv:2203.17261, 2022.
* [37] Lin Wang and Kuk-Jin Yoon. Knowledge distillation and student-teacher learning for visual intelligence: A review and new outlooks. IEEE Trans. Pattern Anal. Mach. Intell., 2021.
* [38] Liao Wang, Jiakai Zhang, Xinhang Liu, Fuqiang Zhao, Yanshun Zhang, et al. Fourier PlenOctrees for Dynamic Radiance Field Rendering in Real-time. In Proc. IEEE/CVF CVPR, pages 13524–13534, 2022.
* [39] Peng Wang, Lingjie Liu, Yuan Liu, Christian Theobalt, Taku Komura, et al. NeuS: Learning Neural Implicit Surfaces by Volume Rendering for Multi-view Reconstruction. Adv. NeurIPS, 34:27171–27183, 2021.
* [40] Zhou Wang, Alan C Bovik, Hamid R Sheikh, and Eero P Simoncelli. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process., 13(4):600–612, 2004.
* [41] Zhou Wang, Eero P Simoncelli, and Alan C Bovik. Multiscale structural similarity for image quality assessment. In The 37th Asilomar SSC, volume 2, pages 1398–1402. IEEE, 2003.
* [42] Qiangeng Xu, Zexiang Xu, Julien Philip, Sai Bi, Zhixin Shu, et al. Point-NeRF: Point-based Neural Radiance Fields. In Proc. IEEE/CVF CVPR, pages 5438–5448, 2022.
* [43] Alex Yu, Ruilong Li, Matthew Tancik, Hao Li, Ren Ng, et al. PlenOctrees for Real-time Rendering of Neural Radiance Fields. In Proc. IEEE/CVF ICCV, pages 5752–5761, 2021.
* [44] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. In Proc. IEEE/CVF CVPR, pages 586–595, 2018.
# Accelerating Distributed Stochastic Optimization via Self-Repellent Random
Walks
Jie Hu∗
Department of Electrical and Computer Engineering
North Carolina State University
<EMAIL_ADDRESS>
&Vishwaraj Doshi∗
Data Science & Advanced Analytics
IQVIA Inc.
<EMAIL_ADDRESS>
&Do Young Eun
Department of Electrical and Computer Engineering
North Carolina State University
<EMAIL_ADDRESS>
###### Abstract
We study a family of distributed stochastic optimization algorithms where
gradients are sampled by a token traversing a network of agents in random-walk
fashion. Typically, these random-walks are chosen to be Markov chains that
asymptotically sample from a desired target distribution, and play a critical
role in the convergence of the optimization iterates. In this paper, we take a
novel approach by replacing the standard linear Markovian token by one which
follows a non-linear Markov chain - namely the Self-Repellent Random Walk
(SRRW). Defined for any given ‘base’ Markov chain, the SRRW, parameterized by
a positive scalar $\alpha$, is less likely to transition to states that were
highly visited in the past, thus the name. In the context of MCMC sampling on
a graph, a recent breakthrough in Doshi et al. (2023) shows that the SRRW
achieves $O(1/\alpha)$ decrease in the asymptotic variance for sampling. We
propose the use of a ‘generalized’ version of the SRRW to drive token
algorithms for distributed stochastic optimization in the form of stochastic
approximation, termed SA-SRRW. We prove that the optimization iterate errors
of the resulting SA-SRRW converge to zero almost surely and prove a central
limit theorem, deriving the explicit form of the resulting asymptotic
covariance matrix corresponding to iterate errors. This asymptotic covariance
is always smaller than that of an algorithm driven by the base Markov chain
and decreases at rate $O(1/\alpha^{2})$ - the performance benefit of using
SRRW thereby amplified in the stochastic optimization context. Empirical
results support our theoretical findings.
∗Equal contributors.
## 1 Introduction
Stochastic optimization algorithms solve optimization problems of the form
${\bm{\theta}}^{*}\in\operatorname*{arg\,min}_{{\bm{\theta}}\in{\mathbb{R}}^{d}}f({\bm{\theta}}),\quad\text{where}~{}f({\bm{\theta}})\triangleq\mathbb{E}_{X\sim{\bm{\mu}}}\left[F({\bm{\theta}},X)\right]=\sum_{i\in{\mathcal{N}}}\mu_{i}F({\bm{\theta}},i),$ (1)
with the objective function $f:\mathbb{R}^{d}\to\mathbb{R}$ and $X$ taking
values in a finite state space ${\mathcal{N}}$ with distribution
${\bm{\mu}}\triangleq[\mu_{i}]_{i\in{\mathcal{N}}}$. Leveraging partial
gradient information per iteration, these algorithms have been recognized for
their scalability and efficiency with large datasets (Bottou et al., 2018;
Even, 2023). For any given noise sequence $\\{X_{n}\\}_{n\geq
0}\subset{\mathcal{N}}$, and step size sequence $\\{\beta_{n}\\}_{n\geq
0}\subset\mathbb{R}_{+}$, most stochastic optimization algorithms can be
classified as stochastic approximations (SA) of the form
${\bm{\theta}}_{n+1}={\bm{\theta}}_{n}+\beta_{n+1}H({\bm{\theta}}_{n},X_{n+1}),~{}~{}~{}\forall~{}n\geq
0,$ (2)
where, roughly speaking, $H({\bm{\theta}},i)$ contains gradient information
$\nabla_{{\bm{\theta}}}F({\bm{\theta}},i)$, such that ${\bm{\theta}}^{*}$ solves
${\mathbf{h}}({\bm{\theta}})\triangleq\mathbb{E}_{X\sim{\bm{\mu}}}[H({\bm{\theta}},X)]=\sum_{i\in{\mathcal{N}}}\mu_{i}H({\bm{\theta}},i)={\bm{0}}$.
Such SA iterations include the well-known stochastic gradient descent (SGD),
stochastic heavy ball (SHB) (Gadat et al., 2018; Li et al., 2022), and some
SGD-type algorithms employing additional auxiliary variables (Barakat et al.,
2021) (further illustrations of stochastic optimization algorithms of the form (2) are deferred to Appendix A). These algorithms typically have the
stochastic noise term $X_{n}$ generated by i.i.d. random variables with
probability distribution ${\bm{\mu}}$ in each iteration. In this paper, we
study a stochastic optimization algorithm where the noise sequence governing
access to the gradient information is generated from general stochastic
processes in place of i.i.d. random variables.
This is commonly the case in distributed learning, where $\\{X_{n}\\}$ is a
(typically Markovian) random walk, and should asymptotically be able to sample
the gradients from the desired probability distribution ${\bm{\mu}}$. This is
equivalent to saying that the random walker’s empirical distribution converges
to ${\bm{\mu}}$ almost surely (a.s.); that is,
${\mathbf{x}}_{n}\triangleq\frac{1}{n+1}\sum_{k=0}^{n}{\bm{\delta}}_{X_{k}}\xrightarrow[n\to\infty]{a.s.}{\bm{\mu}}$
for any initial $X_{0}\in{\mathcal{N}}$, where ${\bm{\delta}}_{X_{k}}$ is the
delta measure whose $X_{k}$’th entry is one, the rest being zero. Such
convergence is most commonly achieved by employing the Metropolis Hastings
random walk (MHRW) which can be designed to sample from any target measure
${\bm{\mu}}$ and implemented in a scalable manner (Sun et al., 2018).
Unsurprisingly, convergence characteristics of the employed Markov chain
affect that of the SA sequence (2), and appear in both finite-time and
asymptotic analyses. Finite-time bounds typically involve the second largest
eigenvalue in modulus (SLEM) of the Markov chain’s transition kernel
${\mathbf{P}}$, which is critically connected to the mixing time of a Markov
chain (Levin & Peres, 2017); whereas asymptotic results such as central limit
theorems (CLT) involve asymptotic covariance matrices that embed information
regarding the entire spectrum of ${\mathbf{P}}$, i.e., all eigenvalues as well
as eigenvectors (Brémaud, 2013), which are key to understanding the sampling
efficiency of a Markov chain. Thus, the choice of random walker can
significantly impact the performance of (2), and simply ensuring that it
samples from ${\bm{\mu}}$ asymptotically is not enough to achieve optimal
algorithmic performance. In this paper, we take a closer look at the
distributed stochastic optimization problem through the lens of a non-linear
Markov chain, known as the Self Repellent Random Walk (SRRW), which was shown
in Doshi et al. (2023) to achieve asymptotically minimal sampling variance for
large values of $\alpha$, a positive scalar controlling the strength of the
random walker’s self-repellence behaviour. Our proposed modification of (2)
can be implemented within the settings of decentralized learning applications
in a scalable manner, while also enjoying significant performance benefit over
distributed stochastic optimization algorithms driven by vanilla Markov
chains.
Token Algorithms for Decentralized Learning. In decentralized learning, agents
like smartphones or IoT devices, each containing a subset of data,
collaboratively train models on a graph
${\mathcal{G}}({\mathcal{N}},{\mathcal{E}})$ by sharing information locally
without a central server (McMahan et al., 2017). In this setup,
$N=|{\mathcal{N}}|$ agents correspond to nodes
$i\in{\mathcal{N}}$, and an edge $(i,j)\in{\mathcal{E}}$ indicates
direct communication between agents $i$ and $j$. This decentralized approach
offers several advantages compared to the traditional centralized learning
setting, promoting data privacy and security by eliminating the need for raw
data to be aggregated centrally and thus reducing the risk of data breach or
misuse (Bottou et al., 2018; Nedic, 2020). Additionally, decentralized
approaches are more scalable and can handle vast amounts of heterogeneous data
from distributed agents without overwhelming a central server, alleviating
concerns about single point of failure (Vogels et al., 2021).
Among decentralized learning approaches, the class of ‘Token’ algorithms can
be expressed as stochastic approximation iterations of the type (2), wherein
the sequence $\\{X_{n}\\}$ is realized as the sample path of a token that
stochastically traverses the graph ${\mathcal{G}}$, carrying with it the
iterate ${\bm{\theta}}_{n}$ for any time $n\geq 0$ and allowing each visited
node (agent) to incrementally update ${\bm{\theta}}_{n}$ using locally
available gradient information. Token algorithms have gained popularity in
recent years (Hu et al., 2022; Triastcyn et al., 2022; Hendrikx, 2023), and
are provably more communication efficient (Even, 2023) when compared to
consensus-based algorithms - another popular approach for solving distributed
optimization problems (Boyd et al., 2006; Morral et al., 2017; Olshevsky,
2022). The construction of token algorithms means that they do not suffer from
expensive costs of synchronization and communication that are typical of
consensus-based approaches, where all agents (or a subset of agents selected
by a coordinator (Boyd et al., 2006; Wang et al., 2019)) on the graph are
required to take simultaneous actions, such as communicating on the graph at
each iteration. While decentralized Federated learning has indeed helped
mitigate the communication overhead by processing multiple SGD iterations
prior to each aggregation (Lalitha et al., 2018; Ye et al., 2022; Chellapandi
et al., 2023), they still cannot overcome challenges such as synchronization
and straggler issues.
Self Repellent Random Walk. As mentioned earlier, sample paths $\\{X_{n}\\}$
of token algorithms are usually generated using Markov chains with
${\bm{\mu}}\in\text{Int}(\Sigma)$ as their limiting distribution. Here,
$\Sigma$ denotes the $N$-dimensional probability simplex, with
$\text{Int}(\Sigma)$ representing its interior. A recent work by Doshi et al.
(2023) pioneers the use of non-linear Markov chains to, in some sense, improve
upon any given time-reversible Markov chain with transition kernel
${\mathbf{P}}$ whose stationary distribution is ${\bm{\mu}}$. They show that
the non-linear transition kernel222Here, non-linearity in the transition
kernel implies that ${\mathbf{K}}[{\mathbf{x}}]$ takes probability
distribution ${\mathbf{x}}$ as the argument (Andrieu et al., 2007), as opposed
to the kernel being a linear operator
${\mathbf{K}}[{\mathbf{x}}]={\mathbf{P}}$ for a constant stochastic matrix
${\mathbf{P}}$ in a standard (linear) Markovian setting.
${\mathbf{K}}[\cdot]:\text{Int}(\Sigma)\to[0,1]^{N\times N}$, given by
$\vspace{-2mm}K_{ij}[{\mathbf{x}}]\triangleq\frac{P_{ij}(x_{j}/\mu_{j})^{-\alpha}}{\sum_{k\in{\mathcal{N}}}P_{ik}(x_{k}/\mu_{k})^{-\alpha}},~{}~{}~{}~{}~{}~{}\forall~{}i,j\in{\mathcal{N}},$
(3)
for any ${\mathbf{x}}\in\text{Int}(\Sigma)$, when simulated as a self-
interacting random walk (Del Moral & Miclo, 2006; Del Moral & Doucet, 2010),
can achieve smaller asymptotic variance than the base Markov chain when
sampling over a graph ${\mathcal{G}}$, for all $\alpha>0$. The argument
${\mathbf{x}}$ for the kernel ${\mathbf{K}}[{\bm{x}}]$ is taken to be the
empirical distribution ${\mathbf{x}}_{n}$ at each time step $n\geq 0$.
For instance, if node $j$ has been visited more often than other nodes so far,
the entry $x_{j}$ becomes larger (than target value $\mu_{j}$), resulting in a
smaller transition probability from $i$ to $j$ under
${\mathbf{K}}[{\mathbf{x}}]$ in (3) compared to $P_{ij}$. This ensures that a
random walker prioritizes more seldom visited nodes in the process, and is
thus ‘self-repellent’. This effect is made more drastic by increasing
$\alpha$, and leads to asymptotically near-zero variance at a rate of
$O(1/\alpha)$. Moreover, the polynomial function $(x_{i}/\mu_{i})^{-\alpha}$
chosen to encode self-repellent behaviour is shown in Doshi et al. (2023) to
be the only one that allows the SRRW to inherit the so-called ‘scale-
invariance’ property of the underlying Markov chain – a necessary component
for the scalable implementation of a random walker over a large network
without requiring knowledge of any graph-related global constants.
Conclusively, such attributes render SRRW especially suitable for distributed
optimization. (Recently, Guo et al. (2020) introduce an optimization scheme which designs self-repellence into the perturbation of the gradient descent iterates (Jin et al., 2017; 2018; 2021) with the goal of escaping saddle points. That notion of self-repellence is distinct from the SRRW, which is a probability kernel designed specifically for a token to sample from a target distribution ${\bm{\mu}}$ over a set of nodes on an arbitrary graph.)
Effect of Stochastic Noise - Finite time and Asymptotic Approaches. Most
contemporary token algorithms driven by Markov chains are analyzed using the
finite-time bounds approach for obtaining insights into their convergence
rates (Sun et al., 2018; Doan et al., 2019; 2020; Triastcyn et al., 2022;
Hendrikx, 2023). However, as also explained in Even (2023), in most cases
these bounds are overly dependent on mixing time properties of the specific
Markov chain employed therein. This makes them largely ineffective in
capturing the exact contribution of the underlying random walk in a manner
which is qualitative enough to be used for algorithm design; and performance
enhancements are typically achieved via application of techniques such as
variance reduction (Defazio et al., 2014; Schmidt et al., 2017),
momentum/Nesterov’s acceleration (Gadat et al., 2018; Li et al., 2022),
adaptive step size (Kingma & Ba, 2015; Reddi et al., 2018), which work by
modifying the algorithm iterations themselves, and never consider potential
improvements to the stochastic input itself.
Complementary to finite-time approaches, asymptotic analysis using CLT has
proven to be an excellent tool to approach the design of stochastic algorithms
(Hu et al., 2022; Devraj & Meyn, 2017; Morral et al., 2017; Chen et al.,
2020a; Mou et al., 2020; Devraj & Meyn, 2021). Hu et al. (2022) shows how
asymptotic analysis can be used to compare the performance of SGD algorithms
for various stochastic inputs using their notion of efficiency ordering, and,
as mentioned in Devraj & Meyn (2017), the asymptotic benefits from minimizing
the limiting covariance matrix are known to be a good predictor of finite-time
algorithmic performance, also observed empirically in Section 4.
From the perspective of both finite-time and asymptotic analyses of token
algorithms, it is now well established that employing ‘better’ Markov chains
can enhance the performance of stochastic optimization algorithms. For
instance, Markov chains with smaller SLEMs (second largest eigenvalue in
modulus) yield tighter finite-time upper bounds (Sun et al., 2018; Ayache &
El Rouayheb, 2021; Even, 2023). Similarly, Markov chains with smaller
asymptotic variance for MCMC sampling tasks also provide better performance,
resulting in smaller covariance matrices for SGD algorithms (Hu et al., 2022).
Therefore, with these breakthrough results via the SRRW achieving near-zero
sampling variance, it is within reason to ask: Can we achieve near-zero
variance in distributed stochastic optimization driven by SRRW-like token
algorithms on any general graph? (Such near-zero sampling variance would imply
a significantly smaller variance than even an i.i.d. sampling counterpart,
while adhering to the graph-topological constraints of token algorithms.) In
this paper, we answer in the affirmative.
SRRW Driven Algorithm and Analysis Approach. For any ergodic time-reversible
Markov chain with transition probability matrix
${\mathbf{P}}\triangleq[P_{ij}]_{i,j\in{\mathcal{N}}}$ and stationary
distribution ${\bm{\mu}}\in\text{Int}(\Sigma)$, we consider a general step
size version of the SRRW stochastic process analysed in Doshi et al. (2023)
and use it to drive the noise sequence in (2). Our SA-SRRW algorithm is as
follows:
$\text{Draw:}\quad X_{n+1}\sim{\mathbf{K}}_{X_{n},\cdot}[{\mathbf{x}}_{n}],$
(4a)
$\text{Update:}\quad{\mathbf{x}}_{n+1}={\mathbf{x}}_{n}+\gamma_{n+1}({\bm{\delta}}_{X_{n+1}}-{\mathbf{x}}_{n}),$
(4b)
$\phantom{\text{Update:}}\quad{\bm{\theta}}_{n+1}={\bm{\theta}}_{n}+\beta_{n+1}H({\bm{\theta}}_{n},X_{n+1}),$
(4c)
where $\{\beta_{n}\}$ and $\{\gamma_{n}\}$ are step size sequences decreasing
to zero, and ${\mathbf{K}}[{\mathbf{x}}]$ is the SRRW kernel in (3). Current
non-asymptotic analyses require a globally Lipschitz mean field function
(Chen et al., 2020b; Doan, 2021; Zeng et al., 2021; Even, 2023) and are thus
inapplicable to SA-SRRW, since the mean field function of the SRRW iterates
(4b) is only locally Lipschitz (details deferred to Appendix B).
Instead, we successfully obtain non-trivial results by taking an asymptotic
CLT-based approach to analyze (4). This goes beyond just analyzing the
asymptotic sampling covariance (which corresponds only to the empirical
distribution ${\mathbf{x}}_{n}$ in (4b)) as in Doshi et al. (2023); the result
therein forms a special case of ours, obtained by setting
$\gamma_{n}=1/(n+1)$ and considering only (4a) and (4b), that is, in the
absence of the optimization iteration (4c). Specifically, we capture the
effect of the SRRW’s hyper-parameter $\alpha$ on the asymptotic speed of
convergence of the optimization error term
${\bm{\theta}}_{n}-{\bm{\theta}}^{*}$ to zero, via explicit deduction of its
asymptotic covariance matrix. See Figure 1 for an illustration.
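To make the recursion concrete, the following is a minimal Python sketch of one run of (4a)-(4c) (our own illustration, not code from the paper): the SRRW kernel row follows the explicit form of ${\mathbf{K}}[{\mathbf{x}}]$ given in (15) of Appendix A, while the base chain `P`, target `mu`, and mean-field increment `H` are user-supplied placeholders.

```python
import numpy as np

def srrw_row(P, x, mu, i, alpha):
    # Row i of the SRRW kernel K[x]: K_ij[x] is proportional to
    # P_ij * (x_j / mu_j)^(-alpha); see (15) in Appendix A.
    w = P[i] * (x / mu) ** (-alpha)
    return w / w.sum()

def sa_srrw(P, mu, H, theta0, n_steps, a=0.8, b=0.9, alpha=5.0, seed=0):
    # One realization of (4a)-(4c) with gamma_n = (n+1)^(-a), beta_n = (n+1)^(-b).
    rng = np.random.default_rng(seed)
    N = len(mu)
    x = np.full(N, 1.0 / N)              # x_0 in Int(Sigma)
    theta = np.array(theta0, dtype=float)
    X = int(rng.integers(N))             # initial node X_0
    for n in range(n_steps):
        gamma, beta = (n + 1.0) ** (-a), (n + 1.0) ** (-b)
        X = rng.choice(N, p=srrw_row(P, x, mu, X, alpha))   # (4a): draw
        delta = np.zeros(N); delta[X] = 1.0
        x = x + gamma * (delta - x)                         # (4b): SRRW iterate
        theta = theta + beta * H(theta, X)                  # (4c): SA iterate
    return theta, x
```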
Figure 1: Visualization of token algorithms using SRRW versus traditional MC
in distributed learning. Our CLT analysis, extended from SRRW itself to
distributed stochastic approximation, leads to near-zero variance for the SA
iteration ${\bm{\theta}}_{n}$. Node numbers on the left denote visit counts.
Our Contributions.
1. Given any time-reversible ‘base’ Markov chain with transition kernel ${\mathbf{P}}$ and stationary distribution ${\bm{\mu}}$, we generalize the first- and second-order convergence results of ${\mathbf{x}}_{n}$ to the target measure ${\bm{\mu}}$ (Theorems 4.1 and 4.2 in Doshi et al., 2023) to a class of weighted empirical measures, through the use of more general step sizes $\gamma_{n}$. This includes showing that the asymptotic sampling covariance terms decrease to zero at rate $O(1/\alpha)$, thus quantifying the effect of self-repellence on ${\mathbf{x}}_{n}$. Our generalization is not merely for its own sake: Section 3 shows it to be crucial for the design of the step sizes $\beta_{n},\gamma_{n}$.
2. Building upon the convergence results for iterates ${\mathbf{x}}_{n}$, we analyze the algorithm (4) driven by the SRRW kernel in (3) with step sizes $\beta_{n}$ and $\gamma_{n}$ separated into three disjoint cases:
1. (i)
$\beta_{n}=o(\gamma_{n})$, and we say that ${\bm{\theta}}_{n}$ is on the
slower timescale compared to ${\mathbf{x}}_{n}$;
2. (ii)
$\beta_{n}=\gamma_{n}$, and we say that ${\bm{\theta}}_{n}$ and
${\mathbf{x}}_{n}$ are on the same timescale;
3. (iii)
$\gamma_{n}=o(\beta_{n})$, and we say that ${\bm{\theta}}_{n}$ is on the
faster timescale compared to ${\mathbf{x}}_{n}$.
For any $\alpha\geq 0$, letting $k=1,2,3$ refer to the corresponding cases
(i), (ii), and (iii), we show that
${\bm{\theta}}_{n}\xrightarrow[n\to\infty]{a.s.}{\bm{\theta}}^{*}\quad\text{and}\quad({\bm{\theta}}_{n}-{\bm{\theta}}^{*})/\sqrt{\beta_{n}}\xrightarrow[n\to\infty]{dist.}N\left({\bm{0}},{\mathbf{V}}^{(k)}_{{\bm{\theta}}}(\alpha)\right),$
featuring distinct asymptotic covariance matrices
${\mathbf{V}}^{(1)}_{{\bm{\theta}}}(\alpha),{\mathbf{V}}^{(2)}_{{\bm{\theta}}}(\alpha)$
and ${\mathbf{V}}^{(3)}_{{\bm{\theta}}}(\alpha)$, respectively. The three
matrices coincide when $\alpha=0$ (the $\alpha=0$ case is equivalent to
simply running the base Markov chain, since from (3) we have
${\mathbf{K}}[\cdot]={\mathbf{P}}$, thus bypassing the SRRW’s effect and
rendering all three cases nearly identical). Moreover, the derivation of the
CLT for cases (i) and (iii), for which (4) corresponds to two-timescale SA
with controlled Markov noise, is the first of its kind and thus a key
technical contribution in this paper, as expanded upon in Section 3.
3. For case (i), we show that ${\mathbf{V}}^{(1)}_{{\bm{\theta}}}(\alpha)$ decreases to zero (in the sense of the Loewner ordering introduced in Section 2.1) as $\alpha$ increases, at rate $O(1/\alpha^{2})$. This is especially surprising, since the asymptotic performance benefit of using the SRRW kernel with $\alpha$ in (3) to drive the noise terms $X_{n}$ is amplified in the context of distributed learning and estimating ${\bm{\theta}}^{*}$, compared to the sampling case, for which the rate is $O(1/\alpha)$ as mentioned earlier. For case (iii), we show that ${\mathbf{V}}_{{\bm{\theta}}}^{(3)}(\alpha)={\mathbf{V}}_{{\bm{\theta}}}^{(3)}(0)$ for all $\alpha\geq 0$, implying that using the SRRW in this case provides no asymptotic benefit over the original base Markov chain, and thus performs worse than case (i). In summary, we deduce that ${\mathbf{V}}_{{\bm{\theta}}}^{(1)}(\alpha_{2})<_{L}{\mathbf{V}}_{{\bm{\theta}}}^{(1)}(\alpha_{1})<_{L}{\mathbf{V}}_{{\bm{\theta}}}^{(1)}(0)={\mathbf{V}}_{{\bm{\theta}}}^{(3)}(0)={\mathbf{V}}_{{\bm{\theta}}}^{(3)}(\alpha)$ for all $\alpha_{2}>\alpha_{1}>0$ and $\alpha>0$. (In particular, this is the reason why we advocate a more general step size $\gamma_{n}=(n+1)^{-a}$ in the SRRW iterates with $a<1$, allowing us to choose $\beta_{n}=(n+1)^{-b}$ with $b\in(a,1]$ to satisfy $\beta_{n}=o(\gamma_{n})$ for case (i).)
4. We numerically simulate our SA-SRRW algorithm on various real-world datasets, focusing on a binary classification task, to evaluate its performance across all three cases. By appropriately choosing the function $H$ in SA-SRRW, we test SRRW-driven SGD and stochastic heavy ball (SHB) algorithms. Our findings consistently highlight the superiority of case (i) over cases (ii) and (iii) for diverse $\alpha$ values, even in their finite-time performance. Notably, our tests validate the variance reduction at a rate of $O(1/\alpha^{2})$ for case (i), suggesting it as the best algorithmic choice among the three cases.
## 2 Preliminaries and Model Setup
In Section 2.1, we first standardize the notations used throughout the paper,
and define key mathematical terms and quantities used in our theoretical
analyses. Then, in Section 2.2, we consolidate the model assumptions of our
SA-SRRW algorithm (4). We then go on to discuss our assumptions, and provide
additional interpretations of our use of generalized step-sizes.
### 2.1 Basic Notations and Definitions
Vectors are denoted by lower-case bold letters, e.g.,
${\mathbf{v}}\triangleq[v_{i}]\in\mathbb{R}^{D}$, and matrices by upper-case
bold, e.g., ${\mathbf{M}}\triangleq[M_{ij}]\in\mathbb{R}^{D\times D}$.
${\mathbf{M}}^{-T}$ is the transpose of the matrix inverse
${\mathbf{M}}^{-1}$. The diagonal matrix ${\mathbf{D}}_{{\mathbf{v}}}$ is
formed by vector ${\mathbf{v}}$ with $v_{i}$ as the $i$’th diagonal entry. Let
${\bm{1}}$ and ${\bm{0}}$ denote vectors of all ones and zeros, respectively.
The identity matrix is represented by ${\mathbf{I}}$, with subscripts
indicating dimensions as needed. A matrix is Hurwitz if all its eigenvalues
possess strictly negative real parts. $\mathds{1}_{\{\cdot\}}$ denotes the
indicator function of the condition in its subscript. We use $\|\cdot\|$ to
denote both the Euclidean norm of vectors and the spectral norm of matrices.
Two symmetric matrices ${\mathbf{M}}_{1},{\mathbf{M}}_{2}$ satisfy the Loewner
ordering ${\mathbf{M}}_{1}<_{L}{\mathbf{M}}_{2}$ if
${\mathbf{M}}_{2}-{\mathbf{M}}_{1}$ is positive semi-definite and
${\mathbf{M}}_{1}\neq{\mathbf{M}}_{2}$. This slightly differs from the
conventional definition with $\leq_{L}$, which allows
${\mathbf{M}}_{1}={\mathbf{M}}_{2}$.
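Since this ordering is used to compare asymptotic covariance matrices throughout the paper, the following two-line numerical check may be helpful (a sketch assuming numpy; the function name is ours):

```python
import numpy as np

def loewner_less(M1, M2, tol=1e-10):
    # M1 <_L M2: M2 - M1 is positive semi-definite and M1 != M2.
    diff = (M2 - M1 + (M2 - M1).T) / 2            # symmetrize for eigvalsh
    return np.linalg.eigvalsh(diff).min() >= -tol and not np.allclose(M1, M2)
```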
Throughout the paper, the matrix
${\mathbf{P}}\triangleq[P_{i,j}]_{i,j\in{\mathcal{N}}}$ and vector
${\bm{\mu}}\triangleq[\mu_{i}]_{i\in{\mathcal{N}}}$ are used exclusively to
denote an $N\times N$-dimensional transition kernel of an ergodic Markov
chain, and its stationary distribution, respectively. Without loss of
generality, we assume $P_{ij}>0$ if and only if $a_{ij}>0$. Markov chains
satisfying the detailed balance equation, where $\mu_{i}P_{ij}=\mu_{j}P_{ji}$
for all $i,j\in{\mathcal{N}}$, are termed time-reversible. For such chains, we
use $(\lambda_{i},{\mathbf{u}}_{i})$ (resp. $(\lambda_{i},{\mathbf{v}}_{i})$)
to denote the $i$’th left (resp. right) eigenpair where the eigenvalues are
ordered $-1<\lambda_{1}\leq\cdots\leq\lambda_{N-1}<\lambda_{N}=1$, with
${\mathbf{u}}_{N}={\bm{\mu}}$ and ${\mathbf{v}}_{N}={\bm{1}}$ in
${\mathbb{R}}^{N}$. We assume the eigenvectors to be normalized such that
${\mathbf{u}}_{i}^{T}{\mathbf{v}}_{i}=1$ for all $i$; we then have
${\mathbf{u}}_{i}={\mathbf{D}}_{{\bm{\mu}}}{\mathbf{v}}_{i}$ and
${\mathbf{u}}_{i}^{T}{\mathbf{v}}_{j}=0$ for all $i\neq j$. We direct the
reader to Aldous & Fill (2002,
Chapter 3.4) for a detailed exposition on spectral properties of time-
reversible Markov chains.
### 2.2 SA-SRRW: Key Assumptions and Discussions
Assumptions: All results in our paper are proved under the following
assumptions.
1. (A1)
The function $H:{\mathbb{R}}^{D}\times{\mathcal{N}}\to{\mathbb{R}}^{D}$ is
continuous at every ${\bm{\theta}}\in\mathbb{R}^{D}$, and there exists a
positive constant $L$ such that $\|H({\bm{\theta}},i)\|\leq
L(1+\|{\bm{\theta}}\|)$ for every ${\bm{\theta}}\in{\mathbb{R}}^{D}$ and
$i\in{\mathcal{N}}$.
2. (A2)
Step sizes $\beta_{n}$ and $\gamma_{n}$ take the form
$\beta_{n}=(n+1)^{-b}$ and $\gamma_{n}=(n+1)^{-a}$,
where $a,b\in(0.5,1]$.
3. (A3)
The roots of the function ${\mathbf{h}}(\cdot)$ are disjoint and comprise the
globally attracting set
$\Theta\triangleq\left\{{\bm{\theta}}^{*}\,\middle|\,{\mathbf{h}}({\bm{\theta}}^{*})={\bm{0}},\ \nabla{\mathbf{h}}({\bm{\theta}}^{*})+\frac{\mathds{1}_{\{b=1\}}}{2}{\mathbf{I}}\ \text{is
Hurwitz}\right\}\neq\emptyset$ of the associated ordinary differential
equation (ODE) for iteration (4c), given by
$d{\bm{\theta}}(t)/dt={\mathbf{h}}({\bm{\theta}}(t))$.
4. (A4)
For any
$({\bm{\theta}}_{0},{\mathbf{x}}_{0},X_{0})\in{\mathbb{R}}^{D}\times\text{Int}(\Sigma)\times{\mathcal{N}}$,
the iterate sequence $\{{\bm{\theta}}_{n}\}_{n\geq 0}$ (resp.
$\{{\mathbf{x}}_{n}\}_{n\geq 0}$) is
${\mathbb{P}}_{{\bm{\theta}}_{0},{\mathbf{x}}_{0},X_{0}}$-almost surely
contained within a compact subset of ${\mathbb{R}}^{D}$ (resp.
$\text{Int}(\Sigma)$).
Discussions on Assumptions: Assumption A1 only requires $H$ to be locally
Lipschitz, albeit with linear growth, and is less stringent than the globally
Lipschitz assumption prevalent in the optimization literature (Li & Wai, 2022;
Hendrikx, 2023; Even, 2023).
Assumption A2 is the general umbrella assumption under which cases (i), (ii),
and (iii) mentioned in Section 1 are obtained by setting: (i) $a<b$, (ii)
$a=b$, and (iii) $a>b$. Cases (i) and (iii) place
${\bm{\theta}}_{n}$ and ${\mathbf{x}}_{n}$ on different timescales; the
polynomial form of $\beta_{n},\gamma_{n}$ is widely assumed in the
two-timescale SA literature (Mokkadem & Pelletier, 2006; Zeng et al., 2021;
Hong et al., 2023). Case (ii) characterizes the SA-SRRW algorithm (4) as a
single-timescale SA with polynomially decreasing step size, and is among the
most common assumptions in the SA literature (Borkar, 2022; Fort, 2015; Li et
al., 2023). In all three cases, the form of $\gamma_{n}$ ensures
$\gamma_{n}\leq 1$, so that the SRRW iterates ${\mathbf{x}}_{n}$ in (4b)
remain within $\text{Int}(\Sigma)$ and ${\mathbf{K}}[{\mathbf{x}}_{n}]$ is
well-defined for all $n\geq 0$.
Regarding Assumption A3, the limiting dynamics of the SA iterates
$\{{\bm{\theta}}_{n}\}_{n\geq 0}$ closely follow the trajectories
$\{{\bm{\theta}}(t)\}_{t\geq 0}$ of the associated ODE, and assuming the
existence of globally stable equilibria is standard (Borkar, 2022; Fort, 2015;
Li et al., 2023). In optimization problems, this is equivalent to assuming the
existence of at most countably many local minima.
Assumption A4 posits almost sure boundedness of the iterates ${\bm{\theta}}_{n}$
and ${\mathbf{x}}_{n}$, a common assumption in the SA literature (Kushner
& Yin, 2003; Chen, 2006; Borkar, 2022; Karmakar & Bhatnagar, 2018; Li et al.,
2023) that ensures the stability of the SA iterations and the well-definedness
of all quantities involved. Stability of the weighted empirical measure
${\mathbf{x}}_{n}$ of the SRRW process is practically ensured by studying (4b)
with a truncation-based procedure (see Doshi et al., 2023, Remark 4.5 and
Appendix E for a comprehensive explanation), while that of
${\bm{\theta}}_{n}$ is usually ensured either as a by-product of the algorithm
design or via mechanisms such as projections onto a compact subset of
$\mathbb{R}^{D}$, depending on the application context.
We now provide additional discussions regarding the step-size assumptions and
their implications on the SRRW iteration (4b).
SRRW with General Step Size: As shown in Benaim & Cloez (2015, Remark 1.1),
albeit for a completely different non-linear Markov kernel driving the
algorithm therein, iterates ${\mathbf{x}}_{n}$ of (4b) can also be expressed
as weighted empirical measures of $\\{X_{n}\\}_{n\geq 0}$, in the following
form:
${\mathbf{x}}_{n}=\frac{\sum_{i=1}^{n}\omega_{i}{\bm{\delta}}_{X_{i}}+\omega_{0}{\mathbf{x}}_{0}}{\sum_{i=0}^{n}\omega_{i}},\quad\text{where}\quad\omega_{0}=1\quad\text{and}\quad\omega_{n}=\frac{\gamma_{n}}{\prod_{i=1}^{n}(1-\gamma_{i})},$
(5)
for all $n>0$. For the special case when $\gamma_{n}=1/(n+1)$ as in Doshi et
al. (2023), we have $\omega_{n}=1$ for all $n\geq 0$ and ${\mathbf{x}}_{n}$ is
the typical, unweighted empirical measure. For the additional case considered
in our paper, when $a<1$ for $\gamma_{n}$ as in Assumption A2, we can
approximate $1-\gamma_{n}\approx e^{-\gamma_{n}}$ and $\omega_{n}\approx
n^{-a}e^{n^{(1-a)}/(1-a)}$. This implies that $\omega_{n}$ increases at a
sub-exponential rate, giving more weight to recent visit counts and allowing
the measure to quickly ‘forget’ the poor initial measure ${\mathbf{x}}_{0}$
and shed the correlation with the initial choice of $X_{0}$. This ‘speed-up’
effect from setting $a<1$ is guaranteed in case (i) irrespective of the choice
of $b$ in Assumption A2, and in Section 3 we show how this can lead to a
further reduction in the covariance of the optimization error
${\bm{\theta}}_{n}-{\bm{\theta}}^{*}$ in the asymptotic regime.
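As a quick numerical illustration of (5) (a sketch assuming only numpy; the function name is ours), the weights $\omega_{n}$ can be computed stably in the log domain; for $a=1$ they are identically one, while for $a<1$ they grow sub-exponentially:

```python
import numpy as np

def log_srrw_weights(n_max, a):
    # log(omega_n) for gamma_n = (n+1)^(-a), per (5); the log domain avoids
    # overflow since omega_n grows sub-exponentially when a < 1.
    n = np.arange(1, n_max + 1)
    gamma = (n + 1.0) ** (-a)
    return np.log(gamma) - np.cumsum(np.log1p(-gamma))

print(log_srrw_weights(5, a=1.0))   # all ~0: omega_n = 1, unweighted measure
print(log_srrw_weights(5, a=0.8))   # increasing: recent visits weigh more
```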
Additional assumption for case (iii): Before moving on to Section 3, we take
another look at the case when $\gamma_{n}=o(\beta_{n})$, and replace A3 with
the following, stronger assumption only for case (iii).
1. (A3′)
For any ${\mathbf{x}}\in\text{Int}(\Sigma)$, there exists a function
$\rho:\text{Int}(\Sigma)\to{\mathbb{R}}^{D}$ such that
$\|\rho({\mathbf{x}})\|\leq L_{2}(1+\|{\mathbf{x}}\|)$ for some $L_{2}>0$,
$\mathbb{E}_{i\sim{\bm{\pi}}[{\mathbf{x}}]}[H(\rho({\mathbf{x}}),i)]={\bm{0}}$,
and $\mathbb{E}_{i\sim{\bm{\pi}}[{\mathbf{x}}]}[\nabla
H(\rho({\mathbf{x}}),i)]+\frac{\mathds{1}_{\{b=1\}}}{2}{\mathbf{I}}$ is
Hurwitz.
While Assumption A3′ for case (iii) is much stronger than A3, it is not
detrimental to the overall results of our paper, since case (i) is of far
greater interest, as emphasized in Section 1. This is discussed further in
Appendix C.
## 3 Asymptotic Analysis of the SA-SRRW Algorithm
In this section, we provide the main results for the SA-SRRW algorithm (4). We
first present the a.s. convergence and the CLT result for SRRW with
generalized step size, extending the results in Doshi et al. (2023). Building
upon this, we present the a.s. convergence and the CLT result for the SA
iterate ${\bm{\theta}}_{n}$ under different settings of step sizes. We then
shift our focus to the analysis of the different asymptotic covariance
matrices emerging out of the CLT result, and capture the effect of $\alpha$
and the step sizes, particularly in cases (i) and (iii), on
${\bm{\theta}}_{n}-{\bm{\theta}}^{*}$ via performance ordering.
Almost Sure Convergence and CLT: The following result establishes first and
second order convergence of the sequence $\\{{\mathbf{x}}_{n}\\}_{n\geq 0}$,
which represents the weighted empirical measures of the SRRW process
$\\{X_{n}\\}_{n\geq 0}$, based on the update rule in (4b).
###### Lemma 3.1.
Under Assumptions A1, A2 and A4, for the SRRW iterates (4b), we have
${\mathbf{x}}_{n}\xrightarrow[n\to\infty]{a.s.}{\bm{\mu}},\quad\text{and}\quad\gamma_{n}^{-1/2}({\mathbf{x}}_{n}-{\bm{\mu}})\xrightarrow[n\to\infty]{dist.}N({\bm{0}},{\mathbf{V}}_{{\mathbf{x}}}(\alpha)),$
$\text{where}\quad{\mathbf{V}}_{{\mathbf{x}}}(\alpha)=\sum_{i=1}^{N-1}\frac{1}{2\alpha(1+\lambda_{i})+2-\mathds{1}_{\{a=1\}}}\cdot\frac{1+\lambda_{i}}{1-\lambda_{i}}{\mathbf{u}}_{i}{\mathbf{u}}_{i}^{T}.$
(6)
Moreover, for all $\alpha_{2}>\alpha_{1}>0$, we have
${\mathbf{V}}_{\mathbf{x}}(\alpha_{2})<_{L}{\mathbf{V}}_{\mathbf{x}}(\alpha_{1})<_{L}{\mathbf{V}}_{\mathbf{x}}(0)$.
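For concreteness, ${\mathbf{V}}_{{\mathbf{x}}}(\alpha)$ in (6) can be evaluated numerically from a given time-reversible ${\mathbf{P}}$; the sketch below (our own illustration) obtains the required eigenpairs through the standard symmetrization ${\mathbf{D}}_{{\bm{\mu}}}^{1/2}{\mathbf{P}}{\mathbf{D}}_{{\bm{\mu}}}^{-1/2}$, under which ${\mathbf{u}}_{i}={\mathbf{D}}_{{\bm{\mu}}}^{1/2}{\mathbf{w}}_{i}$ satisfies the normalization ${\mathbf{u}}_{i}^{T}{\mathbf{v}}_{i}=1$:

```python
import numpy as np

def srrw_sampling_cov(P, mu, alpha, a=1.0):
    # V_x(alpha) from (6), for a time-reversible P with stationary mu.
    d = np.sqrt(mu)
    S = P * d[:, None] / d[None, :]                 # symmetric similar matrix
    lam, W = np.linalg.eigh(S)                      # ascending; lam[-1] = 1
    V = np.zeros_like(P, dtype=float)
    for lam_i, w_i in zip(lam[:-1], W.T[:-1]):      # skip the trivial eigenpair
        u = d * w_i                                 # left eigenvector u_i of P
        c = (1 + lam_i) / ((2 * alpha * (1 + lam_i) + 2 - (a == 1)) * (1 - lam_i))
        V += c * np.outer(u, u)
    return V
```

The Loewner ordering in Lemma 3.1 can then be spot-checked on small examples, e.g., by verifying that the eigenvalues of `srrw_sampling_cov(P, mu, 1) - srrw_sampling_cov(P, mu, 10)` are non-negative.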
Lemma 3.1 shows that the SRRW iterates ${\mathbf{x}}_{n}$ converge to the
target distribution ${\bm{\mu}}$ a.s. even under the general step size
$\gamma_{n}=(n+1)^{-a}$ for $a\in(0.5,1]$. We also observe that the asymptotic
covariance matrix ${\mathbf{V}}_{{\mathbf{x}}}(\alpha)$ decreases at rate
$O(1/\alpha)$. Lemma 3.1 recovers Doshi et al. (2023, Theorem 4.2 and
Corollary 4.3) in the special case $a=1$, and is therefore more general.
Critically, it helps us establish our next result on the first-order
convergence of the optimization iterate sequence
$\{{\bm{\theta}}_{n}\}_{n\geq 0}$ following update rule (4c), as well as its
second-order convergence result, which follows shortly after. The proofs of
Lemma 3.1 and our next result, Theorem 3.2, are deferred to Appendix D. In
what follows, $k=1,2$, and $3$ refer to cases (i), (ii), and (iii) in Section
2.2, respectively. All subsequent results are proven under Assumptions A1 to
A4, with A3′ replacing A3 only when the step sizes $\beta_{n},\gamma_{n}$
satisfy case (iii).
###### Theorem 3.2.
For $k\in\\{1,2,3\\}$, and any initial
$({\bm{\theta}}_{0},{\mathbf{x}}_{0},X_{0})\in\mathbb{R}^{D}\times\text{Int}(\Sigma)\times{\mathcal{N}}$,
we have ${\bm{\theta}}_{n}\to{\bm{\theta}}^{*}$ as $n\to\infty$ for some
${\bm{\theta}}^{*}\in\Theta$,
${\mathbb{P}}_{{\bm{\theta}}_{0},{\mathbf{x}}_{0},X_{0}}$-almost surely.
In the stochastic optimization context, the above result ensures convergence
of iterates ${\bm{\theta}}_{n}$ to a local minimizer ${\bm{\theta}}^{*}$.
Loosely speaking, the first-order convergence of ${\mathbf{x}}_{n}$ in Lemma
3.1, as well as that of ${\bm{\theta}}_{n}$, is closely related to the
convergence of the trajectories
$\{{\mathbf{z}}(t)\triangleq({\bm{\theta}}(t),{\mathbf{x}}(t))\}_{t\geq 0}$
of the (coupled) mean-field ODE, written in matrix-vector form as
$\frac{d}{dt}{\mathbf{z}}(t)={\mathbf{g}}({\mathbf{z}}(t))\triangleq\begin{bmatrix}{\mathbf{H}}({\bm{\theta}}(t))^{T}{\bm{\pi}}[{\mathbf{x}}(t)]\\ {\bm{\pi}}[{\mathbf{x}}(t)]-{\mathbf{x}}(t)\end{bmatrix}\in{\mathbb{R}}^{D+N},$
(7)
where the matrix
${\mathbf{H}}({\bm{\theta}})\triangleq[H({\bm{\theta}},1),\cdots,H({\bm{\theta}},N)]^{T}\in{\mathbb{R}}^{N\times D}$
for any ${\bm{\theta}}\in\mathbb{R}^{D}$. Here,
${\bm{\pi}}[{\mathbf{x}}]\in\text{Int}(\Sigma)$ is the stationary distribution
of the SRRW kernel ${\mathbf{K}}[{\mathbf{x}}]$, shown in Doshi et al. (2023)
to be given by
$\pi_{i}[{\mathbf{x}}]\propto\sum_{j\in{\mathcal{N}}}\mu_{i}P_{ij}(x_{i}/\mu_{i})^{-\alpha}(x_{j}/\mu_{j})^{-\alpha}$.
The Jacobian matrix of (7) when evaluated at equilibria
${\mathbf{z}}^{*}=({\bm{\theta}}^{*},{\bm{\mu}})$ for
${\bm{\theta}}^{*}\in\Theta$ captures the behaviour of solutions of the mean-
field in their vicinity, and plays an important role in the asymptotic
covariance matrices arising out of our CLT results. We evaluate this Jacobian
matrix ${\mathbf{J}}(\alpha)$ as a function of $\alpha\geq 0$ to be given by
${\mathbf{J}}(\alpha)\triangleq\nabla{\mathbf{g}}({\mathbf{z}}^{*})=\begin{bmatrix}\nabla{\mathbf{h}}({\bm{\theta}}^{*})&-\alpha{\mathbf{H}}({\bm{\theta}}^{*})^{T}({\mathbf{P}}^{T}+{\mathbf{I}})\\ {\bm{0}}_{N\times D}&2\alpha{\bm{\mu}}{\bm{1}}^{T}-\alpha{\mathbf{P}}^{T}-(\alpha+1){\mathbf{I}}\end{bmatrix}\triangleq\begin{bmatrix}{\mathbf{J}}_{11}&{\mathbf{J}}_{12}(\alpha)\\ {\mathbf{J}}_{21}&{\mathbf{J}}_{22}(\alpha)\end{bmatrix}.$
(8)
The derivation of ${\mathbf{J}}(\alpha)$ is deferred to Appendix E.1. (The
Jacobian ${\mathbf{J}}(\alpha)$ is $(D+N)\times(D+N)$-dimensional, with
${\mathbf{J}}_{11}\in{\mathbb{R}}^{D\times D}$ and
${\mathbf{J}}_{22}(\alpha)\in{\mathbb{R}}^{N\times N}$; all matrices written
in block form hereafter, such as the matrix ${\mathbf{U}}$ in (9), inherit the
same dimensional structure.) Here, ${\mathbf{J}}_{21}$ is a zero matrix since
${\bm{\pi}}[{\mathbf{x}}]-{\mathbf{x}}$ does not depend on ${\bm{\theta}}$.
While the matrix ${\mathbf{J}}_{22}(\alpha)$ is exactly of the form used in
Doshi et al. (2023, Lemma 3.4) to characterize the SRRW performance, our
analysis includes an additional matrix ${\mathbf{J}}_{12}(\alpha)$, which
captures the effect of ${\mathbf{x}}(t)$ on ${\bm{\theta}}(t)$ in the ODE (7);
this translates to the influence of our generalized SRRW empirical measure
${\mathbf{x}}_{n}$ on the SA iterates ${\bm{\theta}}_{n}$ in (4).
For notational simplicity, and without loss of generality, all our remaining
results are stated while conditioning on the event
$\{{\bm{\theta}}_{n}\to{\bm{\theta}}^{*}\}$, for some
${\bm{\theta}}^{*}\in\Theta$. We also adopt the shorthand ${\mathbf{H}}$ for
${\mathbf{H}}({\bm{\theta}}^{*})$. Our main CLT result is as follows, with its
proof deferred to Appendix E.
###### Theorem 3.3.
For any $\alpha\geq 0$, we have:

(a) There exists ${\mathbf{V}}^{(k)}(\alpha)$ for all $k\in\{1,2,3\}$ such that
$\begin{bmatrix}\beta_{n}^{-1/2}({\bm{\theta}}_{n}-{\bm{\theta}}^{*})\\ \gamma_{n}^{-1/2}({\mathbf{x}}_{n}-{\bm{\mu}})\end{bmatrix}\xrightarrow[n\to\infty]{\text{dist.}}N\left({\bm{0}},{\mathbf{V}}^{(k)}(\alpha)\right).$

(b) For $k=2$, the matrix ${\mathbf{V}}^{(2)}(\alpha)$ solves the Lyapunov
equation
${\mathbf{J}}(\alpha){\mathbf{V}}^{(2)}(\alpha)+{\mathbf{V}}^{(2)}(\alpha){\mathbf{J}}(\alpha)^{T}+\mathds{1}_{\{b=1\}}{\mathbf{V}}^{(2)}(\alpha)=-{\mathbf{U}}$,
where the Jacobian matrix ${\mathbf{J}}(\alpha)$ is given in (8), and
${\mathbf{U}}\triangleq\sum_{i=1}^{N-1}\frac{1+\lambda_{i}}{1-\lambda_{i}}\cdot\begin{bmatrix}{\mathbf{H}}^{T}{\mathbf{u}}_{i}{\mathbf{u}}_{i}^{T}{\mathbf{H}}&{\mathbf{H}}^{T}{\mathbf{u}}_{i}{\mathbf{u}}_{i}^{T}\\ {\mathbf{u}}_{i}{\mathbf{u}}_{i}^{T}{\mathbf{H}}&{\mathbf{u}}_{i}{\mathbf{u}}_{i}^{T}\end{bmatrix}\triangleq\begin{bmatrix}{\mathbf{U}}_{11}&{\mathbf{U}}_{12}\\ {\mathbf{U}}_{21}&{\mathbf{U}}_{22}\end{bmatrix}.$
(9)

(c) For $k\in\{1,3\}$, ${\mathbf{V}}^{(k)}(\alpha)$ becomes block diagonal:
${\mathbf{V}}^{(k)}(\alpha)=\begin{bmatrix}{\mathbf{V}}^{(k)}_{{\bm{\theta}}}(\alpha)&{\bm{0}}_{D\times N}\\ {\bm{0}}_{N\times D}&{\mathbf{V}}_{{\mathbf{x}}}(\alpha)\end{bmatrix},$
(10)
where ${\mathbf{V}}_{{\mathbf{x}}}(\alpha)$ is as in (6), and
${\mathbf{V}}^{(1)}_{{\bm{\theta}}}(\alpha)$ and
${\mathbf{V}}^{(3)}_{{\bm{\theta}}}(\alpha)$ admit the explicit forms
$\displaystyle{\mathbf{V}}^{(1)}_{{\bm{\theta}}}(\alpha)=\int_{0}^{\infty}e^{t(\nabla_{{\bm{\theta}}}{\mathbf{h}}({\bm{\theta}}^{*})+\frac{\mathds{1}_{\{b=1\}}}{2}{\mathbf{I}})}\,{\mathbf{U}}_{{\bm{\theta}}}(\alpha)\,e^{t(\nabla_{{\bm{\theta}}}{\mathbf{h}}({\bm{\theta}}^{*})+\frac{\mathds{1}_{\{b=1\}}}{2}{\mathbf{I}})^{T}}dt,$
$\displaystyle{\mathbf{V}}^{(3)}_{{\bm{\theta}}}(\alpha)=\int_{0}^{\infty}e^{t\nabla_{{\bm{\theta}}}{\mathbf{h}}({\bm{\theta}}^{*})}\,{\mathbf{U}}_{11}\,e^{t\nabla_{{\bm{\theta}}}{\mathbf{h}}({\bm{\theta}}^{*})^{T}}dt,$
where
$\displaystyle{\mathbf{U}}_{{\bm{\theta}}}(\alpha)=\sum_{i=1}^{N-1}\frac{1}{(\alpha(1+\lambda_{i})+1)^{2}}\cdot\frac{1+\lambda_{i}}{1-\lambda_{i}}{\mathbf{H}}^{T}{\mathbf{u}}_{i}{\mathbf{u}}_{i}^{T}{\mathbf{H}}.$
(11)
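Since the matrix integral in part (c) is the solution of a Lyapunov equation, ${\mathbf{V}}^{(1)}_{{\bm{\theta}}}(\alpha)$ can be computed numerically for small examples; the sketch below (our own illustration, assuming numpy/scipy, with `H` the $N\times D$ matrix ${\mathbf{H}}({\bm{\theta}}^{*})$) makes the $O(1/\alpha^{2})$ decay in (11) easy to verify:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def u_theta(P, mu, H, alpha):
    # U_theta(alpha) from (11); eigenpairs obtained via symmetrization as before.
    d = np.sqrt(mu)
    lam, W = np.linalg.eigh(P * d[:, None] / d[None, :])
    U = np.zeros((H.shape[1], H.shape[1]))
    for lam_i, w_i in zip(lam[:-1], W.T[:-1]):
        hu = H.T @ (d * w_i)                        # H^T u_i
        U += (1 + lam_i) / ((alpha * (1 + lam_i) + 1) ** 2 * (1 - lam_i)) * np.outer(hu, hu)
    return U

def v_theta_case1(grad_h, P, mu, H, alpha, b=0.9):
    # For A Hurwitz, the integral int_0^inf e^{tA} U e^{tA^T} dt solves the
    # Lyapunov equation A V + V A^T = -U; here A = grad h(theta*) + (1/2)1{b=1}I.
    A = grad_h + 0.5 * float(b == 1) * np.eye(grad_h.shape[0])
    return solve_continuous_lyapunov(A, -u_theta(P, mu, H, alpha))
```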
For $k\in\{1,3\}$, SA-SRRW in (4) is a two-timescale SA with controlled
Markov noise. While a few works study the CLT of two-timescale SA with the
stochastic input being martingale-difference (i.i.d.) noise (Konda &
Tsitsiklis, 2004; Mokkadem & Pelletier, 2006), a CLT covering the case of
controlled Markov noise (e.g., $k\in\{1,3\}$), a far more general setting
than martingale-difference noise, has so far remained an open problem. Thus,
we prove our CLT for $k\in\{1,3\}$ from scratch via a series of careful
decompositions of the Markovian noise, ultimately into a martingale-difference
term and several non-vanishing noise terms, through repeated application of
the Poisson equation (Benveniste et al., 2012; Fort, 2015). Although the form
of the resulting asymptotic covariance looks similar at first glance to that
of the martingale-difference case in Konda & Tsitsiklis (2004); Mokkadem &
Pelletier (2006), the two are not equivalent. Specifically,
${\mathbf{V}}^{(k)}_{{\bm{\theta}}}(\alpha)$ captures both the effect of the
SRRW hyper-parameter $\alpha$ and that of the underlying base Markov chain,
via the eigenpairs $(\lambda_{i},{\mathbf{u}}_{i})$ of its transition
probability matrix ${\mathbf{P}}$ entering the matrix ${\mathbf{U}}$, whereas
the latter only covers the martingale-difference noise terms as a special
case.
When $k=2$, that is, $\beta_{n}=\gamma_{n}$, algorithm (4) can be regarded as
a single-timescale SA algorithm. In this case, we utilize the CLT in Fort
(2015, Theorem 2.1) to obtain the implicit form of
${\mathbf{V}}^{(2)}(\alpha)$ shown in Theorem 3.3. However,
${\mathbf{J}}_{12}(\alpha)$ being non-zero for $\alpha>0$ prevents us from
obtaining an explicit form for the covariance term corresponding to the SA
iterate errors ${\bm{\theta}}_{n}-{\bm{\theta}}^{*}$. On the other hand, for
$k\in\{1,3\}$, the two-timescale structure causes
${\bm{\theta}}_{n}$ and ${\mathbf{x}}_{n}$ to become asymptotically
independent, with zero correlation terms inside ${\mathbf{V}}^{(k)}(\alpha)$
in (10), and we can explicitly deduce
${\mathbf{V}}^{(k)}_{{\bm{\theta}}}(\alpha)$. We now take a deeper dive into
$\alpha$ and study its effect on ${\mathbf{V}}^{(k)}_{{\bm{\theta}}}(\alpha)$.
Covariance Ordering of SA-SRRW: We refer the reader to Appendix F for proofs
of all remaining results. We begin by focusing on case (i) and capturing the
impact of $\alpha$ on ${\mathbf{V}}_{{\bm{\theta}}}^{(1)}(\alpha)$.
###### Proposition 3.4.
For all $\alpha_{2}>\alpha_{1}>0$, we have
${\mathbf{V}}_{{\bm{\theta}}}^{(1)}(\alpha_{2})<_{L}{\mathbf{V}}_{{\bm{\theta}}}^{(1)}(\alpha_{1})<_{L}{\mathbf{V}}_{{\bm{\theta}}}^{(1)}(0)$.
Furthermore, ${\mathbf{V}}_{{\bm{\theta}}}^{(1)}(\alpha)$ decreases to zero at
a rate of $O(1/\alpha^{2})$.
Proposition 3.4 proves a monotonic reduction (in terms of Loewner ordering) of
${\mathbf{V}}_{{\bm{\theta}}}^{(1)}(\alpha)$ as $\alpha$ increases. Moreover,
the decrease rate $O(1/\alpha^{2})$ surpasses the $O(1/\alpha)$ rate seen in
${\mathbf{V}}_{{\mathbf{x}}}(\alpha)$ and the sampling application in Doshi et
al. (2023, Corollary 4.7), and is also empirically observed in our simulations
in Section 4. (Further insight into the $O(1/\alpha^{2})$ rate is tied to the
two-timescale structure, particularly $\beta_{n}=o(\gamma_{n})$ in case (i),
which places ${\bm{\theta}}_{n}$ on the slow timescale so that the correlation
terms ${\mathbf{J}}_{12}(\alpha),{\mathbf{J}}_{22}(\alpha)$ in the Jacobian
matrix ${\mathbf{J}}(\alpha)$ in (8) come into play; technical details are
deferred to Appendix E.2, where we derive the form of
${\mathbf{U}}_{{\bm{\theta}}}(\alpha)$.) Now, suppose we consider the same SA
driven by an i.i.d. sequence $\{X_{n}\}$ with the same marginal distribution
${\bm{\mu}}$. Then, our Proposition 3.4 asserts that a token algorithm
employing the SRRW (a walk on a graph) with large enough $\alpha$ on a general
graph can actually produce better SA iterates, with asymptotic covariance
going down to zero, than the ‘hypothetical situation’ in which the walker can
access any node $j$ with probability $\mu_{j}$ from anywhere in one step (more
like a random jumper). This can be seen by noting that, for large time $n$, the
scaled MSE $\mathbb{E}[\|{\bm{\theta}}_{n}-{\bm{\theta}}^{*}\|^{2}]/\beta_{n}$
is composed of the diagonal entries of the covariance matrix
${\mathbf{V}}_{{\bm{\theta}}}$, which, as we discuss in detail in Appendix
F.2, are decreasing in $\alpha$ as a consequence of the Loewner ordering in
Proposition 3.4. For large enough $\alpha$, the scaled MSE for SA-SRRW becomes
smaller than its i.i.d. counterpart, which is always a constant. Although
Doshi et al. (2023) alluded to this for sampling applications with
${\mathbf{V}}_{{\mathbf{x}}}(\alpha)$, we broaden its horizons to distributed
optimization problems with ${\mathbf{V}}_{{\bm{\theta}}}(\alpha)$ using tokens
on graphs. Our subsequent result concerns the performance comparison between
cases (i) and (iii).
###### Corollary 3.5.
For any $\alpha>0$, we have
${\mathbf{V}}_{{\bm{\theta}}}^{(1)}(\alpha)<_{L}{\mathbf{V}}_{{\bm{\theta}}}^{(3)}(\alpha)={\mathbf{V}}_{{\bm{\theta}}}^{(3)}(0)$.
We show that case (i) is asymptotically better than case (iii) for $\alpha>0$.
In view of Proposition 3.4 and Corollary 3.5, the advantages of case (i)
become prominent.
## 4 Simulation
In this section, we simulate our SA-SRRW algorithm on the wikiVote graph
(Leskovec & Krevl, 2014), comprising $889$ nodes and $2914$ edges. We
configure the SRRW’s base Markov chain ${\mathbf{P}}$ as the
Metropolis-Hastings random walk (MHRW) with a uniform target distribution
${\bm{\mu}}=\frac{1}{N}{\bm{1}}$. For distributed optimization, we consider
the following $L_{2}$-regularized binary classification problem:
$\min_{{\bm{\theta}}\in{\mathbb{R}}^{D}}\left\{f({\bm{\theta}})\triangleq\frac{1}{N}\sum_{i=1}^{N}\left[\log\left(1+e^{{\bm{\theta}}^{T}{\mathbf{s}}_{i}}\right)-y_{i}{\bm{\theta}}^{T}{\mathbf{s}}_{i}\right]+\frac{\kappa}{2}\|{\bm{\theta}}\|^{2}\right\},$
(12)
where $\{({\mathbf{s}}_{i},y_{i})\}_{i=1}^{N}$ is the ijcnn1 dataset (with
$22$ features, i.e., ${\mathbf{s}}_{i}\in{\mathbb{R}}^{22}$) from LIBSVM
(Chang & Lin, 2011), and the penalty parameter is $\kappa=1$. Each node in the
wikiVote graph is assigned one data point, giving $889$ data points in total. We
perform SRRW-driven SGD (SGD-SRRW) and SRRW-driven stochastic heavy ball
(SHB-SRRW) algorithms (see (13) in Appendix A). We fix the step size
$\beta_{n}=(n+1)^{-0.9}$ for the SA iterates and adjust
$\gamma_{n}=(n+1)^{-a}$ in the SRRW iterates to cover all three cases
discussed in this paper: (i) $a=0.8$; (ii) $a=0.9$; (iii) $a=1$. We use the
mean square error (MSE), i.e.,
$\mathbb{E}[\|{\bm{\theta}}_{n}-{\bm{\theta}}^{*}\|^{2}]$, to measure the
error of the SA iterates.
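For reference, the per-node stochastic gradient of (12) that SGD-SRRW plugs into the SA iterate (4c) takes the following form (a sketch; `S` and `y` denote the feature matrix and $\{0,1\}$ labels, and the function name is ours):

```python
import numpy as np

def h_sgd(theta, i, S, y, kappa=1.0):
    # H(theta, i) = -(gradient of node i's term in (12)); note that
    # d/dz log(1 + e^z) = sigmoid(z), so the gradient is (sigmoid - y_i) s_i + kappa * theta.
    sig = 1.0 / (1.0 + np.exp(-theta @ S[i]))
    return -((sig - y[i]) * S[i] + kappa * theta)
```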
Our results are presented in Figures 2 and 3, where each experiment is
repeated $100$ times. Figures 2(a) and 2(b), based on the wikiVote graph,
highlight the consistent performance ordering across different $\alpha$ values
for both algorithms over almost all time (not just asymptotically). Notably,
the curves for $\alpha\geq 5$ outperform that of i.i.d. sampling (in black)
even under the graph constraints. Figure 2(c), on the smaller Dolphins graph
(Rossi & Ahmed, 2015) with $62$ nodes and $159$ edges, illustrates that the
$(\alpha,\text{MSE})$ pairs arising from SGD-SRRW at time $n=10^{7}$ align
with a curve of the form
$g(x)=\frac{c_{1}}{(x+c_{2})^{2}}+c_{3}$, showcasing the
$O(1/\alpha^{2})$ rate. This smaller graph allows for longer simulations to
observe the asymptotic behaviour. Additionally, among the three cases examined
at identical $\alpha$ values, Figures 3(a)-3(c) confirm that case (i)
consistently performs better than the rest, underscoring its superiority in
practice. Further results, including those for non-convex functions and
additional datasets, are deferred to Appendix H due to space constraints.
(a) SGD-SRRW.
(b) SHB-SRRW.
(c) Curve fitting result for MSE.
Figure 2: Simulation results under case (i): (a) and (b) show the performance
of SGD-SRRW and SHB-SRRW for various $\alpha$ values. (c) shows that MSE
decreases at $O(1/\alpha^{2})$ speed.
(a) $\alpha=1$, SGD-SRRW
(b) $\alpha=5$, SGD-SRRW
(c) $\alpha=10$, SGD-SRRW
Figure 3: Comparison of the performance among cases (i) - (iii) for
$\alpha\in\\{1,5,10\\}$.
## 5 Conclusion
In this paper, we show both theoretically and empirically that the SRRW as a
drop-in replacement for Markov chains can provide significant performance
improvements when used for token algorithms, where the acceleration comes
purely from the careful analysis of the stochastic input of the algorithm,
without changing the optimization iteration itself. Our paper is an instance
in which the asymptotic analysis approach enables the design of better
algorithms despite the use of unconventional noise sequences, such as the
nonlinear Markov chain underlying the SRRW, for which traditional finite-time
analytical approaches fall short; we thus advocate the wider adoption of such
asymptotic approaches.
## References
* Aldous & Fill (2002) David Aldous and James Allen Fill. Reversible markov chains and random walks on graphs, 2002. Unfinished monograph, recompiled 2014, available at http://www.stat.berkeley.edu/~aldous/RWG/book.html.
* Andrieu et al. (2007) Christophe Andrieu, Ajay Jasra, Arnaud Doucet, and Pierre Del Moral. Non-linear markov chain monte carlo. In _Esaim: Proceedings_ , volume 19, pp. 79–84. EDP Sciences, 2007.
* Ayache & El Rouayheb (2021) Ghadir Ayache and Salim El Rouayheb. Private weighted random walk stochastic gradient descent. _IEEE Journal on Selected Areas in Information Theory_ , 2(1):452–463, 2021.
* Barakat & Bianchi (2021) Anas Barakat and Pascal Bianchi. Convergence and dynamical behavior of the adam algorithm for nonconvex stochastic optimization. _SIAM Journal on Optimization_ , 31(1):244–274, 2021.
* Barakat et al. (2021) Anas Barakat, Pascal Bianchi, Walid Hachem, and Sholom Schechtman. Stochastic optimization with momentum: convergence, fluctuations, and traps avoidance. _Electronic Journal of Statistics_ , 15(2):3892–3947, 2021.
* Benaim & Cloez (2015) M Benaim and Bertrand Cloez. A stochastic approximation approach to quasi-stationary distributions on finite spaces. _Electronic Communications in Probability_ , 20(37):1–14, 2015.
* Benveniste et al. (2012) Albert Benveniste, Michel Métivier, and Pierre Priouret. _Adaptive algorithms and stochastic approximations_ , volume 22. Springer Science & Business Media, 2012.
* Borkar (2022) V.S. Borkar. _Stochastic Approximation: A Dynamical Systems Viewpoint: Second Edition_. Texts and Readings in Mathematics. Hindustan Book Agency, 2022. ISBN 9788195196111.
* Bottou et al. (2018) Léon Bottou, Frank E Curtis, and Jorge Nocedal. Optimization methods for large-scale machine learning. _SIAM review_ , 60(2):223–311, 2018.
* Boyd et al. (2006) Stephen Boyd, Arpita Ghosh, Balaji Prabhakar, and Devavrat Shah. Randomized gossip algorithms. _IEEE transactions on information theory_ , 52(6):2508–2530, 2006.
* Brémaud (2013) Pierre Brémaud. _Markov chains: Gibbs fields, Monte Carlo simulation, and queues_ , volume 31. Springer Science & Business Media, 2013.
* Chang & Lin (2011) Chih-Chung Chang and Chih-Jen Lin. Libsvm: a library for support vector machines. _ACM transactions on intelligent systems and technology (TIST)_ , 2(3):1–27, 2011.
* Chellaboina & Haddad (2008) VijaySekhar Chellaboina and Wassim M Haddad. _Nonlinear dynamical systems and control: A Lyapunov-based approach_. Princeton University Press, 2008.
* Chellapandi et al. (2023) Vishnu Pandi Chellapandi, Antesh Upadhyay, Abolfazl Hashemi, and Stanislaw H Zak. On the convergence of decentralized federated learning under imperfect information sharing. _arXiv preprint arXiv:2303.10695_ , 2023.
* Chen (2006) Han-Fu Chen. _Stochastic approximation and its applications_ , volume 64. Springer Science & Business Media, 2006.
* Chen et al. (2020a) Shuhang Chen, Adithya Devraj, Ana Busic, and Sean Meyn. Explicit mean-square error bounds for monte-carlo and linear stochastic approximation. In _International Conference on Artificial Intelligence and Statistics_ , pp. 4173–4183. PMLR, 2020a.
* Chen et al. (2020b) Zaiwei Chen, Siva Theja Maguluri, Sanjay Shakkottai, and Karthikeyan Shanmugam. Finite-sample analysis of stochastic approximation using smooth convex envelopes. _arXiv preprint arXiv:2002.00874_ , 2020b.
* Chen et al. (2022) Zaiwei Chen, Sheng Zhang, Thinh T Doan, John-Paul Clarke, and Siva Theja Maguluri. Finite-sample analysis of nonlinear stochastic approximation with applications in reinforcement learning. _Automatica_ , 146:110623, 2022.
* Davis (1970) Burgess Davis. On the intergrability of the martingale square function. _Israel Journal of Mathematics_ , 8:187–190, 1970.
* Defazio et al. (2014) Aaron Defazio, Francis Bach, and Simon Lacoste-Julien. Saga: a fast incremental gradient method with support for non-strongly convex composite objectives. In _Advances in neural information processing systems_ , volume 1, 2014.
* Del Moral & Doucet (2010) Pierre Del Moral and Arnaud Doucet. Interacting markov chain monte carlo methods for solving nonlinear measure-valued equations1. _The Annals of Applied Probability_ , 20(2):593–639, 2010.
* Del Moral & Miclo (2006) Pierre Del Moral and Laurent Miclo. Self-interacting markov chains. _Stochastic Analysis and Applications_ , 24(3):615–660, 2006.
* Delyon (2000) Bernard Delyon. Stochastic approximation with decreasing gain: Convergence and asymptotic theory. Technical report, Université de Rennes, 2000.
* Delyon et al. (1999) Bernard Delyon, Marc Lavielle, and Eric Moulines. Convergence of a stochastic approximation version of the em algorithm. _Annals of statistics_ , pp. 94–128, 1999.
* Devraj & Meyn (2017) Adithya M Devraj and Sean P Meyn. Zap q-learning. In _Proceedings of the 31st International Conference on Neural Information Processing Systems_ , pp. 2232–2241, 2017.
* Devraj & Meyn (2021) Adithya M. Devraj and Sean P. Meyn. Q-learning with uniformly bounded variance. _IEEE Transactions on Automatic Control_ , 2021.
* Doan et al. (2019) Thinh Doan, Siva Maguluri, and Justin Romberg. Finite-time analysis of distributed td (0) with linear function approximation on multi-agent reinforcement learning. In _International Conference on Machine Learning_ , pp. 1626–1635. PMLR, 2019.
* Doan (2021) Thinh T Doan. Finite-time convergence rates of nonlinear two-time-scale stochastic approximation under markovian noise. _arXiv preprint arXiv:2104.01627_ , 2021.
* Doan et al. (2020) Thinh T Doan, Lam M Nguyen, Nhan H Pham, and Justin Romberg. Convergence rates of accelerated markov gradient descent with applications in reinforcement learning. _arXiv preprint arXiv:2002.02873_ , 2020.
* Doshi et al. (2023) Vishwaraj Doshi, Jie Hu, and Do Young Eun. Self-repellent random walks on general graphs–achieving minimal sampling variance via nonlinear markov chains. In _International Conference on Machine Learning_. PMLR, 2023.
* Duflo (1996) Marie Duflo. _Algorithmes stochastiques_ , volume 23. Springer, 1996.
* Even (2023) Mathieu Even. Stochastic gradient descent under markovian sampling schemes. In _International Conference on Machine Learning_ , 2023.
* Fort (2015) Gersende Fort. Central limit theorems for stochastic approximation with controlled markov chain dynamics. _ESAIM: Probability and Statistics_ , 19:60–80, 2015.
* Gadat et al. (2018) Sébastien Gadat, Fabien Panloup, and Sofiane Saadane. Stochastic heavy ball. _Electronic Journal of Statistics_ , 12:461–529, 2018.
* Guo et al. (2020) Xin Guo, Jiequn Han, Mahan Tajrobehkar, and Wenpin Tang. Escaping saddle points efficiently with occupation-time-adapted perturbations. _arXiv preprint arXiv:2005.04507_ , 2020.
* Hall et al. (2014) P. Hall, C.C. Heyde, Z.W. Birnbaum, and E. Lukacs. _Martingale Limit Theory and Its Application_. Communication and Behavior. Elsevier Science, 2014.
* Hendrikx (2023) Hadrien Hendrikx. A principled framework for the design and analysis of token algorithms. In _International Conference on Artificial Intelligence and Statistics_ , pp. 470–489. PMLR, 2023.
* Hong et al. (2023) Mingyi Hong, Hoi-To Wai, Zhaoran Wang, and Zhuoran Yang. A two-timescale stochastic algorithm framework for bilevel optimization: Complexity analysis and application to actor-critic. _SIAM Journal on Optimization_ , 33(1):147–180, 2023.
* Hu et al. (2022) Jie Hu, Vishwaraj Doshi, and Do Young Eun. Efficiency ordering of stochastic gradient descent. In _Advances in Neural Information Processing Systems_ , 2022.
* Jin et al. (2017) Chi Jin, Rong Ge, Praneeth Netrapalli, Sham M Kakade, and Michael I Jordan. How to escape saddle points efficiently. In _International conference on machine learning_ , pp. 1724–1732. PMLR, 2017.
* Jin et al. (2018) Chi Jin, Praneeth Netrapalli, and Michael I Jordan. Accelerated gradient descent escapes saddle points faster than gradient descent. In _Conference On Learning Theory_ , pp. 1042–1085. PMLR, 2018.
* Jin et al. (2021) Chi Jin, Praneeth Netrapalli, Rong Ge, Sham M Kakade, and Michael I Jordan. On nonconvex optimization for machine learning: Gradients, stochasticity, and saddle points. _Journal of the ACM (JACM)_ , 68(2):1–29, 2021.
* Karimi et al. (2019) Belhal Karimi, Blazej Miasojedow, Eric Moulines, and Hoi-To Wai. Non-asymptotic analysis of biased stochastic approximation scheme. In _Conference on Learning Theory_ , pp. 1944–1974. PMLR, 2019\.
* Karmakar & Bhatnagar (2018) Prasenjit Karmakar and Shalabh Bhatnagar. Two time-scale stochastic approximation with controlled markov noise and off-policy temporal-difference learning. _Mathematics of Operations Research_ , 43(1):130–151, 2018.
* Khaled & Richtárik (2023) Ahmed Khaled and Peter Richtárik. Better theory for SGD in the nonconvex world. _Transactions on Machine Learning Research_ , 2023. ISSN 2835-8856.
* Kingma & Ba (2015) Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In _ICLR_ , 2015.
* Konda & Tsitsiklis (2004) Vijay R Konda and John N Tsitsiklis. Convergence rate of linear two-time-scale stochastic approximation. _The Annals of Applied Probability_ , 14(2):796–819, 2004.
* Kushner & Yin (2003) Harold Kushner and G George Yin. _Stochastic approximation and recursive algorithms and applications_ , volume 35. Springer Science & Business Media, 2003.
* Lalitha et al. (2018) Anusha Lalitha, Shubhanshu Shekhar, Tara Javidi, and Farinaz Koushanfar. Fully decentralized federated learning. In _Advances in neural information processing systems_ , 2018.
* Leskovec & Krevl (2014) Jure Leskovec and Andrej Krevl. Snap datasets: Stanford large network dataset collection, 2014.
* Levin & Peres (2017) David A Levin and Yuval Peres. _Markov chains and mixing times_ , volume 107. American Mathematical Soc., 2017.
* Li & Wai (2022) Qiang Li and Hoi-To Wai. State dependent performative prediction with stochastic approximation. In _International Conference on Artificial Intelligence and Statistics_ , pp. 3164–3186. PMLR, 2022.
* Li et al. (2022) Tiejun Li, Tiannan Xiao, and Guoguo Yang. Revisiting the central limit theorems for the sgd-type methods. _arXiv preprint arXiv:2207.11755_ , 2022.
* Li et al. (2023) Xiang Li, Jiadong Liang, and Zhihua Zhang. Online statistical inference for nonlinear stochastic approximation with markovian data. _arXiv preprint arXiv:2302.07690_ , 2023.
* McMahan et al. (2017) Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In _Artificial intelligence and statistics_ , pp. 1273–1282. PMLR, 2017.
* Meyn (2022) Sean Meyn. _Control systems and reinforcement learning_. Cambridge University Press, 2022.
* Mokkadem & Pelletier (2005) Abdelkader Mokkadem and Mariane Pelletier. The compact law of the iterated logarithm for multivariate stochastic approximation algorithms. _Stochastic analysis and applications_ , 23(1):181–203, 2005.
* Mokkadem & Pelletier (2006) Abdelkader Mokkadem and Mariane Pelletier. Convergence rate and averaging of nonlinear two-time-scale stochastic approximation algorithms. _Annals of Applied Probability_ , 16(3):1671–1702, 2006.
* Morral et al. (2017) Gemma Morral, Pascal Bianchi, and Gersende Fort. Success and failure of adaptation-diffusion algorithms with decaying step size in multiagent networks. _IEEE Transactions on Signal Processing_ , 65(11):2798–2813, 2017.
* Mou et al. (2020) Wenlong Mou, Chris Junchi Li, Martin J Wainwright, Peter L Bartlett, and Michael I Jordan. On linear stochastic approximation: Fine-grained polyak-ruppert and non-asymptotic concentration. In _Conference on Learning Theory_ , pp. 2947–2997. PMLR, 2020.
* Nedic (2020) Angelia Nedic. Distributed gradient methods for convex machine learning problems in networks: Distributed optimization. _IEEE Signal Processing Magazine_ , 37(3):92–101, 2020.
* Olshevsky (2022) Alex Olshevsky. Asymptotic network independence and step-size for a distributed subgradient method. _Journal of Machine Learning Research_ , 23(69):1–32, 2022.
* Pelletier (1998) Mariane Pelletier. On the almost sure asymptotic behaviour of stochastic algorithms. _Stochastic processes and their applications_ , 78(2):217–244, 1998.
* Reddi et al. (2018) Sashank J. Reddi, Satyen Kale, and Sanjiv Kumar. On the convergence of adam and beyond. In _International Conference on Learning Representations_ , 2018.
* Rossi & Ahmed (2015) Ryan A. Rossi and Nesreen K. Ahmed. The network data repository with interactive graph analytics and visualization. In _AAAI_ , 2015.
* Schmidt et al. (2017) Mark Schmidt, Nicolas Le Roux, and Francis Bach. Minimizing finite sums with the stochastic average gradient. _Mathematical Programming_ , 162:83–112, 2017.
* Sun et al. (2018) Tao Sun, Yuejiao Sun, and Wotao Yin. On markov chain gradient descent. In _Advances in neural information processing systems_ , volume 31, 2018.
* Triastcyn et al. (2022) Aleksei Triastcyn, Matthias Reisser, and Christos Louizos. Decentralized learning with random walks and communication-efficient adaptive optimization. In _Workshop on Federated Learning: Recent Advances and New Challenges (in Conjunction with NeurIPS 2022)_ , 2022.
* Vogels et al. (2021) Thijs Vogels, Lie He, Anastasiia Koloskova, Sai Praneeth Karimireddy, Tao Lin, Sebastian U Stich, and Martin Jaggi. Relaysum for decentralized deep learning on heterogeneous data. In _Advances in Neural Information Processing Systems_ , volume 34, pp. 28004–28015, 2021.
* Wang et al. (2019) Jianyu Wang, Anit Kumar Sahu, Zhouyi Yang, Gauri Joshi, and Soummya Kar. Matcha: Speeding up decentralized sgd via matching decomposition sampling. In _2019 Sixth Indian Control Conference (ICC)_ , pp. 299–300. IEEE, 2019.
* Yaji & Bhatnagar (2020) Vinayaka G Yaji and Shalabh Bhatnagar. Stochastic recursive inclusions in two timescales with nonadditive iterate-dependent markov noise. _Mathematics of Operations Research_ , 45(4):1405–1444, 2020.
* Ye et al. (2022) Hao Ye, Le Liang, and Geoffrey Ye Li. Decentralized federated learning with unreliable communications. _IEEE Journal of Selected Topics in Signal Processing_ , 16(3):487–500, 2022.
* Zeng et al. (2021) Sihan Zeng, Thinh T Doan, and Justin Romberg. A two-time-scale stochastic optimization framework with applications in control and reinforcement learning. _arXiv preprint arXiv:2109.14756_ , 2021.
## Appendix A Examples of Stochastic Algorithms of the form (2).
In the stochastic optimization literature, many SGD variants have been
proposed that introduce an auxiliary variable to improve convergence. In what
follows, we present two SGD variants with decreasing step size that can be
written in the form of (2): SHB (Gadat et al., 2018; Li et al., 2022) and a
momentum-based algorithm (Barakat et al., 2021; Barakat & Bianchi, 2021).
$\begin{cases}{\bm{\theta}}_{n+1}={\bm{\theta}}_{n}-\beta_{n+1}{\mathbf{m}}_{n}\\ {\mathbf{m}}_{n+1}={\mathbf{m}}_{n}+\beta_{n+1}(\nabla F({\bm{\theta}}_{n},X_{n+1})-{\mathbf{m}}_{n})\end{cases}\qquad\begin{cases}{\mathbf{v}}_{n+1}={\mathbf{v}}_{n}+\beta_{n+1}(\nabla F({\bm{\theta}}_{n},X_{n+1})^{2}-{\mathbf{v}}_{n})\\ {\mathbf{m}}_{n+1}={\mathbf{m}}_{n}+\beta_{n+1}(\nabla F({\bm{\theta}}_{n},X_{n+1})-{\mathbf{m}}_{n})\\ {\bm{\theta}}_{n+1}={\bm{\theta}}_{n}-\beta_{n+1}{\mathbf{m}}_{n}/\sqrt{{\mathbf{v}}_{n}+\epsilon}\end{cases}$
(a) SHB. (b) Momentum-based algorithm.
(13)
where $\epsilon>0$;
${\bm{\theta}}_{n},{\mathbf{m}}_{n},{\mathbf{v}}_{n},\nabla
F({\bm{\theta}},X)\in{\mathbb{R}}^{d}$; and the square and square root in
(13)(b) are element-wise operators. (For ease of exposition, we simplify the
original SHB and momentum-based algorithms from Gadat et al. (2018); Li et al.
(2022); Barakat et al. (2021); Barakat & Bianchi (2021) by setting all tunable
parameters to $1$, resulting in (13).)
For SHB, we introduce an augmented variable ${\mathbf{z}}_{n}$ and function
$H({\mathbf{z}}_{n},X_{n+1})$ defined as follows:
${\mathbf{z}}_{n}\triangleq\begin{bmatrix}{\bm{\theta}}_{n}\\ {\mathbf{m}}_{n}\end{bmatrix}\in{\mathbb{R}}^{2d},\quad H({\mathbf{z}}_{n},X_{n+1})\triangleq\begin{bmatrix}-{\mathbf{m}}_{n}\\ \nabla F({\bm{\theta}}_{n},X_{n+1})-{\mathbf{m}}_{n}\end{bmatrix}\in{\mathbb{R}}^{2d}.$
For the general momentum-based algorithm, we define
${\mathbf{z}}_{n}\triangleq\begin{bmatrix}{\mathbf{v}}_{n}\\ {\mathbf{m}}_{n}\\ {\bm{\theta}}_{n}\end{bmatrix}\in{\mathbb{R}}^{3d},\quad H({\mathbf{z}}_{n},X)\triangleq\begin{bmatrix}\nabla F({\bm{\theta}}_{n},X_{n+1})^{2}-{\mathbf{v}}_{n}\\ \nabla F({\bm{\theta}}_{n},X_{n+1})-{\mathbf{m}}_{n}\\ -{\mathbf{m}}_{n}/\sqrt{{\mathbf{v}}_{n}+\epsilon}\end{bmatrix}\in{\mathbb{R}}^{3d}.$
Thus, we can reformulate both algorithms in (13) as
${\mathbf{z}}_{n+1}={\mathbf{z}}_{n}+\beta_{n+1}H({\mathbf{z}}_{n},X_{n+1})$.
This augmentation approach was previously adopted in (Gadat et al., 2018;
Barakat et al., 2021; Barakat & Bianchi, 2021; Li et al., 2022) to analyze the
asymptotic performance of the algorithms in (13) using an i.i.d. sequence
$\{X_{n}\}_{n\geq 0}$. Consequently, the general SA iteration (2) includes
these SGD variants. However, we mainly focus on the CLT for the general SA
driven by SRRW in this paper; pursuing explicit CLT results for these SGD
variants, with the specific form of the function $H({\bm{\theta}},X)$ driven
by the SRRW sequence $\{X_{n}\}$, is beyond the scope of this paper.
When we numerically test the SHB algorithm in Section 4, we use the exact form
of (13)(a), with the stochastic sequence $\{X_{n}\}$ now driven by the SRRW.
Specifically, we consider the MHRW with transition kernel ${\mathbf{P}}$ as
the base Markov chain of the SRRW process, i.e.,
$P_{ij}=\begin{cases}\min\left\{\frac{1}{d_{i}},\frac{1}{d_{j}}\right\}&\text{if node $j$ is a neighbor of node $i$},\\ 0&\text{otherwise},\end{cases}\qquad P_{ii}=1-\sum_{j\neq i}P_{ij}.$
(14)
Then, at each time step $n$,
$\text{Draw:}\quad X_{n+1}\sim{\mathbf{K}}_{X_{n},\cdot}[{\mathbf{x}}_{n}],\quad\text{where}\quad K_{ij}[{\mathbf{x}}]\triangleq\frac{P_{ij}(x_{j}/\mu_{j})^{-\alpha}}{\sum_{k\in{\mathcal{N}}}P_{ik}(x_{k}/\mu_{k})^{-\alpha}},\quad\forall\,i,j\in{\mathcal{N}},$
$\text{Update:}\quad{\mathbf{x}}_{n+1}={\mathbf{x}}_{n}+\gamma_{n+1}({\bm{\delta}}_{X_{n+1}}-{\mathbf{x}}_{n}),$
$\phantom{\text{Update:}}\quad{\bm{\theta}}_{n+1}={\bm{\theta}}_{n}-\beta_{n+1}{\mathbf{m}}_{n},$
$\phantom{\text{Update:}}\quad{\mathbf{m}}_{n+1}={\mathbf{m}}_{n}+\beta_{n+1}(\nabla F({\bm{\theta}}_{n},X_{n+1})-{\mathbf{m}}_{n}).$
(15)
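Putting (14) and (15) together, a minimal sketch of SHB-SRRW reads as follows (our own illustration under the stated uniform target ${\bm{\mu}}=\frac{1}{N}{\bm{1}}$; `adj` is the graph's adjacency matrix and `grad_F` the per-node stochastic gradient, both user-supplied):

```python
import numpy as np

def mhrw(adj):
    # Base chain P in (14): MHRW targeting the uniform distribution over the
    # nodes of a simple, connected graph with adjacency matrix adj.
    deg = adj.sum(axis=1).astype(float)
    P = np.where(adj > 0, np.minimum(1.0 / deg[:, None], 1.0 / deg[None, :]), 0.0)
    np.fill_diagonal(P, 0.0)
    np.fill_diagonal(P, 1.0 - P.sum(axis=1))        # self-loop probability P_ii
    return P

def shb_srrw(adj, grad_F, theta0, n_steps, alpha=5.0, a=0.8, b=0.9, seed=0):
    # SHB-SRRW per (15): SRRW draw and empirical-measure update, followed by
    # the heavy-ball recursion of (13)(a).
    rng = np.random.default_rng(seed)
    N = adj.shape[0]
    P, mu = mhrw(adj), np.full(N, 1.0 / N)
    x, X = np.full(N, 1.0 / N), int(rng.integers(N))
    theta = np.array(theta0, dtype=float)
    m = np.zeros_like(theta)
    for n in range(n_steps):
        beta, gamma = (n + 1.0) ** (-b), (n + 1.0) ** (-a)
        w = P[X] * (x / mu) ** (-alpha)             # SRRW kernel row K_{X,.}[x]
        X = rng.choice(N, p=w / w.sum())
        e = np.zeros(N); e[X] = 1.0
        x = x + gamma * (e - x)
        # grad_F is evaluated at the old theta, matching (13)(a)/(15).
        theta, m = theta - beta * m, m + beta * (grad_F(theta, X) - m)
    return theta
```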
## Appendix B Discussion on Mean Field Function of SRRW Iterates (4b)
Non-asymptotic analyses have recently received extensive attention in both the
single-timescale SA literature (Sun et al., 2018; Karimi et al., 2019; Chen
et al., 2020b; 2022) and the two-timescale SA literature (Doan, 2021; Zeng
et al., 2021). Specifically, single-timescale SA has the following form:
${\mathbf{x}}_{n+1}={\mathbf{x}}_{n}+\beta_{n+1}H({\mathbf{x}}_{n},X_{n+1}),$
and function
$h({\mathbf{x}})\triangleq\mathbb{E}_{X\sim{\bm{\mu}}}[H({\mathbf{x}},X)]$ is
the mean field of function $H({\mathbf{x}},X)$. Similarly, for two-timescale
SA, we have the following recursions:
$\begin{split}{\mathbf{x}}_{n+1}&={\mathbf{x}}_{n}+\beta_{n+1}H_{1}({\mathbf{x}}_{n},{\mathbf{y}}_{n},X_{n+1}),\\ {\mathbf{y}}_{n+1}&={\mathbf{y}}_{n}+\gamma_{n+1}H_{2}({\mathbf{x}}_{n},{\mathbf{y}}_{n},X_{n+1}),\end{split}$
where $\{\beta_{n}\}$ and $\{\gamma_{n}\}$ are on different timescales, and
the function
$h_{i}({\mathbf{x}},{\mathbf{y}})\triangleq\mathbb{E}_{X\sim{\bm{\mu}}}[H_{i}({\mathbf{x}},{\mathbf{y}},X)]$
is the mean field of $H_{i}({\mathbf{x}},{\mathbf{y}},X)$ for
$i\in\{1,2\}$. All the aforementioned works require the mean field function
$h({\mathbf{x}})$ in single-timescale SA (or
$h_{1}({\mathbf{x}},{\mathbf{y}}),h_{2}({\mathbf{x}},{\mathbf{y}})$ in
two-timescale SA) to be globally Lipschitz with some Lipschitz constant $L$
in order to derive finite-time bounds involving $L$.
Here, we show that the mean field function
${\bm{\pi}}[{\mathbf{x}}]-{\mathbf{x}}$ in the SRRW iterates (4b) is not
globally Lipschitz, where ${\bm{\pi}}[{\mathbf{x}}]$ is the stationary
distribution of the SRRW kernel ${\mathbf{K}}[{\mathbf{x}}]$ defined in (3).
To this end, we show that entries of the Jacobian matrix of
${\bm{\pi}}[{\mathbf{x}}]-{\mathbf{x}}$ can grow unbounded, since a
differentiable multivariate function is Lipschitz if and only if it has
bounded partial derivatives. Note that from Doshi et al. (2023, Proposition
2.1), the $i$-th entry of ${\bm{\pi}}[{\mathbf{x}}]$ is given by
${\bm{\pi}}_{i}[{\mathbf{x}}]=\frac{\sum_{j\in{\mathcal{N}}}\mu_{i}P_{ij}\left(x_{i}/\mu_{i}\right)^{-\alpha}\left(x_{j}/\mu_{j}\right)^{-\alpha}}{\sum_{i\in{\mathcal{N}}}\sum_{j\in{\mathcal{N}}}\mu_{i}P_{ij}\left(x_{i}/\mu_{i}\right)^{-\alpha}\left(x_{j}/\mu_{j}\right)^{-\alpha}}.$
(16)
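For concreteness, (16) can be evaluated directly (a small sketch assuming numpy; the function name is ours), and finite differences of this function numerically approximate the partial derivatives in (17) and (18) below:

```python
import numpy as np

def pi_srrw(P, mu, x, alpha):
    # Stationary distribution pi[x] of the SRRW kernel K[x], per (16):
    # pi_i[x] is proportional to sum_j mu_i P_ij (x_i/mu_i)^-a (x_j/mu_j)^-a.
    r = (x / mu) ** (-alpha)
    w = mu[:, None] * P * r[:, None] * r[None, :]
    return w.sum(axis=1) / w.sum()
```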
Then, the Jacobian matrix of the mean field function
${\bm{\pi}}[{\mathbf{x}}]-{\mathbf{x}}$, derived in Doshi et
al. (2023, Proof of Lemma 3.4 in Appendix B), is given as follows:
$\begin{split}&\frac{\partial({\bm{\pi}}_{i}[{\mathbf{x}}]-x_{i})}{\partial
x_{j}}\\\
=&~{}\frac{2\alpha}{x_{j}}\cdot\frac{(\sum_{k\in{\mathcal{N}}}\mu_{i}P_{ik}\left(x_{i}/\mu_{i}\right)^{-\alpha}\left(x_{k}/\mu_{k}\right)^{-\alpha})(\sum_{k\in{\mathcal{N}}}\mu_{j}P_{jk}\left(x_{j}/\mu_{j}\right)^{-\alpha}\left(x_{k}/\mu_{k}\right)^{-\alpha})}{(\sum_{l\in{\mathcal{N}}}\sum_{k\in{\mathcal{N}}}\mu_{l}P_{lk}\left(x_{l}/\mu_{l}\right)^{-\alpha}\left(x_{k}/\mu_{k}\right)^{-\alpha})^{2}}\\\
&-\frac{\alpha}{x_{j}}\cdot\frac{\mu_{i}P_{ij}\left(x_{i}/\mu_{i}\right)^{-\alpha}\left(x_{j}/\mu_{j}\right)^{-\alpha}}{\sum_{l\in{\mathcal{N}}}\sum_{k\in{\mathcal{N}}}\mu_{l}P_{lk}\left(x_{l}/\mu_{l}\right)^{-\alpha}\left(x_{k}/\mu_{k}\right)^{-\alpha}}\end{split}$
(17)
for $i,j\in{\mathcal{N}},i\neq j$, and
$\begin{split}&\frac{\partial({\bm{\pi}}_{i}[{\mathbf{x}}]-x_{i})}{\partial
x_{i}}\\\
=&~{}\frac{2\alpha}{x_{i}}\cdot\frac{(\sum_{k\in{\mathcal{N}}}\mu_{i}P_{ik}\left(x_{i}/\mu_{i}\right)^{-\alpha}\left(x_{k}/\mu_{k}\right)^{-\alpha})^{2}}{(\sum_{l\in{\mathcal{N}}}\sum_{k\in{\mathcal{N}}}\mu_{l}P_{lk}\left(x_{l}/\mu_{l}\right)^{-\alpha}\left(x_{k}/\mu_{k}\right)^{-\alpha})^{2}}\\\
&-\frac{\alpha}{x_{i}}\cdot\frac{\sum_{k\in{\mathcal{N}}}\mu_{i}P_{ik}\left(x_{i}/\mu_{i}\right)^{-\alpha}\left(x_{k}/\mu_{k}\right)^{-\alpha}+\mu_{i}P_{ii}(x_{i}/\mu_{i})^{-2\alpha}}{\sum_{l\in{\mathcal{N}}}\sum_{k\in{\mathcal{N}}}\mu_{l}P_{lk}\left(x_{l}/\mu_{l}\right)^{-\alpha}\left(x_{k}/\mu_{k}\right)^{-\alpha}}-1\end{split}$
(18)
for $i\in{\mathcal{N}}$. Since the empirical distribution ${\mathbf{x}}\in\text{Int}(\Sigma)$, we have $x_{i}\in(0,1)$ for all $i\in{\mathcal{N}}$. For fixed $i$, set $x_{i}=x_{j}$ and let them approach zero; the terms $(x_{i}/\mu_{i})^{-\alpha}$, $(x_{j}/\mu_{j})^{-\alpha}$ then dominate the fraction in (17), and the numerator and the denominator of the fraction are of the same order in $x_{i},x_{j}$. Thus, we have
$\frac{\partial({\bm{\pi}}_{i}[{\mathbf{x}}]-x_{i})}{\partial x_{j}}=O\left(\frac{1}{x_{j}}\right)$
such that the $(i,j)$-th entry of the Jacobian matrix can grow without bound as $x_{j}\to 0$. Consequently, ${\bm{\pi}}[{\mathbf{x}}]-{\mathbf{x}}$ is not globally Lipschitz for ${\mathbf{x}}\in\text{Int}(\Sigma)$.
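As a numerical sanity check of this unboundedness, one can evaluate the closed-form entry (17) along a path with a vanishing coordinate. The sketch below uses an illustrative $3$-state symmetric base chain with uniform ${\bm{\mu}}$ and $\alpha=0.5$ (all our own choices); the printed $(1,2)$ entry grows without bound as $x_{2}\to 0$, consistent with the $O(1/x_{j})$ scaling.

```python
import numpy as np

def jac_entry(P, mu, x, alpha, i, j):
    """Off-diagonal Jacobian entry of pi[x] - x from eq. (17), i != j."""
    w = (x / mu) ** (-alpha)
    S = np.array([(mu[k] * P[k] * w[k] * w).sum() for k in range(len(mu))])
    T = S.sum()
    return (2 * alpha / x[j]) * S[i] * S[j] / T**2 \
        - (alpha / x[j]) * mu[i] * P[i, j] * w[i] * w[j] / T

P = np.array([[0.0, 0.5, 0.5], [0.5, 0.0, 0.5], [0.5, 0.5, 0.0]])
mu = np.ones(3) / 3
for t in (1e-2, 1e-4, 1e-6, 1e-8):
    x = np.array([(1 - t) / 2, t, (1 - t) / 2])          # x_2 -> 0
    print(t, jac_entry(P, mu, x, alpha=0.5, i=0, j=1))   # grows as t shrinks
```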
## Appendix C Discussion on Assumption A3′
When $\gamma_{n}=o(\beta_{n})$, the iterate ${\mathbf{x}}_{n}$ has a smaller step size than ${\bm{\theta}}_{n}$ and thus converges more ‘slowly’ than ${\bm{\theta}}_{n}$. Under Assumption A3′, ${\bm{\theta}}_{n}$ will intuitively converge to some point $\rho({\mathbf{x}})$ determined by the current value ${\mathbf{x}}$ of the iterate ${\mathbf{x}}_{n}$, i.e., $\mathbb{E}_{X\sim{\bm{\pi}}[{\mathbf{x}}]}[H(\rho({\mathbf{x}}),X)]=0$, while the Hurwitz condition ensures stability around $\rho({\mathbf{x}})$. Note that Assumption A3 is less stringent than A3′ in that it assumes this condition only at ${\mathbf{x}}={\bm{\mu}}$, where $\rho({\bm{\mu}})={\bm{\theta}}^{*}$, rather than for all ${\mathbf{x}}\in\text{Int}(\Sigma)$.
One special instance of Assumption A3′ arises for linear SA, e.g., $H({\bm{\theta}},i)=A_{i}{\bm{\theta}}+b_{i}$. In this case,
$\mathbb{E}_{X\sim{\bm{\pi}}[{\mathbf{x}}]}[H(\rho({\mathbf{x}}),X)]=0$ is
equivalent to
$\mathbb{E}_{i\sim{\bm{\pi}}[{\mathbf{x}}]}[A_{i}]\rho({\mathbf{x}})+\mathbb{E}_{i\sim{\bm{\pi}}[{\mathbf{x}}]}[b_{i}]=0$.
Under the condition that for every ${\mathbf{x}}\in\text{Int}(\Sigma)$, matrix
$\mathbb{E}_{i\sim{\bm{\pi}}[{\mathbf{x}}]}[A_{i}]$ is invertible, we then
have
$\rho({\mathbf{x}})=-(\mathbb{E}_{i\sim{\bm{\pi}}[{\mathbf{x}}]}[A_{i}])^{-1}\cdot\mathbb{E}_{i\sim{\bm{\pi}}[{\mathbf{x}}]}[b_{i}].$
However, this condition is quite strict. Loosely speaking,
$\mathbb{E}_{i\sim{\bm{\pi}}[{\mathbf{x}}]}[A_{i}]$ being invertible for any
${\mathbf{x}}$ is similar to saying that any convex combination of
$\\{A_{i}\\}$ is invertible. For example, if we assume $\\{A_{i}\\}_{i\in{\mathcal{N}}}$ are negative definite and all share the same eigenbasis $\\{{\mathbf{u}}_{j}\\}_{j\in[D]}$, i.e., $A_{i}=\sum_{j=1}^{D}\lambda^{i}_{j}{\mathbf{u}}_{j}{\mathbf{u}}_{j}^{T}$ with $\lambda_{j}^{i}<0$ for all $i\in{\mathcal{N}},j\in[D]$, then $\mathbb{E}_{i\sim{\bm{\pi}}[{\mathbf{x}}]}[A_{i}]$ is invertible, as shown in the sketch below.
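The claim in this example can be checked numerically. In the sketch below (the random shared eigenbasis and Dirichlet weights are purely illustrative), any convex combination of negative definite matrices with a common eigenbasis remains negative definite, hence invertible.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N = 4, 5
U, _ = np.linalg.qr(rng.standard_normal((D, D)))    # shared orthonormal eigenbasis
# A_i = sum_j lambda_j^i u_j u_j^T with lambda_j^i < 0, i.e., negative definite
A = [U @ np.diag(-rng.uniform(0.1, 2.0, D)) @ U.T for _ in range(N)]
pi = rng.dirichlet(np.ones(N))                      # arbitrary convex weights
M = sum(p * Ai for p, Ai in zip(pi, A))
print(np.linalg.eigvalsh(M))    # all strictly negative, so M is invertible
```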
Another example satisfying Assumption A3′ arises when $H({\bm{\theta}},i)=H({\bm{\theta}},j)$ for all $i,j\in{\mathcal{N}}$, which corresponds to distributed learning where every agent holds the same local dataset and collaboratively trains the model. In this example,
$\rho({\mathbf{x}})={\bm{\theta}}^{*}$ such that
$\mathbb{E}_{i\sim{\bm{\pi}}[{\mathbf{x}}]}[H(\rho({\mathbf{x}}),i)]=h({\bm{\theta}}^{*})=0,$
$\mathbb{E}_{i\sim{\bm{\pi}}[{\mathbf{x}}]}[\nabla
H(\rho({\mathbf{x}}),i)]+\frac{\mathds{1}_{\\{b=1\\}}}{2}{\mathbf{I}}=\nabla
h({\bm{\theta}}^{*})+\frac{\mathds{1}_{\\{b=1\\}}}{2}{\mathbf{I}}\quad\text{being
Hurwitz}.$
## Appendix D Proof of Lemma 3.1 and Lemma 3.2
In this section, we demonstrate the almost sure convergence of both ${\bm{\theta}}_{n}$ and ${\mathbf{x}}_{n}$ together. This proof naturally incorporates the almost sure convergence of the SRRW iteration in Lemma 3.1: since ${\mathbf{x}}_{n}$ evolves independently of ${\bm{\theta}}_{n}$ (as indicated in (4)), we can separate out its asymptotic results. The same reasoning applies to the CLT analysis of the SRRW iterates, and we refer the reader to Section E.1 for the CLT result of ${\mathbf{x}}_{n}$ in Lemma 3.1.
We will use different techniques for different settings of step sizes in
Assumption A2. Specifically, for step sizes
$\gamma_{n}=(n+1)^{-a},\beta_{n}=(n+1)^{-b}$, we consider the following
scenarios:
Scenario 1:
We consider case (ii): $1/2<a=b\leq 1$, and will apply the almost sure
convergence result of the single-timescale stochastic approximation in Theorem
G.8 and verify all the conditions therein.
Scenario 2:
We consider both case (i): $1/2<a<b\leq 1$ and case (iii): $1/2<b<a\leq 1$. In
these two cases, step sizes $\gamma_{n},\beta_{n}$ decrease at different
rates, thereby putting iterates ${\mathbf{x}}_{n},{\bm{\theta}}_{n}$ on
different timescales and resulting in a two-timescale structure. We will apply
the existing almost sure convergence result of the two-timescale stochastic
approximation with iterate-dependent Markov chain in Yaji & Bhatnagar (2020,
Theorem 4) where our SA-SRRW algorithm can be regarded as a special
instance.111111However, Yaji & Bhatnagar (2020) only analysed almost sure convergence; a central limit theorem for two-timescale stochastic approximation with iterate-dependent Markov chains remains unknown in the literature. Thus, our CLT analysis in Section E for this two-timescale structure with an iterate-dependent Markov chain is still novel and constitutes part of our contribution.
### D.1 Scenario 1
In Scenario 1, we have $\beta_{n}=\gamma_{n}$. First, we rewrite (4) as
$\begin{bmatrix}{\bm{\theta}}_{n+1}\\\
{\mathbf{x}}_{n+1}\end{bmatrix}=\begin{bmatrix}{\bm{\theta}}_{n}\\\
{\mathbf{x}}_{n}\end{bmatrix}+\gamma_{n+1}\begin{bmatrix}H({\bm{\theta}}_{n},X_{n+1})\\\
{\bm{\delta}}_{X_{n+1}}-{\mathbf{x}}_{n}\end{bmatrix}.$ (19)
By augmentation, we define the variable ${\mathbf{z}}_{n}\triangleq\begin{bmatrix}{\bm{\theta}}_{n}\\\
{\mathbf{x}}_{n}\end{bmatrix}\in{\mathbb{R}}^{(N+D)\times 1}$ and the function $G({\mathbf{z}}_{n},i)\triangleq\begin{bmatrix}H({\bm{\theta}}_{n},i)\\\
{\bm{\delta}}_{i}-{\mathbf{x}}_{n}\end{bmatrix}\in{\mathbb{R}}^{(N+D)\times 1}$. In addition, we define a new Markov chain $\\{Y_{n}\\}_{n\geq 0}$ on the same state space ${\mathcal{N}}$ as the SRRW sequence $\\{X_{n}\\}_{n\geq 0}$.
With slight abuse of notation, the transition kernel of $\\{Y_{n}\\}$ is
denoted by
${\mathbf{K}}^{\prime}[{\mathbf{z}}_{n}]\equiv{\mathbf{K}}[{\mathbf{x}}_{n}]$
and its stationary distribution
${\bm{\pi}}^{\prime}[{\mathbf{z}}_{n}]\equiv{\bm{\pi}}[{\mathbf{x}}_{n}]$,
where ${\mathbf{K}}[{\mathbf{x}}_{n}]$ and ${\bm{\pi}}[{\mathbf{x}}_{n}]$ are
the transition kernel and its corresponding stationary distribution of SRRW,
with ${\bm{\pi}}[{\mathbf{x}}]$ of the form
$\pi_{i}[{\mathbf{x}}]\propto\sum_{j\in{\mathcal{N}}}\mu_{i}P_{ij}(x_{i}/\mu_{i})^{-\alpha}(x_{j}/\mu_{j})^{-\alpha}.$
(20)
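As a quick numerical sanity check of (20), the sketch below (an illustrative $3$-state symmetric base chain with uniform ${\bm{\mu}}$; our own choices) verifies both that the distribution in (20) is stationary for the SRRW kernel and that ${\bm{\pi}}[{\bm{\mu}}]={\bm{\mu}}$.

```python
import numpy as np

def srrw_kernel(P, mu, x, alpha):
    w = (x / mu) ** (-alpha)
    K = P * w[None, :]                  # K_ij[x] prop. to P_ij (x_j/mu_j)^(-alpha)
    return K / K.sum(axis=1, keepdims=True)

def pi_x(P, mu, x, alpha):
    """Candidate stationary distribution from eq. (20)."""
    w = (x / mu) ** (-alpha)
    v = np.array([(mu[i] * P[i] * w[i] * w).sum() for i in range(len(mu))])
    return v / v.sum()

P = np.array([[0.0, 0.5, 0.5], [0.5, 0.0, 0.5], [0.5, 0.5, 0.0]])
mu = np.ones(3) / 3
x = np.array([0.2, 0.3, 0.5])
pi = pi_x(P, mu, x, alpha=2.0)
print(np.allclose(pi @ srrw_kernel(P, mu, x, 2.0), pi))   # stationarity: True
print(np.allclose(pi_x(P, mu, mu, 2.0), mu))              # fixed point: True
```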
Recall that ${\bm{\mu}}$ is the fixed point, i.e.,
${\bm{\pi}}[{\bm{\mu}}]={\bm{\mu}}$, and ${\mathbf{P}}$ is the base Markov
chain inside SRRW (see (3)). Then, the mean field
$g({\mathbf{z}})=\mathbb{E}_{Y\sim{\bm{\pi}}^{\prime}({\mathbf{z}})}[G({\mathbf{z}},Y)]=\begin{bmatrix}\sum_{i\in{\mathcal{N}}}\pi_{i}[{\mathbf{x}}]H({\bm{\theta}},i)\\\
{\bm{\pi}}[{\mathbf{x}}]-{\mathbf{x}}\end{bmatrix},$
and ${\mathbf{z}}^{*}=({\bm{\theta}}^{*},{\bm{\mu}})$ for
${\bm{\theta}}^{*}\in\Theta$ in Assumption A3 is the root of
$g({\mathbf{z}})$, i.e., $g({\mathbf{z}}^{*})=0$. The augmented iteration (19)
becomes
${\mathbf{z}}_{n+1}={\mathbf{z}}_{n}+\gamma_{n+1}G({\mathbf{z}}_{n},Y_{n+1})$
(21)
with the goal of solving $g({\mathbf{z}})=0$. Therefore, we can treat (21) as
an SA algorithm driven by a Markov chain $\\{Y_{n}\\}_{n\geq 0}$ with its
kernel ${\mathbf{K}}^{\prime}[{\mathbf{z}}]$ and stationary distribution
${\bm{\pi}}^{\prime}[{\mathbf{z}}]$, which has been widely studied in the
literature (e.g., Delyon (2000); Benveniste et al. (2012); Fort (2015); Li et
al. (2023)). In what follows, we demonstrate that for any initial point
${\mathbf{z}}_{0}=({\bm{\theta}}_{0},{\mathbf{x}}_{0})\in{\mathbb{R}}^{D}\times\text{Int}(\Sigma)$,
the SRRW iteration $\\{{\mathbf{x}}_{n}\\}_{n\geq 0}$ will almost surely
converge to the target distribution ${\bm{\mu}}$, and the SA iteration
$\\{{\bm{\theta}}_{n}\\}_{n\geq 0}$ will almost surely converge to the set
$\Theta$.
Now we verify conditions C1 - C4 in Theorem G.8. Our assumption A4 is equivalent to condition C1, and assumption A2 corresponds to condition C2. For condition C3, we set $\nabla w({\mathbf{z}})\equiv-g({\mathbf{z}})$ and the set $S\equiv\\{{\mathbf{z}}^{*}|{\bm{\theta}}^{*}\in\Theta,{\mathbf{x}}^{*}={\bm{\mu}}\\}$, which consists of disjoint points. For condition C4, since
${\mathbf{K}}^{\prime}[{\mathbf{z}}]$, or equivalently
${\mathbf{K}}[{\mathbf{x}}]$, is ergodic and time-reversible for a given
${\mathbf{z}}$, as shown in the SRRW work Doshi et al. (2023), it
automatically ensures a solution to the Poisson equation, which has been well
discussed in Chen et al. (2020a, Section 2) and Benveniste et al. (2012); Meyn
(2022). To show (97) and (98) in condition C4, for each given ${\mathbf{z}}$
and any $i\in{\mathcal{N}}$, we need to give the explicit solution
$m_{{\mathbf{z}}}(i)$ to the Poisson equation
$m_{{\mathbf{z}}}(i)-({\mathbf{K}}^{\prime}_{{\mathbf{z}}}m_{{\mathbf{z}}})(i)=G({\mathbf{z}},i)-g({\mathbf{z}})$
in (96). The notation
$({\mathbf{K}}^{\prime}_{{\mathbf{z}}}m_{{\mathbf{z}}})(i)$ is defined as
follows.
$({\mathbf{K}}^{\prime}_{{\mathbf{z}}}m_{{\mathbf{z}}})(i)=\sum_{j\in{\mathcal{N}}}{\mathbf{K}}^{\prime}_{{\mathbf{z}}}(i,j)m_{{\mathbf{z}}}(j).$
Let
${\mathbf{G}}({\mathbf{z}})\triangleq[G({\mathbf{z}},1),\cdots,G({\mathbf{z}},N)]^{T}\in{\mathbb{R}}^{N\times
D}$. We use $[{\mathbf{A}}]_{:,i}$ to denote the $i$-th column of matrix
${\mathbf{A}}$. Then, we let $m_{{\mathbf{z}}}(i)$ such that
$m_{{\mathbf{z}}}(i)=\sum_{k=0}^{\infty}\left([{\mathbf{G}}({\mathbf{z}})({\mathbf{K}}^{\prime}[{\mathbf{z}}]^{k})^{T}]_{[:,i]}-g({\mathbf{z}})\right)=\sum_{k=0}^{\infty}[{\mathbf{G}}({\mathbf{z}})(({\mathbf{K}}^{\prime}[{\mathbf{z}}]^{k})^{T}-{\bm{\pi}}^{\prime}[{\mathbf{z}}]{\bm{1}}^{T})]_{[:,i]}.$
(22)
In addition,
$({\mathbf{K}}^{\prime}_{{\mathbf{z}}}m_{{\mathbf{z}}})(i)=\sum_{k=1}^{\infty}[{\mathbf{G}}({\mathbf{z}})(({\mathbf{K}}^{\prime}[{\mathbf{z}}]^{k})^{T}-{\bm{\pi}}^{\prime}[{\mathbf{z}}]{\bm{1}}^{T})]_{[:,i]}.$
(23)
We can check that the $m_{{\mathbf{z}}}(i)$ form in (22) is indeed the
solution of the above Poisson equation. Now, by induction, we get
${\mathbf{K}}^{\prime}[{\mathbf{z}}]^{k}-{\bm{1}}{\bm{\pi}}^{\prime}[{\mathbf{z}}]^{T}=({\mathbf{K}}^{\prime}[{\mathbf{z}}]-{\bm{1}}{\bm{\pi}}^{\prime}[{\mathbf{z}}]^{T})^{k}$
for $k\geq 1$ and for $k=0$,
${\mathbf{K}}^{\prime}[{\mathbf{z}}]^{0}-{\bm{1}}{\bm{\pi}}^{\prime}[{\mathbf{z}}]^{T}=({\mathbf{K}}^{\prime}[{\mathbf{z}}]-{\bm{1}}{\bm{\pi}}^{\prime}[{\mathbf{z}}]^{T})^{0}-{\bm{1}}{\bm{\pi}}^{\prime}[{\mathbf{z}}]^{T}$.
Then,
$\begin{split}m_{{\mathbf{z}}}(i)=&\sum_{k=0}^{\infty}[{\mathbf{G}}({\mathbf{z}})({\mathbf{K}}^{\prime}[{\mathbf{z}}]^{T}-{\bm{\pi}}^{\prime}[{\mathbf{z}}]{\bm{1}}^{T})^{k}]_{[:,i]}-g({\mathbf{z}})\\\
=&\left[{\mathbf{G}}({\mathbf{z}})\sum_{k=0}^{\infty}({\mathbf{K}}^{\prime}[{\mathbf{z}}]^{T}-{\bm{\pi}}^{\prime}[{\mathbf{z}}]{\bm{1}}^{T})^{k}\right]_{[:,i]}-g({\mathbf{z}})\\\
=&\left[{\mathbf{G}}({\mathbf{z}})({\mathbf{I}}-{\mathbf{K}}^{\prime}[{\mathbf{z}}]^{T}+{\bm{\pi}}^{\prime}[{\mathbf{z}}]{\bm{1}}^{T})^{-1}\right]_{[:,i]}-g({\mathbf{z}})\\\
=&\sum_{j\in{\mathcal{N}}}({\mathbf{I}}-{\mathbf{K}}^{\prime}[{\mathbf{z}}]+{\bm{1}}{\bm{\pi}}^{\prime}[{\mathbf{z}}]^{T})^{-1}(i,j)G({\mathbf{z}},j)-g({\mathbf{z}}).\end{split}$
(24)
Here,
$({\mathbf{I}}-{\mathbf{K}}^{\prime}[{\mathbf{z}}]+{\bm{1}}{\bm{\pi}}^{\prime}[{\mathbf{z}}]^{T})^{-1}$
is well defined because ${\mathbf{K}}^{\prime}[{\mathbf{z}}]$ is ergodic and
time-reversible for any given ${\mathbf{z}}$ (proved in Doshi et al. (2023,
Appendix A)). Since both functions $H({\bm{\theta}},i)$ and ${\bm{\delta}}_{i}-{\mathbf{x}}$ are bounded on each compact subset of ${\mathbb{R}}^{D}\times\Sigma$ by our assumption A1, the function $G({\mathbf{z}},i)$ is also bounded on each compact subset of its domain. Thus, the function $m_{{\mathbf{z}}}(i)$ is bounded, and (97) is verified.
Moreover, for a fixed $i\in{\mathcal{N}}$,
$\sum_{j\in{\mathcal{N}}}({\mathbf{I}}-{\mathbf{K}}^{\prime}[{\mathbf{z}}]+{\bm{1}}{\bm{\pi}}^{\prime}[{\mathbf{z}}]^{T})^{-1}(i,j){\bm{\delta}}_{j}=({\mathbf{I}}-{\mathbf{K}}^{\prime}[{\mathbf{z}}]+{\bm{1}}{\bm{\pi}}^{\prime}[{\mathbf{z}}]^{T})^{-1}_{[:,i]}=({\mathbf{I}}-{\mathbf{K}}[{\mathbf{x}}]+{\bm{1}}{\bm{\pi}}[{\mathbf{x}}]^{T})^{-1}_{[:,i]}$
and this vector-valued function is continuous in ${\mathbf{x}}$ because
${\mathbf{K}}[{\mathbf{x}}],{\bm{\pi}}[{\mathbf{x}}]$ are continuous. We then
rewrite (24) as
$m_{{\mathbf{z}}}(i)=\begin{bmatrix}\sum_{j\in{\mathcal{N}}}({\mathbf{I}}-{\mathbf{K}}[{\mathbf{x}}]+{\bm{1}}{\bm{\pi}}[{\mathbf{x}}]^{T})^{-1}(i,j)H({\mathbf{x}},j)\\\
({\mathbf{I}}-{\mathbf{K}}[{\mathbf{x}}]^{T}+{\bm{\pi}}[{\mathbf{x}}]{\bm{1}}^{T})^{-1}_{[:,i]}\end{bmatrix}-\begin{bmatrix}\sum_{i\in{\mathcal{N}}}\pi_{i}[{\mathbf{x}}]H({\bm{\theta}},i)\\\
{\bm{\pi}}[{\mathbf{x}}]-{\mathbf{x}}\end{bmatrix}.$
With the continuous functions $H({\bm{\theta}},i),{\mathbf{K}}[{\mathbf{x}}],{\bm{\pi}}[{\mathbf{x}}]$, the function $m_{{\mathbf{z}}}(i)$ is continuous with respect to ${\mathbf{z}}$, and so is $({\mathbf{K}}^{\prime}_{{\mathbf{z}}}m_{{\mathbf{z}}})(i)$. This implies that
functions $m_{{\mathbf{z}}}(i)$ and
$({\mathbf{K}}^{\prime}_{{\mathbf{z}}}m_{{\mathbf{z}}})(i)$ are locally
Lipschitz, which satisfies (98) with
$\phi_{{\mathcal{C}}}(x)=C_{{\mathcal{C}}}x$ for some constant
$C_{{\mathcal{C}}}$ that depends on the compact set ${\mathcal{C}}$.
Therefore, condition C4 is checked, and we can apply Theorem G.8 to show the almost sure convergence result of (19), i.e., almost surely,
$\lim_{n\to\infty}{\mathbf{x}}_{n}={\bm{\mu}},\quad\text{and}~{}~{}~{}~{}\limsup_{n\to\infty}\inf_{{\bm{\theta}}^{*}\in\Theta}\|{\bm{\theta}}_{n}-{\bm{\theta}}^{*}\|=0.$
Therefore, the almost sure convergence of ${\mathbf{x}}_{n}$ in Lemma 3.1 is
also proved. This finishes the proof in Scenario 1.
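The Poisson-equation construction in (22)-(24) is easy to verify numerically. The following sketch (a random ergodic kernel and a scalar test function, both illustrative) checks that $m=({\mathbf{I}}-{\mathbf{K}}+{\bm{1}}{\bm{\pi}}^{T})^{-1}G-g{\bm{1}}$ indeed satisfies the Poisson equation $m-{\mathbf{K}}m=G-g{\bm{1}}$.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4
K = rng.random((N, N))
K /= K.sum(axis=1, keepdims=True)               # ergodic transition kernel
evals, evecs = np.linalg.eig(K.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1))])
pi /= pi.sum()                                   # stationary distribution of K
G = rng.standard_normal(N)                       # scalar test function G(i)
g = pi @ G                                       # its mean field
m = np.linalg.solve(np.eye(N) - K + np.outer(np.ones(N), pi), G) - g
print(np.allclose(m - K @ m, G - g))             # Poisson equation holds: True
```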
### D.2 Scenario 2
Now in this subsection, we consider the step sizes $\gamma_{n},\beta_{n}$ with either $1/2<a<b\leq 1$ or $1/2<b<a\leq 1$. We will frequently use assumptions (B1) - (B5) in Section G.3 and Theorem G.10 to prove the almost sure convergence.
#### D.2.1 Case (i): $1/2<a<b\leq 1$
In case (i), ${\bm{\theta}}_{n}$ is on the slow timescale and
${\mathbf{x}}_{n}$ is on the fast timescale because iteration
${\bm{\theta}}_{n}$ has smaller step size than ${\mathbf{x}}_{n}$, making
${\bm{\theta}}_{n}$ converge slower than ${\mathbf{x}}_{n}$. Here, we consider
the two-timescale SA of the form:
${\bm{\theta}}_{n+1}={\bm{\theta}}_{n}+\beta_{n+1}H({\bm{\theta}}_{n},X_{n+1}),$
(25)
${\mathbf{x}}_{n+1}={\mathbf{x}}_{n}+\gamma_{n+1}({\bm{\delta}}_{X_{n+1}}-{\mathbf{x}}_{n}).$
Now, we verify assumptions (B1) - (B5) listed in Section G.3.
* •
Assumptions (B1) and (B5) are satisfied by our assumptions A2 and A4.
* •
Our assumption A3 shows that the function $H({\bm{\theta}},X)$ is continuous
and differentiable w.r.t ${\bm{\theta}}$ and grows linearly with
$\|{\bm{\theta}}\|$. In addition, ${\bm{\delta}}_{X}-{\mathbf{x}}$ also
satisfies this property. Therefore, (B2) is satisfied.
* •
Now that the function ${\bm{\pi}}[{\mathbf{x}}]-{\mathbf{x}}$ is independent
of ${\bm{\theta}}$, we can set $\rho({\bm{\theta}})={\bm{\mu}}$ for any
${\bm{\theta}}\in{\mathbb{R}}^{D}$ such that
${\bm{\pi}}[{\bm{\mu}}]-{\bm{\mu}}=0$ from Doshi et al. (2023, Proposition
3.1), and
$\nabla_{{\mathbf{x}}}({\bm{\pi}}[{\mathbf{x}}]-{\mathbf{x}})|_{{\mathbf{x}}={\bm{\mu}}}=2\alpha{\bm{\mu}}{\bm{1}}^{T}-\alpha{\mathbf{P}}^{T}-(\alpha+1){\mathbf{I}}$
from Doshi et al. (2023, Lemma 3.4), which is Hurwitz. Furthermore,
$\rho({\bm{\theta}})={\bm{\mu}}$ inherently satisfies the condition
$\|\rho({\bm{\theta}})\|\leq L_{2}(1+\|{\bm{\theta}}\|)$ for any
$L_{2}\geq\|{\bm{\mu}}\|$. Thus, conditions (i) - (iii) in (B3) are satisfied.
Additionally,
$\sum_{i\in{\mathcal{N}}}{\bm{\pi}}_{i}[\rho({\bm{\theta}})]H({\bm{\theta}},i)=\sum_{i\in{\mathcal{N}}}\mu_{i}H({\bm{\theta}},i)={\mathbf{h}}({\bm{\theta}})$
such that for ${\bm{\theta}}^{*}\in\Theta$ defined in assumption A3,
$\sum_{i\in{\mathcal{N}}}{\bm{\pi}}_{i}[\rho({\bm{\theta}}^{*})]H({\bm{\theta}}^{*},i)={\mathbf{h}}({\bm{\theta}}^{*})=0$,
and $\nabla_{{\bm{\theta}}}{\mathbf{h}}({\bm{\theta}}^{*})$ is Hurwitz.
Therefore, (B3) is checked.
* •
Assumption (B4) is verified by the nature of SRRW, i.e., its transition kernel
${\mathbf{K}}[{\mathbf{x}}]$ and the corresponding stationary distribution
${\bm{\pi}}[{\mathbf{x}}]$ with ${\bm{\pi}}[{\bm{\mu}}]={\bm{\mu}}$.
Consequently, assumptions (B1) - (B5) are satisfied by our assumptions A1 \-
A4 and by Theorem G.10, we have $\lim_{n\to\infty}{\mathbf{x}}_{n}={\bm{\mu}}$
and ${\bm{\theta}}_{n}\to\Theta$ almost surely.
#### D.2.2 Case (iii): $1/2<b<a\leq 1$
Next, we consider $1/2<b<a\leq 1$. As discussed before, (B1), (B2), (B4) and
(B5) are satisfied by our assumptions A1 \- A4 and the properties of SRRW. The
only difference for this step size setting, compared to the previous one
$1/2<a<b\leq 1$, is that the roles of ${\bm{\theta}}_{n},{\mathbf{x}}_{n}$ are
now flipped, that is, ${\bm{\theta}}_{n}$ is now on the fast timescale while
${\mathbf{x}}_{n}$ is on the slow timescale. By a much stronger Assumption
A3′, for any ${\mathbf{x}}\in\text{Int}(\Sigma)$, (i)
$\mathbb{E}_{X\sim{\bm{\pi}}[{\mathbf{x}}]}[H(\rho({\mathbf{x}}),X)]=0$; (ii)
$\mathbb{E}_{X\sim{\bm{\pi}}[{\mathbf{x}}]}[\nabla H(\rho({\mathbf{x}}),X)]$
is Hurwitz; (iii) $\|\rho({\mathbf{x}})\|\leq L_{2}(1+\|{\mathbf{x}}\|)$.
Hence, conditions (i) - (iii) in (B3) are satisfied. Moreover, we have
${\bm{\pi}}[{\bm{\mu}}]-{\bm{\mu}}=0$,
$\nabla({\bm{\pi}}[{\mathbf{x}}]-{\mathbf{x}})|_{{\mathbf{x}}={\bm{\mu}}}$
being Hurwitz, as mentioned in the previous part. Therefore, (B3) is verified.
Accordingly, (B1) - (B5) are checked by our assumptions A1, A2, A3′, A4. By
Theorem G.10, we have $\lim_{n\to\infty}{\mathbf{x}}_{n}={\bm{\mu}}$ and
${\bm{\theta}}_{n}\to\Theta$ almost surely.
## Appendix E Proof of Theorem 3.3
This section is devoted to the proof of Theorem 3.3, which also includes the
proof of the CLT results for the SRRW iteration ${\mathbf{x}}_{n}$ in Lemma
3.1. We will use different techniques depending on the step sizes in
Assumption A2. Specifically, for step sizes
$\gamma_{n}=(n+1)^{-a},\beta_{n}=(n+1)^{-b}$, we will consider three cases:
case (i): $\beta_{n}=o(\gamma_{n})$; case (ii): $\beta_{n}=\gamma_{n}$; and
case (iii): $\gamma_{n}=o(\beta_{n})$. For case (ii), we will use the existing
CLT result for single-timescale SA in Theorem G.9. For cases (i) and (iii), we
will construct our own CLT analysis for the two-timescale structure. We start
with case (ii).
### E.1 Case (ii): $\beta_{n}=\gamma_{n}$
In this part, we stick to the notations for single-timescale SA studied in
Section D.1. To utilize Theorem G.9, apart from Conditions C1 - C4 that have
been checked in Section D.1, we still need to check conditions C5 and C6
listed in Section G.2.
Assumption A3 corresponds to condition C5. For condition C6, we need to obtain
the explicit form of function $Q_{{\mathbf{z}}}$ to the Poisson equation
defined in (96), that is,
$Q_{{\mathbf{z}}}(i)-({\mathbf{K}}^{\prime}_{{\mathbf{z}}}Q_{{\mathbf{z}}})(i)=\psi({\mathbf{z}},i)-\mathbb{E}_{j\sim{\bm{\pi}}[{\mathbf{z}}]}[\psi({\mathbf{z}},j)]$
where
$\psi({\mathbf{z}},i)\triangleq\sum_{j\in{\mathcal{N}}}{\mathbf{K}}^{\prime}_{{\mathbf{z}}}(i,j)m_{{\mathbf{z}}}(j)m_{{\mathbf{z}}}(j)^{T}-({\mathbf{K}}^{\prime}_{{\mathbf{z}}}m_{{\mathbf{z}}})(i)({\mathbf{K}}^{\prime}_{{\mathbf{z}}}m_{{\mathbf{z}}})(i)^{T}.$
Following the similar steps in the derivation of $m_{{\mathbf{z}}}(i)$ from
(22) to (24), we have
$Q_{{\mathbf{z}}}(i)=\sum_{j\in{\mathcal{N}}}({\mathbf{I}}-{\mathbf{K}}^{\prime}[{\mathbf{z}}]+{\bm{1}}{\bm{\pi}}^{\prime}[{\mathbf{z}}]^{T})^{-1}(i,j)\psi({\mathbf{z}},j)-\mathbb{E}_{j\sim{\bm{\pi}}^{\prime}[{\mathbf{z}}]}[\psi({\mathbf{z}},j)].$
We also know that $Q_{{\mathbf{z}}}(i)$ and
$({\mathbf{K}}^{\prime}_{{\mathbf{z}}}Q_{{\mathbf{z}}})(i)$ are continuous in
${\mathbf{z}}$ for any $i\in{\mathcal{N}}$. For any ${\mathbf{z}}$ in a
compact set $\Omega$, $Q_{{\mathbf{z}}}(i)$ and
$({\mathbf{K}}^{\prime}_{{\mathbf{z}}}Q_{{\mathbf{z}}})(i)$ are bounded
because function $m_{{\mathbf{z}}}(i)$ is bounded. Therefore, C6 is checked.
By Theorem G.9, assuming ${\mathbf{z}}_{n}=\begin{bmatrix}{\bm{\theta}}_{n}\\\
{\mathbf{x}}_{n}\end{bmatrix}$ converges to a point ${\mathbf{z}}^{*}=\begin{bmatrix}{\bm{\theta}}^{*}\\\ {\bm{\mu}}\end{bmatrix}$ for ${\bm{\theta}}^{*}\in\Theta$, we have
$\gamma_{n}^{-1/2}({\mathbf{z}}_{n}-{\mathbf{z}}^{*})\xrightarrow[n\to\infty]{dist.}N(0,{\mathbf{V}}),$
(26)
where ${\mathbf{V}}$ is the solution of the following Lyapunov equation
${\mathbf{V}}\left(\frac{\mathds{1}_{\\{b=1\\}}}{2}{\mathbf{I}}+\nabla
g({\bm{z}}^{*})^{T}\right)+\left(\frac{\mathds{1}_{\\{b=1\\}}}{2}{\mathbf{I}}+\nabla
g({\bm{z}}^{*})\right){\mathbf{V}}+{\mathbf{U}}=0,$ (27)
and
${\mathbf{U}}=\sum_{i\in{\mathcal{N}}}\mu_{i}\left(m_{{\mathbf{z}}^{*}}(i)m_{{\mathbf{z}}^{*}}(i)^{T}-({\mathbf{K}}_{{\mathbf{z}}^{*}}m_{{\mathbf{z}}^{*}})(i)({\mathbf{K}}_{{\mathbf{z}}^{*}}m_{{\mathbf{z}}^{*}})(i)^{T}\right)$.
By algebraic calculation of the derivative of ${\bm{\pi}}[{\mathbf{x}}]$ with respect to ${\mathbf{x}}$ in (20),121212One may refer to Doshi et al. (2023, Appendix B, Proof of Lemma 3.4) for the computation of $\frac{\partial({\bm{\pi}}[{\mathbf{x}}]-{\mathbf{x}})}{\partial{\mathbf{x}}}$. we can rewrite $\nabla g({\mathbf{z}}^{*})$ in terms of ${\mathbf{x}},{\bm{\theta}}$, i.e.,
$\begin{split}{\mathbf{J}}(\alpha)\triangleq\nabla
g({\mathbf{z}}^{*})&=\begin{bmatrix}\frac{\partial\sum_{i\in{\mathcal{N}}}\pi_{i}[{\mathbf{x}}]H({\bm{\theta}},i)}{\partial{\bm{\theta}}}&\frac{\partial\sum_{i\in{\mathcal{N}}}\pi_{i}[{\mathbf{x}}]H({\bm{\theta}},i)}{\partial{\mathbf{x}}}\\\
\frac{\partial({\bm{\pi}}[{\mathbf{x}}]-{\mathbf{x}})}{\partial{\bm{\theta}}}&\frac{\partial{\bm{\pi}}[{\mathbf{x}}]-{\mathbf{x}}}{\partial{\mathbf{x}}}\end{bmatrix}_{{\mathbf{z}}={\mathbf{z}}^{*}}\\\
&=\begin{bmatrix}\nabla{\mathbf{h}}({\bm{\theta}}^{*})&-\alpha{\mathbf{H}}^{T}({\mathbf{P}}^{T}+{\mathbf{I}})\\\
{\bm{0}}&2\alpha\bm{\mu}{\bm{1}}^{T}-\alpha{\mathbf{P}}^{T}-(\alpha+1){\mathbf{I}}\end{bmatrix}\triangleq\begin{bmatrix}{\mathbf{J}}_{11}&{\mathbf{J}}_{12}(\alpha)\\\
{\mathbf{J}}_{21}&{\mathbf{J}}_{22}(\alpha)\end{bmatrix},\end{split}$
where matrix ${\mathbf{H}}=[H({\bm{\theta}}^{*},1),\cdots,H({\bm{\theta}}^{*},N)]^{T}$. Then, we further characterize the matrix ${\mathbf{U}}$. Note that
$m_{{\mathbf{z}}^{*}}(i)=\sum_{k=0}^{\infty}[{\mathbf{G}}({\mathbf{z}}^{*})(({\mathbf{P}}^{k})^{T}-{\bm{\mu}}{\bm{1}}^{T})]_{[:,i]}=\sum_{k=0}^{\infty}[{\mathbf{G}}({\mathbf{z}}^{*})({\mathbf{P}}^{k})^{T}]_{[:,i]}=\mathbb{E}\left[\left.\sum_{k=0}^{\infty}[G({\mathbf{z}}^{*},X_{k})]\right|X_{0}=i\right]\\!\\!,$
(28)
where the first equality holds because
${\mathbf{K}}^{\prime}[{\bm{\mu}}]={\mathbf{P}}$ from the definition of SRRW
kernel (3), the second equality stems from
${\mathbf{G}}({\mathbf{z}}^{*}){\bm{\mu}}=g({\mathbf{z}}^{*})=0$, and the last
term is a conditional expectation over the base Markov chain
$\\{X_{k}\\}_{k\geq 0}$ (with transition kernel ${\mathbf{P}}$) conditioned on
$X_{0}=i$. Similarly, with
$({\mathbf{K}}^{\prime}_{{\mathbf{z}}}m_{{\mathbf{z}}})(i)$ in the form of
(23), we have
$({\mathbf{K}}^{\prime}_{{\mathbf{z}}}m_{{\mathbf{z}}})(i)=\mathbb{E}\left[\left.\sum_{k=1}^{\infty}[G({\mathbf{z}}^{*},X_{k})]\right|X_{0}=i\right].$
The form ‘$\sum_{i\in{\mathcal{N}}}\mu_{i}$’ inside the matrix ${\mathbf{U}}$ indicates that the Markov chain $\\{X_{k}\\}_{k\geq 0}$ is in its stationary regime from the beginning, i.e., $X_{k}\sim{\bm{\mu}}$ for any $k\geq 0$.
Hence,
$\begin{split}{\mathbf{U}}=&~{}\mathbb{E}\left[\left(\sum_{k=0}^{\infty}[G({\mathbf{z}}^{*},X_{k})]\right)\left(\sum_{k=0}^{\infty}[G({\mathbf{z}}^{*},X_{k})]\right)^{T}\right]\\\
&~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}-\mathbb{E}\left[\left(\sum_{k=1}^{\infty}[G({\mathbf{z}}^{*},X_{k})]\right)\left(\sum_{k=1}^{\infty}[G({\mathbf{z}}^{*},X_{k})]\right)^{T}\right]\\\
=&~{}\mathbb{E}\left[G({\mathbf{z}}^{*},X_{0})G({\mathbf{z}}^{*},X_{0})^{T}\right]\\!+\\!\mathbb{E}\left[G({\mathbf{z}}^{*},X_{0})\left(\sum_{k=1}^{\infty}G({\mathbf{z}}^{*},X_{k})\right)^{T}\right]\\\
&~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}+\mathbb{E}\left[\left(\sum_{k=1}^{\infty}G({\mathbf{z}}^{*},X_{k})\right)G({\mathbf{z}}^{*},X_{0})^{T}\right]\\\
=&~{}\text{Cov}(G({\mathbf{z}}^{*},X_{0}),G({\mathbf{z}}^{*},X_{0}))\\\
&~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}+\sum_{k=1}^{\infty}\left[\text{Cov}(G({\mathbf{z}}^{*},X_{0}),G({\mathbf{z}}^{*},X_{k}))+\text{Cov}(G({\mathbf{z}}^{*},X_{k}),G({\mathbf{z}}^{*},X_{0}))\right],\\\
\end{split}$ (29)
where the covariance between $G({\mathbf{z}}^{*},X_{0})$ and
$G({\mathbf{z}}^{*},X_{k})$ for the Markov chain $\\{X_{n}\\}$ in the
stationary regime is
$\text{Cov}(G({\mathbf{z}}^{*},X_{0}),G({\mathbf{z}}^{*},X_{k}))$. Brémaud (2013, Theorem 6.3.7) shows that ${\mathbf{U}}$ is the sampling covariance of the base Markov chain ${\mathbf{P}}$ for the test function $G({\mathbf{z}}^{*},\cdot)$. Moreover, Brémaud (2013, equation (6.34)) states that this sampling covariance ${\mathbf{U}}$ can be rewritten in the following form:
${\mathbf{U}}=\sum_{i=1}^{N-1}\frac{1+\lambda_{i}}{1-\lambda_{i}}{\mathbf{G}}({\mathbf{z}}^{*})^{T}{\mathbf{u}}_{i}{\mathbf{u}}_{i}^{T}{\mathbf{G}}({\mathbf{z}}^{*})=\sum_{i=1}^{N-1}\frac{1+\lambda_{i}}{1-\lambda_{i}}\begin{bmatrix}{\mathbf{H}}^{T}{\mathbf{u}}_{i}{\mathbf{u}}_{i}^{T}{\mathbf{H}}&{\mathbf{H}}^{T}{\mathbf{u}}_{i}{\mathbf{u}}_{i}^{T}\\\
{\mathbf{u}}_{i}{\mathbf{u}}_{i}^{T}{\mathbf{H}}&{\mathbf{u}}_{i}{\mathbf{u}}_{i}^{T}\end{bmatrix}\triangleq\begin{bmatrix}{\mathbf{U}}_{11}&{\mathbf{U}}_{12}\\\
{\mathbf{U}}_{21}&{\mathbf{U}}_{22}\end{bmatrix},$ (30)
where $\\{(\lambda_{i},{\mathbf{u}}_{i})\\}_{i\in{\mathcal{N}}}$ is the
eigenpair of the transition kernel ${\mathbf{P}}$ of the ergodic and time-
reversible base Markov chain. This completes the proof for case (ii).
###### Remark E.1.
For the CLT result (26), we can look further into the asymptotic covariance
matrix ${\mathbf{V}}$ as in (27). For convenience, we denote
${\mathbf{V}}=\begin{bmatrix}{\mathbf{V}}_{11}&{\mathbf{V}}_{12}\\\
{\mathbf{V}}_{21}&{\mathbf{V}}_{22}\end{bmatrix}$ and ${\mathbf{U}}$ in the
form of (30) such that
$\begin{bmatrix}{\mathbf{V}}_{11}&{\mathbf{V}}_{12}\\\
{\mathbf{V}}_{21}&{\mathbf{V}}_{22}\end{bmatrix}\left(\frac{\mathds{1}_{\\{b=1\\}}}{2}{\mathbf{I}}+{\mathbf{J}}(\alpha)^{T}\right)+\left(\frac{\mathds{1}_{\\{b=1\\}}}{2}{\mathbf{I}}+{\mathbf{J}}(\alpha)\right)\begin{bmatrix}{\mathbf{V}}_{11}&{\mathbf{V}}_{12}\\\
{\mathbf{V}}_{21}&{\mathbf{V}}_{22}\end{bmatrix}+{\mathbf{U}}=0.$ (31)
For the SRRW iteration ${\mathbf{x}}_{n}$, from (26) we know that $\gamma_{n}^{-1/2}({\mathbf{x}}_{n}-{\bm{\mu}})\xrightarrow[n\to\infty]{dist.}N({\bm{0}},{\mathbf{V}}_{22})$.
Thus, in this remark, we want to obtain the closed form of
${\mathbf{V}}_{22}$. By algebraic computations of the bottom-right sub-block
matrix, we have
$\displaystyle\left(2\alpha\bm{\mu}{\bm{1}}^{T}\\!-\\!\alpha{\mathbf{P}}^{T}\\!-\\!\left(\alpha+1\\!-\\!\frac{\mathds{1}_{\\{a=1\\}}}{2}\right){\mathbf{I}}\right){\mathbf{V}}_{22}$
$\displaystyle+{\mathbf{V}}_{22}\left(2\alpha\bm{\mu}{\bm{1}}^{T}\\!-\\!\alpha{\mathbf{P}}^{T}\\!-\\!\left(\alpha+1\\!-\\!\frac{\mathds{1}_{\\{a=1\\}}}{2}\right){\mathbf{I}}\right)^{T}$
$\displaystyle+{\mathbf{U}}_{22}=0.$
By using result of the closed form solution to the Lyapunov equation (e.g.,
Lemma G.1) and the eigendecomposition of ${\mathbf{P}}$, we have
${\mathbf{V}}_{22}=\sum_{i=1}^{N-1}\frac{1}{2\alpha(1+\lambda_{i})+2-\mathds{1}_{\\{a=1\\}}}\cdot\frac{1+\lambda_{i}}{1-\lambda_{i}}{\mathbf{u}}_{i}{\mathbf{u}}_{i}^{T}.$
(32)
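The closed form (32) can be cross-checked against the Lyapunov equation above. In the sketch below (an illustrative symmetric base chain, so that ${\bm{\mu}}$ is uniform and the eigenvectors of ${\mathbf{P}}$ are Euclidean-orthonormal, with $a<1$ so the indicator vanishes; all our own choices), the reconstructed ${\mathbf{V}}_{22}$ solves the equation to machine precision.

```python
import numpy as np

N, alpha = 4, 1.5
P = np.array([[0.0, 0.4, 0.3, 0.3],       # symmetric base chain, uniform mu
              [0.4, 0.0, 0.3, 0.3],
              [0.3, 0.3, 0.0, 0.4],
              [0.3, 0.3, 0.4, 0.0]])
mu = np.ones(N) / N
lam, U = np.linalg.eigh(P)                 # eigenpairs of the base chain
keep = np.abs(lam - 1.0) > 1e-9            # drop the Perron eigenvalue 1
J22 = 2*alpha*np.outer(mu, np.ones(N)) - alpha*P.T - (alpha + 1)*np.eye(N)
U22 = sum((1+l)/(1-l) * np.outer(u, u) for l, u in zip(lam[keep], U.T[keep]))
V22 = sum((1+l)/(1-l) / (2*alpha*(1+l) + 2) * np.outer(u, u)
          for l, u in zip(lam[keep], U.T[keep]))
print(np.allclose(J22 @ V22 + V22 @ J22.T + U22, 0))   # (32) solves it: True
```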
### E.2 Case (i): $\beta_{n}=o(\gamma_{n})$
In this part, we mainly focus on the CLT of the SA iteration
${\bm{\theta}}_{n}$ because the SRRW iteration ${\mathbf{x}}_{n}$ is
independent of ${\bm{\theta}}_{n}$ and its CLT result has been shown in Remark
E.1.
#### E.2.1 Decomposition of SA-SRRW iteration (4)
We slightly abuse the math notation and define the function
${\mathbf{h}}({\bm{\theta}},{\mathbf{x}})\triangleq\mathbb{E}_{i\sim{\bm{\pi}}[{\mathbf{x}}]}H({\bm{\theta}},i)=\sum_{i\in{\mathcal{N}}}\pi_{i}[{\mathbf{x}}]H({\bm{\theta}},i)$
such that
${\mathbf{h}}({\bm{\theta}},{\bm{\mu}})\equiv{\mathbf{h}}({\bm{\theta}})$.
Then, we reformulate (25) as
${\bm{\theta}}_{n+1}={\bm{\theta}}_{n}+\beta_{n+1}{\mathbf{h}}({\bm{\theta}}_{n},{\mathbf{x}}_{n})+\beta_{n+1}(H({\bm{\theta}}_{n},X_{n+1})-{\mathbf{h}}({\bm{\theta}}_{n},{\mathbf{x}}_{n})),$
(33a)
${\mathbf{x}}_{n+1}={\mathbf{x}}_{n}+\gamma_{n+1}({\bm{\pi}}[{\mathbf{x}}_{n}]-{\mathbf{x}}_{n})+\gamma_{n+1}({\bm{\delta}}_{X_{n+1}}-{\bm{\pi}}[{\mathbf{x}}_{n}]).$
(33b)
There exist functions
$q_{{\mathbf{x}}}:{\mathcal{N}}\to\mathbb{R}^{N},\tilde{H}_{{\bm{\theta}},{\mathbf{x}}}:{\mathcal{N}}\to\mathbb{R}^{D}$
satisfying the following Poisson equations
${\bm{\delta}}_{i}-{\bm{\pi}}({\mathbf{x}})=q_{{\mathbf{x}}}(i)-({\mathbf{K}}_{{\mathbf{x}}}q_{{\mathbf{x}}})(i)$
(34a)
$H({\bm{\theta}},i)-{\mathbf{h}}({\bm{\theta}},{\mathbf{x}})=\tilde{H}_{{\bm{\theta}},{\mathbf{x}}}(i)-({\mathbf{K}}_{{\mathbf{x}}}\tilde{H}_{{\bm{\theta}},{\mathbf{x}}})(i),$
(34b)
for any ${\bm{\theta}}\in{\mathbb{R}}^{D},{\mathbf{x}}\in\text{Int}(\Sigma)$
and $i\in{\mathcal{N}}$, where
$({\mathbf{K}}_{{\mathbf{x}}}q_{{\mathbf{x}}})(i)\triangleq\sum_{j\in{\mathcal{N}}}K_{ij}[{\mathbf{x}}]q_{{\mathbf{x}}}(j)$,
$({\mathbf{K}}_{{\mathbf{x}}}\tilde{H}_{{\bm{\theta}},{\mathbf{x}}})(i)\triangleq\sum_{j\in{\mathcal{N}}}K_{ij}[{\mathbf{x}}]\tilde{H}_{{\bm{\theta}},{\mathbf{x}}}(j)$.
The existence and explicit form of the solutions
$q_{{\mathbf{x}}},\tilde{H}_{{\bm{\theta}},{\mathbf{x}}}$, which are
continuous w.r.t ${\mathbf{x}},{\bm{\theta}}$, follow the similar steps that
can be found in Section D.1 from (22) to (24). Thus, we can further decompose
(33) into
$\begin{split}{\bm{\theta}}_{n+1}=&{\bm{\theta}}_{n}+\beta_{n+1}{\mathbf{h}}({\bm{\theta}}_{n},{\mathbf{x}}_{n})+\beta_{n+1}\underbrace{(\tilde{H}_{{\bm{\theta}}_{n},{\mathbf{x}}_{n}}(X_{n+1})-({\mathbf{K}}_{{\mathbf{x}}_{n}}\tilde{H}_{{\bm{\theta}}_{n},{\mathbf{x}}_{n}})(X_{n}))}_{M_{n+1}^{({\bm{\theta}})}}\\\
&+\beta_{n+1}\underbrace{(({\mathbf{K}}_{{\mathbf{x}}_{n+1}}\tilde{H}_{{\bm{\theta}}_{n+1},{\mathbf{x}}_{n+1}})(X_{n+1})-({\mathbf{K}}_{{\mathbf{x}}_{n}}\tilde{H}_{{\bm{\theta}}_{n},{\mathbf{x}}_{n}})(X_{n+1}))}_{r^{({\bm{\theta}},1)}_{n}}\\\
&+\beta_{n+1}\underbrace{(({\mathbf{K}}_{{\mathbf{x}}_{n}}\tilde{H}_{{\bm{\theta}}_{n},{\mathbf{x}}_{n}})(X_{n})-({\mathbf{K}}_{{\mathbf{x}}_{n+1}}\tilde{H}_{{\bm{\theta}}_{n+1},{\mathbf{x}}_{n+1}})(X_{n+1}))}_{r^{({\bm{\theta}},2)}_{n}},\end{split}$
(35a)
$\begin{split}{\mathbf{x}}_{n+1}=&{\mathbf{x}}_{n}+\gamma_{n+1}({\bm{\pi}}({\mathbf{x}}_{n})-{\mathbf{x}}_{n})+\gamma_{n+1}\underbrace{(q_{{\mathbf{x}}_{n}}(X_{n+1})-({\mathbf{K}}_{{\mathbf{x}}_{n}}q_{{\mathbf{x}}_{n}})(X_{n}))}_{M_{n+1}^{({\mathbf{x}})}}\\\
&+\gamma_{n+1}\underbrace{(({\mathbf{K}}_{{\mathbf{x}}_{n+1}}q_{{\mathbf{x}}_{n+1}})(X_{n+1})-({\mathbf{K}}_{{\mathbf{x}}_{n}}q_{{\mathbf{x}}_{n}})(X_{n+1}))}_{r^{({\mathbf{x}},1)}_{n}}\\\
&+\gamma_{n+1}\underbrace{(({\mathbf{K}}_{{\mathbf{x}}_{n}}q_{{\mathbf{x}}_{n}})(X_{n})-({\mathbf{K}}_{{\mathbf{x}}_{n+1}}q_{{\mathbf{x}}_{n+1}})(X_{n+1}))}_{r^{({\mathbf{x}},2)}_{n}}.\end{split}$
(35b)
such that
${\bm{\theta}}_{n+1}={\bm{\theta}}_{n}+\beta_{n+1}{\mathbf{h}}({\bm{\theta}}_{n},{\mathbf{x}}_{n})+\beta_{n+1}M_{n+1}^{({\bm{\theta}})}+\beta_{n+1}r^{({\bm{\theta}},1)}_{n}+\beta_{n+1}r^{({\bm{\theta}},2)}_{n},$
(36a)
${\mathbf{x}}_{n+1}={\mathbf{x}}_{n}+\gamma_{n+1}({\bm{\pi}}({\mathbf{x}}_{n})-{\mathbf{x}}_{n})+\gamma_{n+1}M_{n+1}^{({\mathbf{x}})}+\gamma_{n+1}r^{({\mathbf{x}},1)}_{n}+\gamma_{n+1}r^{({\mathbf{x}},2)}_{n}.$
(36b)
We can observe that (36) differs from the expression in Konda & Tsitsiklis
(2004); Mokkadem & Pelletier (2006), which studied the two-timescale SA with
Martingale difference noise. Here, due to the presence of the iterate-
dependent Markovian noise and the application of the Poisson equation
technique, we have additional non-vanishing terms
$r^{({\bm{\theta}},2)}_{n},r^{({\mathbf{x}},2)}_{n}$, which will be further
examined in Lemma E.2. Additionally, when we apply the Poisson equation to the
Martingale difference terms $M_{n+1}^{({\bm{\theta}})}$,
$M_{n+1}^{({\mathbf{x}})}$, we find that there are some covariances that are
also non-vanishing as in Lemma E.1. We will mention this again when we obtain
those covariances. These extra non-zero noise terms make our analysis distinct
from the previous ones since the key assumption (A4) in Mokkadem & Pelletier
(2006) is not satisfied. We demonstrate that the long-term average performance
of these terms can be managed so that they do not affect the final CLT result.
Analysis of Terms $M_{n+1}^{({\bm{\theta}})},M_{n+1}^{({\mathbf{x}})}$
Consider the filtration ${\mathcal{F}}_{n}\triangleq\sigma({\bm{\theta}}_{0},{\mathbf{x}}_{0},X_{0},\cdots,{\bm{\theta}}_{n},{\mathbf{x}}_{n},X_{n})$; it is evident that $M_{n+1}^{({\bm{\theta}})},M_{n+1}^{({\mathbf{x}})}$ are Martingale difference sequences adapted to ${\mathcal{F}}_{n}$. Then, we have
$\begin{split}&\mathbb{E}\left[\left.M^{({\mathbf{x}})}_{n+1}(M^{({\mathbf{x}})}_{n+1})^{T}\right|{\mathcal{F}}_{n}\right]\\\
=&~{}\mathbb{E}[q_{{\mathbf{x}}_{n}}(X_{n+1})q_{{\mathbf{x}}_{n}}(X_{n+1})^{T}|{\mathcal{F}}_{n}]+({\mathbf{K}}_{{\mathbf{x}}_{n}}q_{{\mathbf{x}}_{n}})(X_{n})\left(({\mathbf{K}}_{{\mathbf{x}}_{n}}q_{{\mathbf{x}}_{n}})(X_{n})\right)^{T}\\\
&-\mathbb{E}[q_{{\mathbf{x}}_{n}}(X_{n+1})|{\mathcal{F}}_{n}]\left(({\mathbf{K}}_{{\mathbf{x}}_{n}}q_{{\mathbf{x}}_{n}})(X_{n})\right)^{T}-({\mathbf{K}}_{{\mathbf{x}}_{n}}q_{{\mathbf{x}}_{n}})(X_{n})\mathbb{E}[q_{{\mathbf{x}}_{n}}(X_{n+1})^{T}|{\mathcal{F}}_{n}]\\\
=&~{}\mathbb{E}[q_{{\mathbf{x}}_{n}}(X_{n+1})q_{{\mathbf{x}}_{n}}(X_{n+1})^{T}|{\mathcal{F}}_{n}]-({\mathbf{K}}_{{\mathbf{x}}_{n}}q_{{\mathbf{x}}_{n}})(X_{n})\left(({\mathbf{K}}_{{\mathbf{x}}_{n}}q_{{\mathbf{x}}_{n}})(X_{n})\right)^{T}.\end{split}$
(37)
Similarly, we have
$\begin{split}&\mathbb{E}\left[\left.M^{({\bm{\theta}})}_{n+1}(M^{({\bm{\theta}})}_{n+1})^{T}\right|{\mathcal{F}}_{n}\right]\\\
=&~{}\mathbb{E}[\tilde{H}_{{\bm{\theta}}_{n},{\mathbf{x}}_{n}}(X_{n+1})\tilde{H}_{{\bm{\theta}}_{n},{\mathbf{x}}_{n}}(X_{n+1})^{T}|{\mathcal{F}}_{n}]-({\mathbf{K}}_{{\mathbf{x}}_{n}}\tilde{H}_{{\bm{\theta}}_{n},{\mathbf{x}}_{n}})(X_{n})\left(({\mathbf{K}}_{{\mathbf{x}}_{n}}\tilde{H}_{{\bm{\theta}}_{n},{\mathbf{x}}_{n}})(X_{n})\right)^{T},\end{split}$
(38)
and
$\begin{split}&\mathbb{E}\left[\left.M^{({\mathbf{x}})}_{n+1}(M^{({\bm{\theta}})}_{n+1})^{T}\right|{\mathcal{F}}_{n}\right]\\\
=&~{}\mathbb{E}[q_{{\mathbf{x}}_{n}}(X_{n+1})\tilde{H}_{{\bm{\theta}}_{n},{\mathbf{x}}_{n}}(X_{n+1})^{T}|{\mathcal{F}}_{n}]-({\mathbf{K}}_{{\mathbf{x}}_{n}}q_{{\mathbf{x}}_{n}})(X_{n})\left(({\mathbf{K}}_{{\mathbf{x}}_{n}}\tilde{H}_{{\bm{\theta}}_{n},{\mathbf{x}}_{n}})(X_{n})\right)^{T}.\end{split}$
We now focus on
$\mathbb{E}\left[\left.M^{({\mathbf{x}})}_{n+1}(M^{({\mathbf{x}})}_{n+1})^{T}\right|{\mathcal{F}}_{n}\right]$.
Denote by
$V_{1}({\mathbf{x}},i)\triangleq\sum_{j\in{\mathcal{N}}}{\mathbf{K}}_{i,j}[{\mathbf{x}}]q_{{\mathbf{x}}}(j)q_{{\mathbf{x}}}(j)^{T}-({\mathbf{K}}_{{\mathbf{x}}}q_{{\mathbf{x}}})(i)\left(({\mathbf{K}}_{{\mathbf{x}}}q_{{\mathbf{x}}})(i)\right)^{T},$
(39)
and let its expectation w.r.t. the stationary distribution ${\bm{\pi}}[{\mathbf{x}}]$ be $v_{1}({\mathbf{x}})\triangleq\mathbb{E}_{i\sim{\bm{\pi}}[{\mathbf{x}}]}[V_{1}({\mathbf{x}},i)]$; then we can construct another Poisson equation, i.e.,
$\begin{split}&\mathbb{E}\left[\left.M^{({\mathbf{x}})}_{n+1}(M^{({\mathbf{x}})}_{n+1})^{T}\right|{\mathcal{F}}_{n}\right]-\sum_{X_{n}\in{\mathcal{N}}}\pi_{X_{n}}({\mathbf{x}}_{n})\mathbb{E}\left[\left.M^{({\mathbf{x}})}_{n+1}(M^{({\mathbf{x}})}_{n+1})^{T}\right|{\mathcal{F}}_{n}\right]\\\
=&~{}V_{1}({\mathbf{x}}_{n},X_{n+1})-v_{1}({\mathbf{x}}_{n})\\\
=&~{}\varphi^{(1)}_{{\mathbf{x}}}(X_{n+1})-({\mathbf{K}}_{{\mathbf{x}}_{n}}\varphi^{(1)}_{{\mathbf{x}}_{n}})(X_{n+1}),\end{split}$
for some matrix-valued function
$\varphi^{(1)}_{{\mathbf{x}}}:{\mathcal{N}}\to\mathbb{R}^{N\times N}$. Since
$q_{{\mathbf{x}}}$ and ${\mathbf{K}}[{\mathbf{x}}]$ are continuous in
${\mathbf{x}}$, functions $V_{1},v_{1}$ are also continuous in ${\mathbf{x}}$.
Then, we can decompose (39) into
$\begin{split}V_{1}({\mathbf{x}}_{n},X_{n+1})=&\underbrace{v_{1}({\bm{\mu}})}_{{\mathbf{U}}_{22}}+\underbrace{v_{1}({\mathbf{x}}_{n})-v_{1}({\bm{\mu}})}_{{\mathbf{D}}^{(1)}_{n}}+\underbrace{\varphi^{(1)}_{{\mathbf{x}}_{n}}(X_{n+1})-({\mathbf{K}}_{{\mathbf{x}}_{n}}\varphi^{(1)}_{{\mathbf{x}}_{n}})(X_{n})}_{{\mathbf{J}}^{(1,a)}_{n}}\\\
&+\underbrace{({\mathbf{K}}_{{\mathbf{x}}_{n}}\varphi^{(1)}_{{\mathbf{x}}_{n}})(X_{n})-({\mathbf{K}}_{{\mathbf{x}}_{n}}\varphi^{(1)}_{{\mathbf{x}}_{n}})(X_{n+1})}_{{\mathbf{J}}^{(1,b)}_{n}}.\end{split}$
(40)
Thus, we have
$\mathbb{E}[M^{({\mathbf{x}})}_{n+1}(M^{({\mathbf{x}})}_{n+1})^{T}|{\mathcal{F}}_{n}]={\mathbf{U}}_{22}+{\mathbf{D}}_{n}^{(1)}+{\mathbf{J}}_{n}^{(1)},$
(41)
where
${\mathbf{J}}_{n}^{(1)}={\mathbf{J}}_{n}^{(1,a)}+{\mathbf{J}}_{n}^{(1,b)}$.
Following the similar steps above, we can decompose
$\mathbb{E}\left[\left.M^{({\mathbf{x}})}_{n+1}(M^{({\bm{\theta}})}_{n+1})^{T}\right|{\mathcal{F}}_{n}\right]$
and
$\mathbb{E}\left[\left.M^{({\bm{\theta}})}_{n+1}(M^{({\bm{\theta}})}_{n+1})^{T}\right|{\mathcal{F}}_{n}\right]$
as
$\mathbb{E}\left[\left.M^{({\mathbf{x}})}_{n+1}(M^{({\bm{\theta}})}_{n+1})^{T}\right|{\mathcal{F}}_{n}\right]={\mathbf{U}}_{21}+{\mathbf{D}}_{n}^{(2)}+{\mathbf{J}}_{n}^{(2)},$
(42a)
$\mathbb{E}\left[\left.M^{({\bm{\theta}})}_{n+1}(M^{({\bm{\theta}})}_{n+1})^{T}\right|{\mathcal{F}}_{n}\right]={\mathbf{U}}_{11}+{\mathbf{D}}_{n}^{(3)}+{\mathbf{J}}_{n}^{(3)}.$
(42b)
where ${\mathbf{J}}_{n}^{(2)}={\mathbf{J}}_{n}^{(2,a)}+{\mathbf{J}}_{n}^{(2,b)}$ and ${\mathbf{J}}_{n}^{(3)}={\mathbf{J}}_{n}^{(3,a)}+{\mathbf{J}}_{n}^{(3,b)}$. Here, we note that the matrices ${\mathbf{J}}_{n}^{(i)}$ for $i=1,2,3$ do not appear in the existing CLT analyses of two-timescale SA with Martingale difference noise. In addition, ${\mathbf{U}}_{11}$, ${\mathbf{U}}_{21}$ and ${\mathbf{U}}_{22}$ inherently include the information of the underlying Markov chain (through its eigenpairs $(\lambda_{i},{\mathbf{u}}_{i})$), which is an extension of the previous works (Konda & Tsitsiklis, 2004; Mokkadem & Pelletier, 2006).
###### Lemma E.1.
For $M_{n+1}^{({\bm{\theta}})},M_{n+1}^{({\mathbf{x}})}$ defined in (35) and
their decomposition in (41) and (42), we have
${\mathbf{U}}_{22}=\sum_{i=1}^{N-1}\frac{1+\lambda_{i}}{1-\lambda_{i}}{\mathbf{u}}_{i}{\mathbf{u}}_{i}^{T},\quad{\mathbf{U}}_{21}=\sum_{i=1}^{N-1}\frac{1+\lambda_{i}}{1-\lambda_{i}}{\mathbf{u}}_{i}{\mathbf{u}}_{i}^{T}{\mathbf{H}},\quad{\mathbf{U}}_{11}=\sum_{i=1}^{N-1}\frac{1+\lambda_{i}}{1-\lambda_{i}}{\mathbf{H}}^{T}{\mathbf{u}}_{i}{\mathbf{u}}_{i}^{T}{\mathbf{H}},$
(43a)
$\lim_{n\to\infty}{\mathbf{D}}_{n}^{(i)}=0~{}~{}\text{a.s.}~{}~{}~{}~{}\text{for}~{}~{}~{}~{}i=1,2,3,$
(43b)
$\lim_{n\to\infty}\gamma_{n}\mathbb{E}\left[\left\|\sum_{k=1}^{n}{\mathbf{J}}^{(i)}_{k}\right\|\right]=0,\quad\text{for}~{}~{}~{}~{}i=1,2,3.$
(43c)
###### Proof.
We now provide the properties of the four terms inside (41) as an example.
Note that
$\begin{split}{\mathbf{U}}_{22}=&~{}\mathbb{E}_{i\sim{\bm{\mu}}}[V_{1}({\bm{\mu}},i)]=\sum_{i\in{\mathcal{N}}}\mu_{i}\left[\sum_{j\in{\mathcal{N}}}{\mathbf{P}}(i,j)q_{{\bm{\mu}}}(j)q_{{\bm{\mu}}}(j)^{T}-({\mathbf{P}}q_{{\bm{\mu}}})(i)\left(({\mathbf{P}}q_{{\bm{\mu}}})(i)\right)^{T}\right]\\\
=&~{}\sum_{j\in{\mathcal{N}}}\mu_{j}q_{{\bm{\mu}}}(j)q_{{\bm{\mu}}}(j)^{T}-({\mathbf{P}}q_{{\bm{\mu}}})(j)\left(({\mathbf{P}}q_{{\bm{\mu}}})(j)\right)^{T}.\end{split}$
We can see that it has exactly the same structure as the matrix ${\bm{U}}$ in (27). Following similar steps to those used in deducing the explicit form of ${\bm{U}}$ from (28) to (30), we get
${\mathbf{U}}_{22}=\sum_{i=1}^{N-1}\frac{1+\lambda_{i}}{1-\lambda_{i}}{\mathbf{u}}_{i}{\mathbf{u}}_{i}^{T}.$
(44)
By the almost sure convergence result ${\mathbf{x}}_{n}\to{\bm{\mu}}$ in Lemma
3.1, $v_{1}({\mathbf{x}}_{n})\to v_{1}({\bm{\mu}})$ a.s. such that
$\lim_{n\to\infty}{\mathbf{D}}_{n}^{(1)}=0$ a.s.
We next prove that
$\lim_{n\to\infty}\gamma_{n}\mathbb{E}\left[\left\|\sum_{k=1}^{n}{\mathbf{J}}^{(1,a)}_{k}\right\|\right]=0$
and
$\lim_{n\to\infty}\gamma_{n}\mathbb{E}\left[\left\|\sum_{k=1}^{n}{\mathbf{J}}^{(1,b)}_{k}\right\|\right]=0$.
Since $\\{{\mathbf{J}}^{(1,a)}_{n}\\}$ is a Martingale difference sequence
adapted to ${\mathcal{F}}_{n}$, with the Burkholder inequality in Lemma G.2
and $p=1$, we show that
$\mathbb{E}\left[\left\|\sum_{k=1}^{n}{\mathbf{J}}_{k}^{(1,a)}\right\|\right]\leq
C_{1}\mathbb{E}\left[\sqrt{\left(\sum_{k=1}^{n}\left\|{\mathbf{J}}_{k}^{(1,a)}\right\|^{2}\right)}\right].$
(45)
By assumption A4, ${\mathbf{x}}_{n}$ always stays within some compact set $\Omega$ such that $\sup_{n}\|{\mathbf{J}}_{n}^{(1,a)}\|\leq C_{\Omega}<\infty$, and for a given trajectory $\omega$ of ${\mathbf{x}}_{n}(\omega)$,
$\gamma_{n}C_{1}\sqrt{\left(\sum_{k=1}^{n}\left\|{\mathbf{J}}_{k}^{(1,a)}\right\|^{2}\right)}\leq C_{1}C_{\Omega}\gamma_{n}\sqrt{n},$ (46)
and the last term decreases to zero in $n$ since $a>1/2$.
For ${\mathbf{J}}_{n}^{(1,b)}$, we use Abel transformation and obtain
$\begin{split}\sum_{k=1}^{n}{\mathbf{J}}_{k}^{(1,b)}=&\sum_{k=1}^{n}(({\mathbf{K}}_{{\mathbf{x}}_{k}}\varphi^{(1)}_{{\mathbf{x}}_{k}})(X_{k-1})-({\mathbf{K}}_{{\mathbf{x}}_{k-1}}\varphi^{(1)}_{{\mathbf{x}}_{k-1}})(X_{k-1}))\\\
&+({\mathbf{K}}_{{\mathbf{x}}_{0}}\varphi^{(1)}_{{\mathbf{x}}_{0}})(X_{0})-({\mathbf{K}}_{{\mathbf{x}}_{n}}\varphi^{(1)}_{{\mathbf{x}}_{n}})(X_{n}).\end{split}$
Since $({\mathbf{K}}_{{\mathbf{x}}}\varphi^{(1)}_{{\mathbf{x}}})(X)$ is continuous in ${\mathbf{x}}$, for ${\mathbf{x}}_{n}$ within a compact set $\Omega$ (assumption A4), it is locally Lipschitz with a constant $L_{\Omega}$ such that
$\|({\mathbf{K}}_{{\mathbf{x}}_{k}}\varphi^{(1)}_{{\mathbf{x}}_{k}})(X_{k-1})-({\mathbf{K}}_{{\mathbf{x}}_{k-1}}\varphi^{(1)}_{{\mathbf{x}}_{k-1}})(X_{k-1})\|\leq L_{\Omega}\|{\mathbf{x}}_{k}-{\mathbf{x}}_{k-1}\|\leq 2L_{\Omega}\gamma_{k},$
where the last inequality arises from (4b), i.e.,
${\mathbf{x}}_{k}-{\mathbf{x}}_{k-1}=\gamma_{k}({\bm{\delta}}_{X_{k}}-{\mathbf{x}}_{k-1})$
and $\|{\bm{\delta}}_{X_{k}}-{\mathbf{x}}_{k-1}\|\leq 2$ because
${\mathbf{x}}_{n}\in\text{Int}(\Sigma)$. Also,
$\|({\mathbf{K}}_{{\mathbf{x}}_{0}}\varphi^{(1)}_{{\mathbf{x}}_{0}})(X_{0})\|+\|({\mathbf{K}}_{{\mathbf{x}}_{n}}\varphi^{(1)}_{{\mathbf{x}}_{n}})(X_{n})\|$
are upper-bounded by some positive constant $C_{\Omega}^{\prime}$. This
implies that
$\left\|\sum_{k=1}^{n}{\mathbf{J}}_{k}^{(1,b)}\right\|\leq
C_{\Omega}^{\prime}+2L_{\Omega}\sum_{k=1}^{n}\gamma_{k}.$
Note that
$\gamma_{n}\left\|\sum_{k=1}^{n}{\mathbf{J}}_{k}^{(1,b)}\right\|\leq\gamma_{n}C_{\Omega}^{\prime}+2L_{\Omega}\gamma_{n}\sum_{k=1}^{n}\gamma_{k}\leq\gamma_{n}C_{\Omega}^{\prime}+\frac{2L_{\Omega}}{1-a}n^{1-2a},$
(47)
where the last inequality uses $\sum_{k=1}^{n}\gamma_{k}\leq\frac{1}{1-a}n^{1-a}$ for $a<1$ (for $a=1$, $\sum_{k=1}^{n}\gamma_{k}=O(\log n)$ and the bound becomes $O(n^{-1}\log n)$). We observe that the last term in (47) decreases to zero in $n$ because $a>1/2$.
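As a quick numerical illustration that both bounding sequences in (46) and (47) vanish, one can print $\gamma_{n}\sqrt{n}$ and $\gamma_{n}\sum_{k\leq n}\gamma_{k}$ for growing $n$ (the exponent $a=0.75$ is an arbitrary admissible choice):

```python
import numpy as np

a = 0.75                                           # any a in (1/2, 1)
for n in (10**2, 10**4, 10**6):
    g = (np.arange(1, n + 1) + 1.0) ** (-a)        # gamma_k = (k+1)^{-a}
    print(n, g[-1] * np.sqrt(n), g[-1] * g.sum())  # both tend to 0
```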
Note that ${\mathbf{J}}_{k}^{(1)}={\mathbf{J}}_{k}^{(1,a)}+{\mathbf{J}}_{k}^{(1,b)}$; by the triangle inequality we have
$\begin{split}\gamma_{n}\mathbb{E}\left[\left\|\sum_{k=1}^{n}{\mathbf{J}}_{k}^{(1)}\right\|\right]&\leq\gamma_{n}\mathbb{E}\left[\left\|\sum_{k=1}^{n}{\mathbf{J}}_{k}^{(1,a)}\right\|\right]+\gamma_{n}\mathbb{E}\left[\left\|\sum_{k=1}^{n}{\mathbf{J}}_{k}^{(1,b)}\right\|\right]\\\
&\leq\gamma_{n}C_{1}\mathbb{E}\left[\sqrt{\left(\sum_{k=1}^{n}\left\|{\mathbf{J}}_{k}^{(1,a)}\right\|^{2}\right)}\right]+\gamma_{n}\mathbb{E}\left[\left\|\sum_{k=1}^{n}{\mathbf{J}}_{k}^{(1,b)}\right\|\right]\\\
&=\mathbb{E}\left[\gamma_{n}C_{1}\sqrt{\left(\sum_{k=1}^{n}\left\|{\mathbf{J}}_{k}^{(1,a)}\right\|^{2}\right)}+\gamma_{n}\left\|\sum_{k=1}^{n}{\mathbf{J}}_{k}^{(1,b)}\right\|\right],\end{split}$
(48)
where the second inequality comes from (45). By (46) and (47) we know that both terms in the last line of (48) are uniformly bounded over time $n$ by constants that depend on the set $\Omega$. Therefore, by the dominated convergence theorem, taking the limit of the last line of (48) gives
$\begin{split}&\lim_{n\to\infty}\mathbb{E}\left[\gamma_{n}C_{1}\sqrt{\left(\sum_{k=1}^{n}\left\|{\mathbf{J}}_{k}^{(1,a)}\right\|^{2}\right)}\\!+\\!\gamma_{n}\left\|\sum_{k=1}^{n}{\mathbf{J}}_{k}^{(1,b)}\right\|\right]\\\
=&~{}\mathbb{E}\left[\lim_{n\to\infty}\gamma_{n}C_{1}\sqrt{\left(\sum_{k=1}^{n}\left\|{\mathbf{J}}_{k}^{(1,a)}\right\|^{2}\right)}\\!+\\!\gamma_{n}\left\|\sum_{k=1}^{n}{\mathbf{J}}_{k}^{(1,b)}\right\|\right]\\!=\\!0.\end{split}$
Therefore, we have
$\lim_{n\to\infty}\gamma_{n}\mathbb{E}\left[\left\|\sum_{k=1}^{n}{\mathbf{J}}_{k}^{(1)}\right\|\right]=0.$
In sum, in terms of $\mathbb{E}[M^{({\mathbf{x}})}_{n+1}(M^{({\mathbf{x}})}_{n+1})^{T}|{\mathcal{F}}_{n}]$ in (41), we have ${\mathbf{U}}_{22}$ in (44), $\lim_{n\to\infty}{\mathbf{D}}_{n}^{(1)}=0$ a.s., and $\lim_{n\to\infty}\gamma_{n}\mathbb{E}\left[\left\|\sum_{k=1}^{n}{\mathbf{J}}^{(1)}_{k}\right\|\right]=0$.
We can apply the same steps as above for the other two terms $i=2,3$ in (42)
and obtain the results. ∎
Analysis of Terms
$r^{({\bm{\theta}},1)}_{n},r^{({\bm{\theta}},2)}_{n},r^{({\mathbf{x}},1)}_{n},r^{({\mathbf{x}},2)}_{n}$
###### Lemma E.2.
For
$r^{({\bm{\theta}},1)}_{n},r^{({\bm{\theta}},2)}_{n},r^{({\mathbf{x}},1)}_{n},r^{({\mathbf{x}},2)}_{n}$
defined in (35), we have the following results:
$\|r^{({\bm{\theta}},1)}_{n}\|=O(\gamma_{n})=o(\sqrt{\beta_{n}}),\quad\sqrt{\gamma_{n}}\left\|\sum_{k=1}^{n}r^{({\bm{\theta}},2)}_{k}\right\|=O(\sqrt{\gamma_{n}})=o(1).$
(49a)
$\|r^{({\mathbf{x}},1)}_{n}\|=O(\gamma_{n})=o(\sqrt{\beta_{n}}),\quad\sqrt{\gamma_{n}}\left\|\sum_{k=1}^{n}r^{({\mathbf{x}},2)}_{k}\right\|=O(\sqrt{\gamma_{n}})=o(1).$
(49b)
###### Proof.
For $r^{({\bm{\theta}},1)}_{n}$, note that
$\begin{split}r^{({\bm{\theta}},1)}_{n}=&~{}({\mathbf{K}}_{{\mathbf{x}}_{n+1}}\tilde{H}_{{\bm{\theta}}_{n+1},{\mathbf{x}}_{n+1}})(X_{n+1})-({\mathbf{K}}_{{\mathbf{x}}_{n}}\tilde{H}_{{\bm{\theta}}_{n},{\mathbf{x}}_{n}})(X_{n+1})\\\
=&~{}\sum_{j\in{\mathcal{N}}}\left({\mathbf{K}}_{X_{n},j}[{\mathbf{x}}_{n+1}]\tilde{H}_{{\bm{\theta}}_{n+1},{\mathbf{x}}_{n+1}}(j)-{\mathbf{K}}_{X_{n},j}[{\mathbf{x}}_{n}]\tilde{H}_{{\bm{\theta}}_{n},{\mathbf{x}}_{n}}(j)\right)\\\
\leq&~{}\sum_{j\in{\mathcal{N}}}L_{{\mathcal{C}}}(\|{\bm{\theta}}_{n+1}-{\bm{\theta}}_{n}\|+\|{\mathbf{x}}_{n+1}-{\mathbf{x}}_{n}\|)\\\
\leq&NL_{{\mathcal{C}}}(C_{{\mathcal{C}}}\beta_{n+1}+2\gamma_{n+1})\end{split}$
(50)
where the second-to-last inequality holds because ${\mathbf{K}}_{i,j}[{\mathbf{x}}]\tilde{H}_{{\bm{\theta}},{\mathbf{x}}}(j)$ is locally Lipschitz in $({\bm{\theta}},{\mathbf{x}})$ with constant $L_{{\mathcal{C}}}$ on compact sets, which stems from the continuous functions ${\mathbf{K}}[{\mathbf{x}}]$ and $\tilde{H}_{{\bm{\theta}},{\mathbf{x}}}$. The last inequality follows from the update rules (4) and $({\bm{\theta}}_{n},{\mathbf{x}}_{n})\in\Omega$ for some compact subset $\Omega$ by assumption A4. Then, we have
$\|r^{({\bm{\theta}},1)}_{n}\|=O(\gamma_{n})=o(\sqrt{\beta_{n}})$ because of
$a>1/2\geq b/2$ by assumption A2.
We let
$\nu_{n}\triangleq({\mathbf{K}}_{{\mathbf{x}}_{n}}\tilde{H}_{{\bm{\theta}}_{n},{\mathbf{x}}_{n}})(X_{n})$
such that $r^{({\bm{\theta}},2)}_{n}=\nu_{n}-\nu_{n+1}$. Note that
$\sum_{k=1}^{n}r^{({\bm{\theta}},2)}_{k}=\nu_{1}-\nu_{n+1}$, and by assumption
A4, $\|\nu_{n}\|$ is upper bounded by a constant dependent on the compact set,
which leads to
$\sqrt{\gamma_{n}}\left\|\sum_{k=1}^{n}r^{({\bm{\theta}},2)}_{k}\right\|=\sqrt{\gamma_{n}}\|\nu_{1}-\nu_{n+1}\|=O(\sqrt{\gamma_{n}})=o(1).$
Similarly, we can also obtain
$\|r^{({\mathbf{x}},1)}_{n}\|=o(\sqrt{\beta_{n}})$ and
$\sqrt{\gamma_{n}}\left\|\sum_{k=1}^{n}r^{({\mathbf{x}},2)}_{k}\right\|=O(\sqrt{\gamma_{n}})=o(1)$.
∎
#### E.2.2 Effect of SRRW Iteration on SA Iteration
In view of the almost sure convergence results in Lemma 3.1 and Lemma 3.2, for
large enough $n$ so that both iterations ${\bm{\theta}}_{n},{\mathbf{x}}_{n}$
are close to the equilibrium $({\bm{\theta}}^{*},{\bm{\mu}})$, we can apply
the Taylor expansion to functions ${\mathbf{h}}({\bm{\theta}},{\mathbf{x}})$
and ${\bm{\pi}}[{\mathbf{x}}]-{\mathbf{x}}$ in (36) at the point
$({\bm{\theta}}^{*},{\bm{\mu}})$, which results in
${\mathbf{h}}({\bm{\theta}},{\mathbf{x}})={\mathbf{h}}({\bm{\theta}}^{*},{\bm{\mu}})+\nabla_{{\bm{\theta}}}{\mathbf{h}}({\bm{\theta}}^{*},{\bm{\mu}})({\bm{\theta}}-{\bm{\theta}}^{*})+\nabla_{{\mathbf{x}}}{\mathbf{h}}({\bm{\theta}}^{*},{\bm{\mu}})({\mathbf{x}}-{\bm{\mu}})+O(\|{\bm{\theta}}-{\bm{\theta}}^{*}\|^{2}+\|{\mathbf{x}}-{\bm{\mu}}\|^{2}),$
(51a)
${\bm{\pi}}[{\mathbf{x}}]-{\mathbf{x}}={\bm{\pi}}[{\bm{\mu}}]-{\bm{\mu}}+\nabla_{{\mathbf{x}}}({\bm{\pi}}({\mathbf{x}})-{\mathbf{x}})|_{{\mathbf{x}}={\bm{\mu}}}({\mathbf{x}}-{\bm{\mu}})+O(\|{\mathbf{x}}-{\bm{\mu}}\|^{2}).$
(51b)
With matrix ${\mathbf{J}}(\alpha)$, we have the following:
$\begin{split}&{\mathbf{J}}_{11}=\nabla_{{\bm{\theta}}}{\mathbf{h}}({\bm{\theta}}^{*},{\bm{\mu}})=\nabla{\mathbf{h}}({\bm{\theta}}^{*}),\\\
&{\mathbf{J}}_{12}(\alpha)=\nabla_{{\mathbf{x}}}{\mathbf{h}}({\bm{\theta}}^{*},{\bm{\mu}})=-\alpha{\mathbf{H}}^{T}({\mathbf{P}}^{T}+{\mathbf{I}}),\\\
&{\mathbf{J}}_{22}(\alpha)=\nabla_{{\mathbf{x}}}({\bm{\pi}}({\mathbf{x}})-{\mathbf{x}})|_{{\mathbf{x}}={\bm{\mu}}}=2\alpha\bm{\mu}{\bm{1}}^{T}-\alpha{\mathbf{P}}^{T}-(\alpha+1){\mathbf{I}}.\end{split}$
(52)
Then, (36) becomes
${\bm{\theta}}_{n+1}={\bm{\theta}}_{n}+\beta_{n+1}({\mathbf{J}}_{11}({\bm{\theta}}_{n}-{\bm{\theta}}^{*})+{\mathbf{J}}_{12}(\alpha)({\mathbf{x}}_{n}-{\bm{\mu}})+r^{({\bm{\theta}},1)}_{n}+r^{({\bm{\theta}},2)}_{n}+M^{({\bm{\theta}})}_{n+1}+\eta_{n}^{({\bm{\theta}})}),$
(53a)
${\mathbf{x}}_{n+1}={\mathbf{x}}_{n}+\gamma_{n+1}({\mathbf{J}}_{22}(\alpha)({\mathbf{x}}_{n}-{\bm{\mu}})+r^{({\mathbf{x}},1)}_{n}+r^{({\mathbf{x}},2)}_{n}+M^{({\mathbf{x}})}_{n+1}+\eta_{n}^{({\mathbf{x}})}),$
(53b)
where $\eta_{n}^{({\bm{\theta}})}=O(\|{\mathbf{x}}_{n}-{\bm{\mu}}\|^{2}+\|{\bm{\theta}}_{n}-{\bm{\theta}}^{*}\|^{2})$ and $\eta_{n}^{({\mathbf{x}})}=O(\|{\mathbf{x}}_{n}-{\bm{\mu}}\|^{2})$.
Then, inspired by Mokkadem & Pelletier (2006), we decompose iterates
$\\{{\mathbf{x}}_{n}\\}$ and $\\{{\bm{\theta}}_{n}\\}$ into
${\mathbf{x}}_{n}=L^{({\bm{x}})}_{n}+\Delta^{({\bm{x}})}_{n}$ and
${\bm{\theta}}_{n}=L^{({\bm{\theta}})}_{n}+R^{({\bm{\theta}})}_{n}+\Delta^{({\bm{\theta}})}_{n}$.
Rewriting (53b) gives
${\mathbf{x}}_{n}-{\bm{\mu}}=\gamma_{n+1}^{-1}{\mathbf{J}}_{22}(\alpha)^{-1}({\mathbf{x}}_{n+1}-{\mathbf{x}}_{n})-{\mathbf{J}}_{22}(\alpha)^{-1}(r^{({\mathbf{x}},1)}_{n}+r^{({\mathbf{x}},2)}_{n}+M^{({\mathbf{x}})}_{n+1}+\eta_{n}^{({\mathbf{x}})}),$
and substituting the above equation back in (53a) gives
$\begin{split}{\bm{\theta}}_{n+1}&-{\bm{\theta}}^{*}=~{}{\bm{\theta}}_{n}-{\bm{\theta}}^{*}+\beta_{n+1}\bigg{(}{\mathbf{J}}_{11}({\bm{\theta}}_{n}-{\bm{\theta}}^{*})+\gamma_{n+1}^{-1}{\mathbf{J}}_{12}(\alpha){\mathbf{J}}_{22}(\alpha)^{-1}({\mathbf{x}}_{n+1}-{\mathbf{x}}_{n})\\\
&-{\mathbf{J}}_{12}(\alpha){\mathbf{J}}_{22}(\alpha)^{-1}(r^{({\mathbf{x}},1)}_{n}+r^{({\mathbf{x}},2)}_{n}+M^{({\mathbf{x}})}_{n+1}+\eta_{n}^{({\mathbf{x}})})+r^{({\bm{\theta}},1)}_{n}+r^{({\bm{\theta}},2)}_{n}+M^{({\bm{\theta}})}_{n+1}+\eta_{n}^{({\bm{\theta}})}\bigg{)}\\\
=&~{}({\mathbf{I}}+\beta_{n+1}{\mathbf{J}}_{11})({\bm{\theta}}_{n}-{\bm{\theta}}^{*})+[\beta_{n+1}\gamma_{n+1}^{-1}{\mathbf{J}}_{12}(\alpha){\mathbf{J}}_{22}(\alpha)^{-1}({\mathbf{x}}_{n+1}-{\mathbf{x}}_{n})]\\\
&+\beta_{n+1}(M^{({\bm{\theta}})}_{n+1}-{\mathbf{J}}_{12}(\alpha){\mathbf{J}}_{22}(\alpha)^{-1}M^{({\mathbf{x}})}_{n+1})\\\
&+\beta_{n+1}(r^{({\bm{\theta}},1)}_{n}+r^{({\bm{\theta}},2)}_{n}+\eta_{n}^{({\bm{\theta}})}-{\mathbf{J}}_{12}(\alpha){\mathbf{J}}_{22}(\alpha)^{-1}(r^{({\mathbf{x}},1)}_{n}+r^{({\mathbf{x}},2)}_{n}+\eta_{n}^{({\mathbf{x}})})),\end{split}$
(54)
From (54) we can see the iteration $\\{{\bm{\theta}}_{n}\\}$ implicitly embeds
the recursions of three sequences
* •
$\beta_{n+1}\gamma_{n+1}^{-1}{\mathbf{J}}_{12}(\alpha){\mathbf{J}}_{22}(\alpha)^{-1}({\mathbf{x}}_{n+1}-{\mathbf{x}}_{n})$;
* •
$\beta_{n+1}(M^{({\bm{\theta}})}_{n+1}-{\mathbf{J}}_{12}(\alpha){\mathbf{J}}_{22}(\alpha)^{-1}M^{({\mathbf{x}})}_{n+1})$;
* •
$\beta_{n+1}(r^{({\bm{\theta}},1)}_{n}+r^{({\bm{\theta}},2)}_{n}+\eta_{n}^{({\bm{\theta}})}-{\mathbf{J}}_{12}(\alpha){\mathbf{J}}_{22}(\alpha)^{-1}(r^{({\mathbf{x}},1)}_{n}+r^{({\mathbf{x}},2)}_{n}+\eta_{n}^{({\mathbf{x}})}))$.
Let $u_{n}\triangleq\sum_{k=1}^{n}\beta_{k}$ and
$s_{n}\triangleq\sum_{k=1}^{n}\gamma_{k}$. Below we define two iterations:
$\begin{split}L_{n}^{({\bm{\theta}})}=e^{\beta_{n}{\mathbf{J}}_{11}}&L_{n-1}^{({\bm{\theta}})}+\beta_{n}(M^{({\bm{\theta}})}_{n}-{\mathbf{J}}_{12}(\alpha){\mathbf{J}}_{22}(\alpha)^{-1}M^{({\mathbf{x}})}_{n})\\\
&=\sum_{k=1}^{n}e^{(u_{n}-u_{k}){\mathbf{J}}_{11}}\beta_{k}(M^{({\bm{\theta}})}_{k}-{\mathbf{J}}_{12}(\alpha){\mathbf{J}}_{22}(\alpha)^{-1}M^{({\mathbf{x}})}_{k})\end{split}$
(55a)
$\begin{split}R_{n}^{({\bm{\theta}})}=e^{\beta_{n}{\mathbf{J}}_{11}}&R_{n-1}^{({\bm{\theta}})}+\beta_{n}\gamma_{n}^{-1}{\mathbf{J}}_{12}(\alpha){\mathbf{J}}_{22}(\alpha)^{-1}({\mathbf{x}}_{n}-{\mathbf{x}}_{n-1})\\\
&=\sum_{k=1}^{n}e^{(u_{n}-u_{k}){\mathbf{J}}_{11}}\beta_{k}\gamma_{k}^{-1}{\mathbf{J}}_{12}(\alpha){\mathbf{J}}_{22}(\alpha)^{-1}({\mathbf{x}}_{k}-{\mathbf{x}}_{k-1})\end{split}$
(55b)
and a remaining term
$\Delta_{n}^{({\bm{\theta}})}\triangleq{\bm{\theta}}_{n}-{\bm{\theta}}^{*}-L_{n}^{({\bm{\theta}})}-R_{n}^{({\bm{\theta}})}$.
Similarly, for iteration ${\mathbf{x}}_{n}$, define the sequence
$L_{n}^{({\mathbf{x}})}$ such that
# Attached prime ideals over skew Ore polynomials
Sebastián Higuera Universidad Nacional de Colombia - Sede Bogotá Campus
Universitario<EMAIL_ADDRESS>and Armando Reyes Universidad Nacional
de Colombia - Sede Bogotá Campus Universitario<EMAIL_ADDRESS>Dedicated
to Martha Rincón
###### Abstract.
In this paper, we investigate the attached prime ideals of inverse polynomial
modules over skew Ore polynomials.
###### Key words and phrases:
Attached prime ideal, inverse polynomial module, skew polynomial ring, skew
Ore polynomials, Bass module.
###### 2020 Mathematics Subject Classification:
16D10, 16D60, 16D80, 16E45, 16S36, 16S85, 16W50.
The authors were supported by the research fund of Department of Mathematics,
Faculty of Science, Universidad Nacional de Colombia - Sede Bogotá, Colombia,
HERMES CODE 53880.
## 1\. Introduction
Throughout the paper, every ring $R$ is associative (not necessarily
commutative) with identity. If $R$ is commutative, then it is denoted by $K$.
If $N_{R}$ is a right module, then the right annihilator of $N_{R}$ is defined
by ${\rm ann}_{R}(N)=\left\\{r\in R\mid Nr=0\right\\}$, and $N_{R}$ is said to
be prime if $N_{R}\neq 0$ and ${\rm ann}_{R}(N)={\rm ann}_{R}(N^{\prime})$ for
all submodule $N_{R}^{\prime}$ of $N_{R}$. If $M_{R}$ is a right module, then
a right prime ideal $P$ of $R$ is called associated of $M_{R}$ if there exists
a prime submodule $N_{R}$ of $M_{R}$ such that $P={\rm ann}_{R}(N)$. The set
of all associated prime ideals of $M_{R}$ is denoted by ${\rm Ass}(M_{R})$
[29, p. 86]. These ideals have been widely studied in the literature. For
instance, Brewer and Heinzer [10] showed that the associated prime ideals of
the commutative polynomial ring $K[x]$ are all extended, that is, every
$P\in{\rm Ass}(K[x]_{K[x]})$ may be expressed as $P=Q[x]$, where $Q\in{\rm
Ass}(K_{K})$ (see also Faith [18]). Annin [3, 4, 5] extended this result to
the setting of skew polynomial rings in the sense of Ore [40, 41], while
Nordstrom [38, 39] computed the associated prime ideals of simple torsion
modules over generalized Weyl algebras defined by Bavula [9]. Later, Ouyang
and Birkenmeier [42] defined the nilpotent associated primes as a
generalization of the associated prime ideals and described these ideals over
skew polynomial rings. In the setting of the skew PBW extensions introduced by
Gallego and Lezama [21], Niño et al. [35] characterized the associated primes
of modules over these rings. Later, Higuera et al. [25, 26] studied the nilpotent associated prime ideals of a skew PBW extension and investigated the associated prime ideals of induced modules over this kind of noncommutative ring.
Given the importance of primary decomposition theory and its relationship with associated prime ideals, Macdonald [32] considered a theory dual to primary decomposition, commonly referred to as secondary representation, in which the main ideals are called attached primes. According to
Baig [8], $M_{K}$ is called a secondary module if $M_{K}\neq 0$ and the
endomorphism $\phi_{r}$ of $M_{K}$ defined by $\phi_{r}(m):=mr$ for all $m\in
M_{K}$ is either surjective or nilpotent (that is, there exists
$k\in\mathbb{N}$ such that $\phi_{r}^{k}=0$), for each $r\in K$ [8, Definition
3.1.1]. Secondary modules are also called $P$-secondary since their nilradical
is a prime ideal $P$ of $K$ [8, Claim 3.1.2]. If $M_{K}$ has a secondary
representation, that is, $M_{K}=\sum_{i=1}^{n}M_{i}$ where each $M_{i}$ is
secondary, then $M_{K}$ is called representable. If $M_{i}$ is
$P_{i}$-secondary for every $1\leq i\leq n$ with $P_{i}\neq P_{j}$ for $i\neq
j$, and the sums $\sum_{i\neq k}M_{i}$ are proper submodules of $M_{K}$, then
the representation is called minimal [8, Definition 3.1.9]. The prime ideals
$P_{1},\dotsc,P_{n}$ are called the attached primes of $M_{K}$, and the set of all
attached prime ideals of $M_{K}$ is denoted by ${\rm Att}^{*}(M_{K})$ [8,
Definition 3.2.2].
Melkersson [34] studied the attached prime ideals over commutative polynomial extensions. He investigated when multiplication by $f(x)\in K[x]$ defines a surjective endomorphism of the module $M[x^{-1}]_{K[x]}$, which consists of all the polynomials of the form $m(x)=m_{0}+m_{1}x^{-1}+\cdots+m_{k}x^{-k}$ with $m_{i}\in M_{K}$ for all $0\leq i\leq k$. He showed that if $g$ and $h$ are endomorphisms of $M_{K}$ such that $gh=hg$, with $g$ surjective and $h$ nilpotent, then $f:=g+h$ is a surjective endomorphism [34, Lemma 2.1]; with this result, he proved that if $M_{K}$ is Artinian or has a secondary representation, then multiplication by $f(x)$ is surjective on $M[x^{-1}]_{K[x]}$ if and only if $M=c(f)M$, where $c(f)$ is the ideal generated by the coefficients of $f(x)$. As a corollary, he obtained a characterization of the attached prime ideals of the right module $M[x^{-1}]_{K[x]}$ by showing that if $Q\in{\rm Att}^{*}(M[x^{-1}]_{K[x]})$ then $Q=P[x]$, where $P\in{\rm Att}^{*}(M_{K})$ [34, Corollary 2.3].
Annin [6] introduced the concept of attached prime ideal for arbitrary modules (not necessarily representable), and provided an extension of Macdonald’s theory of secondary representation to the noncommutative setting. Following Annin [6], $N_{R}$ is called coprime if $N_{R}\neq 0$ and ${\rm ann}_{R}(N)={\rm ann}_{R}(Q)$ for every non-zero quotient module $Q_{R}$ of $N_{R}$ [6, Definition 2.1]. A right prime ideal $P$ of $R$ is said to be attached to $M_{R}$ if there exists a coprime quotient module $Q_{R}$ of $M_{R}$ such that $P={\rm ann}_{R}(Q)$. The set of attached prime ideals of $M_{R}$ is denoted by ${\rm Att}(M_{R})$ [6, Definition 2.3]. Annin [7] defined the
completely $\sigma$-compatible modules and extended Melkersson’s result to the
noncommutative setting. He proved that if $Q\in{\rm Att}(M[x^{-1}]_{S})$ then
$Q=P[x]$ where $P\in{\rm Att}(M_{R})$ and $S$ is the skew polynomial ring
$R[x;\sigma]$ with $\sigma$ an automorphism of $R$ [7, Theorem 3.2].
Cohn [13] introduced the skew Ore polynomials of higher order as a generalization of skew polynomial rings by considering the relation $xr:=\Psi_{1}(r)x+\Psi_{2}(r)x^{2}+\dotsb$ for all $r\in R$, where the $\Psi$’s are endomorphisms of $R$. Following Cohn’s ideas, Smits [44] introduced the ring of skew Ore polynomials of higher order over a division ring $D$, with commutation rule defined by
(1.1) $\displaystyle xr:=r_{1}x+\cdots+r_{k}x^{k},\ \text{for all}\ r\in D\ \text{and}\ k\geq 1.$
The relation (1.1) induces a family of endomorphisms
$\delta_{1},\ldots,\delta_{k}$ of the group $(D,+)$ with
$\delta_{i}(r):=r_{i}$ for every $1\leq i\leq k$ [44, p. 211]. Smits proved
that if $\\{\delta_{2},\ldots,\delta_{k}\\}$ is a set of left $D$-independent endomorphisms (i.e., if $c_{2}\delta_{2}(r)+\cdots+c_{k}\delta_{k}(r)=0$ for all $r\in D$ then $c_{i}=0$ for all $2\leq i\leq k$ [44, p. 212]), then $\delta_{1}$ is an injective endomorphism [44, p. 213]. For some algebras, such as Clifford algebras, Weyl-Heisenberg algebras, and Sklyanin algebras, this commutation relation is not sufficient to define the noncommutative structure, since a free non-zero term $\Psi_{0}$ is required. Maksimov [33] considered the skew Ore polynomials of higher order
with free non-zero term $\Psi_{0}(r)$ where $\Psi_{0}$ satisfies the relation
$\Psi_{0}(rs)=\Psi_{0}(r)s+\Psi_{1}(r)\Psi_{0}(s)+\Psi_{2}(r)\Psi_{0}^{2}(s)+\dotsb$,
for every $r,s\in R$. Later, Golovashkin and Maksimov [22] introduced the
algebras $Q(a,b,c)$ over a field $\Bbbk$ of characteristic zero with two
generators $x$ and $y$, subject to the quadratic relation
$yx=ax^{2}+bxy+cy^{2}$, where $a,b,c\in\Bbbk$. If $\\{x^{m}y^{n}\\}$ forms a
basis for $Q(a,b,c)$, then the ring generated by the quadratic relation is an
algebra of skew Ore polynomials and can be defined by a system of linear
mappings $\delta_{0},\ldots,\delta_{k}$ of $\Bbbk[x]$ into itself such that
for any $p(x)\in\Bbbk[x]$,
$yp(x)=\delta_{0}(p(x))+\delta_{1}(p(x))y+\cdots+\delta_{k}(p(x))y^{k}$, for
some $k\in\mathbb{N}$.
Motivated by Annin’s research [7] about the attached prime ideals of
$M[x^{-1}]_{S}$ and the importance of the algebras of skew Ore polynomials of
higher order, in this paper we introduce a family of noncommutative rings
called skew Ore polynomials and we study the attached prime ideals of the
inverse polynomial module over these rings. Since some of the ring-theoretical, homological, and combinatorial properties of these rings have been investigated recently (e.g., [12, 36, 37] and the references therein), this article can be
considered as a contribution to research on skew Ore polynomials of higher
order.
The paper is organized as follows. Section 2 establishes some preliminaries
and key results about skew Ore polynomials. In Section 3, we introduce the
completely $(\sigma,\delta)$-compatible modules and present original results
(Propositions 3.3, 3.4 and 3.5). Under compatibility conditions, we also
characterize the attached prime ideals of the right module $M[x^{-1}]_{A}$
where $A$ is a skew Ore polynomial ring (Theorems 3.8 and 3.12). As expected,
our results extend those presented above by Annin [6, 7] for skew polynomial rings of automorphism type. Finally, we present some ideas for future work.
The symbols $\mathbb{N}$, $\mathbb{Z}$, $\mathbb{R}$, and $\mathbb{C}$ denote the set of natural numbers including zero, the ring of integers, and the fields of real and complex numbers, respectively. The term module will always mean right module unless stated otherwise. The symbol $\Bbbk$ denotes a field and $\Bbbk^{*}:=\Bbbk\ \backslash\ \\{0\\}$.
## 2\. Preliminaries
If $\sigma$ is an endomorphism of $R$, then a map $\delta:R\rightarrow R$ is
called a $\sigma$-derivation of $R$ if it is additive and satisfies $\displaystyle\delta(rs)=\sigma(r)\delta(s)+\delta(r)s$, for every $r,s\in R$
[23, p. 26]. Following Ore [40, 41], the skew polynomial ring (also called Ore
extension of $R$) over $R$ is defined as the ring $R[x;\sigma,\delta]$
generated by $R$ and $x$ such that it is a free left $R$-module with basis
$\left\\{x^{k}\ |\ k\in\mathbb{N}\right\\}$ and $xr:=\sigma(r)x+\delta(r)$ for
every $r\in R$ [23, p. 34].
A derivation $\delta$ of $R$ is called locally nilpotent if for all $r\in R$
there exists $n(r)\geq 1$ such that $\delta^{n(r)}(r)=0$ [20, p. 11].
Following the ideas of Cohn [13] and Smits [44], we introduce the following
kind of skew Ore polynomials of higher order.
###### Definition 2.1.
If $R$ is a ring, $\sigma$ is an automorphism of $R$ and $\delta$ is a locally
nilpotent $\sigma$-derivation of $R$, then we define the skew Ore polynomial
ring $A:=R(x;\sigma,\delta)$ which consists of the uniquely representable
elements $r_{0}+r_{1}x+\cdots+r_{k}x^{k}$ where $r_{i}\in R$ and
$k\in\mathbb{N}$, with the commutation rule $xr:=\sigma(r)x+x\delta(r)x$ for
all $r\in R$.
According to Definition 2.1, if $r\in R$ and $\delta^{n(r)}(r)=0$ for some
$n(r)\geq 1$, then
(2.1) $\displaystyle
xr=\sigma(r)x+\sigma\delta(r)x^{2}+\cdots+\sigma\delta^{n(r)-1}(r)x^{n(r)}.$
If we define the endomorphisms $\Psi_{i}:=\sigma\delta^{i-1}$ for all $i\geq
1$ and $\Psi_{0}:=0$, then $A$ is a skew Ore polynomial ring of higher order in the
sense of Cohn [13].
###### Example 2.2.
We present some examples of skew Ore polynomial rings.
1. (1)
If $\delta=0$ then $xr=\sigma(r)x$ and thus $R(x;\sigma)=R[x;\sigma]$ is the
skew polynomial ring where $\sigma$ is an automorphism of $R$.
2. (2)
The quantum plane $\Bbbk_{q}[x,y]$ is the free algebra generated by $x,y$ over
$\Bbbk$, and subject to the commutation rule $xy=qyx$ with $q\in\Bbbk^{*}$ and
$q\neq 1$. We note that $\Bbbk_{q}[x,y]\cong\Bbbk[y](x;\sigma)$ where
$\sigma(y):=qy$ is an automorphism of $\Bbbk[y]$.
3. (3)
The Jordan plane $\mathcal{J}(\Bbbk)$ defined by Jordan [27] is the free
algebra generated by the indeterminates $x,y$ over $\Bbbk$ and the relation
$yx=xy+y^{2}$. This algebra can be written as the skew polynomial ring
$\Bbbk[y][x;\delta]$ with $\delta(y):=-y^{2}$. On the other hand, notice that
$\delta(x)=1$ is a locally nilpotent derivation of $\Bbbk[x]$, and thus the Jordan plane can also be interpreted as $\Bbbk[x](y;\delta)$.
4. (4)
Díaz and Pariguan [14] introduced the $q$-meromorphic Weyl algebra $MW_{q}$ as
the algebra generated by $x,y$ over $\mathbb{C}$, and defining relation
$yx=qxy+x^{2}$, for $0<q<1$. Lopes [31] showed that using the generator
$Y=y+(q-1)^{-1}x$ instead of $y$, it follows that $Yx=qxY$ and thus the
algebra $MW_{q}$ can be written as a quantum plane $\mathbb{C}_{q}[x,y]$ [31,
Example 3.1]. In view of example (2), we conclude that the algebra $MW_{q}$ is a skew Ore polynomial ring.
5. (5)
Consider the algebra $Q(0,b,c)$ defined by Golovashkin and Maksimov [22] with
$a=0$. It is straightforward to see that $\sigma(x)=bx$ is an automorphism of
$\Bbbk[x]$ with $b\neq 0$, $\delta(x)=c$ is a locally nilpotent
$\sigma$-derivation of $\Bbbk[x]$ and so $Q(0,b,c)$ can be interpreted as
$A=\Bbbk[x](y;\sigma,\delta)$.
6. (6)
If $\delta_{1}$ is an automorphism of $D$ and
$\\{\delta_{2},\ldots,\delta_{k}\\}$ is a set of left $D$-independent endomorphisms, then $\delta:=\delta_{1}^{-1}\delta_{2}$ is a
$\delta_{1}$-derivation of $D$, $\delta_{i+1}(r)=\delta_{1}\delta^{i}(r)$, and
$\delta^{k}(r)=0$ for all $r\in D$ [44, p. 214], and thus (1.1) coincides with
(2.1). In this way, the skew Ore polynomial rings of higher order defined by
Smits can be seen as $D(x;\delta_{1},\delta)$.
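To make the commutation rule (2.1) concrete, the following is a minimal computational sketch (Python with SymPy; the helper names are ours, and the ring generator is always written $x$, with the outer variable left implicit). Under the stated assumptions, it extends $\delta$ from its value on the generator and recovers the defining relations of the Jordan plane, the quantum plane, and $Q(0,b,c)$ from Example 2.2:

```python
import sympy as sp

x, q, b, c = sp.symbols('x q b c')

def extend_delta(p, sigma_x, delta_x):
    """Extend the sigma-derivation with delta(x) = delta_x to all of k[x],
    using delta(rs) = sigma(r)*delta(s) + delta(r)*s and delta|_k = 0."""
    total = sp.Integer(0)
    for (n,), coeff in sp.Poly(p, x).terms():
        # delta(x**n) = sum_{i=0}^{n-1} sigma(x)**i * delta(x) * x**(n-1-i)
        total += coeff * sum(sigma_x**i * delta_x * x**(n - 1 - i)
                             for i in range(n))
    return sp.expand(total)

def left_mult(p, sigma_x, delta_x):
    """Coefficients [c_1, c_2, ...] with  t*p = sum_j c_j * t**j  for the
    outer variable t, computed from (2.1): c_j = sigma(delta**(j-1)(p))."""
    coeffs, r = [], sp.expand(p)
    while r != 0:
        coeffs.append(sp.expand(r.subs(x, sigma_x)))  # apply sigma
        r = extend_delta(r, sigma_x, delta_x)         # apply delta
    return coeffs

# Jordan plane k[x](y; id, d/dx):  y*x = x*y + y**2
print(left_mult(x, x, 1))      # [x, 1]
# Quantum plane (generator relabeled x), sigma(x) = q*x, delta = 0:  t*x = q*x*t
print(left_mult(x, q*x, 0))    # [q*x]
# Q(0,b,c) = k[x](y; sigma, delta), sigma(x) = b*x, delta(x) = c:  y*x = b*x*y + c*y**2
print(left_mult(x, b*x, c))    # [b*x, c]
```

The loop terminates because $\delta$ is locally nilpotent, and each printed list $[c_{1},c_{2},\ldots]$ encodes the relation $t\,p=\sum_{j}c_{j}\,t^{j}$ for the outer variable $t$.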
To define the module $M[x^{-1}]_{A}$ (Section 3.2), we need the localization technique. In the localization of noncommutative rings, the Ore condition plays an important role. A multiplicative subset $X$ of $R$
satisfies the left Ore condition if $Xr\cap Rx\neq\emptyset$ for every $r\in
R$ and $x\in X$. If $X$ satisfies the left Ore condition then $X$ is called a
left Ore set. Proposition 2.3 shows that the set containing all powers of $x$
satisfies the left Ore condition.
###### Proposition 2.3.
$X=\\{x^{k}\ |\ k\geq 0\\}$ is a left Ore set of the algebra $A$.
###### Proof.
It is clear that $X$ is a multiplicative subset of $A$, so we have to show
that $X$ satisfies the left Ore condition. Let
$a=r_{0}+r_{1}x+\cdots+r_{k}x^{k}$ be an element of $A$ with $r_{k}\neq 0$.
Since $\delta$ is locally nilpotent, for each $r_{i}$ in the expression of $a$
there exists $m_{i}\geq 0$ such that
$xr_{i}:=\sum_{j=1}^{m_{i}}\sigma(\delta^{j-1}(r_{i}))x^{j}=a_{i}x,$
where
$a_{i}:=\sigma(r_{i})+\sigma(\delta(r_{i}))x+\cdots+\sigma(\delta^{m_{i}-1}(r_{i}))x^{m_{i}-1}\in
A$. In this way, for each $r_{i}$ there exists $a_{i}\in A$ such that $xr_{i}=a_{i}x$, and so $xa=a^{\prime}x$ for some $a^{\prime}\in A$. We now show by induction on $p$ that for every $a\in A$ there exists $a_{p}\in A$ such that $x^{p}a=a_{p}x^{p}$. The case $p=1$ was proved above. If $x^{p}b=b_{p}x^{p}$ for every $b\in A$, then $x^{p+1}a=x^{p}(xa)=x^{p}(a^{\prime}x)=(a^{\prime}_{p}x^{p})x=a^{\prime}_{p}x^{p+1}$ for some $a^{\prime}_{p}\in A$. Hence $Xa\cap Ax^{p}\neq\emptyset$ for every $a\in A$ and $x^{p}\in X$, and therefore $X$ is a left Ore set of $A$. ∎
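As a concrete illustration of the computation in the proof of Proposition 2.3, the following minimal sketch (Python with SymPy; names ours) produces, in the Jordan plane $\Bbbk[x](y;\delta)$ of Example 2.2(3), an element $a^{\prime}\in A$ with $y\,a=a^{\prime}y$ for a given $a=\sum_{i}r_{i}(x)\,y^{i}$:

```python
import sympy as sp

x = sp.symbols('x')

def y_times(p):
    # y*p(x) = p*y + p'*y**2 + p''*y**3 + ...  in the Jordan plane
    # (the special case of (2.1) with sigma = id and delta = d/dx).
    out, r = [], sp.expand(p)
    while r != 0:
        out.append(r)
        r = sp.diff(r, x)
    return out

def ore_witness(a):
    """Given a = sum_i a[i](x) * y**i (encoded as a dict), return a2 with
    y*a = a2*y, mirroring the proof of Proposition 2.3."""
    a2 = {}
    for i, r in a.items():
        for j, cj in enumerate(y_times(r), start=1):  # y*r = sum_j cj*y**j
            k = i + j - 1       # cj*y**(i+j) = (cj*y**(i+j-1))*y
            a2[k] = sp.expand(a2.get(k, 0) + cj)
    return a2

# For a = x**2 + x*y:  y*a = (x**2 + 3*x*y + 3*y**2)*y
print(ore_witness({0: x**2, 1: x}))  # {0: x**2, 1: 3*x, 2: 3}
```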
By Proposition 2.3, we can localize $A$ by $X$, and so we denote this
localization by $X^{-1}A$. It is straightforward to see that the indeterminate
$x^{-1}$ satisfies the relation
$x^{-1}r:=\sigma^{\prime}(r)x^{-1}+\delta^{\prime}(r)$, for all $r\in R$ with
$\sigma^{\prime}(r):=\sigma^{-1}(r)$ and
$\delta^{\prime}(r):=-\delta\sigma^{-1}(r)$. Dumas [17] studied the field of
fractions of $D[x;\sigma,\delta]$ where $\sigma$ is an automorphism of $D$ and
stated that one technique for this purpose is to consider it as a subfield of
a certain field of series [17, p. 193]. According to Dumas, if $Q$ is the
field of fractions of $D[x;\sigma,\delta]$ then $Q$ is a subfield of the field of Laurent series $D((x^{-1};\sigma^{-1},-\delta\sigma^{-1}))$ whose elements are of the form
$r_{-k}x^{-k}+\cdots+r_{-1}x^{-1}+r_{0}+r_{1}x+\cdots$ for some
$k\in\mathbb{N}$, and which satisfy the commutation rules
$\displaystyle xr:=\sigma(r)x+\sigma\delta(r)x^{2}+\cdots=\sigma(r)x+x\delta(r)x,\ \text{and}\ x^{-1}r:=\sigma^{\prime}(r)x^{-1}+\delta^{\prime}(r),\ \text{for all}\ r\in R.$
By Definition 2.1 and Proposition 2.3, if $\sigma$ is an automorphism of $D$
and $\delta$ is a locally nilpotent $\sigma$-derivation of $D$ then
$X^{-1}A\subseteq D((x^{-1};\sigma^{-1},-\delta\sigma^{-1}))$ (see [1, 15, 16] for more details about fields of Laurent series).
###### Remark 2.4.
Following Lam et al. [28, p. 2468], if $\sigma$ is an automorphism of $R$ and
$\delta$ is a $\sigma$-derivation of $R$, then we denote by $f_{j}^{i}$ the
endomorphism of $R$ which is the sum of all possible words in
$\sigma^{\prime},\delta^{\prime}$ built with $i$ letters $\sigma^{\prime}$ and
$j-i$ letters $\delta^{\prime}$, for $i\leq j$. In particular, $f_{0}^{0}=1$,
$f_{j}^{j}=\sigma^{\prime j}$, $f_{j}^{0}=\delta^{\prime j}$, and
$f_{j}^{j-1}=\sigma^{\prime j-1}\delta^{\prime}+\sigma^{\prime
j-2}\delta^{\prime}\sigma^{\prime}+\cdots+\delta^{\prime}\sigma^{\prime j-1}$;
if $\delta\sigma=\sigma\delta$, then $f_{j}^{i}=\binom{j}{i}\sigma^{\prime
i}\delta^{\prime j-i}$. If $r\in R$ and $k\in\mathbb{N}$, then the following
formula holds:
(2.2) $\displaystyle\displaystyle x^{-k}r=\sum_{i=0}^{k}f_{k}^{i}(r)x^{-i}.$
In addition, if $r,s\in R$ and $k,k^{\prime}\in\mathbb{N}$ then
(2.3)
$\displaystyle\displaystyle(rx^{-k})(sx^{-k^{\prime}})=\sum_{i=0}^{k}rf_{k}^{i}(s)x^{-(i+k^{\prime})}.$
Taking into account the usual addition of polynomials and the product induced
by (2.2) and (2.3), we define the ring of polynomials in the indeterminate
$x^{-1}$ with coefficients in $R$ and denote it by $R[x^{-1}]$. If $M_{R}$ is
a right module then the inverse polynomial module $M[x^{-1}]_{R}$ is defined
as the set of all polynomials of the form $f(x)=m_{0}+\cdots+m_{k}x^{-k}$ with
$m_{i}\in M_{R}$ for all $i$, the usual addition of polynomials and the action
of $R$ over any monomial $mx^{-k}$ is defined by (2.2) as follows:
(2.4) $\displaystyle\displaystyle
mx^{-k}r:=\sum_{i=0}^{k}mf_{k}^{i}(r)x^{-i},\ \text{for all}\ m\in M_{R}\
\text{and}\ r\in R.$
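The maps $f_{j}^{i}$ satisfy the recursion $f_{j}^{i}=\sigma^{\prime}\circ f_{j-1}^{i-1}+\delta^{\prime}\circ f_{j-1}^{i}$ (split each word according to its first letter), which gives a direct way to evaluate the right-hand sides of (2.2)-(2.4). The following minimal sketch (Python with SymPy; names ours) implements this recursion for $\sigma={\rm id}$ and $\delta=d/dx$ on $\Bbbk[x]$, and checks the binomial formula of Remark 2.4, which applies here since $\sigma^{\prime}$ and $\delta^{\prime}$ commute:

```python
import sympy as sp

x = sp.symbols('x')

# sigma = id and delta = d/dx on k[x], so sigma' = sigma^{-1} = id and
# delta' = -delta o sigma^{-1} = -d/dx.
sigma_p = lambda p: p
delta_p = lambda p: sp.expand(-sp.diff(p, x))

def f(j, i, r):
    """f_j^i(r): the sum of all words in sigma', delta' with i letters sigma'
    and j - i letters delta', computed via the recursion above."""
    if j == 0:
        return r if i == 0 else sp.Integer(0)
    if i < 0 or i > j:
        return sp.Integer(0)
    return sp.expand(sigma_p(f(j - 1, i - 1, r)) + delta_p(f(j - 1, i, r)))

def x_inv_k_times(r, k):
    # (2.2): X^{-k} r = sum_i f_k^i(r) X^{-i}; returns {i: coefficient of X^{-i}}.
    return {i: f(k, i, r) for i in range(k + 1) if f(k, i, r) != 0}

# Binomial formula of Remark 2.4: f_j^i = binom(j, i) * sigma'^i * delta'^(j-i).
r = x**4
for j in range(5):
    for i in range(j + 1):
        expected = sp.binomial(j, i) * (-1)**(j - i) * sp.diff(r, x, j - i)
        assert sp.expand(f(j, i, r) - expected) == 0

print(x_inv_k_times(x**2, 2))  # {0: 2, 1: -4*x, 2: x**2}
```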
###### Remark 2.5.
We use expressions such as $m(x)=m_{0}+m_{1}x^{-1}+\cdots+m_{k}x^{-k}\in
M[x^{-1}]_{R}$. With this notation, we define the leading monomial of $m(x)$
as ${\rm lm}(m(x)):=x^{-k}$, the leading coefficient of $m(x)$ by ${\rm
lc}(m(x)):=m_{k}$, and the leading term of $m(x)$ as ${\rm
lt}(m(x)):=m_{k}x^{-k}$. The negative degree of $x^{-k}$ is defined by
$\deg(x^{-k}):=-k$ for any $k\in\mathbb{N}$, and $\deg(m(x)):={\rm
max}\\{\deg(x^{-i})\\}_{i=0}^{k}$ for all $m(x)\in M[x^{-1}]_{R}$. For any
element $m(x)\in M[x^{-1}]_{R}$, we denote by $C_{m}$ the set of all
coefficients of $m(x)$.
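For illustration, these notions translate directly to the dictionary encoding used in the sketches above (a minimal sketch; names ours):

```python
# m(x) = m_0 + m_1*X**-1 + ... + m_k*X**-k encoded as {i: m_i} with m_i != 0.
def lm(m):   # leading monomial X^{-k}, encoded by its exponent -k
    return -max(m)

def lc(m):   # leading coefficient m_k
    return m[max(m)]

def deg(m):  # max of the negative degrees deg(X^{-i}) = -i of the monomials present
    return max(-i for i in m)

m = {0: 1, 2: 5}             # 1 + 5*X**-2
print(lm(m), lc(m), deg(m))  # -2 5 0
```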
## 3\. Completely compatible rings and attached prime ideals
### 3.1. Completely $(\sigma,\delta)$-compatible rings
Annin [5] (cf. Hashemi and Moussavi [24]) introduced the notion of
compatibility with the aim of studying the associated prime ideals of modules
over skew polynomial rings. If $\sigma$ is an endomorphism of $R$ and $\delta$
is a $\sigma$-derivation of $R$, then $M_{R}$ is called $\sigma$-compatible if
for each $m\in M_{R}$ and $r\in R$, $mr=0$ if and only if $m\sigma(r)=0$;
$M_{R}$ is $\delta$-compatible if for each $m\in M_{R}$ and $r\in R$, $mr=0$
implies $m\delta(r)=0$; if $M_{R}$ is both $\sigma$-compatible and
$\delta$-compatible, then $M_{R}$ is a ($\sigma,\delta$)-compatible module [5,
Definition 2.1]. Continuing with the compatibility conditions on modules,
$M_{R}$ is called completely $\sigma$-compatible if $(M/N)_{R}$ is
$\sigma$-compatible, for every submodule $N_{R}$ of $M_{R}$ [7, Definition
1.4]. Since completely $\sigma$-compatible modules were used by Annin [7] in his research on the attached prime ideals of $M[x^{-1}]_{S}$, it is natural to consider completely $(\sigma,\delta)$-compatible modules in order to characterize these ideals in the setting of skew Ore polynomials. In this way, we consider the following definition.
###### Definition 3.1.
If $\sigma$ is an endomorphism of $R$ and $\delta$ is a $\sigma$-derivation of $R$, then $M_{R}$ is called completely $\sigma$-compatible if for every submodule $N_{R}$ of $M_{R}$, $(M/N)_{R}$ is a $\sigma$-compatible module; $M_{R}$ is completely $\delta$-compatible if for every submodule $N_{R}$ of $M_{R}$, $(M/N)_{R}$ is $\delta$-compatible; $M_{R}$ is said to be completely $(\sigma,\delta)$-compatible if it is both completely $\sigma$-compatible and completely $\delta$-compatible.
###### Example 3.2.
* (1)
If $M_{R}$ is simple and $(\sigma,\delta)$-compatible, then it is not
difficult to see that $M_{R}$ is completely $(\sigma,\delta)$-compatible.
* (2)
Let $K$ be a local ring with maximal ideal $\mathfrak{m}$ and $\sigma$ any
automorphism of $K$. Annin [3] proved that $M_{K}:=K/\mathfrak{m}$ is a
$\sigma$-compatible module, and since $M_{K}$ is simple it follows that
$M_{K}$ is completely $\sigma$-compatible [3, Example 3.35]. Additionally, if
$\delta$ is a $\sigma$-derivation of $K$ such that $\delta(r)\in\mathfrak{m}$
for every $r\in\mathfrak{m}$, then $M_{K}$ is completely $\delta$-compatible.
Indeed, if $\overline{0}\neq\overline{s}\in M_{K}$ and $r\in K$ satisfy that
$\overline{s}r=0$ then $sr\in\mathfrak{m}$, and since $s\notin\mathfrak{m}$ we
obtain $r\in\mathfrak{m}$. If $\delta(r)\in\mathfrak{m}$ for every
$r\in\mathfrak{m}$, it follows that $s\delta(r)\in\mathfrak{m}$ and so
$\overline{s}\delta(r)=0$. Therefore, $M_{K}$ is $\delta$-compatible and thus
$M_{K}$ is completely $\delta$-compatible.
The following proposition presents some properties of completely
$(\sigma,\delta)$-compatible modules. These properties are required to prove
some results of the paper.
###### Proposition 3.3.
If $M_{R}$ is completely $(\sigma,\delta)$-compatible and $N_{R}$ is a
submodule of $M_{R}$ then the following assertions hold:
1. (1)
If $ma\in N_{R}$ then $m\sigma^{i}(a),m\delta^{j}(a)\in N_{R}$ for each
$i,j\in\mathbb{N}$.
2. (2)
If $mab\in N_{R}$ then
$m\sigma(\delta^{j}(a))\delta(b),m\sigma^{i}(\delta(a))\delta^{j}(b)\in N_{R}$
for all $i,j\in\mathbb{N}$. In particular, $ma\delta^{j}(b),m\delta^{j}(a)b\in
N_{R}$ for all $j\in\mathbb{N}$.
3. (3)
If $mab\in N_{R}$ or $m\sigma(a)b\in N_{R}$ then $m\delta(a)b\in N_{R}$.
###### Proof.
If $M_{R}$ is completely $(\sigma,\delta)$-compatible, then $(M/N)_{R}$ is
$(\sigma,\delta)$-compatible. Considering the elements
$\overline{ma}=\overline{mab}=\overline{0}\in(M/N)_{R}$, the assertions follow
from [2, Lemma 2.15]. ∎
Annin proved some properties of completely $\sigma$-compatible modules [7, p.
539]. Proposition 3.4 extends these statements to completely $(\sigma,\delta)$-compatible modules.
###### Proposition 3.4.
Let $\sigma$ be an endomorphism of $R$ and $\delta$ a $\sigma$-derivation of
$R$.
* (1)
If $M_{R}$ is completely $(\sigma,\delta)$-compatible then $M_{R}$ is
$(\sigma,\delta)$-compatible.
* (2)
If $M_{R}$ is a completely $(\sigma,\delta)$-compatible module then
$(M/N)_{R}$ is completely $(\sigma,\delta)$-compatible, for every submodule
$N_{R}$ of $M_{R}$.
###### Proof.
* (1)
If $M_{R}$ is a completely $(\sigma,\delta)$-compatible module then
$(M/N)_{R}$ is a $(\sigma,\delta)$-compatible module, for every submodule
$N_{R}$ of $M_{R}$. In particular, $(M/\\{0\\})_{R}\cong M_{R}$ is
$(\sigma,\delta)$-compatible for the submodule $\\{0\\}_{R}$ of $M_{R}$.
* (2)
Suppose that $M_{R}$ is completely $(\sigma,\delta)$-compatible and consider a
submodule $N^{\prime}/N$ of $(M/N)_{R}$ where $N^{\prime}_{R}$ is a submodule
of $M_{R}$ with $N\subsetneq N^{\prime}$. By the third isomorphism theorem for
modules, $((M/N)/(N^{\prime}/N))_{R}\cong(M/N^{\prime})_{R}$, and since
$M_{R}$ is completely $(\sigma,\delta)$-compatible, we have that
$(M/N^{\prime})_{R}$ is $(\sigma,\delta)$-compatible and thus
$((M/N)/(N^{\prime}/N))_{R}$ is $(\sigma,\delta)$-compatible, whence
$(M/N)_{R}$ is a completely $(\sigma,\delta)$-compatible module.
∎
We present another important property of completely $(\sigma,\delta)$-compatible
modules.
###### Proposition 3.5.
If $\sigma$ is bijective and $M_{R}$ is a completely
$(\sigma,\delta)$-compatible module, then $M_{R}$ is a completely
$(\sigma^{\prime},\delta^{\prime})$-compatible module.
###### Proof.
Assume that $M_{R}$ is completely $(\sigma,\delta)$-compatible. If $N_{R}$ is a submodule of $M_{R}$ then $mr\in N_{R}$ if and only if $m\sigma(r)\in N_{R}$, for all $m\in M_{R}$ and $r\in R$. Since $\sigma$ is bijective, we have $m\sigma^{-1}(r)\in N_{R}$ if and only if $mr\in N_{R}$, and so $(M/N)_{R}$ is $\sigma^{\prime}$-compatible, proving that $M_{R}$ is completely $\sigma^{\prime}$-compatible. On the other hand, if $mr\in N_{R}$ then $m\sigma^{-1}(r)\in N_{R}$, and since $M_{R}$ is completely $\delta$-compatible, $m\delta\sigma^{-1}(r)\in N_{R}$, that is, $m\delta^{\prime}(r)\in N_{R}$; thus $M_{R}$ is completely $\delta^{\prime}$-compatible. Therefore $M_{R}$ is completely $(\sigma^{\prime},\delta^{\prime})$-compatible. ∎
### 3.2. Attached prime ideals
In this section, we define an $A$-module structure on the inverse polynomial module $M[x^{-1}]_{R}$ and study the attached prime ideals of the module $M[x^{-1}]_{A}$. The action of $A$ on $M[x^{-1}]_{R}$ is given by
(3.1) $\displaystyle mx^{-1}r$
$\displaystyle:=m\sigma^{\prime}(r)x^{-1}+m\delta^{\prime}(r)\ \text{for all}\
r\in R\ \text{and}\ m\in M_{R},\ \text{and}$ (3.2) $\displaystyle x^{-i}x^{j}$
$\displaystyle:=x^{-i+j}\ \text{if}\ j\leq i\ \text{and}\ 0\
\text{otherwise}.$
###### Remark 3.6.
By (3.1) and (3.2), if $\delta:=0$ then $mx^{-i}rx^{j}:=m\sigma^{\prime
i}(r)x^{-i+j}$ for all $r\in R$ and $i,j\in\mathbb{N}$ with $j\leq i$, which
coincides with $M[x^{-1}]_{S}$ [7, p. 538].
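A minimal sketch of this action (Python with SymPy; names ours), under the assumptions $M=R=\Bbbk[x]$, $\sigma={\rm id}$ and $\delta=d/dx$, so that $f_{k}^{i}=\binom{k}{i}(\delta^{\prime})^{k-i}$ as in Remark 2.4; the outer indeterminate is written $X$ to avoid clashing with the generator $x$ of $R$:

```python
import sympy as sp

x = sp.symbols('x')

# With sigma = id and delta = d/dx on k[x], sigma' = id and delta' = -d/dx
# commute, so f_k^i = binom(k, i) * (delta')**(k - i)  (Remark 2.4).
def f(k, i, r):
    return sp.expand(sp.binomial(k, i) * (-1)**(k - i) * sp.diff(r, x, k - i))

def act_monomial(m_poly, r, j):
    """Action of r*X**j in A on sum_k m_poly[k]*X**(-k) in M[X^{-1}], via
    (3.1)-(3.2): first m X^{-k} r = sum_i m f_k^i(r) X^{-i} (as in (2.4)),
    then X^{-i} X^{j} = X^{-i+j} if j <= i, and 0 otherwise."""
    out = {}
    for k, m in m_poly.items():
        for i in range(k + 1):
            if j <= i:                       # (3.2): positive powers truncate
                term = sp.expand(m * f(k, i, r))
                if term != 0:
                    out[i - j] = sp.expand(out.get(i - j, 0) + term)
    return out

# (x*X**-2) acted on by x*X:  the result is x**2*X**-1 - 2*x.
print(act_monomial({2: x}, x, 1))  # {1: x**2, 0: -2*x}
```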
If $N_{R}$ is a right module, then the right annihilator of $N[x^{-1}]_{R}$ (resp., $N[x^{-1}]_{A}$) is denoted by ${\rm ann}_{R}(N[x^{-1}])$ (resp., ${\rm ann}_{A}(N[x^{-1}])$). The following lemma
characterizes the ideals generated by right prime ideals of $A$ that
correspond to annihilators of quotient modules of $M[x^{-1}]_{A}$.
###### Lemma 3.7.
If $M_{R}$ is completely $(\sigma,\delta)$-compatible and $P$ is a right prime
ideal of $R$ such that $P={\rm ann}_{R}(M/N)$ for some submodule $N_{R}$ of
$M_{R}$, then
$PA={\rm ann}_{A}(M[x^{-1}]/N[x^{-1}]).$
###### Proof.
By Propositions 3.3 and 3.5, if $M_{R}$ is completely $(\sigma,\delta)$-compatible then $C_{mf}\subseteq N$ for every $m(x)\in M[x^{-1}]_{A}$ and $f(x)\in PA$, where $C_{mf}$ denotes the set of coefficients of $m(x)f(x)$, and thus $m(x)f(x)\in N[x^{-1}]_{A}$, proving that $PA\subseteq{\rm ann}_{A}(M[x^{-1}]/N[x^{-1}])$. For the other inclusion, if $f(x)=r_{0}+\cdots+r_{j}x^{j}\notin PA$ then there exists a monomial $r_{l}x^{l}$ of $f(x)$ such that $r_{l}\notin P$ for some $0\leq l\leq j$. So there exists $m\in M_{R}$ such that $mr_{l}\notin N_{R}$, which implies that $mf(x)\notin N[x^{-1}]_{A}$ and so $f(x)\notin{\rm ann}_{A}(M[x^{-1}]/N[x^{-1}])$, whence ${\rm ann}_{A}(M[x^{-1}]/N[x^{-1}])\subseteq PA$. ∎
Theorem 3.8 shows that right ideals of $A$ generated by attached prime ideals
of $M_{R}$ are attached prime ideals of $M[x^{-1}]_{A}$, and extends [7,
Theorem 2.1].
###### Theorem 3.8.
If $M_{R}$ is completely $(\sigma,\delta)$-compatible then
$\displaystyle{\rm Att}(M[x^{-1}]_{A})\supseteq\left\\{PA\ |\ P\in{\rm
Att}(M_{R})\right\\}.$
###### Proof.
If $P$ is an attached prime ideal of $M_{R}$ and $(M/N)_{R}$ is a coprime quotient of $M_{R}$ such that $P={\rm ann}_{R}(M/N)$ for some submodule $N_{R}$ of $M_{R}$, it follows that $PA={\rm ann}_{A}(M[x^{-1}]/N[x^{-1}])$ by Lemma 3.7. Let us prove that $(M[x^{-1}]/N[x^{-1}])_{A}$ is a coprime quotient. Consider a submodule $Q_{A}$ of $M[x^{-1}]_{A}$ such that $N[x^{-1}]\subseteq Q\subsetneq M[x^{-1}]$. Let $C_{Q}$ be the subset of $M$ that consists of all coefficients of the elements of $Q$, and consider the submodule $Q_{R}^{\prime}$ of $M_{R}$ generated by $C_{Q}$. Notice that if $Q\neq M[x^{-1}]$ then $Q^{\prime}\neq M$, and since $(M/N)_{R}$ is a coprime module, $P={\rm ann}_{R}(M/N)={\rm ann}_{R}(M/Q^{\prime})$.
If $g(x)=r_{0}+\cdots+r_{j}x^{j}\in{\rm ann}_{A}(M[x^{-1}]/Q)$ then $f(x)g(x)\in Q_{A}$, and hence $C_{fg}\subseteq Q_{R}^{\prime}$, for all $f(x)=m_{0}+\cdots+m_{k}x^{-k}\in M[x^{-1}]_{A}$. By Propositions 3.3 and 3.5, if $M_{R}$ is completely $(\sigma,\delta)$-compatible then $m_{i}r_{l}\in Q_{R}^{\prime}$, whence $m_{i}r_{l}\in N_{R}$ for all $i,l$. Since $m_{i}r_{l}\in N_{R}$, we have that $f(x)g(x)\in N[x^{-1}]_{A}$ by Propositions 3.3 and 3.5, and thus $M[x^{-1}]g(x)\subseteq N[x^{-1}]$. Therefore $g(x)\in{\rm ann}_{A}(M[x^{-1}]/N[x^{-1}])$, proving that ${\rm ann}_{A}(M[x^{-1}]/Q)\subseteq{\rm ann}_{A}(M[x^{-1}]/N[x^{-1}])$.
If $f\in{\rm ann}_{A}(M[x^{-1}]/N[x^{-1}])$ then $M[x^{-1}]f\subseteq N[x^{-1}]$, and since $N[x^{-1}]\subseteq Q$ we have that $M[x^{-1}]f\subseteq Q$, showing that ${\rm ann}_{A}(M[x^{-1}]/N[x^{-1}])\subseteq{\rm ann}_{A}(M[x^{-1}]/Q)$, and thus $(M[x^{-1}]/N[x^{-1}])_{A}$ is a coprime module. ∎
###### Corollary 3.9 ([7, Theorem 2.1]).
If $M_{R}$ is completely $\sigma$-compatible then
$\displaystyle{\rm Att}(M[x^{-1}]_{S})\supseteq\left\\{P[x]\ |\ P\in{\rm
Att}(M_{R})\right\\}.$
Lemma 3.10 shows when a coprime module over the ring of skew Ore polynomials
$A$ is a coprime module over $R$ and generalizes [7, Lemma 2.4].
###### Lemma 3.10.
If $P_{A}$ is a coprime module and $P_{R}$ is a completely
$(\sigma,\delta)$-compatible module then $P_{R}$ is coprime.
###### Proof.
Consider a proper submodule $Q_{R}$ of $P_{R}$ and let us show that ${\rm ann}_{R}(P)={\rm ann}_{R}(P/Q)$. If $r\in{\rm ann}_{R}(P)$ then $Pr=0\subseteq Q$, and so $r\in{\rm ann}_{R}(P/Q)$. For the other inclusion, assume that $r\in{\rm ann}_{R}(P/Q)$ and let $N_{A}:=\sum Q_{A}^{\prime}$, where the sum runs over all $A$-submodules $Q_{A}^{\prime}$ of $P_{A}$ with $Q^{\prime}\subseteq Q$. So $N\subseteq Q\subsetneq P$, and since $P_{A}$ is a coprime module, ${\rm ann}_{A}(P)={\rm ann}_{A}(P/N)$. Let $p\in P_{A}$ and denote by $prA$ the $A$-submodule generated by the element $pr$. Let us prove that $prA\subseteq N$. Since $x^{-1}r:=\sigma^{\prime}(r)x^{-1}+\delta^{\prime}(r)$, we have $rx=x\sigma^{\prime}(r)+x\delta^{\prime}(r)x$ for every $r\in R$, and thus if $f(x)=r_{0}+\cdots+r_{l}x^{l}\in A$ and $Prr_{j}\subseteq Q$ for every $0\leq j\leq l$, then $prf(x)\in Q$ by Propositions 3.3 and 3.5. Hence $prA\subseteq Q$, and so $prA\subseteq N$ by the definition of $N_{A}$. In this way, $Pr\subseteq N$, which implies that $r\in{\rm ann}_{A}(P/N)={\rm ann}_{A}(P)$, proving that ${\rm ann}_{R}(P)={\rm ann}_{R}(P/Q)$ for every proper submodule $Q_{R}$ of $P_{R}$, that is, $P_{R}$ is a coprime module. ∎
For any submodule $P_{R}$ of $M[x^{-1}]_{R}$, we set $P_{k}:=\\{m\in M\ |\
mx^{-k}\in P\\}$ for each $k\in\mathbb{N}$ and denote by $\langle
P_{k}\rangle$ the submodule of $M_{R}$ generated by $P_{k}$. Lemma 3.11
guarantees the existence of certain maximal submodules of $M_{R}$ and
generalizes [7, Lemma 2.5].
###### Lemma 3.11.
If $P_{R}$ is a maximal submodule of $M[x^{-1}]_{R}$, we either have $\langle
P_{k}\rangle=M$ or else $\langle P_{k}\rangle$ is a maximal submodule of
$M_{R}$ for each $k\in\mathbb{N}$. Additionally, there exists $k\in\mathbb{N}$
for which the latter holds.
###### Proof.
Assume that there exists a submodule $M_{R}^{\prime}$ such that $\langle P_{k}\rangle\subsetneq M^{\prime}\subseteq M$ and let us see that $M^{\prime}=M$. If $m^{\prime}\in M_{R}^{\prime}$ and $m^{\prime}\notin\langle P_{k}\rangle$ then $m^{\prime}x^{-k}\notin P_{R}$, and since $P_{R}$ is a maximal submodule of $M[x^{-1}]_{R}$ we obtain $M[x^{-1}]_{R}=P_{R}+m^{\prime}x^{-k}R_{R}$, that is, for every $f(x)\in M[x^{-1}]_{R}$ there exist $p\in P_{R}$ and $r\in R$ such that $f(x)=p+m^{\prime}x^{-k}r$. Given $m\in M$, since $m^{\prime}x^{-k}r=m^{\prime}\sigma^{\prime k}(r)x^{-k}+m^{\prime}p_{k,r}$ where $p_{k,r}:=\sum_{i=0}^{k-1}f_{k}^{i}(r)x^{-i}$, consider the element $f^{\prime}(x)=mx^{-k}+m^{\prime}p_{k,r}$. So $f^{\prime}(x)=p+m^{\prime}x^{-k}r$ for some $p\in P_{R}$ and $r\in R$, which implies that $p=(m-m^{\prime}\sigma^{\prime k}(r))x^{-k}\in P_{R}$, whence $m-m^{\prime}\sigma^{\prime k}(r)\in\langle P_{k}\rangle\subseteq M^{\prime}$ and thus $m\in M_{R}^{\prime}$. Therefore $M\subseteq M^{\prime}$ and so $M^{\prime}=M$, proving that $\langle P_{k}\rangle$ is a maximal submodule of $M_{R}$. If $\langle P_{k}\rangle=M$ for all $k\in\mathbb{N}$, then $P=M[x^{-1}]$, which is a contradiction. In this way, there is $k\in\mathbb{N}$ such that $\langle P_{k}\rangle$ is a maximal submodule of $M_{R}$. ∎
Annin [7] considered the Bass modules in his study of the attached prime
ideals of $M[x^{-1}]_{S}$ [7, p. 544]. We recall that $M_{R}$ is called a Bass
module if every proper submodule $N_{R}$ is contained in a maximal submodule
of $M_{R}$ [19, p. 205]. Theorem 3.12 characterizes the attached primes of
$M[x^{-1}]_{A}$ and extends [7, Theorem 3.2].
###### Theorem 3.12.
If $M[x^{-1}]_{R}$ is a completely $(\sigma,\delta)$-compatible Bass module
then
$\displaystyle{\rm Att}(M[x^{-1}]_{A})=\left\\{PA\ |\ P\in{\rm
Att}(M_{R})\right\\}.$
###### Proof.
In view of Theorem 3.8, we only need to prove
$\displaystyle{\rm Att}(M[x^{-1}]_{A})\subseteq\left\\{PA\ |\ P\in{\rm
Att}(M_{R})\right\\}.$
Let $I\in{\rm Att}(M[x^{-1}]_{A})$ and $Q_{A}$ be a submodule of
$M[x^{-1}]_{A}$ such that $(M[x^{-1}]/Q)_{A}$ is a coprime module with $I={\rm
ann}_{A}(M[x^{-1}]/Q)$. It is clear that $I\cap R$ is equal to ${\rm
ann}_{R}(M[x^{-1}]/Q)$. By Lemma 3.10, $(M[x^{-1}]/Q)_{R}$ is a coprime module, and since $M[x^{-1}]_{R}$ is a Bass module, $Q_{R}$ is contained in a maximal submodule $P_{R}$ of $M[x^{-1}]_{R}$, so that $P/Q$ is a maximal submodule of $(M[x^{-1}]/Q)_{R}$. By the coprimality of $(M[x^{-1}]/Q)_{R}$, $(M[x^{-1}]/P)_{R}$ is coprime and $I\cap R={\rm ann}_{R}(M[x^{-1}]/P)$. Let us prove that $I\cap R\in{\rm Att}(M_{R})$ and
$I=(I\cap R)A$.
If $P_{R}$ is a maximal submodule of $M[x^{-1}]_{R}$, then there exists
$k\in\mathbb{N}$ such that $\langle P_{k}\rangle$ is a maximal submodule of
$M_{R}$ by Lemma 3.11, and we can take the smallest $k$ that satisfies this hypothesis. If $\langle P_{k}\rangle$ is maximal then there exists $m_{k}\in M$ such that $m_{k}\notin\langle P_{k}\rangle$, and so $m_{k}x^{-k}\notin P_{R}$, whence $m_{k}x^{-k}+P$ is a cyclic generator of the simple module $(M[x^{-1}]/P)_{R}$. Let $\varphi$ be the map from $(M/\langle P_{k}\rangle)_{R}$ to $(M[x^{-1}]/P)_{R}$ given by $\varphi(m_{k}+\langle P_{k}\rangle):=m_{k}x^{-k}+P$. By the complete $(\sigma,\delta)$-compatibility
of $M_{R}$ and Propositions 3.3 and 3.5, if $m_{k}r\in\langle P_{k}\rangle$
for some $r\in R$ then $m_{k}\sigma^{\prime k}(r)\in\langle P_{k}\rangle$
whence $m_{k}\sigma^{\prime k}(r)x^{-k}\in P_{R}$. By minimality of $k$,
$m_{k}f_{k}^{i}(r)x^{-i}\in P_{R}$ for all $0\leq i\leq k-1$, which implies
that $m_{k}x^{-k}r\in P_{R}$ and hence $\varphi$ is well defined.
Let us see that $\varphi$ is surjective. Since $m_{k}x^{-k}+P$ generates the module $(M[x^{-1}]/P)_{R}$ and $\varphi(m_{k}+\langle P_{k}\rangle)=m_{k}x^{-k}+P$, it follows that $\varphi$ is surjective. Let $\psi$ be the homomorphism from $M_{R}$ to $(M/\langle P_{k}\rangle)_{R}$ defined by $\psi(m):=m+\langle P_{k}\rangle$ for every $m\in M_{R}$. Since $\varphi\circ\psi$ is a surjective homomorphism from $M_{R}$ to $(M[x^{-1}]/P)_{R}$ and $I\cap{R}\in{\rm Att}((M[x^{-1}]/P)_{R})$, we conclude that $I\cap R\in{\rm Att}(M_{R})$.
We need to prove that $I=(I\cap R)A$. Since $I$ is an ideal of $A$ we have
$(I\cap R)A\subseteq I$. For the other inclusion, take an element
$f(x)=r_{0}+\cdots+r_{j}x^{j}\in I$ and let us see by induction that $r_{i}\in I\cap R$ for all $0\leq i\leq j$. Notice that
$(m_{k}x^{-k}+P)f(x)=m_{k}x^{-k}r_{0}+\text{lower terms}\in Q\subseteq P$, and
since every monomial of the “lower terms” belongs to the submodule $P_{R}$ by
minimality of $k$, we have $m_{k}x^{-k}r_{0}\in P_{R}$ which implies that
$r_{0}\in I\cap R$. Assume that $r_{0},\ldots,r_{i}\in I\cap R$ for some
$i\leq j$, and let us prove that $r_{i+1}\in I\cap R$. If
$r_{0},\ldots,r_{i}\in I\cap R$ then $r_{0}+\cdots+r_{i}x^{i}\in I$, whence
$r_{i+1}x^{i+1}+\cdots+r_{j}x^{j}\in I$. Thus,
$(m_{k}x^{-k-i-1})(r_{i+1}x^{i+1}+\cdots+r_{j}x^{j})=m_{k}\sigma^{\prime k+i+1}(r_{i+1})x^{-k}+\text{lower terms}\in Q\subseteq P$.
By minimality of $k$, every monomial of the “lower terms” belongs to the
submodule $P_{R}$ and thus $m_{k}\sigma^{\prime k+i+1}(r_{i+1})x^{-k}\in
P_{R}$, and by the relation $x^{-1}r:=\sigma^{\prime}(r)x^{-1}+\delta^{\prime}(r)$
it follows that
$m_{k}\sigma^{\prime k+i+1}(r_{i+1})x^{-k}=m_{k}x^{-k}\sigma^{\prime
i+1}(r_{i+1})+\text{lower terms}\in Q\subseteq P$,
where every monomial of the “lower terms” belongs to $P_{R}$ by minimality of
$k$. So, if $M[x^{-1}]_{R}$ is a completely $(\sigma,\delta)$-compatible
module and $m_{k}x^{-k}\sigma^{\prime i+1}(r_{i+1})\in P_{R}$, then
$m_{k}x^{-k}r_{i+1}\in P_{R}$ and thus $r_{i+1}\in I\cap R$ whence
$f(x)=r_{0}+\cdots+r_{j}x^{j}$ belongs to $(I\cap R)A$. Therefore $I=(I\cap
R)A$.
∎
###### Corollary 3.13 ([7, Theorem 3.2]).
If $M[x^{-1}]_{R}$ is a completely $\sigma$-compatible Bass module then
$\displaystyle{\rm Att}(M[x^{-1}]_{S})=\left\\{P[x]\ |\ P\in{\rm
Att}(M_{R})\right\\}.$
## 4\. Examples
The importance of our results becomes apparent when we apply them to algebraic structures more general than those considered by Annin [7], that is, noncommutative rings which cannot be expressed as skew polynomial rings of endomorphism type. In this section, we consider families
of algebras that have been studied in the literature to exemplify the results
obtained in this paper.
###### Example 4.1.
Let $A$ be the Jordan plane $\mathcal{J}(\Bbbk)$ or the $q$-skew Jordan plane
$\mathcal{J}_{q}(\Bbbk)$ with $q\in\Bbbk^{*}$ and $q\neq 1$. Consider the
right module $M[y^{-1}]_{A}$ under the action defined by (3.1) and (3.2). If
$M_{\Bbbk[x]}$ is a right module such that $M[y^{-1}]_{\Bbbk[x]}$ is a
completely $(\sigma,\delta)$-compatible module, then the characterization of
the attached prime ideals of $M[y^{-1}]_{A}$ is obtained from Theorems 3.8 and
3.12.
###### Example 4.2.
Let $A$ be the $q$-meromorphic Weyl algebra $MW_{q}$ with $yx=qxy+x^{2}$, for
$0<q<1$. If $M[x^{-1}]_{\mathbb{C}[y]}$ is completely
$(\sigma,\delta)$-compatible where $\sigma(y):=q^{-1}y$ and
$\delta(y):=-q^{-1}$, then the characterization of the attached primes of
$M[x^{-1}]_{A}$ follows from Theorems 3.8 and 3.12. Thinking about the change
of variable presented by Lopes (Example 2.2 (4)), $MW_{q}$ can be interpreted
as the quantum plane $\mathbb{C}_{q}[x,y]$ with $yx=qxy$. In this way, if
$M[x^{-1}]_{\mathbb{C}[y]}$ is a completely $\sigma$-compatible module with
$\sigma(y):=q^{-1}y$, then the description of the attached prime ideals over
$M[x^{-1}]_{A}$ follows from Theorems 3.8 and 3.12 or Corollary 3.13.
###### Example 4.3.
If $A$ is the algebra of skew Ore polynomials of higher order $Q(0,b,c)$
subject to the relation $yx=bxy+cy^{2}$ where $b,c\in\Bbbk^{*}$ and
$M_{\Bbbk[x]}$ is a right module which satisfies that $M[y^{-1}]_{\Bbbk[x]}$
is completely $(\sigma,\delta)$-compatible, then Theorems 3.8 and 3.12 describe the attached prime ideals of $M[y^{-1}]_{A}$. In a similar way, we
get the characterization of these ideals over the right module $M[x^{-1}]_{A}$
when $A$ is the algebra $Q(a,b,0)$.
###### Example 4.4.
With the aim of constructing new Artin-Schelter regular algebras, Zhang and
Zhang [45] defined the double Ore extensions (or double extensions, for short)
over a $\Bbbk$-algebra $R$ and presented $26$ new families of Artin-Schelter
regular algebras of global dimension four. It is possible to find some
similarities between the definition of double extensions and two-step iterated
skew polynomial rings. Nevertheless, there exist no inclusions between the
classes of all double extensions and of all length two iterated skew
polynomial rings (c.f. [11]). Several researchers have investigated different
relations of double extensions with Poisson, Hopf, Koszul and Calabi-Yau algebras (see [43] and the references therein). We start by recalling the definition
of a double extension in the sense of Zhang and Zhang, and since some typos
occurred in their papers [45, p. 2674] and [46, p. 379] concerning the
relations that the data of a double extension must satisfy, we follow the
corrections presented by Carvalho et al. [11].
###### Definition 4.5 ([45, Definition 1.3]; [11, Definition 1.1]).
If $B$ is a $\Bbbk$-algebra and $R$ is a subalgebra of $B$, then
* (a)
$B$ is called a right double extension of $R$ if the following conditions
hold:
* (i)
$B$ is generated by $R$ and two new variables $y_{1}$ and $y_{2}$.
* (ii)
$y_{1}$ and $y_{2}$ satisfy the relation
(4.1)
$y_{2}y_{1}=p_{12}y_{1}y_{2}+p_{11}y_{1}^{2}+\tau_{1}y_{1}+\tau_{2}y_{2}+\tau_{0},$
where $p_{12},p_{11}\in\Bbbk$ and $\tau_{1},\tau_{2},\tau_{0}\in R$.
* (iii)
$B$ is a free left $R$-module with a basis $\left\\{y_{1}^{i}y_{2}^{j}\ |\
i,j\geq 0\right\\}$.
* (iv)
$y_{1}R+y_{2}R+R\subseteq Ry_{1}+Ry_{2}+R$.
* (b)
A right double extension $B$ of $R$ is called a double extension if
* (i)
$p_{12}\neq 0$.
* (iii)
$B$ is a free right $R$-module with a basis $\left\\{y_{2}^{i}y_{1}^{j}\ |\
i,j\geq 0\right\\}$.
* (iv)
$y_{1}R+y_{2}R+R=Ry_{1}+Ry_{2}+R$.
Condition (a)(iv) from Definition 4.5 is equivalent to the existence of two
maps
$\sigma(r):=\begin{pmatrix}\sigma_{11}(r)&\sigma_{12}(r)\\\
\sigma_{21}(r)&\sigma_{22}(r)\end{pmatrix}\ \text{and}\
\delta(r):=\begin{pmatrix}\delta_{1}(r)\\\ \delta_{2}(r)\end{pmatrix}$ for all
$r\in R$,
such that
(4.2) $\begin{pmatrix}y_{1}\\\ y_{2}\end{pmatrix}r:=\begin{pmatrix}y_{1}r\\\
y_{2}r\end{pmatrix}=\begin{pmatrix}\sigma_{11}(r)&\sigma_{12}(r)\\\
\sigma_{21}(r)&\sigma_{22}(r)\end{pmatrix}\begin{pmatrix}y_{1}\\\
y_{2}\end{pmatrix}+\begin{pmatrix}\delta_{1}(r)\\\
\delta_{2}(r)\end{pmatrix}.$
If $B$ is a right double extension of $R$ then we write
$B:=R_{P}[y_{1},y_{2};\sigma,\delta,\tau]$ where
$P:=\\{p_{12},p_{11}\\}\subseteq\Bbbk$,
$\tau:=\\{\tau_{1},\tau_{2},\tau_{0}\\}\subseteq R$ and $\sigma,\delta$ are as
above. The set $P$ is called a parameter and $\tau$ a tail. If $\delta:=0$ and
$\tau$ consists of zero elements then the double extension is denoted by
$R_{P}[y_{1},y_{2};\sigma]$ and is called a trimmed double extension [45,
Convention 1.6 (c)]. In this case, the relation (4.1) reduces to
(4.3) $y_{2}y_{1}=p_{12}y_{1}y_{2}+p_{11}y_{1}^{2}.$
Since $p_{12},p_{11}\in\Bbbk$ the expression (4.3) can be written as
$y_{1}y_{2}=p_{12}^{-1}y_{2}y_{1}-p_{12}^{-1}p_{11}y_{1}^{2}$. It is clear
that $\sigma(y_{2})=p_{12}^{-1}y_{2}$ is an automorphism of $\Bbbk[y_{2}]$ and
$\delta(y_{2})=-p_{12}^{-1}p_{11}$ is a locally nilpotent $\sigma$-derivation
of $\Bbbk[y_{2}]$. In this way, the trimmed double extension
$R_{P}[y_{1},y_{2};\sigma]$ can be seen as
$A=\Bbbk[y_{2}](y_{1};\sigma,\delta)$. If $M[y_{1}^{-1}]_{A}$ is a right module under the action given by (3.1) and (3.2) and $M_{\Bbbk[y_{2}]}$ is a module such that $M[y_{1}^{-1}]_{\Bbbk[y_{2}]}$ is completely $(\sigma,\delta)$-compatible, then Theorems 3.8 and 3.12 describe the attached prime ideals of $M[y_{1}^{-1}]_{A}$.
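As a sanity check of this identification (a minimal sketch in Python with SymPy; names ours, and the generator $y_{2}$ is written $x$), the rule of Definition 2.1 applied to $\sigma(y_{2})=p_{12}^{-1}y_{2}$ and $\delta(y_{2})=-p_{12}^{-1}p_{11}$ recovers the rewritten form of (4.3):

```python
import sympy as sp

x, p11, p12 = sp.symbols('x p11 p12')

# Trimmed double extension R_P[y1, y2; sigma] viewed as k[y2](y1; sigma, delta),
# with sigma(y2) = y2/p12 and delta(y2) = -p11/p12; here y2 is written x.
sigma = lambda p: p.subs(x, x / p12)

def delta(p):
    # The sigma-derivation with delta(x) = -p11/p12, extended to k[x] by
    # delta(x**n) = sum_i sigma(x)**i * delta(x) * x**(n-1-i).
    total = sp.Integer(0)
    for (n,), coeff in sp.Poly(p, x).terms():
        total += coeff * sum((x / p12)**i * (-p11 / p12) * x**(n - 1 - i)
                             for i in range(n))
    return sp.expand(total)

# (2.1): y1*x = sigma(x)*y1 + sigma(delta(x))*y1**2 + ... (delta is locally
# nilpotent since delta(x) is a constant), recovering
# y1*y2 = p12**-1 * y2*y1 - p12**-1 * p11 * y1**2.
coeffs, r = [], sp.expand(x)
while r != 0:
    coeffs.append(sp.simplify(sigma(r)))
    r = delta(r)
print(coeffs)  # [x/p12, -p11/p12]
```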
## 5\. Future work
As we mentioned in the introduction, Nordstrom [38, 39] studied the associated
primes of simple torsion modules over generalized Weyl algebras which are
$\mathbb{N}$-graded rings and contain the skew polynomial ring $R[x;\sigma]$
as a subring. As possible future work, we intend to investigate the
attached prime ideals of inverse polynomial modules over generalized Weyl
algebras.
Lezama and Latorre [30] introduced the semi-graded rings with the aim of
generalizing $\mathbb{N}$-graded rings (such as the skew polynomial rings of
endomorphism type and the generalized Weyl algebras), skew PBW extensions and
many other algebras of interest in noncommutative algebraic geometry and
noncommutative differential geometry. As future work, we plan to investigate the theory of attached prime ideals and secondary representations over semi-graded rings.
## References
* [1] Alev, J., Dumas, F. (1995). Automorphismes de certains completés du corps de Weyl quantique. Collect. Math. 46:1–9.
* [2] Alhevaz, A., Moussavi, A. (2012). On skew Armendariz and skew quasi-Armendariz modules. Bull. Iranian Math. Soc. 38(1):55–84.
* [3] Annin, S. (2002). Associated and Attached Primes Over Noncommutative Rings. PhD Thesis. University of California, Berkeley.
* [4] Annin, S. (2002). Associated primes over skew polynomial ring. Commun. Algebra 30(5):2511–2528.
* [5] Annin, S. (2004). Associated primes over Ore extension rings. J. Algebra Appl. 3(2):193–205.
* [6] Annin, S. (2008). Attached primes over noncommutative rings. J. Pure Appl. Algebra 212(3):510–521.
* [7] Annin, S. (2011). Attached primes under skew polynomial extensions. J. Algebra Appl. 10(3):537–547.
* [8] Baig, M. (2009). Primary decomposition and secondary representation of modules over a commutative ring. Master Thesis. Georgia State University.
* [9] Bavula, V. V. (1992). Generalized Weyl algebras and their representations. St. Petersburg Math. J. 4(1):71–92.
* [10] Brewer, J., Heinzer, W. (1974). Associated primes of principal ideals. Duke Math. J. 41(1):1–7.
* [11] Carvalho, P. A. A. B., Lopes, S. A., Matczuk, J. (2011). Double Ore Extensions Versus Iterated Ore Extensions. Commun. Algebra 39(8):2838–2848.
* [12] Chacón, A., Reyes, A. (2024). On the schematicness of some Ore polynomials of higher order generated by homogeneous quadratic relations. J. Algebra Appl. 2550207.
* [13] Cohn, P. M. (1961). Quadratic extensions of skew fields. Proc. London Math. Soc. (3) 3(1):531–556.
* [14] Díaz, R., Pariguan, E. (2009). On the $q$-meromorphic Weyl algebra. São Paulo J. Math. Sci. 3(2): 283–29
* [15] Dumas, F., Martin, F. (2023). Invariants of formal pseudodifferential operator algebras and algebraic modular forms. Rev. Un. Mat. Argentina 65(1):1–31.
* [16] Dumas, F. (1992). Skew power series rings with general commutation formula. Theoret. Comput. Sci. 98(1):99–114.
* [17] Dumas, F. (1991). Sous-corps de fractions rationnelles des corps gauches de series de Laurent. Topics in Invariant Theory. Vol. 1478. Berlin, Heidelberg: Springer.
* [18] Faith, C. (2000). Associated primes in commutative polynomial rings. Commun. Algebra 28(8):3983–3986.
* [19] Faith, C. (1995). Rings whose modules have maximal submodules. Publ. Mat. 39(1):201–214.
* [20] Freudenburg, G. (2006). Algebraic theory of locally nilpotent derivations. Vol. 136. Berlin: Springer.
* [21] Gallego, C., Lezama, O. (2011). Gröbner Bases for Ideals of $\sigma$-PBW Extensions. Commun. Algebra 39(1):50–75.
* [22] Golovashkin, A. V., Maksimov, V. M. (2005). On algebras of skew polynomials generated by quadratic homogeneous relations. J. Math. Sci. (N.Y.) 129(2):3757–3771.
* [23] Goodearl, K. R., Warfield, R. B. (2004). An Introduction to Noncommutative Noetherian Rings. Vol. 61. Cambridge University Press.
* [24] Hashemi, E., Moussavi, A. (2005). Polynomial extensions of quasi-Baer rings. Acta Math. Hungar. 107(3):207–224.
* [25] Higuera, S., Ramírez, M. C., Reyes, A. (2024). On the uniform dimension and the associated primes of skew PBW extensions. https://arxiv.org/abs/2404.18698.
* [26] Higuera, S., Reyes, A. (2023). On weak annihilators and nilpotent associated primes of skew PBW extensions. Commun. Algebra 51(11):4839–4861.
* [27] Jordan, D. (2001). The Graded Algebra Generated by Two Eulerian Derivatives. Algebr. Represent. Theory 4(3):249–275.
* [28] Lam, T. Y., Leroy, A., Matczuk, J. (1997). Primeness, semiprimeness and prime radical of Ore extensions. Commun. Algebra 25(8):2459–2506.
* [29] Lam, T. Y. (1998). Lectures on Modules and Rings. Graduate Texts in Mathematics. Vol. 189. Berlin: Springer-Verlag.
* [30] Lezama, O., Latorre, E. (2017). Non-commutative algebraic geometry of semi-graded rings. Internat. J. Algebra Comput. 27(4):361–389.
* [31] Lopes, S. A. (2023). Noncommutative Algebra and Representation Theory: Symmetry, Structure & Invariants. Commun. Math. 32(3):63–117.
* [32] Macdonald, I. G. (1973). Secondary representation of modules over a commutative ring. Sympos. Math. 11:23–43.
* [33] Maksimov, V. M. (2000). On a generalization of the ring of skew Ore polynomials. Russian Math. Surveys 55(4):817–818.
* [34] Melkersson, L. (1998). Content and inverse polynomials on Artinian modules. Commun. Algebra 26(4):1141–1145.
* [35] Niño, A., Ramírez, M. C., Reyes, A. (2020). Associated prime ideals over skew PBW extensions. Commun. Algebra 48(12):5038–5055.
* [36] Niño, A., Ramírez, M. C., Reyes, A. (2024). A first approach to the Burchnall-Chaundy theory for quadratic algebras having PBW bases. https://arxiv.org/abs/2401.10023.
* [37] Niño, A., Reyes, A. (2023). On centralizers and pseudo-multidegree functions for non-commutative rings having PBW bases. J. Algebra Appl. 2550109.
* [38] Nordstrom, H. E. (2005). Associated primes over Ore extensions and generalized Weyl algebras. PhD Thesis. University of Oregon.
* [39] Nordstrom, H. E. (2012). Simple Modules Over Generalized Weyl Algebras and Their Associated Primes. Commun. Algebra 40(9):3224–3235.
* [40] Ore, O. (1931). Linear Equations in Non-commutative Fields. Ann. of Math. (2) 32(3):463–477.
* [41] Ore, O. (1933). Theory of Non-Commutative Polynomials. Ann. of Math. (2) 34(3):480–508.
* [42] Ouyang, L., Birkenmeier, G. F. (2012). Weak annihilator over extension rings. Bull. Malays. Math. Sci. Soc. 35(2):345–347.
* [43] Ramírez, M. C., Reyes, A. (2024). A view toward homomorphisms and cv-polynomials between double Ore extensions. Algebra Colloq. To appear. https://arxiv.org/abs/2401.14162.
* [44] Smits, T. H. M. (1968). Skew polynomial rings. Indag. Math. (N.S.) 30(1):209–224.
* [45] Zhang, J. J., Zhang, J. (2008). Double Ore extensions. J. Pure Appl. Algebra 212(12):2668–2690.
* [46] Zhang, J. J., Zhang J. (2009). Double extension regular algebras of type (14641). J. Algebra 322(2):373–409.
|